From noreply at buildbot.pypy.org Wed Jan 4 00:06:01 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 4 Jan 2012 00:06:01 +0100 (CET)
Subject: [pypy-commit] pypy.org extradoc: merge
Message-ID: <20120103230601.724CA82B1C@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r301:ce65b7d4c181
Date: 2012-01-04 01:05 +0200
http://bitbucket.org/pypy/pypy.org/changeset/ce65b7d4c181/
Log: merge
diff --git a/compat.html b/compat.html
--- a/compat.html
+++ b/compat.html
@@ -52,7 +52,7 @@
PyPy has alpha/beta-level support for the CPython C API, however, as of 1.7
release this feature is not yet complete. Many libraries will require
a bit of effort to work, but there are known success stories. Check out
-PyPy blog for updates.
From noreply at buildbot.pypy.org Fri Jan 6 23:18:44 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 23:18:44 +0100 (CET)
Subject: [pypy-commit] pypy import-numpy: abandon this approach
Message-ID: <20120106221844.BB87A82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: import-numpy
Changeset: r51083:eb12a969ddf7
Date: 2012-01-07 00:18 +0200
http://bitbucket.org/pypy/pypy/changeset/eb12a969ddf7/
Log: abandon this approach
From noreply at buildbot.pypy.org Sat Jan 7 02:37:51 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 02:37:51 +0100 (CET)
Subject: [pypy-commit] pypy numpy-concatenate: closed branch that went
nowhere
Message-ID: <20120107013751.2B93C82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: numpy-concatenate
Changeset: r51084:c62c1d1837b7
Date: 2012-01-07 03:37 +0200
http://bitbucket.org/pypy/pypy/changeset/c62c1d1837b7/
Log: closed branch that went nowhere
From noreply at buildbot.pypy.org Sat Jan 7 11:06:18 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:18 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Redo the explicit
collect(), at least the most useful case.
Message-ID: <20120107100618.84F3D82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51085:735c5da06f6c
Date: 2012-01-06 19:05 +0100
http://bitbucket.org/pypy/pypy/changeset/735c5da06f6c/
Log: Redo the explicit collect(), at least the most useful case.
diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -433,9 +433,7 @@
debug_start("gc-major")
#
# We have to first wait for the previous minor collection to finish:
- debug_start("gc-major-wait")
self.stop_collection(wait=True)
- debug_stop("gc-major-wait")
#
# Start the major collection.
self._start_major_collection()
@@ -443,13 +441,11 @@
debug_stop("gc-major")
- def sync_end_of_collection(self):
+ def wait_for_the_end_of_collection(self):
"""In the mutator thread: wait for the minor collection currently
running (if any) to finish, and synchronize the two threads."""
if self.collector.running != 0:
- debug_start("gc-stop")
- self._stop_collection()
- debug_stop("gc-stop")
+ self.stop_collection(wait=True)
#
# We must *not* run execute_finalizers_ll() here, because it
# can start the next collection, and then this function returns
@@ -461,7 +457,9 @@
def stop_collection(self, wait):
if wait:
+ debug_start("gc-stop")
self.acquire(self.finished_lock)
+ debug_stop("gc-stop")
else:
if not self.try_acquire(self.finished_lock):
return False
@@ -503,7 +501,13 @@
def collect(self, gen=4):
+ debug_start("gc-forced-collect")
+ self.trigger_next_collection(force_major_collection=True)
+ self.wait_for_the_end_of_collection()
+ self.execute_finalizers_ll()
+ debug_stop("gc-forced-collect")
return
+ # XXX reimplement this:
"""
gen=0: Trigger a minor collection if none is running. Never blocks,
except if it happens to start a major collection.
@@ -532,15 +536,14 @@
self.execute_finalizers_ll()
debug_stop("gc-forced-collect")
- def trigger_next_collection(self, force_major_collection=False):
- """In the mutator thread: triggers the next minor collection."""
+ def trigger_next_collection(self, force_major_collection):
+ """In the mutator thread: triggers the next minor or major collection."""
#
# In case the previous collection is not over yet, wait for it
self.wait_for_the_end_of_collection()
#
# Choose between a minor and a major collection
- if (force_major_collection or
- self.size_still_available_before_major < 0):
+ if force_major_collection:
self._start_major_collection()
else:
self._start_minor_collection()
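[Editor's sketch] The handshake this commit rearranges — the mutator thread blocking on `finished_lock` inside `stop_collection(wait=True)`, and polling it in the non-waiting case — can be illustrated in plain Python. This is a toy stand-in using `threading.Event`, not PyPy's RPython lock API; the class and method names merely mirror the diff and are assumptions.

```python
import threading

class CollectorSync:
    """Toy model of the mutator/collector handshake in concurrentgen.py.

    The real GC uses RPython low-level locks; here 'finished_lock' is
    modeled by a threading.Event.  Illustrative only, not PyPy code.
    """
    def __init__(self):
        self.finished = threading.Event()

    def collector_done(self):
        # Collector thread: signal that the current collection is over.
        self.finished.set()

    def stop_collection(self, wait):
        # Mutator thread: wait for (wait=True) or poll (wait=False)
        # the end of the collection, like the diff's stop_collection().
        if wait:
            self.finished.wait()
        elif not self.finished.is_set():
            return False
        self.finished.clear()
        return True
```

With this shape, `collect()` in the diff reads as: trigger a collection, then `stop_collection(wait=True)` to block until the collector signals completion.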
From noreply at buildbot.pypy.org Sat Jan 7 11:06:19 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:19 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 0fe83ac4f0da
on branch numpy-full-fromstring
Message-ID: <20120107100619.DB3A882BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51086:cf8c8221023a
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/cf8c8221023a/
Log: Merge closed head 0fe83ac4f0da on branch numpy-full-fromstring
From noreply at buildbot.pypy.org Sat Jan 7 11:06:21 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:21 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 10e52e09cda7
on branch windows-no-err-dlg
Message-ID: <20120107100621.0286F82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51087:fc8babbb0d49
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/fc8babbb0d49/
Log: Merge closed head 10e52e09cda7 on branch windows-no-err-dlg
From noreply at buildbot.pypy.org Sat Jan 7 11:06:22 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:22 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head bae684cd82fb
on branch counter-decay
Message-ID: <20120107100622.1B69882BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51088:274493f9237a
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/274493f9237a/
Log: Merge closed head bae684cd82fb on branch counter-decay
From noreply at buildbot.pypy.org Sat Jan 7 11:06:23 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:23 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 6b116d5dea60
on branch numpy-faster-setslice
Message-ID: <20120107100623.296A682BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51089:907165accd25
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/907165accd25/
Log: Merge closed head 6b116d5dea60 on branch numpy-faster-setslice
From noreply at buildbot.pypy.org Sat Jan 7 11:06:24 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:24 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 93bb4d305fdb
on branch nedbat-sandbox-2
Message-ID: <20120107100624.3C68E82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51090:57ce7dbc2991
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/57ce7dbc2991/
Log: Merge closed head 93bb4d305fdb on branch nedbat-sandbox-2
From noreply at buildbot.pypy.org Sat Jan 7 11:06:25 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:25 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head cef73b42fc52
on branch numpypy-repr-fix
Message-ID: <20120107100625.49F3782BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51091:6ea46dc2c7b0
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/6ea46dc2c7b0/
Log: Merge closed head cef73b42fc52 on branch numpypy-repr-fix
From noreply at buildbot.pypy.org Sat Jan 7 11:06:26 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:26 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 2adf19881a7c
on branch numpy-dtype-strings
Message-ID: <20120107100626.64D1F82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51092:5bfacfad4b18
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/5bfacfad4b18/
Log: Merge closed head 2adf19881a7c on branch numpy-dtype-strings
From noreply at buildbot.pypy.org Sat Jan 7 11:06:28 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:28 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 51e67e28230a
on branch numpy-ndim-size
Message-ID: <20120107100628.4B14982BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51093:4c2484433848
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/4c2484433848/
Log: Merge closed head 51e67e28230a on branch numpy-ndim-size
From noreply at buildbot.pypy.org Sat Jan 7 11:06:29 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:29 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head aaab53d723c0
on branch numpy-sort
Message-ID: <20120107100629.5758382BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51094:1b7c79d96aa3
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/1b7c79d96aa3/
Log: Merge closed head aaab53d723c0 on branch numpy-sort
From noreply at buildbot.pypy.org Sat Jan 7 11:06:30 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:30 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head e1b50a7fd007
on branch numpy-dtype
Message-ID: <20120107100630.669E882BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51095:cb83722c2596
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/cb83722c2596/
Log: Merge closed head e1b50a7fd007 on branch numpy-dtype
From noreply at buildbot.pypy.org Sat Jan 7 11:06:31 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:31 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 1436740d3b9b
on branch numpy-complex
Message-ID: <20120107100631.B2AFD82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51096:3efb35fc9cd7
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/3efb35fc9cd7/
Log: Merge closed head 1436740d3b9b on branch numpy-complex
From noreply at buildbot.pypy.org Sat Jan 7 11:06:32 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:32 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head c260a0d96e73
on branch jit-raw-array-of-struct
Message-ID: <20120107100632.D1C6182BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51097:6da401a761cc
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/6da401a761cc/
Log: Merge closed head c260a0d96e73 on branch jit-raw-array-of-struct
From noreply at buildbot.pypy.org Sat Jan 7 11:06:33 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:33 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 16ea77edcb5e
on branch separate-applevel-numpy
Message-ID: <20120107100633.E112A82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51098:7aae8a854792
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/7aae8a854792/
Log: Merge closed head 16ea77edcb5e on branch separate-applevel-numpy
From noreply at buildbot.pypy.org Sat Jan 7 11:06:35 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:35 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head eb12a969ddf7
on branch import-numpy
Message-ID: <20120107100635.177D682BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51099:8947a5d05606
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/8947a5d05606/
Log: Merge closed head eb12a969ddf7 on branch import-numpy
From noreply at buildbot.pypy.org Sat Jan 7 11:06:36 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:36 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head c62c1d1837b7
on branch numpy-concatenate
Message-ID: <20120107100636.44CE882BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51100:84207b40e275
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/84207b40e275/
Log: Merge closed head c62c1d1837b7 on branch numpy-concatenate
From noreply at buildbot.pypy.org Sat Jan 7 11:06:37 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:37 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: re-close this branch
Message-ID: <20120107100637.5C5FF82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: closed-branches
Changeset: r51101:77e727fe4df1
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/77e727fe4df1/
Log: re-close this branch
From noreply at buildbot.pypy.org Sat Jan 7 12:01:54 2012
From: noreply at buildbot.pypy.org (hager)
Date: Sat, 7 Jan 2012 12:01:54 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: remove test_return_pointer
because it is obsolete now
Message-ID: <20120107110154.726C382BFF@wyvern.cs.uni-duesseldorf.de>
Author: Sven Hager
Branch: ppc-jit-backend
Changeset: r51102:5152aab1cfbb
Date: 2012-01-07 12:01 +0100
http://bitbucket.org/pypy/pypy/changeset/5152aab1cfbb/
Log: remove test_return_pointer because it is obsolete now
diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py
--- a/pypy/jit/backend/test/runner_test.py
+++ b/pypy/jit/backend/test/runner_test.py
@@ -523,25 +523,6 @@
def test_ovf_operations_reversed(self):
self.test_ovf_operations(reversed=True)
-
- def test_return_pointer(self):
- u_box, U_box = self.alloc_instance(self.U)
- i0 = BoxInt()
- i1 = BoxInt()
- ptr = BoxPtr()
-
- operations = [
- ResOperation(rop.FINISH, [ptr], None, descr=BasicFailDescr(1))
- ]
- inputargs = [i0, ptr, i1]
- looptoken = JitCellToken()
- self.cpu.compile_loop(inputargs, operations, looptoken)
- self.cpu.set_future_value_int(0, 10)
- self.cpu.set_future_value_ref(1, u_box.value)
- self.cpu.set_future_value_int(2, 20)
- fail = self.cpu.execute_token(looptoken)
- result = self.cpu.get_latest_value_ref(0)
- assert result == u_box.value
def test_spilling(self):
ops = '''
From noreply at buildbot.pypy.org Sat Jan 7 12:15:59 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 12:15:59 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: add some jit hooks,
a bit ugly but works
Message-ID: <20120107111559.D357A82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51103:4c1b9df9819d
Date: 2012-01-07 13:15 +0200
http://bitbucket.org/pypy/pypy/changeset/4c1b9df9819d/
Log: add some jit hooks, a bit ugly but works
diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py
--- a/pypy/interpreter/eval.py
+++ b/pypy/interpreter/eval.py
@@ -2,7 +2,6 @@
This module defines the abstract base classes that support execution:
Code and Frame.
"""
-from pypy.rlib import jit
from pypy.interpreter.error import OperationError
from pypy.interpreter.baseobjspace import Wrappable
diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py
--- a/pypy/jit/metainterp/test/test_jitportal.py
+++ b/pypy/jit/metainterp/test/test_jitportal.py
@@ -1,8 +1,10 @@
from pypy.rlib.jit import JitDriver, JitPortal
+from pypy.rlib import jit_hooks
from pypy.jit.metainterp.test.support import LLJitMixin
from pypy.jit.codewriter.policy import JitPolicy
from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT
+from pypy.jit.metainterp.resoperation import rop
class TestJitPortal(LLJitMixin):
def test_abort_quasi_immut(self):
@@ -94,3 +96,21 @@
self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal()))
assert sorted(called.keys()) == ['bridge', (10, 1, "loop")]
+ def test_resop_interface(self):
+ driver = JitDriver(greens = [], reds = ['i'])
+
+ def loop(i):
+ while i > 0:
+ driver.jit_merge_point(i=i)
+ i -= 1
+
+ def main():
+ loop(1)
+ op = jit_hooks.resop_new(rop.INT_ADD,
+ [jit_hooks.boxint_new(3),
+ jit_hooks.boxint_new(4)],
+ jit_hooks.boxint_new(1))
+ return jit_hooks.resop_opnum(op)
+
+ res = self.meta_interp(main, [])
+ assert res == rop.INT_ADD
diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py
--- a/pypy/jit/metainterp/warmspot.py
+++ b/pypy/jit/metainterp/warmspot.py
@@ -112,7 +112,7 @@
return ll_meta_interp(function, args, backendopt=backendopt,
translate_support_code=True, **kwds)
-def _find_jit_marker(graphs, marker_name):
+def _find_jit_marker(graphs, marker_name, check_driver=True):
results = []
for graph in graphs:
for block in graph.iterblocks():
@@ -120,8 +120,8 @@
op = block.operations[i]
if (op.opname == 'jit_marker' and
op.args[0].value == marker_name and
- (op.args[1].value is None or
- op.args[1].value.active)): # the jitdriver
+ (not check_driver or op.args[1].value is None or
+ op.args[1].value.active)): # the jitdriver
results.append((graph, block, i))
return results
@@ -140,6 +140,9 @@
"found several jit_merge_points in the same graph")
return results
+def find_access_helpers(graphs):
+ return _find_jit_marker(graphs, 'access_helper', False)
+
def locate_jit_merge_point(graph):
[(graph, block, pos)] = find_jit_merge_points([graph])
return block, pos, block.operations[pos]
@@ -217,6 +220,7 @@
verbose = False # not self.cpu.translate_support_code
self.codewriter.make_jitcodes(verbose=verbose)
self.rewrite_can_enter_jits()
+ self.rewrite_access_helpers()
self.rewrite_set_param()
self.rewrite_force_virtual(vrefinfo)
self.rewrite_force_quasi_immutable()
@@ -621,6 +625,20 @@
graph = self.annhelper.getgraph(func, args_s, s_result)
return self.annhelper.graph2delayed(graph, FUNC)
+ def rewrite_access_helpers(self):
+ ah = find_access_helpers(self.translator.graphs)
+ for graph, block, index in ah:
+ op = block.operations[index]
+ self.rewrite_access_helper(op)
+
+ def rewrite_access_helper(self, op):
+ ARGS = [arg.concretetype for arg in op.args[2:]]
+ RESULT = op.result.concretetype
+ ptr = self.helper_func(lltype.Ptr(lltype.FuncType(ARGS, RESULT)),
+ op.args[1].value)
+ op.opname = 'direct_call'
+ op.args = [Constant(ptr, lltype.Void)] + op.args[2:]
+
def rewrite_jit_merge_points(self, policy):
for jd in self.jitdrivers_sd:
self.rewrite_jit_merge_point(jd, policy)
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -777,7 +777,8 @@
assert isinstance(s_inst, annmodel.SomeInstance)
def specialize_call(self, hop):
- from pypy.rpython.lltypesystem import lltype, rclass
+ from pypy.rpython.lltypesystem import rclass, lltype
+
classrepr = rclass.get_type_repr(hop.rtyper)
hop.exception_cannot_occur()
diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py
new file mode 100644
--- /dev/null
+++ b/pypy/rlib/jit_hooks.py
@@ -0,0 +1,58 @@
+
+from pypy.rpython.extregistry import ExtRegistryEntry
+from pypy.annotation import model as annmodel
+from pypy.rpython.lltypesystem import llmemory, lltype
+from pypy.rpython.lltypesystem import rclass
+from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\
+ cast_base_ptr_to_instance
+
+def register_helper(helper, s_result):
+
+ class Entry(ExtRegistryEntry):
+ _about_ = helper
+
+ def compute_result_annotation(self, *args):
+ return s_result
+
+ def specialize_call(self, hop):
+ from pypy.rpython.lltypesystem import lltype
+
+ c_func = hop.inputconst(lltype.Void, helper)
+ c_name = hop.inputconst(lltype.Void, 'access_helper')
+ args_v = [hop.inputarg(arg, arg=i)
+ for i, arg in enumerate(hop.args_r)]
+ return hop.genop('jit_marker', [c_name, c_func] + args_v,
+ resulttype=hop.r_result)
+
+def _cast_to_box(llref):
+ from pypy.jit.metainterp.history import AbstractValue
+
+ ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref)
+ return cast_base_ptr_to_instance(AbstractValue, ptr)
+
+def resop_new(no, llargs, llres):
+ from pypy.jit.metainterp.history import ResOperation
+
+ args = [_cast_to_box(llarg) for llarg in llargs]
+ res = _cast_to_box(llres)
+ rop = ResOperation(no, args, res)
+ return lltype.cast_opaque_ptr(llmemory.GCREF,
+ cast_instance_to_base_ptr(rop))
+
+register_helper(resop_new, annmodel.SomePtr(llmemory.GCREF))
+
+def boxint_new(no):
+ from pypy.jit.metainterp.history import BoxInt
+ return lltype.cast_opaque_ptr(llmemory.GCREF,
+ cast_instance_to_base_ptr(BoxInt(no)))
+
+register_helper(boxint_new, annmodel.SomePtr(llmemory.GCREF))
+
+def resop_opnum(llop):
+ from pypy.jit.metainterp.resoperation import AbstractResOp
+
+ opptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llop)
+ op = cast_base_ptr_to_instance(AbstractResOp, opptr)
+ return op.getopnum()
+
+register_helper(resop_opnum, annmodel.SomeInteger())
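[Editor's sketch] The `register_helper()` pattern introduced in jit_hooks.py pairs each helper function with a declared result annotation via `ExtRegistryEntry`, so translation can later special-case calls to it. The shape of that registry can be sketched in pure Python; the dictionary and decorator below are assumptions standing in for PyPy's extregistry machinery, not its real API.

```python
# Toy registry mimicking register_helper() above: each helper is stored
# together with its declared result type, so a later "translation" pass
# could look the metadata up by name.  Pure-Python stand-in only.
HELPER_REGISTRY = {}

def register_helper(result_type):
    def decorate(helper):
        HELPER_REGISTRY[helper.__name__] = (helper, result_type)
        return helper
    return decorate

@register_helper(int)
def resop_opnum(op):
    # Stand-in for reading the operation number off a ResOperation;
    # here 'op' is just a dict, not a real resop.
    return op["opnum"]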
From noreply at buildbot.pypy.org Sat Jan 7 12:23:22 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 12:23:22 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: simplify and remove dead code
Message-ID: <20120107112322.AB10D82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51104:6c698de2d866
Date: 2012-01-07 13:22 +0200
http://bitbucket.org/pypy/pypy/changeset/6c698de2d866/
Log: simplify and remove dead code
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -104,20 +104,3 @@
num = GetSetProperty(WrappedOp.descr_num),
)
WrappedOp.acceptable_as_base_class = False
-
-from pypy.rpython.extregistry import ExtRegistryEntry
-
-class WrappedOpRegistry(ExtRegistryEntry):
- _type_ = WrappedOp
-
- def compute_annotation(self):
- from pypy.annotation import model as annmodel
- clsdef = self.bookkeeper.getuniqueclassdef(WrappedOp)
- if not clsdef.attrs:
- resopclsdef = self.bookkeeper.getuniqueclassdef(AbstractResOp)
- attrs = {'offset': annmodel.SomeInteger(),
- 'repr_of_resop': annmodel.SomeString(can_be_None=False),
- 'op': annmodel.SomeInstance(resopclsdef)}
- for attrname, s_v in attrs.iteritems():
- clsdef.generalize_attr(attrname, s_v)
- return annmodel.SomeInstance(clsdef, can_be_None=True)
diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py
--- a/pypy/rlib/jit_hooks.py
+++ b/pypy/rlib/jit_hooks.py
@@ -30,29 +30,32 @@
ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref)
return cast_base_ptr_to_instance(AbstractValue, ptr)
+def _cast_to_resop(llref):
+ from pypy.jit.metainterp.resoperation import AbstractResOp
+
+ ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref)
+ return cast_base_ptr_to_instance(AbstractResOp, ptr)
+
+def _cast_to_gcref(obj):
+ return lltype.cast_opaque_ptr(llmemory.GCREF,
+ cast_instance_to_base_ptr(obj))
+
def resop_new(no, llargs, llres):
from pypy.jit.metainterp.history import ResOperation
args = [_cast_to_box(llarg) for llarg in llargs]
res = _cast_to_box(llres)
- rop = ResOperation(no, args, res)
- return lltype.cast_opaque_ptr(llmemory.GCREF,
- cast_instance_to_base_ptr(rop))
+ return _cast_to_gcref(ResOperation(no, args, res))
register_helper(resop_new, annmodel.SomePtr(llmemory.GCREF))
def boxint_new(no):
from pypy.jit.metainterp.history import BoxInt
- return lltype.cast_opaque_ptr(llmemory.GCREF,
- cast_instance_to_base_ptr(BoxInt(no)))
+ return _cast_to_gcref(BoxInt(no))
register_helper(boxint_new, annmodel.SomePtr(llmemory.GCREF))
def resop_opnum(llop):
- from pypy.jit.metainterp.resoperation import AbstractResOp
-
- opptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llop)
- op = cast_base_ptr_to_instance(AbstractResOp, opptr)
- return op.getopnum()
+ return _cast_to_resop(llop).getopnum()
register_helper(resop_opnum, annmodel.SomeInteger())
From noreply at buildbot.pypy.org Sat Jan 7 12:49:26 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 12:49:26 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: add stuff to test_ztranslation
and make it pass
Message-ID: <20120107114926.DE4F482BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51105:76c53cca9b18
Date: 2012-01-07 13:48 +0200
http://bitbucket.org/pypy/pypy/changeset/76c53cca9b18/
Log: add stuff to test_ztranslation and make it pass
diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py
--- a/pypy/jit/metainterp/test/test_ztranslation.py
+++ b/pypy/jit/metainterp/test/test_ztranslation.py
@@ -3,7 +3,9 @@
from pypy.jit.backend.llgraph import runner
from pypy.rlib.jit import JitDriver, unroll_parameters, set_param
from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint
+from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_opnum
from pypy.jit.metainterp.jitprof import Profiler
+from pypy.jit.metainterp.resoperation import rop
from pypy.rpython.lltypesystem import lltype, llmemory
class TranslationTest:
@@ -22,6 +24,7 @@
# - jitdriver hooks
# - two JITs
# - string concatenation, slicing and comparison
+ # - jit hooks interface
class Frame(object):
_virtualizable2_ = ['l[*]']
@@ -91,7 +94,9 @@
return f.i
#
def main(i, j):
- return f(i) - f2(i+j, i, j)
+ op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)],
+ boxint_new(8))
+ return f(i) - f2(i+j, i, j) + resop_opnum(op)
res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass,
type_system=self.type_system,
listops=True)
diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py
--- a/pypy/jit/metainterp/warmspot.py
+++ b/pypy/jit/metainterp/warmspot.py
@@ -1,4 +1,5 @@
import sys, py
+from pypy.tool.sourcetools import func_with_new_name
from pypy.rpython.lltypesystem import lltype, llmemory
from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\
cast_base_ptr_to_instance, hlstr
@@ -634,10 +635,14 @@
def rewrite_access_helper(self, op):
ARGS = [arg.concretetype for arg in op.args[2:]]
RESULT = op.result.concretetype
- ptr = self.helper_func(lltype.Ptr(lltype.FuncType(ARGS, RESULT)),
- op.args[1].value)
+ FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT))
+ # make sure we make a copy of function so it no longer belongs
+ # to extregistry
+ func = op.args[1].value
+ func = func_with_new_name(func, func.func_name + '_compiled')
+ ptr = self.helper_func(FUNCPTR, func)
op.opname = 'direct_call'
- op.args = [Constant(ptr, lltype.Void)] + op.args[2:]
+ op.args = [Constant(ptr, FUNCPTR)] + op.args[2:]
def rewrite_jit_merge_points(self, policy):
for jd in self.jitdrivers_sd:
diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py
--- a/pypy/rlib/jit_hooks.py
+++ b/pypy/rlib/jit_hooks.py
@@ -5,6 +5,7 @@
from pypy.rpython.lltypesystem import rclass
from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\
cast_base_ptr_to_instance
+from pypy.rlib.objectmodel import specialize
def register_helper(helper, s_result):
@@ -36,6 +37,7 @@
ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref)
return cast_base_ptr_to_instance(AbstractResOp, ptr)
+@specialize.argtype(0)
def _cast_to_gcref(obj):
return lltype.cast_opaque_ptr(llmemory.GCREF,
cast_instance_to_base_ptr(obj))
@@ -43,7 +45,7 @@
def resop_new(no, llargs, llres):
from pypy.jit.metainterp.history import ResOperation
- args = [_cast_to_box(llarg) for llarg in llargs]
+ args = [_cast_to_box(llargs[i]) for i in range(len(llargs))]
res = _cast_to_box(llres)
return _cast_to_gcref(ResOperation(no, args, res))
From noreply at buildbot.pypy.org Sat Jan 7 12:53:54 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 12:53:54 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: an attempt to use the new
interface
Message-ID: <20120107115354.283A382BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51106:51d2eea00745
Date: 2012-01-07 13:53 +0200
http://bitbucket.org/pypy/pypy/changeset/51d2eea00745/
Log: an attempt to use the new interface
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -1,14 +1,15 @@
from pypy.interpreter.typedef import TypeDef, GetSetProperty
from pypy.interpreter.baseobjspace import Wrappable
-from pypy.interpreter.gateway import unwrap_spec, interp2app
+from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped
from pypy.interpreter.pycode import PyCode
from pypy.interpreter.error import OperationError
-from pypy.rpython.lltypesystem import lltype
+from pypy.rpython.lltypesystem import lltype, llmemory
from pypy.rpython.annlowlevel import cast_base_ptr_to_instance
from pypy.rpython.lltypesystem.rclass import OBJECT
from pypy.jit.metainterp.resoperation import rop, AbstractResOp
from pypy.rlib.nonconst import NonConstant
+from pypy.rlib import jit_hooks
class Cache(object):
in_recursion = False
@@ -77,7 +78,19 @@
return space.w_None
def wrap_oplist(space, logops, operations, ops_offset):
- return [WrappedOp(op, ops_offset[op], logops.repr_of_resop(op)) for op in operations]
+ return [WrappedOp(jit_hooks._cast_to_gcref(op),
+ ops_offset[op],
+ logops.repr_of_resop(op)) for op in operations]
+
+@unwrap_spec(num=int, offset=int, repr=str)
+def descr_new_resop(space, num, w_args, w_res=NoneNotWrapped, offset=-1,
+ repr=''):
+ args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in w_args]
+ if w_res is None:
+ llres = lltype.nullptr(llmemory.GCREF)
+ else:
+ llres = jit_hooks.boxint_new(space.int_w(w_res))
+ return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr)
class WrappedOp(Wrappable):
""" A class representing a single ResOperation, wrapped nicely
@@ -91,16 +104,13 @@
return space.wrap(self.repr_of_resop)
def descr_num(self, space):
- return space.wrap(self.op.getopnum())
-
- def descr_name(self, space):
- return space.wrap(self.op.getopname())
+ return space.wrap(jit_hooks.resop_opnum(self.op))
WrappedOp.typedef = TypeDef(
'ResOperation',
__doc__ = WrappedOp.__doc__,
+ __new__ = interp2app(descr_new_resop),
__repr__ = interp2app(WrappedOp.descr_repr),
- name = GetSetProperty(WrappedOp.descr_name),
num = GetSetProperty(WrappedOp.descr_num),
)
WrappedOp.acceptable_as_base_class = False
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -95,7 +95,7 @@
assert elem[2][2] == False
assert len(elem[3]) == 3
int_add = elem[3][0]
- assert int_add.name == 'int_add'
+ #assert int_add.name == 'int_add'
assert int_add.num == self.int_add_num
self.on_compile_bridge()
assert len(all) == 2
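[Editor's sketch] The commit above gives `WrappedOp` an applevel constructor (`__new__ = interp2app(descr_new_resop)`) so user code can build a `ResOperation` wrapper directly, as the test `pypyjit.ResOperation(self.int_add_num, [1, 3], 4)` later exercises. A minimal pure-Python analogue of that interface, with dicts standing in for the opaque ll operations, is:

```python
class WrappedOp:
    """Pure-Python stand-in for the WrappedOp interface in the diff:
    wraps an opaque operation and exposes its number.  Names mirror
    interp_resop.py but nothing here is PyPy's real implementation."""
    def __init__(self, op, offset, repr_of_resop):
        self.op = op
        self.offset = offset
        self.repr_of_resop = repr_of_resop

    @property
    def num(self):
        # Analogue of jit_hooks.resop_opnum(self.op).
        return self.op["opnum"]

def descr_new_resop(num, args, res=None, offset=-1, repr=''):
    # Analogue of the diff's descr_new_resop: build the underlying
    # operation (a dict here, a GCREF'd ResOperation in PyPy) and wrap it.
    op = {"opnum": num, "args": list(args), "res": res}
    return WrappedOp(op, offset, repr)
```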
From noreply at buildbot.pypy.org Sat Jan 7 13:02:43 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 13:02:43 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: what's untested is broken
Message-ID: <20120107120243.7C60582BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51107:80d78ac9430f
Date: 2012-01-07 14:02 +0200
http://bitbucket.org/pypy/pypy/changeset/80d78ac9430f/
Log: what's untested is broken
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -83,9 +83,10 @@
logops.repr_of_resop(op)) for op in operations]
@unwrap_spec(num=int, offset=int, repr=str)
-def descr_new_resop(space, num, w_args, w_res=NoneNotWrapped, offset=-1,
+def descr_new_resop(space, w_tp, num, w_args, w_res=NoneNotWrapped, offset=-1,
repr=''):
- args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in w_args]
+ args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in
+ space.listview(w_args)]
if w_res is None:
llres = lltype.nullptr(llmemory.GCREF)
else:
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -158,3 +158,9 @@
pypyjit.set_abort_hook(hook)
self.on_abort()
assert l == [('pypyjit', 'ABORT_TOO_LONG')]
+
+ def test_creation(self):
+ import pypyjit
+
+ op = pypyjit.ResOperation(self.int_add_num, [1, 3], 4)
+ assert op.num == self.int_add_num
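The signature change above (adding `w_tp`) is needed because an app-level `__new__` receives the type object itself as its first argument before the user-supplied parameters. A plain-Python analogue of the resulting constructor (class and attribute names are hypothetical stand-ins for `pypyjit.ResOperation`):

```python
class ResOp(object):
    """Hypothetical stand-in for pypyjit.ResOperation."""

    def __new__(cls, num, args, res=None, offset=-1, repr=''):
        # 'cls' arrives first, mirroring how the interp-level
        # descr_new_resop must accept w_tp before num, w_args, etc.
        inst = object.__new__(cls)
        inst.num = num
        inst.args = list(args)   # like space.listview(w_args)
        inst.res = res
        inst.offset = offset
        inst.repr = repr
        return inst

op = ResOp(7, [1, 3], 4)
```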
From noreply at buildbot.pypy.org Sat Jan 7 13:07:39 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 13:07:39 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: bah
Message-ID: <20120107120739.8D5FB82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51108:ac3f9f68d10e
Date: 2012-01-07 14:07 +0200
http://bitbucket.org/pypy/pypy/changeset/ac3f9f68d10e/
Log: bah
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -88,7 +88,7 @@
args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in
space.listview(w_args)]
if w_res is None:
- llres = lltype.nullptr(llmemory.GCREF)
+ llres = lltype.nullptr(llmemory.GCREF.TO)
else:
llres = jit_hooks.boxint_new(space.int_w(w_res))
return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr)
From noreply at buildbot.pypy.org Sat Jan 7 13:30:03 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 13:30:03 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: run those in a slightly
different order, so we rewrite them before jitcodes
Message-ID: <20120107123003.9963682BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51109:4e244fe6f3f8
Date: 2012-01-07 14:29 +0200
http://bitbucket.org/pypy/pypy/changeset/4e244fe6f3f8/
Log: run those in a slightly different order, so we rewrite them before
jitcodes
diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py
--- a/pypy/jit/metainterp/warmspot.py
+++ b/pypy/jit/metainterp/warmspot.py
@@ -219,9 +219,9 @@
self.portal = policy.portal
verbose = False # not self.cpu.translate_support_code
+ self.rewrite_access_helpers()
self.codewriter.make_jitcodes(verbose=verbose)
self.rewrite_can_enter_jits()
- self.rewrite_access_helpers()
self.rewrite_set_param()
self.rewrite_force_virtual(vrefinfo)
self.rewrite_force_quasi_immutable()
From noreply at buildbot.pypy.org Sat Jan 7 13:45:22 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 13:45:22 +0100 (CET)
Subject: [pypy-commit] pypy translation-time-measurments: add some
measurements
Message-ID: <20120107124522.D3B8582BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: translation-time-measurments
Changeset: r51110:6c5f73bd4ec9
Date: 2012-01-07 14:44 +0200
http://bitbucket.org/pypy/pypy/changeset/6c5f73bd4ec9/
Log: add some measurements
diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py
--- a/pypy/annotation/annrpython.py
+++ b/pypy/annotation/annrpython.py
@@ -1,5 +1,6 @@
import sys
import types
+import time
from pypy.tool.ansi_print import ansi_log, raise_nicer_exception
from pypy.tool.pairtype import pair
from pypy.tool.error import (format_blocked_annotation_error,
@@ -25,6 +26,7 @@
import pypy.rpython.extfuncregistry # has side effects
import pypy.rlib.nonconst # has side effects
+ self.counter = {}
if translator is None:
# interface for tests
from pypy.translator.translator import TranslationContext
@@ -247,9 +249,17 @@
block, graph = self.pendingblocks.popitem()
if annmodel.DEBUG:
self.flowin_block = block # we need to keep track of block
+ t0 = time.time()
self.processblock(graph, block)
+ tk = time.time()
+ self.counter[graph] = self.counter.get(graph, 0) + tk - t0
self.policy.no_more_blocks_to_annotate(self)
if not self.pendingblocks:
+ import os
+ f = open('/tmp/annotator%d' % os.getpid(), 'w')
+ for k, v in self.counter.iteritems():
+ f.write('%s: %d' % (k, v))
+ f.close()
break # finished
# make sure that the return variables of all graphs is annotated
if self.added_blocks is not None:
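The pattern added to the annotator above accumulates per-graph wall-clock time in a dictionary keyed by graph. A minimal, self-contained sketch of the same accounting (function and parameter names are illustrative, not from the PyPy source):

```python
import time

def process_blocks(blocks, process):
    # Accumulate the time spent processing each graph's blocks,
    # mirroring the self.counter dict added in this commit.
    counter = {}
    for graph, block in blocks:
        t0 = time.time()
        process(graph, block)
        tk = time.time()
        # .get(graph, 0) handles the first block of each graph
        counter[graph] = counter.get(graph, 0) + tk - t0
    return counter
```

The per-graph totals can then be dumped to a file once no pending blocks remain, as the commit does with `/tmp/annotator<pid>`.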
From noreply at buildbot.pypy.org Sat Jan 7 13:53:57 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 13:53:57 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: for what it's worth,
	don't look into interp_resop for now. It's hard enough to
Message-ID: <20120107125357.D6A4B82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51111:41fea5d51df6
Date: 2012-01-07 14:53 +0200
http://bitbucket.org/pypy/pypy/changeset/41fea5d51df6/
Log: for what it's worth, don't look into interp_resop for now. It's hard
	enough to get this working.
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -69,12 +69,16 @@
modname == 'thread.os_thread'):
return True
if '.' in modname:
- modname, _ = modname.split('.', 1)
+ modname, rest = modname.split('.', 1)
+ else:
+ rest = ''
if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions',
'imp', 'sys', 'array', '_ffi', 'itertools', 'operator',
'posix', '_socket', '_sre', '_lsprof', '_weakref',
'__pypy__', 'cStringIO', '_collections', 'struct',
'mmap', 'marshal']:
+ if modname == 'pypyjit' and 'interp_resop' in rest:
+ return False
return True
return False
diff --git a/pypy/module/pypyjit/test/test_policy.py b/pypy/module/pypyjit/test/test_policy.py
--- a/pypy/module/pypyjit/test/test_policy.py
+++ b/pypy/module/pypyjit/test/test_policy.py
@@ -52,6 +52,7 @@
for modname in 'pypyjit', 'signal', 'micronumpy', 'math', 'imp':
assert pypypolicy.look_inside_pypy_module(modname)
assert pypypolicy.look_inside_pypy_module(modname + '.foo')
+ assert not pypypolicy.look_inside_pypy_module('pypyjit.interp_resop')
def test_see_jit_module():
assert pypypolicy.look_inside_pypy_module('pypyjit.interp_jit')
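The policy change above keeps the submodule suffix (`rest`) after splitting `modname`, so a single file can be excluded from an otherwise whitelisted package. A simplified sketch of the resulting predicate (module list truncated for brevity):

```python
def look_inside_pypy_module(modname):
    # Keep the part after the first dot so specific submodules
    # can be excluded, as this commit does for interp_resop.
    if '.' in modname:
        modname, rest = modname.split('.', 1)
    else:
        rest = ''
    if modname in ('pypyjit', 'signal', 'micronumpy', 'math'):
        if modname == 'pypyjit' and 'interp_resop' in rest:
            return False
        return True
    return False
```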
From noreply at buildbot.pypy.org Sat Jan 7 14:07:43 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 14:07:43 +0100 (CET)
Subject: [pypy-commit] pypy translation-time-measurments: style
Message-ID: <20120107130743.6E59682BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: translation-time-measurments
Changeset: r51112:cd2fd844ad80
Date: 2012-01-07 15:07 +0200
http://bitbucket.org/pypy/pypy/changeset/cd2fd844ad80/
Log: style
diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py
--- a/pypy/annotation/annrpython.py
+++ b/pypy/annotation/annrpython.py
@@ -258,7 +258,7 @@
import os
f = open('/tmp/annotator%d' % os.getpid(), 'w')
for k, v in self.counter.iteritems():
- f.write('%s: %d' % (k, v))
+ f.write('%s: %f\n' % (k, v))
f.close()
break # finished
# make sure that the return variables of all graphs is annotated
From noreply at buildbot.pypy.org Sat Jan 7 14:12:11 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 14:12:11 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: I believe this is an actual
problem
Message-ID: <20120107131211.6423B82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51113:5ca484324ef7
Date: 2012-01-07 15:11 +0200
http://bitbucket.org/pypy/pypy/changeset/5ca484324ef7/
Log: I believe this is an actual problem
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py
--- a/pypy/jit/backend/x86/assembler.py
+++ b/pypy/jit/backend/x86/assembler.py
@@ -492,7 +492,7 @@
except ValueError:
debug_print("Bridge out of guard", descr_number,
"was already compiled!")
- return
+ raise
self.setup(original_loop_token)
if log:
From noreply at buildbot.pypy.org Sat Jan 7 14:51:52 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sat, 7 Jan 2012 14:51:52 +0100 (CET)
Subject: [pypy-commit] pypy default: cpyext: Add support for
PyInterpreterState.next.
Message-ID: <20120107135152.B602682BFF@wyvern.cs.uni-duesseldorf.de>
Author: Amaury Forgeot d'Arc
Branch:
Changeset: r51114:416009084c6f
Date: 2012-01-07 12:10 +0100
http://bitbucket.org/pypy/pypy/changeset/416009084c6f/
Log: cpyext: Add support for PyInterpreterState.next. Always NULL, since
there is only one interpreter...
diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h
--- a/pypy/module/cpyext/include/pystate.h
+++ b/pypy/module/cpyext/include/pystate.h
@@ -5,7 +5,7 @@
struct _is; /* Forward */
typedef struct _is {
- int _foo;
+ struct _is *next;
} PyInterpreterState;
typedef struct _ts {
diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py
--- a/pypy/module/cpyext/pystate.py
+++ b/pypy/module/cpyext/pystate.py
@@ -2,7 +2,10 @@
cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct)
from pypy.rpython.lltypesystem import rffi, lltype
-PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ()))
+PyInterpreterStateStruct = lltype.ForwardReference()
+PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct)
+cpython_struct(
+ "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct)
PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)]))
@cpython_api([], PyThreadState, error=CANNOT_FAIL)
@@ -54,7 +57,8 @@
class InterpreterState(object):
def __init__(self, space):
- self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True)
+ self.interpreter_state = lltype.malloc(
+ PyInterpreterState.TO, flavor='raw', zero=True, immortal=True)
def new_thread_state(self):
capsule = ThreadStateCapsule()
diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py
--- a/pypy/module/cpyext/test/test_pystate.py
+++ b/pypy/module/cpyext/test/test_pystate.py
@@ -37,6 +37,7 @@
def test_thread_state_interp(self, space, api):
ts = api.PyThreadState_Get()
assert ts.c_interp == api.PyInterpreterState_Head()
+ assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO)
def test_basic_threadstate_dance(self, space, api):
# Let extension modules call these functions,
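The `lltype.ForwardReference()` dance above exists because the struct contains a pointer to its own type. The same incomplete-type pattern can be sketched with `ctypes`, where `_fields_` is likewise assigned only after the class exists (this is an illustrative analogue, not the cpyext code):

```python
import ctypes

class PyInterpreterState(ctypes.Structure):
    pass  # forward declaration, like lltype.ForwardReference()

# Fill in the fields afterwards so 'next' can point to the
# same (previously incomplete) struct type.
PyInterpreterState._fields_ = [
    ('next', ctypes.POINTER(PyInterpreterState)),
]

# A zero-initialized allocation leaves 'next' as NULL, which is
# what the zero=True flag added to lltype.malloc guarantees.
state = PyInterpreterState()
```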
From noreply at buildbot.pypy.org Sat Jan 7 14:51:53 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sat, 7 Jan 2012 14:51:53 +0100 (CET)
Subject: [pypy-commit] pypy default: Fix checkmodule.py for almost all
modules
Message-ID: <20120107135153.E0B7682BFF@wyvern.cs.uni-duesseldorf.de>
Author: Amaury Forgeot d'Arc
Branch:
Changeset: r51115:e8a394c064fd
Date: 2012-01-07 13:08 +0100
http://bitbucket.org/pypy/pypy/changeset/e8a394c064fd/
Log: Fix checkmodule.py for almost all modules
diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py
--- a/pypy/interpreter/baseobjspace.py
+++ b/pypy/interpreter/baseobjspace.py
@@ -1591,12 +1591,15 @@
'ArithmeticError',
'AssertionError',
'AttributeError',
+ 'BaseException',
+ 'DeprecationWarning',
'EOFError',
'EnvironmentError',
'Exception',
'FloatingPointError',
'IOError',
'ImportError',
+ 'ImportWarning',
'IndentationError',
'IndexError',
'KeyError',
@@ -1617,7 +1620,10 @@
'TabError',
'TypeError',
'UnboundLocalError',
+ 'UnicodeDecodeError',
'UnicodeError',
+ 'UnicodeEncodeError',
+ 'UnicodeTranslateError',
'ValueError',
'ZeroDivisionError',
'UnicodeEncodeError',
diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py
--- a/pypy/module/sys/__init__.py
+++ b/pypy/module/sys/__init__.py
@@ -42,7 +42,7 @@
'argv' : 'state.get(space).w_argv',
'py3kwarning' : 'space.w_False',
'warnoptions' : 'state.get(space).w_warnoptions',
- 'builtin_module_names' : 'state.w_None',
+ 'builtin_module_names' : 'space.w_None',
'pypy_getudir' : 'state.pypy_getudir', # not translated
'pypy_initial_path' : 'state.pypy_initial_path',
diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py
--- a/pypy/objspace/fake/checkmodule.py
+++ b/pypy/objspace/fake/checkmodule.py
@@ -1,8 +1,10 @@
from pypy.objspace.fake.objspace import FakeObjSpace, W_Root
+from pypy.config.pypyoption import get_pypy_config
def checkmodule(modname):
- space = FakeObjSpace()
+ config = get_pypy_config(translating=True)
+ space = FakeObjSpace(config)
mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__'])
# force computation and record what we wrap
module = mod.Module(space, W_Root())
diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py
--- a/pypy/objspace/fake/objspace.py
+++ b/pypy/objspace/fake/objspace.py
@@ -93,9 +93,9 @@
class FakeObjSpace(ObjSpace):
- def __init__(self):
+ def __init__(self, config=None):
self._seen_extras = []
- ObjSpace.__init__(self)
+ ObjSpace.__init__(self, config=config)
def float_w(self, w_obj):
is_root(w_obj)
@@ -135,6 +135,9 @@
def newfloat(self, x):
return w_some_obj()
+ def newcomplex(self, x, y):
+ return w_some_obj()
+
def marshal_w(self, w_obj):
"NOT_RPYTHON"
raise NotImplementedError
@@ -215,6 +218,10 @@
expected_length = 3
return [w_some_obj()] * expected_length
+ def unpackcomplex(self, w_complex):
+ is_root(w_complex)
+ return 1.1, 2.2
+
def allocate_instance(self, cls, w_subtype):
is_root(w_subtype)
return instantiate(cls)
@@ -232,6 +239,11 @@
def exec_(self, *args, **kwds):
pass
+ def createexecutioncontext(self):
+ ec = ObjSpace.createexecutioncontext(self)
+ ec._py_repr = None
+ return ec
+
# ----------
def translates(self, func=None, argtypes=None, **kwds):
@@ -267,18 +279,21 @@
ObjSpace.ExceptionTable +
['int', 'str', 'float', 'long', 'tuple', 'list',
'dict', 'unicode', 'complex', 'slice', 'bool',
- 'type', 'basestring']):
+ 'type', 'basestring', 'object']):
setattr(FakeObjSpace, 'w_' + name, w_some_obj())
#
for (name, _, arity, _) in ObjSpace.MethodTable:
args = ['w_%d' % i for i in range(arity)]
+ params = args[:]
d = {'is_root': is_root,
'w_some_obj': w_some_obj}
+ if name in ('get',):
+ params[-1] += '=None'
exec compile2("""\
def meth(self, %s):
%s
return w_some_obj()
- """ % (', '.join(args),
+ """ % (', '.join(params),
'; '.join(['is_root(%s)' % arg for arg in args]))) in d
meth = func_with_new_name(d['meth'], name)
setattr(FakeObjSpace, name, meth)
@@ -301,9 +316,12 @@
pass
FakeObjSpace.default_compiler = FakeCompiler()
-class FakeModule(object):
+class FakeModule(Wrappable):
+ def __init__(self):
+ self.w_dict = w_some_obj()
def get(self, name):
name + "xx" # check that it's a string
return w_some_obj()
FakeObjSpace.sys = FakeModule()
FakeObjSpace.sys.filesystemencoding = 'foobar'
+FakeObjSpace.builtin = FakeModule()
diff --git a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py
--- a/pypy/objspace/fake/test/test_objspace.py
+++ b/pypy/objspace/fake/test/test_objspace.py
@@ -40,7 +40,7 @@
def test_constants(self):
space = self.space
space.translates(lambda: (space.w_None, space.w_True, space.w_False,
- space.w_int, space.w_str,
+ space.w_int, space.w_str, space.w_object,
space.w_TypeError))
def test_wrap(self):
@@ -72,3 +72,9 @@
def test_newlist(self):
self.space.newlist([W_Root(), W_Root()])
+
+ def test_default_values(self):
+ # the __get__ method takes either 2 or 3 arguments
+ space = self.space
+ space.translates(lambda: (space.get(W_Root(), W_Root()),
+ space.get(W_Root(), W_Root(), W_Root())))
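The `FakeObjSpace` machinery above generates stub methods with `exec`, and this commit makes the last parameter of methods like `space.get()` optional by appending `=None` to the generated parameter list. A self-contained sketch of that code-generation trick (helper name is hypothetical):

```python
def make_meth(name, arity, optional_last=False):
    # Build a stub method source string, as FakeObjSpace does for
    # each entry of ObjSpace.MethodTable; when optional_last is set,
    # the final parameter gains a '=None' default so the method can
    # be called with either arity or arity-1 arguments.
    args = ['w_%d' % i for i in range(arity)]
    params = args[:]
    if optional_last:
        params[-1] += '=None'
    src = "def meth(self, %s):\n    return (%s)\n" % (
        ', '.join(params), ', '.join(args))
    d = {}
    exec(src, d)
    return d['meth']
```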
From noreply at buildbot.pypy.org Sat Jan 7 14:51:55 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sat, 7 Jan 2012 14:51:55 +0100 (CET)
Subject: [pypy-commit] pypy default: cpyext: export Py_ByteArrayType and
Py_MemoryViewType
Message-ID: <20120107135155.118FF82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Amaury Forgeot d'Arc
Branch:
Changeset: r51116:4bace20eef15
Date: 2012-01-07 13:15 +0100
http://bitbucket.org/pypy/pypy/changeset/4bace20eef15/
Log: cpyext: export Py_ByteArrayType and Py_MemoryViewType
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py
--- a/pypy/module/cpyext/api.py
+++ b/pypy/module/cpyext/api.py
@@ -23,6 +23,7 @@
from pypy.interpreter.function import StaticMethod
from pypy.objspace.std.sliceobject import W_SliceObject
from pypy.module.__builtin__.descriptor import W_Property
+from pypy.module.__builtin__.interp_memoryview import W_MemoryView
from pypy.rlib.entrypoint import entrypoint
from pypy.rlib.unroll import unrolling_iterable
from pypy.rlib.objectmodel import specialize
@@ -387,6 +388,8 @@
"Float": "space.w_float",
"Long": "space.w_long",
"Complex": "space.w_complex",
+ "ByteArray": "space.w_bytearray",
+ "MemoryView": "space.gettypeobject(W_MemoryView.typedef)",
"BaseObject": "space.w_object",
'None': 'space.type(space.w_None)',
'NotImplemented': 'space.type(space.w_NotImplemented)',
From noreply at buildbot.pypy.org Sat Jan 7 14:51:56 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sat, 7 Jan 2012 14:51:56 +0100 (CET)
Subject: [pypy-commit] pypy default: Add stubs for PyObject_GetBuffer: pypy
does not yet implement
Message-ID: <20120107135156.3BC1382BFF@wyvern.cs.uni-duesseldorf.de>
Author: Amaury Forgeot d'Arc
Branch:
Changeset: r51117:75b3dbc7d326
Date: 2012-01-07 13:41 +0100
http://bitbucket.org/pypy/pypy/changeset/75b3dbc7d326/
Log: Add stubs for PyObject_GetBuffer: pypy does not yet implement the
new buffer interface.
diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py
--- a/pypy/module/cpyext/buffer.py
+++ b/pypy/module/cpyext/buffer.py
@@ -1,6 +1,36 @@
+from pypy.interpreter.error import OperationError
from pypy.rpython.lltypesystem import rffi, lltype
from pypy.module.cpyext.api import (
cpython_api, CANNOT_FAIL, Py_buffer)
+from pypy.module.cpyext.pyobject import PyObject
+
+ at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyObject_CheckBuffer(space, w_obj):
+ """Return 1 if obj supports the buffer interface otherwise 0."""
+ return 0 # the bf_getbuffer field is never filled by cpyext
+
+ at cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real],
+ rffi.INT_real, error=-1)
+def PyObject_GetBuffer(space, w_obj, view, flags):
+ """Export obj into a Py_buffer, view. These arguments must
+ never be NULL. The flags argument is a bit field indicating what
+ kind of buffer the caller is prepared to deal with and therefore what
+ kind of buffer the exporter is allowed to return. The buffer interface
+ allows for complicated memory sharing possibilities, but some caller may
+ not be able to handle all the complexity but may want to see if the
+ exporter will let them take a simpler view to its memory.
+
+ Some exporters may not be able to share memory in every possible way and
+ may need to raise errors to signal to some consumers that something is
+ just not possible. These errors should be a BufferError unless
+ there is another error that is actually causing the problem. The
+ exporter can use flags information to simplify how much of the
+ Py_buffer structure is filled in with non-default values and/or
+ raise an error if the object can't support a simpler view of its memory.
+
+ 0 is returned on success and -1 on error."""
+ raise OperationError(space.w_TypeError, space.wrap(
+ 'PyPy does not yet implement the new buffer interface'))
@cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL)
def PyBuffer_IsContiguous(space, view, fortran):
diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h
--- a/pypy/module/cpyext/include/object.h
+++ b/pypy/module/cpyext/include/object.h
@@ -123,10 +123,6 @@
typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *);
typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **);
-typedef int (*objobjproc)(PyObject *, PyObject *);
-typedef int (*visitproc)(PyObject *, void *);
-typedef int (*traverseproc)(PyObject *, visitproc, void *);
-
/* Py3k buffer interface */
typedef struct bufferinfo {
void *buf;
@@ -153,6 +149,41 @@
typedef int (*getbufferproc)(PyObject *, Py_buffer *, int);
typedef void (*releasebufferproc)(PyObject *, Py_buffer *);
+ /* Flags for getting buffers */
+#define PyBUF_SIMPLE 0
+#define PyBUF_WRITABLE 0x0001
+/* we used to include an E, backwards compatible alias */
+#define PyBUF_WRITEABLE PyBUF_WRITABLE
+#define PyBUF_FORMAT 0x0004
+#define PyBUF_ND 0x0008
+#define PyBUF_STRIDES (0x0010 | PyBUF_ND)
+#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES)
+#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES)
+#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES)
+#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES)
+
+#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE)
+#define PyBUF_CONTIG_RO (PyBUF_ND)
+
+#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE)
+#define PyBUF_STRIDED_RO (PyBUF_STRIDES)
+
+#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT)
+
+#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT)
+
+
+#define PyBUF_READ 0x100
+#define PyBUF_WRITE 0x200
+#define PyBUF_SHADOW 0x400
+/* end Py3k buffer interface */
+
+typedef int (*objobjproc)(PyObject *, PyObject *);
+typedef int (*visitproc)(PyObject *, void *);
+typedef int (*traverseproc)(PyObject *, visitproc, void *);
+
typedef struct {
/* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all
arguments are guaranteed to be of the object's type (modulo
diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py
--- a/pypy/module/cpyext/stubs.py
+++ b/pypy/module/cpyext/stubs.py
@@ -34,141 +34,6 @@
@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
def PyObject_CheckBuffer(space, obj):
- """Return 1 if obj supports the buffer interface otherwise 0."""
- raise NotImplementedError
-
- at cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1)
-def PyObject_GetBuffer(space, obj, view, flags):
- """Export obj into a Py_buffer, view. These arguments must
- never be NULL. The flags argument is a bit field indicating what
- kind of buffer the caller is prepared to deal with and therefore what
- kind of buffer the exporter is allowed to return. The buffer interface
- allows for complicated memory sharing possibilities, but some caller may
- not be able to handle all the complexity but may want to see if the
- exporter will let them take a simpler view to its memory.
-
- Some exporters may not be able to share memory in every possible way and
- may need to raise errors to signal to some consumers that something is
- just not possible. These errors should be a BufferError unless
- there is another error that is actually causing the problem. The
- exporter can use flags information to simplify how much of the
- Py_buffer structure is filled in with non-default values and/or
- raise an error if the object can't support a simpler view of its memory.
-
- 0 is returned on success and -1 on error.
-
- The following table gives possible values to the flags arguments.
-
- Flag
-
- Description
-
- PyBUF_SIMPLE
-
- This is the default flag state. The returned
- buffer may or may not have writable memory. The
- format of the data will be assumed to be unsigned
- bytes. This is a "stand-alone" flag constant. It
- never needs to be '|'d to the others. The exporter
- will raise an error if it cannot provide such a
- contiguous buffer of bytes.
-
- PyBUF_WRITABLE
-
- The returned buffer must be writable. If it is
- not writable, then raise an error.
-
- PyBUF_STRIDES
-
- This implies PyBUF_ND. The returned
- buffer must provide strides information (i.e. the
- strides cannot be NULL). This would be used when
- the consumer can handle strided, discontiguous
- arrays. Handling strides automatically assumes
- you can handle shape. The exporter can raise an
- error if a strided representation of the data is
- not possible (i.e. without the suboffsets).
-
- PyBUF_ND
-
- The returned buffer must provide shape
- information. The memory will be assumed C-style
- contiguous (last dimension varies the
- fastest). The exporter may raise an error if it
- cannot provide this kind of contiguous buffer. If
- this is not given then shape will be NULL.
-
- PyBUF_C_CONTIGUOUS
- PyBUF_F_CONTIGUOUS
- PyBUF_ANY_CONTIGUOUS
-
- These flags indicate that the contiguity returned
- buffer must be respectively, C-contiguous (last
- dimension varies the fastest), Fortran contiguous
- (first dimension varies the fastest) or either
- one. All of these flags imply
- PyBUF_STRIDES and guarantee that the
- strides buffer info structure will be filled in
- correctly.
-
- PyBUF_INDIRECT
-
- This flag indicates the returned buffer must have
- suboffsets information (which can be NULL if no
- suboffsets are needed). This can be used when
- the consumer can handle indirect array
- referencing implied by these suboffsets. This
- implies PyBUF_STRIDES.
-
- PyBUF_FORMAT
-
- The returned buffer must have true format
- information if this flag is provided. This would
- be used when the consumer is going to be checking
- for what 'kind' of data is actually stored. An
- exporter should always be able to provide this
- information if requested. If format is not
- explicitly requested then the format must be
- returned as NULL (which means 'B', or
- unsigned bytes)
-
- PyBUF_STRIDED
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_WRITABLE).
-
- PyBUF_STRIDED_RO
-
- This is equivalent to (PyBUF_STRIDES).
-
- PyBUF_RECORDS
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_FORMAT | PyBUF_WRITABLE).
-
- PyBUF_RECORDS_RO
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_FORMAT).
-
- PyBUF_FULL
-
- This is equivalent to (PyBUF_INDIRECT |
- PyBUF_FORMAT | PyBUF_WRITABLE).
-
- PyBUF_FULL_RO
-
- This is equivalent to (PyBUF_INDIRECT |
- PyBUF_FORMAT).
-
- PyBUF_CONTIG
-
- This is equivalent to (PyBUF_ND |
- PyBUF_WRITABLE).
-
- PyBUF_CONTIG_RO
-
- This is equivalent to (PyBUF_ND)."""
raise NotImplementedError
@cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL)
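The `PyBUF_*` flags moved into object.h above compose by OR-ing bit masks, and an exporter checks a capability by testing that all of its bits are requested. A small sketch with the same values (taken from the #define block in this commit):

```python
# Mirror of the #define block added to object.h.
PyBUF_SIMPLE = 0
PyBUF_WRITABLE = 0x0001
PyBUF_FORMAT = 0x0004
PyBUF_ND = 0x0008
PyBUF_STRIDES = 0x0010 | PyBUF_ND
PyBUF_INDIRECT = 0x0100 | PyBUF_STRIDES
PyBUF_RECORDS = PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT
PyBUF_FULL = PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT

def is_requested(flags, flag):
    # All bits of 'flag' must be present in the caller's 'flags';
    # e.g. PyBUF_RECORDS implies PyBUF_STRIDES, which implies PyBUF_ND.
    return flags & flag == flag
```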
From noreply at buildbot.pypy.org Sat Jan 7 14:58:01 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 14:58:01 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: oops, a test and a fix
Message-ID: <20120107135801.797CC82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51118:6249a65d583e
Date: 2012-01-07 15:57 +0200
http://bitbucket.org/pypy/pypy/changeset/6249a65d583e/
Log: oops, a test and a fix
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -79,7 +79,7 @@
def wrap_oplist(space, logops, operations, ops_offset):
return [WrappedOp(jit_hooks._cast_to_gcref(op),
- ops_offset[op],
+ ops_offset.get(op, 0),
logops.repr_of_resop(op)) for op in operations]
@unwrap_spec(num=int, offset=int, repr=str)
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -56,7 +56,8 @@
greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)]
offset = {}
for i, op in enumerate(oplist):
- offset[op] = i
+ if i != 1:
+ offset[op] = i
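The one-line fix above replaces a direct dictionary lookup with `.get(op, 0)`, because not every operation has an entry in `ops_offset` (the test change deliberately omits one). A minimal sketch of the fixed behaviour (function name is illustrative):

```python
def wrap_offsets(operations, ops_offset):
    # ops_offset may lack entries for some operations; fall back
    # to 0 instead of raising KeyError, as this commit does in
    # wrap_oplist.
    return [(op, ops_offset.get(op, 0)) for op in operations]
```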
def interp_on_compile():
pypy_portal.on_compile(pypyjitdriver, logger, JitCellToken(),
From noreply at buildbot.pypy.org Sat Jan 7 18:01:14 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Sat, 7 Jan 2012 18:01:14 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: my dates
Message-ID: <20120107170114.832F782BFF@wyvern.cs.uni-duesseldorf.de>
Author: Antonio Cuni
Branch: extradoc
Changeset: r4001:218b3a396820
Date: 2012-01-07 18:01 +0100
http://bitbucket.org/pypy/extradoc/changeset/218b3a396820/
Log: my dates
diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt
--- a/sprintinfo/leysin-winter-2012/people.txt
+++ b/sprintinfo/leysin-winter-2012/people.txt
@@ -12,6 +12,7 @@
==================== ============== =======================
Armin Rigo private
David Schneider 17/22 ermina
+Antonio Cuni 16/22 ermina
==================== ============== =======================
From noreply at buildbot.pypy.org Sat Jan 7 18:15:27 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Sat, 7 Jan 2012 18:15:27 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: add a note about my dates
Message-ID: <20120107171527.852A382BFF@wyvern.cs.uni-duesseldorf.de>
Author: Antonio Cuni
Branch: extradoc
Changeset: r4002:4f9fc086064f
Date: 2012-01-07 18:15 +0100
http://bitbucket.org/pypy/extradoc/changeset/4f9fc086064f/
Log: add a note about my dates
diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt
--- a/sprintinfo/leysin-winter-2012/people.txt
+++ b/sprintinfo/leysin-winter-2012/people.txt
@@ -12,7 +12,7 @@
==================== ============== =======================
Armin Rigo private
David Schneider 17/22 ermina
-Antonio Cuni 16/22 ermina
+Antonio Cuni 16/22 ermina, might arrive on the 15th
==================== ============== =======================
From noreply at buildbot.pypy.org Sat Jan 7 18:28:24 2012
From: noreply at buildbot.pypy.org (rguillebert)
Date: Sat, 7 Jan 2012 18:28:24 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: Add myself
Message-ID: <20120107172824.47D8782BFF@wyvern.cs.uni-duesseldorf.de>
Author: Romain Guillebert
Branch: extradoc
Changeset: r4003:09006ec5a359
Date: 2012-01-07 18:26 +0100
http://bitbucket.org/pypy/extradoc/changeset/09006ec5a359/
Log: Add myself
diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt
--- a/sprintinfo/leysin-winter-2012/people.txt
+++ b/sprintinfo/leysin-winter-2012/people.txt
@@ -12,6 +12,7 @@
==================== ============== =======================
Armin Rigo private
David Schneider 17/22 ermina
+Romain Guillebert 15/22 ermina
==================== ============== =======================
From noreply at buildbot.pypy.org Sat Jan 7 18:28:25 2012
From: noreply at buildbot.pypy.org (rguillebert)
Date: Sat, 7 Jan 2012 18:28:25 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: Merge heads
Message-ID: <20120107172825.6489682BFF@wyvern.cs.uni-duesseldorf.de>
Author: Romain Guillebert
Branch: extradoc
Changeset: r4004:9601f2597df0
Date: 2012-01-07 18:27 +0100
http://bitbucket.org/pypy/extradoc/changeset/9601f2597df0/
Log: Merge heads
diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt
--- a/sprintinfo/leysin-winter-2012/people.txt
+++ b/sprintinfo/leysin-winter-2012/people.txt
@@ -12,6 +12,7 @@
==================== ============== =======================
Armin Rigo private
David Schneider 17/22 ermina
+Antonio Cuni 16/22 ermina, might arrive on the 15th
Romain Guillebert 15/22 ermina
==================== ============== =======================
From noreply at buildbot.pypy.org Sat Jan 7 19:08:39 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 19:08:39 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Random progress.
Message-ID: <20120107180839.34B0C82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51119:38b03b6eef08
Date: 2012-01-07 14:46 +0100
http://bitbucket.org/pypy/pypy/changeset/38b03b6eef08/
Log: Random progress.
diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -16,22 +16,13 @@
#
# A concurrent generational mark&sweep GC.
#
-# This uses a separate thread to run the minor collections in parallel.
-# See concurrentgen.txt for some details.
-#
-# Based on observations of the timing of collections with "minimark"
-# (on translate.py): about 15% of the time in minor collections
-# (including 2% in walk_roots), and about 7% in major collections.
-# So out of a total of 22% this should parallelize 20%.
-#
+# This uses a separate thread to run the collections in parallel.
# This is an entirely non-moving collector, with a generational write
# barrier adapted to the concurrent marking done by the collector thread.
+# See concurrentgen.txt for some details.
#
WORD = LONG_BIT // 8
-WORD_POWER_2 = {32: 2, 64: 3}[LONG_BIT]
-assert 1 << WORD_POWER_2 == WORD
-FLOAT_ALMOST_MAXINT = float(sys.maxint) * 0.9999
# Objects start with an integer 'tid', which is decomposed as follows.
@@ -49,8 +40,8 @@
class ConcurrentGenGC(GCBase):
_alloc_flavor_ = "raw"
- inline_simple_malloc = True
- inline_simple_malloc_varsize = True
+ #inline_simple_malloc = True
+ #inline_simple_malloc_varsize = True
needs_deletion_barrier = True
needs_weakref_read_barrier = True
prebuilt_gc_objects_are_static_roots = False
@@ -59,7 +50,7 @@
HDRPTR = lltype.Ptr(lltype.ForwardReference())
HDR = lltype.Struct('header', ('tid', lltype.Signed),
- ('next', HDRPTR)) # <-- kill me later
+ ('next', HDRPTR)) # <-- kill me later XXX
HDRPTR.TO.become(HDR)
HDRSIZE = llmemory.sizeof(HDR)
NULL = lltype.nullptr(HDR)
@@ -85,7 +76,7 @@
**kwds):
GCBase.__init__(self, config, **kwds)
self.read_from_env = read_from_env
- self.nursery_size = nursery_size
+ self.minimal_nursery_size = nursery_size
#
self.main_thread_ident = ll_thread.get_ident() # non-transl. debug only
#
@@ -106,6 +97,7 @@
def _nursery_full(additional_size):
# a hack to reduce the code size in _account_for_nursery():
# avoids the 'self' argument.
+ assert self.nursery_size_still_available < 0
self.nursery_full(additional_size)
_nursery_full._dont_inline_ = True
self._nursery_full = _nursery_full
@@ -156,7 +148,7 @@
#
self.collector.setup()
#
- self.set_minimal_nursery_size(self.nursery_size)
+ self.set_minimal_nursery_size(self.minimal_nursery_size)
if self.read_from_env:
#
newsize = env.read_from_env('PYPY_GC_NURSERY')
@@ -176,6 +168,21 @@
self.old_objects_size = r_uint(0) # approx size of 'old objs' box
self.nursery_size_still_available = intmask(self.nursery_size)
+ def update_total_memory_size(self):
+ # compute the new value for 'total_memory_size': it should be
+ # twice old_objects_size, but never less than 2/3rd of the old value,
+ # and at least 4 * minimal_nursery_size.
+ absolute_maximum = r_uint(-1)
+ if self.old_objects_size < absolute_maximum // 2:
+ tms = self.old_objects_size * 2
+ else:
+ tms = absolute_maximum
+ tms = max(tms, self.total_memory_size // 3 * 2)
+ tms = max(tms, 4 * self.minimal_nursery_size)
+ self.total_memory_size = tms
+ debug_print("total memory size:", tms)
+
+
def _teardown(self):
"Stop the collector thread after tests have run."
self.wait_for_the_end_of_collection()
@@ -261,6 +268,8 @@
def _account_for_nursery(self, additional_size):
self.nursery_size_still_available -= additional_size
+ debug_print("malloc:", additional_size,
+ "still_available:", self.nursery_size_still_available)
if self.nursery_size_still_available < 0:
self._nursery_full(additional_size)
_account_for_nursery._always_inline_ = True
@@ -379,19 +388,14 @@
def nursery_full(self, additional_size):
# See concurrentgen.txt.
#
- assert self.nursery_size_still_available < 0
- #
# Handle big allocations specially
if additional_size > intmask(self.total_memory_size >> 4):
xxxxxxxxxxxx
self.handle_big_allocation(additional_size)
return
#
- waiting_for_major_collection = self.collector.major_collection_phase != 0
- #
- if (self.collector.running == 0 or
- self.stop_collection(wait=waiting_for_major_collection)):
- # The previous collection finished.
+ if self.collector.running == 0 or self.stop_collection():
+ # The previous collection finished; no collection is running now.
#
# Expand the nursery if we can, up to 25% of total_memory_size.
# In some cases, the limiting factor is that the nursery size
@@ -400,15 +404,17 @@
expand_to = self.total_memory_size >> 2
expand_to = min(expand_to, self.total_memory_size -
self.old_objects_size)
- self.nursery_size_still_available += intmask(expand_to -
- self.nursery_size)
- self.nursery_size = expand_to
- #
- # If 'nursery_size_still_available' has been increased to a
- # nonnegative number, then we are done: we can just continue
- # filling the nursery.
- if self.nursery_size_still_available >= 0:
- return
+ if expand_to > self.nursery_size:
+ debug_print("expanded nursery size:", expand_to)
+ self.nursery_size_still_available += intmask(expand_to -
+ self.nursery_size)
+ self.nursery_size = expand_to
+ #
+ # If 'nursery_size_still_available' has been increased to a
+ # nonnegative number, then we are done: we can just continue
+ # filling the nursery.
+ if self.nursery_size_still_available >= 0:
+ return
#
# Else, we trigger the next minor collection now.
self._start_minor_collection()
@@ -423,46 +429,45 @@
newsize = min(newsize, self.total_memory_size >> 2)
self.nursery_size = newsize
self.nursery_size_still_available = intmask(newsize)
+ debug_print("nursery size:", self.nursery_size)
+ debug_print("total memory size:", self.total_memory_size)
return
- yyy
-
- else:
- # The previous collection is not finished yet.
- # At this point we want a full collection to occur.
- debug_start("gc-major")
- #
- # We have to first wait for the previous minor collection to finish:
- self.stop_collection(wait=True)
- #
- # Start the major collection.
- self._start_major_collection()
- #
- debug_stop("gc-major")
+ # The previous collection is likely not finished yet.
+ # At this point we want a full collection to occur.
+ debug_start("gc-major")
+ #
+ # We have to first wait for the previous minor collection to finish:
+ self.wait_for_the_end_of_collection()
+ #
+ # Start the major collection.
+ self._start_major_collection()
+ #
+ debug_stop("gc-major")
def wait_for_the_end_of_collection(self):
- """In the mutator thread: wait for the minor collection currently
- running (if any) to finish, and synchronize the two threads."""
if self.collector.running != 0:
self.stop_collection(wait=True)
- #
- # We must *not* run execute_finalizers_ll() here, because it
- # can start the next collection, and then this function returns
- # with a collection in progress, which it should not. Be careful
- # to call execute_finalizers_ll() in the caller somewhere.
- ll_assert(self.collector.running == 0,
- "collector thread not paused?")
- def stop_collection(self, wait):
- if wait:
- debug_start("gc-stop")
- self.acquire(self.finished_lock)
+ def stop_collection(self, wait=False):
+ ll_assert(self.collector.running != 0, "stop_collection: running == 0")
+ #
+ major_collection = (self.collector.major_collection_phase == 2)
+ debug_start("gc-stop")
+ try:
+ debug_print("wait:", int(wait))
+ if major_collection:
+ debug_print("ending a major collection")
+ if wait or major_collection:
+ self.acquire(self.finished_lock)
+ else:
+ if not self.try_acquire(self.finished_lock):
+ return False
+ finally:
+ debug_print("old objects size:", self.old_objects_size)
debug_stop("gc-stop")
- else:
- if not self.try_acquire(self.finished_lock):
- return False
self.collector.running = 0
#debug_print("collector.running = 0")
#
@@ -475,6 +480,11 @@
if self.DEBUG:
self.debug_check_lists()
#
+ if major_collection:
+ self.collector.major_collection_phase = 0
+ # Update the total memory usage to 2 times the old objects' size
+ self.update_total_memory_size()
+ #
return True
@@ -502,8 +512,9 @@
def collect(self, gen=4):
debug_start("gc-forced-collect")
- self.trigger_next_collection(force_major_collection=True)
self.wait_for_the_end_of_collection()
+ self._start_major_collection()
+ self.nursery_full(0)
self.execute_finalizers_ll()
debug_stop("gc-forced-collect")
return
@@ -527,29 +538,6 @@
gen>=4: Do a full synchronous major collection.
"""
- debug_start("gc-forced-collect")
- debug_print("collect, gen =", gen)
- if gen >= 1 or self.collector.running <= 0:
- self.trigger_next_collection(gen >= 3)
- if gen == 2 or gen >= 4:
- self.wait_for_the_end_of_collection()
- self.execute_finalizers_ll()
- debug_stop("gc-forced-collect")
-
- def trigger_next_collection(self, force_major_collection):
- """In the mutator thread: triggers the next minor or major collection."""
- #
- # In case the previous collection is not over yet, wait for it
- self.wait_for_the_end_of_collection()
- #
- # Choose between a minor and a major collection
- if force_major_collection:
- self._start_major_collection()
- else:
- self._start_minor_collection()
- #
- self.execute_finalizers_ll()
-
def _start_minor_collection(self, major_collection_phase=0):
#
@@ -633,10 +621,12 @@
self.collector.delayed_aging_objects = self.collector.aging_objects
self.collector.aging_objects = self.old_objects
self.old_objects = self.NULL
-
#self.collect_weakref_pages = self.weakref_pages
#self.collect_finalizer_pages = self.finalizer_pages
#
+ # Now there are no more old objects
+ self.old_objects_size = r_uint(0)
+ #
# Start again the collector thread
self._start_collection_common(major_collection_phase=2)
#
@@ -652,7 +642,6 @@
self.collector.running = 1
#debug_print("collector.running = 1")
self.release(self.ready_to_start_lock)
- self.nursery_size_still_available = self.nursery_size
def _add_stack_root(self, root):
# NB. it's ok to edit 'gray_objects' from the mutator thread here,
@@ -943,8 +932,10 @@
# its size ends up being accounted here or not --- but it will
# be at the following minor collection, because the object is
# young again. So, careful about overflows.
- ll_assert(surviving_size <= self.gc.total_memory_size,
- "surviving_size too large")
+ if surviving_size > self.gc.total_memory_size:
+ debug_print("surviving_size too large!",
+ surviving_size, self.gc.total_memory_size)
+ ll_assert(False, "surviving_size too large")
limit = self.gc.total_memory_size - surviving_size
if self.gc.old_objects_size <= limit:
self.gc.old_objects_size += surviving_size
From noreply at buildbot.pypy.org Sat Jan 7 19:08:40 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 19:08:40 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Tweak tweak tweak.
Message-ID: <20120107180840.5DA9882C00@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51120:73514c0443a5
Date: 2012-01-07 19:01 +0100
http://bitbucket.org/pypy/pypy/changeset/73514c0443a5/
Log: Tweak tweak tweak.
diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -63,20 +63,19 @@
# Automatically adjust the remaining parameters from the environment.
"read_from_env": True,
- # The default size of the nursery: use 6 MB by default.
- # Environment variable: PYPY_GC_NURSERY
- "nursery_size": 6*1024*1024,
+ # The minimal RAM usage: use 24 MB by default.
+ # Environment variable: PYPY_GC_MIN
+ "min_heap_size": 6*1024*1024,
}
def __init__(self, config,
read_from_env=False,
- nursery_size=32*WORD,
- fill_factor=2.0, # xxx kill
+ min_heap_size=128*WORD,
**kwds):
GCBase.__init__(self, config, **kwds)
self.read_from_env = read_from_env
- self.minimal_nursery_size = nursery_size
+ self.min_heap_size = r_uint(min_heap_size)
#
self.main_thread_ident = ll_thread.get_ident() # non-transl. debug only
#
@@ -93,14 +92,6 @@
# is a collection running and the mutator tries to change an object
# that was not scanned yet.
self._init_writebarrier_logic()
- #
- def _nursery_full(additional_size):
- # a hack to reduce the code size in _account_for_nursery():
- # avoids the 'self' argument.
- assert self.nursery_size_still_available < 0
- self.nursery_full(additional_size)
- _nursery_full._dont_inline_ = True
- self._nursery_full = _nursery_full
def _initialize(self):
# Initialize the GC. In normal translated program, this function
@@ -118,7 +109,9 @@
# contains the objects that the write barrier re-marked as young
# (so they are "old young objects").
self.new_young_objects = self.NULL
+ self.new_young_objects_size = r_uint(0)
self.old_objects = self.NULL
+ self.old_objects_size = r_uint(0) # total size of self.old_objects
#
# See concurrentgen.txt for more information about these fields.
self.current_young_marker = MARK_BYTE_1
@@ -148,37 +141,48 @@
#
self.collector.setup()
#
- self.set_minimal_nursery_size(self.minimal_nursery_size)
+ self.set_min_heap_size(self.min_heap_size)
if self.read_from_env:
#
- newsize = env.read_from_env('PYPY_GC_NURSERY')
+ newsize = env.read_from_env('PYPY_GC_MIN')
if newsize > 0:
- self.set_minimal_nursery_size(newsize)
+ self.set_min_heap_size(r_uint(newsize))
#
- debug_print("minimal nursery size:", self.minimal_nursery_size)
+ debug_print("minimal heap size:", self.min_heap_size)
debug_stop("gc-startup")
- def set_minimal_nursery_size(self, newsize):
- # See concurrentgen.txt. At the start of the process, 'newsize' is
- # a quarter of the total memory size.
- newsize = min(newsize, (sys.maxint - 65535) // 4)
- self.minimal_nursery_size = r_uint(newsize)
- self.total_memory_size = r_uint(4 * newsize) # total size
- self.nursery_size = r_uint(newsize) # size of the '->new...' box
- self.old_objects_size = r_uint(0) # approx size of 'old objs' box
- self.nursery_size_still_available = intmask(self.nursery_size)
+ def set_min_heap_size(self, newsize):
+ # See concurrentgen.txt.
+ self.min_heap_size = newsize
+ self.total_memory_size = newsize # total heap size
+ self.nursery_limit = newsize >> 2 # total size of the '->new...' box
+ #
+ # The in-use portion of the '->new...' box contains the objs
+ # that are in the 'new_young_objects' list. The total of their
+ # size is 'new_young_objects_size'.
+ #
+ # The 'old objects' box contains the objs that are in the
+ # 'old_objects' list. The total of their size is 'old_objects_size'.
+ #
+ # The write barrier occasionally resets the mark byte of objects
+ # to 'young'. This is done without adding or removing objects
+ # to the above lists, and consequently without correcting the
+ # '*_size' variables. Because of that, the 'old_objects' lists
+ # may contain a few objects that are not marked 'old' any more,
+ # and conversely, prebuilt objects may end up marked 'old' but
+ # are never added to the 'old_objects' list.
def update_total_memory_size(self):
# compute the new value for 'total_memory_size': it should be
# twice old_objects_size, but never less than 2/3rd of the old value,
- # and at least 4 * minimal_nursery_size.
+ # and at least 'min_heap_size'
absolute_maximum = r_uint(-1)
if self.old_objects_size < absolute_maximum // 2:
tms = self.old_objects_size * 2
else:
tms = absolute_maximum
tms = max(tms, self.total_memory_size // 3 * 2)
- tms = max(tms, 4 * self.minimal_nursery_size)
+ tms = max(tms, self.min_heap_size)
self.total_memory_size = tms
debug_print("total memory size:", tms)
@@ -228,7 +232,6 @@
size_gc_header = self.gcheaderbuilder.size_gc_header
totalsize = size_gc_header + size
rawtotalsize = raw_malloc_usage(totalsize)
- self._account_for_nursery(rawtotalsize)
adr = llarena.arena_malloc(rawtotalsize, 2)
if adr == llmemory.NULL:
raise MemoryError
@@ -237,7 +240,11 @@
hdr = self.header(obj)
hdr.tid = self.combine(typeid, self.current_young_marker, 0)
hdr.next = self.new_young_objects
+ debug_print("malloc:", rawtotalsize, obj)
self.new_young_objects = hdr
+ self.new_young_objects_size += r_uint(rawtotalsize)
+ if self.new_young_objects_size > self.nursery_limit:
+ self.nursery_overflowed(obj)
return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF)
def malloc_varsize_clear(self, typeid, length, size, itemsize,
@@ -253,7 +260,6 @@
raise MemoryError
#
rawtotalsize = raw_malloc_usage(totalsize)
- self._account_for_nursery(rawtotalsize)
adr = llarena.arena_malloc(rawtotalsize, 2)
if adr == llmemory.NULL:
raise MemoryError
@@ -263,17 +269,13 @@
hdr = self.header(obj)
hdr.tid = self.combine(typeid, self.current_young_marker, 0)
hdr.next = self.new_young_objects
+ debug_print("malloc:", rawtotalsize, obj)
self.new_young_objects = hdr
+ self.new_young_objects_size += r_uint(rawtotalsize)
+ if self.new_young_objects_size > self.nursery_limit:
+ self.nursery_overflowed(obj)
return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF)
- def _account_for_nursery(self, additional_size):
- self.nursery_size_still_available -= additional_size
- debug_print("malloc:", additional_size,
- "still_available:", self.nursery_size_still_available)
- if self.nursery_size_still_available < 0:
- self._nursery_full(additional_size)
- _account_for_nursery._always_inline_ = True
-
# ----------
# Other functions in the GC API
@@ -322,7 +324,7 @@
cym = self.current_young_marker
com = self.current_old_marker
mark = self.get_mark(obj)
- #debug_print("deletion_barrier:", mark, obj)
+ debug_print("deletion_barrier:", mark, obj)
#
if mark == com: # most common case, make it fast
#
@@ -385,16 +387,12 @@
# ----------
- def nursery_full(self, additional_size):
- # See concurrentgen.txt.
+ def nursery_overflowed(self, newest_obj):
+ # See concurrentgen.txt. Called after the nursery overflowed.
#
- # Handle big allocations specially
- if additional_size > intmask(self.total_memory_size >> 4):
- xxxxxxxxxxxx
- self.handle_big_allocation(additional_size)
- return
+ debug_start("gc-nursery-full")
#
- if self.collector.running == 0 or self.stop_collection():
+ if self.previous_collection_finished():
# The previous collection finished; no collection is running now.
#
# Expand the nursery if we can, up to 25% of total_memory_size.
@@ -404,46 +402,51 @@
expand_to = self.total_memory_size >> 2
expand_to = min(expand_to, self.total_memory_size -
self.old_objects_size)
- if expand_to > self.nursery_size:
- debug_print("expanded nursery size:", expand_to)
- self.nursery_size_still_available += intmask(expand_to -
- self.nursery_size)
- self.nursery_size = expand_to
+ if expand_to > self.nursery_limit:
+ debug_print("expanding nursery limit to:", expand_to)
+ self.nursery_limit = expand_to
#
- # If 'nursery_size_still_available' has been increased to a
- # nonnegative number, then we are done: we can just continue
- # filling the nursery.
- if self.nursery_size_still_available >= 0:
+ # If 'new_young_objects_size' is not greater than this
+ # expanded 'nursery_size', then we are done: we can just
+ # continue filling the nursery.
+ if self.new_young_objects_size <= self.nursery_limit:
+ debug_stop("gc-nursery-full")
return
#
# Else, we trigger the next minor collection now.
+ self.flagged_objects.append(newest_obj)
self._start_minor_collection()
#
- # Now there is no new object left. Reset the nursery size to
- # be at most 25% of total_memory_size, and initially no more than
- # 3/4*total_memory_size - old_objects_size. If that value is not
- # positive, then we immediately go into major collection mode.
+ # Now there is no new object left.
+ ll_assert(self.new_young_objects_size == r_uint(0),
+ "new object left behind?")
+ #
+ # Reset the nursery size to be at most 25% of
+ # total_memory_size, and initially no more than
+ # 3/4*total_memory_size - old_objects_size. If that value
+ # is not positive, then we immediately go into major
+ # collection mode.
three_quarters = (self.total_memory_size >> 2) * 3
if self.old_objects_size < three_quarters:
newsize = three_quarters - self.old_objects_size
newsize = min(newsize, self.total_memory_size >> 2)
- self.nursery_size = newsize
- self.nursery_size_still_available = intmask(newsize)
- debug_print("nursery size:", self.nursery_size)
+ self.nursery_limit = newsize
debug_print("total memory size:", self.total_memory_size)
+ debug_print("initial nursery limit:", self.nursery_limit)
+ debug_stop("gc-nursery-full")
return
# The previous collection is likely not finished yet.
# At this point we want a full collection to occur.
- debug_start("gc-major")
+ debug_print("starting a major collection")
#
# We have to first wait for the previous minor collection to finish:
self.wait_for_the_end_of_collection()
#
# Start the major collection.
- self._start_major_collection()
+ self._start_major_collection(newest_obj)
#
- debug_stop("gc-major")
+ debug_stop("gc-nursery-full")
def wait_for_the_end_of_collection(self):
@@ -451,23 +454,27 @@
self.stop_collection(wait=True)
- def stop_collection(self, wait=False):
+ def previous_collection_finished(self):
+ return self.collector.running == 0 or self.stop_collection(wait=False)
+
+
+ def stop_collection(self, wait):
ll_assert(self.collector.running != 0, "stop_collection: running == 0")
#
+ debug_start("gc-stop")
major_collection = (self.collector.major_collection_phase == 2)
- debug_start("gc-stop")
- try:
- debug_print("wait:", int(wait))
- if major_collection:
- debug_print("ending a major collection")
- if wait or major_collection:
- self.acquire(self.finished_lock)
- else:
- if not self.try_acquire(self.finished_lock):
- return False
- finally:
- debug_print("old objects size:", self.old_objects_size)
- debug_stop("gc-stop")
+ if major_collection or wait:
+ debug_print("waiting for the end of collection, major =",
+ int(major_collection))
+ self.acquire(self.finished_lock)
+ else:
+ if not self.try_acquire(self.finished_lock):
+ debug_print("minor collection not finished!")
+ debug_stop("gc-stop")
+ return False
+ #
+ debug_print("old objects size:", self.old_objects_size)
+ debug_stop("gc-stop")
self.collector.running = 0
#debug_print("collector.running = 0")
#
@@ -513,8 +520,8 @@
def collect(self, gen=4):
debug_start("gc-forced-collect")
self.wait_for_the_end_of_collection()
- self._start_major_collection()
- self.nursery_full(0)
+ self._start_major_collection(llmemory.NULL)
+ self.wait_for_the_end_of_collection()
self.execute_finalizers_ll()
debug_stop("gc-forced-collect")
return
@@ -541,7 +548,7 @@
def _start_minor_collection(self, major_collection_phase=0):
#
- debug_start("gc-start")
+ debug_start("gc-minor-start")
#
# Scan the stack roots and the refs in non-GC objects
self.root_walker.walk_roots(
@@ -575,19 +582,22 @@
# Copy a few 'mutator' fields to 'collector' fields
self.collector.aging_objects = self.new_young_objects
self.new_young_objects = self.NULL
+ self.new_young_objects_size = r_uint(0)
#self.collect_weakref_pages = self.weakref_pages
#self.collect_finalizer_pages = self.finalizer_pages
#
# Start the collector thread
self._start_collection_common(major_collection_phase)
#
- debug_stop("gc-start")
+ debug_stop("gc-minor-start")
- def _start_major_collection(self):
+ def _start_major_collection(self, newest_obj):
#
debug_start("gc-major-collection")
#
# Force a minor collection's marking step to occur now
+ if newest_obj:
+ self.flagged_objects.append(newest_obj)
self._start_minor_collection(major_collection_phase=1)
#
# Wait for it to finish
@@ -600,6 +610,10 @@
ll_assert(self.new_young_objects == self.NULL,
"new_young_objects should be empty here")
#
+ # Keep this newest_obj alive
+ if newest_obj:
+ self.collector.gray_objects.append(newest_obj)
+ #
# Scan again the stack roots and the refs in non-GC objects
self.root_walker.walk_roots(
ConcurrentGenGC._add_stack_root, # stack roots
@@ -621,12 +635,10 @@
self.collector.delayed_aging_objects = self.collector.aging_objects
self.collector.aging_objects = self.old_objects
self.old_objects = self.NULL
+ self.old_objects_size = r_uint(0)
#self.collect_weakref_pages = self.weakref_pages
#self.collect_finalizer_pages = self.finalizer_pages
#
- # Now there are no more old objects
- self.old_objects_size = r_uint(0)
- #
# Start again the collector thread
self._start_collection_common(major_collection_phase=2)
#
@@ -647,7 +659,8 @@
# NB. it's ok to edit 'gray_objects' from the mutator thread here,
# because the collector thread is not running yet
obj = root.address[0]
- #debug_print("_add_stack_root", obj)
+ debug_print("_add_stack_root", obj)
+ assert 'DEAD' not in repr(obj)
self.get_mark(obj)
self.collector.gray_objects.append(obj)
@@ -672,19 +685,28 @@
def debug_check_lists(self):
# just check that they are correct, non-infinite linked lists
- self.debug_check_list(self.new_young_objects)
- self.debug_check_list(self.old_objects)
+ self.debug_check_list(self.new_young_objects,
+ self.new_young_objects_size)
+ self.debug_check_list(self.old_objects, self.old_objects_size)
- def debug_check_list(self, list):
+ def debug_check_list(self, list, totalsize):
previous = self.NULL
count = 0
+ size = r_uint(0)
+ size_gc_header = self.gcheaderbuilder.size_gc_header
while list != self.NULL:
- # prevent constant-folding, and detects loops
+ obj = llmemory.cast_ptr_to_adr(list) + size_gc_header
+ size1 = size_gc_header + self.get_size(obj)
+ print "debug:", llmemory.raw_malloc_usage(size1)
+ size += llmemory.raw_malloc_usage(size1)
+ # detect loops
ll_assert(list != previous, "loop!")
count += 1
if count & (count-1) == 0: # only on powers of two, to
previous = list # detect loops of any size
list = list.next
+ print "\tTOTAL:", size
+ ll_assert(size == totalsize, "bogus total size in linked list")
return count
def acquire(self, lock):
@@ -890,14 +912,12 @@
def collector_mark(self):
- surviving_size = r_uint(0)
- #
while True:
#
# Do marking. The following function call is interrupted
# if the mutator's write barrier adds new objects to
# 'extra_objects_to_mark'.
- surviving_size += self._collect_mark()
+ self._collect_mark()
#
# Move the objects from 'extra_objects_to_mark' to
# 'gray_objects'. This requires the mutex lock.
@@ -923,31 +943,11 @@
# Else release mutex_lock and try again.
self.release(self.mutex_lock)
#
- # When we sweep during minor collections, we add the size of
- # the surviving now-old objects to the following field. Note
- # that the write barrier may make objects young again, without
- # decreasing the value here. During the following minor
- # collection this variable will be increased *again*. When the
- # write barrier triggers on an aging object, it is random whether
- # its size ends up being accounted here or not --- but it will
- # be at the following minor collection, because the object is
- # young again. So, careful about overflows.
- if surviving_size > self.gc.total_memory_size:
- debug_print("surviving_size too large!",
- surviving_size, self.gc.total_memory_size)
- ll_assert(False, "surviving_size too large")
- limit = self.gc.total_memory_size - surviving_size
- if self.gc.old_objects_size <= limit:
- self.gc.old_objects_size += surviving_size
- else:
- self.gc.old_objects_size = self.gc.total_memory_size
- #
self.running = 2
#debug_print("collection_running = 2")
self.release(self.mutex_lock)
def _collect_mark(self):
- surviving_size = r_uint(0)
extra_objects_to_mark = self.gc.extra_objects_to_mark
cam = self.current_aging_marker
com = self.current_old_marker
@@ -956,9 +956,6 @@
if self.get_mark(obj) != cam:
continue
#
- # Record the object's size
- surviving_size += raw_malloc_usage(self.gc.get_size(obj))
- #
# Scan the content of 'obj'. We use a snapshot-at-the-
# beginning order, meaning that we want to scan the state
# of the object as it was at the beginning of the current
@@ -980,6 +977,7 @@
# we scan a modified content --- and the original content
# is never scanned.
#
+ debug_print("mark:", obj)
self.gc.trace(obj, self._collect_add_pending, None)
self.set_mark(obj, com)
#
@@ -991,8 +989,6 @@
# reference further objects that will soon be accessed too.
if extra_objects_to_mark.non_empty():
break
- #
- return surviving_size
def _collect_add_pending(self, root, ignored):
obj = root.address[0]
@@ -1004,10 +1000,12 @@
def collector_sweep(self):
if self.major_collection_phase != 1: # no sweeping during phase 1
+ self.update_size = self.gc.old_objects_size
lst = self._collect_do_sweep(self.aging_objects,
self.current_aging_marker,
self.gc.old_objects)
self.gc.old_objects = lst
+ self.gc.old_objects_size = self.update_size
#
self.running = -1
#debug_print("collection_running = -1")
@@ -1017,6 +1015,7 @@
# Finish the delayed sweep from the previous minor collection.
# The objects left unmarked were left with 'cam', which is
# now 'com' because we switched their values.
+ self.update_size = r_uint(0)
lst = self._collect_do_sweep(self.delayed_aging_objects,
self.current_old_marker,
self.aging_objects)
@@ -1024,6 +1023,7 @@
self.delayed_aging_objects = self.NULL
def _collect_do_sweep(self, hdr, still_not_marked, linked_list):
+ size_gc_header = self.gc.gcheaderbuilder.size_gc_header
#
while hdr != self.NULL:
nexthdr = hdr.next
@@ -1031,6 +1031,7 @@
if mark == still_not_marked:
# the object is still not marked. Free it.
blockadr = llmemory.cast_ptr_to_adr(hdr)
+ debug_print("free:", blockadr + size_gc_header)
blockadr = llarena.getfakearenaaddress(blockadr)
llarena.arena_free(blockadr)
#
@@ -1043,6 +1044,11 @@
hdr.next = linked_list
linked_list = hdr
#
+ # count its size
+ obj = llmemory.cast_ptr_to_adr(hdr) + size_gc_header
+ size1 = size_gc_header + self.gc.get_size(obj)
+ self.update_size += llmemory.raw_malloc_usage(size1)
+ #
hdr = nexthdr
#
return linked_list
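The debug_check_list() changes above rely on a compact cycle-detection trick: a 'previous' checkpoint pointer is refreshed only at power-of-two step counts, so a loop of any length is eventually walked into the checkpoint (the idea behind Brent's algorithm). A minimal sketch with an illustrative Node class standing in for the GC header structs:

```python
class Node:
    def __init__(self):
        self.next = None

def check_list(head):
    """Return the node count, or raise ValueError if the list is cyclic."""
    previous = None
    count = 0
    node = head
    while node is not None:
        if node is previous:
            raise ValueError("loop!")
        count += 1
        if count & (count - 1) == 0:  # count is a power of two
            previous = node           # teleport the checkpoint forward
        node = node.next
    return count
```

Unlike Floyd's two-pointer method, this needs only one extra pointer and one comparison per step, which matters in a debug check that runs on every collection.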
From noreply at buildbot.pypy.org Sat Jan 7 19:08:41 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 19:08:41 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: fix. now test_direct passes
Message-ID: <20120107180841.8135582CAA@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51121:2a11fde484ba
Date: 2012-01-07 19:08 +0100
http://bitbucket.org/pypy/pypy/changeset/2a11fde484ba/
Log: fix. now test_direct passes
diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -269,6 +269,8 @@
hdr = self.header(obj)
hdr.tid = self.combine(typeid, self.current_young_marker, 0)
hdr.next = self.new_young_objects
+ totalsize = llarena.round_up_for_allocation(totalsize)
+ rawtotalsize = raw_malloc_usage(totalsize)
debug_print("malloc:", rawtotalsize, obj)
self.new_young_objects = hdr
self.new_young_objects_size += r_uint(rawtotalsize)
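The one-line fix above matters because the arena allocator rounds every request up to word granularity: the byte count added to new_young_objects_size must be the rounded size, or the totals later recomputed by debug_check_list() will not match. A minimal sketch of the rounding, assuming a 64-bit build and modelling llarena.round_up_for_allocation naively:

```python
WORD = 8  # bytes per word on a 64-bit build (LONG_BIT // 8)

def round_up_for_allocation(size):
    """Round a request up to the allocator's word granularity."""
    return (size + WORD - 1) & ~(WORD - 1)
```

Accounting the unrounded size would make the bookkeeping drift low by up to WORD-1 bytes per object, tripping the "bogus total size in linked list" assertion.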
From noreply at buildbot.pypy.org Sat Jan 7 21:01:47 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 21:01:47 +0100 (CET)
Subject: [pypy-commit] pypy default: merge import-numpy,
rename numpypy to _numpypy
Message-ID: <20120107200147.D22E282BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51122:cc8f110fc52d
Date: 2012-01-07 22:00 +0200
http://bitbucket.org/pypy/pypy/changeset/cc8f110fc52d/
Log: merge import-numpy, rename numpypy to _numpypy
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -9,7 +9,7 @@
appleveldefs = {}
class Module(MixedModule):
- applevel_name = 'numpypy'
+ applevel_name = '_numpypy'
submodules = {
'pypy': PyPyModule
diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py
--- a/pypy/module/micronumpy/app_numpy.py
+++ b/pypy/module/micronumpy/app_numpy.py
@@ -1,6 +1,6 @@
import math
-import numpypy
+import _numpypy
inf = float("inf")
@@ -14,29 +14,29 @@
return mean(a)
def identity(n, dtype=None):
- a = numpypy.zeros((n,n), dtype=dtype)
+ a = _numpypy.zeros((n,n), dtype=dtype)
for i in range(n):
a[i][i] = 1
return a
def mean(a):
if not hasattr(a, "mean"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.mean()
def sum(a):
if not hasattr(a, "sum"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.sum()
def min(a):
if not hasattr(a, "min"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.min()
def max(a):
if not hasattr(a, "max"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.max()
def arange(start, stop=None, step=1, dtype=None):
@@ -47,9 +47,9 @@
stop = start
start = 0
if dtype is None:
- test = numpypy.array([start, stop, step, 0])
+ test = _numpypy.array([start, stop, step, 0])
dtype = test.dtype
- arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype)
+ arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype)
i = start
for j in range(arr.size):
arr[j] = i
@@ -90,5 +90,5 @@
you should assign the new shape to the shape attribute of the array
'''
if not hasattr(a, 'reshape'):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.reshape(shape)
diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py
--- a/pypy/module/micronumpy/test/test_dtypes.py
+++ b/pypy/module/micronumpy/test/test_dtypes.py
@@ -3,7 +3,7 @@
class AppTestDtypes(BaseNumpyAppTest):
def test_dtype(self):
- from numpypy import dtype
+ from _numpypy import dtype
d = dtype('?')
assert d.num == 0
@@ -14,7 +14,7 @@
raises(TypeError, dtype, 1042)
def test_dtype_with_types(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert dtype(bool).num == 0
assert dtype(int).num == 7
@@ -22,13 +22,13 @@
assert dtype(float).num == 12
def test_array_dtype_attr(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), long)
assert a.dtype is dtype(long)
def test_repr_str(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert repr(dtype) == ""
d = dtype('?')
@@ -36,7 +36,7 @@
assert str(d) == "bool"
def test_bool_array(self):
- from numpypy import array, False_, True_
+ from _numpypy import array, False_, True_
a = array([0, 1, 2, 2.5], dtype='?')
assert a[0] is False_
@@ -44,7 +44,7 @@
assert a[i] is True_
def test_copy_array_with_dtype(self):
- from numpypy import array, False_, True_, int64
+ from _numpypy import array, False_, True_, int64
a = array([0, 1, 2, 3], dtype=long)
# int on 64-bit, long in 32-bit
@@ -58,35 +58,35 @@
assert b[0] is False_
def test_zeros_bool(self):
- from numpypy import zeros, False_
+ from _numpypy import zeros, False_
a = zeros(10, dtype=bool)
for i in range(10):
assert a[i] is False_
def test_ones_bool(self):
- from numpypy import ones, True_
+ from _numpypy import ones, True_
a = ones(10, dtype=bool)
for i in range(10):
assert a[i] is True_
def test_zeros_long(self):
- from numpypy import zeros, int64
+ from _numpypy import zeros, int64
a = zeros(10, dtype=long)
for i in range(10):
assert isinstance(a[i], int64)
assert a[1] == 0
def test_ones_long(self):
- from numpypy import ones, int64
+ from _numpypy import ones, int64
a = ones(10, dtype=long)
for i in range(10):
assert isinstance(a[i], int64)
assert a[1] == 1
def test_overflow(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
assert array([128], 'b')[0] == -128
assert array([256], 'B')[0] == 0
assert array([32768], 'h')[0] == -32768
@@ -98,7 +98,7 @@
raises(OverflowError, "array([2**64], 'Q')")
def test_bool_binop_types(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
types = [
'?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd'
]
@@ -107,7 +107,7 @@
assert (a + array([0], t)).dtype is dtype(t)
def test_binop_types(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'),
('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'),
('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'),
@@ -129,7 +129,7 @@
assert (array([1], d1) + array([1], d2)).dtype is dtype(dout)
def test_add_int8(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="int8")
b = a + a
@@ -138,7 +138,7 @@
assert b[i] == i * 2
def test_add_int16(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="int16")
b = a + a
@@ -147,7 +147,7 @@
assert b[i] == i * 2
def test_add_uint32(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="I")
b = a + a
@@ -156,19 +156,19 @@
assert b[i] == i * 2
def test_shape(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert dtype(long).shape == ()
def test_cant_subclass(self):
- from numpypy import dtype
+ from _numpypy import dtype
# You can't subclass dtype
raises(TypeError, type, "Foo", (dtype,), {})
class AppTestTypes(BaseNumpyAppTest):
def test_abstract_types(self):
- import numpypy as numpy
+ import _numpypy as numpy
raises(TypeError, numpy.generic, 0)
raises(TypeError, numpy.number, 0)
raises(TypeError, numpy.integer, 0)
@@ -181,7 +181,7 @@
raises(TypeError, numpy.inexact, 0)
def test_bool(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object]
assert numpy.bool_(3) is numpy.True_
@@ -196,7 +196,7 @@
assert numpy.bool_("False") is numpy.True_
def test_int8(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -218,7 +218,7 @@
assert numpy.int8('128') == -128
def test_uint8(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -241,7 +241,7 @@
assert numpy.uint8('256') == 0
def test_int16(self):
- import numpypy as numpy
+ import _numpypy as numpy
x = numpy.int16(3)
assert x == 3
@@ -251,7 +251,7 @@
assert numpy.int16('32768') == -32768
def test_uint16(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint16(65535) == 65535
assert numpy.uint16(65536) == 0
@@ -260,7 +260,7 @@
def test_int32(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
x = numpy.int32(23)
assert x == 23
@@ -275,7 +275,7 @@
def test_uint32(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint32(10) == 10
@@ -286,14 +286,14 @@
assert numpy.uint32('4294967296') == 0
def test_int_(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.int_ is numpy.dtype(int).type
assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object]
def test_int64(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
if sys.maxint == 2 ** 63 - 1:
assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object]
@@ -315,7 +315,7 @@
def test_uint64(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -330,7 +330,7 @@
raises(OverflowError, numpy.uint64, 18446744073709551616)
def test_float32(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object]
@@ -339,7 +339,7 @@
raises(ValueError, numpy.float32, '23.2df')
def test_float64(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object]
@@ -352,7 +352,7 @@
raises(ValueError, numpy.float64, '23.2df')
def test_subclass_type(self):
- import numpypy as numpy
+ import _numpypy as numpy
class X(numpy.float64):
def m(self):
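The `test_overflow` cases above (`array([128], 'b')[0] == -128` and friends) assert fixed-width wraparound when a value exceeds the target dtype's range. A sketch of that truncation in plain Python, under the assumption that micronumpy wraps values two's-complement style like NumPy does:

```python
def wrap_signed(value, bits):
    # Two's-complement wraparound for a signed integer of `bits` width.
    mask = (1 << bits) - 1
    value &= mask
    if value >= 1 << (bits - 1):
        value -= 1 << bits
    return value

def wrap_unsigned(value, bits):
    # Unsigned types simply reduce modulo 2**bits.
    return value & ((1 << bits) - 1)

print(wrap_signed(128, 8))     # as in array([128], 'b')[0]
print(wrap_unsigned(256, 8))   # as in array([256], 'B')[0]
print(wrap_signed(32768, 16))  # as in array([32768], 'h')[0]
```

The one-character codes in the tests (`'b'`, `'B'`, `'h'`, ...) are the struct-style typecodes, so `'b'` is 8-bit signed and `'B'` 8-bit unsigned.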
diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py
--- a/pypy/module/micronumpy/test/test_module.py
+++ b/pypy/module/micronumpy/test/test_module.py
@@ -3,33 +3,33 @@
class AppTestNumPyModule(BaseNumpyAppTest):
def test_mean(self):
- from numpypy import array, mean
+ from _numpypy import array, mean
assert mean(array(range(5))) == 2.0
assert mean(range(5)) == 2.0
def test_average(self):
- from numpypy import array, average
+ from _numpypy import array, average
assert average(range(10)) == 4.5
assert average(array(range(10))) == 4.5
def test_sum(self):
- from numpypy import array, sum
+ from _numpypy import array, sum
assert sum(range(10)) == 45
assert sum(array(range(10))) == 45
def test_min(self):
- from numpypy import array, min
+ from _numpypy import array, min
assert min(range(10)) == 0
assert min(array(range(10))) == 0
def test_max(self):
- from numpypy import array, max
+ from _numpypy import array, max
assert max(range(10)) == 9
assert max(array(range(10))) == 9
def test_constants(self):
import math
- from numpypy import inf, e, pi
+ from _numpypy import inf, e, pi
assert type(inf) is float
assert inf == float("inf")
assert e == math.e
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -161,7 +161,7 @@
class AppTestNumArray(BaseNumpyAppTest):
def test_ndarray(self):
- from numpypy import ndarray, array, dtype
+ from _numpypy import ndarray, array, dtype
assert type(ndarray) is type
assert type(array) is not type
@@ -176,12 +176,12 @@
assert a.dtype is dtype(int)
def test_type(self):
- from numpypy import array
+ from _numpypy import array
ar = array(range(5))
assert type(ar) is type(ar + ar)
def test_ndim(self):
- from numpypy import array
+ from _numpypy import array
x = array(0.2)
assert x.ndim == 0
x = array([1, 2])
@@ -190,12 +190,12 @@
assert x.ndim == 2
x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
assert x.ndim == 3
- # numpy actually raises an AttributeError, but numpypy raises an
+ # numpy actually raises an AttributeError, but _numpypy raises a
# TypeError
raises(TypeError, 'x.ndim = 3')
def test_init(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros(15)
# Check that storage was actually zero'd.
assert a[10] == 0.0
@@ -204,7 +204,7 @@
assert a[13] == 5.3
def test_size(self):
- from numpypy import array
+ from _numpypy import array
assert array(3).size == 1
a = array([1, 2, 3])
assert a.size == 3
@@ -215,13 +215,13 @@
Test that empty() works.
"""
- from numpypy import empty
+ from _numpypy import empty
a = empty(2)
a[1] = 1.0
assert a[1] == 1.0
def test_ones(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones(3)
assert len(a) == 3
assert a[0] == 1
@@ -230,7 +230,7 @@
assert a[2] == 4
def test_copy(self):
- from numpypy import arange, array
+ from _numpypy import arange, array
a = arange(5)
b = a.copy()
for i in xrange(5):
@@ -247,12 +247,12 @@
assert (c == b).all()
def test_iterator_init(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a[3] == 3
def test_getitem(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[5]")
a = a + a
@@ -261,7 +261,7 @@
raises(IndexError, "a[-6]")
def test_getitem_tuple(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[(1,2)]")
for i in xrange(5):
@@ -271,7 +271,7 @@
assert a[i] == b[i]
def test_setitem(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
a[-1] = 5.0
assert a[4] == 5.0
@@ -279,7 +279,7 @@
raises(IndexError, "a[-6] = 3.0")
def test_setitem_tuple(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[(1,2)] = [0,1]")
for i in xrange(5):
@@ -290,7 +290,7 @@
assert a[i] == i
def test_setslice_array(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array(range(2))
a[1:4:2] = b
@@ -301,7 +301,7 @@
assert b[1] == 0.
def test_setslice_of_slice_array(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = zeros(5)
a[::2] = array([9., 10., 11.])
assert a[0] == 9.
@@ -320,7 +320,7 @@
assert a[0] == 3.
def test_setslice_list(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = [0., 1.]
a[1:4:2] = b
@@ -328,14 +328,14 @@
assert a[3] == 1.
def test_setslice_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
a[1:4:2] = 0.
assert a[1] == 0.
assert a[3] == 0.
def test_scalar(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(3)
raises(IndexError, "a[0]")
raises(IndexError, "a[0] = 5")
@@ -344,13 +344,13 @@
assert a.dtype is dtype(int)
def test_len(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert len(a) == 5
assert len(a + a) == 5
def test_shape(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.shape == (5,)
b = a + a
@@ -359,7 +359,7 @@
assert c.shape == (3,)
def test_set_shape(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array([])
a.shape = []
a = array(range(12))
@@ -379,7 +379,7 @@
a.shape = (1,)
def test_reshape(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(12))
exc = raises(ValueError, "b = a.reshape((3, 10))")
assert str(exc.value) == "total size of new array must be unchanged"
@@ -392,7 +392,7 @@
a.shape = (12, 2)
def test_slice_reshape(self):
- from numpypy import zeros, arange
+ from _numpypy import zeros, arange
a = zeros((4, 2, 3))
b = a[::2, :, :]
b.shape = (2, 6)
@@ -428,13 +428,13 @@
raises(ValueError, arange(10).reshape, (5, -1, -1))
def test_reshape_varargs(self):
- from numpypy import arange
+ from _numpypy import arange
z = arange(96).reshape(12, -1)
y = z.reshape(4, 3, 8)
assert y.shape == (4, 3, 8)
def test_add(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a + a
for i in range(5):
@@ -447,7 +447,7 @@
assert c[i] == bool(a[i] + b[i])
def test_add_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([i for i in reversed(range(5))])
c = a + b
@@ -455,20 +455,20 @@
assert c[i] == 4
def test_add_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a + 5
for i in range(5):
assert b[i] == i + 5
def test_radd(self):
- from numpypy import array
+ from _numpypy import array
r = 3 + array(range(3))
for i in range(3):
assert r[i] == i + 3
def test_add_list(self):
- from numpypy import array, ndarray
+ from _numpypy import array, ndarray
a = array(range(5))
b = list(reversed(range(5)))
c = a + b
@@ -477,14 +477,14 @@
assert c[i] == 4
def test_subtract(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - a
for i in range(5):
assert b[i] == 0
def test_subtract_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([1, 1, 1, 1, 1])
c = a - b
@@ -492,34 +492,34 @@
assert c[i] == i - 1
def test_subtract_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - 5
for i in range(5):
assert b[i] == i - 5
def test_scalar_subtract(self):
- from numpypy import int32
+ from _numpypy import int32
assert int32(2) - 1 == 1
assert 1 - int32(2) == -1
def test_mul(self):
- import numpypy
+ import _numpypy
- a = numpypy.array(range(5))
+ a = _numpypy.array(range(5))
b = a * a
for i in range(5):
assert b[i] == i * i
- a = numpypy.array(range(5), dtype=bool)
+ a = _numpypy.array(range(5), dtype=bool)
b = a * a
- assert b.dtype is numpypy.dtype(bool)
- assert b[0] is numpypy.False_
+ assert b.dtype is _numpypy.dtype(bool)
+ assert b[0] is _numpypy.False_
for i in range(1, 5):
- assert b[i] is numpypy.True_
+ assert b[i] is _numpypy.True_
def test_mul_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a * 5
for i in range(5):
@@ -527,7 +527,7 @@
def test_div(self):
from math import isnan
- from numpypy import array, dtype, inf
+ from _numpypy import array, dtype, inf
a = array(range(1, 6))
b = a / a
@@ -559,7 +559,7 @@
assert c[2] == -inf
def test_div_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([2, 2, 2, 2, 2], float)
c = a / b
@@ -567,14 +567,14 @@
assert c[i] == i / 2.0
def test_div_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a / 5.0
for i in range(5):
assert b[i] == i / 5.0
def test_pow(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = a ** a
for i in range(5):
@@ -584,7 +584,7 @@
assert (a ** 2 == a * a).all()
def test_pow_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = array([2, 2, 2, 2, 2])
c = a ** b
@@ -592,14 +592,14 @@
assert c[i] == i ** 2
def test_pow_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = a ** 2
for i in range(5):
assert b[i] == i ** 2
def test_mod(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(1, 6))
b = a % a
for i in range(5):
@@ -612,7 +612,7 @@
assert b[i] == 1
def test_mod_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([2, 2, 2, 2, 2])
c = a % b
@@ -620,14 +620,14 @@
assert c[i] == i % 2
def test_mod_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a % 2
for i in range(5):
assert b[i] == i % 2
def test_pos(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = +a
for i in range(5):
@@ -638,7 +638,7 @@
assert a[i] == i
def test_neg(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = -a
for i in range(5):
@@ -649,7 +649,7 @@
assert a[i] == -i
def test_abs(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = abs(a)
for i in range(5):
@@ -660,7 +660,7 @@
assert a[i + 5] == abs(i)
def test_auto_force(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - 1
a[2] = 3
@@ -674,7 +674,7 @@
assert c[1] == 4
def test_getslice(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[1:5]
assert len(s) == 4
@@ -688,7 +688,7 @@
assert s[0] == 5
def test_getslice_step(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(10))
s = a[1:9:2]
assert len(s) == 4
@@ -696,7 +696,7 @@
assert s[i] == a[2 * i + 1]
def test_slice_update(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[0:3]
s[1] = 10
@@ -706,7 +706,7 @@
def test_slice_invaidate(self):
# check that slice shares invalidation list with
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[0:2]
b = array([10, 11])
@@ -720,13 +720,13 @@
assert d[1] == 12
def test_mean(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.mean() == 2.0
assert a[:4].mean() == 1.5
def test_sum(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.sum() == 10.0
assert a[:4].sum() == 6.0
@@ -735,8 +735,8 @@
assert a.sum() == 5
def test_identity(self):
- from numpypy import identity, array
- from numpypy import int32, float64, dtype
+ from _numpypy import identity, array
+ from _numpypy import int32, float64, dtype
a = identity(0)
assert len(a) == 0
assert a.dtype == dtype('float64')
@@ -755,32 +755,32 @@
assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all()
def test_prod(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(1, 6))
assert a.prod() == 120.0
assert a[:4].prod() == 24.0
def test_max(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.max() == 5.7
b = array([])
raises(ValueError, "b.max()")
def test_max_add(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert (a + a).max() == 11.4
def test_min(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.min() == -3.0
b = array([])
raises(ValueError, "b.min()")
def test_argmax(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
r = a.argmax()
assert r == 2
@@ -801,14 +801,14 @@
assert a.argmax() == 2
def test_argmin(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.argmin() == 3
b = array([])
raises(ValueError, "b.argmin()")
def test_all(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.all() == False
a[0] = 3.0
@@ -817,7 +817,7 @@
assert b.all() == True
def test_any(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5))
assert a.any() == True
b = zeros(5)
@@ -826,7 +826,7 @@
assert c.any() == False
def test_dot(self):
- from numpypy import array, dot
+ from _numpypy import array, dot
a = array(range(5))
assert a.dot(a) == 30.0
@@ -836,14 +836,14 @@
assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all()
def test_dot_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a.dot(2.5)
for i in xrange(5):
assert b[i] == 2.5 * a[i]
def test_dtype_guessing(self):
- from numpypy import array, dtype, float64, int8, bool_
+ from _numpypy import array, dtype, float64, int8, bool_
assert array([True]).dtype is dtype(bool)
assert array([True, False]).dtype is dtype(bool)
@@ -860,7 +860,7 @@
def test_comparison(self):
import operator
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5))
b = array(range(5), float)
@@ -879,7 +879,7 @@
assert c[i] == func(b[i], 3)
def test_nonzero(self):
- from numpypy import array
+ from _numpypy import array
a = array([1, 2])
raises(ValueError, bool, a)
raises(ValueError, bool, a == a)
@@ -889,7 +889,7 @@
assert not bool(array([0]))
def test_slice_assignment(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
a[::-1] = a
assert (a == [0, 1, 2, 1, 0]).all()
@@ -899,8 +899,8 @@
assert (a == [8, 6, 4, 2, 0]).all()
def test_debug_repr(self):
- from numpypy import zeros, sin
- from numpypy.pypy import debug_repr
+ from _numpypy import zeros, sin
+ from _numpypy.pypy import debug_repr
a = zeros(1)
assert debug_repr(a) == 'Array'
assert debug_repr(a + a) == 'Call2(add, Array, Array)'
@@ -914,8 +914,8 @@
assert debug_repr(b) == 'Array'
def test_remove_invalidates(self):
- from numpypy import array
- from numpypy.pypy import remove_invalidates
+ from _numpypy import array
+ from _numpypy.pypy import remove_invalidates
a = array([1, 2, 3])
b = a + a
remove_invalidates(a)
@@ -923,7 +923,7 @@
assert b[0] == 28
def test_virtual_views(self):
- from numpypy import arange
+ from _numpypy import arange
a = arange(15)
c = (a + a)
d = c[::2]
@@ -941,7 +941,7 @@
assert b[1] == 2
def test_tolist_scalar(self):
- from numpypy import int32, bool_
+ from _numpypy import int32, bool_
x = int32(23)
assert x.tolist() == 23
assert type(x.tolist()) is int
@@ -949,13 +949,13 @@
assert y.tolist() is True
def test_tolist_zerodim(self):
- from numpypy import array
+ from _numpypy import array
x = array(3)
assert x.tolist() == 3
assert type(x.tolist()) is int
def test_tolist_singledim(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.tolist() == [0, 1, 2, 3, 4]
assert type(a.tolist()[0]) is int
@@ -963,17 +963,17 @@
assert b.tolist() == [0.2, 0.4, 0.6]
def test_tolist_multidim(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4]])
assert a.tolist() == [[1, 2], [3, 4]]
def test_tolist_view(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4]])
assert (a + a).tolist() == [[2, 4], [6, 8]]
def test_tolist_slice(self):
- from numpypy import array
+ from _numpypy import array
a = array([[17.1, 27.2], [40.3, 50.3]])
assert a[:, 0].tolist() == [17.1, 40.3]
assert a[0].tolist() == [17.1, 27.2]
@@ -981,23 +981,23 @@
class AppTestMultiDim(BaseNumpyAppTest):
def test_init(self):
- import numpypy
- a = numpypy.zeros((2, 2))
+ import _numpypy
+ a = _numpypy.zeros((2, 2))
assert len(a) == 2
def test_shape(self):
- import numpypy
- assert numpypy.zeros(1).shape == (1,)
- assert numpypy.zeros((2, 2)).shape == (2, 2)
- assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
- assert numpypy.array([[1], [2], [3]]).shape == (3, 1)
- assert len(numpypy.zeros((3, 1, 2))) == 3
- raises(TypeError, len, numpypy.zeros(()))
- raises(ValueError, numpypy.array, [[1, 2], 3])
+ import _numpypy
+ assert _numpypy.zeros(1).shape == (1,)
+ assert _numpypy.zeros((2, 2)).shape == (2, 2)
+ assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
+ assert _numpypy.array([[1], [2], [3]]).shape == (3, 1)
+ assert len(_numpypy.zeros((3, 1, 2))) == 3
+ raises(TypeError, len, _numpypy.zeros(()))
+ raises(ValueError, _numpypy.array, [[1, 2], 3])
def test_getsetitem(self):
- import numpypy
- a = numpypy.zeros((2, 3, 1))
+ import _numpypy
+ a = _numpypy.zeros((2, 3, 1))
raises(IndexError, a.__getitem__, (2, 0, 0))
raises(IndexError, a.__getitem__, (0, 3, 0))
raises(IndexError, a.__getitem__, (0, 0, 1))
@@ -1008,8 +1008,8 @@
assert a[1, -1, 0] == 3
def test_slices(self):
- import numpypy
- a = numpypy.zeros((4, 3, 2))
+ import _numpypy
+ a = _numpypy.zeros((4, 3, 2))
raises(IndexError, a.__getitem__, (4,))
raises(IndexError, a.__getitem__, (3, 3))
raises(IndexError, a.__getitem__, (slice(None), 3))
@@ -1042,51 +1042,51 @@
assert a[1][2][1] == 15
def test_init_2(self):
- import numpypy
- raises(ValueError, numpypy.array, [[1], 2])
- raises(ValueError, numpypy.array, [[1, 2], [3]])
- raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]])
- raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]])
- a = numpypy.array([[1, 2], [4, 5]])
+ import _numpypy
+ raises(ValueError, _numpypy.array, [[1], 2])
+ raises(ValueError, _numpypy.array, [[1, 2], [3]])
+ raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]])
+ raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]])
+ a = _numpypy.array([[1, 2], [4, 5]])
assert a[0, 1] == 2
assert a[0][1] == 2
- a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
+ a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
assert (a[0, 1] == [3, 4]).all()
def test_setitem_slice(self):
- import numpypy
- a = numpypy.zeros((3, 4))
+ import _numpypy
+ a = _numpypy.zeros((3, 4))
a[1] = [1, 2, 3, 4]
assert a[1, 2] == 3
raises(TypeError, a[1].__setitem__, [1, 2, 3])
- a = numpypy.array([[1, 2], [3, 4]])
+ a = _numpypy.array([[1, 2], [3, 4]])
assert (a == [[1, 2], [3, 4]]).all()
- a[1] = numpypy.array([5, 6])
+ a[1] = _numpypy.array([5, 6])
assert (a == [[1, 2], [5, 6]]).all()
- a[:, 1] = numpypy.array([8, 10])
+ a[:, 1] = _numpypy.array([8, 10])
assert (a == [[1, 8], [5, 10]]).all()
- a[0, :: -1] = numpypy.array([11, 12])
+ a[0, :: -1] = _numpypy.array([11, 12])
assert (a == [[12, 11], [5, 10]]).all()
def test_ufunc(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
assert ((a + a) == \
array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all()
def test_getitem_add(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
assert (a + a)[1, 1] == 8
def test_ufunc_negative(self):
- from numpypy import array, negative
+ from _numpypy import array, negative
a = array([[1, 2], [3, 4]])
b = negative(a + a)
assert (b == [[-2, -4], [-6, -8]]).all()
def test_getitem_3(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6], [7, 8],
[9, 10], [11, 12], [13, 14]])
b = a[::2]
@@ -1097,12 +1097,12 @@
assert c[1][1] == 12
def test_multidim_ones(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones((1, 2, 3))
assert a[0, 1, 2] == 1.0
def test_multidim_setslice(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((3, 3))
b = ones((3, 3))
a[:, 1:3] = b[:, 1:3]
@@ -1113,21 +1113,21 @@
assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all()
def test_broadcast_ufunc(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
b = array([5, 6])
c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]])
assert c.all()
def test_broadcast_setslice(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((10, 10))
b = ones(10)
a[:, :] = b
assert a[3, 5] == 1
def test_broadcast_shape_agreement(self):
- from numpypy import zeros, array
+ from _numpypy import zeros, array
a = zeros((3, 1, 3))
b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32)))
c = ((a + b) == [b, b, b])
@@ -1141,7 +1141,7 @@
assert c.all()
def test_broadcast_scalar(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((4, 5), 'd')
a[:, 1] = 3
assert a[2, 1] == 3
@@ -1152,14 +1152,14 @@
assert a[3, 2] == 0
def test_broadcast_call2(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((4, 1, 5))
b = ones((4, 3, 5))
b[:] = (a + a)
assert (b == zeros((4, 3, 5))).all()
def test_broadcast_virtualview(self):
- from numpypy import arange, zeros
+ from _numpypy import arange, zeros
a = arange(8).reshape([2, 2, 2])
b = (a + a)[1, 1]
c = zeros((2, 2, 2))
@@ -1167,13 +1167,13 @@
assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all()
def test_argmax(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
assert a.argmax() == 5
assert a[:2, ].argmax() == 3
def test_broadcast_wrong_shapes(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((4, 3, 2))
b = zeros((4, 2))
exc = raises(ValueError, lambda: a + b)
@@ -1181,7 +1181,7 @@
" together with shapes (4,3,2) (4,2)"
def test_reduce(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
assert a.sum() == (13 * 12) / 2
b = a[1:, 1::2]
@@ -1189,7 +1189,7 @@
assert c.sum() == (6 + 8 + 10 + 12) * 2
def test_transpose(self):
- from numpypy import array
+ from _numpypy import array
a = array(((range(3), range(3, 6)),
(range(6, 9), range(9, 12)),
(range(12, 15), range(15, 18)),
@@ -1208,7 +1208,7 @@
assert(b[:, 0] == a[0, :]).all()
def test_flatiter(self):
- from numpypy import array, flatiter
+ from _numpypy import array, flatiter
a = array([[10, 30], [40, 60]])
f_iter = a.flat
assert f_iter.next() == 10
@@ -1223,23 +1223,23 @@
assert s == 140
def test_flatiter_array_conv(self):
- from numpypy import array, dot
+ from _numpypy import array, dot
a = array([1, 2, 3])
assert dot(a.flat, a.flat) == 14
def test_flatiter_varray(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones((2, 2))
assert list(((a + a).flat)) == [2, 2, 2, 2]
def test_slice_copy(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((10, 10))
b = a[0].copy()
assert (b == zeros(10)).all()
def test_array_interface(self):
- from numpypy import array
+ from _numpypy import array
a = array([1, 2, 3])
i = a.__array_interface__
assert isinstance(i['data'][0], int)
@@ -1261,7 +1261,7 @@
def test_fromstring(self):
import sys
- from numpypy import fromstring, array, uint8, float32, int32
+ from _numpypy import fromstring, array, uint8, float32, int32
a = fromstring(self.data)
for i in range(4):
@@ -1325,7 +1325,7 @@
assert (u == [1, 0]).all()
def test_fromstring_types(self):
- from numpypy import (fromstring, int8, int16, int32, int64, uint8,
+ from _numpypy import (fromstring, int8, int16, int32, int64, uint8,
uint16, uint32, float32, float64)
a = fromstring('\xFF', dtype=int8)
@@ -1350,7 +1350,7 @@
assert j[0] == 12
def test_fromstring_invalid(self):
- from numpypy import fromstring, uint16, uint8, int32
+ from _numpypy import fromstring, uint16, uint8, int32
#default dtype is 64-bit float, so 3 bytes should fail
raises(ValueError, fromstring, "\x01\x02\x03")
#3 bytes is not modulo 2 bytes (int16)
@@ -1361,7 +1361,7 @@
class AppTestRepr(BaseNumpyAppTest):
def test_repr(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
int_size = array(5).dtype.itemsize
a = array(range(5), float)
assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])"
@@ -1389,7 +1389,7 @@
assert repr(a) == "array(0.2)"
def test_repr_multi(self):
- from numpypy import arange, zeros
+ from _numpypy import arange, zeros
a = zeros((3, 4))
assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
@@ -1414,7 +1414,7 @@
[500, 1001]])'''
def test_repr_slice(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
b = a[1::2]
assert repr(b) == "array([1.0, 3.0])"
@@ -1429,7 +1429,7 @@
assert repr(b) == "array([], shape=(0, 5), dtype=int16)"
def test_str(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
assert str(a) == "[0.0 1.0 2.0 3.0 4.0]"
assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]"
@@ -1462,7 +1462,7 @@
assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]'
def test_str_slice(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
b = a[1::2]
assert str(b) == "[1.0 3.0]"
@@ -1478,7 +1478,7 @@
class AppTestRanges(BaseNumpyAppTest):
def test_arange(self):
- from numpypy import arange, array, dtype
+ from _numpypy import arange, array, dtype
a = arange(3)
assert (a == [0, 1, 2]).all()
assert a.dtype is dtype(int)
@@ -1500,7 +1500,7 @@
class AppTestRanges(BaseNumpyAppTest):
def test_app_reshape(self):
- from numpypy import arange, array, dtype, reshape
+ from _numpypy import arange, array, dtype, reshape
a = arange(12)
b = reshape(a, (3, 4))
assert b.shape == (3, 4)
diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py
--- a/pypy/module/micronumpy/test/test_ufuncs.py
+++ b/pypy/module/micronumpy/test/test_ufuncs.py
@@ -4,14 +4,14 @@
class AppTestUfuncs(BaseNumpyAppTest):
def test_ufunc_instance(self):
- from numpypy import add, ufunc
+ from _numpypy import add, ufunc
assert isinstance(add, ufunc)
assert repr(add) == ""
assert repr(ufunc) == ""
def test_ufunc_attrs(self):
- from numpypy import add, multiply, sin
+ from _numpypy import add, multiply, sin
assert add.identity == 0
assert multiply.identity == 1
@@ -22,7 +22,7 @@
assert sin.nin == 1
def test_wrong_arguments(self):
- from numpypy import add, sin
+ from _numpypy import add, sin
raises(ValueError, add, 1)
raises(TypeError, add, 1, 2, 3)
@@ -30,14 +30,14 @@
raises(ValueError, sin)
def test_single_item(self):
- from numpypy import negative, sign, minimum
+ from _numpypy import negative, sign, minimum
assert negative(5.0) == -5.0
assert sign(-0.0) == 0.0
assert minimum(2.0, 3.0) == 2.0
def test_sequence(self):
- from numpypy import array, ndarray, negative, minimum
+ from _numpypy import array, ndarray, negative, minimum
a = array(range(3))
b = [2.0, 1.0, 0.0]
c = 1.0
@@ -71,7 +71,7 @@
assert min_c_b[i] == min(b[i], c)
def test_negative(self):
- from numpypy import array, negative
+ from _numpypy import array, negative
a = array([-5.0, 0.0, 1.0])
b = negative(a)
@@ -86,7 +86,7 @@
assert negative(a + a)[3] == -6
def test_abs(self):
- from numpypy import array, absolute
+ from _numpypy import array, absolute
a = array([-5.0, -0.0, 1.0])
b = absolute(a)
@@ -94,7 +94,7 @@
assert b[i] == abs(a[i])
def test_add(self):
- from numpypy import array, add
+ from _numpypy import array, add
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -103,7 +103,7 @@
assert c[i] == a[i] + b[i]
def test_divide(self):
- from numpypy import array, divide
+ from _numpypy import array, divide
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -114,7 +114,7 @@
assert (divide(array([-10]), array([2])) == array([-5])).all()
def test_fabs(self):
- from numpypy import array, fabs
+ from _numpypy import array, fabs
from math import fabs as math_fabs
a = array([-5.0, -0.0, 1.0])
@@ -123,7 +123,7 @@
assert b[i] == math_fabs(a[i])
def test_minimum(self):
- from numpypy import array, minimum
+ from _numpypy import array, minimum
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -132,7 +132,7 @@
assert c[i] == min(a[i], b[i])
def test_maximum(self):
- from numpypy import array, maximum
+ from _numpypy import array, maximum
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -145,7 +145,7 @@
assert isinstance(x, (int, long))
def test_multiply(self):
- from numpypy import array, multiply
+ from _numpypy import array, multiply
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -154,7 +154,7 @@
assert c[i] == a[i] * b[i]
def test_sign(self):
- from numpypy import array, sign, dtype
+ from _numpypy import array, sign, dtype
reference = [-1.0, 0.0, 0.0, 1.0]
a = array([-5.0, -0.0, 0.0, 6.0])
@@ -173,7 +173,7 @@
assert a[1] == 0
def test_reciporocal(self):
- from numpypy import array, reciprocal
+ from _numpypy import array, reciprocal
reference = [-0.2, float("inf"), float("-inf"), 2.0]
a = array([-5.0, 0.0, -0.0, 0.5])
@@ -182,7 +182,7 @@
assert b[i] == reference[i]
def test_subtract(self):
- from numpypy import array, subtract
+ from _numpypy import array, subtract
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -191,7 +191,7 @@
assert c[i] == a[i] - b[i]
def test_floor(self):
- from numpypy import array, floor
+ from _numpypy import array, floor
reference = [-2.0, -1.0, 0.0, 1.0, 1.0]
a = array([-1.4, -1.0, 0.0, 1.0, 1.4])
@@ -200,7 +200,7 @@
assert b[i] == reference[i]
def test_copysign(self):
- from numpypy import array, copysign
+ from _numpypy import array, copysign
reference = [5.0, -0.0, 0.0, -6.0]
a = array([-5.0, 0.0, 0.0, 6.0])
@@ -216,7 +216,7 @@
def test_exp(self):
import math
- from numpypy import array, exp
+ from _numpypy import array, exp
a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"),
-float('inf'), -12343424.0])
@@ -230,7 +230,7 @@
def test_sin(self):
import math
- from numpypy import array, sin
+ from _numpypy import array, sin
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = sin(a)
@@ -243,7 +243,7 @@
def test_cos(self):
import math
- from numpypy import array, cos
+ from _numpypy import array, cos
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = cos(a)
@@ -252,7 +252,7 @@
def test_tan(self):
import math
- from numpypy import array, tan
+ from _numpypy import array, tan
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = tan(a)
@@ -262,7 +262,7 @@
def test_arcsin(self):
import math
- from numpypy import array, arcsin
+ from _numpypy import array, arcsin
a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1])
b = arcsin(a)
@@ -276,7 +276,7 @@
def test_arccos(self):
import math
- from numpypy import array, arccos
+ from _numpypy import array, arccos
a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1])
b = arccos(a)
@@ -291,7 +291,7 @@
def test_arctan(self):
import math
- from numpypy import array, arctan
+ from _numpypy import array, arctan
a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')])
b = arctan(a)
@@ -304,7 +304,7 @@
def test_arcsinh(self):
import math
- from numpypy import arcsinh, inf
+ from _numpypy import arcsinh, inf
for v in [inf, -inf, 1.0, math.e]:
assert math.asinh(v) == arcsinh(v)
@@ -312,7 +312,7 @@
def test_arctanh(self):
import math
- from numpypy import arctanh
+ from _numpypy import arctanh
for v in [.99, .5, 0, -.5, -.99]:
assert math.atanh(v) == arctanh(v)
@@ -323,7 +323,7 @@
def test_sqrt(self):
import math
- from numpypy import sqrt
+ from _numpypy import sqrt
nan, inf = float("nan"), float("inf")
data = [1, 2, 3, inf]
@@ -333,13 +333,13 @@
assert math.isnan(sqrt(nan))
def test_reduce_errors(self):
- from numpypy import sin, add
+ from _numpypy import sin, add
raises(ValueError, sin.reduce, [1, 2, 3])
raises(TypeError, add.reduce, 1)
def test_reduce(self):
- from numpypy import add, maximum
+ from _numpypy import add, maximum
assert add.reduce([1, 2, 3]) == 6
assert maximum.reduce([1]) == 1
@@ -348,7 +348,7 @@
def test_comparisons(self):
import operator
- from numpypy import equal, not_equal, less, less_equal, greater, greater_equal
+ from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal
for ufunc, func in [
(equal, operator.eq),
From noreply at buildbot.pypy.org Sat Jan 7 21:01:49 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 21:01:49 +0100 (CET)
Subject: [pypy-commit] pypy default: create applevel part here
Message-ID: <20120107200149.0891082BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51123:7bb8b38d8563
Date: 2012-01-07 22:01 +0200
http://bitbucket.org/pypy/pypy/changeset/7bb8b38d8563/
Log: create applevel part here
diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/__init__.py
@@ -0,0 +1,1 @@
+from _numpypy import *
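The one-line `lib_pypy/numpypy/__init__.py` above is the entire applevel package: a pure-Python shim that re-exports the interpreter-level `_numpypy` module. A minimal sketch of that re-export pattern, using a hypothetical stand-in module (`_backend` is not a real PyPy name):

```python
import sys
import types

# Stand-in for a built-in module such as PyPy's _numpypy.
_backend = types.ModuleType("_backend")
_backend.array = lambda seq: list(seq)   # hypothetical backend function
_backend._private = "hidden"             # underscore names are not starred out
sys.modules["_backend"] = _backend

# Equivalent of putting ``from _backend import *`` in the shim's __init__.py:
shim = types.ModuleType("shim")
exec("from _backend import *", shim.__dict__)

print(hasattr(shim, "array"))     # public name re-exported
print(hasattr(shim, "_private"))  # leading-underscore name skipped
```

Without an `__all__` in the backend, `import *` copies only non-underscore names, so the shim exposes exactly the public surface of `_numpypy`.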
From noreply at buildbot.pypy.org Sat Jan 7 21:21:31 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Sat, 7 Jan 2012 21:21:31 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: merged default in
Message-ID: <20120107202131.9BE6682BFF@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51124:ae6912658a2f
Date: 2012-01-07 14:02 -0600
http://bitbucket.org/pypy/pypy/changeset/ae6912658a2f/
Log: merged default in
diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/__init__.py
@@ -0,0 +1,1 @@
+from _numpypy import *
diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst
--- a/pypy/doc/coding-guide.rst
+++ b/pypy/doc/coding-guide.rst
@@ -175,15 +175,15 @@
RPython
=================
-RPython Definition, not
------------------------
+RPython Definition
+------------------
-The list and exact details of the "RPython" restrictions are a somewhat
-evolving topic. In particular, we have no formal language definition
-as we find it more practical to discuss and evolve the set of
-restrictions while working on the whole program analysis. If you
-have any questions about the restrictions below then please feel
-free to mail us at pypy-dev@codespeak.net.
+RPython is a restricted subset of Python that is amenable to static analysis.
+Although there are additions to the language and some things might surprisingly
+work, this is a rough list of restrictions that should be considered. Note
+that there are tons of special-cased restrictions that you'll encounter
+as you go. The exact definition is "RPython is everything that our translation
+toolchain can accept" :)
.. _`wrapped object`: coding-guide.html#wrapping-rules
@@ -198,7 +198,7 @@
contain both a string and a int must be avoided. It is allowed to
mix None (basically with the role of a null pointer) with many other
types: `wrapped objects`, class instances, lists, dicts, strings, etc.
- but *not* with int and floats.
+ but *not* with int, floats or tuples.
**constants**
@@ -209,9 +209,12 @@
have this restriction, so if you need mutable global state, store it
in the attributes of some prebuilt singleton instance.
+
+
**control structures**
- all allowed but yield, ``for`` loops restricted to builtin types
+ all allowed; ``for`` loops restricted to builtin types; generators are
+ very restricted.
**range**
@@ -226,7 +229,8 @@
**generators**
- generators are not supported.
+ generators are supported, but their exact scope is very limited. You can't
+ merge two different generators at one control point.
**exceptions**
@@ -245,22 +249,27 @@
**strings**
- a lot of, but not all string methods are supported. Indexes can be
+ a lot of, but not all, string methods are supported, and those that are
+ supported do not necessarily accept all arguments. Indexes can be
negative. In case they are not, then you get slightly more efficient
code if the translator can prove that they are non-negative. When
slicing a string it is necessary to prove that the slice start and
- stop indexes are non-negative.
+ stop indexes are non-negative. There is no implicit str-to-unicode cast
+ anywhere.
**tuples**
no variable-length tuples; use them to store or return pairs or n-tuples of
- values. Each combination of types for elements and length constitute a separate
- and not mixable type.
+ values. Each combination of types for elements and length constitutes
+ a separate and not mixable type.
**lists**
lists are used as an allocated array. Lists are over-allocated, so list.append()
- is reasonably fast. Negative or out-of-bound indexes are only allowed for the
+ is reasonably fast. However, if you use a fixed-size list, the code
+ is more efficient. The annotator can usually figure out that your
+ list is fixed-size, even when you use a list comprehension.
+ Negative or out-of-bound indexes are only allowed for the
most common operations, as follows:
- *indexing*:
@@ -287,16 +296,14 @@
**dicts**
- dicts with a unique key type only, provided it is hashable.
- String keys have been the only allowed key types for a while, but this was generalized.
- After some re-optimization,
- the implementation could safely decide that all string dict keys should be interned.
+ dicts with a unique key type only, provided it is hashable. Custom
+ hash functions and custom equality will not be honored.
+ Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions.
**list comprehensions**
- may be used to create allocated, initialized arrays.
- After list over-allocation was introduced, there is no longer any restriction.
+ May be used to create allocated, initialized arrays.
**functions**
@@ -334,9 +341,7 @@
**objects**
- in PyPy, wrapped objects are borrowed from the object space. Just like
- in CPython, code that needs e.g. a dictionary can use a wrapped dict
- and the object space operations on it.
+ Normal rules apply.
This layout makes the number of types to take care about quite limited.
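The rewritten coding-guide section boils down to keeping containers homogeneous. An illustrative sketch (plain Python, not the RPython toolchain itself): both functions below run fine under CPython, but only the first would be accepted by the translator, since the second builds a list mixing ints and strings.

```python
def rpython_friendly(n):
    # homogeneous list of ints; the annotator can infer list(int)
    squares = [i * i for i in range(n)]
    total = 0
    for s in squares:
        total += s
    return total

def not_rpython(n):
    # mixes int and str in one list -- the annotator would reject this
    items = []
    for i in range(n):
        items.append(i)
        items.append(str(i))
    return items

print(rpython_friendly(4))  # 0 + 1 + 4 + 9 = 14
```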
diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py
--- a/pypy/interpreter/baseobjspace.py
+++ b/pypy/interpreter/baseobjspace.py
@@ -1591,12 +1591,15 @@
'ArithmeticError',
'AssertionError',
'AttributeError',
+ 'BaseException',
+ 'DeprecationWarning',
'EOFError',
'EnvironmentError',
'Exception',
'FloatingPointError',
'IOError',
'ImportError',
+ 'ImportWarning',
'IndentationError',
'IndexError',
'KeyError',
@@ -1617,7 +1620,10 @@
'TabError',
'TypeError',
'UnboundLocalError',
+ 'UnicodeDecodeError',
'UnicodeError',
+ 'UnicodeEncodeError',
+ 'UnicodeTranslateError',
'ValueError',
'ZeroDivisionError',
'UnicodeEncodeError',
diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
--- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
+++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
@@ -442,6 +442,22 @@
"""
self.optimize_loop(ops, expected)
+ def test_optimizer_renaming_boxes_not_imported(self):
+ ops = """
+ [p1]
+ i1 = strlen(p1)
+ label(p1)
+ jump(p1)
+ """
+ expected = """
+ [p1]
+ i1 = strlen(p1)
+ label(p1, i1)
+ i11 = same_as(i1)
+ jump(p1, i11)
+ """
+ self.optimize_loop(ops, expected)
+
class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin):
diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py
--- a/pypy/jit/metainterp/optimizeopt/unroll.py
+++ b/pypy/jit/metainterp/optimizeopt/unroll.py
@@ -271,6 +271,10 @@
if newresult is not op.result and not newvalue.is_constant():
op = ResOperation(rop.SAME_AS, [op.result], newresult)
self.optimizer._newoperations.append(op)
+ if self.optimizer.loop.logops:
+ debug_print(' Falling back to add extra: ' +
+ self.optimizer.loop.logops.repr_of_resop(op))
+
self.optimizer.flush()
self.optimizer.emitting_dissabled = False
@@ -435,7 +439,13 @@
return
for a in op.getarglist():
if not isinstance(a, Const) and a not in seen:
- self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen)
+ self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer,
+ seen)
+
+ if self.optimizer.loop.logops:
+ debug_print(' Emitting short op: ' +
+ self.optimizer.loop.logops.repr_of_resop(op))
+
optimizer.send_extra_operation(op)
seen[op.result] = True
if op.is_ovf():
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py
--- a/pypy/module/cpyext/api.py
+++ b/pypy/module/cpyext/api.py
@@ -23,6 +23,7 @@
from pypy.interpreter.function import StaticMethod
from pypy.objspace.std.sliceobject import W_SliceObject
from pypy.module.__builtin__.descriptor import W_Property
+from pypy.module.__builtin__.interp_memoryview import W_MemoryView
from pypy.rlib.entrypoint import entrypoint
from pypy.rlib.unroll import unrolling_iterable
from pypy.rlib.objectmodel import specialize
@@ -387,6 +388,8 @@
"Float": "space.w_float",
"Long": "space.w_long",
"Complex": "space.w_complex",
+ "ByteArray": "space.w_bytearray",
+ "MemoryView": "space.gettypeobject(W_MemoryView.typedef)",
"BaseObject": "space.w_object",
'None': 'space.type(space.w_None)',
'NotImplemented': 'space.type(space.w_NotImplemented)',
diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py
--- a/pypy/module/cpyext/buffer.py
+++ b/pypy/module/cpyext/buffer.py
@@ -1,6 +1,36 @@
+from pypy.interpreter.error import OperationError
from pypy.rpython.lltypesystem import rffi, lltype
from pypy.module.cpyext.api import (
cpython_api, CANNOT_FAIL, Py_buffer)
+from pypy.module.cpyext.pyobject import PyObject
+
+@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyObject_CheckBuffer(space, w_obj):
+ """Return 1 if obj supports the buffer interface otherwise 0."""
+ return 0 # the bf_getbuffer field is never filled by cpyext
+
+@cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real],
+ rffi.INT_real, error=-1)
+def PyObject_GetBuffer(space, w_obj, view, flags):
+ """Export obj into a Py_buffer, view. These arguments must
+ never be NULL. The flags argument is a bit field indicating what
+ kind of buffer the caller is prepared to deal with and therefore what
+ kind of buffer the exporter is allowed to return. The buffer interface
+ allows for complicated memory sharing possibilities, but some caller may
+ not be able to handle all the complexity but may want to see if the
+ exporter will let them take a simpler view to its memory.
+
+ Some exporters may not be able to share memory in every possible way and
+ may need to raise errors to signal to some consumers that something is
+ just not possible. These errors should be a BufferError unless
+ there is another error that is actually causing the problem. The
+ exporter can use flags information to simplify how much of the
+ Py_buffer structure is filled in with non-default values and/or
+ raise an error if the object can't support a simpler view of its memory.
+
+ 0 is returned on success and -1 on error."""
+ raise OperationError(space.w_TypeError, space.wrap(
+ 'PyPy does not yet implement the new buffer interface'))
@cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL)
def PyBuffer_IsContiguous(space, view, fortran):
diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h
--- a/pypy/module/cpyext/include/object.h
+++ b/pypy/module/cpyext/include/object.h
@@ -123,10 +123,6 @@
typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *);
typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **);
-typedef int (*objobjproc)(PyObject *, PyObject *);
-typedef int (*visitproc)(PyObject *, void *);
-typedef int (*traverseproc)(PyObject *, visitproc, void *);
-
/* Py3k buffer interface */
typedef struct bufferinfo {
void *buf;
@@ -153,6 +149,41 @@
typedef int (*getbufferproc)(PyObject *, Py_buffer *, int);
typedef void (*releasebufferproc)(PyObject *, Py_buffer *);
+ /* Flags for getting buffers */
+#define PyBUF_SIMPLE 0
+#define PyBUF_WRITABLE 0x0001
+/* we used to include an E, backwards compatible alias */
+#define PyBUF_WRITEABLE PyBUF_WRITABLE
+#define PyBUF_FORMAT 0x0004
+#define PyBUF_ND 0x0008
+#define PyBUF_STRIDES (0x0010 | PyBUF_ND)
+#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES)
+#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES)
+#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES)
+#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES)
+
+#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE)
+#define PyBUF_CONTIG_RO (PyBUF_ND)
+
+#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE)
+#define PyBUF_STRIDED_RO (PyBUF_STRIDES)
+
+#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT)
+
+#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT)
+
+
+#define PyBUF_READ 0x100
+#define PyBUF_WRITE 0x200
+#define PyBUF_SHADOW 0x400
+/* end Py3k buffer interface */
+
+typedef int (*objobjproc)(PyObject *, PyObject *);
+typedef int (*visitproc)(PyObject *, void *);
+typedef int (*traverseproc)(PyObject *, visitproc, void *);
+
typedef struct {
/* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all
arguments are guaranteed to be of the object's type (modulo
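A quick sanity check of the composite `PyBUF_*` flags added to `object.h` above; the values below simply mirror the `#define`s, so each derived flag can be expanded and verified in Python:

```python
# Values copied from the #defines in the diff above.
PyBUF_WRITABLE = 0x0001
PyBUF_FORMAT   = 0x0004
PyBUF_ND       = 0x0008
PyBUF_STRIDES  = 0x0010 | PyBUF_ND        # STRIDES always implies ND
PyBUF_INDIRECT = 0x0100 | PyBUF_STRIDES   # INDIRECT implies STRIDES too

PyBUF_RECORDS  = PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT
PyBUF_FULL_RO  = PyBUF_INDIRECT | PyBUF_FORMAT

print(hex(PyBUF_STRIDES))  # 0x18
print(hex(PyBUF_RECORDS))  # 0x1d
print(hex(PyBUF_FULL_RO))  # 0x11c
```

The implication chain (INDIRECT ⊃ STRIDES ⊃ ND) is encoded directly in the bit patterns, which is why a consumer that asks for strides automatically gets shape information as well.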
diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h
--- a/pypy/module/cpyext/include/pystate.h
+++ b/pypy/module/cpyext/include/pystate.h
@@ -5,7 +5,7 @@
struct _is; /* Forward */
typedef struct _is {
- int _foo;
+ struct _is *next;
} PyInterpreterState;
typedef struct _ts {
diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py
--- a/pypy/module/cpyext/pystate.py
+++ b/pypy/module/cpyext/pystate.py
@@ -2,7 +2,10 @@
cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct)
from pypy.rpython.lltypesystem import rffi, lltype
-PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ()))
+PyInterpreterStateStruct = lltype.ForwardReference()
+PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct)
+cpython_struct(
+ "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct)
PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)]))
@cpython_api([], PyThreadState, error=CANNOT_FAIL)
@@ -54,7 +57,8 @@
class InterpreterState(object):
def __init__(self, space):
- self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True)
+ self.interpreter_state = lltype.malloc(
+ PyInterpreterState.TO, flavor='raw', zero=True, immortal=True)
def new_thread_state(self):
capsule = ThreadStateCapsule()
diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py
--- a/pypy/module/cpyext/stubs.py
+++ b/pypy/module/cpyext/stubs.py
@@ -34,141 +34,6 @@
@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
def PyObject_CheckBuffer(space, obj):
- """Return 1 if obj supports the buffer interface otherwise 0."""
- raise NotImplementedError
-
-@cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1)
-def PyObject_GetBuffer(space, obj, view, flags):
- """Export obj into a Py_buffer, view. These arguments must
- never be NULL. The flags argument is a bit field indicating what
- kind of buffer the caller is prepared to deal with and therefore what
- kind of buffer the exporter is allowed to return. The buffer interface
- allows for complicated memory sharing possibilities, but some caller may
- not be able to handle all the complexity but may want to see if the
- exporter will let them take a simpler view to its memory.
-
- Some exporters may not be able to share memory in every possible way and
- may need to raise errors to signal to some consumers that something is
- just not possible. These errors should be a BufferError unless
- there is another error that is actually causing the problem. The
- exporter can use flags information to simplify how much of the
- Py_buffer structure is filled in with non-default values and/or
- raise an error if the object can't support a simpler view of its memory.
-
- 0 is returned on success and -1 on error.
-
- The following table gives possible values to the flags arguments.
-
- Flag
-
- Description
-
- PyBUF_SIMPLE
-
- This is the default flag state. The returned
- buffer may or may not have writable memory. The
- format of the data will be assumed to be unsigned
- bytes. This is a "stand-alone" flag constant. It
- never needs to be '|'d to the others. The exporter
- will raise an error if it cannot provide such a
- contiguous buffer of bytes.
-
- PyBUF_WRITABLE
-
- The returned buffer must be writable. If it is
- not writable, then raise an error.
-
- PyBUF_STRIDES
-
- This implies PyBUF_ND. The returned
- buffer must provide strides information (i.e. the
- strides cannot be NULL). This would be used when
- the consumer can handle strided, discontiguous
- arrays. Handling strides automatically assumes
- you can handle shape. The exporter can raise an
- error if a strided representation of the data is
- not possible (i.e. without the suboffsets).
-
- PyBUF_ND
-
- The returned buffer must provide shape
- information. The memory will be assumed C-style
- contiguous (last dimension varies the
- fastest). The exporter may raise an error if it
- cannot provide this kind of contiguous buffer. If
- this is not given then shape will be NULL.
-
- PyBUF_C_CONTIGUOUS
- PyBUF_F_CONTIGUOUS
- PyBUF_ANY_CONTIGUOUS
-
- These flags indicate that the contiguity returned
- buffer must be respectively, C-contiguous (last
- dimension varies the fastest), Fortran contiguous
- (first dimension varies the fastest) or either
- one. All of these flags imply
- PyBUF_STRIDES and guarantee that the
- strides buffer info structure will be filled in
- correctly.
-
- PyBUF_INDIRECT
-
- This flag indicates the returned buffer must have
- suboffsets information (which can be NULL if no
- suboffsets are needed). This can be used when
- the consumer can handle indirect array
- referencing implied by these suboffsets. This
- implies PyBUF_STRIDES.
-
- PyBUF_FORMAT
-
- The returned buffer must have true format
- information if this flag is provided. This would
- be used when the consumer is going to be checking
- for what 'kind' of data is actually stored. An
- exporter should always be able to provide this
- information if requested. If format is not
- explicitly requested then the format must be
- returned as NULL (which means 'B', or
- unsigned bytes)
-
- PyBUF_STRIDED
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_WRITABLE).
-
- PyBUF_STRIDED_RO
-
- This is equivalent to (PyBUF_STRIDES).
-
- PyBUF_RECORDS
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_FORMAT | PyBUF_WRITABLE).
-
- PyBUF_RECORDS_RO
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_FORMAT).
-
- PyBUF_FULL
-
- This is equivalent to (PyBUF_INDIRECT |
- PyBUF_FORMAT | PyBUF_WRITABLE).
-
- PyBUF_FULL_RO
-
- This is equivalent to (PyBUF_INDIRECT |
- PyBUF_FORMAT).
-
- PyBUF_CONTIG
-
- This is equivalent to (PyBUF_ND |
- PyBUF_WRITABLE).
-
- PyBUF_CONTIG_RO
-
- This is equivalent to (PyBUF_ND)."""
raise NotImplementedError
@cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL)
diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py
--- a/pypy/module/cpyext/test/test_pystate.py
+++ b/pypy/module/cpyext/test/test_pystate.py
@@ -37,6 +37,7 @@
def test_thread_state_interp(self, space, api):
ts = api.PyThreadState_Get()
assert ts.c_interp == api.PyInterpreterState_Head()
+ assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO)
def test_basic_threadstate_dance(self, space, api):
# Let extension modules call these functions,
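The pystate changes above give `PyInterpreterState` a `next` field and add `zero=True` to the raw malloc, so the interpreter-state head reads `next == NULL`. A small ctypes sketch of why the zero-fill matters (ctypes `Structure` instances are zero-initialized, playing the role of `zero=True` here):

```python
import ctypes

class PyInterpreterState(ctypes.Structure):
    pass

# Self-referential pointer field, declared after the class exists.
PyInterpreterState._fields_ = [("next", ctypes.POINTER(PyInterpreterState))]

head = PyInterpreterState()  # zero-filled, so ``next`` is NULL
print(bool(head.next))       # False -> NULL, as the new test asserts
```

Had the allocation been left uninitialized, anything walking the list from `PyInterpreterState_Head()` would follow a garbage `next` pointer.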
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -9,7 +9,7 @@
appleveldefs = {}
class Module(MixedModule):
- applevel_name = 'numpypy'
+ applevel_name = '_numpypy'
submodules = {
'pypy': PyPyModule
diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py
--- a/pypy/module/micronumpy/app_numpy.py
+++ b/pypy/module/micronumpy/app_numpy.py
@@ -1,6 +1,6 @@
import math
-import numpypy
+import _numpypy
inf = float("inf")
@@ -14,29 +14,29 @@
return mean(a)
def identity(n, dtype=None):
- a = numpypy.zeros((n,n), dtype=dtype)
+ a = _numpypy.zeros((n,n), dtype=dtype)
for i in range(n):
a[i][i] = 1
return a
def mean(a):
if not hasattr(a, "mean"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.mean()
def sum(a):
if not hasattr(a, "sum"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.sum()
def min(a):
if not hasattr(a, "min"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.min()
def max(a):
if not hasattr(a, "max"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.max()
def arange(start, stop=None, step=1, dtype=None):
@@ -47,9 +47,9 @@
stop = start
start = 0
if dtype is None:
- test = numpypy.array([start, stop, step, 0])
+ test = _numpypy.array([start, stop, step, 0])
dtype = test.dtype
- arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype)
+ arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype)
i = start
for j in range(arr.size):
arr[j] = i
@@ -90,5 +90,5 @@
you should assign the new shape to the shape attribute of the array
'''
if not hasattr(a, 'reshape'):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.reshape(shape)
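The `arange` in `app_numpy.py` above sizes its result as `ceil((stop - start) / step)` and fills it by repeated addition. A pure-Python sketch of that sizing logic, with a plain list standing in for a `_numpypy` array:

```python
import math

def arange_sketch(start, stop=None, step=1):
    # mirror of the applevel arange's argument handling and sizing
    if stop is None:
        stop = start
        start = 0
    size = int(math.ceil((stop - start) / step))
    out = []
    i = start
    for _ in range(size):
        out.append(i)
        i += step
    return out

print(arange_sketch(3))           # [0, 1, 2]
print(arange_sketch(0, 1, 0.25))  # [0, 0.25, 0.5, 0.75]
```

Note how the `ceil` makes a non-dividing step round the length up: `arange_sketch(0, 10, 3)` yields four elements, stopping before `stop` is reached.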
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -429,13 +429,10 @@
res.append(')')
else:
concrete.to_str(space, 1, res, indent=' ')
- if (dtype is interp_dtype.get_dtype_cache(space).w_float64dtype or \
- dtype.kind == interp_dtype.SIGNEDLTR and \
- dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) \
- and self.size:
- # Do not print dtype
- pass
- else:
+ if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and
+ not (dtype.kind == interp_dtype.SIGNEDLTR and
+ dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or
+ not self.size):
res.append(", dtype=" + dtype.name)
res.append(")")
return space.wrap(res.build())
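The `interp_numarray.py` change above replaces an `if <suppress>: pass / else: append(dtype)` shape with a single negated condition. That rewrite is just De Morgan's law applied to the suppression test; a pure-boolean check (folding "signed with native element size" into one flag for brevity):

```python
from itertools import product

def old_prints_dtype(is_f64, is_signed_native, has_size):
    # old code appended the dtype only when the combined test was false
    return not ((is_f64 or is_signed_native) and has_size)

def new_prints_dtype(is_f64, is_signed_native, has_size):
    # new code tests the negation directly
    return (not is_f64 and not is_signed_native) or not has_size

for flags in product([False, True], repeat=3):
    assert old_prints_dtype(*flags) == new_prints_dtype(*flags)
print("equivalent for all 8 cases")
```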
diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py
--- a/pypy/module/micronumpy/test/test_dtypes.py
+++ b/pypy/module/micronumpy/test/test_dtypes.py
@@ -3,7 +3,7 @@
class AppTestDtypes(BaseNumpyAppTest):
def test_dtype(self):
- from numpypy import dtype
+ from _numpypy import dtype
d = dtype('?')
assert d.num == 0
@@ -14,7 +14,7 @@
raises(TypeError, dtype, 1042)
def test_dtype_with_types(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert dtype(bool).num == 0
assert dtype(int).num == 7
@@ -22,13 +22,13 @@
assert dtype(float).num == 12
def test_array_dtype_attr(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), long)
assert a.dtype is dtype(long)
def test_repr_str(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert repr(dtype) == ""
d = dtype('?')
@@ -36,7 +36,7 @@
assert str(d) == "bool"
def test_bool_array(self):
- from numpypy import array, False_, True_
+ from _numpypy import array, False_, True_
a = array([0, 1, 2, 2.5], dtype='?')
assert a[0] is False_
@@ -44,7 +44,7 @@
assert a[i] is True_
def test_copy_array_with_dtype(self):
- from numpypy import array, False_, True_, int64
+ from _numpypy import array, False_, True_, int64
a = array([0, 1, 2, 3], dtype=long)
# int on 64-bit, long in 32-bit
@@ -58,35 +58,35 @@
assert b[0] is False_
def test_zeros_bool(self):
- from numpypy import zeros, False_
+ from _numpypy import zeros, False_
a = zeros(10, dtype=bool)
for i in range(10):
assert a[i] is False_
def test_ones_bool(self):
- from numpypy import ones, True_
+ from _numpypy import ones, True_
a = ones(10, dtype=bool)
for i in range(10):
assert a[i] is True_
def test_zeros_long(self):
- from numpypy import zeros, int64
+ from _numpypy import zeros, int64
a = zeros(10, dtype=long)
for i in range(10):
assert isinstance(a[i], int64)
assert a[1] == 0
def test_ones_long(self):
- from numpypy import ones, int64
+ from _numpypy import ones, int64
a = ones(10, dtype=long)
for i in range(10):
assert isinstance(a[i], int64)
assert a[1] == 1
def test_overflow(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
assert array([128], 'b')[0] == -128
assert array([256], 'B')[0] == 0
assert array([32768], 'h')[0] == -32768
@@ -98,7 +98,7 @@
raises(OverflowError, "array([2**64], 'Q')")
def test_bool_binop_types(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
types = [
'?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd'
]
@@ -107,7 +107,7 @@
assert (a + array([0], t)).dtype is dtype(t)
def test_binop_types(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'),
('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'),
('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'),
@@ -129,7 +129,7 @@
assert (array([1], d1) + array([1], d2)).dtype is dtype(dout)
def test_add_int8(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="int8")
b = a + a
@@ -138,7 +138,7 @@
assert b[i] == i * 2
def test_add_int16(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="int16")
b = a + a
@@ -147,7 +147,7 @@
assert b[i] == i * 2
def test_add_uint32(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="I")
b = a + a
@@ -156,19 +156,19 @@
assert b[i] == i * 2
def test_shape(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert dtype(long).shape == ()
def test_cant_subclass(self):
- from numpypy import dtype
+ from _numpypy import dtype
# You can't subclass dtype
raises(TypeError, type, "Foo", (dtype,), {})
class AppTestTypes(BaseNumpyAppTest):
def test_abstract_types(self):
- import numpypy as numpy
+ import _numpypy as numpy
raises(TypeError, numpy.generic, 0)
raises(TypeError, numpy.number, 0)
raises(TypeError, numpy.integer, 0)
@@ -181,7 +181,7 @@
raises(TypeError, numpy.inexact, 0)
def test_bool(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object]
assert numpy.bool_(3) is numpy.True_
@@ -196,7 +196,7 @@
assert numpy.bool_("False") is numpy.True_
def test_int8(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -218,7 +218,7 @@
assert numpy.int8('128') == -128
def test_uint8(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -241,7 +241,7 @@
assert numpy.uint8('256') == 0
def test_int16(self):
- import numpypy as numpy
+ import _numpypy as numpy
x = numpy.int16(3)
assert x == 3
@@ -251,7 +251,7 @@
assert numpy.int16('32768') == -32768
def test_uint16(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint16(65535) == 65535
assert numpy.uint16(65536) == 0
@@ -260,7 +260,7 @@
def test_int32(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
x = numpy.int32(23)
assert x == 23
@@ -275,7 +275,7 @@
def test_uint32(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint32(10) == 10
@@ -286,14 +286,14 @@
assert numpy.uint32('4294967296') == 0
def test_int_(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.int_ is numpy.dtype(int).type
assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object]
def test_int64(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
if sys.maxint == 2 ** 63 -1:
assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object]
@@ -315,7 +315,7 @@
def test_uint64(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -330,7 +330,7 @@
raises(OverflowError, numpy.uint64(18446744073709551616))
def test_float32(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object]
@@ -339,7 +339,7 @@
raises(ValueError, numpy.float32, '23.2df')
def test_float64(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object]
@@ -352,7 +352,7 @@
raises(ValueError, numpy.float64, '23.2df')
def test_subclass_type(self):
- import numpypy as numpy
+ import _numpypy as numpy
class X(numpy.float64):
def m(self):
diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py
--- a/pypy/module/micronumpy/test/test_module.py
+++ b/pypy/module/micronumpy/test/test_module.py
@@ -3,33 +3,33 @@
class AppTestNumPyModule(BaseNumpyAppTest):
def test_mean(self):
- from numpypy import array, mean
+ from _numpypy import array, mean
assert mean(array(range(5))) == 2.0
assert mean(range(5)) == 2.0
def test_average(self):
- from numpypy import array, average
+ from _numpypy import array, average
assert average(range(10)) == 4.5
assert average(array(range(10))) == 4.5
def test_sum(self):
- from numpypy import array, sum
+ from _numpypy import array, sum
assert sum(range(10)) == 45
assert sum(array(range(10))) == 45
def test_min(self):
- from numpypy import array, min
+ from _numpypy import array, min
assert min(range(10)) == 0
assert min(array(range(10))) == 0
def test_max(self):
- from numpypy import array, max
+ from _numpypy import array, max
assert max(range(10)) == 9
assert max(array(range(10))) == 9
def test_constants(self):
import math
- from numpypy import inf, e, pi
+ from _numpypy import inf, e, pi
assert type(inf) is float
assert inf == float("inf")
assert e == math.e
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -161,7 +161,7 @@
class AppTestNumArray(BaseNumpyAppTest):
def test_ndarray(self):
- from numpypy import ndarray, array, dtype
+ from _numpypy import ndarray, array, dtype
assert type(ndarray) is type
assert type(array) is not type
@@ -176,25 +176,26 @@
assert a.dtype is dtype(int)
def test_type(self):
- from numpypy import array
+ from _numpypy import array
ar = array(range(5))
assert type(ar) is type(ar + ar)
def test_ndim(self):
- from numpypy import array
+ from _numpypy import array
x = array(0.2)
assert x.ndim == 0
- x = array([1,2])
+ x = array([1, 2])
assert x.ndim == 1
- x = array([[1,2], [3,4]])
+ x = array([[1, 2], [3, 4]])
assert x.ndim == 2
- x = array([[[1,2], [3,4]], [[5,6], [7,8]] ])
+ x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
assert x.ndim == 3
- # numpy actually raises an AttributeError, but numpypy raises an AttributeError
- raises (TypeError, 'x.ndim=3')
-
+ # numpy actually raises an AttributeError, but _numpypy raises a
+ # TypeError
+ raises(TypeError, 'x.ndim = 3')
+
def test_init(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros(15)
# Check that storage was actually zero'd.
assert a[10] == 0.0
@@ -203,7 +204,7 @@
assert a[13] == 5.3
def test_size(self):
- from numpypy import array
+ from _numpypy import array
assert array(3).size == 1
a = array([1, 2, 3])
assert a.size == 3
@@ -214,13 +215,13 @@
Test that empty() works.
"""
- from numpypy import empty
+ from _numpypy import empty
a = empty(2)
a[1] = 1.0
assert a[1] == 1.0
def test_ones(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones(3)
assert len(a) == 3
assert a[0] == 1
@@ -229,7 +230,7 @@
assert a[2] == 4
def test_copy(self):
- from numpypy import arange, array
+ from _numpypy import arange, array
a = arange(5)
b = a.copy()
for i in xrange(5):
@@ -246,12 +247,12 @@
assert (c == b).all()
def test_iterator_init(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a[3] == 3
def test_getitem(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[5]")
a = a + a
@@ -260,7 +261,7 @@
raises(IndexError, "a[-6]")
def test_getitem_tuple(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[(1,2)]")
for i in xrange(5):
@@ -270,7 +271,7 @@
assert a[i] == b[i]
def test_setitem(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
a[-1] = 5.0
assert a[4] == 5.0
@@ -278,7 +279,7 @@
raises(IndexError, "a[-6] = 3.0")
def test_setitem_tuple(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[(1,2)] = [0,1]")
for i in xrange(5):
@@ -289,7 +290,7 @@
assert a[i] == i
def test_setslice_array(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array(range(2))
a[1:4:2] = b
@@ -300,7 +301,7 @@
assert b[1] == 0.
def test_setslice_of_slice_array(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = zeros(5)
a[::2] = array([9., 10., 11.])
assert a[0] == 9.
@@ -319,7 +320,7 @@
assert a[0] == 3.
def test_setslice_list(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = [0., 1.]
a[1:4:2] = b
@@ -327,14 +328,14 @@
assert a[3] == 1.
def test_setslice_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
a[1:4:2] = 0.
assert a[1] == 0.
assert a[3] == 0.
def test_scalar(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(3)
raises(IndexError, "a[0]")
raises(IndexError, "a[0] = 5")
@@ -343,13 +344,13 @@
assert a.dtype is dtype(int)
def test_len(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert len(a) == 5
assert len(a + a) == 5
def test_shape(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.shape == (5,)
b = a + a
@@ -358,7 +359,7 @@
assert c.shape == (3,)
def test_set_shape(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array([])
a.shape = []
a = array(range(12))
@@ -378,7 +379,7 @@
a.shape = (1,)
def test_reshape(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(12))
exc = raises(ValueError, "b = a.reshape((3, 10))")
assert str(exc.value) == "total size of new array must be unchanged"
@@ -391,7 +392,7 @@
a.shape = (12, 2)
def test_slice_reshape(self):
- from numpypy import zeros, arange
+ from _numpypy import zeros, arange
a = zeros((4, 2, 3))
b = a[::2, :, :]
b.shape = (2, 6)
@@ -427,13 +428,13 @@
raises(ValueError, arange(10).reshape, (5, -1, -1))
def test_reshape_varargs(self):
- from numpypy import arange
+ from _numpypy import arange
z = arange(96).reshape(12, -1)
y = z.reshape(4, 3, 8)
assert y.shape == (4, 3, 8)
def test_add(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a + a
for i in range(5):
@@ -446,7 +447,7 @@
assert c[i] == bool(a[i] + b[i])
def test_add_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([i for i in reversed(range(5))])
c = a + b
@@ -454,20 +455,20 @@
assert c[i] == 4
def test_add_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a + 5
for i in range(5):
assert b[i] == i + 5
def test_radd(self):
- from numpypy import array
+ from _numpypy import array
r = 3 + array(range(3))
for i in range(3):
assert r[i] == i + 3
def test_add_list(self):
- from numpypy import array, ndarray
+ from _numpypy import array, ndarray
a = array(range(5))
b = list(reversed(range(5)))
c = a + b
@@ -476,14 +477,14 @@
assert c[i] == 4
def test_subtract(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - a
for i in range(5):
assert b[i] == 0
def test_subtract_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([1, 1, 1, 1, 1])
c = a - b
@@ -491,34 +492,34 @@
assert c[i] == i - 1
def test_subtract_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - 5
for i in range(5):
assert b[i] == i - 5
def test_scalar_subtract(self):
- from numpypy import int32
+ from _numpypy import int32
assert int32(2) - 1 == 1
assert 1 - int32(2) == -1
def test_mul(self):
- import numpypy
+ import _numpypy
- a = numpypy.array(range(5))
+ a = _numpypy.array(range(5))
b = a * a
for i in range(5):
assert b[i] == i * i
- a = numpypy.array(range(5), dtype=bool)
+ a = _numpypy.array(range(5), dtype=bool)
b = a * a
- assert b.dtype is numpypy.dtype(bool)
- assert b[0] is numpypy.False_
+ assert b.dtype is _numpypy.dtype(bool)
+ assert b[0] is _numpypy.False_
for i in range(1, 5):
- assert b[i] is numpypy.True_
+ assert b[i] is _numpypy.True_
def test_mul_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a * 5
for i in range(5):
@@ -526,7 +527,7 @@
def test_div(self):
from math import isnan
- from numpypy import array, dtype, inf
+ from _numpypy import array, dtype, inf
a = array(range(1, 6))
b = a / a
@@ -558,7 +559,7 @@
assert c[2] == -inf
def test_div_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([2, 2, 2, 2, 2], float)
c = a / b
@@ -566,14 +567,14 @@
assert c[i] == i / 2.0
def test_div_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a / 5.0
for i in range(5):
assert b[i] == i / 5.0
def test_pow(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = a ** a
for i in range(5):
@@ -583,7 +584,7 @@
assert (a ** 2 == a * a).all()
def test_pow_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = array([2, 2, 2, 2, 2])
c = a ** b
@@ -591,14 +592,14 @@
assert c[i] == i ** 2
def test_pow_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = a ** 2
for i in range(5):
assert b[i] == i ** 2
def test_mod(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(1, 6))
b = a % a
for i in range(5):
@@ -611,7 +612,7 @@
assert b[i] == 1
def test_mod_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([2, 2, 2, 2, 2])
c = a % b
@@ -619,14 +620,14 @@
assert c[i] == i % 2
def test_mod_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a % 2
for i in range(5):
assert b[i] == i % 2
def test_pos(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = +a
for i in range(5):
@@ -637,7 +638,7 @@
assert a[i] == i
def test_neg(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = -a
for i in range(5):
@@ -648,7 +649,7 @@
assert a[i] == -i
def test_abs(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = abs(a)
for i in range(5):
@@ -659,7 +660,7 @@
assert a[i + 5] == abs(i)
def test_auto_force(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - 1
a[2] = 3
@@ -673,7 +674,7 @@
assert c[1] == 4
def test_getslice(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[1:5]
assert len(s) == 4
@@ -687,7 +688,7 @@
assert s[0] == 5
def test_getslice_step(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(10))
s = a[1:9:2]
assert len(s) == 4
@@ -695,7 +696,7 @@
assert s[i] == a[2 * i + 1]
def test_slice_update(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[0:3]
s[1] = 10
@@ -705,7 +706,7 @@
def test_slice_invaidate(self):
# check that slice shares invalidation list with
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[0:2]
b = array([10, 11])
@@ -719,13 +720,13 @@
assert d[1] == 12
def test_mean(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.mean() == 2.0
assert a[:4].mean() == 1.5
def test_sum(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.sum() == 10.0
assert a[:4].sum() == 6.0
@@ -734,8 +735,8 @@
assert a.sum() == 5
def test_identity(self):
- from numpypy import identity, array
- from numpypy import int32, float64, dtype
+ from _numpypy import identity, array
+ from _numpypy import int32, float64, dtype
a = identity(0)
assert len(a) == 0
assert a.dtype == dtype('float64')
@@ -754,32 +755,32 @@
assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all()
def test_prod(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(1, 6))
assert a.prod() == 120.0
assert a[:4].prod() == 24.0
def test_max(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.max() == 5.7
b = array([])
raises(ValueError, "b.max()")
def test_max_add(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert (a + a).max() == 11.4
def test_min(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.min() == -3.0
b = array([])
raises(ValueError, "b.min()")
def test_argmax(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
r = a.argmax()
assert r == 2
@@ -800,14 +801,14 @@
assert a.argmax() == 2
def test_argmin(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.argmin() == 3
b = array([])
raises(ValueError, "b.argmin()")
def test_all(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.all() == False
a[0] = 3.0
@@ -816,7 +817,7 @@
assert b.all() == True
def test_any(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5))
assert a.any() == True
b = zeros(5)
@@ -825,7 +826,7 @@
assert c.any() == False
def test_dot(self):
- from numpypy import array, dot
+ from _numpypy import array, dot
a = array(range(5))
assert a.dot(a) == 30.0
@@ -835,14 +836,14 @@
assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all()
def test_dot_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a.dot(2.5)
for i in xrange(5):
assert b[i] == 2.5 * a[i]
def test_dtype_guessing(self):
- from numpypy import array, dtype, float64, int8, bool_
+ from _numpypy import array, dtype, float64, int8, bool_
assert array([True]).dtype is dtype(bool)
assert array([True, False]).dtype is dtype(bool)
@@ -859,7 +860,7 @@
def test_comparison(self):
import operator
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5))
b = array(range(5), float)
@@ -878,7 +879,7 @@
assert c[i] == func(b[i], 3)
def test_nonzero(self):
- from numpypy import array
+ from _numpypy import array
a = array([1, 2])
raises(ValueError, bool, a)
raises(ValueError, bool, a == a)
@@ -888,7 +889,7 @@
assert not bool(array([0]))
def test_slice_assignment(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
a[::-1] = a
assert (a == [0, 1, 2, 1, 0]).all()
@@ -898,8 +899,8 @@
assert (a == [8, 6, 4, 2, 0]).all()
def test_debug_repr(self):
- from numpypy import zeros, sin
- from numpypy.pypy import debug_repr
+ from _numpypy import zeros, sin
+ from _numpypy.pypy import debug_repr
a = zeros(1)
assert debug_repr(a) == 'Array'
assert debug_repr(a + a) == 'Call2(add, Array, Array)'
@@ -913,8 +914,8 @@
assert debug_repr(b) == 'Array'
def test_remove_invalidates(self):
- from numpypy import array
- from numpypy.pypy import remove_invalidates
+ from _numpypy import array
+ from _numpypy.pypy import remove_invalidates
a = array([1, 2, 3])
b = a + a
remove_invalidates(a)
@@ -922,7 +923,7 @@
assert b[0] == 28
def test_virtual_views(self):
- from numpypy import arange
+ from _numpypy import arange
a = arange(15)
c = (a + a)
d = c[::2]
@@ -940,7 +941,7 @@
assert b[1] == 2
def test_tolist_scalar(self):
- from numpypy import int32, bool_
+ from _numpypy import int32, bool_
x = int32(23)
assert x.tolist() == 23
assert type(x.tolist()) is int
@@ -948,13 +949,13 @@
assert y.tolist() is True
def test_tolist_zerodim(self):
- from numpypy import array
+ from _numpypy import array
x = array(3)
assert x.tolist() == 3
assert type(x.tolist()) is int
def test_tolist_singledim(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.tolist() == [0, 1, 2, 3, 4]
assert type(a.tolist()[0]) is int
@@ -962,17 +963,17 @@
assert b.tolist() == [0.2, 0.4, 0.6]
def test_tolist_multidim(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4]])
assert a.tolist() == [[1, 2], [3, 4]]
def test_tolist_view(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4]])
assert (a + a).tolist() == [[2, 4], [6, 8]]
def test_tolist_slice(self):
- from numpypy import array
+ from _numpypy import array
a = array([[17.1, 27.2], [40.3, 50.3]])
assert a[:, 0].tolist() == [17.1, 40.3]
assert a[0].tolist() == [17.1, 27.2]
@@ -980,23 +981,23 @@
class AppTestMultiDim(BaseNumpyAppTest):
def test_init(self):
- import numpypy
- a = numpypy.zeros((2, 2))
+ import _numpypy
+ a = _numpypy.zeros((2, 2))
assert len(a) == 2
def test_shape(self):
- import numpypy
- assert numpypy.zeros(1).shape == (1,)
- assert numpypy.zeros((2, 2)).shape == (2, 2)
- assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
- assert numpypy.array([[1], [2], [3]]).shape == (3, 1)
- assert len(numpypy.zeros((3, 1, 2))) == 3
- raises(TypeError, len, numpypy.zeros(()))
- raises(ValueError, numpypy.array, [[1, 2], 3])
+ import _numpypy
+ assert _numpypy.zeros(1).shape == (1,)
+ assert _numpypy.zeros((2, 2)).shape == (2, 2)
+ assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
+ assert _numpypy.array([[1], [2], [3]]).shape == (3, 1)
+ assert len(_numpypy.zeros((3, 1, 2))) == 3
+ raises(TypeError, len, _numpypy.zeros(()))
+ raises(ValueError, _numpypy.array, [[1, 2], 3])
def test_getsetitem(self):
- import numpypy
- a = numpypy.zeros((2, 3, 1))
+ import _numpypy
+ a = _numpypy.zeros((2, 3, 1))
raises(IndexError, a.__getitem__, (2, 0, 0))
raises(IndexError, a.__getitem__, (0, 3, 0))
raises(IndexError, a.__getitem__, (0, 0, 1))
@@ -1007,8 +1008,8 @@
assert a[1, -1, 0] == 3
def test_slices(self):
- import numpypy
- a = numpypy.zeros((4, 3, 2))
+ import _numpypy
+ a = _numpypy.zeros((4, 3, 2))
raises(IndexError, a.__getitem__, (4,))
raises(IndexError, a.__getitem__, (3, 3))
raises(IndexError, a.__getitem__, (slice(None), 3))
@@ -1041,51 +1042,51 @@
assert a[1][2][1] == 15
def test_init_2(self):
- import numpypy
- raises(ValueError, numpypy.array, [[1], 2])
- raises(ValueError, numpypy.array, [[1, 2], [3]])
- raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]])
- raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]])
- a = numpypy.array([[1, 2], [4, 5]])
+ import _numpypy
+ raises(ValueError, _numpypy.array, [[1], 2])
+ raises(ValueError, _numpypy.array, [[1, 2], [3]])
+ raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]])
+ raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]])
+ a = _numpypy.array([[1, 2], [4, 5]])
assert a[0, 1] == 2
assert a[0][1] == 2
- a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
+ a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
assert (a[0, 1] == [3, 4]).all()
def test_setitem_slice(self):
- import numpypy
- a = numpypy.zeros((3, 4))
+ import _numpypy
+ a = _numpypy.zeros((3, 4))
a[1] = [1, 2, 3, 4]
assert a[1, 2] == 3
raises(TypeError, a[1].__setitem__, [1, 2, 3])
- a = numpypy.array([[1, 2], [3, 4]])
+ a = _numpypy.array([[1, 2], [3, 4]])
assert (a == [[1, 2], [3, 4]]).all()
- a[1] = numpypy.array([5, 6])
+ a[1] = _numpypy.array([5, 6])
assert (a == [[1, 2], [5, 6]]).all()
- a[:, 1] = numpypy.array([8, 10])
+ a[:, 1] = _numpypy.array([8, 10])
assert (a == [[1, 8], [5, 10]]).all()
- a[0, :: -1] = numpypy.array([11, 12])
+ a[0, :: -1] = _numpypy.array([11, 12])
assert (a == [[12, 11], [5, 10]]).all()
def test_ufunc(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
assert ((a + a) == \
array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all()
def test_getitem_add(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
assert (a + a)[1, 1] == 8
def test_ufunc_negative(self):
- from numpypy import array, negative
+ from _numpypy import array, negative
a = array([[1, 2], [3, 4]])
b = negative(a + a)
assert (b == [[-2, -4], [-6, -8]]).all()
def test_getitem_3(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6], [7, 8],
[9, 10], [11, 12], [13, 14]])
b = a[::2]
@@ -1096,12 +1097,12 @@
assert c[1][1] == 12
def test_multidim_ones(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones((1, 2, 3))
assert a[0, 1, 2] == 1.0
def test_multidim_setslice(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((3, 3))
b = ones((3, 3))
a[:, 1:3] = b[:, 1:3]
@@ -1112,21 +1113,21 @@
assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all()
def test_broadcast_ufunc(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
b = array([5, 6])
c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]])
assert c.all()
def test_broadcast_setslice(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((10, 10))
b = ones(10)
a[:, :] = b
assert a[3, 5] == 1
def test_broadcast_shape_agreement(self):
- from numpypy import zeros, array
+ from _numpypy import zeros, array
a = zeros((3, 1, 3))
b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32)))
c = ((a + b) == [b, b, b])
@@ -1140,7 +1141,7 @@
assert c.all()
def test_broadcast_scalar(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((4, 5), 'd')
a[:, 1] = 3
assert a[2, 1] == 3
@@ -1151,14 +1152,14 @@
assert a[3, 2] == 0
def test_broadcast_call2(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((4, 1, 5))
b = ones((4, 3, 5))
b[:] = (a + a)
assert (b == zeros((4, 3, 5))).all()
def test_broadcast_virtualview(self):
- from numpypy import arange, zeros
+ from _numpypy import arange, zeros
a = arange(8).reshape([2, 2, 2])
b = (a + a)[1, 1]
c = zeros((2, 2, 2))
@@ -1166,13 +1167,13 @@
assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all()
def test_argmax(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
assert a.argmax() == 5
assert a[:2, ].argmax() == 3
def test_broadcast_wrong_shapes(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((4, 3, 2))
b = zeros((4, 2))
exc = raises(ValueError, lambda: a + b)
@@ -1180,7 +1181,7 @@
" together with shapes (4,3,2) (4,2)"
def test_reduce(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
assert a.sum() == (13 * 12) / 2
b = a[1:, 1::2]
@@ -1188,7 +1189,7 @@
assert c.sum() == (6 + 8 + 10 + 12) * 2
def test_transpose(self):
- from numpypy import array
+ from _numpypy import array
a = array(((range(3), range(3, 6)),
(range(6, 9), range(9, 12)),
(range(12, 15), range(15, 18)),
@@ -1207,7 +1208,7 @@
assert(b[:, 0] == a[0, :]).all()
def test_flatiter(self):
- from numpypy import array, flatiter
+ from _numpypy import array, flatiter
a = array([[10, 30], [40, 60]])
f_iter = a.flat
assert f_iter.next() == 10
@@ -1222,23 +1223,23 @@
assert s == 140
def test_flatiter_array_conv(self):
- from numpypy import array, dot
+ from _numpypy import array, dot
a = array([1, 2, 3])
assert dot(a.flat, a.flat) == 14
def test_flatiter_varray(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones((2, 2))
assert list(((a + a).flat)) == [2, 2, 2, 2]
def test_slice_copy(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((10, 10))
b = a[0].copy()
assert (b == zeros(10)).all()
def test_array_interface(self):
- from numpypy import array
+ from _numpypy import array
a = array([1, 2, 3])
i = a.__array_interface__
assert isinstance(i['data'][0], int)
@@ -1260,7 +1261,7 @@
def test_fromstring(self):
import sys
- from numpypy import fromstring, array, uint8, float32, int32
+ from _numpypy import fromstring, array, uint8, float32, int32
a = fromstring(self.data)
for i in range(4):
@@ -1324,7 +1325,7 @@
assert (u == [1, 0]).all()
def test_fromstring_types(self):
- from numpypy import (fromstring, int8, int16, int32, int64, uint8,
+ from _numpypy import (fromstring, int8, int16, int32, int64, uint8,
uint16, uint32, float32, float64)
a = fromstring('\xFF', dtype=int8)
@@ -1349,7 +1350,7 @@
assert j[0] == 12
def test_fromstring_invalid(self):
- from numpypy import fromstring, uint16, uint8, int32
+ from _numpypy import fromstring, uint16, uint8, int32
#default dtype is 64-bit float, so 3 bytes should fail
raises(ValueError, fromstring, "\x01\x02\x03")
#3 bytes is not modulo 2 bytes (int16)
@@ -1360,8 +1361,8 @@
class AppTestRepr(BaseNumpyAppTest):
def test_repr(self):
- from numpypy import array, zeros
- intSize = array(5).dtype.itemsize
+ from _numpypy import array, zeros
+ int_size = array(5).dtype.itemsize
a = array(range(5), float)
assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])"
a = array([], float)
@@ -1369,12 +1370,12 @@
a = zeros(1001)
assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])"
a = array(range(5), long)
- if a.dtype.itemsize == intSize:
+ if a.dtype.itemsize == int_size:
assert repr(a) == "array([0, 1, 2, 3, 4])"
else:
assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)"
a = array(range(5), 'int32')
- if a.dtype.itemsize == intSize:
+ if a.dtype.itemsize == int_size:
assert repr(a) == "array([0, 1, 2, 3, 4])"
else:
assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)"
@@ -1388,7 +1389,7 @@
assert repr(a) == "array(0.2)"
def test_repr_multi(self):
- from numpypy import arange, zeros
+ from _numpypy import arange, zeros
a = zeros((3, 4))
assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
@@ -1413,7 +1414,7 @@
[500, 1001]])'''
def test_repr_slice(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
b = a[1::2]
assert repr(b) == "array([1.0, 3.0])"
@@ -1428,7 +1429,7 @@
assert repr(b) == "array([], shape=(0, 5), dtype=int16)"
def test_str(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
assert str(a) == "[0.0 1.0 2.0 3.0 4.0]"
assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]"
@@ -1461,7 +1462,7 @@
assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]'
def test_str_slice(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
b = a[1::2]
assert str(b) == "[1.0 3.0]"
@@ -1477,7 +1478,7 @@
class AppTestRanges(BaseNumpyAppTest):
def test_arange(self):
- from numpypy import arange, array, dtype
+ from _numpypy import arange, array, dtype
a = arange(3)
assert (a == [0, 1, 2]).all()
assert a.dtype is dtype(int)
@@ -1499,7 +1500,7 @@
class AppTestRanges(BaseNumpyAppTest):
def test_app_reshape(self):
- from numpypy import arange, array, dtype, reshape
+ from _numpypy import arange, array, dtype, reshape
a = arange(12)
b = reshape(a, (3, 4))
assert b.shape == (3, 4)
diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py
--- a/pypy/module/micronumpy/test/test_ufuncs.py
+++ b/pypy/module/micronumpy/test/test_ufuncs.py
@@ -4,14 +4,14 @@
class AppTestUfuncs(BaseNumpyAppTest):
def test_ufunc_instance(self):
- from numpypy import add, ufunc
+ from _numpypy import add, ufunc
assert isinstance(add, ufunc)
assert repr(add) == ""
assert repr(ufunc) == ""
def test_ufunc_attrs(self):
- from numpypy import add, multiply, sin
+ from _numpypy import add, multiply, sin
assert add.identity == 0
assert multiply.identity == 1
@@ -22,7 +22,7 @@
assert sin.nin == 1
def test_wrong_arguments(self):
- from numpypy import add, sin
+ from _numpypy import add, sin
raises(ValueError, add, 1)
raises(TypeError, add, 1, 2, 3)
@@ -30,14 +30,14 @@
raises(ValueError, sin)
def test_single_item(self):
- from numpypy import negative, sign, minimum
+ from _numpypy import negative, sign, minimum
assert negative(5.0) == -5.0
assert sign(-0.0) == 0.0
assert minimum(2.0, 3.0) == 2.0
def test_sequence(self):
- from numpypy import array, ndarray, negative, minimum
+ from _numpypy import array, ndarray, negative, minimum
a = array(range(3))
b = [2.0, 1.0, 0.0]
c = 1.0
@@ -71,7 +71,7 @@
assert min_c_b[i] == min(b[i], c)
def test_negative(self):
- from numpypy import array, negative
+ from _numpypy import array, negative
a = array([-5.0, 0.0, 1.0])
b = negative(a)
@@ -86,7 +86,7 @@
assert negative(a + a)[3] == -6
def test_abs(self):
- from numpypy import array, absolute
+ from _numpypy import array, absolute
a = array([-5.0, -0.0, 1.0])
b = absolute(a)
@@ -94,7 +94,7 @@
assert b[i] == abs(a[i])
def test_add(self):
- from numpypy import array, add
+ from _numpypy import array, add
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -103,7 +103,7 @@
assert c[i] == a[i] + b[i]
def test_divide(self):
- from numpypy import array, divide
+ from _numpypy import array, divide
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -114,7 +114,7 @@
assert (divide(array([-10]), array([2])) == array([-5])).all()
def test_fabs(self):
- from numpypy import array, fabs
+ from _numpypy import array, fabs
from math import fabs as math_fabs
a = array([-5.0, -0.0, 1.0])
@@ -123,7 +123,7 @@
assert b[i] == math_fabs(a[i])
def test_minimum(self):
- from numpypy import array, minimum
+ from _numpypy import array, minimum
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -132,7 +132,7 @@
assert c[i] == min(a[i], b[i])
def test_maximum(self):
- from numpypy import array, maximum
+ from _numpypy import array, maximum
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -145,7 +145,7 @@
assert isinstance(x, (int, long))
def test_multiply(self):
- from numpypy import array, multiply
+ from _numpypy import array, multiply
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -154,7 +154,7 @@
assert c[i] == a[i] * b[i]
def test_sign(self):
- from numpypy import array, sign, dtype
+ from _numpypy import array, sign, dtype
reference = [-1.0, 0.0, 0.0, 1.0]
a = array([-5.0, -0.0, 0.0, 6.0])
@@ -173,7 +173,7 @@
assert a[1] == 0
def test_reciporocal(self):
- from numpypy import array, reciprocal
+ from _numpypy import array, reciprocal
reference = [-0.2, float("inf"), float("-inf"), 2.0]
a = array([-5.0, 0.0, -0.0, 0.5])
@@ -182,7 +182,7 @@
assert b[i] == reference[i]
def test_subtract(self):
- from numpypy import array, subtract
+ from _numpypy import array, subtract
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -191,7 +191,7 @@
assert c[i] == a[i] - b[i]
def test_floor(self):
- from numpypy import array, floor
+ from _numpypy import array, floor
reference = [-2.0, -1.0, 0.0, 1.0, 1.0]
a = array([-1.4, -1.0, 0.0, 1.0, 1.4])
@@ -200,7 +200,7 @@
assert b[i] == reference[i]
def test_copysign(self):
- from numpypy import array, copysign
+ from _numpypy import array, copysign
reference = [5.0, -0.0, 0.0, -6.0]
a = array([-5.0, 0.0, 0.0, 6.0])
@@ -216,7 +216,7 @@
def test_exp(self):
import math
- from numpypy import array, exp
+ from _numpypy import array, exp
a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"),
-float('inf'), -12343424.0])
@@ -230,7 +230,7 @@
def test_sin(self):
import math
- from numpypy import array, sin
+ from _numpypy import array, sin
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = sin(a)
@@ -243,7 +243,7 @@
def test_cos(self):
import math
- from numpypy import array, cos
+ from _numpypy import array, cos
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = cos(a)
@@ -252,7 +252,7 @@
def test_tan(self):
import math
- from numpypy import array, tan
+ from _numpypy import array, tan
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = tan(a)
@@ -262,7 +262,7 @@
def test_arcsin(self):
import math
- from numpypy import array, arcsin
+ from _numpypy import array, arcsin
a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1])
b = arcsin(a)
@@ -276,7 +276,7 @@
def test_arccos(self):
import math
- from numpypy import array, arccos
+ from _numpypy import array, arccos
a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1])
b = arccos(a)
@@ -291,7 +291,7 @@
def test_arctan(self):
import math
- from numpypy import array, arctan
+ from _numpypy import array, arctan
a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')])
b = arctan(a)
@@ -304,7 +304,7 @@
def test_arcsinh(self):
import math
- from numpypy import arcsinh, inf
+ from _numpypy import arcsinh, inf
for v in [inf, -inf, 1.0, math.e]:
assert math.asinh(v) == arcsinh(v)
@@ -312,7 +312,7 @@
def test_arctanh(self):
import math
- from numpypy import arctanh
+ from _numpypy import arctanh
for v in [.99, .5, 0, -.5, -.99]:
assert math.atanh(v) == arctanh(v)
@@ -323,7 +323,7 @@
def test_sqrt(self):
import math
- from numpypy import sqrt
+ from _numpypy import sqrt
nan, inf = float("nan"), float("inf")
data = [1, 2, 3, inf]
@@ -333,13 +333,13 @@
assert math.isnan(sqrt(nan))
def test_reduce_errors(self):
- from numpypy import sin, add
+ from _numpypy import sin, add
raises(ValueError, sin.reduce, [1, 2, 3])
raises(TypeError, add.reduce, 1)
def test_reduce(self):
- from numpypy import add, maximum
+ from _numpypy import add, maximum
assert add.reduce([1, 2, 3]) == 6
assert maximum.reduce([1]) == 1
@@ -348,7 +348,7 @@
def test_comparisons(self):
import operator
- from numpypy import equal, not_equal, less, less_equal, greater, greater_equal
+ from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal
for ufunc, func in [
(equal, operator.eq),
diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py
--- a/pypy/module/sys/__init__.py
+++ b/pypy/module/sys/__init__.py
@@ -42,7 +42,7 @@
'argv' : 'state.get(space).w_argv',
'py3kwarning' : 'space.w_False',
'warnoptions' : 'state.get(space).w_warnoptions',
- 'builtin_module_names' : 'state.w_None',
+ 'builtin_module_names' : 'space.w_None',
'pypy_getudir' : 'state.pypy_getudir', # not translated
'pypy_initial_path' : 'state.pypy_initial_path',
diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py
--- a/pypy/objspace/fake/checkmodule.py
+++ b/pypy/objspace/fake/checkmodule.py
@@ -1,8 +1,10 @@
from pypy.objspace.fake.objspace import FakeObjSpace, W_Root
+from pypy.config.pypyoption import get_pypy_config
def checkmodule(modname):
- space = FakeObjSpace()
+ config = get_pypy_config(translating=True)
+ space = FakeObjSpace(config)
mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__'])
# force computation and record what we wrap
module = mod.Module(space, W_Root())
diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py
--- a/pypy/objspace/fake/objspace.py
+++ b/pypy/objspace/fake/objspace.py
@@ -93,9 +93,9 @@
class FakeObjSpace(ObjSpace):
- def __init__(self):
+ def __init__(self, config=None):
self._seen_extras = []
- ObjSpace.__init__(self)
+ ObjSpace.__init__(self, config=config)
def float_w(self, w_obj):
is_root(w_obj)
@@ -135,6 +135,9 @@
def newfloat(self, x):
return w_some_obj()
+ def newcomplex(self, x, y):
+ return w_some_obj()
+
def marshal_w(self, w_obj):
"NOT_RPYTHON"
raise NotImplementedError
@@ -215,6 +218,10 @@
expected_length = 3
return [w_some_obj()] * expected_length
+ def unpackcomplex(self, w_complex):
+ is_root(w_complex)
+ return 1.1, 2.2
+
def allocate_instance(self, cls, w_subtype):
is_root(w_subtype)
return instantiate(cls)
@@ -232,6 +239,11 @@
def exec_(self, *args, **kwds):
pass
+ def createexecutioncontext(self):
+ ec = ObjSpace.createexecutioncontext(self)
+ ec._py_repr = None
+ return ec
+
# ----------
def translates(self, func=None, argtypes=None, **kwds):
@@ -267,18 +279,21 @@
ObjSpace.ExceptionTable +
['int', 'str', 'float', 'long', 'tuple', 'list',
'dict', 'unicode', 'complex', 'slice', 'bool',
- 'type', 'basestring']):
+ 'type', 'basestring', 'object']):
setattr(FakeObjSpace, 'w_' + name, w_some_obj())
#
for (name, _, arity, _) in ObjSpace.MethodTable:
args = ['w_%d' % i for i in range(arity)]
+ params = args[:]
d = {'is_root': is_root,
'w_some_obj': w_some_obj}
+ if name in ('get',):
+ params[-1] += '=None'
exec compile2("""\
def meth(self, %s):
%s
return w_some_obj()
- """ % (', '.join(args),
+ """ % (', '.join(params),
'; '.join(['is_root(%s)' % arg for arg in args]))) in d
meth = func_with_new_name(d['meth'], name)
setattr(FakeObjSpace, name, meth)
@@ -301,9 +316,12 @@
pass
FakeObjSpace.default_compiler = FakeCompiler()
-class FakeModule(object):
+class FakeModule(Wrappable):
+ def __init__(self):
+ self.w_dict = w_some_obj()
def get(self, name):
name + "xx" # check that it's a string
return w_some_obj()
FakeObjSpace.sys = FakeModule()
FakeObjSpace.sys.filesystemencoding = 'foobar'
+FakeObjSpace.builtin = FakeModule()
diff --git a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py
--- a/pypy/objspace/fake/test/test_objspace.py
+++ b/pypy/objspace/fake/test/test_objspace.py
@@ -40,7 +40,7 @@
def test_constants(self):
space = self.space
space.translates(lambda: (space.w_None, space.w_True, space.w_False,
- space.w_int, space.w_str,
+ space.w_int, space.w_str, space.w_object,
space.w_TypeError))
def test_wrap(self):
@@ -72,3 +72,9 @@
def test_newlist(self):
self.space.newlist([W_Root(), W_Root()])
+
+ def test_default_values(self):
+ # the __get__ method takes either 2 or 3 arguments
+ space = self.space
+ space.translates(lambda: (space.get(W_Root(), W_Root()),
+ space.get(W_Root(), W_Root(), W_Root())))
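The exec-based method generation in objspace.py above — building stubs from ObjSpace.MethodTable and appending `=None` to the last parameter of `get` — can be sketched in isolation like this (`make_stub` is a hypothetical helper, not PyPy code):

```python
def make_stub(name, arity, default_last=False):
    # Build a stub method taking `arity` wrapped args; when default_last
    # is set, the last parameter gets a None default, mirroring how the
    # FakeObjSpace loop special-cases 'get'.
    args = ['w_%d' % i for i in range(arity)]
    params = list(args)
    if default_last and params:
        params[-1] += '=None'          # e.g. def meth(self, w_0, w_1, w_2=None)
    src = "def meth(self, %s):\n    return (%s,)\n" % (
        ', '.join(params), ', '.join(args))
    ns = {}
    exec(src, ns)                      # compile the stub into a fresh namespace
    meth = ns['meth']
    meth.__name__ = name
    return meth
```

A class would then receive the stub with `setattr`, just as the loop does with `setattr(FakeObjSpace, name, meth)`.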
From noreply at buildbot.pypy.org Sat Jan 7 21:21:32 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Sat, 7 Jan 2012 21:21:32 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: review notes
Message-ID: <20120107202132.C14A182BFF@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51125:1c83e7759323
Date: 2012-01-07 14:21 -0600
http://bitbucket.org/pypy/pypy/changeset/1c83e7759323/
Log: review notes
diff --git a/REVIEW.rst b/REVIEW.rst
new file mode 100644
--- /dev/null
+++ b/REVIEW.rst
@@ -0,0 +1,12 @@
+REVIEW NOTES
+============
+
+* ``namespace=locals()``, can we please not use ``locals()``, even in tests? I find it super hard to read, and it's bad for the JIT.
+* Don't we already have a thing named portal (portal call, maybe)? Is the name confusing?
+* ``interp_resop.py:wrap_greenkey()`` should do something useful on non-pypyjit jds.
+* The ``WrappedOp`` constructor doesn't make much sense, it can only create an op with integer args?
+* Let's at least expose ``name`` on ``WrappedOp``.
+* DebugMergePoints don't appear to get their metadata.
+* Someone else should review the annotator magic.
+* Are entry_bridges compiled separately anymore? (``set_compile_hook`` docstring)
+
From noreply at buildbot.pypy.org Sat Jan 7 22:04:40 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Sat, 7 Jan 2012 22:04:40 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: add jit_merge_point
Message-ID: <20120107210440.3B49682BFF@wyvern.cs.uni-duesseldorf.de>
Author: mattip
Branch: numpypy-axisops
Changeset: r51126:f3a9a6a5871d
Date: 2012-01-06 16:32 +0200
http://bitbucket.org/pypy/pypy/changeset/f3a9a6a5871d/
Log: add jit_merge_point
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -297,7 +297,7 @@
descr_min = _reduce_ufunc_impl("minimum")
def _reduce_argmax_argmin_impl(op_name):
- reduce_driver = jit.JitDriver(
+ axisreduce_driver = jit.JitDriver(
greens=['shapelen', 'sig'],
reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'],
get_printable_location=signature.new_printable_location(op_name),
@@ -312,7 +312,7 @@
result = 0
idx = 1
while not frame.done():
- reduce_driver.jit_merge_point(sig=sig,
+ axisreduce_driver.jit_merge_point(sig=sig,
shapelen=shapelen,
self=self, dtype=dtype,
frame=frame, result=result,
@@ -783,18 +783,28 @@
return value
def compute(self):
+ reduce_driver = jit.JitDriver(
+ greens=['shapelen', 'sig', 'self'],
+ reds=['result', 'ri', 'frame', 'nextval', 'dtype', 'value'],
+ get_printable_location=\
+ signature.new_printable_location(self.binfunc),
+ )
self.computing = True
dtype = self.dtype
result = W_NDimArray(self.size, self.shape, dtype)
self.values = self.values.get_concrete()
shapelen = len(result.shape)
- objlen = len(self.values.shape)
sig = self.find_sig(res_shape=result.shape, arr=self.values)
ri = ArrayIterator(result.size)
frame = sig.create_frame(self.values, dim=self.dim)
value = self.get_identity(sig, frame, shapelen)
+ nextval = 0.
while not frame.done():
- #XXX add jit_merge_point
+ reduce_driver.jit_merge_point(frame=frame, self=self,
+ value=value, sig=sig,
+ shapelen=shapelen, ri=ri,
+ nextval=nextval, dtype=dtype,
+ result=result)
if frame.iterators[0].axis_done:
value = self.get_identity(sig, frame, shapelen)
ri = ri.next(shapelen)
diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py
--- a/pypy/module/micronumpy/test/test_zjit.py
+++ b/pypy/module/micronumpy/test/test_zjit.py
@@ -115,6 +115,21 @@
"int_add": 1, "int_ge": 1, "guard_false": 1,
"jump": 1, 'arraylen_gc': 1})
+ def define_sum2d():
+ return """
+ a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
+ b = sum(a,0)
+ b -> 1
+ """
+
+ def test_axissum(self):
+ py.test.skip("2dsum")
+ result = self.run("sum2d")
+ assert result == 30
+ self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 2,
+ "int_add": 1, "int_ge": 1, "guard_false": 1,
+ "jump": 1, 'arraylen_gc': 1})
+
def define_prod():
return """
a = |30|
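Outside the JIT, the reduction that axisreduce_driver's merge point sits inside computes a per-column (axis 0) or per-row (axis 1) sum. A plain-Python sketch (hypothetical `axis_sum` helper, using the same data as define_axissum above):

```python
def axis_sum(rows, axis=0):
    # rows: a rectangular list of lists, standing in for a 2-d array.
    if axis == 0:
        # accumulate down each column
        ncols = len(rows[0])
        out = [0] * ncols
        for row in rows:
            for j in range(ncols):
                out[j] += row[j]
        return out
    # axis == 1: sum within each row
    return [sum(row) for row in rows]
```

With `a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]`, `axis_sum(a, 0)[1]` is 30, which is what the `b -> 1` check in the zjit test expects.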
From noreply at buildbot.pypy.org Sat Jan 7 22:04:41 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Sat, 7 Jan 2012 22:04:41 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: jit_merge_point translates,
zjit test for sum() of 2d array fails
Message-ID: <20120107210441.637C182BFF@wyvern.cs.uni-duesseldorf.de>
Author: mattip
Branch: numpypy-axisops
Changeset: r51127:579c843af22b
Date: 2012-01-07 22:57 +0200
http://bitbucket.org/pypy/pypy/changeset/579c843af22b/
Log: jit_merge_point translates, zjit test for sum() of 2d array fails
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -36,6 +36,14 @@
get_printable_location=signature.new_printable_location('slice'),
)
+axisreduce_driver = jit.JitDriver(
+ greens=['shapelen', 'sig'],
+ virtualizables=['frame'],
+ reds=['self','result', 'ri', 'frame', 'nextval', 'dtype', 'value'],
+ get_printable_location=signature.new_printable_location('reduce'),
+)
+
+
def _find_shape_and_elems(space, w_iterable):
shape = [space.len_w(w_iterable)]
batch = space.listview(w_iterable)
@@ -297,7 +305,7 @@
descr_min = _reduce_ufunc_impl("minimum")
def _reduce_argmax_argmin_impl(op_name):
- axisreduce_driver = jit.JitDriver(
+ reduce_driver = jit.JitDriver(
greens=['shapelen', 'sig'],
reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'],
get_printable_location=signature.new_printable_location(op_name),
@@ -312,7 +320,7 @@
result = 0
idx = 1
while not frame.done():
- axisreduce_driver.jit_merge_point(sig=sig,
+ reduce_driver.jit_merge_point(sig=sig,
shapelen=shapelen,
self=self, dtype=dtype,
frame=frame, result=result,
@@ -760,7 +768,6 @@
self.dtype = res_dtype
self.dim = dim
self.identity = identity
- self.computing = False
def _del_sources(self):
self.values = None
@@ -783,13 +790,6 @@
return value
def compute(self):
- reduce_driver = jit.JitDriver(
- greens=['shapelen', 'sig', 'self'],
- reds=['result', 'ri', 'frame', 'nextval', 'dtype', 'value'],
- get_printable_location=\
- signature.new_printable_location(self.binfunc),
- )
- self.computing = True
dtype = self.dtype
result = W_NDimArray(self.size, self.shape, dtype)
self.values = self.values.get_concrete()
@@ -798,9 +798,9 @@
ri = ArrayIterator(result.size)
frame = sig.create_frame(self.values, dim=self.dim)
value = self.get_identity(sig, frame, shapelen)
- nextval = 0.
+ nextval = sig.eval(frame, self.values).convert_to(dtype)
while not frame.done():
- reduce_driver.jit_merge_point(frame=frame, self=self,
+ axisreduce_driver.jit_merge_point(frame=frame, self=self,
value=value, sig=sig,
shapelen=shapelen, ri=ri,
nextval=nextval, dtype=dtype,
diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py
--- a/pypy/module/micronumpy/signature.py
+++ b/pypy/module/micronumpy/signature.py
@@ -344,3 +344,7 @@
def eval(self, frame, arr):
return self.right.eval(frame, arr)
+
+ def debug_repr(self):
+ return 'ReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(),
+ self.right.debug_repr())
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -743,6 +743,7 @@
def test_reduceND(self):
from numpypy import arange
a = arange(15).reshape(5, 3)
+ assert a.sum() == 105
assert (a.sum(0) == [30, 35, 40]).all()
assert (a.sum(1) == [3, 12, 21, 30, 39]).all()
assert (a.max(0) == [12, 13, 14]).all()
diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py
--- a/pypy/module/micronumpy/test/test_zjit.py
+++ b/pypy/module/micronumpy/test/test_zjit.py
@@ -115,16 +115,15 @@
"int_add": 1, "int_ge": 1, "guard_false": 1,
"jump": 1, 'arraylen_gc': 1})
- def define_sum2d():
+ def define_axissum():
return """
a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
- b = sum(a,0)
- b -> 1
+ b = sum(a) #,0)
+ #b -> 1
"""
def test_axissum(self):
- py.test.skip("2dsum")
- result = self.run("sum2d")
+ result = self.run("axissum")
assert result == 30
self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 2,
"int_add": 1, "int_ge": 1, "guard_false": 1,
From noreply at buildbot.pypy.org Sat Jan 7 22:49:11 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 22:49:11 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: improve the error message
Message-ID: <20120107214911.2B15C82BFF@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: numpypy-axisops
Changeset: r51128:a65f5ec8c18b
Date: 2012-01-07 23:48 +0200
http://bitbucket.org/pypy/pypy/changeset/a65f5ec8c18b/
Log: improve the error message
diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py
--- a/pypy/module/micronumpy/test/test_zjit.py
+++ b/pypy/module/micronumpy/test/test_zjit.py
@@ -47,6 +47,8 @@
def f(i):
interp = InterpreterState(codes[i])
interp.run(space)
+ if not len(interp.results):
+ raise Exception("need results")
w_res = interp.results[-1]
if isinstance(w_res, BaseArray):
concr = w_res.get_concrete_or_scalar()
From noreply at buildbot.pypy.org Sat Jan 7 23:26:01 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Sat, 7 Jan 2012 23:26:01 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: add optional arguments to sum
in compile, axissum test now runs in test_zjit
Message-ID: <20120107222601.A282D82BFF@wyvern.cs.uni-duesseldorf.de>
Author: mattip
Branch: numpypy-axisops
Changeset: r51129:834eda1cb2d7
Date: 2012-01-08 00:24 +0200
http://bitbucket.org/pypy/pypy/changeset/834eda1cb2d7/
Log: add optional arguments to sum in compile, axissum test now runs in
test_zjit
diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py
--- a/pypy/module/micronumpy/compile.py
+++ b/pypy/module/micronumpy/compile.py
@@ -372,13 +372,17 @@
def execute(self, interp):
if self.name in SINGLE_ARG_FUNCTIONS:
- if len(self.args) != 1:
+ if len(self.args) != 1 and self.name != 'sum':
raise ArgumentMismatch
arr = self.args[0].execute(interp)
if not isinstance(arr, BaseArray):
raise ArgumentNotAnArray
if self.name == "sum":
- w_res = arr.descr_sum(interp.space)
+ if len(self.args)>1:
+ w_res = arr.descr_sum(interp.space,
+ self.args[1].execute(interp))
+ else:
+ w_res = arr.descr_sum(interp.space)
elif self.name == "prod":
w_res = arr.descr_prod(interp.space)
elif self.name == "max":
@@ -416,7 +420,7 @@
('\]', 'array_right'),
('(->)|[\+\-\*\/]', 'operator'),
('=', 'assign'),
- (',', 'coma'),
+ (',', 'comma'),
('\|', 'pipe'),
('\(', 'paren_left'),
('\)', 'paren_right'),
@@ -504,7 +508,7 @@
return SliceConstant(start, stop, step)
- def parse_expression(self, tokens):
+ def parse_expression(self, tokens, accept_comma=False):
stack = []
while tokens.remaining():
token = tokens.pop()
@@ -524,9 +528,13 @@
stack.append(RangeConstant(tokens.pop().v))
end = tokens.pop()
assert end.name == 'pipe'
+ elif accept_comma and token.name == 'comma':
+ continue
else:
tokens.push()
break
+ if accept_comma:
+ return stack
stack.reverse()
lhs = stack.pop()
while stack:
@@ -540,7 +548,7 @@
args = []
tokens.pop() # lparen
while tokens.get(0).name != 'paren_right':
- args.append(self.parse_expression(tokens))
+ args += self.parse_expression(tokens, accept_comma=True)
return FunctionCall(name, args)
def parse_array_const(self, tokens):
@@ -556,7 +564,7 @@
token = tokens.pop()
if token.name == 'array_right':
return elems
- assert token.name == 'coma'
+ assert token.name == 'comma'
def parse_statement(self, tokens):
if (tokens.get(0).name == 'identifier' and
diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py
--- a/pypy/module/micronumpy/test/test_zjit.py
+++ b/pypy/module/micronumpy/test/test_zjit.py
@@ -120,8 +120,8 @@
def define_axissum():
return """
a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
- b = sum(a) #,0)
- #b -> 1
+ b = sum(a,0)
+ b -> 1
"""
def test_axissum(self):
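The lexer/parser changes above — renaming the `coma` token to `comma` and letting parse_expression accept commas inside call arguments — can be illustrated with a much smaller, hypothetical sketch of tokenizing `sum(a,0)` and splitting its argument list:

```python
import re

# Tiny token table, far simpler than compile.py's, but with the same
# named-token idea (including a 'comma' token).
TOKENS = [(r'[a-zA-Z_]\w*', 'identifier'), (r'\d+', 'number'),
          (r',', 'comma'), (r'\(', 'paren_left'), (r'\)', 'paren_right')]

def tokenize(text):
    out, pos = [], 0
    while pos < len(text):
        if text[pos].isspace():
            pos += 1
            continue
        for pattern, name in TOKENS:
            m = re.match(pattern, text[pos:])
            if m:
                out.append((name, m.group()))
                pos += len(m.group())
                break
        else:
            raise ValueError('bad char %r' % text[pos])
    return out

def parse_args(tokens):
    # collect the argument tokens between the parens, skipping commas,
    # analogous to parse_expression(tokens, accept_comma=True)
    assert tokens[1][0] == 'paren_left'
    return [v for (name, v) in tokens[2:-1] if name != 'comma']
```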
From noreply at buildbot.pypy.org Sun Jan 8 11:56:36 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 11:56:36 +0100 (CET)
Subject: [pypy-commit] pypy default: (mikefc) implementation of var and std
Message-ID: <20120108105636.D518F82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51130:da8d76b03c38
Date: 2012-01-08 12:56 +0200
http://bitbucket.org/pypy/pypy/changeset/da8d76b03c38/
Log: (mikefc) implementation of var and std
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -563,6 +563,18 @@
def descr_mean(self, space):
return space.div(self.descr_sum(space), space.wrap(self.size))
+ def descr_var(self, space):
+ ''' var = mean( (values - mean(values))**2 ) '''
+ w_res = self.descr_sub(space, self.descr_mean(space))
+ assert isinstance(w_res, BaseArray)
+ w_res = w_res.descr_pow(space, space.wrap(2))
+ assert isinstance(w_res, BaseArray)
+ return w_res.descr_mean(space)
+
+ def descr_std(self, space):
+ ''' std(v) = sqrt(var(v)) '''
+ return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)] )
+
def descr_nonzero(self, space):
if self.size > 1:
raise OperationError(space.w_ValueError, space.wrap(
@@ -1204,6 +1216,8 @@
all = interp2app(BaseArray.descr_all),
any = interp2app(BaseArray.descr_any),
dot = interp2app(BaseArray.descr_dot),
+ var = interp2app(BaseArray.descr_var),
+ std = interp2app(BaseArray.descr_std),
copy = interp2app(BaseArray.descr_copy),
reshape = interp2app(BaseArray.descr_reshape),
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -978,6 +978,20 @@
assert a[:, 0].tolist() == [17.1, 40.3]
assert a[0].tolist() == [17.1, 27.2]
+ def test_var(self):
+ from _numpypy import array
+ a = array(range(10))
+ assert a.var() == 8.25
+ a = array([5.0])
+ assert a.var() == 0.0
+
+ def test_std(self):
+ from _numpypy import array
+ a = array(range(10))
+ assert a.std() == 2.8722813232690143
+ a = array([5.0])
+ assert a.std() == 0.0
+
class AppTestMultiDim(BaseNumpyAppTest):
def test_init(self):
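For reference, the two identities this commit encodes in descr_var and descr_std can be checked against plain Python (a sketch with toy list-based helpers, not the interp-level methods):

```python
import math

# var(v) = mean((v - mean(v)) ** 2)   and   std(v) = sqrt(var(v))
def var(values):
    values = list(values)
    m = sum(values) / float(len(values))
    return sum((x - m) ** 2 for x in values) / float(len(values))

def std(values):
    return math.sqrt(var(values))
```

These reproduce the expected values in the tests above: `var(range(10))` is 8.25 and `std(range(10))` is about 2.8722813232690143.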
From noreply at buildbot.pypy.org Sun Jan 8 11:59:40 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 11:59:40 +0100 (CET)
Subject: [pypy-commit] pypy default: (mikefc) partially import fromnumeric
stuff
Message-ID: <20120108105940.841BA82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51131:52ed6dd082e1
Date: 2012-01-08 12:59 +0200
http://bitbucket.org/pypy/pypy/changeset/52ed6dd082e1/
Log: (mikefc) partially import fromnumeric stuff
diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py
--- a/lib_pypy/numpypy/__init__.py
+++ b/lib_pypy/numpypy/__init__.py
@@ -1,1 +1,2 @@
from _numpypy import *
+from fromnumeric import *
From noreply at buildbot.pypy.org Sun Jan 8 13:13:09 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 13:13:09 +0100 (CET)
Subject: [pypy-commit] pypy default: there are assert that say "this must be
in reg". Force it
Message-ID: <20120108121309.E274C82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51132:882458b48b05
Date: 2012-01-08 14:12 +0200
http://bitbucket.org/pypy/pypy/changeset/882458b48b05/
Log: there are assert that say "this must be in reg". Force it
diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py
--- a/pypy/jit/backend/x86/regalloc.py
+++ b/pypy/jit/backend/x86/regalloc.py
@@ -741,7 +741,7 @@
self.xrm.possibly_free_var(op.getarg(0))
def consider_cast_int_to_float(self, op):
- loc0 = self.rm.loc(op.getarg(0))
+ loc0 = self.rm.force_allocate_reg(op.getarg(0))
loc1 = self.xrm.force_allocate_reg(op.result)
self.Perform(op, [loc0], loc1)
self.rm.possibly_free_var(op.getarg(0))
From noreply at buildbot.pypy.org Sun Jan 8 13:18:38 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 13:18:38 +0100 (CET)
Subject: [pypy-commit] pypy default: missing files
Message-ID: <20120108121838.4B6BC82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51133:6bed35212c06
Date: 2012-01-08 14:18 +0200
http://bitbucket.org/pypy/pypy/changeset/6bed35212c06/
Log: missing files
diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/fromnumeric.py
@@ -0,0 +1,2400 @@
+######################################################################
+# This is a copy of numpy/core/fromnumeric.py modified for numpypy
+######################################################################
+# Each name in __all__ was a function in 'numeric' that is now
+# a method in 'numpy'.
+# When the corresponding method is added to numpypy BaseArray
+# each function should be added as a module function
+# at the applevel
+# This can be as simple as doing the following
+#
+# def func(a, ...):
+# if not hasattr(a, 'func')
+# a = numpypy.array(a)
+# return a.func(...)
+#
+######################################################################
+
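The delegation pattern the comment block above describes — coerce a plain sequence, then call the method — looks like this in miniature (`ToyArray` and `sum_` are hypothetical stand-ins, not numpypy names):

```python
class ToyArray(object):
    # Stand-in for a numpypy array exposing a method.
    def __init__(self, data):
        self.data = list(data)
    def sum(self):
        return sum(self.data)

def sum_(a):
    # Module-level function: coerce plain sequences first, then
    # delegate to the object's own method.
    if not hasattr(a, 'sum'):
        a = ToyArray(a)
    return a.sum()
```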
+import numpypy
+
+# Module containing non-deprecated functions borrowed from Numeric.
+__docformat__ = "restructuredtext en"
+
+# functions that are now methods
+__all__ = ['take', 'reshape', 'choose', 'repeat', 'put',
+ 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin',
+ 'searchsorted', 'alen',
+ 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape',
+ 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue',
+ 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim',
+ 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze',
+ 'amax', 'amin',
+ ]
+
+def take(a, indices, axis=None, out=None, mode='raise'):
+ """
+ Take elements from an array along an axis.
+
+ This function does the same thing as "fancy" indexing (indexing arrays
+ using arrays); however, it can be easier to use if you need elements
+ along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ The source array.
+ indices : array_like
+ The indices of the values to extract.
+ axis : int, optional
+ The axis over which to select values. By default, the flattened
+ input array is used.
+ out : ndarray, optional
+ If provided, the result will be placed in this array. It should
+ be of the appropriate shape and dtype.
+ mode : {'raise', 'wrap', 'clip'}, optional
+ Specifies how out-of-bounds indices will behave.
+
+ * 'raise' -- raise an error (default)
+ * 'wrap' -- wrap around
+ * 'clip' -- clip to the range
+
+ 'clip' mode means that all indices that are too large are replaced
+ by the index that addresses the last element along that axis. Note
+ that this disables indexing with negative numbers.
+
+ Returns
+ -------
+ subarray : ndarray
+ The returned array has the same type as `a`.
+
+ See Also
+ --------
+ ndarray.take : equivalent method
+
+ Examples
+ --------
+ >>> a = [4, 3, 5, 7, 6, 8]
+ >>> indices = [0, 1, 4]
+ >>> np.take(a, indices)
+ array([4, 3, 6])
+
+ In this example if `a` is an ndarray, "fancy" indexing can be used.
+
+ >>> a = np.array(a)
+ >>> a[indices]
+ array([4, 3, 6])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+# not deprecated --- copy if necessary, view otherwise
+def reshape(a, newshape, order='C'):
+ """
+ Gives a new shape to an array without changing its data.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be reshaped.
+ newshape : int or tuple of ints
+ The new shape should be compatible with the original shape. If
+ an integer, then the result will be a 1-D array of that length.
+ One shape dimension can be -1. In this case, the value is inferred
+ from the length of the array and remaining dimensions.
+ order : {'C', 'F', 'A'}, optional
+ Determines whether the array data should be viewed as in C
+ (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN
+ order should be preserved.
+
+ Returns
+ -------
+ reshaped_array : ndarray
+ This will be a new view object if possible; otherwise, it will
+ be a copy.
+
+
+ See Also
+ --------
+ ndarray.reshape : Equivalent method.
+
+ Notes
+ -----
+
+ It is not always possible to change the shape of an array without
+ copying the data. If you want an error to be raised if the data is copied,
+ you should assign the new shape to the shape attribute of the array::
+
+ >>> a = np.zeros((10, 2))
+ # A transpose make the array non-contiguous
+ >>> b = a.T
+ # Taking a view makes it possible to modify the shape without modifying the
+ # initial object.
+ >>> c = b.view()
+ >>> c.shape = (20)
+ AttributeError: incompatible shape for a non-contiguous array
+
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> np.reshape(a, 6)
+ array([1, 2, 3, 4, 5, 6])
+ >>> np.reshape(a, 6, order='F')
+ array([1, 4, 2, 5, 3, 6])
+
+ >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2
+ array([[1, 2],
+ [3, 4],
+ [5, 6]])
+
+ """
+ if not hasattr(a, 'reshape'):
+ a = numpypy.array(a)
+ return a.reshape(newshape)
+
+
+def choose(a, choices, out=None, mode='raise'):
+ """
+ Construct an array from an index array and a set of arrays to choose from.
+
+ First of all, if confused or uncertain, definitely look at the Examples -
+ in its full generality, this function is less simple than it might
+ seem from the following code description (below ndi =
+ `numpy.lib.index_tricks`):
+
+ ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``.
+
+ But this omits some subtleties. Here is a fully general summary:
+
+ Given an "index" array (`a`) of integers and a sequence of `n` arrays
+ (`choices`), `a` and each choice array are first broadcast, as necessary,
+ to arrays of a common shape; calling these *Ba* and *Bchoices[i], i =
+ 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape``
+ for each `i`. Then, a new array with shape ``Ba.shape`` is created as
+ follows:
+
+ * if ``mode=raise`` (the default), then, first of all, each element of
+ `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that
+ `i` (in that range) is the value at the `(j0, j1, ..., jm)` position
+ in `Ba` - then the value at the same position in the new array is the
+ value in `Bchoices[i]` at that same position;
+
+ * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed)
+ integer; modular arithmetic is used to map integers outside the range
+ `[0, n-1]` back into that range; and then the new array is constructed
+ as above;
+
+ * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed)
+ integer; negative integers are mapped to 0; values greater than `n-1`
+ are mapped to `n-1`; and then the new array is constructed as above.
+
+ Parameters
+ ----------
+ a : int array
+ This array must contain integers in `[0, n-1]`, where `n` is the number
+ of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any
+ integers are permissible.
+ choices : sequence of arrays
+ Choice arrays. `a` and all of the choices must be broadcastable to the
+ same shape. If `choices` is itself an array (not recommended), then
+ its outermost dimension (i.e., the one corresponding to
+ ``choices.shape[0]``) is taken as defining the "sequence".
+ out : array, optional
+ If provided, the result will be inserted into this array. It should
+ be of the appropriate shape and dtype.
+ mode : {'raise' (default), 'wrap', 'clip'}, optional
+ Specifies how indices outside `[0, n-1]` will be treated:
+
+ * 'raise' : an exception is raised
+ * 'wrap' : value becomes value mod `n`
+ * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1
+
+ Returns
+ -------
+ merged_array : array
+ The merged result.
+
+ Raises
+ ------
+ ValueError: shape mismatch
+ If `a` and each choice array are not all broadcastable to the same
+ shape.
+
+ See Also
+ --------
+ ndarray.choose : equivalent method
+
+ Notes
+ -----
+ To reduce the chance of misinterpretation, even though the following
+ "abuse" is nominally supported, `choices` should neither be, nor be
+ thought of as, a single array, i.e., the outermost sequence-like container
+ should be either a list or a tuple.
+
+ Examples
+ --------
+
+ >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13],
+ ... [20, 21, 22, 23], [30, 31, 32, 33]]
+ >>> np.choose([2, 3, 1, 0], choices
+ ... # the first element of the result will be the first element of the
+ ... # third (2+1) "array" in choices, namely, 20; the second element
+ ... # will be the second element of the fourth (3+1) choice array, i.e.,
+ ... # 31, etc.
+ ... )
+ array([20, 31, 12, 3])
+ >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1)
+ array([20, 31, 12, 3])
+ >>> # because there are 4 choice arrays
+ >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4)
+ array([20, 1, 12, 3])
+ >>> # i.e., 0
+
+ A couple examples illustrating how choose broadcasts:
+
+ >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
+ >>> choices = [-10, 10]
+ >>> np.choose(a, choices)
+ array([[ 10, -10, 10],
+ [-10, 10, -10],
+ [ 10, -10, 10]])
+
+ >>> # With thanks to Anne Archibald
+ >>> a = np.array([0, 1]).reshape((2,1,1))
+ >>> c1 = np.array([1, 2, 3]).reshape((1,3,1))
+ >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5))
+ >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2
+ array([[[ 1, 1, 1, 1, 1],
+ [ 2, 2, 2, 2, 2],
+ [ 3, 3, 3, 3, 3]],
+ [[-1, -2, -3, -4, -5],
+ [-1, -2, -3, -4, -5],
+ [-1, -2, -3, -4, -5]]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
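As a hedged illustration of the `mode` rules described above, here is a minimal pure-Python sketch for a 1-D index array. The name `choose_1d` and the list-based types are hypothetical, not part of numpypy or the pending interp-level implementation:

```python
def choose_1d(index, choices, mode='raise'):
    """Pick result[pos] = choices[index[pos]][pos], per the docstring above."""
    n = len(choices)
    out = []
    for pos, i in enumerate(index):
        if mode == 'wrap':
            i = i % n                    # map any integer into [0, n-1]
        elif mode == 'clip':
            i = min(max(i, 0), n - 1)    # clamp to the valid range
        elif not 0 <= i < n:             # mode == 'raise' (default)
            raise ValueError('invalid entry in choice array')
        out.append(choices[i][pos])
    return out
```

Running it on the docstring's own example data reproduces the documented results for all three modes.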
+
+
+def repeat(a, repeats, axis=None):
+ """
+ Repeat elements of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ repeats : {int, array of ints}
+ The number of repetitions for each element. `repeats` is broadcasted
+ to fit the shape of the given axis.
+ axis : int, optional
+ The axis along which to repeat values. By default, use the
+ flattened input array, and return a flat output array.
+
+ Returns
+ -------
+ repeated_array : ndarray
+ Output array which has the same shape as `a`, except along
+ the given axis.
+
+ See Also
+ --------
+ tile : Tile an array.
+
+ Examples
+ --------
+ >>> x = np.array([[1,2],[3,4]])
+ >>> np.repeat(x, 2)
+ array([1, 1, 2, 2, 3, 3, 4, 4])
+ >>> np.repeat(x, 3, axis=1)
+ array([[1, 1, 1, 2, 2, 2],
+ [3, 3, 3, 4, 4, 4]])
+ >>> np.repeat(x, [1, 2], axis=0)
+ array([[1, 2],
+ [3, 4],
+ [3, 4]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
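The flattened (`axis=None`) case described above can be sketched in pure Python; `repeat_flat` is a hypothetical helper, not the real implementation:

```python
def repeat_flat(seq, repeats):
    """Repeat each element of a flat sequence, as np.repeat does for axis=None."""
    if isinstance(repeats, int):
        repeats = [repeats] * len(seq)   # a scalar count is broadcast to every element
    out = []
    for value, count in zip(seq, repeats):
        out.extend([value] * count)
    return out
```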
+
+
+def put(a, ind, v, mode='raise'):
+ """
+ Replaces specified elements of an array with given values.
+
+ The indexing works on the flattened target array. `put` is roughly
+ equivalent to:
+
+ ::
+
+ a.flat[ind] = v
+
+ Parameters
+ ----------
+ a : ndarray
+ Target array.
+ ind : array_like
+ Target indices, interpreted as integers.
+ v : array_like
+ Values to place in `a` at target indices. If `v` is shorter than
+ `ind` it will be repeated as necessary.
+ mode : {'raise', 'wrap', 'clip'}, optional
+ Specifies how out-of-bounds indices will behave.
+
+ * 'raise' -- raise an error (default)
+ * 'wrap' -- wrap around
+ * 'clip' -- clip to the range
+
+ 'clip' mode means that all indices that are too large are replaced
+ by the index that addresses the last element along that axis. Note
+ that this disables indexing with negative numbers.
+
+ See Also
+ --------
+ putmask, place
+
+ Examples
+ --------
+ >>> a = np.arange(5)
+ >>> np.put(a, [0, 2], [-44, -55])
+ >>> a
+ array([-44, 1, -55, 3, 4])
+
+ >>> a = np.arange(5)
+ >>> np.put(a, 22, -5, mode='clip')
+ >>> a
+ array([ 0, 1, 2, 3, -5])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def swapaxes(a, axis1, axis2):
+ """
+ Interchange two axes of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis1 : int
+ First axis.
+ axis2 : int
+ Second axis.
+
+ Returns
+ -------
+ a_swapped : ndarray
+ If `a` is an ndarray, then a view of `a` is returned; otherwise
+ a new array is created.
+
+ Examples
+ --------
+ >>> x = np.array([[1,2,3]])
+ >>> np.swapaxes(x,0,1)
+ array([[1],
+ [2],
+ [3]])
+
+ >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]])
+ >>> x
+ array([[[0, 1],
+ [2, 3]],
+ [[4, 5],
+ [6, 7]]])
+
+ >>> np.swapaxes(x,0,2)
+ array([[[0, 4],
+ [2, 6]],
+ [[1, 5],
+ [3, 7]]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def transpose(a, axes=None):
+ """
+ Permute the dimensions of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axes : list of ints, optional
+ By default, reverse the dimensions, otherwise permute the axes
+ according to the values given.
+
+ Returns
+ -------
+ p : ndarray
+ `a` with its axes permuted. A view is returned whenever
+ possible.
+
+ See Also
+ --------
+ rollaxis
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.transpose(x)
+ array([[0, 2],
+ [1, 3]])
+
+ >>> x = np.ones((1, 2, 3))
+ >>> np.transpose(x, (1, 0, 2)).shape
+ (2, 1, 3)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
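For the default 2-D case (axes reversed), the permutation above amounts to the familiar rows-to-columns swap, which can be sketched with plain nested lists; `transpose_2d` is a hypothetical helper:

```python
def transpose_2d(rows):
    """Swap the two axes of a nested-list 'matrix' (np.transpose, 2-D default)."""
    return [list(col) for col in zip(*rows)]
```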
+
+
+def sort(a, axis=-1, kind='quicksort', order=None):
+ """
+ Return a sorted copy of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be sorted.
+ axis : int or None, optional
+ Axis along which to sort. If None, the array is flattened before
+ sorting. The default is -1, which sorts along the last axis.
+ kind : {'quicksort', 'mergesort', 'heapsort'}, optional
+ Sorting algorithm. Default is 'quicksort'.
+ order : list, optional
+ When `a` is a structured array, this argument specifies which fields
+ to compare first, second, and so on. This list does not need to
+ include all of the fields.
+
+ Returns
+ -------
+ sorted_array : ndarray
+ Array of the same type and shape as `a`.
+
+ See Also
+ --------
+ ndarray.sort : Method to sort an array in-place.
+ argsort : Indirect sort.
+ lexsort : Indirect stable sort on multiple keys.
+ searchsorted : Find elements in a sorted array.
+
+ Notes
+ -----
+ The various sorting algorithms are characterized by their average speed,
+ worst case performance, work space size, and whether they are stable. A
+ stable sort keeps items with the same key in the same relative
+ order. The three available algorithms have the following
+ properties:
+
+ =========== ======= ============= ============ =======
+ kind speed worst case work space stable
+ =========== ======= ============= ============ =======
+ 'quicksort' 1 O(n^2) 0 no
+ 'mergesort' 2 O(n*log(n)) ~n/2 yes
+ 'heapsort' 3 O(n*log(n)) 0 no
+ =========== ======= ============= ============ =======
+
+ All the sort algorithms make temporary copies of the data when
+ sorting along any but the last axis. Consequently, sorting along
+ the last axis is faster and uses less space than sorting along
+ any other axis.
+
+ The sort order for complex numbers is lexicographic. If both the real
+ and imaginary parts are non-nan then the order is determined by the
+ real parts except when they are equal, in which case the order is
+ determined by the imaginary parts.
+
+ Previous to numpy 1.4.0 sorting real and complex arrays containing nan
+ values led to undefined behaviour. In numpy versions >= 1.4.0 nan
+ values are sorted to the end. The extended sort order is:
+
+ * Real: [R, nan]
+ * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj]
+
+ where R is a non-nan real value. Complex values with the same nan
+ placements are sorted according to the non-nan part if it exists.
+ Non-nan values are sorted as before.
+
+ Examples
+ --------
+ >>> a = np.array([[1,4],[3,1]])
+ >>> np.sort(a) # sort along the last axis
+ array([[1, 4],
+ [1, 3]])
+ >>> np.sort(a, axis=None) # sort the flattened array
+ array([1, 1, 3, 4])
+ >>> np.sort(a, axis=0) # sort along the first axis
+ array([[1, 1],
+ [3, 4]])
+
+ Use the `order` keyword to specify a field to use when sorting a
+ structured array:
+
+ >>> dtype = [('name', 'S10'), ('height', float), ('age', int)]
+ >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38),
+ ... ('Galahad', 1.7, 38)]
+ >>> a = np.array(values, dtype=dtype) # create a structured array
+ >>> np.sort(a, order='height') # doctest: +SKIP
+ array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41),
+ ('Lancelot', 1.8999999999999999, 38)],
+ dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
+
+ Sort by age, then height if ages are equal:
+
+ >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP
+ array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38),
+ ('Arthur', 1.8, 41)],
+ dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def argsort(a, axis=-1, kind='quicksort', order=None):
+ """
+ Returns the indices that would sort an array.
+
+ Perform an indirect sort along the given axis using the algorithm
+ specified by the `kind` keyword. It returns an array of indices of
+ the same shape as `a` that index data along the given axis in sorted
+ order.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to sort.
+ axis : int or None, optional
+ Axis along which to sort. The default is -1 (the last axis). If
+ None, the flattened array is used.
+ kind : {'quicksort', 'mergesort', 'heapsort'}, optional
+ Sorting algorithm.
+ order : list, optional
+ When `a` is an array with fields defined, this argument specifies
+ which fields to compare first, second, etc. Not all fields need
+ be specified.
+
+ Returns
+ -------
+ index_array : ndarray, int
+ Array of indices that sort `a` along the specified axis.
+ In other words, ``a[index_array]`` yields a sorted `a`.
+
+ See Also
+ --------
+ sort : Describes sorting algorithms used.
+ lexsort : Indirect stable sort with multiple keys.
+ ndarray.sort : Inplace sort.
+
+ Notes
+ -----
+ See `sort` for notes on the different sorting algorithms.
+
+ Examples
+ --------
+ One dimensional array:
+
+ >>> x = np.array([3, 1, 2])
+ >>> np.argsort(x)
+ array([1, 2, 0])
+
+ Two-dimensional array:
+
+ >>> x = np.array([[0, 3], [2, 2]])
+ >>> x
+ array([[0, 3],
+ [2, 2]])
+
+ >>> np.argsort(x, axis=0)
+ array([[0, 1],
+ [1, 0]])
+
+ >>> np.argsort(x, axis=1)
+ array([[0, 1],
+ [0, 1]])
+
+ Sorting with keys:
+
+ >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
+ >>> x
+ array([(1, 0), (0, 1)],
+ dtype=[('x', '<i4'), ('y', '<i4')])
+
+ >>> np.argsort(x, order=('x','y'))
+ array([1, 0])
+
+ >>> np.argsort(x, order=('y','x'))
+ array([0, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
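For a flat sequence, the indirect sort shown in the argsort examples above reduces to sorting the index range by the values it points at; `argsort_flat` is a hypothetical one-liner, not the pending implementation:

```python
def argsort_flat(seq):
    """Return the indices that would sort a flat sequence (cf. np.argsort)."""
    return sorted(range(len(seq)), key=seq.__getitem__)
```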
+
+
+def argmax(a, axis=None):
+ """
+ Indices of the maximum values along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ By default, the index is into the flattened array, otherwise
+ along the specified axis.
+
+ Returns
+ -------
+ index_array : ndarray of ints
+ Array of indices into the array. It has the same shape as `a.shape`
+ with the dimension along `axis` removed.
+
+ See Also
+ --------
+ ndarray.argmax, argmin
+ amax : The maximum value along a given axis.
+ unravel_index : Convert a flat index into an index tuple.
+
+ Notes
+ -----
+ In case of multiple occurrences of the maximum values, the indices
+ corresponding to the first occurrence are returned.
+
+ Examples
+ --------
+ >>> a = np.arange(6).reshape(2,3)
+ >>> a
+ array([[0, 1, 2],
+ [3, 4, 5]])
+ >>> np.argmax(a)
+ 5
+ >>> np.argmax(a, axis=0)
+ array([1, 1, 1])
+ >>> np.argmax(a, axis=1)
+ array([2, 2])
+
+ >>> b = np.arange(6)
+ >>> b[1] = 5
+ >>> b
+ array([0, 5, 2, 3, 4, 5])
+ >>> np.argmax(b) # Only the first occurrence is returned.
+ 1
+
+ """
+ if not hasattr(a, 'argmax'):
+ a = numpypy.array(a)
+ return a.argmax()
+
+
+def argmin(a, axis=None):
+ """
+ Return the indices of the minimum values along an axis.
+
+ See Also
+ --------
+ argmax : Similar function. Please refer to `numpy.argmax` for detailed
+ documentation.
+
+ """
+ if not hasattr(a, 'argmin'):
+ a = numpypy.array(a)
+ return a.argmin()
+
+
+def searchsorted(a, v, side='left'):
+ """
+ Find indices where elements should be inserted to maintain order.
+
+ Find the indices into a sorted array `a` such that, if the corresponding
+ elements in `v` were inserted before the indices, the order of `a` would
+ be preserved.
+
+ Parameters
+ ----------
+ a : 1-D array_like
+ Input array, sorted in ascending order.
+ v : array_like
+ Values to insert into `a`.
+ side : {'left', 'right'}, optional
+ If 'left', the index of the first suitable location found is given. If
+ 'right', return the last such index. If there is no suitable
+ index, return either 0 or N (where N is the length of `a`).
+
+ Returns
+ -------
+ indices : array of ints
+ Array of insertion points with the same shape as `v`.
+
+ See Also
+ --------
+ sort : Return a sorted copy of an array.
+ histogram : Produce histogram from 1-D data.
+
+ Notes
+ -----
+ Binary search is used to find the required insertion points.
+
+ As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing
+ `nan` values. The enhanced sort order is documented in `sort`.
+
+ Examples
+ --------
+ >>> np.searchsorted([1,2,3,4,5], 3)
+ 2
+ >>> np.searchsorted([1,2,3,4,5], 3, side='right')
+ 3
+ >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3])
+ array([0, 5, 1, 2])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
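On a sorted Python list, the binary search described above maps directly onto the stdlib `bisect` module: 'left' is `bisect_left`, 'right' is `bisect_right`. The wrapper name `searchsorted_flat` is hypothetical:

```python
import bisect

def searchsorted_flat(a, v, side='left'):
    """Insertion points that keep sorted list `a` sorted (cf. np.searchsorted)."""
    insert = bisect.bisect_left if side == 'left' else bisect.bisect_right
    if isinstance(v, (list, tuple)):
        return [insert(a, x) for x in v]
    return insert(a, v)
```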
+
+
+def resize(a, new_shape):
+ """
+ Return a new array with the specified shape.
+
+ If the new array is larger than the original array, then the new
+ array is filled with repeated copies of `a`. Note that this behavior
+ is different from a.resize(new_shape) which fills with zeros instead
+ of repeated copies of `a`.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be resized.
+
+ new_shape : int or tuple of int
+ Shape of resized array.
+
+ Returns
+ -------
+ reshaped_array : ndarray
+ The new array is formed from the data in the old array, repeated
+ if necessary to fill out the required number of elements. The
+ data are repeated in the order that they are stored in memory.
+
+ See Also
+ --------
+ ndarray.resize : resize an array in-place.
+
+ Examples
+ --------
+ >>> a=np.array([[0,1],[2,3]])
+ >>> np.resize(a,(1,4))
+ array([[0, 1, 2, 3]])
+ >>> np.resize(a,(2,4))
+ array([[0, 1, 2, 3],
+ [0, 1, 2, 3]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
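The fill rule above (repeat the flat data in memory order until the requested size is reached) has a compact pure-Python sketch via `itertools`; `resize_flat` is a hypothetical helper covering only the flat case:

```python
from itertools import cycle, islice

def resize_flat(seq, n):
    """First n elements of seq repeated cyclically (cf. np.resize's fill rule)."""
    return list(islice(cycle(seq), n))
```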
+
+
+def squeeze(a):
+ """
+ Remove single-dimensional entries from the shape of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+
+ Returns
+ -------
+ squeezed : ndarray
+ The input array, but with all dimensions of length 1
+ removed. Whenever possible, a view on `a` is returned.
+
+ Examples
+ --------
+ >>> x = np.array([[[0], [1], [2]]])
+ >>> x.shape
+ (1, 3, 1)
+ >>> np.squeeze(x).shape
+ (3,)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def diagonal(a, offset=0, axis1=0, axis2=1):
+ """
+ Return specified diagonals.
+
+ If `a` is 2-D, returns the diagonal of `a` with the given offset,
+ i.e., the collection of elements of the form ``a[i, i+offset]``. If
+ `a` has more than two dimensions, then the axes specified by `axis1`
+ and `axis2` are used to determine the 2-D sub-array whose diagonal is
+ returned. The shape of the resulting array can be determined by
+ removing `axis1` and `axis2` and appending an index to the right equal
+ to the size of the resulting diagonals.
+
+ Parameters
+ ----------
+ a : array_like
+ Array from which the diagonals are taken.
+ offset : int, optional
+ Offset of the diagonal from the main diagonal. Can be positive or
+ negative. Defaults to main diagonal (0).
+ axis1 : int, optional
+ Axis to be used as the first axis of the 2-D sub-arrays from which
+ the diagonals should be taken. Defaults to first axis (0).
+ axis2 : int, optional
+ Axis to be used as the second axis of the 2-D sub-arrays from
+ which the diagonals should be taken. Defaults to second axis (1).
+
+ Returns
+ -------
+ array_of_diagonals : ndarray
+ If `a` is 2-D, a 1-D array containing the diagonal is returned.
+ If the dimension of `a` is larger, then an array of diagonals is
+ returned, "packed" from left-most dimension to right-most (e.g.,
+ if `a` is 3-D, then the diagonals are "packed" along rows).
+
+ Raises
+ ------
+ ValueError
+ If the dimension of `a` is less than 2.
+
+ See Also
+ --------
+ diag : MATLAB work-a-like for 1-D and 2-D arrays.
+ diagflat : Create diagonal arrays.
+ trace : Sum along diagonals.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape(2,2)
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> a.diagonal()
+ array([0, 3])
+ >>> a.diagonal(1)
+ array([1])
+
+ A 3-D example:
+
+ >>> a = np.arange(8).reshape(2,2,2); a
+ array([[[0, 1],
+ [2, 3]],
+ [[4, 5],
+ [6, 7]]])
+ >>> a.diagonal(0, # Main diagonals of two arrays created by skipping
+ ... 0, # across the outer(left)-most axis last and
+ ... 1) # the "middle" (row) axis first.
+ array([[0, 6],
+ [1, 7]])
+
+ The sub-arrays whose main diagonals we just obtained; note that each
+ corresponds to fixing the right-most (column) axis, and that the
+ diagonals are "packed" in rows.
+
+ >>> a[:,:,0] # main diagonal is [0 6]
+ array([[0, 2],
+ [4, 6]])
+ >>> a[:,:,1] # main diagonal is [1 7]
+ array([[1, 3],
+ [5, 7]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
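The 2-D case above, "the collection of elements of the form ``a[i, i+offset]``", can be sketched directly on nested lists; `diagonal_2d` is a hypothetical helper and handles only two dimensions:

```python
def diagonal_2d(rows, offset=0):
    """Collect rows[i][i + offset] for every i where the index is in range."""
    out = []
    for i, row in enumerate(rows):
        j = i + offset
        if 0 <= j < len(row):
            out.append(row[j])
    return out
```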
+
+
+def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None):
+ """
+ Return the sum along diagonals of the array.
+
+ If `a` is 2-D, the sum along its diagonal with the given offset
+ is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i.
+
+ If `a` has more than two dimensions, then the axes specified by axis1 and
+ axis2 are used to determine the 2-D sub-arrays whose traces are returned.
+ The shape of the resulting array is the same as that of `a` with `axis1`
+ and `axis2` removed.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array, from which the diagonals are taken.
+ offset : int, optional
+ Offset of the diagonal from the main diagonal. Can be both positive
+ and negative. Defaults to 0.
+ axis1, axis2 : int, optional
+ Axes to be used as the first and second axis of the 2-D sub-arrays
+ from which the diagonals should be taken. Defaults are the first two
+ axes of `a`.
+ dtype : dtype, optional
+ Determines the data-type of the returned array and of the accumulator
+ where the elements are summed. If dtype has the value None and `a` is
+ of integer type of precision less than the default integer
+ precision, then the default integer precision is used. Otherwise,
+ the precision is the same as that of `a`.
+ out : ndarray, optional
+ Array into which the output is placed. Its type is preserved and
+ it must be of the right shape to hold the output.
+
+ Returns
+ -------
+ sum_along_diagonals : ndarray
+ If `a` is 2-D, the sum along the diagonal is returned. If `a` has
+ larger dimensions, then an array of sums along diagonals is returned.
+
+ See Also
+ --------
+ diag, diagonal, diagflat
+
+ Examples
+ --------
+ >>> np.trace(np.eye(3))
+ 3.0
+ >>> a = np.arange(8).reshape((2,2,2))
+ >>> np.trace(a)
+ array([6, 8])
+
+ >>> a = np.arange(24).reshape((2,2,2,3))
+ >>> np.trace(a).shape
+ (2, 3)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+def ravel(a, order='C'):
+ """
+ Return a flattened array.
+
+ A 1-D array, containing the elements of the input, is returned. A copy is
+ made only if needed.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. The elements in ``a`` are read in the order specified by
+ `order`, and packed as a 1-D array.
+ order : {'C','F', 'A', 'K'}, optional
+ The elements of ``a`` are read in this order. 'C' means to view
+ the elements in C (row-major) order. 'F' means to view the elements
+ in Fortran (column-major) order. 'A' means to view the elements
+ in 'F' order if a is Fortran contiguous, 'C' order otherwise.
+ 'K' means to view the elements in the order they occur in memory,
+ except for reversing the data when strides are negative.
+ By default, 'C' order is used.
+
+ Returns
+ -------
+ 1d_array : ndarray
+ Output of the same dtype as `a`, and of shape ``(a.size,)``.
+
+ See Also
+ --------
+ ndarray.flat : 1-D iterator over an array.
+ ndarray.flatten : 1-D array copy of the elements of an array
+ in row-major order.
+
+ Notes
+ -----
+ In row-major order, the row index varies the slowest, and the column
+ index the quickest. This can be generalized to multiple dimensions,
+ where row-major order implies that the index along the first axis
+ varies slowest, and the index along the last quickest. The opposite holds
+ for Fortran-, or column-major, mode.
+
+ Examples
+ --------
+ It is equivalent to ``reshape(-1, order=order)``.
+
+ >>> x = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> print np.ravel(x)
+ [1 2 3 4 5 6]
+
+ >>> print x.reshape(-1)
+ [1 2 3 4 5 6]
+
+ >>> print np.ravel(x, order='F')
+ [1 4 2 5 3 6]
+
+ When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering:
+
+ >>> print np.ravel(x.T)
+ [1 4 2 5 3 6]
+ >>> print np.ravel(x.T, order='A')
+ [1 2 3 4 5 6]
+
+ When ``order`` is 'K', it will preserve orderings that are neither 'C'
+ nor 'F', but won't reverse axes:
+
+ >>> a = np.arange(3)[::-1]; a
+ array([2, 1, 0])
+ >>> a.ravel(order='C')
+ array([2, 1, 0])
+ >>> a.ravel(order='K')
+ array([2, 1, 0])
+
+ >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a
+ array([[[ 0, 2, 4],
+ [ 1, 3, 5]],
+ [[ 6, 8, 10],
+ [ 7, 9, 11]]])
+ >>> a.ravel(order='C')
+ array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11])
+ >>> a.ravel(order='K')
+ array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
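For a 2-D nested list, the C-order reading described in the Notes (row index slowest, column index fastest) is a single comprehension; `ravel_c` is a hypothetical sketch of that one ordering only:

```python
def ravel_c(rows):
    """Flatten a nested list in row-major (C) order, cf. np.ravel(x)."""
    return [x for row in rows for x in row]
```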
+
+
+def nonzero(a):
+ """
+ Return the indices of the elements that are non-zero.
+
+ Returns a tuple of arrays, one for each dimension of `a`, containing
+ the indices of the non-zero elements in that dimension. The
+ corresponding non-zero values can be obtained with::
+
+ a[nonzero(a)]
+
+ To group the indices by element, rather than dimension, use::
+
+ transpose(nonzero(a))
+
+ The result of this is always a 2-D array, with a row for
+ each non-zero element.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ tuple_of_arrays : tuple
+ Indices of elements that are non-zero.
+
+ See Also
+ --------
+ flatnonzero :
+ Return indices that are non-zero in the flattened version of the input
+ array.
+ ndarray.nonzero :
+ Equivalent ndarray method.
+ count_nonzero :
+ Counts the number of non-zero elements in the input array.
+
+ Examples
+ --------
+ >>> x = np.eye(3)
+ >>> x
+ array([[ 1., 0., 0.],
+ [ 0., 1., 0.],
+ [ 0., 0., 1.]])
+ >>> np.nonzero(x)
+ (array([0, 1, 2]), array([0, 1, 2]))
+
+ >>> x[np.nonzero(x)]
+ array([ 1., 1., 1.])
+ >>> np.transpose(np.nonzero(x))
+ array([[0, 0],
+ [1, 1],
+ [2, 2]])
+
+ A common use for ``nonzero`` is to find the indices of an array, where
+ a condition is True. Given an array `a`, the condition `a` > 3 is a
+ boolean array and since False is interpreted as 0, np.nonzero(a > 3)
+ yields the indices of the `a` where the condition is true.
+
+ >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
+ >>> a > 3
+ array([[False, False, False],
+ [ True, True, True],
+ [ True, True, True]], dtype=bool)
+ >>> np.nonzero(a > 3)
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ The ``nonzero`` method of the boolean array can also be called.
+
+ >>> (a > 3).nonzero()
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
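The "one index array per dimension" layout described above can be sketched for the 2-D case with a row-major scan; `nonzero_2d` is a hypothetical helper returning plain lists instead of arrays:

```python
def nonzero_2d(rows):
    """Row and column indices of the truthy entries, scanned in row-major order."""
    rr, cc = [], []
    for i, row in enumerate(rows):
        for j, x in enumerate(row):
            if x:
                rr.append(i)
                cc.append(j)
    return rr, cc
```

Zipping the two lists together groups the indices by element, mirroring `transpose(nonzero(a))`.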
+
+
+def shape(a):
+ """
+ Return the shape of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ shape : tuple of ints
+ The elements of the shape tuple give the lengths of the
+ corresponding array dimensions.
+
+ See Also
+ --------
+ alen
+ ndarray.shape : Equivalent array method.
+
+ Examples
+ --------
+ >>> np.shape(np.eye(3))
+ (3, 3)
+ >>> np.shape([[1, 2]])
+ (1, 2)
+ >>> np.shape([0])
+ (1,)
+ >>> np.shape(0)
+ ()
+
+ >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+ >>> np.shape(a)
+ (2,)
+ >>> a.shape
+ (2,)
+
+ """
+ if not hasattr(a, 'shape'):
+ a = numpypy.array(a)
+ return a.shape
+
+
+def compress(condition, a, axis=None, out=None):
+ """
+ Return selected slices of an array along given axis.
+
+ When working along a given axis, a slice along that axis is returned in
+ `output` for each index where `condition` evaluates to True. When
+ working on a 1-D array, `compress` is equivalent to `extract`.
+
+ Parameters
+ ----------
+ condition : 1-D array of bools
+ Array that selects which entries to return. If len(condition)
+ is less than the size of `a` along the given axis, then output is
+ truncated to the length of the condition array.
+ a : array_like
+ Array from which to extract a part.
+ axis : int, optional
+ Axis along which to take slices. If None (default), work on the
+ flattened array.
+ out : ndarray, optional
+ Output array. Its type is preserved and it must be of the right
+ shape to hold the output.
+
+ Returns
+ -------
+ compressed_array : ndarray
+ A copy of `a` without the slices along axis for which `condition`
+ is false.
+
+ See Also
+ --------
+ take, choose, diag, diagonal, select
+ ndarray.compress : Equivalent method.
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4], [5, 6]])
+ >>> a
+ array([[1, 2],
+ [3, 4],
+ [5, 6]])
+ >>> np.compress([0, 1], a, axis=0)
+ array([[3, 4]])
+ >>> np.compress([False, True, True], a, axis=0)
+ array([[3, 4],
+ [5, 6]])
+ >>> np.compress([False, True], a, axis=1)
+ array([[2],
+ [4],
+ [6]])
+
+ Working on the flattened array does not return slices along an axis but
+ selects elements.
+
+ >>> np.compress([False, True], a)
+ array([2])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
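The flattened case described above (keep the elements whose condition entry is true, truncating at the length of the condition) is one `zip`; `compress_flat` is a hypothetical sketch:

```python
def compress_flat(condition, seq):
    """Elements of seq selected by condition; zip truncates at the shorter input."""
    return [x for c, x in zip(condition, seq) if c]
```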
+
+
+def clip(a, a_min, a_max, out=None):
+ """
+ Clip (limit) the values in an array.
+
+ Given an interval, values outside the interval are clipped to
+ the interval edges. For example, if an interval of ``[0, 1]``
+ is specified, values smaller than 0 become 0, and values larger
+ than 1 become 1.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing elements to clip.
+ a_min : scalar or array_like
+ Minimum value.
+ a_max : scalar or array_like
+ Maximum value. If `a_min` or `a_max` are array_like, then they will
+ be broadcasted to the shape of `a`.
+ out : ndarray, optional
+ The results will be placed in this array. It may be the input
+ array for in-place clipping. `out` must be of the right shape
+ to hold the output. Its type is preserved.
+
+ Returns
+ -------
+ clipped_array : ndarray
+ An array with the elements of `a`, but where values
+ < `a_min` are replaced with `a_min`, and those > `a_max`
+ with `a_max`.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.arange(10)
+ >>> np.clip(a, 1, 8)
+ array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, 3, 6, out=a)
+ array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
+ >>> a = np.arange(10)
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
+ array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
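A minimal pure-Python sketch of the clipping rule above, including the per-element bounds of the last example (scalar bounds are broadcast); `clip_flat` is hypothetical and only handles flat lists:

```python
def clip_flat(seq, a_min, a_max):
    """Clamp each element of seq into [a_min, a_max], element-wise bounds allowed."""
    n = len(seq)
    lo = a_min if isinstance(a_min, list) else [a_min] * n
    hi = a_max if isinstance(a_max, list) else [a_max] * n
    return [min(max(x, l), h) for x, l, h in zip(seq, lo, hi)]
```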
+
+
+def sum(a, axis=None, dtype=None, out=None):
+ """
+ Sum of array elements over a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Elements to sum.
+ axis : integer, optional
+ Axis over which the sum is taken. By default `axis` is None,
+ and all elements are summed.
+ dtype : dtype, optional
+ The type of the returned array and of the accumulator in which
+ the elements are summed. By default, the dtype of `a` is used.
+ An exception is when `a` has an integer type with less precision
+ than the default platform integer. In that case, the default
+ platform integer is used instead.
+ out : ndarray, optional
+ Array into which the output is placed. By default, a new array is
+ created. If `out` is given, it must be of the appropriate shape
+ (the shape of `a` with `axis` removed, i.e.,
+ ``numpy.delete(a.shape, axis)``). Its type is preserved. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ sum_along_axis : ndarray
+ An array with the same shape as `a`, with the specified
+ axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar
+ is returned. If an output array is specified, a reference to
+ `out` is returned.
+
+ See Also
+ --------
+ ndarray.sum : Equivalent method.
+
+ cumsum : Cumulative sum of array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ mean, average
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> np.sum([0.5, 1.5])
+ 2.0
+ >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+ 1
+ >>> np.sum([[0, 1], [0, 5]])
+ 6
+ >>> np.sum([[0, 1], [0, 5]], axis=0)
+ array([0, 6])
+ >>> np.sum([[0, 1], [0, 5]], axis=1)
+ array([1, 5])
+
+ If the accumulator is too small, overflow occurs:
+
+ >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+ -128
+
+ """
+ if not hasattr(a, "sum"):
+ a = numpypy.array(a)
+ return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ See Also
+ --------
+ prod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+ """
+ Check whether some values are true.
+
+ Refer to `any` for full documentation.
+
+ See Also
+ --------
+ any : equivalent function
+
+ """
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+ """
+ Check if all elements of input array are true.
+
+ See Also
+ --------
+ numpy.all : Equivalent function; see for details.
+
+ """
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+def any(a, axis=None, out=None):
+ """
+ Test whether any array element along a given axis evaluates to True.
+
+ Returns a single boolean unless `axis` is not ``None``.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical OR is performed. The default
+ (`axis` = `None`) is to perform a logical OR over a flattened
+ input array. `axis` may be negative, in which case it counts
+ from the last to the first axis.
+ out : ndarray, optional
+ Alternate output array in which to place the result. It must have
+ the same shape as the expected output and its type is preserved
+ (e.g., if it is of type float, then it will remain so, returning
+ 1.0 for True and 0.0 for False, regardless of the type of `a`).
+ See `doc.ufuncs` (Section "Output arguments") for details.
+
+ Returns
+ -------
+ any : bool or ndarray
+ A new boolean or `ndarray` is returned unless `out` is specified,
+ in which case a reference to `out` is returned.
+
+ See Also
+ --------
+ ndarray.any : equivalent method
+
+ all : Test whether all elements along a given axis evaluate to True.
+
+ Notes
+ -----
+ Not a Number (NaN), positive infinity and negative infinity evaluate
+ to `True` because these are not equal to zero.
+
+ Examples
+ --------
+ >>> np.any([[True, False], [True, True]])
+ True
+
+ >>> np.any([[True, False], [False, False]], axis=0)
+ array([ True, False], dtype=bool)
+
+ >>> np.any([-1, 0, 5])
+ True
+
+ >>> np.any(np.nan)
+ True
+
+ >>> o=np.array([False])
+ >>> z=np.any([-1, 4, 5], out=o)
+ >>> z, o
+ (array([ True], dtype=bool), array([ True], dtype=bool))
+ >>> # Check now that z is a reference to o
+ >>> z is o
+ True
+ >>> id(z), id(o) # identity of z and o # doctest: +SKIP
+ (191614240, 191614240)
+
+ """
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def all(a, axis=None, out=None):
+ """
+ Test whether all array elements along a given axis evaluate to True.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical AND is performed.
+ The default (`axis` = `None`) is to perform a logical AND
+ over a flattened input array. `axis` may be negative, in which
+ case it counts from the last to the first axis.
+ out : ndarray, optional
+ Alternate output array in which to place the result.
+ It must have the same shape as the expected output and its
+ type is preserved (e.g., if ``dtype(out)`` is float, the result
+ will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section
+ "Output arguments") for more details.
+
+ Returns
+ -------
+ all : ndarray, bool
+ A new boolean or array is returned unless `out` is specified,
+ in which case a reference to `out` is returned.
+
+ See Also
+ --------
+ ndarray.all : equivalent method
+
+ any : Test whether any element along a given axis evaluates to True.
+
+ Notes
+ -----
+ Not a Number (NaN), positive infinity and negative infinity
+ evaluate to `True` because these are not equal to zero.
+
+ Examples
+ --------
+ >>> np.all([[True,False],[True,True]])
+ False
+
+ >>> np.all([[True,False],[True,True]], axis=0)
+ array([ True, False], dtype=bool)
+
+ >>> np.all([-1, 4, 5])
+ True
+
+ >>> np.all([1.0, np.nan])
+ True
+
+ >>> o=np.array([False])
+ >>> z=np.all([-1, 4, 5], out=o)
+ >>> id(z), id(o), z # doctest: +SKIP
+ (28293632, 28293632, array([ True], dtype=bool))
+
+ """
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+
+def cumsum(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative sum of the elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ Axis along which the cumulative sum is computed. The default
+ (None) is to compute the cumsum over the flattened array.
+ dtype : dtype, optional
+ Type of the returned array and of the accumulator in which the
+ elements are summed. If `dtype` is not specified, it defaults
+ to the dtype of `a`, unless `a` has an integer dtype with a
+ precision less than that of the default platform integer. In
+ that case, the default platform integer is used.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type will be cast if necessary. See `doc.ufuncs`
+ (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ cumsum_along_axis : ndarray.
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to `out` is returned. The
+ result has the same size as `a`, and the same shape as `a` if
+ `axis` is not None or `a` is a 1-d array.
+
+
+ See Also
+ --------
+ sum : Sum array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> a
+ array([[1, 2, 3],
+ [4, 5, 6]])
+ >>> np.cumsum(a)
+ array([ 1, 3, 6, 10, 15, 21])
+ >>> np.cumsum(a, dtype=float) # specifies type of output value(s)
+ array([ 1., 3., 6., 10., 15., 21.])
+
+ >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns
+ array([[1, 2, 3],
+ [5, 7, 9]])
+ >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows
+ array([[ 1, 3, 6],
+ [ 4, 9, 15]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def cumproduct(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product over the given axis.
+
+
+ See Also
+ --------
+ cumprod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+ """
+ Range of values (maximum - minimum) along an axis.
+
+ The name of the function comes from the acronym for 'peak to peak'.
+
+ Parameters
+ ----------
+ a : array_like
+ Input values.
+ axis : int, optional
+ Axis along which to find the peaks. By default, flatten the
+ array.
+ out : array_like
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output,
+ but the type of the output values will be cast if necessary.
+
+ Returns
+ -------
+ ptp : ndarray
+ A new array holding the result, unless `out` was
+ specified, in which case a reference to `out` is returned.
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.ptp(x, axis=0)
+ array([2, 2])
+
+ >>> np.ptp(x, axis=1)
+ array([1, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def amax(a, axis=None, out=None):
+ """
+ Return the maximum of an array or maximum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default flattened input is used.
+ out : ndarray, optional
+ Alternate output array in which to place the result. Must be of
+ the same shape and buffer length as the expected output. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amax : ndarray or scalar
+ Maximum of `a`. If `axis` is None, the result is a scalar value.
+ If `axis` is given, the result is an array of dimension
+ ``a.ndim - 1``.
+
+ See Also
+ --------
+ nanmax : NaN values are ignored instead of being propagated.
+ fmax : same behavior as the C99 fmax function.
+ argmax : indices of the maximum values.
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is NaN, the
+ corresponding max value will be NaN as well. To ignore NaN values
+ (MATLAB behavior), please use nanmax.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape((2,2))
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> np.amax(a)
+ 3
+ >>> np.amax(a, axis=0)
+ array([2, 3])
+ >>> np.amax(a, axis=1)
+ array([1, 3])
+
+ >>> b = np.arange(5, dtype=np.float)
+ >>> b[2] = np.NaN
+ >>> np.amax(b)
+ nan
+ >>> np.nanmax(b)
+ 4.0
+
+ """
+ if not hasattr(a, "max"):
+ a = numpypy.array(a)
+ return a.max()
+
+
+def amin(a, axis=None, out=None):
+ """
+ Return the minimum of an array or minimum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default a flattened input is used.
+ out : ndarray, optional
+ Alternative output array in which to place the result. Must
+ be of the same shape and buffer length as the expected output.
+ See `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amin : ndarray
+ A new array or a scalar array with the result.
+
+ See Also
+ --------
+ nanmin: nan values are ignored instead of being propagated
+ fmin: same behavior as the C99 fmin function
+ argmin: Return the indices of the minimum values.
+
+ amax, nanmax, fmax
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is nan, the
+ corresponding min value will be nan as well. To ignore NaN values (matlab
+ behavior), please use nanmin.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape((2,2))
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> np.amin(a) # Minimum of the flattened array
+ 0
+ >>> np.amin(a, axis=0) # Minima along the first axis
+ array([0, 1])
+ >>> np.amin(a, axis=1) # Minima along the second axis
+ array([0, 2])
+
+ >>> b = np.arange(5, dtype=np.float)
+ >>> b[2] = np.NaN
+ >>> np.amin(b)
+ nan
+ >>> np.nanmin(b)
+ 0.0
+
+ """
+ # amin() is equivalent to min()
+ if not hasattr(a, 'min'):
+ a = numpypy.array(a)
+ return a.min()
+
+def alen(a):
+ """
+ Return the length of the first dimension of the input array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ l : int
+ Length of the first dimension of `a`.
+
+ See Also
+ --------
+ shape, size
+
+ Examples
+ --------
+ >>> a = np.zeros((7,4,5))
+ >>> a.shape[0]
+ 7
+ >>> np.alen(a)
+ 7
+
+ """
+ if not hasattr(a, 'shape'):
+ a = numpypy.array(a)
+ return a.shape[0]
+
+
+def prod(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis over which the product is taken. By default, the product
+ of all elements is calculated.
+ dtype : data-type, optional
+ The data-type of the returned array, as well as of the accumulator
+ in which the elements are multiplied. By default, if `a` is of
+ integer type, `dtype` is the default platform integer. (Note: if
+ the type of `a` is unsigned, then so is `dtype`.) Otherwise,
+ the dtype is the same as that of `a`.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output, but the type of the
+ output values will be cast if necessary.
+
+ Returns
+ -------
+ product_along_axis : ndarray, see `dtype` parameter above.
+ An array shaped as `a` but with the specified axis removed.
+ Returns a reference to `out` if specified.
+
+ See Also
+ --------
+ ndarray.prod : equivalent method
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow. That means that, on a 32-bit platform:
+
+ >>> x = np.array([536870910, 536870910, 536870910, 536870910])
+ >>> np.prod(x) #random
+ 16
+
+ Examples
+ --------
+ By default, calculate the product of all elements:
+
+ >>> np.prod([1.,2.])
+ 2.0
+
+ Even when the input array is two-dimensional:
+
+ >>> np.prod([[1.,2.],[3.,4.]])
+ 24.0
+
+ But we can also specify the axis over which to multiply:
+
+ >>> np.prod([[1.,2.],[3.,4.]], axis=1)
+ array([ 2., 12.])
+
+ If the type of `x` is unsigned, then the output type is
+ the unsigned platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.uint8)
+ >>> np.prod(x).dtype == np.uint
+ True
+
+ If `x` is of a signed integer type, then the output type
+ is the default platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.int8)
+ >>> np.prod(x).dtype == np.int
+ True
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def cumprod(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ Axis along which the cumulative product is computed. By default
+ the input is flattened.
+ dtype : dtype, optional
+ Type of the returned array, as well as of the accumulator in which
+ the elements are multiplied. If *dtype* is not specified, it
+ defaults to the dtype of `a`, unless `a` has an integer dtype with
+ a precision less than that of the default platform integer. In
+ that case, the default platform integer is used instead.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type of the resulting values will be cast if necessary.
+
+ Returns
+ -------
+ cumprod : ndarray
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to out is returned.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([1,2,3])
+ >>> np.cumprod(a) # intermediate results 1, 1*2
+ ... # total product 1*2*3 = 6
+ array([1, 2, 6])
+ >>> a = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> np.cumprod(a, dtype=float) # specify type of output
+ array([ 1., 2., 6., 24., 120., 720.])
+
+ The cumulative product for each column (i.e., over the rows) of `a`:
+
+ >>> np.cumprod(a, axis=0)
+ array([[ 1, 2, 3],
+ [ 4, 10, 18]])
+
+ The cumulative product for each row (i.e. over the columns) of `a`:
+
+ >>> np.cumprod(a,axis=1)
+ array([[ 1, 2, 6],
+ [ 4, 20, 120]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ndim(a):
+ """
+ Return the number of dimensions of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. If it is not already an ndarray, a conversion is
+ attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in `a`. Scalars are zero-dimensional.
+
+ See Also
+ --------
+ ndarray.ndim : equivalent method
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Examples
+ --------
+ >>> np.ndim([[1,2,3],[4,5,6]])
+ 2
+ >>> np.ndim(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.ndim(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def rank(a):
+ """
+ Return the number of dimensions of an array.
+
+ If `a` is not already an array, a conversion is attempted.
+ Scalars are zero dimensional.
+
+ Parameters
+ ----------
+ a : array_like
+ Array whose number of dimensions is desired. If `a` is not an array,
+ a conversion is attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in the array.
+
+ See Also
+ --------
+ ndim : equivalent function
+ ndarray.ndim : equivalent property
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Notes
+ -----
+ In the old Numeric package, `rank` was the term used for the number of
+ dimensions, but in Numpy `ndim` is used instead.
+
+ Examples
+ --------
+ >>> np.rank([1,2,3])
+ 1
+ >>> np.rank(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.rank(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def size(a, axis=None):
+ """
+ Return the number of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which the elements are counted. By default, give
+ the total number of elements.
+
+ Returns
+ -------
+ element_count : int
+ Number of elements along the specified axis.
+
+ See Also
+ --------
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+ ndarray.size : number of elements in array
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3],[4,5,6]])
+ >>> np.size(a)
+ 6
+ >>> np.size(a,1)
+ 3
+ >>> np.size(a,0)
+ 2
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def around(a, decimals=0, out=None):
+ """
+ Evenly round to the given number of decimals.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ decimals : int, optional
+ Number of decimal places to round to (default: 0). If
+ decimals is negative, it specifies the number of positions to
+ the left of the decimal point.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output, but the type of the output
+ values will be cast if necessary. See `doc.ufuncs` (Section
+ "Output arguments") for details.
+
+ Returns
+ -------
+ rounded_array : ndarray
+ An array of the same type as `a`, containing the rounded values.
+ Unless `out` was specified, a new array is created. A reference to
+ the result is returned.
+
+ The real and imaginary parts of complex numbers are rounded
+ separately. The result of rounding a float is a float.
+
+ See Also
+ --------
+ ndarray.round : equivalent method
+
+ ceil, fix, floor, rint, trunc
+
+
+ Notes
+ -----
+ For values exactly halfway between rounded decimal values, Numpy
+ rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0,
+ -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due
+ to the inexact representation of decimal fractions in the IEEE
+ floating point standard [1]_ and errors introduced when scaling
+ by powers of ten.
+
+ References
+ ----------
+ .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
+ .. [2] "How Futile are Mindless Assessments of
+ Roundoff in Floating-Point Computation?", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
+
+ Examples
+ --------
+ >>> np.around([0.37, 1.64])
+ array([ 0., 2.])
+ >>> np.around([0.37, 1.64], decimals=1)
+ array([ 0.4, 1.6])
+ >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value
+ array([ 0., 2., 2., 4., 4.])
+ >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned
+ array([ 1, 2, 3, 11])
+ >>> np.around([1,2,3,11], decimals=-1)
+ array([ 0, 0, 0, 10])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def round_(a, decimals=0, out=None):
+ """
+ Round an array to the given number of decimals.
+
+ Refer to `around` for full documentation.
+
+ See Also
+ --------
+ around : equivalent function
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def mean(a, axis=None, dtype=None, out=None):
+ """
+ Compute the arithmetic mean along the specified axis.
+
+ Returns the average of the array elements. The average is taken over
+ the flattened array by default, otherwise over the specified axis.
+ `float64` intermediate and return values are used for integer inputs.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing numbers whose mean is desired. If `a` is not an
+ array, a conversion is attempted.
+ axis : int, optional
+ Axis along which the means are computed. The default is to compute
+ the mean of the flattened array.
+ dtype : data-type, optional
+ Type to use in computing the mean. For integer inputs, the default
+ is `float64`; for floating point inputs, it is the same as the
+ input dtype.
+ out : ndarray, optional
+ Alternate output array in which to place the result. The default
+ is ``None``; if provided, it must have the same shape as the
+ expected output, but the type will be cast if necessary.
+ See `doc.ufuncs` for details.
+
+ Returns
+ -------
+ m : ndarray, see dtype parameter above
+ If `out=None`, returns a new array containing the mean values,
+ otherwise a reference to the output array is returned.
+
+ See Also
+ --------
+ average : Weighted average
+
+ Notes
+ -----
+ The arithmetic mean is the sum of the elements along the axis divided
+ by the number of elements.
+
+ Note that for floating-point input, the mean is computed using the
+ same precision the input has. Depending on the input data, this can
+ cause the results to be inaccurate, especially for `float32` (see
+ example below). Specifying a higher-precision accumulator using the
+ `dtype` keyword can alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4]])
+ >>> np.mean(a)
+ 2.5
+ >>> np.mean(a, axis=0)
+ array([ 2., 3.])
+ >>> np.mean(a, axis=1)
+ array([ 1.5, 3.5])
+
+ In single precision, `mean` can be inaccurate:
+
+ >>> a = np.zeros((2, 512*512), dtype=np.float32)
+ >>> a[0, :] = 1.0
+ >>> a[1, :] = 0.1
+ >>> np.mean(a)
+ 0.546875
+
+ Computing the mean in float64 is more accurate:
+
+ >>> np.mean(a, dtype=np.float64)
+ 0.55000000074505806
+
+ """
+ if not hasattr(a, "mean"):
+ a = numpypy.array(a)
+ return a.mean()
+
+
+def std(a, axis=None, dtype=None, out=None, ddof=0):
+ """
+ Compute the standard deviation along the specified axis.
+
+ Returns the standard deviation, a measure of the spread of a distribution,
+ of the array elements. The standard deviation is computed for the
+ flattened array by default, otherwise over the specified axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Calculate the standard deviation of these values.
+ axis : int, optional
+ Axis along which the standard deviation is computed. The default is
+ to compute the standard deviation of the flattened array.
+ dtype : dtype, optional
+ Type to use in computing the standard deviation. For arrays of
+ integer type the default is float64, for arrays of float types it is
+ the same as the array type.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output but the type (of the calculated
+ values) will be cast if necessary.
+ ddof : int, optional
+ Means Delta Degrees of Freedom. The divisor used in calculations
+ is ``N - ddof``, where ``N`` represents the number of elements.
+ By default `ddof` is zero.
+
+ Returns
+ -------
+ standard_deviation : ndarray, see dtype parameter above.
+ If `out` is None, return a new array containing the standard deviation,
+ otherwise return a reference to the output array.
+
+ See Also
+ --------
+ var, mean
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ The standard deviation is the square root of the average of the squared
+ deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``.
+
+ The average squared deviation is normally calculated as ``x.sum() / N``, where
+ ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof``
+ is used instead. In standard statistical practice, ``ddof=1`` provides an
+ unbiased estimator of the variance of the infinite population. ``ddof=0``
+ provides a maximum likelihood estimate of the variance for normally
+ distributed variables. The standard deviation computed in this function
+ is the square root of the estimated variance, so even with ``ddof=1``, it
+ will not be an unbiased estimate of the standard deviation per se.
+
+ Note that, for complex numbers, `std` takes the absolute
+ value before squaring, so that the result is always real and nonnegative.
+
+ For floating-point input, the *std* is computed using the same
+ precision the input has. Depending on the input data, this can cause
+ the results to be inaccurate, especially for float32 (see example below).
+ Specifying a higher-accuracy accumulator using the `dtype` keyword can
+ alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4]])
+ >>> np.std(a)
+ 1.1180339887498949
+ >>> np.std(a, axis=0)
+ array([ 1., 1.])
+ >>> np.std(a, axis=1)
+ array([ 0.5, 0.5])
+
+ In single precision, std() can be inaccurate:
+
+ >>> a = np.zeros((2,512*512), dtype=np.float32)
+ >>> a[0,:] = 1.0
+ >>> a[1,:] = 0.1
+ >>> np.std(a)
+ 0.45172946707416706
+
+ Computing the standard deviation in float64 is more accurate:
+
+ >>> np.std(a, dtype=np.float64)
+ 0.44999999925552653
+
+ """
+ if not hasattr(a, "std"):
+ a = numpypy.array(a)
+ return a.std()
+
+
+def var(a, axis=None, dtype=None, out=None, ddof=0):
+ """
+ Compute the variance along the specified axis.
+
+ Returns the variance of the array elements, a measure of the spread of a
+ distribution. The variance is computed for the flattened array by
+ default, otherwise over the specified axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing numbers whose variance is desired. If `a` is not an
+ array, a conversion is attempted.
+ axis : int, optional
+ Axis along which the variance is computed. The default is to compute
+ the variance of the flattened array.
+ dtype : data-type, optional
+ Type to use in computing the variance. For arrays of integer type
+ the default is `float32`; for arrays of float types it is the same as
+ the array type.
+ out : ndarray, optional
+ Alternate output array in which to place the result. It must have
+ the same shape as the expected output, but the type is cast if
+ necessary.
+ ddof : int, optional
+ "Delta Degrees of Freedom": the divisor used in the calculation is
+ ``N - ddof``, where ``N`` represents the number of elements. By
+ default `ddof` is zero.
+
+ Returns
+ -------
+ variance : ndarray, see dtype parameter above
+ If ``out=None``, returns a new array containing the variance;
+ otherwise, a reference to the output array is returned.
+
+ See Also
+ --------
+ std : Standard deviation
+ mean : Average
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ The variance is the average of the squared deviations from the mean,
+ i.e., ``var = mean(abs(x - x.mean())**2)``.
+
+ The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``.
+ If, however, `ddof` is specified, the divisor ``N - ddof`` is used
+ instead. In standard statistical practice, ``ddof=1`` provides an
+ unbiased estimator of the variance of a hypothetical infinite population.
+ ``ddof=0`` provides a maximum likelihood estimate of the variance for
+ normally distributed variables.
+
+ Note that for complex numbers, the absolute value is taken before
+ squaring, so that the result is always real and nonnegative.
+
+ For floating-point input, the variance is computed using the same
+ precision the input has. Depending on the input data, this can cause
+ the results to be inaccurate, especially for `float32` (see example
+ below). Specifying a higher-accuracy accumulator using the ``dtype``
+ keyword can alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2],[3,4]])
+ >>> np.var(a)
+ 1.25
+ >>> np.var(a,0)
+ array([ 1., 1.])
+ >>> np.var(a,1)
+ array([ 0.25, 0.25])
+
+ In single precision, var() can be inaccurate:
+
+ >>> a = np.zeros((2,512*512), dtype=np.float32)
+ >>> a[0,:] = 1.0
+ >>> a[1,:] = 0.1
+ >>> np.var(a)
+ 0.20405951142311096
+
+ Computing the standard deviation in float64 is more accurate:
+
+ >>> np.var(a, dtype=np.float64)
+ 0.20249999932997387
+ >>> ((1-0.55)**2 + (0.1-0.55)**2)/2
+ 0.20250000000000001
+
+ """
+ if not hasattr(a, "var"):
+ a = numpypy.array(a)
+ return a.var()
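The `std` and `var` docstrings above describe the `ddof` divisor (`N - ddof`) even though this app-level wrapper does not forward the keyword yet. The relationship can be checked directly with standard numpy (used here as a stand-in, since numpypy's methods may not accept `ddof`):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
# ddof=0 divides by N (maximum-likelihood estimate): sum of squared
# deviations is 5.0, so 5.0 / 4 == 1.25.
assert np.var(a, ddof=0) == 1.25
# ddof=1 divides by N - 1 (unbiased estimate): 5.0 / 3.
assert abs(np.var(a, ddof=1) - 5.0 / 3.0) < 1e-12
# std is the square root of the corresponding variance.
assert abs(np.std(a, ddof=1) - (5.0 / 3.0) ** 0.5) < 1e-12
```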
diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/test/test_fromnumeric.py
@@ -0,0 +1,109 @@
+
+from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest
+
+class AppTestFromNumeric(BaseNumpyAppTest):
+ def test_argmax(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, argmax
+ a = arange(6).reshape((2,3))
+ assert argmax(a) == 5
+ # assert (argmax(a, axis=0) == array([1, 1, 1])).all()
+ # assert (argmax(a, axis=1) == array([2, 2])).all()
+ b = arange(6)
+ b[1] = 5
+ assert argmax(b) == 1
+
+ def test_argmin(self):
+ # tests adapted from test_argmax
+ from numpypy import array, arange, argmin
+ a = arange(6).reshape((2,3))
+ assert argmin(a) == 0
+ # assert (argmin(a, axis=0) == array([0, 0, 0])).all()
+ # assert (argmin(a, axis=1) == array([0, 0])).all()
+ b = arange(6)
+ b[1] = 0
+ assert argmin(b) == 0
+
+ def test_shape(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, identity, shape
+ assert shape(identity(3)) == (3, 3)
+ assert shape([[1, 2]]) == (1, 2)
+ assert shape([0]) == (1,)
+ assert shape(0) == ()
+ # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+ # assert shape(a) == (2,)
+
+ def test_sum(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, sum, ones
+ assert sum([0.5, 1.5]) == 2.0
+ assert sum([[0, 1], [0, 5]]) == 6
+ # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1
+ # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all()
+ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all()
+ # If the accumulator is too small, overflow occurs:
+ # assert ones(128, dtype=int8).sum(dtype=int8) == -128
+
+ def test_amin(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, amin
+ a = arange(4).reshape((2,2))
+ assert amin(a) == 0
+ # # Minima along the first axis
+ # assert (amin(a, axis=0) == array([0, 1])).all()
+ # # Minima along the second axis
+ # assert (amin(a, axis=1) == array([0, 2])).all()
+ # # NaN behaviour
+ # b = arange(5, dtype=float)
+ # b[2] = NaN
+ # assert amin(b) == nan
+ # assert nanmin(b) == 0.0
+
+ def test_amax(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, amax
+ a = arange(4).reshape((2,2))
+ assert amax(a) == 3
+ # assert (amax(a, axis=0) == array([2, 3])).all()
+ # assert (amax(a, axis=1) == array([1, 3])).all()
+ # # NaN behaviour
+ # b = arange(5, dtype=float)
+ # b[2] = NaN
+ # assert amax(b) == nan
+ # assert nanmax(b) == 4.0
+
+ def test_alen(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, zeros, alen
+ a = zeros((7,4,5))
+ assert a.shape[0] == 7
+ assert alen(a) == 7
+
+ def test_ndim(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, ndim
+ assert ndim([[1,2,3],[4,5,6]]) == 2
+ assert ndim(array([[1,2,3],[4,5,6]])) == 2
+ assert ndim(1) == 0
+
+ def test_rank(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, rank
+ assert rank([[1,2,3],[4,5,6]]) == 2
+ assert rank(array([[1,2,3],[4,5,6]])) == 2
+ assert rank(1) == 0
+
+ def test_var(self):
+ from numpypy import array, var
+ a = array([[1,2],[3,4]])
+ assert var(a) == 1.25
+ # assert (np.var(a,0) == array([ 1., 1.])).all()
+ # assert (np.var(a,1) == array([ 0.25, 0.25])).all()
+
+ def test_std(self):
+ from numpypy import array, std
+ a = array([[1, 2], [3, 4]])
+ assert std(a) == 1.1180339887498949
+ # assert (std(a, axis=0) == array([ 1., 1.])).all()
+ # assert (std(a, axis=1) == array([ 0.5, 0.5])).all()
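Every wrapper in this commit follows the same delegation idiom: coerce the input to an array only if it lacks the target method, then call the equivalent ndarray method. A minimal sketch of that idiom with standard numpy (an assumption here; numpypy's methods did not yet accept the axis/out keywords that this version also silently drops) looks like:

```python
import numpy as np

def any_(a, axis=None, out=None):
    # Delegation idiom from fromnumeric.py: convert to an array only
    # when needed, then dispatch to the method of the same name.
    if not hasattr(a, 'any'):
        a = np.asarray(a)
    return a.any(axis=axis, out=out)

print(any_([[True, False], [False, False]]))  # True
```

Writing the functions this way keeps the app-level module thin: once the interp-level methods grow axis/out support, the wrappers need no changes beyond forwarding the keywords.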
From noreply at buildbot.pypy.org Sun Jan 8 14:33:03 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 14:33:03 +0100 (CET)
Subject: [pypy-commit] pypy default: minor tests and fixes
Message-ID: <20120108133303.5F74D82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51134:c14c5276c0e1
Date: 2012-01-08 15:31 +0200
http://bitbucket.org/pypy/pypy/changeset/c14c5276c0e1/
Log: minor tests and fixes
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -48,6 +48,7 @@
'int_': 'interp_boxes.W_LongBox',
'inexact': 'interp_boxes.W_InexactBox',
'floating': 'interp_boxes.W_FloatingBox',
+ 'float_': 'interp_boxes.W_Float64Box',
'float32': 'interp_boxes.W_Float32Box',
'float64': 'interp_boxes.W_Float64Box',
}
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py
--- a/pypy/module/micronumpy/interp_boxes.py
+++ b/pypy/module/micronumpy/interp_boxes.py
@@ -78,6 +78,7 @@
descr_sub = _binop_impl("subtract")
descr_mul = _binop_impl("multiply")
descr_div = _binop_impl("divide")
+ descr_pow = _binop_impl("power")
descr_eq = _binop_impl("equal")
descr_ne = _binop_impl("not_equal")
descr_lt = _binop_impl("less")
@@ -103,7 +104,7 @@
_attrs_ = ()
class W_IntegerBox(W_NumberBox):
- pass
+ descr__new__, get_dtype = new_dtype_getter("long")
class W_SignedIntegerBox(W_IntegerBox):
pass
@@ -170,6 +171,7 @@
__sub__ = interp2app(W_GenericBox.descr_sub),
__mul__ = interp2app(W_GenericBox.descr_mul),
__div__ = interp2app(W_GenericBox.descr_div),
+ __pow__ = interp2app(W_GenericBox.descr_pow),
__radd__ = interp2app(W_GenericBox.descr_radd),
__rsub__ = interp2app(W_GenericBox.descr_rsub),
@@ -198,6 +200,7 @@
)
W_IntegerBox.typedef = TypeDef("integer", W_NumberBox.typedef,
+ __new__ = interp2app(W_IntegerBox.descr__new__.im_func),
__module__ = "numpypy",
)
diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py
--- a/pypy/module/micronumpy/test/test_dtypes.py
+++ b/pypy/module/micronumpy/test/test_dtypes.py
@@ -166,6 +166,15 @@
# You can't subclass dtype
raises(TypeError, type, "Foo", (dtype,), {})
+ def test_new(self):
+ import _numpypy as np
+ assert np.int_(4) == 4
+ assert np.float_(3.4) == 3.4
+
+ def test_pow(self):
+ from _numpypy import int_
+ assert int_(4) ** 2 == 16
+
class AppTestTypes(BaseNumpyAppTest):
def test_abstract_types(self):
import _numpypy as numpy
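The one-line `descr_pow = _binop_impl("power")` addition works because every operator on the scalar boxes is generated from a single factory that looks the ufunc up by name. A simplified, self-contained sketch of that pattern (names like `UFUNCS` and `Box` are hypothetical stand-ins for interp_ufuncs and W_GenericBox):

```python
import operator

# Hypothetical ufunc table standing in for interp_ufuncs.get(space).
UFUNCS = {
    "add": operator.add,
    "power": operator.pow,
}

def _binop_impl(ufunc_name):
    # Factory: each dunder method just dispatches to the named ufunc,
    # so adding an operator (e.g. power) is a single line on the class.
    def impl(self, other):
        return UFUNCS[ufunc_name](self.value, other)
    return impl

class Box(object):
    def __init__(self, value):
        self.value = value
    __add__ = _binop_impl("add")
    __pow__ = _binop_impl("power")

assert Box(4) ** 2 == 16  # mirrors the new test_pow: int_(4) ** 2 == 16
```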
From noreply at buildbot.pypy.org Sun Jan 8 14:33:04 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 14:33:04 +0100 (CET)
Subject: [pypy-commit] pypy default: simplification. We're not java
Message-ID: <20120108133304.8D84782110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51135:014afe8c57ac
Date: 2012-01-08 15:32 +0200
http://bitbucket.org/pypy/pypy/changeset/014afe8c57ac/
Log: simplification. We're not java
diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py
--- a/pypy/jit/metainterp/resoperation.py
+++ b/pypy/jit/metainterp/resoperation.py
@@ -16,15 +16,13 @@
# debug
name = ""
pc = 0
+ opnum = 0
def __init__(self, result):
self.result = result
- # methods implemented by each concrete class
- # ------------------------------------------
-
def getopnum(self):
- raise NotImplementedError
+ return self.opnum
# methods implemented by the arity mixins
# ---------------------------------------
@@ -590,12 +588,9 @@
baseclass = PlainResOp
mixin = arity2mixin.get(arity, N_aryOp)
- def getopnum(self):
- return opnum
-
cls_name = '%s_OP' % name
bases = (get_base_class(mixin, baseclass),)
- dic = {'getopnum': getopnum}
+ dic = {'opnum': opnum}
return type(cls_name, bases, dic)
setup(__name__ == '__main__') # print out the table when run directly
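[Editor's note: the simplification above replaces a per-class generated `getopnum()` closure with a plain class attribute read by one shared method. A self-contained sketch of the pattern, with illustrative names:]

```python
# Before: each dynamically created class got its own getopnum() closure.
# After: the opnum is a class attribute; one base-class method reads it.
class AbstractResOp(object):
    opnum = 0  # default, overridden per generated class

    def getopnum(self):
        return self.opnum

def create_op_class(name, opnum):
    # type(name, bases, dict) builds the class; the dict supplies the
    # class attribute, replacing the old {'getopnum': closure} variant.
    return type('%s_OP' % name, (AbstractResOp,), {'opnum': opnum})

# Hypothetical opnum value for illustration only.
IntAddOp = create_op_class('INT_ADD', 7)
```

The attribute lookup is as fast as the closure call, and the generated classes carry one less function object each.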
From noreply at buildbot.pypy.org Sun Jan 8 19:03:28 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 19:03:28 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: improve the hooks to be called
before and after optimization
Message-ID: <20120108180328.06C2382110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51136:6521c5b63450
Date: 2012-01-08 20:02 +0200
http://bitbucket.org/pypy/pypy/changeset/6521c5b63450/
Log: improve the hooks to be called before and after optimization
diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py
--- a/pypy/jit/metainterp/compile.py
+++ b/pypy/jit/metainterp/compile.py
@@ -305,6 +305,13 @@
show_procedures(metainterp_sd, loop)
loop.check_consistency()
+ if metainterp_sd.warmrunnerdesc is not None:
+ portal = metainterp_sd.warmrunnerdesc.portal
+ portal.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
+ original_jitcell_token, loop.operations, type,
+ greenkey)
+ else:
+ portal = None
operations = get_deep_immutable_oplist(loop.operations)
metainterp_sd.profiler.start_backend()
debug_start("jit-backend")
@@ -316,11 +323,10 @@
finally:
debug_stop("jit-backend")
metainterp_sd.profiler.end_backend()
- if metainterp_sd.warmrunnerdesc is not None:
- portal = metainterp_sd.warmrunnerdesc.portal
- portal.on_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
- original_jitcell_token, loop.operations, type,
- greenkey, ops_offset, asmstart, asmlen)
+ if portal is not None:
+ portal.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
+ original_jitcell_token, loop.operations, type,
+ greenkey, ops_offset, asmstart, asmlen)
metainterp_sd.stats.add_new_loop(loop)
if not we_are_translated():
metainterp_sd.stats.compiled()
@@ -341,8 +347,15 @@
show_procedures(metainterp_sd)
seen = dict.fromkeys(inputargs)
TreeLoop.check_consistency_of_branch(operations, seen)
+ if metainterp_sd.warmrunnerdesc is not None:
+ portal = metainterp_sd.warmrunnerdesc.portal
+ portal.before_compile_bridge(jitdriver_sd.jitdriver,
+ metainterp_sd.logger_ops,
+ original_loop_token, operations, n)
+ else:
+ portal = None
+ operations = get_deep_immutable_oplist(operations)
metainterp_sd.profiler.start_backend()
- operations = get_deep_immutable_oplist(operations)
debug_start("jit-backend")
try:
tp = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations,
@@ -351,12 +364,12 @@
finally:
debug_stop("jit-backend")
metainterp_sd.profiler.end_backend()
- if metainterp_sd.warmrunnerdesc is not None:
- portal = metainterp_sd.warmrunnerdesc.portal
- portal.on_compile_bridge(jitdriver_sd.jitdriver,
- metainterp_sd.logger_ops,
- original_loop_token, operations, n, ops_offset,
- asmstart, asmlen)
+ if portal is not None:
+ portal.after_compile_bridge(jitdriver_sd.jitdriver,
+ metainterp_sd.logger_ops,
+ original_loop_token, operations, n,
+ ops_offset,
+ asmstart, asmlen)
if not we_are_translated():
metainterp_sd.stats.compiled()
metainterp_sd.log("compiled new bridge")
diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py
--- a/pypy/jit/metainterp/test/test_jitportal.py
+++ b/pypy/jit/metainterp/test/test_jitportal.py
@@ -41,14 +41,25 @@
assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2
def test_on_compile(self):
- called = {}
+ called = []
class MyJitPortal(JitPortal):
- def on_compile(self, jitdriver, logger, looptoken, operations,
- type, greenkey, ops_offset, asmaddr, asmlen):
+ def after_compile(self, jitdriver, logger, looptoken, operations,
+ type, greenkey, ops_offset, asmaddr, asmlen):
assert asmaddr == 0
assert asmlen == 0
- called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken
+ called.append(("compile", greenkey[1].getint(),
+ greenkey[0].getint(), type))
+
+ def before_compile(self, jitdriver, logger, looptoken, operations,
+ type, greenkey):
+ called.append(("optimize", greenkey[1].getint(),
+ greenkey[0].getint(), type))
+
+ def before_optimize(self, jitdriver, logger, looptoken, operations,
+ type, greenkey):
+ called.append(("trace", greenkey[1].getint(),
+ greenkey[0].getint(), type))
portal = MyJitPortal()
@@ -62,26 +73,35 @@
i += 1
self.meta_interp(loop, [1, 4], policy=JitPolicy(portal))
- assert sorted(called.keys()) == [(4, 1, "loop")]
+ assert called == [#("trace", 4, 1, "loop"),
+ ("optimize", 4, 1, "loop"),
+ ("compile", 4, 1, "loop")]
self.meta_interp(loop, [2, 4], policy=JitPolicy(portal))
- assert sorted(called.keys()) == [(4, 1, "loop"),
- (4, 2, "loop")]
+ assert called == [#("trace", 4, 1, "loop"),
+ ("optimize", 4, 1, "loop"),
+ ("compile", 4, 1, "loop"),
+ #("trace", 4, 2, "loop"),
+ ("optimize", 4, 2, "loop"),
+ ("compile", 4, 2, "loop")]
def test_on_compile_bridge(self):
- called = {}
+ called = []
class MyJitPortal(JitPortal):
- def on_compile(self, jitdriver, logger, looptoken, operations,
+ def after_compile(self, jitdriver, logger, looptoken, operations,
type, greenkey, ops_offset, asmaddr, asmlen):
assert asmaddr == 0
assert asmlen == 0
- called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken
+ called.append("compile")
- def on_compile_bridge(self, jitdriver, logger, orig_token,
- operations, n, ops_offset, asmstart, asmlen):
- assert 'bridge' not in called
- called['bridge'] = orig_token
+ def after_compile_bridge(self, jitdriver, logger, orig_token,
+ operations, n, ops_offset, asmstart, asmlen):
+ called.append("compile_bridge")
+ def before_compile_bridge(self, jitdriver, logger, orig_token,
+ operations, n):
+ called.append("before_compile_bridge")
+
driver = JitDriver(greens = ['n', 'm'], reds = ['i'])
def loop(n, m):
@@ -94,7 +114,7 @@
i += 1
self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal()))
- assert sorted(called.keys()) == ['bridge', (10, 1, "loop")]
+ assert called == ["compile", "before_compile_bridge", "compile_bridge"]
def test_resop_interface(self):
driver = JitDriver(greens = [], reds = ['i'])
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -731,29 +731,62 @@
like JIT loops compiled, aborts etc.
An instance of this class might be returned by the policy.get_jit_portal
method in order to function.
+
+ Each hook will accept some of the following args:
+
+
+ greenkey - a list of green boxes
+ jitdriver - an instance of jitdriver where tracing started
+ logger - an instance of jit.metainterp.logger.LogOperations
+ ops_offset
+ asmaddr - (int) raw address of assembler block
+ asmlen - assembler block length
+ type - either 'loop' or 'entry bridge'
"""
def on_abort(self, reason, jitdriver, greenkey):
""" A hook called each time a loop is aborted with jitdriver and
greenkey where it started, reason is a string why it got aborted
"""
- def on_compile(self, jitdriver, logger, looptoken, operations, type,
- greenkey, ops_offset, asmaddr, asmlen):
- """ A hook called when loop is compiled. Overwrite
- for your own jitdriver if you want to do something special, like
- call applevel code.
+ #def before_optimize(self, jitdriver, logger, looptoken, operations,
+ # type, greenkey):
+ # """ A hook called before optimizer is run, args described in class
+ # docstring. Overwrite for custom behavior
+ # """
+ # DISABLED
- jitdriver - an instance of jitdriver where tracing started
- logger - an instance of jit.metainterp.logger.LogOperations
- asmaddr - (int) raw address of assembler block
- asmlen - assembler block length
- type - either 'loop' or 'entry bridge'
+ def before_compile(self, jitdriver, logger, looptoken, operations, type,
+ greenkey):
+ """ A hook called after a loop is optimized, before compiling assembler,
+ args described in class docstring. Overwrite for custom behavior
"""
- def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations,
- fail_descr_no, ops_offset, asmaddr, asmlen):
- """ A hook called when a bridge is compiled. Overwrite
- for your own jitdriver if you want to do something special
+ def after_compile(self, jitdriver, logger, looptoken, operations, type,
+ greenkey, ops_offset, asmaddr, asmlen):
+ """ A hook called after a loop has compiled assembler,
+ args described in class docstring. Overwrite for custom behavior
+ """
+
+ #def before_optimize_bridge(self, jitdriver, logger, orig_looptoken,
+ # operations, fail_descr_no):
+ # """ A hook called before a bridge is optimized.
+ # Args described in class docstring, Overwrite for
+ # custom behavior
+ # """
+ # DISABLED
+
+ def before_compile_bridge(self, jitdriver, logger, orig_looptoken,
+ operations, fail_descr_no):
+ """ A hook called before a bridge is compiled, but after optimizations
+ are performed. Args described in class docstring. Overwrite for
+ custom behavior
+ """
+
+ def after_compile_bridge(self, jitdriver, logger, orig_looptoken,
+ operations, fail_descr_no, ops_offset, asmaddr,
+ asmlen):
+ """ A hook called after a bridge is compiled, args described in class
+ docstring. Overwrite for custom behavior
"""
def get_stats(self):
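[Editor's note: the compile.py change above splits the single `on_compile` callback into a `before_compile`/`after_compile` pair, capturing the portal once before the backend runs. A minimal, self-contained mimic of that control flow — `Portal` and `compile_loop` are stand-ins, not the PyPy classes:]

```python
# Sketch of the before/after hook split: the portal is looked up once,
# before backend compilation, and the same reference is reused afterwards.
class Portal(object):
    def __init__(self):
        self.events = []

    def before_compile(self, loop):
        self.events.append(("before", loop))

    def after_compile(self, loop):
        self.events.append(("after", loop))

def compile_loop(portal, loop):
    if portal is not None:
        portal.before_compile(loop)
    # ... backend compilation would happen here ...
    if portal is not None:
        portal.after_compile(loop)

portal = Portal()
compile_loop(portal, "loop1")
```

Passing `portal=None` skips both hooks, matching the `else: portal = None` branch in the patch.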
From noreply at buildbot.pypy.org Sun Jan 8 19:19:30 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 19:19:30 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: update and improve the hooks
Message-ID: <20120108181930.642DC82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51137:5ed435c1abb6
Date: 2012-01-08 20:18 +0200
http://bitbucket.org/pypy/pypy/changeset/5ed435c1abb6/
Log: update and improve the hooks
diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py
--- a/pypy/module/pypyjit/__init__.py
+++ b/pypy/module/pypyjit/__init__.py
@@ -8,6 +8,7 @@
'set_param': 'interp_jit.set_param',
'residual_call': 'interp_jit.residual_call',
'set_compile_hook': 'interp_resop.set_compile_hook',
+ 'set_optimize_hook': 'interp_resop.set_optimize_hook',
'set_abort_hook': 'interp_resop.set_abort_hook',
'ResOperation': 'interp_resop.WrappedOp',
}
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -61,6 +61,37 @@
cache.in_recursion = NonConstant(False)
return space.w_None
+def set_optimize_hook(space, w_hook):
+ """ set_compile_hook(hook)
+
+ Set an optimize hook that will be called each time a loop is optimized,
+ but before assembler compilation. This allows one to add additional
+ optimizations at the Python level.
+
+ The hook will be called with the following signature:
+ hook(jitdriver_name, loop_type, greenkey or guard_number, operations)
+
+ jitdriver_name is the name of this particular jitdriver, 'pypyjit' is
+ the main interpreter loop
+
+ loop_type can be either `loop`, `entry_bridge` or `bridge`.
+ In case loop_type is not `bridge`, greenkey will be a tuple of constants
+ or a string describing it.
+
+ For the main interpreter loop it'll be a tuple
+ (code, offset, is_being_profiled)
+
+ Note that the jit hook is not reentrant. This means that if the code
+ inside the jit hook is itself jitted, it will get compiled, but the
+ jit hook won't be called for it.
+
+ The return value will be the resulting list of operations, or None
+ """
+ cache = space.fromcache(Cache)
+ cache.w_optimize_hook = w_hook
+ cache.in_recursion = NonConstant(False)
+ return space.w_None
+
def set_abort_hook(space, w_hook):
""" set_abort_hook(hook)
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -1,8 +1,10 @@
from pypy.jit.codewriter.policy import JitPolicy
from pypy.rlib.jit import JitPortal
+from pypy.rlib import jit_hooks
from pypy.interpreter.error import OperationError
from pypy.jit.metainterp.jitprof import counter_names
-from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey
+from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\
+ WrappedOp
class PyPyPortal(JitPortal):
def on_abort(self, reason, jitdriver, greenkey):
@@ -21,18 +23,28 @@
e.write_unraisable(space, "jit hook ", cache.w_abort_hook)
cache.in_recursion = False
- def on_compile(self, jitdriver, logger, looptoken, operations, type,
- greenkey, ops_offset, asmstart, asmlen):
+ def after_compile(self, jitdriver, logger, looptoken, operations, type,
+ greenkey, ops_offset, asmstart, asmlen):
self._compile_hook(jitdriver, logger, operations, type,
ops_offset, asmstart, asmlen,
wrap_greenkey(self.space, jitdriver, greenkey))
- def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations,
- n, ops_offset, asmstart, asmlen):
+ def after_compile_bridge(self, jitdriver, logger, orig_looptoken,
+ operations, n, ops_offset, asmstart, asmlen):
self._compile_hook(jitdriver, logger, operations, 'bridge',
ops_offset, asmstart, asmlen,
self.space.wrap(n))
+ def before_compile(self, jitdriver, logger, looptoken, operations, type,
+ greenkey):
+ self._optimize_hook(jitdriver, logger, operations, type,
+ wrap_greenkey(self.space, jitdriver, greenkey))
+
+ def before_compile_bridge(self, jitdriver, logger, orig_looptoken,
+ operations, n):
+ self._optimize_hook(jitdriver, logger, operations, 'bridge',
+ self.space.wrap(n))
+
def _compile_hook(self, jitdriver, logger, operations, type,
ops_offset, asmstart, asmlen, w_arg):
space = self.space
@@ -55,6 +67,34 @@
e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
cache.in_recursion = False
+ def _optimize_hook(self, jitdriver, logger, operations, type, w_arg):
+ space = self.space
+ cache = space.fromcache(Cache)
+ if cache.in_recursion:
+ return
+ if space.is_true(cache.w_optimize_hook):
+ logops = logger._make_log_operations()
+ list_w = wrap_oplist(space, logops, operations, {})
+ cache.in_recursion = True
+ try:
+ w_res = space.call_function(cache.w_optimize_hook,
+ space.wrap(jitdriver.name),
+ space.wrap(type),
+ w_arg,
+ space.newlist(list_w))
+ if space.is_w(w_res, space.w_None):
+ return
+ l = []
+ for w_item in space.listview(w_res):
+ item = space.interp_w(WrappedOp, w_item)
+ l.append(jit_hooks._cast_to_resop(item.op))
+ operations[:] = l # modifying operations above is probably not
+ # a great idea since types may not work and we'll end up with
+ # a half-working list and a segfault/fatal RPython error
+ except OperationError, e:
+ e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
+ cache.in_recursion = False
+
pypy_portal = PyPyPortal()
class PyPyJitPolicy(JitPolicy):
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -47,7 +47,7 @@
code_gcref = lltype.cast_opaque_ptr(llmemory.GCREF, ll_code)
logger = Logger(MockSD())
- oplist = parse("""
+ cls.origoplist = parse("""
[i1, i2]
i3 = int_add(i1, i2)
debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0))
@@ -55,19 +55,23 @@
""", namespace={'ptr0': code_gcref}).operations
greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)]
offset = {}
- for i, op in enumerate(oplist):
+ for i, op in enumerate(cls.origoplist):
if i != 1:
offset[op] = i
def interp_on_compile():
- pypy_portal.on_compile(pypyjitdriver, logger, JitCellToken(),
- oplist, 'loop', greenkey, offset,
- 0, 0)
+ pypy_portal.after_compile(pypyjitdriver, logger, JitCellToken(),
+ cls.oplist, 'loop', greenkey, offset,
+ 0, 0)
def interp_on_compile_bridge():
- pypy_portal.on_compile_bridge(pypyjitdriver, logger,
- JitCellToken(), oplist, 0,
- offset, 0, 0)
+ pypy_portal.after_compile_bridge(pypyjitdriver, logger,
+ JitCellToken(), cls.oplist, 0,
+ offset, 0, 0)
+
+ def interp_on_optimize():
+ pypy_portal.before_compile(pypyjitdriver, logger, JitCellToken(),
+ cls.oplist, 'loop', greenkey)
def interp_on_abort():
pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey)
@@ -76,6 +80,10 @@
cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge))
cls.w_on_abort = space.wrap(interp2app(interp_on_abort))
cls.w_int_add_num = space.wrap(rop.INT_ADD)
+ cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize))
+
+ def setup_method(self, meth):
+ self.__class__.oplist = self.origoplist
def test_on_compile(self):
import pypyjit
@@ -160,6 +168,22 @@
self.on_abort()
assert l == [('pypyjit', 'ABORT_TOO_LONG')]
+ def test_on_optimize(self):
+ import pypyjit
+ l = []
+
+ def hook(name, looptype, tuple_or_guard_no, ops, *args):
+ l.append(ops)
+
+ def optimize_hook(name, looptype, tuple_or_guard_no, ops):
+ return []
+
+ pypyjit.set_compile_hook(hook)
+ pypyjit.set_optimize_hook(optimize_hook)
+ self.on_optimize()
+ self.on_compile()
+ assert l == [[]]
+
def test_creation(self):
import pypyjit
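[Editor's note: the `_optimize_hook` above lets the hook return a replacement operation list and then applies it with `operations[:] = l`. A self-contained sketch of that in-place-replacement idiom, using plain Python stand-ins rather than real resops:]

```python
# The hook receives the current operations and returns either None (keep
# the trace unchanged) or a replacement list. Slice assignment mutates the
# existing list object so every other holder of the reference sees it.
def run_optimize_hook(hook, operations):
    result = hook(list(operations))
    if result is None:
        return
    operations[:] = result  # in-place, not a rebinding of the name

ops = ["int_add", "guard_true", "debug_merge_point"]
run_optimize_hook(
    lambda current: [op for op in current if op != "debug_merge_point"],
    ops)
```

After the call `ops` has been filtered in place, which is why the test above can register an optimize hook returning `[]` and then observe an empty list in the compile hook.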
From noreply at buildbot.pypy.org Sun Jan 8 19:29:02 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 19:29:02 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: improve a bit how to get to
items
Message-ID: <20120108182902.51DB282110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51138:f4129eca042d
Date: 2012-01-08 20:28 +0200
http://bitbucket.org/pypy/pypy/changeset/f4129eca042d/
Log: improve a bit how to get to items
diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py
--- a/pypy/jit/metainterp/test/test_jitportal.py
+++ b/pypy/jit/metainterp/test/test_jitportal.py
@@ -5,6 +5,7 @@
from pypy.jit.codewriter.policy import JitPolicy
from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT
from pypy.jit.metainterp.resoperation import rop
+from pypy.rpython.annlowlevel import hlstr
class TestJitPortal(LLJitMixin):
def test_abort_quasi_immut(self):
@@ -130,7 +131,9 @@
[jit_hooks.boxint_new(3),
jit_hooks.boxint_new(4)],
jit_hooks.boxint_new(1))
- return jit_hooks.resop_opnum(op)
+ assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add'
+ assert jit_hooks.resop_getopnum(op) == rop.INT_ADD
+ box = jit_hooks.resop_getarg(op, 0)
+ assert jit_hooks.box_getint(box) == 3
- res = self.meta_interp(main, [])
- assert res == rop.INT_ADD
+ self.meta_interp(main, [])
diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py
--- a/pypy/rlib/jit_hooks.py
+++ b/pypy/rlib/jit_hooks.py
@@ -4,26 +4,28 @@
from pypy.rpython.lltypesystem import llmemory, lltype
from pypy.rpython.lltypesystem import rclass
from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\
- cast_base_ptr_to_instance
+ cast_base_ptr_to_instance, llstr, hlstr
from pypy.rlib.objectmodel import specialize
-def register_helper(helper, s_result):
-
- class Entry(ExtRegistryEntry):
- _about_ = helper
+def register_helper(s_result):
+ def wrapper(helper):
+ class Entry(ExtRegistryEntry):
+ _about_ = helper
- def compute_result_annotation(self, *args):
- return s_result
+ def compute_result_annotation(self, *args):
+ return s_result
- def specialize_call(self, hop):
- from pypy.rpython.lltypesystem import lltype
+ def specialize_call(self, hop):
+ from pypy.rpython.lltypesystem import lltype
- c_func = hop.inputconst(lltype.Void, helper)
- c_name = hop.inputconst(lltype.Void, 'access_helper')
- args_v = [hop.inputarg(arg, arg=i)
- for i, arg in enumerate(hop.args_r)]
- return hop.genop('jit_marker', [c_name, c_func] + args_v,
- resulttype=hop.r_result)
+ c_func = hop.inputconst(lltype.Void, helper)
+ c_name = hop.inputconst(lltype.Void, 'access_helper')
+ args_v = [hop.inputarg(arg, arg=i)
+ for i, arg in enumerate(hop.args_r)]
+ return hop.genop('jit_marker', [c_name, c_func] + args_v,
+ resulttype=hop.r_result)
+ return helper
+ return wrapper
def _cast_to_box(llref):
from pypy.jit.metainterp.history import AbstractValue
@@ -42,6 +44,7 @@
return lltype.cast_opaque_ptr(llmemory.GCREF,
cast_instance_to_base_ptr(obj))
+ at register_helper(annmodel.SomePtr(llmemory.GCREF))
def resop_new(no, llargs, llres):
from pypy.jit.metainterp.history import ResOperation
@@ -49,15 +52,23 @@
res = _cast_to_box(llres)
return _cast_to_gcref(ResOperation(no, args, res))
-register_helper(resop_new, annmodel.SomePtr(llmemory.GCREF))
-
+ at register_helper(annmodel.SomePtr(llmemory.GCREF))
def boxint_new(no):
from pypy.jit.metainterp.history import BoxInt
return _cast_to_gcref(BoxInt(no))
-register_helper(boxint_new, annmodel.SomePtr(llmemory.GCREF))
-
-def resop_opnum(llop):
+ at register_helper(annmodel.SomeInteger())
+def resop_getopnum(llop):
return _cast_to_resop(llop).getopnum()
-register_helper(resop_opnum, annmodel.SomeInteger())
+ at register_helper(annmodel.SomeString(can_be_None=True))
+def resop_getopname(llop):
+ return llstr(_cast_to_resop(llop).getopname())
+
+ at register_helper(annmodel.SomePtr(llmemory.GCREF))
+def resop_getarg(llop, no):
+ return _cast_to_gcref(_cast_to_resop(llop).getarg(no))
+
+ at register_helper(annmodel.SomeInteger())
+def box_getint(llbox):
+ return _cast_to_box(llbox).getint()
From noreply at buildbot.pypy.org Sun Jan 8 19:39:53 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 19:39:53 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: expose some more
Message-ID: <20120108183953.EE59282110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51139:143e2aef1cb6
Date: 2012-01-08 20:39 +0200
http://bitbucket.org/pypy/pypy/changeset/143e2aef1cb6/
Log: expose some more
diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py
--- a/pypy/jit/metainterp/test/test_jitportal.py
+++ b/pypy/jit/metainterp/test/test_jitportal.py
@@ -135,5 +135,14 @@
assert jit_hooks.resop_getopnum(op) == rop.INT_ADD
box = jit_hooks.resop_getarg(op, 0)
assert jit_hooks.box_getint(box) == 3
+ box2 = jit_hooks.box_clone(box)
+ assert box2 != box
+ assert jit_hooks.box_getint(box2) == 3
+ assert not jit_hooks.box_isconst(box2)
+ box3 = jit_hooks.box_constbox(box)
+ assert jit_hooks.box_getint(box) == 3
+ assert jit_hooks.box_isconst(box3)
+ box4 = jit_hooks.box_nonconstbox(box)
+ assert not jit_hooks.box_isconst(box4)
self.meta_interp(main, [])
diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py
--- a/pypy/rlib/jit_hooks.py
+++ b/pypy/rlib/jit_hooks.py
@@ -72,3 +72,20 @@
@register_helper(annmodel.SomeInteger())
def box_getint(llbox):
return _cast_to_box(llbox).getint()
+
+ at register_helper(annmodel.SomePtr(llmemory.GCREF))
+def box_clone(llbox):
+ return _cast_to_gcref(_cast_to_box(llbox).clonebox())
+
+ at register_helper(annmodel.SomePtr(llmemory.GCREF))
+def box_constbox(llbox):
+ return _cast_to_gcref(_cast_to_box(llbox).constbox())
+
+ at register_helper(annmodel.SomePtr(llmemory.GCREF))
+def box_nonconstbox(llbox):
+ return _cast_to_gcref(_cast_to_box(llbox).nonconstbox())
+
+ at register_helper(annmodel.SomeBool())
+def box_isconst(llbox):
+ from pypy.jit.metainterp.history import Const
+ return isinstance(_cast_to_box(llbox), Const)
From noreply at buildbot.pypy.org Sun Jan 8 19:53:30 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 19:53:30 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: increasingly boring exercise of
exposing more and more
Message-ID: <20120108185330.2C56582110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51140:f59c4f53adf9
Date: 2012-01-08 20:53 +0200
http://bitbucket.org/pypy/pypy/changeset/f59c4f53adf9/
Log: increasingly boring exercise of exposing more and more
diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py
--- a/pypy/module/pypyjit/__init__.py
+++ b/pypy/module/pypyjit/__init__.py
@@ -11,6 +11,7 @@
'set_optimize_hook': 'interp_resop.set_optimize_hook',
'set_abort_hook': 'interp_resop.set_abort_hook',
'ResOperation': 'interp_resop.WrappedOp',
+ 'Box': 'interp_resop.WrappedBox',
}
def setup_after_space_initialization(self):
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -5,7 +5,7 @@
from pypy.interpreter.pycode import PyCode
from pypy.interpreter.error import OperationError
from pypy.rpython.lltypesystem import lltype, llmemory
-from pypy.rpython.annlowlevel import cast_base_ptr_to_instance
+from pypy.rpython.annlowlevel import cast_base_ptr_to_instance, hlstr
from pypy.rpython.lltypesystem.rclass import OBJECT
from pypy.jit.metainterp.resoperation import rop, AbstractResOp
from pypy.rlib.nonconst import NonConstant
@@ -114,14 +114,12 @@
logops.repr_of_resop(op)) for op in operations]
@unwrap_spec(num=int, offset=int, repr=str)
-def descr_new_resop(space, w_tp, num, w_args, w_res=NoneNotWrapped, offset=-1,
+def descr_new_resop(space, w_tp, num, w_args, w_res=None, offset=-1,
repr=''):
- args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in
+ args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in
space.listview(w_args)]
- if w_res is None:
- llres = lltype.nullptr(llmemory.GCREF.TO)
- else:
- llres = jit_hooks.boxint_new(space.int_w(w_res))
+ llres = space.interp_w(WrappedBox, w_res).llbox
+ # XXX None case
return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr)
class WrappedOp(Wrappable):
@@ -136,7 +134,14 @@
return space.wrap(self.repr_of_resop)
def descr_num(self, space):
- return space.wrap(jit_hooks.resop_opnum(self.op))
+ return space.wrap(jit_hooks.resop_getopnum(self.op))
+
+ def descr_name(self, space):
+ return space.wrap(hlstr(jit_hooks.resop_getopname(self.op)))
+
+ @unwrap_spec(no=int)
+ def descr_getarg(self, space, no):
+ return WrappedBox(jit_hooks.resop_getarg(self.op, no))
WrappedOp.typedef = TypeDef(
'ResOperation',
@@ -144,5 +149,26 @@
__new__ = interp2app(descr_new_resop),
__repr__ = interp2app(WrappedOp.descr_repr),
num = GetSetProperty(WrappedOp.descr_num),
+ name = GetSetProperty(WrappedOp.descr_name),
+ getarg = interp2app(WrappedOp.descr_getarg),
)
WrappedOp.acceptable_as_base_class = False
+
+class WrappedBox(Wrappable):
+ """ A class representing a single box
+ """
+ def __init__(self, llbox):
+ self.llbox = llbox
+
+ def descr_getint(self, space):
+ return space.wrap(jit_hooks.box_getint(self.llbox))
+
+ at unwrap_spec(no=int)
+def descr_new_box(space, w_tp, no):
+ return WrappedBox(jit_hooks.boxint_new(no))
+
+WrappedBox.typedef = TypeDef(
+ 'Box',
+ __new__ = interp2app(descr_new_box),
+ getint = interp2app(WrappedBox.descr_getint),
+)
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -48,9 +48,10 @@
logger = Logger(MockSD())
cls.origoplist = parse("""
- [i1, i2]
+ [i1, i2, p2]
i3 = int_add(i1, i2)
debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0))
+ guard_nonnull(p2) []
guard_true(i3) []
""", namespace={'ptr0': code_gcref}).operations
greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)]
@@ -102,7 +103,7 @@
assert elem[2][0].co_name == 'function'
assert elem[2][1] == 0
assert elem[2][2] == False
- assert len(elem[3]) == 3
+ assert len(elem[3]) == 4
int_add = elem[3][0]
#assert int_add.name == 'int_add'
assert int_add.num == self.int_add_num
@@ -185,7 +186,10 @@
assert l == [[]]
def test_creation(self):
- import pypyjit
+ from pypyjit import Box, ResOperation
- op = pypyjit.ResOperation(self.int_add_num, [1, 3], 4)
+ op = ResOperation(self.int_add_num, [Box(1), Box(3)], Box(4))
assert op.num == self.int_add_num
+ assert op.name == 'int_add'
+ box = op.getarg(0)
+ assert box.getint() == 1
From noreply at buildbot.pypy.org Sun Jan 8 19:54:32 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Sun, 8 Jan 2012 19:54:32 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: added support for float
getinteriorfield_raws
Message-ID: <20120108185432.073AF82110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51141:e44952da636d
Date: 2012-01-08 12:53 -0600
http://bitbucket.org/pypy/pypy/changeset/e44952da636d/
Log: added support for float getinteriorfield_raws
diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py
--- a/pypy/jit/metainterp/optimizeopt/fficall.py
+++ b/pypy/jit/metainterp/optimizeopt/fficall.py
@@ -234,11 +234,11 @@
# longlongs are treated as floats, see
# e.g. llsupport/descr.py:getDescrClass
is_float = True
- elif kind == 'u':
+ elif kind == 'u' or kind == 's':
# they're all False
pass
else:
- assert False, "unsupported ffitype or kind"
+ raise NotImplementedError("unsupported ffitype or kind: %s" % kind)
#
fieldsize = rffi.getintfield(ffitype, 'c_size')
return self.optimizer.cpu.interiorfielddescrof_dynamic(
diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py
--- a/pypy/jit/metainterp/test/test_fficall.py
+++ b/pypy/jit/metainterp/test/test_fficall.py
@@ -148,28 +148,38 @@
self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4,
'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2})
- def test_array_getitem_uint8(self):
+ def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE):
+ reds = ["n", "i", "s", "data"]
+ if COMPUTE_TYPE is lltype.Float:
+ # Move the float var to the back.
+ reds.remove("s")
+ reds.append("s")
myjitdriver = JitDriver(
greens = [],
- reds = ["n", "i", "s", "data"],
+ reds = reds,
)
def f(data, n):
- i = s = 0
+ i = 0
+ s = rffi.cast(COMPUTE_TYPE, 0)
while i < n:
myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data)
- s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0))
+ s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0))
i += 1
return s
+ def main(n):
+ with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data:
+ data[0] = rffi.cast(TYPE, 200)
+ return f(data, n)
+ assert self.meta_interp(main, [10]) == 2000
- def main(n):
- with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data:
- data[0] = rffi.cast(rffi.UCHAR, 200)
- return f(data, n)
-
- assert self.meta_interp(main, [10]) == 2000
+ def test_array_getitem_uint8(self):
+ self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed)
self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2,
'guard_true': 2, 'int_add': 4})
+ def test_array_getitem_float(self):
+ self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float)
+
class TestFfiCall(FfiCallTests, LLJitMixin):
supports_all = False
From noreply at buildbot.pypy.org Sun Jan 8 19:54:33 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Sun, 8 Jan 2012 19:54:33 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: merged upstream
Message-ID: <20120108185433.3604E82110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51142:ade5f6c6f404
Date: 2012-01-08 12:54 -0600
http://bitbucket.org/pypy/pypy/changeset/ade5f6c6f404/
Log: merged upstream
diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py
--- a/pypy/jit/metainterp/compile.py
+++ b/pypy/jit/metainterp/compile.py
@@ -305,6 +305,13 @@
show_procedures(metainterp_sd, loop)
loop.check_consistency()
+ if metainterp_sd.warmrunnerdesc is not None:
+ portal = metainterp_sd.warmrunnerdesc.portal
+ portal.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
+ original_jitcell_token, loop.operations, type,
+ greenkey)
+ else:
+ portal = None
operations = get_deep_immutable_oplist(loop.operations)
metainterp_sd.profiler.start_backend()
debug_start("jit-backend")
@@ -316,11 +323,10 @@
finally:
debug_stop("jit-backend")
metainterp_sd.profiler.end_backend()
- if metainterp_sd.warmrunnerdesc is not None:
- portal = metainterp_sd.warmrunnerdesc.portal
- portal.on_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
- original_jitcell_token, loop.operations, type,
- greenkey, ops_offset, asmstart, asmlen)
+ if portal is not None:
+ portal.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
+ original_jitcell_token, loop.operations, type,
+ greenkey, ops_offset, asmstart, asmlen)
metainterp_sd.stats.add_new_loop(loop)
if not we_are_translated():
metainterp_sd.stats.compiled()
@@ -341,8 +347,15 @@
show_procedures(metainterp_sd)
seen = dict.fromkeys(inputargs)
TreeLoop.check_consistency_of_branch(operations, seen)
+ if metainterp_sd.warmrunnerdesc is not None:
+ portal = metainterp_sd.warmrunnerdesc.portal
+ portal.before_compile_bridge(jitdriver_sd.jitdriver,
+ metainterp_sd.logger_ops,
+ original_loop_token, operations, n)
+ else:
+ portal = None
+ operations = get_deep_immutable_oplist(operations)
metainterp_sd.profiler.start_backend()
- operations = get_deep_immutable_oplist(operations)
debug_start("jit-backend")
try:
tp = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations,
@@ -351,12 +364,12 @@
finally:
debug_stop("jit-backend")
metainterp_sd.profiler.end_backend()
- if metainterp_sd.warmrunnerdesc is not None:
- portal = metainterp_sd.warmrunnerdesc.portal
- portal.on_compile_bridge(jitdriver_sd.jitdriver,
- metainterp_sd.logger_ops,
- original_loop_token, operations, n, ops_offset,
- asmstart, asmlen)
+ if portal is not None:
+ portal.after_compile_bridge(jitdriver_sd.jitdriver,
+ metainterp_sd.logger_ops,
+ original_loop_token, operations, n,
+ ops_offset,
+ asmstart, asmlen)
if not we_are_translated():
metainterp_sd.stats.compiled()
metainterp_sd.log("compiled new bridge")
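The compile.py change above moves the portal lookup in front of the backend call: `before_compile` fires eagerly, and the portal reference is saved so `after_compile` runs only when a portal exists. A minimal sketch of that call-site shape (hypothetical names, not the actual compile.py code):

```python
def compile_with_hooks(warmrunnerdesc, do_backend_work):
    # Look the portal up once, before backend compilation starts.
    if warmrunnerdesc is not None:
        portal = warmrunnerdesc.portal
        portal.before_compile()
    else:
        portal = None
    result = do_backend_work()
    # Reuse the saved reference instead of testing warmrunnerdesc again.
    if portal is not None:
        portal.after_compile()
    return result
```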
diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py
--- a/pypy/jit/metainterp/test/test_jitportal.py
+++ b/pypy/jit/metainterp/test/test_jitportal.py
@@ -5,6 +5,7 @@
from pypy.jit.codewriter.policy import JitPolicy
from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT
from pypy.jit.metainterp.resoperation import rop
+from pypy.rpython.annlowlevel import hlstr
class TestJitPortal(LLJitMixin):
def test_abort_quasi_immut(self):
@@ -41,14 +42,25 @@
assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2
def test_on_compile(self):
- called = {}
+ called = []
class MyJitPortal(JitPortal):
- def on_compile(self, jitdriver, logger, looptoken, operations,
- type, greenkey, ops_offset, asmaddr, asmlen):
+ def after_compile(self, jitdriver, logger, looptoken, operations,
+ type, greenkey, ops_offset, asmaddr, asmlen):
assert asmaddr == 0
assert asmlen == 0
- called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken
+ called.append(("compile", greenkey[1].getint(),
+ greenkey[0].getint(), type))
+
+ def before_compile(self, jitdriver, logger, looptoken, operations,
+ type, greenkey):
+ called.append(("optimize", greenkey[1].getint(),
+ greenkey[0].getint(), type))
+
+ def before_optimize(self, jitdriver, logger, looptoken, operations,
+ type, greenkey):
+ called.append(("trace", greenkey[1].getint(),
+ greenkey[0].getint(), type))
portal = MyJitPortal()
@@ -62,26 +74,35 @@
i += 1
self.meta_interp(loop, [1, 4], policy=JitPolicy(portal))
- assert sorted(called.keys()) == [(4, 1, "loop")]
+ assert called == [#("trace", 4, 1, "loop"),
+ ("optimize", 4, 1, "loop"),
+ ("compile", 4, 1, "loop")]
self.meta_interp(loop, [2, 4], policy=JitPolicy(portal))
- assert sorted(called.keys()) == [(4, 1, "loop"),
- (4, 2, "loop")]
+ assert called == [#("trace", 4, 1, "loop"),
+ ("optimize", 4, 1, "loop"),
+ ("compile", 4, 1, "loop"),
+ #("trace", 4, 2, "loop"),
+ ("optimize", 4, 2, "loop"),
+ ("compile", 4, 2, "loop")]
def test_on_compile_bridge(self):
- called = {}
+ called = []
class MyJitPortal(JitPortal):
- def on_compile(self, jitdriver, logger, looptoken, operations,
+ def after_compile(self, jitdriver, logger, looptoken, operations,
type, greenkey, ops_offset, asmaddr, asmlen):
assert asmaddr == 0
assert asmlen == 0
- called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken
+ called.append("compile")
- def on_compile_bridge(self, jitdriver, logger, orig_token,
- operations, n, ops_offset, asmstart, asmlen):
- assert 'bridge' not in called
- called['bridge'] = orig_token
+ def after_compile_bridge(self, jitdriver, logger, orig_token,
+ operations, n, ops_offset, asmstart, asmlen):
+ called.append("compile_bridge")
+ def before_compile_bridge(self, jitdriver, logger, orig_token,
+ operations, n):
+ called.append("before_compile_bridge")
+
driver = JitDriver(greens = ['n', 'm'], reds = ['i'])
def loop(n, m):
@@ -94,7 +115,7 @@
i += 1
self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal()))
- assert sorted(called.keys()) == ['bridge', (10, 1, "loop")]
+ assert called == ["compile", "before_compile_bridge", "compile_bridge"]
def test_resop_interface(self):
driver = JitDriver(greens = [], reds = ['i'])
@@ -110,7 +131,18 @@
[jit_hooks.boxint_new(3),
jit_hooks.boxint_new(4)],
jit_hooks.boxint_new(1))
- return jit_hooks.resop_opnum(op)
+ assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add'
+ assert jit_hooks.resop_getopnum(op) == rop.INT_ADD
+ box = jit_hooks.resop_getarg(op, 0)
+ assert jit_hooks.box_getint(box) == 3
+ box2 = jit_hooks.box_clone(box)
+ assert box2 != box
+ assert jit_hooks.box_getint(box2) == 3
+ assert not jit_hooks.box_isconst(box2)
+ box3 = jit_hooks.box_constbox(box)
+ assert jit_hooks.box_getint(box) == 3
+ assert jit_hooks.box_isconst(box3)
+ box4 = jit_hooks.box_nonconstbox(box)
+ assert not jit_hooks.box_isconst(box4)
- res = self.meta_interp(main, [])
- assert res == rop.INT_ADD
+ self.meta_interp(main, [])
diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py
--- a/pypy/module/pypyjit/__init__.py
+++ b/pypy/module/pypyjit/__init__.py
@@ -8,8 +8,10 @@
'set_param': 'interp_jit.set_param',
'residual_call': 'interp_jit.residual_call',
'set_compile_hook': 'interp_resop.set_compile_hook',
+ 'set_optimize_hook': 'interp_resop.set_optimize_hook',
'set_abort_hook': 'interp_resop.set_abort_hook',
'ResOperation': 'interp_resop.WrappedOp',
+ 'Box': 'interp_resop.WrappedBox',
}
def setup_after_space_initialization(self):
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -5,7 +5,7 @@
from pypy.interpreter.pycode import PyCode
from pypy.interpreter.error import OperationError
from pypy.rpython.lltypesystem import lltype, llmemory
-from pypy.rpython.annlowlevel import cast_base_ptr_to_instance
+from pypy.rpython.annlowlevel import cast_base_ptr_to_instance, hlstr
from pypy.rpython.lltypesystem.rclass import OBJECT
from pypy.jit.metainterp.resoperation import rop, AbstractResOp
from pypy.rlib.nonconst import NonConstant
@@ -61,6 +61,37 @@
cache.in_recursion = NonConstant(False)
return space.w_None
+def set_optimize_hook(space, w_hook):
+ """ set_compile_hook(hook)
+
+ Set a compiling hook that will be called each time a loop is optimized,
+ but before assembler compilation. This allows adding additional
+ optimizations at the Python level.
+
+ The hook will be called with the following signature:
+ hook(jitdriver_name, loop_type, greenkey or guard_number, operations)
+
+ jitdriver_name is the name of this particular jitdriver, 'pypyjit' is
+ the main interpreter loop
+
+ loop_type can be either `loop`, `entry_bridge` or `bridge`.
+ In case loop_type is not `bridge`, greenkey will be a tuple of constants
+ or a string describing it.
+
+ For the main interpreter loop it'll be a tuple
+ (code, offset, is_being_profiled)
+
+ Note that the jit hook is not reentrant: if the code
+ inside the jit hook is itself jitted, it will get compiled, but the
+ jit hook won't be called for it.
+
+ The return value will be the resulting list of operations, or None.
+ """
+ cache = space.fromcache(Cache)
+ cache.w_optimize_hook = w_hook
+ cache.in_recursion = NonConstant(False)
+ return space.w_None
+
def set_abort_hook(space, w_hook):
""" set_abort_hook(hook)
@@ -83,14 +114,12 @@
logops.repr_of_resop(op)) for op in operations]
@unwrap_spec(num=int, offset=int, repr=str)
-def descr_new_resop(space, w_tp, num, w_args, w_res=NoneNotWrapped, offset=-1,
+def descr_new_resop(space, w_tp, num, w_args, w_res=None, offset=-1,
repr=''):
- args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in
+ args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in
space.listview(w_args)]
- if w_res is None:
- llres = lltype.nullptr(llmemory.GCREF.TO)
- else:
- llres = jit_hooks.boxint_new(space.int_w(w_res))
+ llres = space.interp_w(WrappedBox, w_res).llbox
+ # XXX None case
return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr)
class WrappedOp(Wrappable):
@@ -105,7 +134,14 @@
return space.wrap(self.repr_of_resop)
def descr_num(self, space):
- return space.wrap(jit_hooks.resop_opnum(self.op))
+ return space.wrap(jit_hooks.resop_getopnum(self.op))
+
+ def descr_name(self, space):
+ return space.wrap(hlstr(jit_hooks.resop_getopname(self.op)))
+
+ @unwrap_spec(no=int)
+ def descr_getarg(self, space, no):
+ return WrappedBox(jit_hooks.resop_getarg(self.op, no))
WrappedOp.typedef = TypeDef(
'ResOperation',
@@ -113,5 +149,26 @@
__new__ = interp2app(descr_new_resop),
__repr__ = interp2app(WrappedOp.descr_repr),
num = GetSetProperty(WrappedOp.descr_num),
+ name = GetSetProperty(WrappedOp.descr_name),
+ getarg = interp2app(WrappedOp.descr_getarg),
)
WrappedOp.acceptable_as_base_class = False
+
+class WrappedBox(Wrappable):
+ """ A class representing a single box
+ """
+ def __init__(self, llbox):
+ self.llbox = llbox
+
+ def descr_getint(self, space):
+ return space.wrap(jit_hooks.box_getint(self.llbox))
+
+@unwrap_spec(no=int)
+def descr_new_box(space, w_tp, no):
+ return WrappedBox(jit_hooks.boxint_new(no))
+
+WrappedBox.typedef = TypeDef(
+ 'Box',
+ __new__ = interp2app(descr_new_box),
+ getint = interp2app(WrappedBox.descr_getint),
+)
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -1,8 +1,10 @@
from pypy.jit.codewriter.policy import JitPolicy
from pypy.rlib.jit import JitPortal
+from pypy.rlib import jit_hooks
from pypy.interpreter.error import OperationError
from pypy.jit.metainterp.jitprof import counter_names
-from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey
+from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\
+ WrappedOp
class PyPyPortal(JitPortal):
def on_abort(self, reason, jitdriver, greenkey):
@@ -21,18 +23,28 @@
e.write_unraisable(space, "jit hook ", cache.w_abort_hook)
cache.in_recursion = False
- def on_compile(self, jitdriver, logger, looptoken, operations, type,
- greenkey, ops_offset, asmstart, asmlen):
+ def after_compile(self, jitdriver, logger, looptoken, operations, type,
+ greenkey, ops_offset, asmstart, asmlen):
self._compile_hook(jitdriver, logger, operations, type,
ops_offset, asmstart, asmlen,
wrap_greenkey(self.space, jitdriver, greenkey))
- def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations,
- n, ops_offset, asmstart, asmlen):
+ def after_compile_bridge(self, jitdriver, logger, orig_looptoken,
+ operations, n, ops_offset, asmstart, asmlen):
self._compile_hook(jitdriver, logger, operations, 'bridge',
ops_offset, asmstart, asmlen,
self.space.wrap(n))
+ def before_compile(self, jitdriver, logger, looptoken, operations, type,
+ greenkey):
+ self._optimize_hook(jitdriver, logger, operations, type,
+ wrap_greenkey(self.space, jitdriver, greenkey))
+
+ def before_compile_bridge(self, jitdriver, logger, orig_looptoken,
+ operations, n):
+ self._optimize_hook(jitdriver, logger, operations, 'bridge',
+ self.space.wrap(n))
+
def _compile_hook(self, jitdriver, logger, operations, type,
ops_offset, asmstart, asmlen, w_arg):
space = self.space
@@ -55,6 +67,34 @@
e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
cache.in_recursion = False
+ def _optimize_hook(self, jitdriver, logger, operations, type, w_arg):
+ space = self.space
+ cache = space.fromcache(Cache)
+ if cache.in_recursion:
+ return
+ if space.is_true(cache.w_optimize_hook):
+ logops = logger._make_log_operations()
+ list_w = wrap_oplist(space, logops, operations, {})
+ cache.in_recursion = True
+ try:
+ w_res = space.call_function(cache.w_optimize_hook,
+ space.wrap(jitdriver.name),
+ space.wrap(type),
+ w_arg,
+ space.newlist(list_w))
+ if space.is_w(w_res, space.w_None):
+ return
+ l = []
+ for w_item in space.listview(w_res):
+ item = space.interp_w(WrappedOp, w_item)
+ l.append(jit_hooks._cast_to_resop(item.op))
+ operations[:] = l # modifying operations above is probably not
+ # a great idea since types may not work and we'll end up with
+ # half-working list and a segfault/fatal RPython error
+ except OperationError, e:
+ e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
+ cache.in_recursion = False
+
pypy_portal = PyPyPortal()
class PyPyJitPolicy(JitPolicy):
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -47,27 +47,32 @@
code_gcref = lltype.cast_opaque_ptr(llmemory.GCREF, ll_code)
logger = Logger(MockSD())
- oplist = parse("""
- [i1, i2]
+ cls.origoplist = parse("""
+ [i1, i2, p2]
i3 = int_add(i1, i2)
debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0))
+ guard_nonnull(p2) []
guard_true(i3) []
""", namespace={'ptr0': code_gcref}).operations
greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)]
offset = {}
- for i, op in enumerate(oplist):
+ for i, op in enumerate(cls.origoplist):
if i != 1:
offset[op] = i
def interp_on_compile():
- pypy_portal.on_compile(pypyjitdriver, logger, JitCellToken(),
- oplist, 'loop', greenkey, offset,
- 0, 0)
+ pypy_portal.after_compile(pypyjitdriver, logger, JitCellToken(),
+ cls.oplist, 'loop', greenkey, offset,
+ 0, 0)
def interp_on_compile_bridge():
- pypy_portal.on_compile_bridge(pypyjitdriver, logger,
- JitCellToken(), oplist, 0,
- offset, 0, 0)
+ pypy_portal.after_compile_bridge(pypyjitdriver, logger,
+ JitCellToken(), cls.oplist, 0,
+ offset, 0, 0)
+
+ def interp_on_optimize():
+ pypy_portal.before_compile(pypyjitdriver, logger, JitCellToken(),
+ cls.oplist, 'loop', greenkey)
def interp_on_abort():
pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey)
@@ -76,6 +81,10 @@
cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge))
cls.w_on_abort = space.wrap(interp2app(interp_on_abort))
cls.w_int_add_num = space.wrap(rop.INT_ADD)
+ cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize))
+
+ def setup_method(self, meth):
+ self.__class__.oplist = self.origoplist
def test_on_compile(self):
import pypyjit
@@ -94,7 +103,7 @@
assert elem[2][0].co_name == 'function'
assert elem[2][1] == 0
assert elem[2][2] == False
- assert len(elem[3]) == 3
+ assert len(elem[3]) == 4
int_add = elem[3][0]
#assert int_add.name == 'int_add'
assert int_add.num == self.int_add_num
@@ -160,8 +169,27 @@
self.on_abort()
assert l == [('pypyjit', 'ABORT_TOO_LONG')]
+ def test_on_optimize(self):
+ import pypyjit
+ l = []
+
+ def hook(name, looptype, tuple_or_guard_no, ops, *args):
+ l.append(ops)
+
+ def optimize_hook(name, looptype, tuple_or_guard_no, ops):
+ return []
+
+ pypyjit.set_compile_hook(hook)
+ pypyjit.set_optimize_hook(optimize_hook)
+ self.on_optimize()
+ self.on_compile()
+ assert l == [[]]
+
def test_creation(self):
- import pypyjit
+ from pypyjit import Box, ResOperation
- op = pypyjit.ResOperation(self.int_add_num, [1, 3], 4)
+ op = ResOperation(self.int_add_num, [Box(1), Box(3)], Box(4))
assert op.num == self.int_add_num
+ assert op.name == 'int_add'
+ box = op.getarg(0)
+ assert box.getint() == 1
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -731,29 +731,62 @@
like JIT loops compiled, aborts etc.
An instance of this class might be returned by the policy.get_jit_portal
method in order to function.
+
+ Each hook will accept some of the following args:
+
+
+ greenkey - a list of green boxes
+ jitdriver - an instance of jitdriver where tracing started
+ logger - an instance of jit.metainterp.logger.LogOperations
+ ops_offset
+ asmaddr - (int) raw address of assembler block
+ asmlen - assembler block length
+ type - either 'loop' or 'entry bridge'
"""
def on_abort(self, reason, jitdriver, greenkey):
""" A hook called each time a loop is aborted with jitdriver and
greenkey where it started, reason is a string why it got aborted
"""
- def on_compile(self, jitdriver, logger, looptoken, operations, type,
- greenkey, ops_offset, asmaddr, asmlen):
- """ A hook called when loop is compiled. Overwrite
- for your own jitdriver if you want to do something special, like
- call applevel code.
+ #def before_optimize(self, jitdriver, logger, looptoken, operations,
+ # type, greenkey):
+ # """ A hook called before optimizer is run, args described in class
+ # docstring. Overwrite for custom behavior
+ # """
+ # DISABLED
- jitdriver - an instance of jitdriver where tracing started
- logger - an instance of jit.metainterp.logger.LogOperations
- asmaddr - (int) raw address of assembler block
- asmlen - assembler block length
- type - either 'loop' or 'entry bridge'
+ def before_compile(self, jitdriver, logger, looptoken, operations, type,
+ greenkey):
+ """ A hook called after a loop is optimized, before compiling assembler,
+ args described in class docstring. Overwrite for custom behavior
"""
- def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations,
- fail_descr_no, ops_offset, asmaddr, asmlen):
- """ A hook called when a bridge is compiled. Overwrite
- for your own jitdriver if you want to do something special
+ def after_compile(self, jitdriver, logger, looptoken, operations, type,
+ greenkey, ops_offset, asmaddr, asmlen):
+ """ A hook called after a loop has compiled assembler,
+ args described in class docstring. Overwrite for custom behavior
+ """
+
+ #def before_optimize_bridge(self, jitdriver, logger, orig_looptoken,
+ # operations, fail_descr_no):
+ # """ A hook called before a bridge is optimized.
+ # Args described in class docstring, Overwrite for
+ # custom behavior
+ # """
+ # DISABLED
+
+ def before_compile_bridge(self, jitdriver, logger, orig_looptoken,
+ operations, fail_descr_no):
+ """ A hook called before a bridge is compiled, but after optimizations
+ are performed. Args described in class docstring. Overwrite for
+ custom behavior
+ """
+
+ def after_compile_bridge(self, jitdriver, logger, orig_looptoken,
+ operations, fail_descr_no, ops_offset, asmaddr,
+ asmlen):
+ """ A hook called after a bridge is compiled, args described in class
+ docstring. Overwrite for custom behavior
"""
def get_stats(self):
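The JitPortal docstrings above describe an override-what-you-need interface: every hook has a default no-op body, and a policy supplies a subclass that overrides only the hooks it cares about. A plain-Python sketch of that usage (hook arguments elided; names follow the diff):

```python
class JitPortalSketch(object):
    # Defaults are no-ops, so subclasses override only what they need.
    def on_abort(self, reason):
        pass
    def before_compile(self, looptoken):
        pass
    def after_compile(self, looptoken):
        pass

class CountingPortal(JitPortalSketch):
    # Only interested in finished compilations.
    def __init__(self):
        self.compiled = 0
    def after_compile(self, looptoken):
        self.compiled += 1

portal = CountingPortal()
portal.before_compile("token")   # inherited no-op
portal.after_compile("token")
```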
diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py
--- a/pypy/rlib/jit_hooks.py
+++ b/pypy/rlib/jit_hooks.py
@@ -4,26 +4,28 @@
from pypy.rpython.lltypesystem import llmemory, lltype
from pypy.rpython.lltypesystem import rclass
from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\
- cast_base_ptr_to_instance
+ cast_base_ptr_to_instance, llstr, hlstr
from pypy.rlib.objectmodel import specialize
-def register_helper(helper, s_result):
-
- class Entry(ExtRegistryEntry):
- _about_ = helper
+def register_helper(s_result):
+ def wrapper(helper):
+ class Entry(ExtRegistryEntry):
+ _about_ = helper
- def compute_result_annotation(self, *args):
- return s_result
+ def compute_result_annotation(self, *args):
+ return s_result
- def specialize_call(self, hop):
- from pypy.rpython.lltypesystem import lltype
+ def specialize_call(self, hop):
+ from pypy.rpython.lltypesystem import lltype
- c_func = hop.inputconst(lltype.Void, helper)
- c_name = hop.inputconst(lltype.Void, 'access_helper')
- args_v = [hop.inputarg(arg, arg=i)
- for i, arg in enumerate(hop.args_r)]
- return hop.genop('jit_marker', [c_name, c_func] + args_v,
- resulttype=hop.r_result)
+ c_func = hop.inputconst(lltype.Void, helper)
+ c_name = hop.inputconst(lltype.Void, 'access_helper')
+ args_v = [hop.inputarg(arg, arg=i)
+ for i, arg in enumerate(hop.args_r)]
+ return hop.genop('jit_marker', [c_name, c_func] + args_v,
+ resulttype=hop.r_result)
+ return helper
+ return wrapper
def _cast_to_box(llref):
from pypy.jit.metainterp.history import AbstractValue
@@ -42,6 +44,7 @@
return lltype.cast_opaque_ptr(llmemory.GCREF,
cast_instance_to_base_ptr(obj))
+@register_helper(annmodel.SomePtr(llmemory.GCREF))
def resop_new(no, llargs, llres):
from pypy.jit.metainterp.history import ResOperation
@@ -49,15 +52,40 @@
res = _cast_to_box(llres)
return _cast_to_gcref(ResOperation(no, args, res))
-register_helper(resop_new, annmodel.SomePtr(llmemory.GCREF))
-
+@register_helper(annmodel.SomePtr(llmemory.GCREF))
def boxint_new(no):
from pypy.jit.metainterp.history import BoxInt
return _cast_to_gcref(BoxInt(no))
-register_helper(boxint_new, annmodel.SomePtr(llmemory.GCREF))
-
-def resop_opnum(llop):
+@register_helper(annmodel.SomeInteger())
+def resop_getopnum(llop):
return _cast_to_resop(llop).getopnum()
-register_helper(resop_opnum, annmodel.SomeInteger())
+@register_helper(annmodel.SomeString(can_be_None=True))
+def resop_getopname(llop):
+ return llstr(_cast_to_resop(llop).getopname())
+
+@register_helper(annmodel.SomePtr(llmemory.GCREF))
+def resop_getarg(llop, no):
+ return _cast_to_gcref(_cast_to_resop(llop).getarg(no))
+
+@register_helper(annmodel.SomeInteger())
+def box_getint(llbox):
+ return _cast_to_box(llbox).getint()
+
+@register_helper(annmodel.SomePtr(llmemory.GCREF))
+def box_clone(llbox):
+ return _cast_to_gcref(_cast_to_box(llbox).clonebox())
+
+@register_helper(annmodel.SomePtr(llmemory.GCREF))
+def box_constbox(llbox):
+ return _cast_to_gcref(_cast_to_box(llbox).constbox())
+
+@register_helper(annmodel.SomePtr(llmemory.GCREF))
+def box_nonconstbox(llbox):
+ return _cast_to_gcref(_cast_to_box(llbox).nonconstbox())
+
+@register_helper(annmodel.SomeBool())
+def box_isconst(llbox):
+ from pypy.jit.metainterp.history import Const
+ return isinstance(_cast_to_box(llbox), Const)
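The jit_hooks.py hunk above rewrites `register_helper` from a call-after-definition registration into a decorator factory, so the result annotation can sit directly above each helper. The shape of that change, sketched outside RPython (the real version creates an `ExtRegistryEntry`; the attribute used here is a stand-in):

```python
def register_helper(s_result):
    # Decorator factory: capture the result annotation, then return a
    # decorator that registers the helper and hands it back unchanged.
    def wrapper(helper):
        helper.s_result = s_result  # stand-in for ExtRegistryEntry setup
        return helper
    return wrapper

@register_helper("SomeInteger")
def resop_getopnum(llop):
    return llop  # body irrelevant to the registration mechanics
```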
From noreply at buildbot.pypy.org Sun Jan 8 19:55:12 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Sun, 8 Jan 2012 19:55:12 +0100 (CET)
Subject: [pypy-commit] pypy default: added support for float
getinteriorfield_raws
Message-ID: <20120108185512.5CFB282110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch:
Changeset: r51143:e89672d5d28f
Date: 2012-01-08 12:53 -0600
http://bitbucket.org/pypy/pypy/changeset/e89672d5d28f/
Log: added support for float getinteriorfield_raws
diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py
--- a/pypy/jit/metainterp/optimizeopt/fficall.py
+++ b/pypy/jit/metainterp/optimizeopt/fficall.py
@@ -234,11 +234,11 @@
# longlongs are treated as floats, see
# e.g. llsupport/descr.py:getDescrClass
is_float = True
- elif kind == 'u':
+ elif kind == 'u' or kind == 's':
# they're all False
pass
else:
- assert False, "unsupported ffitype or kind"
+ raise NotImplementedError("unsupported ffitype or kind: %s" % kind)
#
fieldsize = rffi.getintfield(ffitype, 'c_size')
return self.optimizer.cpu.interiorfielddescrof_dynamic(
diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py
--- a/pypy/jit/metainterp/test/test_fficall.py
+++ b/pypy/jit/metainterp/test/test_fficall.py
@@ -148,28 +148,38 @@
self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4,
'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2})
- def test_array_getitem_uint8(self):
+ def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE):
+ reds = ["n", "i", "s", "data"]
+ if COMPUTE_TYPE is lltype.Float:
+ # Move the float var to the back.
+ reds.remove("s")
+ reds.append("s")
myjitdriver = JitDriver(
greens = [],
- reds = ["n", "i", "s", "data"],
+ reds = reds,
)
def f(data, n):
- i = s = 0
+ i = 0
+ s = rffi.cast(COMPUTE_TYPE, 0)
while i < n:
myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data)
- s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0))
+ s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0))
i += 1
return s
+ def main(n):
+ with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data:
+ data[0] = rffi.cast(TYPE, 200)
+ return f(data, n)
+ assert self.meta_interp(main, [10]) == 2000
- def main(n):
- with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data:
- data[0] = rffi.cast(rffi.UCHAR, 200)
- return f(data, n)
-
- assert self.meta_interp(main, [10]) == 2000
+ def test_array_getitem_uint8(self):
+ self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed)
self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2,
'guard_true': 2, 'int_add': 4})
+ def test_array_getitem_float(self):
+ self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float)
+
class TestFfiCall(FfiCallTests, LLJitMixin):
supports_all = False
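The fficall.py change in this commit swaps a bare `assert False` for a `NotImplementedError` that names the offending kind, which makes unsupported-type failures diagnosable. A simplified sketch of that dispatch (the real code also consults the ffi type to treat longlongs as floats; the kinds here are illustrative):

```python
def is_float_kind(kind):
    if kind == 'f':
        # longlongs are also treated as floats in the real code
        return True
    elif kind == 'i' or kind == 'u' or kind == 's':
        return False
    else:
        # Include the kind in the message, unlike `assert False`.
        raise NotImplementedError("unsupported ffitype or kind: %s" % kind)
```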
From noreply at buildbot.pypy.org Sun Jan 8 20:46:52 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 20:46:52 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: expose some more of API
Message-ID: <20120108194652.5061082110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51144:4628182fa0e4
Date: 2012-01-08 21:46 +0200
http://bitbucket.org/pypy/pypy/changeset/4628182fa0e4/
Log: expose some more of API
diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py
--- a/pypy/jit/metainterp/test/test_jitportal.py
+++ b/pypy/jit/metainterp/test/test_jitportal.py
@@ -144,5 +144,12 @@
assert jit_hooks.box_isconst(box3)
box4 = jit_hooks.box_nonconstbox(box)
assert not jit_hooks.box_isconst(box4)
+ box5 = jit_hooks.boxint_new(18)
+ jit_hooks.resop_setarg(op, 0, box5)
+ assert jit_hooks.resop_getarg(op, 0) == box5
+ box6 = jit_hooks.resop_getresult(op)
+ assert jit_hooks.box_getint(box6) == 1
+ jit_hooks.resop_setresult(op, box5)
+ assert jit_hooks.resop_getresult(op) == box5
self.meta_interp(main, [])
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -113,13 +113,34 @@
ops_offset.get(op, 0),
logops.repr_of_resop(op)) for op in operations]
-@unwrap_spec(num=int, offset=int, repr=str)
-def descr_new_resop(space, w_tp, num, w_args, w_res=None, offset=-1,
+class WrappedBox(Wrappable):
+ """ A class representing a single box
+ """
+ def __init__(self, llbox):
+ self.llbox = llbox
+
+ def descr_getint(self, space):
+ return space.wrap(jit_hooks.box_getint(self.llbox))
+
+@unwrap_spec(no=int)
+def descr_new_box(space, w_tp, no):
+ return WrappedBox(jit_hooks.boxint_new(no))
+
+WrappedBox.typedef = TypeDef(
+ 'Box',
+ __new__ = interp2app(descr_new_box),
+ getint = interp2app(WrappedBox.descr_getint),
+)
+
+@unwrap_spec(num=int, offset=int, repr=str, res=WrappedBox)
+def descr_new_resop(space, w_tp, num, w_args, res, offset=-1,
repr=''):
args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in
space.listview(w_args)]
- llres = space.interp_w(WrappedBox, w_res).llbox
- # XXX None case
+ if res is None:
+ llres = jit_hooks.emptyval()
+ else:
+ llres = res.llbox
return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr)
class WrappedOp(Wrappable):
@@ -143,6 +164,19 @@
def descr_getarg(self, space, no):
return WrappedBox(jit_hooks.resop_getarg(self.op, no))
+ @unwrap_spec(no=int, box=WrappedBox)
+ def descr_setarg(self, space, no, box):
+ jit_hooks.resop_setarg(self.op, no, box.llbox)
+ return space.w_None
+
+ def descr_getresult(self, space):
+ return WrappedBox(jit_hooks.resop_getresult(self.op))
+
+ @unwrap_spec(box=WrappedBox)
+ def descr_setresult(self, space, box):
+ jit_hooks.resop_setresult(self.op, box.llbox)
+ return space.w_None
+
WrappedOp.typedef = TypeDef(
'ResOperation',
__doc__ = WrappedOp.__doc__,
@@ -151,24 +185,8 @@
num = GetSetProperty(WrappedOp.descr_num),
name = GetSetProperty(WrappedOp.descr_name),
getarg = interp2app(WrappedOp.descr_getarg),
+ setarg = interp2app(WrappedOp.descr_setarg),
+ result = GetSetProperty(WrappedOp.descr_getresult,
+ WrappedOp.descr_setresult)
)
WrappedOp.acceptable_as_base_class = False
-
-class WrappedBox(Wrappable):
- """ A class representing a single box
- """
- def __init__(self, llbox):
- self.llbox = llbox
-
- def descr_getint(self, space):
- return space.wrap(jit_hooks.box_getint(self.llbox))
-
-@unwrap_spec(no=int)
-def descr_new_box(space, w_tp, no):
- return WrappedBox(jit_hooks.boxint_new(no))
-
-WrappedBox.typedef = TypeDef(
- 'Box',
- __new__ = interp2app(descr_new_box),
- getint = interp2app(WrappedBox.descr_getint),
-)
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -88,9 +88,11 @@
for w_item in space.listview(w_res):
item = space.interp_w(WrappedOp, w_item)
l.append(jit_hooks._cast_to_resop(item.op))
- operations[:] = l # modifying operations above is probably not
+ del operations[:] # modifying operations above is probably not
# a great idea since types may not work and we'll end up with
# half-working list and a segfault/fatal RPython error
+ for elem in l:
+ operations.append(elem)
except OperationError, e:
e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
cache.in_recursion = False
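The policy.py hunk above replaces the slice assignment `operations[:] = l` with an explicit `del` plus `append` loop; both mutate the list object in place, so every holder of the original reference sees the new operations (the explicit loop is simply friendlier to RPython). A sketch of that aliasing behaviour:

```python
def replace_inplace(operations, new_ops):
    # Clear and refill the existing list object rather than rebinding it,
    # so callers holding a reference observe the replacement.
    del operations[:]
    for op in new_ops:
        operations.append(op)

ops = [1, 2, 3]
alias = ops            # a second reference to the same list
replace_inplace(ops, [9, 8])
```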
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -193,3 +193,9 @@
assert op.name == 'int_add'
box = op.getarg(0)
assert box.getint() == 1
+ box2 = op.result
+ assert box2.getint() == 4
+ op.setarg(0, box2)
+ assert op.getarg(0).getint() == 4
+ op.result = box
+ assert op.result.getint() == 1
diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py
--- a/pypy/rlib/jit_hooks.py
+++ b/pypy/rlib/jit_hooks.py
@@ -44,6 +44,9 @@
return lltype.cast_opaque_ptr(llmemory.GCREF,
cast_instance_to_base_ptr(obj))
+def emptyval():
+ return lltype.nullptr(llmemory.GCREF.TO)
+
@register_helper(annmodel.SomePtr(llmemory.GCREF))
def resop_new(no, llargs, llres):
from pypy.jit.metainterp.history import ResOperation
@@ -69,6 +72,18 @@
def resop_getarg(llop, no):
return _cast_to_gcref(_cast_to_resop(llop).getarg(no))
+@register_helper(annmodel.s_None)
+def resop_setarg(llop, no, llbox):
+ _cast_to_resop(llop).setarg(no, _cast_to_box(llbox))
+
+@register_helper(annmodel.SomePtr(llmemory.GCREF))
+def resop_getresult(llop):
+ return _cast_to_gcref(_cast_to_resop(llop).result)
+
+@register_helper(annmodel.s_None)
+def resop_setresult(llop, llbox):
+ _cast_to_resop(llop).result = _cast_to_box(llbox)
+
@register_helper(annmodel.SomeInteger())
def box_getint(llbox):
return _cast_to_box(llbox).getint()
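The commit above grows the resop helpers with `resop_setarg`/`resop_getresult`/`resop_setresult` and mirrors them on the app-level `WrappedOp` as `setarg` plus a read-write `result` property. A plain-Python sketch of that accessor surface (illustrative class, not the RPython wrapper):

```python
class OpSketch(object):
    def __init__(self, args, result):
        self._args = list(args)
        self._result = result

    def getarg(self, no):
        return self._args[no]

    def setarg(self, no, box):
        self._args[no] = box

    # `result` behaves like the GetSetProperty pair in the diff:
    # reads go through descr_getresult, writes through descr_setresult.
    @property
    def result(self):
        return self._result

    @result.setter
    def result(self, box):
        self._result = box
```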
From noreply at buildbot.pypy.org Sun Jan 8 20:57:34 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 20:57:34 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: obscure translation fix and a
real fix
Message-ID: <20120108195734.5909182110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51145:6014710c801a
Date: 2012-01-08 21:57 +0200
http://bitbucket.org/pypy/pypy/changeset/6014710c801a/
Log: obscure translation fix and a real fix
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -17,6 +17,7 @@
def __init__(self, space):
self.w_compile_hook = space.w_None
self.w_abort_hook = space.w_None
+ self.w_optimize_hook = space.w_None
def wrap_greenkey(space, jitdriver, greenkey):
if jitdriver.name == 'pypyjit':
@@ -174,6 +175,7 @@
@unwrap_spec(box=WrappedBox)
def descr_setresult(self, space, box):
+ assert isinstance(box, WrappedBox)
jit_hooks.resop_setresult(self.op, box.llbox)
return space.w_None
From noreply at buildbot.pypy.org Sun Jan 8 21:03:51 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 21:03:51 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: remove nonworking stuff and
return space.w_None
Message-ID: <20120108200351.F222682110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51146:22a0d8fd2ca8
Date: 2012-01-08 22:03 +0200
http://bitbucket.org/pypy/pypy/changeset/22a0d8fd2ca8/
Log: remove nonworking stuff and return space.w_None
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -60,7 +60,6 @@
cache = space.fromcache(Cache)
cache.w_compile_hook = w_hook
cache.in_recursion = NonConstant(False)
- return space.w_None
def set_optimize_hook(space, w_hook):
""" set_compile_hook(hook)
@@ -91,7 +90,6 @@
cache = space.fromcache(Cache)
cache.w_optimize_hook = w_hook
cache.in_recursion = NonConstant(False)
- return space.w_None
def set_abort_hook(space, w_hook):
""" set_abort_hook(hook)
@@ -107,7 +105,6 @@
cache = space.fromcache(Cache)
cache.w_abort_hook = w_hook
cache.in_recursion = NonConstant(False)
- return space.w_None
def wrap_oplist(space, logops, operations, ops_offset):
return [WrappedOp(jit_hooks._cast_to_gcref(op),
@@ -168,16 +165,13 @@
@unwrap_spec(no=int, box=WrappedBox)
def descr_setarg(self, space, no, box):
jit_hooks.resop_setarg(self.op, no, box.llbox)
- return space.w_None
def descr_getresult(self, space):
return WrappedBox(jit_hooks.resop_getresult(self.op))
- @unwrap_spec(box=WrappedBox)
- def descr_setresult(self, space, box):
- assert isinstance(box, WrappedBox)
+ def descr_setresult(self, space, w_box):
+ box = space.interp_w(WrappedBox, w_box)
jit_hooks.resop_setresult(self.op, box.llbox)
- return space.w_None
WrappedOp.typedef = TypeDef(
'ResOperation',
From noreply at buildbot.pypy.org Sun Jan 8 21:17:03 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sun, 8 Jan 2012 21:17:03 +0100 (CET)
Subject: [pypy-commit] pypy default: issue900: Implement processor pinning
on win32,
Message-ID: <20120108201703.D4C0482110@wyvern.cs.uni-duesseldorf.de>
Author: Amaury Forgeot d'Arc
Branch:
Changeset: r51147:a7e8e37cbf30
Date: 2012-01-08 21:16 +0100
http://bitbucket.org/pypy/pypy/changeset/a7e8e37cbf30/
Log: issue900: Implement processor pinning on win32, should fix
inconsistent figures with cProfile.
diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py
--- a/pypy/module/_lsprof/interp_lsprof.py
+++ b/pypy/module/_lsprof/interp_lsprof.py
@@ -19,8 +19,9 @@
# cpu affinity settings
srcdir = py.path.local(pypydir).join('translator', 'c', 'src')
-eci = ExternalCompilationInfo(separate_module_files=
- [srcdir.join('profiling.c')])
+eci = ExternalCompilationInfo(
+ separate_module_files=[srcdir.join('profiling.c')],
+ export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling'])
c_setup_profiling = rffi.llexternal('pypy_setup_profiling',
[], lltype.Void,
diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c
--- a/pypy/translator/c/src/profiling.c
+++ b/pypy/translator/c/src/profiling.c
@@ -29,6 +29,35 @@
profiling_setup = 0;
}
}
+
+#elif defined(_WIN32)
+#include <windows.h>
+
+DWORD_PTR base_affinity_mask;
+int profiling_setup = 0;
+
+void pypy_setup_profiling() {
+ if (!profiling_setup) {
+ DWORD_PTR affinity_mask, system_affinity_mask;
+ GetProcessAffinityMask(GetCurrentProcess(),
+ &base_affinity_mask, &system_affinity_mask);
+ affinity_mask = 1;
+ /* Pick one cpu allowed by the system */
+ if (system_affinity_mask)
+ while ((affinity_mask & system_affinity_mask) == 0)
+ affinity_mask <<= 1;
+ SetProcessAffinityMask(GetCurrentProcess(), affinity_mask);
+ profiling_setup = 1;
+ }
+}
+
+void pypy_teardown_profiling() {
+ if (profiling_setup) {
+ SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask);
+ profiling_setup = 0;
+ }
+}
+
#else
void pypy_setup_profiling() { }
void pypy_teardown_profiling() { }
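The CPU-picking loop in pypy_setup_profiling above scans upward for the lowest
bit permitted by the system affinity mask, so the profiled process is pinned to
a single allowed CPU. The same logic can be sketched in Python (the helper name
is ours, not part of the patch):

```python
def pick_one_cpu(system_affinity_mask):
    """Return a mask with exactly one bit set: the lowest CPU the system allows."""
    affinity_mask = 1
    if system_affinity_mask:
        # shift until we hit a bit that the system mask permits
        while (affinity_mask & system_affinity_mask) == 0:
            affinity_mask <<= 1
    return affinity_mask

# e.g. a system mask of 0b1100 (only CPUs 2 and 3 allowed) pins to CPU 2
assert pick_one_cpu(0b1100) == 0b0100
```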
From noreply at buildbot.pypy.org Sun Jan 8 21:57:05 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 21:57:05 +0100 (CET)
Subject: [pypy-commit] pypy default: fix test_resoperation?
Message-ID: <20120108205705.9017482110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51148:58b011f973ba
Date: 2012-01-08 22:48 +0200
http://bitbucket.org/pypy/pypy/changeset/58b011f973ba/
Log: fix test_resoperation?
diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py
--- a/pypy/jit/metainterp/test/test_resoperation.py
+++ b/pypy/jit/metainterp/test/test_resoperation.py
@@ -30,17 +30,17 @@
cls = rop.opclasses[rop.rop.INT_ADD]
assert issubclass(cls, rop.PlainResOp)
assert issubclass(cls, rop.BinaryOp)
- assert cls.getopnum.im_func(None) == rop.rop.INT_ADD
+ assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD
cls = rop.opclasses[rop.rop.CALL]
assert issubclass(cls, rop.ResOpWithDescr)
assert issubclass(cls, rop.N_aryOp)
- assert cls.getopnum.im_func(None) == rop.rop.CALL
+ assert cls.getopnum.im_func(cls) == rop.rop.CALL
cls = rop.opclasses[rop.rop.GUARD_TRUE]
assert issubclass(cls, rop.GuardResOp)
assert issubclass(cls, rop.UnaryOp)
- assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE
+ assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE
def test_mixins_in_common_base():
INT_ADD = rop.opclasses[rop.rop.INT_ADD]
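The test change above reflects that getopnum now reads data through its first
argument, so the raw function can no longer be called with None as self; passing
the class itself works because the data lives on the class. A minimal sketch of
the distinction, in plain Python, using __dict__ access where the archived
Python 2 code uses im_func:

```python
class IntAdd(object):
    opnum = 7                 # class-level data, like the generated opnum
    def getopnum(self):
        return self.opnum

# the underlying plain function, as cls.getopnum.im_func in Python 2
raw = IntAdd.__dict__['getopnum']

assert raw(IntAdd) == 7       # the class carries 'opnum', so it works as 'self'
try:
    raw(None)                 # None has no 'opnum' attribute
except AttributeError:
    pass
```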
From noreply at buildbot.pypy.org Sun Jan 8 21:57:06 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 21:57:06 +0100 (CET)
Subject: [pypy-commit] pypy default: merge
Message-ID: <20120108205706.B74DB82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51149:9835710fde04
Date: 2012-01-08 22:56 +0200
http://bitbucket.org/pypy/pypy/changeset/9835710fde04/
Log: merge
diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py
--- a/pypy/jit/metainterp/optimizeopt/fficall.py
+++ b/pypy/jit/metainterp/optimizeopt/fficall.py
@@ -234,11 +234,11 @@
# longlongs are treated as floats, see
# e.g. llsupport/descr.py:getDescrClass
is_float = True
- elif kind == 'u':
+ elif kind == 'u' or kind == 's':
# they're all False
pass
else:
- assert False, "unsupported ffitype or kind"
+ raise NotImplementedError("unsupported ffitype or kind: %s" % kind)
#
fieldsize = rffi.getintfield(ffitype, 'c_size')
return self.optimizer.cpu.interiorfielddescrof_dynamic(
diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py
--- a/pypy/jit/metainterp/test/test_fficall.py
+++ b/pypy/jit/metainterp/test/test_fficall.py
@@ -148,28 +148,38 @@
self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4,
'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2})
- def test_array_getitem_uint8(self):
+ def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE):
+ reds = ["n", "i", "s", "data"]
+ if COMPUTE_TYPE is lltype.Float:
+ # Move the float var to the back.
+ reds.remove("s")
+ reds.append("s")
myjitdriver = JitDriver(
greens = [],
- reds = ["n", "i", "s", "data"],
+ reds = reds,
)
def f(data, n):
- i = s = 0
+ i = 0
+ s = rffi.cast(COMPUTE_TYPE, 0)
while i < n:
myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data)
- s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0))
+ s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0))
i += 1
return s
+ def main(n):
+ with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data:
+ data[0] = rffi.cast(TYPE, 200)
+ return f(data, n)
+ assert self.meta_interp(main, [10]) == 2000
- def main(n):
- with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data:
- data[0] = rffi.cast(rffi.UCHAR, 200)
- return f(data, n)
-
- assert self.meta_interp(main, [10]) == 2000
+ def test_array_getitem_uint8(self):
+ self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed)
self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2,
'guard_true': 2, 'int_add': 4})
+ def test_array_getitem_float(self):
+ self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float)
+
class TestFfiCall(FfiCallTests, LLJitMixin):
supports_all = False
diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py
--- a/pypy/module/_lsprof/interp_lsprof.py
+++ b/pypy/module/_lsprof/interp_lsprof.py
@@ -19,8 +19,9 @@
# cpu affinity settings
srcdir = py.path.local(pypydir).join('translator', 'c', 'src')
-eci = ExternalCompilationInfo(separate_module_files=
- [srcdir.join('profiling.c')])
+eci = ExternalCompilationInfo(
+ separate_module_files=[srcdir.join('profiling.c')],
+ export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling'])
c_setup_profiling = rffi.llexternal('pypy_setup_profiling',
[], lltype.Void,
diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c
--- a/pypy/translator/c/src/profiling.c
+++ b/pypy/translator/c/src/profiling.c
@@ -29,6 +29,35 @@
profiling_setup = 0;
}
}
+
+#elif defined(_WIN32)
+#include <windows.h>
+
+DWORD_PTR base_affinity_mask;
+int profiling_setup = 0;
+
+void pypy_setup_profiling() {
+ if (!profiling_setup) {
+ DWORD_PTR affinity_mask, system_affinity_mask;
+ GetProcessAffinityMask(GetCurrentProcess(),
+ &base_affinity_mask, &system_affinity_mask);
+ affinity_mask = 1;
+ /* Pick one cpu allowed by the system */
+ if (system_affinity_mask)
+ while ((affinity_mask & system_affinity_mask) == 0)
+ affinity_mask <<= 1;
+ SetProcessAffinityMask(GetCurrentProcess(), affinity_mask);
+ profiling_setup = 1;
+ }
+}
+
+void pypy_teardown_profiling() {
+ if (profiling_setup) {
+ SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask);
+ profiling_setup = 0;
+ }
+}
+
#else
void pypy_setup_profiling() { }
void pypy_teardown_profiling() { }
From noreply at buildbot.pypy.org Sun Jan 8 22:37:47 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sun, 8 Jan 2012 22:37:47 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: oops
Message-ID: <20120108213747.40F0382110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51150:03976db091c4
Date: 2012-01-08 23:37 +0200
http://bitbucket.org/pypy/pypy/changeset/03976db091c4/
Log: oops
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -83,6 +83,7 @@
w_arg,
space.newlist(list_w))
if space.is_w(w_res, space.w_None):
+ cache.in_recursion = False
return
l = []
for w_item in space.listview(w_res):
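The one-line fix above resets in_recursion on the early-return path. The
invariant being maintained — the flag must be cleared on every exit from the
hook — can be sketched with a try/finally guard (names are illustrative, not
PyPy's actual API):

```python
class Cache(object):
    def __init__(self):
        self.in_recursion = False

def run_hook(cache, hook, arg):
    # skip re-entrant invocations instead of recursing forever
    if cache.in_recursion:
        return None
    cache.in_recursion = True
    try:
        return hook(arg)
    finally:
        # cleared on *every* exit path, including early returns --
        # the case the patch above had to handle explicitly
        cache.in_recursion = False

cache = Cache()
assert run_hook(cache, lambda x: x + 1, 1) == 2
assert cache.in_recursion is False   # flag was reset after the call
```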
From noreply at buildbot.pypy.org Sun Jan 8 23:22:03 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Sun, 8 Jan 2012 23:22:03 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: cleanup but no real progress
Message-ID: <20120108222203.A15A582110@wyvern.cs.uni-duesseldorf.de>
Author: mattip
Branch: numpypy-axisops
Changeset: r51151:de99533d42d0
Date: 2012-01-09 00:20 +0200
http://bitbucket.org/pypy/pypy/changeset/de99533d42d0/
Log: cleanup but no real progress
diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py
--- a/pypy/module/micronumpy/app_numpy.py
+++ b/pypy/module/micronumpy/app_numpy.py
@@ -58,10 +58,10 @@
a = numpypy.array(a)
return a.min()
-def max(a):
+def max(a, axis=None):
if not hasattr(a, "max"):
a = numpypy.array(a)
- return a.max()
+ return a.max(axis)
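The change above threads the new axis argument through to the underlying
array's max method. The delegation pattern can be sketched with a stand-in
array class, since numpypy itself is not assumed here:

```python
class FakeArray(object):
    """Stand-in for an ndarray: max() over all elements or along an axis."""
    def __init__(self, rows):
        self.rows = rows
    def max(self, axis=None):
        if axis is None:
            return max(v for row in self.rows for v in row)
        if axis == 0:
            return [max(col) for col in zip(*self.rows)]
        return [max(row) for row in self.rows]

def np_max(a, axis=None):
    # app-level helper: coerce if needed, then delegate, forwarding axis
    return a.max(axis)

a = FakeArray([[0, 1, 2], [3, 4, 5]])
assert np_max(a) == 5
assert np_max(a, 0) == [3, 4, 5]
assert np_max(a, 1) == [2, 5]
```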
def arange(start, stop=None, step=1, dtype=None):
'''arange([start], stop[, step], dtype=None)
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -39,8 +39,8 @@
axisreduce_driver = jit.JitDriver(
greens=['shapelen', 'sig'],
virtualizables=['frame'],
- reds=['self','result', 'ri', 'frame', 'nextval', 'dtype', 'value'],
- get_printable_location=signature.new_printable_location('reduce'),
+ reds=['identity', 'self','result', 'ri', 'frame', 'nextval', 'dtype', 'value'],
+ get_printable_location=signature.new_printable_location('axisreduce'),
)
@@ -692,6 +692,7 @@
# to allow garbage-collecting them
raise NotImplementedError
+ @jit.unroll_safe
def compute(self):
result = W_NDimArray(self.size, self.shape, self.find_dtype())
shapelen = len(self.shape)
@@ -757,6 +758,8 @@
class Reduce(VirtualArray):
+ _immutable_fields_ = ['dim', 'binfunc', 'dtype', 'identity']
+
def __init__(self, binfunc, name, dim, res_dtype, values, identity=None):
shape = values.shape[0:dim] + values.shape[dim + 1:len(values.shape)]
VirtualArray.__init__(self, name, shape, res_dtype)
@@ -789,11 +792,13 @@
value = self.identity.convert_to(self.dtype)
return value
+ @jit.unroll_safe
def compute(self):
dtype = self.dtype
result = W_NDimArray(self.size, self.shape, dtype)
self.values = self.values.get_concrete()
shapelen = len(result.shape)
+ identity = self.identity
sig = self.find_sig(res_shape=result.shape, arr=self.values)
ri = ArrayIterator(result.size)
frame = sig.create_frame(self.values, dim=self.dim)
@@ -804,9 +809,14 @@
value=value, sig=sig,
shapelen=shapelen, ri=ri,
nextval=nextval, dtype=dtype,
+ identity=identity,
result=result)
if frame.iterators[0].axis_done:
- value = self.get_identity(sig, frame, shapelen)
+ if identity is None:
+ value = sig.eval(frame, self.values).convert_to(dtype)
+ frame.next(shapelen)
+ else:
+ value = identity.convert_to(dtype)
ri = ri.next(shapelen)
assert isinstance(sig, signature.ReduceSignature)
nextval = sig.eval(frame, self.values).convert_to(dtype)
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -744,13 +744,11 @@
from numpypy import arange
a = arange(15).reshape(5, 3)
assert a.sum() == 105
+ assert a.max() == 14
assert (a.sum(0) == [30, 35, 40]).all()
assert (a.sum(1) == [3, 12, 21, 30, 39]).all()
assert (a.max(0) == [12, 13, 14]).all()
assert (a.max(1) == [2, 5, 8, 11, 14]).all()
- b = a.copy()
- #b should be an array, not a view
- assert (b.sum(1) == [3, 12, 21, 30, 39]).all()
def test_identity(self):
from numpypy import identity, array
diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py
--- a/pypy/module/micronumpy/test/test_zjit.py
+++ b/pypy/module/micronumpy/test/test_zjit.py
@@ -127,9 +127,17 @@
def test_axissum(self):
result = self.run("axissum")
assert result == 30
- self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 2,
- "int_add": 1, "int_ge": 1, "guard_false": 1,
- "jump": 1, 'arraylen_gc': 1})
+ self.check_simple_loop({'arraylen_gc': 1,
+ 'call': 1,
+ 'getfield_gc': 3,
+ "getinteriorfield_raw": 1,
+ "guard_class": 1,
+ "guard_false": 2,
+ 'guard_no_exception': 1,
+ "float_add": 1,
+ "jump": 1,
+ 'setinteriorfield_raw': 1,
+ })
def define_prod():
return """
From noreply at buildbot.pypy.org Sun Jan 8 23:38:53 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sun, 8 Jan 2012 23:38:53 +0100 (CET)
Subject: [pypy-commit] pypy default: ArgErr.getmsg() does not include the
function name anymore.
Message-ID: <20120108223853.3BB7F82110@wyvern.cs.uni-duesseldorf.de>
Author: Amaury Forgeot d'Arc
Branch:
Changeset: r51152:62df4f51cdc8
Date: 2012-01-08 20:35 +0100
http://bitbucket.org/pypy/pypy/changeset/62df4f51cdc8/
Log: ArgErr.getmsg() does not include the function name anymore. This
will make it easier to support Python3 and its unicode names.
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py
--- a/pypy/annotation/description.py
+++ b/pypy/annotation/description.py
@@ -257,7 +257,8 @@
try:
inputcells = args.match_signature(signature, defs_s)
except ArgErr, e:
- raise TypeError, "signature mismatch: %s" % e.getmsg(self.name)
+ raise TypeError("signature mismatch: %s() %s" %
+ (self.name, e.getmsg()))
return inputcells
def specialize(self, inputcells, op=None):
diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py
--- a/pypy/interpreter/argument.py
+++ b/pypy/interpreter/argument.py
@@ -428,8 +428,8 @@
return self._match_signature(w_firstarg,
scope_w, signature, defaults_w, 0)
except ArgErr, e:
- raise OperationError(self.space.w_TypeError,
- self.space.wrap(e.getmsg(fnname)))
+ raise operationerrfmt(self.space.w_TypeError,
+ "%s() %s", fnname, e.getmsg())
def _parse(self, w_firstarg, signature, defaults_w, blindargs=0):
"""Parse args and kwargs according to the signature of a code object,
@@ -450,8 +450,8 @@
try:
return self._parse(w_firstarg, signature, defaults_w, blindargs)
except ArgErr, e:
- raise OperationError(self.space.w_TypeError,
- self.space.wrap(e.getmsg(fnname)))
+ raise operationerrfmt(self.space.w_TypeError,
+ "%s() %s", fnname, e.getmsg())
@staticmethod
def frompacked(space, w_args=None, w_kwds=None):
@@ -626,7 +626,7 @@
class ArgErr(Exception):
- def getmsg(self, fnname):
+ def getmsg(self):
raise NotImplementedError
class ArgErrCount(ArgErr):
@@ -642,11 +642,10 @@
self.num_args = got_nargs
self.num_kwds = nkwds
- def getmsg(self, fnname):
+ def getmsg(self):
n = self.expected_nargs
if n == 0:
- msg = "%s() takes no arguments (%d given)" % (
- fnname,
+ msg = "takes no arguments (%d given)" % (
self.num_args + self.num_kwds)
else:
defcount = self.num_defaults
@@ -672,8 +671,7 @@
msg2 = " non-keyword"
else:
msg2 = ""
- msg = "%s() takes %s %d%s argument%s (%d given)" % (
- fnname,
+ msg = "takes %s %d%s argument%s (%d given)" % (
msg1,
n,
msg2,
@@ -686,9 +684,8 @@
def __init__(self, argname):
self.argname = argname
- def getmsg(self, fnname):
- msg = "%s() got multiple values for keyword argument '%s'" % (
- fnname,
+ def getmsg(self):
+ msg = "got multiple values for keyword argument '%s'" % (
self.argname)
return msg
@@ -722,13 +719,11 @@
break
self.kwd_name = name
- def getmsg(self, fnname):
+ def getmsg(self):
if self.num_kwds == 1:
- msg = "%s() got an unexpected keyword argument '%s'" % (
- fnname,
+ msg = "got an unexpected keyword argument '%s'" % (
self.kwd_name)
else:
- msg = "%s() got %d unexpected keyword arguments" % (
- fnname,
+ msg = "got %d unexpected keyword arguments" % (
self.num_kwds)
return msg
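The refactoring above moves the function name out of getmsg() and into the
call sites, which format "%s() %s"; the error object then never needs to carry
or encode the name, which the log notes will ease Python 3 unicode-name
support. A minimal sketch of the resulting split:

```python
class ArgErr(Exception):
    def getmsg(self):
        raise NotImplementedError

class ArgErrCount(ArgErr):
    def __init__(self, given):
        self.given = given
    def getmsg(self):
        # message fragment only: no function name here anymore
        return "takes no arguments (%d given)" % self.given

def report(fnname, err):
    # the caller, not the error object, knows the function name
    return "%s() %s" % (fnname, err.getmsg())

assert report("foo", ArgErrCount(1)) == "foo() takes no arguments (1 given)"
```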
diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py
--- a/pypy/interpreter/test/test_argument.py
+++ b/pypy/interpreter/test/test_argument.py
@@ -393,8 +393,8 @@
class FakeArgErr(ArgErr):
- def getmsg(self, fname):
- return "msg "+fname
+ def getmsg(self):
+ return "msg"
def _match_signature(*args):
raise FakeArgErr()
@@ -404,7 +404,7 @@
excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo",
Signature(["a", "b"], None, None))
assert excinfo.value.w_type is TypeError
- assert excinfo.value._w_value == "msg foo"
+ assert excinfo.value.get_w_value(space) == "foo() msg"
def test_args_parsing_into_scope(self):
@@ -448,8 +448,8 @@
class FakeArgErr(ArgErr):
- def getmsg(self, fname):
- return "msg "+fname
+ def getmsg(self):
+ return "msg"
def _match_signature(*args):
raise FakeArgErr()
@@ -460,7 +460,7 @@
"obj", [None, None], "foo",
Signature(["a", "b"], None, None))
assert excinfo.value.w_type is TypeError
- assert excinfo.value._w_value == "msg foo"
+ assert excinfo.value.get_w_value(space) == "foo() msg"
def test_topacked_frompacked(self):
space = DummySpace()
@@ -493,35 +493,35 @@
# got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg,
# defaults_w, missing_args
err = ArgErrCount(1, 0, 0, False, False, None, 0)
- s = err.getmsg('foo')
- assert s == "foo() takes no arguments (1 given)"
+ s = err.getmsg()
+ assert s == "takes no arguments (1 given)"
err = ArgErrCount(0, 0, 1, False, False, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 argument (0 given)"
err = ArgErrCount(3, 0, 2, False, False, [], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 2 arguments (3 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 2 arguments (3 given)"
err = ArgErrCount(3, 0, 2, False, False, ['a'], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes at most 2 arguments (3 given)"
+ s = err.getmsg()
+ assert s == "takes at most 2 arguments (3 given)"
err = ArgErrCount(1, 0, 2, True, False, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 2 arguments (1 given)"
+ s = err.getmsg()
+ assert s == "takes at least 2 arguments (1 given)"
err = ArgErrCount(0, 1, 2, True, False, ['a'], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes at least 1 non-keyword argument (0 given)"
err = ArgErrCount(2, 1, 1, False, True, [], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 non-keyword argument (2 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 non-keyword argument (2 given)"
err = ArgErrCount(0, 1, 1, False, True, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 non-keyword argument (0 given)"
err = ArgErrCount(0, 1, 1, True, True, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes at least 1 non-keyword argument (0 given)"
err = ArgErrCount(2, 1, 1, False, True, ['a'], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes at most 1 non-keyword argument (2 given)"
+ s = err.getmsg()
+ assert s == "takes at most 1 non-keyword argument (2 given)"
def test_bad_type_for_star(self):
space = self.space
@@ -543,12 +543,12 @@
def test_unknown_keywords(self):
space = DummySpace()
err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None)
- s = err.getmsg('foo')
- assert s == "foo() got an unexpected keyword argument 'b'"
+ s = err.getmsg()
+ assert s == "got an unexpected keyword argument 'b'"
err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'],
[True, False, False], None)
- s = err.getmsg('foo')
- assert s == "foo() got 2 unexpected keyword arguments"
+ s = err.getmsg()
+ assert s == "got 2 unexpected keyword arguments"
def test_unknown_unicode_keyword(self):
class DummySpaceUnicode(DummySpace):
@@ -558,13 +558,13 @@
err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'],
[True, False, True, True],
[unichr(0x1234), u'b', u'c'])
- s = err.getmsg('foo')
- assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'"
+ s = err.getmsg()
+ assert s == "got an unexpected keyword argument '\xe1\x88\xb4'"
def test_multiple_values(self):
err = ArgErrMultipleValues('bla')
- s = err.getmsg('foo')
- assert s == "foo() got multiple values for keyword argument 'bla'"
+ s = err.getmsg()
+ assert s == "got multiple values for keyword argument 'bla'"
class AppTestArgument:
def test_error_message(self):
From noreply at buildbot.pypy.org Mon Jan 9 11:38:33 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Mon, 9 Jan 2012 11:38:33 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Remove the extra debug
prints.
Message-ID: <20120109103833.B8DBE82110@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51153:75ce27172ee1
Date: 2012-01-09 11:38 +0100
http://bitbucket.org/pypy/pypy/changeset/75ce27172ee1/
Log: Remove the extra debug prints.
diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -240,7 +240,7 @@
hdr = self.header(obj)
hdr.tid = self.combine(typeid, self.current_young_marker, 0)
hdr.next = self.new_young_objects
- debug_print("malloc:", rawtotalsize, obj)
+ #debug_print("malloc:", rawtotalsize, obj)
self.new_young_objects = hdr
self.new_young_objects_size += r_uint(rawtotalsize)
if self.new_young_objects_size > self.nursery_limit:
@@ -271,7 +271,7 @@
hdr.next = self.new_young_objects
totalsize = llarena.round_up_for_allocation(totalsize)
rawtotalsize = raw_malloc_usage(totalsize)
- debug_print("malloc:", rawtotalsize, obj)
+ #debug_print("malloc:", rawtotalsize, obj)
self.new_young_objects = hdr
self.new_young_objects_size += r_uint(rawtotalsize)
if self.new_young_objects_size > self.nursery_limit:
@@ -326,7 +326,7 @@
cym = self.current_young_marker
com = self.current_old_marker
mark = self.get_mark(obj)
- debug_print("deletion_barrier:", mark, obj)
+ #debug_print("deletion_barrier:", mark, obj)
#
if mark == com: # most common case, make it fast
#
@@ -661,8 +661,8 @@
# NB. it's ok to edit 'gray_objects' from the mutator thread here,
# because the collector thread is not running yet
obj = root.address[0]
- debug_print("_add_stack_root", obj)
- assert 'DEAD' not in repr(obj)
+ #debug_print("_add_stack_root", obj)
+ #assert 'DEAD' not in repr(obj)
self.get_mark(obj)
self.collector.gray_objects.append(obj)
@@ -699,7 +699,7 @@
while list != self.NULL:
obj = llmemory.cast_ptr_to_adr(list) + size_gc_header
size1 = size_gc_header + self.get_size(obj)
- print "debug:", llmemory.raw_malloc_usage(size1)
+ #print "debug:", llmemory.raw_malloc_usage(size1)
size += llmemory.raw_malloc_usage(size1)
# detect loops
ll_assert(list != previous, "loop!")
@@ -707,7 +707,7 @@
if count & (count-1) == 0: # only on powers of two, to
previous = list # detect loops of any size
list = list.next
- print "\tTOTAL:", size
+ #print "\tTOTAL:", size
ll_assert(size == totalsize, "bogus total size in linked list")
return count
@@ -979,7 +979,7 @@
# we scan a modified content --- and the original content
# is never scanned.
#
- debug_print("mark:", obj)
+ #debug_print("mark:", obj)
self.gc.trace(obj, self._collect_add_pending, None)
self.set_mark(obj, com)
#
@@ -1033,7 +1033,7 @@
if mark == still_not_marked:
# the object is still not marked. Free it.
blockadr = llmemory.cast_ptr_to_adr(hdr)
- debug_print("free:", blockadr + size_gc_header)
+ #debug_print("free:", blockadr + size_gc_header)
blockadr = llarena.getfakearenaaddress(blockadr)
llarena.arena_free(blockadr)
#
From noreply at buildbot.pypy.org Mon Jan 9 11:56:36 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:36 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: update some more tests
Message-ID: <20120109105636.B55AA82110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51154:9688a1080b2b
Date: 2011-12-30 20:28 +0100
http://bitbucket.org/pypy/pypy/changeset/9688a1080b2b/
Log: update some more tests
diff --git a/pypy/jit/backend/arm/test/test_gc_integration.py b/pypy/jit/backend/arm/test/test_gc_integration.py
--- a/pypy/jit/backend/arm/test/test_gc_integration.py
+++ b/pypy/jit/backend/arm/test/test_gc_integration.py
@@ -46,82 +46,19 @@
return ['compressed'] + shape[1:]
-class MockGcRootMap2(object):
- is_shadow_stack = False
-
- def get_basic_shape(self, is_64_bit):
- return ['shape']
-
- def add_frame_offset(self, shape, offset):
- shape.append(offset)
-
- def add_callee_save_reg(self, shape, reg_index):
- index_to_name = {1: 'ebx', 2: 'esi', 3: 'edi'}
- shape.append(index_to_name[reg_index])
-
- def compress_callshape(self, shape, datablockwrapper):
- assert datablockwrapper == 'fakedatablockwrapper'
- assert shape[0] == 'shape'
- return ['compressed'] + shape[1:]
-
-
class MockGcDescr(GcCache):
- is_shadow_stack = False
-
- def get_funcptr_for_new(self):
- return 123
-
- get_funcptr_for_newarray = get_funcptr_for_new
- get_funcptr_for_newstr = get_funcptr_for_new
- get_funcptr_for_newunicode = get_funcptr_for_new
get_malloc_slowpath_addr = None
-
+ write_barrier_descr = None
moving_gc = True
gcrootmap = MockGcRootMap()
def initialize(self):
pass
- record_constptrs = GcLLDescr_framework.record_constptrs.im_func
+ _record_constptrs = GcLLDescr_framework._record_constptrs.im_func
rewrite_assembler = GcLLDescr_framework.rewrite_assembler.im_func
-class TestRegallocDirectGcIntegration(object):
-
- def test_mark_gc_roots(self):
- py.test.skip('roots')
- cpu = CPU(None, None)
- cpu.setup_once()
- regalloc = Regalloc(MockAssembler(cpu, MockGcDescr(False)))
- regalloc.assembler.datablockwrapper = 'fakedatablockwrapper'
- boxes = [BoxPtr() for i in range(len(ARMv7RegisterManager.all_regs))]
- longevity = {}
- for box in boxes:
- longevity[box] = (0, 1)
- regalloc.fm = ARMFrameManager()
- regalloc.rm = ARMv7RegisterManager(longevity, regalloc.fm,
- assembler=regalloc.assembler)
- regalloc.xrm = VFPRegisterManager(longevity, regalloc.fm,
- assembler=regalloc.assembler)
- cpu = regalloc.assembler.cpu
- for box in boxes:
- regalloc.rm.try_allocate_reg(box)
- TP = lltype.FuncType([], lltype.Signed)
- calldescr = cpu.calldescrof(TP, TP.ARGS, TP.RESULT,
- EffectInfo.MOST_GENERAL)
- regalloc.rm._check_invariants()
- box = boxes[0]
- regalloc.position = 0
- regalloc.consider_call(ResOperation(rop.CALL, [box], BoxInt(),
- calldescr))
- assert len(regalloc.assembler.movs) == 3
- #
- mark = regalloc.get_mark_gc_roots(cpu.gc_ll_descr.gcrootmap)
- assert mark[0] == 'compressed'
- base = -WORD * FRAME_FIXED_SIZE
- expected = ['ebx', 'esi', 'edi', base, base-WORD, base-WORD*2]
- assert dict.fromkeys(mark[1:]) == dict.fromkeys(expected)
-
class TestRegallocGcIntegration(BaseTestRegalloc):
cpu = CPU(None, None)
@@ -199,42 +136,32 @@
'''
self.interpret(ops, [0, 0, 0, 0, 0, 0, 0, 0, 0], run=False)
+NOT_INITIALIZED = chr(0xdd)
+
class GCDescrFastpathMalloc(GcLLDescription):
gcrootmap = None
- expected_malloc_slowpath_size = WORD*2
+ write_barrier_descr = None
def __init__(self):
- GcCache.__init__(self, False)
+ GcLLDescription.__init__(self, None)
# create a nursery
- NTP = rffi.CArray(lltype.Signed)
- self.nursery = lltype.malloc(NTP, 16, flavor='raw')
- self.addrs = lltype.malloc(rffi.CArray(lltype.Signed), 3,
+ NTP = rffi.CArray(lltype.Char)
+ self.nursery = lltype.malloc(NTP, 64, flavor='raw')
+ for i in range(64):
+ self.nursery[i] = NOT_INITIALIZED
+ self.addrs = lltype.malloc(rffi.CArray(lltype.Signed), 2,
flavor='raw')
self.addrs[0] = rffi.cast(lltype.Signed, self.nursery)
- self.addrs[1] = self.addrs[0] + 16*WORD
- self.addrs[2] = 0
- # 16 WORDs
+ self.addrs[1] = self.addrs[0] + 64
+ self.calls = []
def malloc_slowpath(size):
- assert size == self.expected_malloc_slowpath_size
+ self.calls.append(size)
+ # reset the nursery
nadr = rffi.cast(lltype.Signed, self.nursery)
self.addrs[0] = nadr + size
- self.addrs[2] += 1
return nadr
- self.malloc_slowpath = malloc_slowpath
- self.MALLOC_SLOWPATH = lltype.FuncType([lltype.Signed],
- lltype.Signed)
- self._counter = 123000
-
- def can_inline_malloc(self, descr):
- return True
-
- def get_funcptr_for_new(self):
- return 42
-# return llhelper(lltype.Ptr(self.NEW_TP), self.new)
-
- def init_size_descr(self, S, descr):
- descr.tid = self._counter
- self._counter += 1
+ self.generate_function('malloc_nursery', malloc_slowpath,
+ [lltype.Signed], lltype.Signed)
def get_nursery_free_addr(self):
return rffi.cast(lltype.Signed, self.addrs)
@@ -243,209 +170,61 @@
return rffi.cast(lltype.Signed, self.addrs) + WORD
def get_malloc_slowpath_addr(self):
- fptr = llhelper(lltype.Ptr(self.MALLOC_SLOWPATH), self.malloc_slowpath)
- return rffi.cast(lltype.Signed, fptr)
+ return self.get_malloc_fn_addr('malloc_nursery')
- get_funcptr_for_newarray = None
- get_funcptr_for_newstr = None
- get_funcptr_for_newunicode = None
+ def check_nothing_in_nursery(self):
+ # CALL_MALLOC_NURSERY should not write anything in the nursery
+ for i in range(64):
+ assert self.nursery[i] == NOT_INITIALIZED
class TestMallocFastpath(BaseTestRegalloc):
def setup_method(self, method):
cpu = CPU(None, None)
- cpu.vtable_offset = WORD
cpu.gc_ll_descr = GCDescrFastpathMalloc()
cpu.setup_once()
+ self.cpu = cpu
- # hack: specify 'tid' explicitly, because this test is not running
- # with the gc transformer
- NODE = lltype.GcStruct('node', ('tid', lltype.Signed),
- ('value', lltype.Signed))
- nodedescr = cpu.sizeof(NODE)
- valuedescr = cpu.fielddescrof(NODE, 'value')
-
- self.cpu = cpu
- self.nodedescr = nodedescr
- vtable = lltype.malloc(rclass.OBJECT_VTABLE, immortal=True)
- vtable_int = cpu.cast_adr_to_int(llmemory.cast_ptr_to_adr(vtable))
- NODE2 = lltype.GcStruct('node2',
- ('parent', rclass.OBJECT),
- ('tid', lltype.Signed),
- ('vtable', lltype.Ptr(rclass.OBJECT_VTABLE)))
- descrsize = cpu.sizeof(NODE2)
- heaptracker.register_known_gctype(cpu, vtable, NODE2)
- self.descrsize = descrsize
- self.vtable_int = vtable_int
-
- self.namespace = locals().copy()
-
def test_malloc_fastpath(self):
ops = '''
- [i0]
- p0 = new(descr=nodedescr)
- setfield_gc(p0, i0, descr=valuedescr)
- finish(p0)
+ []
+ p0 = call_malloc_nursery(16)
+ p1 = call_malloc_nursery(32)
+ p2 = call_malloc_nursery(16)
+ finish(p0, p1, p2)
'''
- self.interpret(ops, [42])
- # check the nursery
+ self.interpret(ops, [])
+ # check the returned pointers
gc_ll_descr = self.cpu.gc_ll_descr
- assert gc_ll_descr.nursery[0] == self.nodedescr.tid
- assert gc_ll_descr.nursery[1] == 42
nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery)
- assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*2)
- assert gc_ll_descr.addrs[2] == 0 # slowpath never called
+ ref = self.cpu.get_latest_value_ref
+ assert rffi.cast(lltype.Signed, ref(0)) == nurs_adr + 0
+ assert rffi.cast(lltype.Signed, ref(1)) == nurs_adr + 16
+ assert rffi.cast(lltype.Signed, ref(2)) == nurs_adr + 48
+ # check the nursery content and state
+ gc_ll_descr.check_nothing_in_nursery()
+ assert gc_ll_descr.addrs[0] == nurs_adr + 64
+ # slowpath never called
+ assert gc_ll_descr.calls == []
def test_malloc_slowpath(self):
ops = '''
[]
- p0 = new(descr=nodedescr)
- p1 = new(descr=nodedescr)
- p2 = new(descr=nodedescr)
- p3 = new(descr=nodedescr)
- p4 = new(descr=nodedescr)
- p5 = new(descr=nodedescr)
- p6 = new(descr=nodedescr)
- p7 = new(descr=nodedescr)
- p8 = new(descr=nodedescr)
- finish(p0, p1, p2, p3, p4, p5, p6, p7, p8)
+ p0 = call_malloc_nursery(16)
+ p1 = call_malloc_nursery(32)
+ p2 = call_malloc_nursery(24) # overflow
+ finish(p0, p1, p2)
'''
self.interpret(ops, [])
+ # check the returned pointers
+ gc_ll_descr = self.cpu.gc_ll_descr
+ nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery)
+ ref = self.cpu.get_latest_value_ref
+ assert rffi.cast(lltype.Signed, ref(0)) == nurs_adr + 0
+ assert rffi.cast(lltype.Signed, ref(1)) == nurs_adr + 16
+ assert rffi.cast(lltype.Signed, ref(2)) == nurs_adr + 0
+ # check the nursery content and state
+ gc_ll_descr.check_nothing_in_nursery()
+ assert gc_ll_descr.addrs[0] == nurs_adr + 24
# this should call slow path once
- gc_ll_descr = self.cpu.gc_ll_descr
- nadr = rffi.cast(lltype.Signed, gc_ll_descr.nursery)
- assert gc_ll_descr.addrs[0] == nadr + (WORD*2)
- assert gc_ll_descr.addrs[2] == 1 # slowpath called once
-
- def test_new_with_vtable(self):
- ops = '''
- [i0, i1]
- p0 = new_with_vtable(ConstClass(vtable))
- guard_class(p0, ConstClass(vtable)) [i0]
- finish(i1)
- '''
- self.interpret(ops, [0, 1])
- assert self.getint(0) == 1
- gc_ll_descr = self.cpu.gc_ll_descr
- assert gc_ll_descr.nursery[0] == self.descrsize.tid
- assert gc_ll_descr.nursery[1] == self.vtable_int
- nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery)
- assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*3)
- assert gc_ll_descr.addrs[2] == 0 # slowpath never called
-
-
-class Seen(Exception):
- pass
-
-
-class GCDescrFastpathMallocVarsize(GCDescrFastpathMalloc):
- def can_inline_malloc_varsize(self, arraydescr, num_elem):
- return num_elem < 5
-
- def get_funcptr_for_newarray(self):
- return 52
-
- def init_array_descr(self, A, descr):
- descr.tid = self._counter
- self._counter += 1
-
- def args_for_new_array(self, descr):
- raise Seen("args_for_new_array")
-
-
-class TestMallocVarsizeFastpath(BaseTestRegalloc):
- def setup_method(self, method):
- cpu = CPU(None, None)
- cpu.vtable_offset = WORD
- cpu.gc_ll_descr = GCDescrFastpathMallocVarsize()
- cpu.setup_once()
- self.cpu = cpu
-
- ARRAY = lltype.GcArray(lltype.Signed)
- arraydescr = cpu.arraydescrof(ARRAY)
- self.arraydescr = arraydescr
- ARRAYCHAR = lltype.GcArray(lltype.Char)
- arraychardescr = cpu.arraydescrof(ARRAYCHAR)
-
- self.namespace = locals().copy()
-
- def test_malloc_varsize_fastpath(self):
- # Hack. Running the GcLLDescr_framework without really having
- # a complete GC means that we end up with both the tid and the
- # length being at offset 0. In this case, so the length overwrites
- # the tid. This is of course only the case in this test class.
- ops = '''
- []
- p0 = new_array(4, descr=arraydescr)
- setarrayitem_gc(p0, 0, 142, descr=arraydescr)
- setarrayitem_gc(p0, 3, 143, descr=arraydescr)
- finish(p0)
- '''
- self.interpret(ops, [])
- # check the nursery
- gc_ll_descr = self.cpu.gc_ll_descr
- assert gc_ll_descr.nursery[0] == 4
- assert gc_ll_descr.nursery[1] == 142
- assert gc_ll_descr.nursery[4] == 143
- nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery)
- assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*5)
- assert gc_ll_descr.addrs[2] == 0 # slowpath never called
-
- def test_malloc_varsize_slowpath(self):
- ops = '''
- []
- p0 = new_array(4, descr=arraydescr)
- setarrayitem_gc(p0, 0, 420, descr=arraydescr)
- setarrayitem_gc(p0, 3, 430, descr=arraydescr)
- p1 = new_array(4, descr=arraydescr)
- setarrayitem_gc(p1, 0, 421, descr=arraydescr)
- setarrayitem_gc(p1, 3, 431, descr=arraydescr)
- p2 = new_array(4, descr=arraydescr)
- setarrayitem_gc(p2, 0, 422, descr=arraydescr)
- setarrayitem_gc(p2, 3, 432, descr=arraydescr)
- p3 = new_array(4, descr=arraydescr)
- setarrayitem_gc(p3, 0, 423, descr=arraydescr)
- setarrayitem_gc(p3, 3, 433, descr=arraydescr)
- finish(p0, p1, p2, p3)
- '''
- gc_ll_descr = self.cpu.gc_ll_descr
- gc_ll_descr.expected_malloc_slowpath_size = 5*WORD
- self.interpret(ops, [])
- assert gc_ll_descr.addrs[2] == 1 # slowpath called once
-
- def test_malloc_varsize_too_big(self):
- ops = '''
- []
- p0 = new_array(5, descr=arraydescr)
- finish(p0)
- '''
- py.test.raises(Seen, self.interpret, ops, [])
-
- def test_malloc_varsize_variable(self):
- ops = '''
- [i0]
- p0 = new_array(i0, descr=arraydescr)
- finish(p0)
- '''
- py.test.raises(Seen, self.interpret, ops, [])
-
- def test_malloc_array_of_char(self):
- # check that fastpath_malloc_varsize() respects the alignment
- # of the pointer in the nursery
- ops = '''
- []
- p1 = new_array(1, descr=arraychardescr)
- p2 = new_array(2, descr=arraychardescr)
- p3 = new_array(3, descr=arraychardescr)
- p4 = new_array(4, descr=arraychardescr)
- finish(p1, p2, p3, p4)
- '''
- self.interpret(ops, [])
- p1 = self.getptr(0, llmemory.GCREF)
- p2 = self.getptr(1, llmemory.GCREF)
- p3 = self.getptr(2, llmemory.GCREF)
- p4 = self.getptr(3, llmemory.GCREF)
- assert p1._obj.intval & (WORD-1) == 0 # aligned
- assert p2._obj.intval & (WORD-1) == 0 # aligned
- assert p3._obj.intval & (WORD-1) == 0 # aligned
- assert p4._obj.intval & (WORD-1) == 0 # aligned
+ assert gc_ll_descr.calls == [24]
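The rewritten tests above exercise a bump-pointer nursery: `addrs[0]` is the next-free pointer, `addrs[1]` the nursery top, and an allocation that would cross the top falls back to the slow path (which, in this test GC, simply records the call and resets the nursery). A minimal pure-Python sketch of that logic, with illustrative names (`Nursery`, `alloc`) that are not the actual PyPy API and with offsets standing in for real addresses:

```python
class Nursery:
    """Toy bump-pointer nursery mirroring GCDescrFastpathMalloc above."""

    def __init__(self, size=64):
        self.top = size            # end of the nursery (addrs[1])
        self.free = 0              # next free offset (addrs[0])
        self.slowpath_calls = []   # sizes seen by the slow path

    def alloc(self, nbytes):
        # Fast path: bump the free pointer if the request fits.
        if self.free + nbytes <= self.top:
            result = self.free
            self.free += nbytes
            return result
        # Slow path: the test GC resets the nursery and records the
        # call; a real GC would run a minor collection here instead.
        self.slowpath_calls.append(nbytes)
        self.free = nbytes
        return 0

n = Nursery()
p0 = n.alloc(16)   # offset 0
p1 = n.alloc(32)   # offset 16
p2 = n.alloc(24)   # 48 + 24 overflows: slow path, offset 0 again
```

This reproduces the expectations in `test_malloc_slowpath`: the third allocation restarts at the nursery base and `calls == [24]`.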
diff --git a/pypy/jit/backend/arm/test/test_generated.py b/pypy/jit/backend/arm/test/test_generated.py
--- a/pypy/jit/backend/arm/test/test_generated.py
+++ b/pypy/jit/backend/arm/test/test_generated.py
@@ -137,7 +137,7 @@
looptoken = JitCellToken()
cpu.compile_loop(inputargs, operations, looptoken)
args = [-5 , 24 , 46 , -15 , 13 , -8 , 0 , -6 , 6 , 6]
- op = cpu.execute_token(looptoken)
+ op = cpu.execute_token(looptoken, *args)
assert op.identifier == 2
assert cpu.get_latest_value_int(0) == 24
assert cpu.get_latest_value_int(1) == -32
diff --git a/pypy/jit/backend/arm/test/test_regalloc.py b/pypy/jit/backend/arm/test/test_regalloc.py
--- a/pypy/jit/backend/arm/test/test_regalloc.py
+++ b/pypy/jit/backend/arm/test/test_regalloc.py
@@ -151,20 +151,20 @@
loop = self.parse(ops)
looptoken = JitCellToken()
self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken)
- args = []
+ arguments = []
for arg in args:
if isinstance(arg, int):
- args.append(arg)
+ arguments.append(arg)
elif isinstance(arg, float):
arg = longlong.getfloatstorage(arg)
- args.append(arg)
+ arguments.append(arg)
else:
assert isinstance(lltype.typeOf(arg), lltype.Ptr)
llgcref = lltype.cast_opaque_ptr(llmemory.GCREF, arg)
- args.append(llgcref)
+ arguments.append(llgcref)
loop._jitcelltoken = looptoken
if run:
- self.cpu.execute_token(looptoken, *args)
+ self.cpu.execute_token(looptoken, *arguments)
return loop
def prepare_loop(self, ops):
From noreply at buildbot.pypy.org Mon Jan 9 11:56:37 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:37 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: remove assertion
	that does not work anymore

Message-ID: <20120109105637.DEB3082110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51155:72a4e791d5e5
Date: 2011-12-30 20:29 +0100
http://bitbucket.org/pypy/pypy/changeset/72a4e791d5e5/
Log: remove assertion that does not work anymore
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -1039,8 +1039,6 @@
def malloc_cond(self, nursery_free_adr, nursery_top_adr, size):
assert size & (WORD-1) == 0 # must be correctly aligned
- size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery)
- size = (size + WORD - 1) & ~(WORD - 1) # round up
self.mc.gen_load_int(r.r0.value, nursery_free_adr)
self.mc.LDR_ri(r.r0.value, r.r0.value)
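The two removed lines enforced a minimum size and rounded the requested size up to word alignment inside `malloc_cond`; after this change the assertion requires callers to pass an already-aligned size. For reference, a sketch of the rounding formula the removed line used (with `WORD = 4` as on ARM; this works for any power-of-two word size):

```python
WORD = 4  # ARM word size, as in pypy/jit/backend/arm/arch.py

def round_up_to_word(size):
    # The removed line's formula: round size up to a multiple of WORD.
    # Adding WORD - 1 then masking off the low bits rounds up without
    # a branch; it relies on WORD being a power of two.
    return (size + WORD - 1) & ~(WORD - 1)
```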
From noreply at buildbot.pypy.org Mon Jan 9 11:56:39 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:39 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: make sure we get an int here
Message-ID: <20120109105639.0D68F82110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51156:a796398e72b0
Date: 2011-12-30 20:29 +0100
http://bitbucket.org/pypy/pypy/changeset/a796398e72b0/
Log: make sure we get an int here
diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py
--- a/pypy/jit/backend/arm/codebuilder.py
+++ b/pypy/jit/backend/arm/codebuilder.py
@@ -175,7 +175,8 @@
assert target_ofs & 0x3 == 0
self.write32(c << 28 | 0xA << 24 | (target_ofs >> 2) & 0xFFFFFF)
- def BL(self, target, c=cond.AL):
+ def BL(self, addr, c=cond.AL):
+ target = rffi.cast(rffi.INT, addr)
if c == cond.AL:
self.ADD_ri(reg.lr.value, reg.pc.value, arch.PC_OFFSET / 2)
self.LDR_ri(reg.pc.value, reg.pc.value, imm=-arch.PC_OFFSET / 2)
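The added `rffi.cast(rffi.INT, addr)` makes sure `BL` always operates on a 32-bit integer: on a 64-bit host the cast keeps only the low 32 bits and reinterprets them as a signed value. A pure-Python sketch of that truncation (the helper name is illustrative, not a PyPy function):

```python
def truncate_to_int32(addr):
    # Same effect as rffi.cast(rffi.INT, addr): keep the low 32 bits
    # and reinterpret them as a signed two's-complement value.
    addr &= 0xFFFFFFFF
    if addr >= 0x80000000:
        addr -= 0x100000000
    return addr
```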
From noreply at buildbot.pypy.org Mon Jan 9 11:56:40 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:40 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: fix tests
Message-ID: <20120109105640.3492182110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51157:b7e4239284ca
Date: 2011-12-31 16:14 +0100
http://bitbucket.org/pypy/pypy/changeset/b7e4239284ca/
Log: fix tests
diff --git a/pypy/jit/backend/arm/test/test_recompilation.py b/pypy/jit/backend/arm/test/test_recompilation.py
--- a/pypy/jit/backend/arm/test/test_recompilation.py
+++ b/pypy/jit/backend/arm/test/test_recompilation.py
@@ -71,19 +71,19 @@
i2 = int_lt(i1, 20)
guard_true(i2, descr=fdescr1) [i1]
jump(i1, i10, i11, i12, i13, i14, i15, i16, descr=targettoken)
- ''', [0])
+ ''', [0, 0, 0, 0, 0, 0, 0, 0])
other_loop = self.interpret('''
- [i3]
+ [i3, i10, i11, i12, i13, i14, i15, i16]
label(i3, descr=targettoken2)
guard_false(i3, descr=fdescr2) [i3]
jump(i3, descr=targettoken2)
- ''', [1])
+ ''', [1, 0, 0, 0, 0, 0, 0, 0])
ops = '''
[i3]
jump(i3, 1, 2, 3, 4, 5, 6, 7, descr=targettoken)
'''
bridge = self.attach_bridge(ops, other_loop, 1)
- fail = self.run(other_loop, 1)
+ fail = self.run(other_loop, 1, 0, 0, 0, 0, 0, 0, 0)
assert fail.identifier == 1
def test_bridge_jumps_to_self_deeper(self):
@@ -99,7 +99,7 @@
i5 = int_lt(i3, 20)
guard_true(i5) [i99, i3]
jump(i3, i30, 1, i30, i30, i30, descr=targettoken)
- ''', [0])
+ ''', [0, 0, 0, 0, 0, 0])
assert self.getint(0) == 0
assert self.getint(1) == 1
ops = '''
@@ -120,9 +120,9 @@
guard_op = loop.operations[6]
#assert loop._jitcelltoken.compiled_loop_token.param_depth == 0
# the force_spill() forces the stack to grow
- assert guard_op.getdescr()._arm_bridge_frame_depth > loop_frame_depth
+ #assert guard_op.getdescr()._x86_bridge_frame_depth > loop_frame_depth
#assert guard_op.getdescr()._x86_bridge_param_depth == 0
- self.run(loop, 0, 0, 0)
+ self.run(loop, 0, 0, 0, 0, 0, 0)
assert self.getint(0) == 1
assert self.getint(1) == 20
@@ -138,7 +138,7 @@
i5 = int_lt(i3, 20)
guard_true(i5) [i99, i3]
jump(i3, i1, i2, descr=targettoken)
- ''', [0])
+ ''', [0, 0, 0])
assert self.getint(0) == 0
assert self.getint(1) == 1
ops = '''
@@ -149,4 +149,4 @@
self.run(loop, 0, 0, 0)
assert self.getint(0) == 1
assert self.getint(1) == 20
-
+
diff --git a/pypy/jit/backend/arm/test/test_regalloc.py b/pypy/jit/backend/arm/test/test_regalloc.py
--- a/pypy/jit/backend/arm/test/test_regalloc.py
+++ b/pypy/jit/backend/arm/test/test_regalloc.py
@@ -178,14 +178,15 @@
return self.cpu.get_latest_value_int(index)
def getfloat(self, index):
- return self.cpu.get_latest_value_float(index)
+ v = self.cpu.get_latest_value_float(index)
+ return longlong.getrealfloat(v)
def getints(self, end):
return [self.cpu.get_latest_value_int(index) for
index in range(0, end)]
def getfloats(self, end):
- return [self.cpu.get_latest_value_float(index) for
+ return [self.getfloat(index) for
index in range(0, end)]
def getptr(self, index, T):
@@ -229,9 +230,9 @@
guard_true(i5) [i4, i1, i2, i3]
jump(i4, i1, i2, i3, descr=targettoken)
'''
- self.interpret(ops, [0, 0, 0, 0])
+ loop = self.interpret(ops, [0, 0, 0, 0])
ops2 = '''
- [i5]
+ [i5, i6, i7, i8]
label(i5, descr=targettoken2)
i1 = int_add(i5, 1)
i3 = int_add(i1, 1)
@@ -240,13 +241,13 @@
guard_true(i2) [i4]
jump(i4, descr=targettoken2)
'''
- loop2 = self.interpret(ops2, [0])
+ loop2 = self.interpret(ops2, [0, 0, 0, 0])
bridge_ops = '''
[i4]
jump(i4, i4, i4, i4, descr=targettoken)
'''
- self.attach_bridge(bridge_ops, loop2, 5)
- self.run(loop2, 0)
+ bridge = self.attach_bridge(bridge_ops, loop2, 5)
+ self.run(loop2, 0, 0, 0, 0)
assert self.getint(0) == 31
assert self.getint(1) == 30
assert self.getint(2) == 30
@@ -283,7 +284,7 @@
'''
loop = self.interpret(ops, [0])
assert self.getint(0) == 1
- self.attach_bridge(bridge_ops, loop, 2)
+ bridge = self.attach_bridge(bridge_ops, loop, 2)
self.run(loop, 0)
assert self.getint(0) == 1
@@ -309,8 +310,8 @@
loop = self.interpret(ops, [0, 10])
assert self.getint(0) == 0
assert self.getint(1) == 10
- self.attach_bridge(bridge_ops, loop, 0)
- relf.run(loop, 0, 10)
+ bridge = self.attach_bridge(bridge_ops, loop, 0)
+ self.run(loop, 0, 10)
assert self.getint(0) == 0
assert self.getint(1) == 10
@@ -352,7 +353,7 @@
jump(i4, 3, i5, 4, descr=targettoken)
'''
self.interpret(ops, [0, 0, 0, 0])
- assert self.getints(4) == [1 << 29, 30, 3, 4]
+ assert self.getints(4) == [1<<29, 30, 3, 4]
ops = '''
[i0, i1, i2, i3]
label(i0, i1, i2, i3, descr=targettoken)
@@ -363,7 +364,7 @@
jump(i4, i5, 3, 4, descr=targettoken)
'''
self.interpret(ops, [0, 0, 0, 0])
- assert self.getints(4) == [1 << 29, 30, 3, 4]
+ assert self.getints(4) == [1<<29, 30, 3, 4]
ops = '''
[i0, i3, i1, i2]
label(i0, i3, i1, i2, descr=targettoken)
@@ -374,7 +375,7 @@
jump(i4, 4, i5, 3, descr=targettoken)
'''
self.interpret(ops, [0, 0, 0, 0])
- assert self.getints(4) == [1 << 29, 30, 3, 4]
+ assert self.getints(4) == [1<<29, 30, 3, 4]
def test_result_selected_reg_via_neg(self):
ops = '''
@@ -388,7 +389,7 @@
'''
self.interpret(ops, [0, 0, 3, 0])
assert self.getints(3) == [1, -3, 10]
-
+
def test_compare_memory_result_survives(self):
ops = '''
[i0, i1, i2, i3]
@@ -411,7 +412,7 @@
guard_true(i5) [i2, i1]
jump(i0, i18, i15, i16, i2, i1, i4, descr=targettoken)
'''
- self.interpret(ops, [0, 1, 2, 3])
+ self.interpret(ops, [0, 1, 2, 3, 0, 0, 0])
def test_op_result_unused(self):
ops = '''
@@ -445,8 +446,7 @@
finish(i0, i1, i2, i3, i4, i5, i6, i7, i8)
'''
self.attach_bridge(bridge_ops, loop, 1)
- args = [i for i in range(9)]
- self.run(loop, *args)
+ self.run(loop, 0, 1, 2, 3, 4, 5, 6, 7, 8)
assert self.getints(9) == range(9)
def test_loopargs(self):
@@ -456,7 +456,8 @@
jump(i4, i1, i2, i3)
"""
regalloc = self.prepare_loop(ops)
- assert len(regalloc.rm.reg_bindings) == 2
+ assert len(regalloc.rm.reg_bindings) == 4
+ assert len(regalloc.frame_manager.bindings) == 0
def test_loopargs_2(self):
ops = """
@@ -465,7 +466,7 @@
finish(i4, i1, i2, i3)
"""
regalloc = self.prepare_loop(ops)
- assert len(regalloc.rm.reg_bindings) == 2
+ assert len(regalloc.rm.reg_bindings) == 4
def test_loopargs_3(self):
ops = """
@@ -475,7 +476,7 @@
jump(i4, i1, i2, i3)
"""
regalloc = self.prepare_loop(ops)
- assert len(regalloc.rm.reg_bindings) == 2
+ assert len(regalloc.rm.reg_bindings) == 4
class TestRegallocCompOps(BaseTestRegalloc):
@@ -617,7 +618,8 @@
class TestRegallocFloats(BaseTestRegalloc):
def test_float_add(self):
- py.test.skip('need floats')
+ if not self.cpu.supports_floats:
+ py.test.skip("requires floats")
ops = '''
[f0, f1]
f2 = float_add(f0, f1)
@@ -627,7 +629,8 @@
assert self.getfloats(3) == [4.5, 3.0, 1.5]
def test_float_adds_stack(self):
- py.test.skip('need floats')
+ if not self.cpu.supports_floats:
+ py.test.skip("requires floats")
ops = '''
[f0, f1, f2, f3, f4, f5, f6, f7, f8]
f9 = float_add(f0, f1)
@@ -639,7 +642,8 @@
.4, .5, .6, .7, .8, .9]
def test_lt_const(self):
- py.test.skip('need floats')
+ if not self.cpu.supports_floats:
+ py.test.skip("requires floats")
ops = '''
[f0]
i1 = float_lt(3.5, f0)
@@ -649,7 +653,8 @@
assert self.getint(0) == 0
def test_bug_float_is_true_stack(self):
- py.test.skip('need floats')
+ if not self.cpu.supports_floats:
+ py.test.skip("requires floats")
# NB. float_is_true no longer exists. Unsure if keeping this test
# makes sense any more.
ops = '''
@@ -681,8 +686,8 @@
i10 = call(ConstClass(f1ptr), i0, descr=f1_calldescr)
finish(i10, i1, i2, i3, i4, i5, i6, i7, i8, i9)
'''
- self.interpret(ops, [4, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9])
- assert self.getints(11) == [5, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9]
+ self.interpret(ops, [4, 7, 9, 9, 9, 9, 9, 9, 9, 9])
+ assert self.getints(10) == [5, 7, 9, 9, 9, 9, 9, 9, 9, 9]
def test_two_calls(self):
ops = '''
@@ -691,8 +696,8 @@
i11 = call(ConstClass(f2ptr), i10, i1, descr=f2_calldescr)
finish(i11, i1, i2, i3, i4, i5, i6, i7, i8, i9)
'''
- self.interpret(ops, [4, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9])
- assert self.getints(11) == [5 * 7, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9]
+ self.interpret(ops, [4, 7, 9, 9, 9, 9, 9, 9, 9, 9])
+ assert self.getints(10) == [5 * 7, 7, 9, 9, 9, 9, 9, 9, 9, 9]
def test_call_many_arguments(self):
ops = '''
@@ -747,7 +752,7 @@
loop = """
[i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, i12, i13, i14]
label(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, i12, i13, i14, descr=targettoken)
- jump(i1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, descr=targettoken)
+ jump(i1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, descr=targettoken)
"""
self.interpret(loop, range(15), run=False)
# ensure compiling this loop works
From noreply at buildbot.pypy.org Mon Jan 9 11:56:41 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:41 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: add a DOUBLE_WORD constant to
	replace all the 2 * WORD
Message-ID: <20120109105641.644A882110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51158:0a79c804ce94
Date: 2012-01-03 11:08 +0100
http://bitbucket.org/pypy/pypy/changeset/0a79c804ce94/
Log: add a DOUBLE_WORD constant to replace all the 2 * WORD
diff --git a/pypy/jit/backend/arm/arch.py b/pypy/jit/backend/arm/arch.py
--- a/pypy/jit/backend/arm/arch.py
+++ b/pypy/jit/backend/arm/arch.py
@@ -4,6 +4,7 @@
FUNC_ALIGN = 8
WORD = 4
+DOUBLE_WORD = 8
# the number of registers that we need to save around malloc calls
N_REGISTERS_SAVED_BY_MALLOC = 9
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -6,7 +6,7 @@
decode64
from pypy.jit.backend.arm import conditions as c
from pypy.jit.backend.arm import registers as r
-from pypy.jit.backend.arm.arch import WORD, FUNC_ALIGN, \
+from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD, FUNC_ALIGN, \
PC_OFFSET, N_REGISTERS_SAVED_BY_MALLOC
from pypy.jit.backend.arm.codebuilder import ARMv7Builder, OverwritingBuilder
from pypy.jit.backend.arm.regalloc import (Regalloc, ARMFrameManager,
@@ -85,7 +85,7 @@
self.STACK_FIXED_AREA += N_REGISTERS_SAVED_BY_MALLOC * WORD
if self.cpu.supports_floats:
self.STACK_FIXED_AREA += (len(r.callee_saved_vfp_registers)
- * 2 * WORD)
+ * DOUBLE_WORD)
if self.STACK_FIXED_AREA % 8 != 0:
self.STACK_FIXED_AREA += WORD # Stack alignment
assert self.STACK_FIXED_AREA % 8 == 0
@@ -202,16 +202,16 @@
enc = rffi.cast(rffi.CCHARP, mem_loc)
frame_depth = frame_loc - (regs_loc + len(r.all_regs)
- * WORD + len(r.all_vfp_regs) * 2 * WORD)
+ * WORD + len(r.all_vfp_regs) * DOUBLE_WORD)
assert (frame_loc - frame_depth) % 4 == 0
stack = rffi.cast(rffi.CCHARP, frame_loc - frame_depth)
assert regs_loc % 4 == 0
vfp_regs = rffi.cast(rffi.CCHARP, regs_loc)
- assert (regs_loc + len(r.all_vfp_regs) * 2 * WORD) % 4 == 0
+ assert (regs_loc + len(r.all_vfp_regs) * DOUBLE_WORD) % 4 == 0
assert frame_depth >= 0
regs = rffi.cast(rffi.CCHARP,
- regs_loc + len(r.all_vfp_regs) * 2 * WORD)
+ regs_loc + len(r.all_vfp_regs) * DOUBLE_WORD)
i = -1
fail_index = -1
while(True):
@@ -253,7 +253,7 @@
else: # REG_LOC
reg = ord(enc[i])
if group == self.FLOAT_TYPE:
- value = decode64(vfp_regs, reg * 2 * WORD)
+ value = decode64(vfp_regs, reg * DOUBLE_WORD)
self.fail_boxes_float.setitem(fail_index, value)
continue
else:
diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py
--- a/pypy/jit/backend/arm/runner.py
+++ b/pypy/jit/backend/arm/runner.py
@@ -1,5 +1,5 @@
from pypy.jit.backend.arm.assembler import AssemblerARM
-from pypy.jit.backend.arm.arch import WORD
+from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD
from pypy.jit.backend.arm.registers import all_regs, all_vfp_regs
from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU
from pypy.rpython.llinterp import LLInterpreter
@@ -111,7 +111,7 @@
addr_end_of_frame = (addr_of_force_index -
(frame_depth +
len(all_regs) * WORD +
- len(all_vfp_regs) * 2 * WORD))
+ len(all_vfp_regs) * DOUBLE_WORD))
fail_index_2 = self.assembler.failure_recovery_func(
faildescr._failure_recovery_code,
addr_of_force_index,
From noreply at buildbot.pypy.org Mon Jan 9 11:56:42 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:42 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: remove unused imports
Message-ID: <20120109105642.8D80D82110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51159:2c283e1293a8
Date: 2012-01-03 11:09 +0100
http://bitbucket.org/pypy/pypy/changeset/2c283e1293a8/
Log: remove unused imports
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -1,7 +1,6 @@
from __future__ import with_statement
import os
from pypy.jit.backend.arm.helper.assembler import saved_registers, \
- count_reg_args, \
decode32, encode32, \
decode64
from pypy.jit.backend.arm import conditions as c
@@ -13,7 +12,6 @@
ARMv7RegisterManager, check_imm_arg,
operations as regalloc_operations,
operations_with_guard as regalloc_operations_with_guard)
-from pypy.jit.backend.arm.jump import remap_frame_layout
from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper
from pypy.jit.backend.model import CompiledLoopToken
from pypy.jit.codewriter import longlong
diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py
--- a/pypy/jit/backend/arm/regalloc.py
+++ b/pypy/jit/backend/arm/regalloc.py
@@ -13,7 +13,7 @@
)
from pypy.jit.backend.arm.jump import remap_frame_layout_mixed
from pypy.jit.backend.arm.arch import MY_COPY_OF_REGS
-from pypy.jit.backend.arm.arch import WORD, N_REGISTERS_SAVED_BY_MALLOC
+from pypy.jit.backend.arm.arch import WORD
from pypy.jit.codewriter import longlong
from pypy.jit.metainterp.history import (Const, ConstInt, ConstFloat, ConstPtr,
Box, BoxPtr,
From noreply at buildbot.pypy.org Mon Jan 9 11:56:43 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:43 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: rename field
Message-ID: <20120109105643.B363182110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51160:cad3c03c5ac1
Date: 2012-01-03 11:10 +0100
http://bitbucket.org/pypy/pypy/changeset/cad3c03c5ac1/
Log: rename field
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -577,7 +577,7 @@
operations = self.setup(original_loop_token, operations)
self._dump(operations, 'bridge')
assert isinstance(faildescr, AbstractFailDescr)
- code = faildescr._failure_recovery_code
+ code = faildescr._arm_failure_recovery_code
enc = rffi.cast(rffi.CCHARP, code)
frame_depth = faildescr._arm_current_frame_depth
arglocs = self.decode_inputargs(enc)
@@ -638,7 +638,7 @@
tok.faillocs, save_exc=tok.save_exc)
# store info on the descr
descr._arm_current_frame_depth = tok.faillocs[0].getint()
- descr._failure_recovery_code = memaddr
+ descr._arm_failure_recovery_code = memaddr
descr._arm_guard_pos = pos
def process_pending_guards(self, block_start):
diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py
--- a/pypy/jit/backend/arm/runner.py
+++ b/pypy/jit/backend/arm/runner.py
@@ -113,7 +113,7 @@
len(all_regs) * WORD +
len(all_vfp_regs) * DOUBLE_WORD))
fail_index_2 = self.assembler.failure_recovery_func(
- faildescr._failure_recovery_code,
+ faildescr._arm_failure_recovery_code,
addr_of_force_index,
addr_end_of_frame)
self.assembler.leave_jitted_hook()
From noreply at buildbot.pypy.org Mon Jan 9 11:56:44 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:44 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: write the fail index here
Message-ID: <20120109105644.DA6D382110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51161:5767eb76b3f3
Date: 2012-01-03 11:11 +0100
http://bitbucket.org/pypy/pypy/changeset/5767eb76b3f3/
Log: write the fail index here
diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py
--- a/pypy/jit/backend/arm/regalloc.py
+++ b/pypy/jit/backend/arm/regalloc.py
@@ -1068,6 +1068,7 @@
# do the call
faildescr = guard_op.getdescr()
fail_index = self.cpu.get_fail_descr_number(faildescr)
+ self.assembler._write_fail_index(fail_index)
args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))]
self.assembler.emit_op_call(op, args, self, fcond, fail_index)
# then reopen the stack
From noreply at buildbot.pypy.org Mon Jan 9 11:56:46 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:46 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: move the actual call to
assembler.py
Message-ID: <20120109105646.17F7F82110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51162:96d252d2a2e6
Date: 2012-01-03 11:13 +0100
http://bitbucket.org/pypy/pypy/changeset/96d252d2a2e6/
Log: move the actual call to assembler.py
diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py
--- a/pypy/jit/backend/arm/opassembler.py
+++ b/pypy/jit/backend/arm/opassembler.py
@@ -1193,6 +1193,18 @@
self.propagate_memoryerror_if_r0_is_null()
return fcond
+ def emit_op_call_malloc_nursery(self, op, arglocs, regalloc, fcond):
+ # registers r0 and r1 are allocated for this call
+ assert len(arglocs) == 1
+ size = arglocs[0].value
+ gc_ll_descr = self.cpu.gc_ll_descr
+ self.malloc_cond(
+ gc_ll_descr.get_nursery_free_addr(),
+ gc_ll_descr.get_nursery_top_addr(),
+ size
+ )
+ return fcond
+
class FloatOpAssemlber(object):
_mixin_ = True
diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py
--- a/pypy/jit/backend/arm/regalloc.py
+++ b/pypy/jit/backend/arm/regalloc.py
@@ -953,13 +953,7 @@
self.possibly_free_var(op.result)
self.possibly_free_var(t)
- gc_ll_descr = self.assembler.cpu.gc_ll_descr
- self.assembler.malloc_cond(
- gc_ll_descr.get_nursery_free_addr(),
- gc_ll_descr.get_nursery_top_addr(),
- size
- )
-
+ return [imm(size)]
def get_mark_gc_roots(self, gcrootmap, use_copy_area=False):
shape = gcrootmap.get_basic_shape(False)
for v, val in self.frame_manager.bindings.items():
From noreply at buildbot.pypy.org Mon Jan 9 11:56:47 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:47 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: remove the condition flag from
BKPT, which is an unconditional instruction
Message-ID: <20120109105647.3EC1E82110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51163:7fe04da61940
Date: 2012-01-03 11:14 +0100
http://bitbucket.org/pypy/pypy/changeset/7fe04da61940/
Log: remove the condition flag from BKPT, which is an unconditional
	instruction
diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py
--- a/pypy/jit/backend/arm/codebuilder.py
+++ b/pypy/jit/backend/arm/codebuilder.py
@@ -154,8 +154,9 @@
instr = self._encode_reg_list(cond << 28 | 0x8BD << 16, regs)
self.write32(instr)
- def BKPT(self, cond=cond.AL):
- self.write32(cond << 28 | 0x1200070)
+ def BKPT(self):
+ """Unconditional breakpoint"""
+ self.write32(0x1200070)
# corresponds to the instruction vmrs APSR_nzcv, fpscr
def VMRS(self, cond=cond.AL):
From noreply at buildbot.pypy.org Mon Jan 9 11:56:48 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:48 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: add an alignment check after
malloc calls for debugging
Message-ID: <20120109105648.67A3982110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51164:f02dc5c4e43c
Date: 2012-01-03 12:50 +0100
http://bitbucket.org/pypy/pypy/changeset/f02dc5c4e43c/
Log: add an alignment check after malloc calls for debugging
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -57,6 +57,8 @@
STACK_FIXED_AREA = -1
+ debug = True
+
def __init__(self, cpu, failargs_limit=1000):
self.cpu = cpu
self.fail_boxes_int = values_array(lltype.Signed, failargs_limit)
diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py
--- a/pypy/jit/backend/arm/opassembler.py
+++ b/pypy/jit/backend/arm/opassembler.py
@@ -1191,6 +1191,7 @@
def emit_op_call_malloc_gc(self, op, arglocs, regalloc, fcond):
self.emit_op_call(op, arglocs, regalloc, fcond)
self.propagate_memoryerror_if_r0_is_null()
+ self._alignment_check()
return fcond
def emit_op_call_malloc_nursery(self, op, arglocs, regalloc, fcond):
@@ -1203,8 +1204,19 @@
gc_ll_descr.get_nursery_top_addr(),
size
)
+ self._alignment_check()
return fcond
+ def _alignment_check(self):
+ if not self.debug:
+ return
+ self.mc.MOV_rr(r.ip.value, r.r0.value)
+ self.mc.AND_ri(r.ip.value, r.ip.value, 3)
+ self.mc.CMP_ri(r.ip.value, 0)
+ self.mc.MOV_rr(r.pc.value, r.pc.value, cond=c.EQ)
+ self.mc.BKPT()
+ self.mc.NOP()
+
class FloatOpAssemlber(object):
_mixin_ = True
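The `_alignment_check` helper in the diff above emits ARM instructions that AND the freshly allocated pointer in r0 with 3 and hit a breakpoint if the result is non-zero. A minimal Python sketch of the predicate those instructions implement, assuming the backend's 4-byte word size:

```python
WORD = 4  # ARM word size in bytes, as assumed by the generated check

def is_word_aligned(addr):
    # The emitted code ANDs the pointer with (WORD - 1); a non-zero
    # result means the low bits are set and the address is misaligned.
    return addr & (WORD - 1) == 0

# A word-aligned address passes, a misaligned one trips the check.
assert is_word_aligned(0x1000)
assert not is_word_aligned(0x1002)
```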
From noreply at buildbot.pypy.org Mon Jan 9 11:56:49 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:49 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: simplify some conditional paths
in the generated code
Message-ID: <20120109105649.94DA482110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51165:895cbdd61311
Date: 2012-01-03 12:51 +0100
http://bitbucket.org/pypy/pypy/changeset/895cbdd61311/
Log: simplify some conditional paths in the generated code
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -329,17 +329,13 @@
mc.LDR_ri(reg.value, r.fp.value, imm=ofs)
mc.CMP_ri(r.r0.value, 0)
- jmp_pos = mc.currpos()
- mc.NOP()
+ mc.B(self.propagate_exception_path, c=c.EQ)
nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr()
mc.gen_load_int(r.r1.value, nursery_free_adr)
mc.LDR_ri(r.r1.value, r.r1.value)
# see above
mc.POP([r.ip.value, r.pc.value])
- pmc = OverwritingBuilder(mc, jmp_pos, WORD)
- pmc.B_offs(jmp_pos, c=c.EQ)
- mc.B(self.propagate_exception_path)
rawstart = mc.materialize(self.cpu.asmmemmgr, [])
self.malloc_slowpath = rawstart
@@ -1055,9 +1051,6 @@
self.mc.CMP_rr(r.r1.value, r.ip.value)
- fast_jmp_pos = self.mc.currpos()
- self.mc.NOP()
-
# XXX update
# See comments in _build_malloc_slowpath for the
# details of the two helper functions that we are calling below.
@@ -1071,11 +1064,7 @@
# a no-op.
self.mark_gc_roots(self.write_new_force_index(),
use_copy_area=True)
- self.mc.BL(self.malloc_slowpath)
-
- offset = self.mc.currpos() - fast_jmp_pos
- pmc = OverwritingBuilder(self.mc, fast_jmp_pos, WORD)
- pmc.ADD_ri(r.pc.value, r.pc.value, offset - PC_OFFSET, cond=c.LS)
+ self.mc.BL(self.malloc_slowpath, c=c.HI)
self.mc.gen_load_int(r.ip.value, nursery_free_adr)
self.mc.STR_ri(r.r1.value, r.ip.value)
From noreply at buildbot.pypy.org Mon Jan 9 11:56:50 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:50 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: modify stack_locations to store
the position and the offset to the FP. Get rid of the special case for
the first slot in the spilling area, currently used for the FORCE_TOKEN
Message-ID: <20120109105650.C096782110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51166:5e9aadf0b867
Date: 2012-01-04 15:57 +0100
http://bitbucket.org/pypy/pypy/changeset/5e9aadf0b867/
Log: modify stack_locations to store the position and the offset to the
FP. Get rid of the special case for the first slot in the spilling
area, currently used for the FORCE_TOKEN
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -680,9 +680,7 @@
OverwritingBuilder.size_of_gen_load_int + WORD)
# Note: the frame_depth is one less than the value stored in the frame
# manager
- if frame_depth == 1:
- return
- n = (frame_depth - 1) * WORD
+ n = frame_depth * WORD
# ensure the sp is 8 byte aligned when patching it
if n % 8 != 0:
@@ -840,7 +838,7 @@
temp = r.lr
else:
temp = r.ip
- offset = loc.position * WORD
+ offset = loc.value
if not check_imm_arg(offset, size=0xFFF):
self.mc.PUSH([temp.value], cond=cond)
self.mc.gen_load_int(temp.value, -offset, cond=cond)
@@ -861,7 +859,7 @@
assert loc is not r.lr, 'lr is not supported as a target \
when moving from the stack'
# unspill a core register
- offset = prev_loc.position * WORD
+ offset = prev_loc.value
if not check_imm_arg(offset, size=0xFFF):
self.mc.PUSH([r.lr.value], cond=cond)
pushed = True
@@ -875,7 +873,7 @@
assert prev_loc.type == FLOAT, 'trying to load from an \
incompatible location into a float register'
# load spilled value into vfp reg
- offset = prev_loc.position * WORD
+ offset = prev_loc.value
self.mc.PUSH([r.ip.value], cond=cond)
pushed = True
if not check_imm_arg(offset):
@@ -905,7 +903,7 @@
incompatible location from a float register'
# spill vfp register
self.mc.PUSH([r.ip.value], cond=cond)
- offset = loc.position * WORD
+ offset = loc.value
if not check_imm_arg(offset):
self.mc.gen_load_int(r.ip.value, offset, cond=cond)
self.mc.SUB_rr(r.ip.value, r.fp.value, r.ip.value, cond=cond)
@@ -948,7 +946,7 @@
self.mc.POP([r.ip.value], cond=cond)
elif vfp_loc.is_stack() and vfp_loc.type == FLOAT:
# load spilled vfp value into two core registers
- offset = vfp_loc.position * WORD
+ offset = vfp_loc.value
if not check_imm_arg(offset, size=0xFFF):
self.mc.PUSH([r.ip.value], cond=cond)
self.mc.gen_load_int(r.ip.value, -offset, cond=cond)
@@ -971,7 +969,7 @@
self.mc.VMOV_cr(vfp_loc.value, reg1.value, reg2.value, cond=cond)
elif vfp_loc.is_stack():
# move from two core registers to a float stack location
- offset = vfp_loc.position * WORD
+ offset = vfp_loc.value
if not check_imm_arg(offset, size=0xFFF):
self.mc.PUSH([r.ip.value], cond=cond)
self.mc.gen_load_int(r.ip.value, -offset, cond=cond)
diff --git a/pypy/jit/backend/arm/locations.py b/pypy/jit/backend/arm/locations.py
--- a/pypy/jit/backend/arm/locations.py
+++ b/pypy/jit/backend/arm/locations.py
@@ -1,5 +1,5 @@
from pypy.jit.metainterp.history import INT, FLOAT
-from pypy.jit.backend.arm.arch import WORD
+from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD
class AssemblerLocation(object):
@@ -110,9 +110,13 @@
class StackLocation(AssemblerLocation):
_immutable_ = True
- def __init__(self, position, num_words=1, type=INT):
+ def __init__(self, position, fp_offset, type=INT):
+ if type == FLOAT:
+ self.width = DOUBLE_WORD
+ else:
+ self.width = WORD
self.position = position
- self.width = num_words * WORD
+ self.value = fp_offset
self.type = type
def __repr__(self):
diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py
--- a/pypy/jit/backend/arm/regalloc.py
+++ b/pypy/jit/backend/arm/regalloc.py
@@ -54,26 +54,28 @@
return "" % (id(self),)
+def get_fp_offset(i):
+ if i >= 0:
+ # Take the FORCE_TOKEN into account
+ return (1 + i) * WORD
+ else:
+ return i * WORD
+
+
class ARMFrameManager(FrameManager):
def __init__(self):
FrameManager.__init__(self)
- self.used = [True] # keep first slot free
+ #self.used = [True] # keep first slot free
# XXX refactor frame to avoid this issue of keeping the first slot
# reserved
@staticmethod
- def frame_pos(loc, type):
- num_words = ARMFrameManager.frame_size(type)
- if type == FLOAT:
- if loc > 0:
- # Make sure that loc is an even value
- # the frame layout requires loc to be even if it is a spilled
- # value!!
- assert (loc & 1) == 0
- return locations.StackLocation(loc + 1,
- num_words=num_words, type=type)
- return locations.StackLocation(loc, num_words=num_words, type=type)
+ def frame_pos(i, box_type):
+ if box_type == FLOAT:
+ return locations.StackLocation(i, get_fp_offset(i + 1), box_type)
+ else:
+ return locations.StackLocation(i, get_fp_offset(i), box_type)
@staticmethod
def frame_size(type):
@@ -84,10 +86,7 @@
@staticmethod
def get_loc_index(loc):
assert loc.is_stack()
- if loc.type == FLOAT:
- return loc.position - 1
- else:
- return loc.position
+ return loc.position
def void(self, op, fcond):
@@ -721,7 +720,6 @@
else:
src_locations2.append(src_loc)
dst_locations2.append(dst_loc)
-
remap_frame_layout_mixed(self.assembler,
src_locations1, dst_locations1, tmploc,
src_locations2, dst_locations2, vfptmploc)
@@ -960,6 +958,7 @@
if (isinstance(v, BoxPtr) and self.rm.stays_alive(v)):
assert val.is_stack()
gcrootmap.add_frame_offset(shape, val.position * -WORD)
+ gcrootmap.add_frame_offset(shape, -val.value)
for v, reg in self.rm.reg_bindings.items():
if reg is r.r0:
continue
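The new `get_fp_offset` helper above maps a frame slot index to a byte offset from the frame pointer, skipping slot 0, which holds the FORCE_TOKEN. A standalone sketch mirroring the diff, assuming the ARM backend's 4-byte `WORD`:

```python
WORD = 4  # as in pypy/jit/backend/arm/arch.py (assumed here)

def get_fp_offset(i):
    # Non-negative indices are spill slots below the frame pointer;
    # slot 0 is reserved for the FORCE_TOKEN, so shift them by one.
    if i >= 0:
        return (1 + i) * WORD
    else:
        # Negative indices address inputargs on the other side of FP.
        return i * WORD

assert get_fp_offset(0) == 4    # first spill slot, after FORCE_TOKEN
assert get_fp_offset(3) == 16
assert get_fp_offset(-2) == -8
```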
From noreply at buildbot.pypy.org Mon Jan 9 11:56:51 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:51 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: port encoding of locations used
for guards from the x86 backend
Message-ID: <20120109105651.F16C482110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51167:ffbd6f34a8c3
Date: 2012-01-04 15:58 +0100
http://bitbucket.org/pypy/pypy/changeset/ffbd6f34a8c3/
Log: port encoding of locations used for guards from the x86 backend
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -11,6 +11,7 @@
from pypy.jit.backend.arm.regalloc import (Regalloc, ARMFrameManager,
ARMv7RegisterManager, check_imm_arg,
operations as regalloc_operations,
+ get_fp_offset,
operations_with_guard as regalloc_operations_with_guard)
from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper
from pypy.jit.backend.model import CompiledLoopToken
@@ -30,30 +31,6 @@
class AssemblerARM(ResOpAssembler):
- """
- Encoding for locations in memory
- types:
- \xED = FLOAT
- \xEE = REF
- \xEF = INT
- location:
- \xFC = stack location
- \xFD = imm location
- emtpy = reg location
- \xFE = Empty loc
-
- \xFF = END_OF_LOCS
- """
- FLOAT_TYPE = '\xED'
- REF_TYPE = '\xEE'
- INT_TYPE = '\xEF'
-
- STACK_LOC = '\xFC'
- IMM_LOC = '\xFD'
- # REG_LOC is empty
- EMPTY_LOC = '\xFE'
-
- END_OF_LOCS = '\xFF'
STACK_FIXED_AREA = -1
@@ -183,132 +160,138 @@
"""mem_loc is a structure in memory describing where the values for
the failargs are stored. frame loc is the address of the frame
pointer for the frame to be decoded frame """
- return self.decode_registers_and_descr(mem_loc,
- frame_pointer, stack_pointer)
+ vfp_registers = rffi.cast(rffi.LONGLONGP, stack_pointer)
+ registers = rffi.ptradd(vfp_registers, len(r.all_vfp_regs))
+ registers = rffi.cast(rffi.LONGP, registers)
+ return self.decode_registers_and_descr(mem_loc, frame_pointer,
+ registers, vfp_registers)
self.failure_recovery_func = failure_recovery_func
- recovery_func_sign = lltype.Ptr(lltype.FuncType([lltype.Signed,
- lltype.Signed, lltype.Signed], lltype.Signed))
+ recovery_func_sign = lltype.Ptr(lltype.FuncType([lltype.Signed] * 3,
+ lltype.Signed))
@rgc.no_collect
- def decode_registers_and_descr(self, mem_loc, frame_loc, regs_loc):
+ def decode_registers_and_descr(self, mem_loc, frame_pointer,
+ registers, vfp_registers):
"""Decode locations encoded in memory at mem_loc and write the values
to the failboxes. Values for spilled vars and registers are stored on
stack at frame_loc """
- # XXX check if units are correct here, when comparing words and bytes
- # and stuff assert 0, 'check if units are correct here, when comparing
- # words and bytes and stuff'
+ assert frame_pointer & 1 == 0
+ bytecode = rffi.cast(rffi.UCHARP, mem_loc)
+ num = 0
+ value = 0
+ fvalue = 0
+ code_inputarg = False
+ while True:
+ code = bytecode[0]
+ bytecode = rffi.ptradd(bytecode, 1)
+ if code >= self.CODE_FROMSTACK:
+ if code > 0x7F:
+ shift = 7
+ code &= 0x7F
+ while True:
+ nextcode = rffi.cast(lltype.Signed, bytecode[0])
+ bytecode = rffi.ptradd(bytecode, 1)
+ code |= (nextcode & 0x7F) << shift
+ shift += 7
+ if nextcode <= 0x7F:
+ break
+ # load the value from the stack
+ kind = code & 3
+ code = int((code - self.CODE_FROMSTACK) >> 2)
+ if code_inputarg:
+ code = ~code
+ code_inputarg = False
+ if kind == self.DESCR_FLOAT:
+ # we use code + 1 to get the hi word of the double worded float
+ stackloc = frame_pointer - get_fp_offset(int(code) + 1)
+ assert stackloc & 3 == 0
+ fvalue = rffi.cast(rffi.LONGLONGP, stackloc)[0]
+ else:
+ stackloc = frame_pointer - get_fp_offset(int(code))
+ assert stackloc & 1 == 0
+ value = rffi.cast(rffi.LONGP, stackloc)[0]
+ else:
+ # 'code' identifies a register: load its value
+ kind = code & 3
+ if kind == self.DESCR_SPECIAL:
+ if code == self.CODE_HOLE:
+ num += 1
+ continue
+ if code == self.CODE_INPUTARG:
+ code_inputarg = True
+ continue
+ assert code == self.CODE_STOP
+ break
+ code >>= 2
+ if kind == self.DESCR_FLOAT:
+ fvalue = vfp_registers[code]
+ else:
+ value = registers[code]
+ # store the loaded value into fail_boxes_
+ if kind == self.DESCR_FLOAT:
+ tgt = self.fail_boxes_float.get_addr_for_num(num)
+ rffi.cast(rffi.LONGLONGP, tgt)[0] = fvalue
+ else:
+ if kind == self.DESCR_INT:
+ tgt = self.fail_boxes_int.get_addr_for_num(num)
+ elif kind == self.DESCR_REF:
+ assert (value & 3) == 0, "misaligned pointer"
+ tgt = self.fail_boxes_ptr.get_addr_for_num(num)
+ else:
+ assert 0, "bogus kind"
+ rffi.cast(rffi.LONGP, tgt)[0] = value
+ num += 1
+ self.fail_boxes_count = num
+ fail_index = rffi.cast(rffi.INTP, bytecode)[0]
+ fail_index = rffi.cast(lltype.Signed, fail_index)
+ return fail_index
- enc = rffi.cast(rffi.CCHARP, mem_loc)
- frame_depth = frame_loc - (regs_loc + len(r.all_regs)
- * WORD + len(r.all_vfp_regs) * DOUBLE_WORD)
- assert (frame_loc - frame_depth) % 4 == 0
- stack = rffi.cast(rffi.CCHARP, frame_loc - frame_depth)
- assert regs_loc % 4 == 0
- vfp_regs = rffi.cast(rffi.CCHARP, regs_loc)
- assert (regs_loc + len(r.all_vfp_regs) * DOUBLE_WORD) % 4 == 0
- assert frame_depth >= 0
-
- regs = rffi.cast(rffi.CCHARP,
- regs_loc + len(r.all_vfp_regs) * DOUBLE_WORD)
- i = -1
- fail_index = -1
- while(True):
- i += 1
- fail_index += 1
- res = enc[i]
- if res == self.END_OF_LOCS:
+ def decode_inputargs(self, code):
+ descr_to_box_type = [REF, INT, FLOAT]
+ bytecode = rffi.cast(rffi.UCHARP, code)
+ arglocs = []
+ code_inputarg = False
+ while 1:
+ # decode the next instruction from the bytecode
+ code = rffi.cast(lltype.Signed, bytecode[0])
+ bytecode = rffi.ptradd(bytecode, 1)
+ if code >= self.CODE_FROMSTACK:
+ # 'code' identifies a stack location
+ if code > 0x7F:
+ shift = 7
+ code &= 0x7F
+ while True:
+ nextcode = rffi.cast(lltype.Signed, bytecode[0])
+ bytecode = rffi.ptradd(bytecode, 1)
+ code |= (nextcode & 0x7F) << shift
+ shift += 7
+ if nextcode <= 0x7F:
+ break
+ kind = code & 3
+ code = (code - self.CODE_FROMSTACK) >> 2
+ if code_inputarg:
+ code = ~code
+ code_inputarg = False
+ loc = ARMFrameManager.frame_pos(code, descr_to_box_type[kind])
+ elif code == self.CODE_STOP:
break
- if res == self.EMPTY_LOC:
+ elif code == self.CODE_HOLE:
continue
-
- group = res
- i += 1
- res = enc[i]
- if res == self.IMM_LOC:
- # imm value
- if group == self.INT_TYPE or group == self.REF_TYPE:
- value = decode32(enc, i + 1)
- i += 4
+ elif code == self.CODE_INPUTARG:
+ code_inputarg = True
+ continue
+ else:
+ # 'code' identifies a register
+ kind = code & 3
+ code >>= 2
+ if kind == self.DESCR_FLOAT:
+ loc = r.all_vfp_regs[code]
else:
- assert group == self.FLOAT_TYPE
- adr = decode32(enc, i + 1)
- tp = rffi.CArrayPtr(longlong.FLOATSTORAGE)
- value = rffi.cast(tp, adr)[0]
- self.fail_boxes_float.setitem(fail_index, value)
- i += 4
- continue
- elif res == self.STACK_LOC:
- stack_loc = decode32(enc, i + 1)
- i += 4
- if group == self.FLOAT_TYPE:
- value = decode64(stack,
- frame_depth - (stack_loc + 1) * WORD)
- fvalue = rffi.cast(longlong.FLOATSTORAGE, value)
- self.fail_boxes_float.setitem(fail_index, fvalue)
- continue
- else:
- value = decode32(stack, frame_depth - stack_loc * WORD)
- else: # REG_LOC
- reg = ord(enc[i])
- if group == self.FLOAT_TYPE:
- value = decode64(vfp_regs, reg * DOUBLE_WORD)
- self.fail_boxes_float.setitem(fail_index, value)
- continue
- else:
- value = decode32(regs, reg * WORD)
-
- if group == self.INT_TYPE:
- self.fail_boxes_int.setitem(fail_index, value)
- elif group == self.REF_TYPE:
- assert (value & 3) == 0, "misaligned pointer"
- tgt = self.fail_boxes_ptr.get_addr_for_num(fail_index)
- rffi.cast(rffi.LONGP, tgt)[0] = value
- else:
- assert 0, 'unknown type'
-
- assert enc[i] == self.END_OF_LOCS
- descr = decode32(enc, i + 1)
- self.fail_boxes_count = fail_index
- self.fail_force_index = frame_loc
- return descr
-
- def decode_inputargs(self, enc):
- locs = []
- j = 0
- while enc[j] != self.END_OF_LOCS:
- res = enc[j]
- if res == self.EMPTY_LOC:
- j += 1
- continue
-
- assert res in [self.FLOAT_TYPE, self.INT_TYPE, self.REF_TYPE], \
- 'location type is not supported'
- res_type = res
- j += 1
- res = enc[j]
- if res == self.IMM_LOC:
- # XXX decode imm if necessary
- assert 0, 'Imm Locations are not supported'
- elif res == self.STACK_LOC:
- if res_type == self.FLOAT_TYPE:
- t = FLOAT
- elif res_type == self.INT_TYPE:
- t = INT
- else:
- t = REF
- stack_loc = decode32(enc, j + 1)
- loc = ARMFrameManager.frame_pos(stack_loc, t)
- j += 4
- else: # REG_LOC
- if res_type == self.FLOAT_TYPE:
- loc = r.all_vfp_regs[ord(res)]
- else:
- loc = r.all_regs[ord(res)]
- j += 1
- locs.append(loc)
- return locs
+ loc = r.all_regs[code]
+ arglocs.append(loc)
+ return arglocs[:]
def _build_malloc_slowpath(self):
mc = ARMv7Builder()
@@ -364,85 +347,78 @@
return mc.materialize(self.cpu.asmmemmgr, [],
self.cpu.gc_ll_descr.gcrootmap)
- def gen_descr_encoding(self, descr, args, arglocs):
- # The size of the allocated memory is based on the following sizes
- # first argloc is the frame depth and not considered for the memory
- # allocation
- # 4 bytes for the value
- # 1 byte for the type
- # 1 byte for the location
- # 1 separator byte
- # 4 bytes for the faildescr
- # const floats are stored in memory and the box contains the address
- memsize = (len(arglocs) - 1) * 6 + 5
+ DESCR_REF = 0x00
+ DESCR_INT = 0x01
+ DESCR_FLOAT = 0x02
+ DESCR_SPECIAL = 0x03
+ CODE_FROMSTACK = 64
+ CODE_STOP = 0 | DESCR_SPECIAL
+ CODE_HOLE = 4 | DESCR_SPECIAL
+ CODE_INPUTARG = 8 | DESCR_SPECIAL
+
+ def gen_descr_encoding(self, descr, failargs, locs):
+ buf = []
+ for i in range(len(failargs)):
+ arg = failargs[i]
+ if arg is not None:
+ if arg.type == REF:
+ kind = self.DESCR_REF
+ elif arg.type == INT:
+ kind = self.DESCR_INT
+ elif arg.type == FLOAT:
+ kind = self.DESCR_FLOAT
+ else:
+ raise AssertionError("bogus kind")
+ loc = locs[i]
+ if loc.is_stack():
+ pos = loc.position
+ if pos < 0:
+ buf.append(chr(self.CODE_INPUTARG))
+ pos = ~pos
+ n = self.CODE_FROMSTACK // 4 + pos
+ else:
+ assert loc.is_reg() or loc.is_vfp_reg()
+ n = loc.value
+ n = kind + 4 * n
+ while n > 0x7F:
+ buf.append(chr((n & 0x7F) | 0x80))
+ n >>= 7
+ else:
+ n = self.CODE_HOLE
+ buf.append(chr(n))
+ buf.append(chr(self.CODE_STOP))
+
+ fdescr = self.cpu.get_fail_descr_number(descr)
+ buf.append(chr(fdescr & 0xFF))
+ buf.append(chr(fdescr >> 8 & 0xFF))
+ buf.append(chr(fdescr >> 16 & 0xFF))
+ buf.append(chr(fdescr >> 24 & 0xFF))
+
+ # assert that the fail_boxes lists are big enough
+ assert len(failargs) <= self.fail_boxes_int.SIZE
+
+ memsize = len(buf)
memaddr = self.datablockwrapper.malloc_aligned(memsize, alignment=1)
mem = rffi.cast(rffi.CArrayPtr(lltype.Char), memaddr)
- i = 0
- j = 0
- while i < len(args):
- if arglocs[i + 1]:
- arg = args[i]
- loc = arglocs[i + 1]
- if arg.type == INT:
- mem[j] = self.INT_TYPE
- j += 1
- elif arg.type == REF:
- mem[j] = self.REF_TYPE
- j += 1
- elif arg.type == FLOAT:
- mem[j] = self.FLOAT_TYPE
- j += 1
- else:
- assert 0, 'unknown type'
-
- if loc.is_reg() or loc.is_vfp_reg():
- mem[j] = chr(loc.value)
- j += 1
- elif loc.is_imm() or loc.is_imm_float():
- assert (arg.type == INT or arg.type == REF
- or arg.type == FLOAT)
- mem[j] = self.IMM_LOC
- encode32(mem, j + 1, loc.getint())
- j += 5
- else:
- assert loc.is_stack()
- mem[j] = self.STACK_LOC
- if arg.type == FLOAT:
- # Float locs store the location number with an offset
- # of 1 -.- so we need to take this into account here
- # when generating the encoding
- encode32(mem, j + 1, loc.position - 1)
- else:
- encode32(mem, j + 1, loc.position)
- j += 5
- else:
- mem[j] = self.EMPTY_LOC
- j += 1
- i += 1
-
- mem[j] = chr(0xFF)
-
- n = self.cpu.get_fail_descr_number(descr)
- encode32(mem, j + 1, n)
+ for i in range(memsize):
+ mem[i] = buf[i]
return memaddr
def _gen_path_to_exit_path(self, descr, args, arglocs,
save_exc, fcond=c.AL):
assert isinstance(save_exc, bool)
- memaddr = self.gen_descr_encoding(descr, args, arglocs)
+ memaddr = self.gen_descr_encoding(descr, args, arglocs[1:])
self.gen_exit_code(self.mc, memaddr, save_exc, fcond)
return memaddr
def gen_exit_code(self, mc, memaddr, save_exc, fcond=c.AL):
assert isinstance(save_exc, bool)
self.mc.gen_load_int(r.ip.value, memaddr)
- #mc.LDR_ri(r.ip.value, r.pc.value, imm=WORD)
if save_exc:
path = self._leave_jitted_hook_save_exc
else:
path = self._leave_jitted_hook
mc.B(path)
- #mc.write32(memaddr)
def align(self):
while(self.mc.currpos() % FUNC_ALIGN != 0):
@@ -576,9 +552,8 @@
self._dump(operations, 'bridge')
assert isinstance(faildescr, AbstractFailDescr)
code = faildescr._arm_failure_recovery_code
- enc = rffi.cast(rffi.CCHARP, code)
frame_depth = faildescr._arm_current_frame_depth
- arglocs = self.decode_inputargs(enc)
+ arglocs = self.decode_inputargs(code)
if not we_are_translated():
assert len(inputargs) == len(arglocs)
diff --git a/pypy/jit/backend/arm/locations.py b/pypy/jit/backend/arm/locations.py
--- a/pypy/jit/backend/arm/locations.py
+++ b/pypy/jit/backend/arm/locations.py
@@ -80,9 +80,6 @@
def is_imm(self):
return True
- def as_key(self):
- return self.value + 40
-
class ConstFloatLoc(AssemblerLocation):
"""This class represents an imm float value which is stored in memory at
@@ -103,9 +100,6 @@
def is_imm_float(self):
return True
- def as_key(self):
- return -1 * self.value
-
class StackLocation(AssemblerLocation):
_immutable_ = True
@@ -132,7 +126,7 @@
return True
def as_key(self):
- return -self.position
+ return self.position + 10000
def imm(i):
diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py
--- a/pypy/jit/backend/arm/regalloc.py
+++ b/pypy/jit/backend/arm/regalloc.py
@@ -327,6 +327,7 @@
count = 0
n_register_args = len(r.argument_regs)
cur_frame_pos = - (self.assembler.STACK_FIXED_AREA / WORD) + 1
+ cur_frame_pos = 1 - (self.assembler.STACK_FIXED_AREA // WORD)
for box in inputargs:
assert isinstance(box, Box)
# handle inputargs in argument registers
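The encoding ported from the x86 backend in the commit above packs each failarg location into a little-endian base-128 varint: the two low bits carry the kind (REF/INT/FLOAT/SPECIAL), the payload is shifted left by two, and bytes above 0x7F set the high bit to signal a continuation. The helper names below are illustrative, not part of the backend; a round-trip sketch of the scheme:

```python
DESCR_REF, DESCR_INT, DESCR_FLOAT, DESCR_SPECIAL = 0, 1, 2, 3

def encode_loc(kind, n):
    # kind in the low 2 bits, payload above; split into 7-bit groups
    # with the high bit marking "more bytes follow".
    n = kind + 4 * n
    out = []
    while n > 0x7F:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out)

def decode_loc(buf):
    # Mirror of the decode loop in decode_registers_and_descr.
    code = buf[0]
    i = 1
    if code > 0x7F:
        shift = 7
        code &= 0x7F
        while True:
            nextcode = buf[i]
            i += 1
            code |= (nextcode & 0x7F) << shift
            shift += 7
            if nextcode <= 0x7F:
                break
    return code & 3, code >> 2

# A small payload fits in one byte; a large one spills into two.
assert len(encode_loc(DESCR_INT, 7)) == 1
assert decode_loc(encode_loc(DESCR_INT, 1000)) == (DESCR_INT, 1000)
```

Stack locations additionally bias the payload by `CODE_FROMSTACK // 4` so that any first byte ≥ 64 can be recognized as a stack slot rather than a register.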
From noreply at buildbot.pypy.org Mon Jan 9 11:56:53 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:53 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: Add the condition code for always
here
Message-ID: <20120109105653.22E4282110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51168:120e4541efaf
Date: 2012-01-04 15:59 +0100
http://bitbucket.org/pypy/pypy/changeset/120e4541efaf/
Log: Add the condition code for always here
diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py
--- a/pypy/jit/backend/arm/codebuilder.py
+++ b/pypy/jit/backend/arm/codebuilder.py
@@ -156,7 +156,7 @@
def BKPT(self):
"""Unconditional breakpoint"""
- self.write32(0x1200070)
+ self.write32(cond.AL << 28 | 0x1200070)
# corresponds to the instruction vmrs APSR_nzcv, fpscr
def VMRS(self, cond=cond.AL):
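The fix above restores the 4-bit condition field that every ARM instruction carries in bits 28-31; ORing in `cond.AL` (0b1110, "always") keeps the BKPT word well-formed. A sketch of the encoding, assuming the standard ARM value for AL:

```python
AL = 0xE  # ARM "always" condition code (assumed standard value)

def bkpt_encoding(cond=AL):
    # Condition field in bits 28-31, BKPT opcode bits below it.
    return cond << 28 | 0x1200070

# With AL the top nibble is 0xE, matching the corrected write32 call.
assert bkpt_encoding() == 0xE1200070
assert bkpt_encoding() >> 28 == AL
```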
From noreply at buildbot.pypy.org Mon Jan 9 11:56:54 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Mon, 9 Jan 2012 11:56:54 +0100 (CET)
Subject: [pypy-commit] pypy arm-backend-2: Use the codebuilder to write the
bytecode used to describe the failarg locations for a guard. Also abuse the
link register to pass the location of the encoding around.
Message-ID: <20120109105654.4A8A882110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: arm-backend-2
Changeset: r51169:10eab3fbb965
Date: 2012-01-09 11:49 +0100
http://bitbucket.org/pypy/pypy/changeset/10eab3fbb965/
Log: Use the codebuilder to write the bytecode used to describe the
failarg locations for a guard. Also abuse the link register to pass
the location of the encoding around.
diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py
--- a/pypy/jit/backend/arm/assembler.py
+++ b/pypy/jit/backend/arm/assembler.py
@@ -129,7 +129,7 @@
def _gen_leave_jitted_hook_code(self, save_exc):
mc = ARMv7Builder()
# XXX add a check if cpu supports floats
- with saved_registers(mc, r.caller_resp + [r.ip], r.caller_vfp_resp):
+ with saved_registers(mc, r.caller_resp + [r.lr], r.caller_vfp_resp):
addr = self.cpu.get_on_leave_jitted_int(save_exception=save_exc)
mc.BL(addr)
assert self._exit_code_addr != 0
@@ -334,7 +334,7 @@
self._insert_checks(mc)
with saved_registers(mc, r.all_regs, r.all_vfp_regs):
# move mem block address, to r0 to pass as
- mc.MOV_rr(r.r0.value, r.ip.value)
+ mc.MOV_rr(r.r0.value, r.lr.value)
# pass the current frame pointer as second param
mc.MOV_rr(r.r1.value, r.fp.value)
# pass the current stack pointer as third param
@@ -357,7 +357,7 @@
CODE_INPUTARG = 8 | DESCR_SPECIAL
def gen_descr_encoding(self, descr, failargs, locs):
- buf = []
+ assert self.mc is not None
for i in range(len(failargs)):
arg = failargs[i]
if arg is not None:
@@ -373,7 +373,7 @@
if loc.is_stack():
pos = loc.position
if pos < 0:
- buf.append(chr(self.CODE_INPUTARG))
+ self.mc.writechar(chr(self.CODE_INPUTARG))
pos = ~pos
n = self.CODE_FROMSTACK // 4 + pos
else:
@@ -381,44 +381,33 @@
n = loc.value
n = kind + 4 * n
while n > 0x7F:
- buf.append(chr((n & 0x7F) | 0x80))
+ self.mc.writechar(chr((n & 0x7F) | 0x80))
n >>= 7
else:
n = self.CODE_HOLE
- buf.append(chr(n))
- buf.append(chr(self.CODE_STOP))
+ self.mc.writechar(chr(n))
+ self.mc.writechar(chr(self.CODE_STOP))
fdescr = self.cpu.get_fail_descr_number(descr)
- buf.append(chr(fdescr & 0xFF))
- buf.append(chr(fdescr >> 8 & 0xFF))
- buf.append(chr(fdescr >> 16 & 0xFF))
- buf.append(chr(fdescr >> 24 & 0xFF))
+ self.mc.write32(fdescr)
+ self.align()
# assert that the fail_boxes lists are big enough
assert len(failargs) <= self.fail_boxes_int.SIZE
- memsize = len(buf)
- memaddr = self.datablockwrapper.malloc_aligned(memsize, alignment=1)
- mem = rffi.cast(rffi.CArrayPtr(lltype.Char), memaddr)
- for i in range(memsize):
- mem[i] = buf[i]
- return memaddr
-
def _gen_path_to_exit_path(self, descr, args, arglocs,
save_exc, fcond=c.AL):
assert isinstance(save_exc, bool)
- memaddr = self.gen_descr_encoding(descr, args, arglocs[1:])
- self.gen_exit_code(self.mc, memaddr, save_exc, fcond)
- return memaddr
+ self.gen_exit_code(self.mc, save_exc, fcond)
+ self.gen_descr_encoding(descr, args, arglocs[1:])
- def gen_exit_code(self, mc, memaddr, save_exc, fcond=c.AL):
+ def gen_exit_code(self, mc, save_exc, fcond=c.AL):
assert isinstance(save_exc, bool)
- self.mc.gen_load_int(r.ip.value, memaddr)
if save_exc:
path = self._leave_jitted_hook_save_exc
else:
path = self._leave_jitted_hook
- mc.B(path)
+ mc.BL(path)
def align(self):
while(self.mc.currpos() % FUNC_ALIGN != 0):
@@ -551,7 +540,7 @@
operations = self.setup(original_loop_token, operations)
self._dump(operations, 'bridge')
assert isinstance(faildescr, AbstractFailDescr)
- code = faildescr._arm_failure_recovery_code
+ code = self._find_failure_recovery_bytecode(faildescr)
frame_depth = faildescr._arm_current_frame_depth
arglocs = self.decode_inputargs(code)
if not we_are_translated():
@@ -585,6 +574,11 @@
frame_depth)
self.teardown()
+ def _find_failure_recovery_bytecode(self, faildescr):
+ guard_addr = faildescr._arm_block_start + faildescr._arm_guard_pos
+ # a guard requires 3 words to encode the jump to the exit code.
+ return guard_addr + 3 * WORD
+
def fixup_target_tokens(self, rawstart):
for targettoken in self.target_tokens_currently_compiling:
targettoken._arm_loop_code += rawstart
@@ -607,11 +601,10 @@
pos = self.mc.currpos()
tok.pos_recovery_stub = pos
- memaddr = self._gen_path_to_exit_path(descr, tok.failargs,
+ self._gen_path_to_exit_path(descr, tok.failargs,
tok.faillocs, save_exc=tok.save_exc)
# store info on the descr
descr._arm_current_frame_depth = tok.faillocs[0].getint()
- descr._arm_failure_recovery_code = memaddr
descr._arm_guard_pos = pos
def process_pending_guards(self, block_start):
diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py
--- a/pypy/jit/backend/arm/runner.py
+++ b/pypy/jit/backend/arm/runner.py
@@ -106,6 +106,7 @@
assert fail_index >= 0, "already forced!"
faildescr = self.get_fail_descr_from_number(fail_index)
rffi.cast(TP, addr_of_force_index)[0] = ~fail_index
+ bytecode = self.assembler._find_failure_recovery_bytecode(faildescr)
# start of "no gc operation!" block
frame_depth = faildescr._arm_current_frame_depth * WORD
addr_end_of_frame = (addr_of_force_index -
@@ -113,7 +114,7 @@
len(all_regs) * WORD +
len(all_vfp_regs) * DOUBLE_WORD))
fail_index_2 = self.assembler.failure_recovery_func(
- faildescr._arm_failure_recovery_code,
+ bytecode,
addr_of_force_index,
addr_end_of_frame)
self.assembler.leave_jitted_hook()
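The commit above replaces four hand-built `chr()` bytes for the fail-descr number with a single `write32` call. Both produce the same little-endian 32-bit word; a sketch of the equivalence (function names here are illustrative):

```python
import struct

def descr_bytes_manual(fdescr):
    # The old code appended the four bytes by hand, low byte first.
    return bytes([fdescr & 0xFF,
                  fdescr >> 8 & 0xFF,
                  fdescr >> 16 & 0xFF,
                  fdescr >> 24 & 0xFF])

def descr_bytes_write32(fdescr):
    # write32 emits the same little-endian 32-bit word in one call.
    return struct.pack('<I', fdescr)

assert descr_bytes_manual(0x12345678) == descr_bytes_write32(0x12345678)
```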
From noreply at buildbot.pypy.org Mon Jan 9 12:46:10 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Mon, 9 Jan 2012 12:46:10 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Fix: I corrected the
comment but not the actual value
Message-ID: <20120109114610.ECF5182110@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51170:9ec48159f6e4
Date: 2012-01-09 12:45 +0100
http://bitbucket.org/pypy/pypy/changeset/9ec48159f6e4/
Log: Fix: I corrected the comment but not the actual value
diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -65,7 +65,7 @@
# The minimal RAM usage: use 24 MB by default.
# Environment variable: PYPY_GC_MIN
- "min_heap_size": 6*1024*1024,
+ "min_heap_size": 24*1024*1024,
}
From noreply at buildbot.pypy.org Mon Jan 9 16:06:22 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 16:06:22 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: make sure there are no more
attrs on base class
Message-ID: <20120109150622.4615F82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51171:04ce4efd6ee6
Date: 2012-01-09 17:05 +0200
http://bitbucket.org/pypy/pypy/changeset/04ce4efd6ee6/
Log: make sure there are no more attrs on base class
diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py
--- a/pypy/jit/metainterp/resoperation.py
+++ b/pypy/jit/metainterp/resoperation.py
@@ -17,6 +17,8 @@
name = ""
pc = 0
+ _attrs_ = ('result',)
+
def __init__(self, result):
self.result = result
From noreply at buildbot.pypy.org Mon Jan 9 17:30:47 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 17:30:47 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: (fijal, arigo) improve the
assembler check, (hopefully) making it usable for other processors
Message-ID: <20120109163047.508A582110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51172:941c2be81863
Date: 2012-01-09 18:30 +0200
http://bitbucket.org/pypy/pypy/changeset/941c2be81863/
Log: (fijal, arigo) improve the assembler check, (hopefully) making it
usable for other processors
diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py
--- a/pypy/jit/backend/test/runner_test.py
+++ b/pypy/jit/backend/test/runner_test.py
@@ -28,6 +28,9 @@
class Runner(object):
+ add_loop_instruction = ['overload for a specific cpu']
+ bridge_loop_instruction = ['overload for a specific cpu']
+
def execute_operation(self, opname, valueboxes, result_type, descr=None):
inputargs, operations = self._get_single_operation_list(opname,
result_type,
@@ -3006,23 +3009,21 @@
self.cpu.assembler.set_debug(True) # always on untranslated
assert asmlen != 0
cpuname = autodetect_main_model_and_size()
- if 'x86' in cpuname:
- # XXX we have to check the precise assembler, otherwise
- # we don't quite know if borders are correct
- def checkops(mc, startline, ops):
- for i in range(startline, len(mc)):
- assert mc[i].split("\t")[-1].startswith(ops[i - startline])
+ # XXX we have to check the precise assembler, otherwise
+ # we don't quite know if borders are correct
+
+ def checkops(mc, startline, ops):
+ for i in range(startline, len(mc)):
+ assert mc[i].split("\t")[-1].startswith(ops[i - startline])
- data = ctypes.string_at(asm, asmlen)
- mc = list(machine_code_dump(data, asm, cpuname))
- assert len(mc) == 5
- checkops(mc, 1, ['add', 'test', 'je', 'jmp'])
- data = ctypes.string_at(basm, basmlen)
- mc = list(machine_code_dump(data, basm, cpuname))
- assert len(mc) == 4
- checkops(mc, 1, ['lea', 'mov', 'jmp'])
- else:
- raise Exception("Implement this test for your CPU")
+ data = ctypes.string_at(asm, asmlen)
+ mc = list(machine_code_dump(data, asm, cpuname))
+ assert len(mc) == 5
+ checkops(mc, 1, self.add_loop_instructions)
+ data = ctypes.string_at(basm, basmlen)
+ mc = list(machine_code_dump(data, basm, cpuname))
+ assert len(mc) == 4
+ checkops(mc, 1, self.bridge_loop_instructions)
def test_compile_bridge_with_target(self):
diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py
--- a/pypy/jit/backend/x86/test/test_runner.py
+++ b/pypy/jit/backend/x86/test/test_runner.py
@@ -33,6 +33,9 @@
# for the individual tests see
# ====> ../../test/runner_test.py
+ add_loop_instructions = ['add', 'test', 'je', 'jmp']
+ bridge_loop_instructions = ['lea', 'mov', 'jmp']
+
def setup_method(self, meth):
self.cpu = CPU(rtyper=None, stats=FakeStats())
self.cpu.setup_once()
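The commit above moves hardcoded x86 instruction lists out of an `if 'x86' in cpuname` branch and into class attributes that each backend's test class overrides. A minimal sketch of that pattern (class and method names here are invented for illustration, not the actual PyPy test classes):

```python
# Template-method style: the base test ships placeholder expectations,
# concrete backends override them with their real mnemonic lists.

class RunnerSketch:
    add_loop_instructions = ['overload for a specific cpu']
    bridge_loop_instructions = ['overload for a specific cpu']

    def check_disassembly(self, lines):
        # Each disassembled line must start with the expected mnemonic,
        # mirroring the checkops() helper in the diff.
        for line, op in zip(lines, self.add_loop_instructions):
            assert line.split("\t")[-1].startswith(op)

class X86RunnerSketch(RunnerSketch):
    # x86-specific expectations, matching the values added in the diff.
    add_loop_instructions = ['add', 'test', 'je', 'jmp']
    bridge_loop_instructions = ['lea', 'mov', 'jmp']
```

This way the shared test body in `runner_test.py` stays CPU-neutral, and a new backend only has to declare its two expected instruction lists.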
From noreply at buildbot.pypy.org Mon Jan 9 17:32:01 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 17:32:01 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: remove nonsense method,
update the docstring
Message-ID: <20120109163201.AAF5D82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51173:c39f96d8c69b
Date: 2012-01-09 18:31 +0200
http://bitbucket.org/pypy/pypy/changeset/c39f96d8c69b/
Log: remove nonsense method, update the docstring
diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py
--- a/pypy/jit/codewriter/policy.py
+++ b/pypy/jit/codewriter/policy.py
@@ -88,14 +88,6 @@
raise ValueError("access_directly on a function which we don't see %s" % graph)
return res
- def get_jit_portal(self):
- """ Returns a None or an instance of pypy.rlib.jit.JitPortal
- The portal methods are called for various special cases in the JIT
- as a mean to give feedback to the user. Read JitPortal's docstring
- for details.
- """
- return None
-
def contains_unsupported_variable_type(graph, supports_floats,
supports_longlong,
supports_singlefloats):
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -729,8 +729,7 @@
""" This is the main connector between the JIT and the interpreter.
Several methods on portal will be invoked at various stages of JIT running
like JIT loops compiled, aborts etc.
- An instance of this class might be returned by the policy.get_jit_portal
- method in order to function.
+ An instance of this class will be available as policy.portal.
each hook will accept some of the following args:
From noreply at buildbot.pypy.org Mon Jan 9 18:08:50 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 18:08:50 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: use try: finally: for
cache.in_recursion
Message-ID: <20120109170850.2BE3B82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51174:d142d1bd4aa9
Date: 2012-01-09 19:07 +0200
http://bitbucket.org/pypy/pypy/changeset/d142d1bd4aa9/
Log: use try: finally: for cache.in_recursion
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -15,13 +15,15 @@
if space.is_true(cache.w_abort_hook):
cache.in_recursion = True
try:
- space.call_function(cache.w_abort_hook,
- space.wrap(jitdriver.name),
- wrap_greenkey(space, jitdriver, greenkey),
- space.wrap(counter_names[reason]))
- except OperationError, e:
- e.write_unraisable(space, "jit hook ", cache.w_abort_hook)
- cache.in_recursion = False
+ try:
+ space.call_function(cache.w_abort_hook,
+ space.wrap(jitdriver.name),
+ wrap_greenkey(space, jitdriver, greenkey),
+ space.wrap(counter_names[reason]))
+ except OperationError, e:
+ e.write_unraisable(space, "jit hook ", cache.w_abort_hook)
+ finally:
+ cache.in_recursion = False
def after_compile(self, jitdriver, logger, looptoken, operations, type,
greenkey, ops_offset, asmstart, asmlen):
@@ -56,16 +58,18 @@
list_w = wrap_oplist(space, logops, operations, ops_offset)
cache.in_recursion = True
try:
- space.call_function(cache.w_compile_hook,
- space.wrap(jitdriver.name),
- space.wrap(type),
- w_arg,
- space.newlist(list_w),
- space.wrap(asmstart),
- space.wrap(asmlen))
- except OperationError, e:
- e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
- cache.in_recursion = False
+ try:
+ space.call_function(cache.w_compile_hook,
+ space.wrap(jitdriver.name),
+ space.wrap(type),
+ w_arg,
+ space.newlist(list_w),
+ space.wrap(asmstart),
+ space.wrap(asmlen))
+ except OperationError, e:
+ e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
+ finally:
+ cache.in_recursion = False
def _optimize_hook(self, jitdriver, logger, operations, type, w_arg):
space = self.space
@@ -77,26 +81,28 @@
list_w = wrap_oplist(space, logops, operations, {})
cache.in_recursion = True
try:
- w_res = space.call_function(cache.w_optimize_hook,
- space.wrap(jitdriver.name),
- space.wrap(type),
- w_arg,
- space.newlist(list_w))
- if space.is_w(w_res, space.w_None):
- cache.in_recursion = False
- return
- l = []
- for w_item in space.listview(w_res):
- item = space.interp_w(WrappedOp, w_item)
- l.append(jit_hooks._cast_to_resop(item.op))
- del operations[:] # modifying operations above is probably not
- # a great idea since types may not work and we'll end up with
- # half-working list and a segfault/fatal RPython error
- for elem in l:
- operations.append(elem)
- except OperationError, e:
- e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
- cache.in_recursion = False
+ try:
+ w_res = space.call_function(cache.w_optimize_hook,
+ space.wrap(jitdriver.name),
+ space.wrap(type),
+ w_arg,
+ space.newlist(list_w))
+ if space.is_w(w_res, space.w_None):
+ return
+ l = []
+ for w_item in space.listview(w_res):
+ item = space.interp_w(WrappedOp, w_item)
+ l.append(jit_hooks._cast_to_resop(item.op))
+ del operations[:] # modifying operations above is
+ # probably not a great idea since types may not work
+ # and we'll end up with half-working list and
+ # a segfault/fatal RPython error
+ for elem in l:
+ operations.append(elem)
+ except OperationError, e:
+ e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
+ finally:
+ cache.in_recursion = False
pypy_portal = PyPyPortal()
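The refactoring in this commit guarantees that `cache.in_recursion` is reset on every exit path, not just the normal one. A minimal sketch of the pattern, with generic names rather than the actual PyPy space/cache objects:

```python
# Re-entrancy guard around a user-supplied hook: the flag must be cleared
# even when the hook raises something the inner except does not catch.

class HookCache:
    def __init__(self):
        self.in_recursion = False

def call_hook(cache, hook, *args):
    if cache.in_recursion:
        return                      # never re-enter a hook from inside itself
    cache.in_recursion = True
    try:
        try:
            hook(*args)
        except Exception as e:
            # Report and swallow, analogous to e.write_unraisable(...)
            print("jit hook failed:", e)
    finally:
        cache.in_recursion = False  # reset on *every* exit path
```

Before the change, an early `return` or an unexpected exception could leave `in_recursion` permanently set, silently disabling all further hook calls.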
From noreply at buildbot.pypy.org Mon Jan 9 18:08:51 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 18:08:51 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: add a name to another jitdriver
Message-ID: <20120109170851.5050182110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51175:1b168f836dde
Date: 2012-01-09 19:08 +0200
http://bitbucket.org/pypy/pypy/changeset/1b168f836dde/
Log: add a name to another jitdriver
diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py
--- a/pypy/interpreter/generator.py
+++ b/pypy/interpreter/generator.py
@@ -162,7 +162,8 @@
# generate 2 versions of the function and 2 jit drivers.
def _create_unpack_into():
jitdriver = jit.JitDriver(greens=['pycode'],
- reds=['self', 'frame', 'results'])
+ reds=['self', 'frame', 'results'],
+ name='unpack_into')
def unpack_into(self, results):
"""This is a hack for performance: runs the generator and collects
all produced items in a list."""
@@ -196,4 +197,4 @@
self.frame = None
return unpack_into
unpack_into = _create_unpack_into()
- unpack_into_w = _create_unpack_into()
\ No newline at end of file
+ unpack_into_w = _create_unpack_into()
From noreply at buildbot.pypy.org Mon Jan 9 18:17:54 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 18:17:54 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: rename JitPortal to
JitHookInterface
Message-ID: <20120109171754.D6C0582110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51176:8fdbf83e4cce
Date: 2012-01-09 19:17 +0200
http://bitbucket.org/pypy/pypy/changeset/8fdbf83e4cce/
Log: rename JitPortal to JitHookInterface
diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py
--- a/pypy/jit/codewriter/policy.py
+++ b/pypy/jit/codewriter/policy.py
@@ -8,15 +8,15 @@
class JitPolicy(object):
- def __init__(self, portal=None):
+ def __init__(self, jithookiface=None):
self.unsafe_loopy_graphs = set()
self.supports_floats = False
self.supports_longlong = False
self.supports_singlefloats = False
- if portal is None:
- from pypy.rlib.jit import JitPortal
- portal = JitPortal()
- self.portal = portal
+ if jithookiface is None:
+ from pypy.rlib.jit import JitHookInterface
+ jithookiface = JitHookInterface()
+ self.jithookiface = jithookiface
def set_supports_floats(self, flag):
self.supports_floats = flag
diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py
--- a/pypy/jit/metainterp/compile.py
+++ b/pypy/jit/metainterp/compile.py
@@ -306,12 +306,12 @@
loop.check_consistency()
if metainterp_sd.warmrunnerdesc is not None:
- portal = metainterp_sd.warmrunnerdesc.portal
- portal.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
- original_jitcell_token, loop.operations, type,
- greenkey)
+ hooks = metainterp_sd.warmrunnerdesc.hooks
+ hooks.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
+ original_jitcell_token, loop.operations, type,
+ greenkey)
else:
- portal = None
+ hooks = None
operations = get_deep_immutable_oplist(loop.operations)
metainterp_sd.profiler.start_backend()
debug_start("jit-backend")
@@ -323,8 +323,8 @@
finally:
debug_stop("jit-backend")
metainterp_sd.profiler.end_backend()
- if portal is not None:
- portal.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
+ if hooks is not None:
+ hooks.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
original_jitcell_token, loop.operations, type,
greenkey, ops_offset, asmstart, asmlen)
metainterp_sd.stats.add_new_loop(loop)
@@ -348,12 +348,12 @@
seen = dict.fromkeys(inputargs)
TreeLoop.check_consistency_of_branch(operations, seen)
if metainterp_sd.warmrunnerdesc is not None:
- portal = metainterp_sd.warmrunnerdesc.portal
- portal.before_compile_bridge(jitdriver_sd.jitdriver,
+ hooks = metainterp_sd.warmrunnerdesc.hooks
+ hooks.before_compile_bridge(jitdriver_sd.jitdriver,
metainterp_sd.logger_ops,
original_loop_token, operations, n)
else:
- portal = None
+ hooks = None
operations = get_deep_immutable_oplist(operations)
metainterp_sd.profiler.start_backend()
debug_start("jit-backend")
@@ -364,12 +364,12 @@
finally:
debug_stop("jit-backend")
metainterp_sd.profiler.end_backend()
- if portal is not None:
- portal.after_compile_bridge(jitdriver_sd.jitdriver,
- metainterp_sd.logger_ops,
- original_loop_token, operations, n,
- ops_offset,
- asmstart, asmlen)
+ if hooks is not None:
+ hooks.after_compile_bridge(jitdriver_sd.jitdriver,
+ metainterp_sd.logger_ops,
+ original_loop_token, operations, n,
+ ops_offset,
+ asmstart, asmlen)
if not we_are_translated():
metainterp_sd.stats.compiled()
metainterp_sd.log("compiled new bridge")
diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -1795,8 +1795,8 @@
debug_print('~~~ ABORTING TRACING')
jd_sd = self.jitdriver_sd
greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args]
- self.staticdata.warmrunnerdesc.portal.on_abort(reason, jd_sd.jitdriver,
- greenkey)
+ self.staticdata.warmrunnerdesc.hooks.on_abort(reason, jd_sd.jitdriver,
+ greenkey)
self.staticdata.stats.aborted()
def blackhole_if_trace_too_long(self):
diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitiface.py
rename from pypy/jit/metainterp/test/test_jitportal.py
rename to pypy/jit/metainterp/test/test_jitiface.py
--- a/pypy/jit/metainterp/test/test_jitportal.py
+++ b/pypy/jit/metainterp/test/test_jitiface.py
@@ -1,5 +1,5 @@
-from pypy.rlib.jit import JitDriver, JitPortal
+from pypy.rlib.jit import JitDriver, JitHookInterface
from pypy.rlib import jit_hooks
from pypy.jit.metainterp.test.support import LLJitMixin
from pypy.jit.codewriter.policy import JitPolicy
@@ -7,17 +7,17 @@
from pypy.jit.metainterp.resoperation import rop
from pypy.rpython.annlowlevel import hlstr
-class TestJitPortal(LLJitMixin):
+class TestJitHookInterface(LLJitMixin):
def test_abort_quasi_immut(self):
reasons = []
- class MyJitPortal(JitPortal):
+ class MyJitIface(JitHookInterface):
def on_abort(self, reason, jitdriver, greenkey):
assert jitdriver is myjitdriver
assert len(greenkey) == 1
reasons.append(reason)
- portal = MyJitPortal()
+ iface = MyJitIface()
myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'])
@@ -37,14 +37,14 @@
return total
#
assert f(100, 7) == 721
- res = self.meta_interp(f, [100, 7], policy=JitPolicy(portal))
+ res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface))
assert res == 721
assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2
def test_on_compile(self):
called = []
- class MyJitPortal(JitPortal):
+ class MyJitIface(JitHookInterface):
def after_compile(self, jitdriver, logger, looptoken, operations,
type, greenkey, ops_offset, asmaddr, asmlen):
assert asmaddr == 0
@@ -62,7 +62,7 @@
called.append(("trace", greenkey[1].getint(),
greenkey[0].getint(), type))
- portal = MyJitPortal()
+ iface = MyJitIface()
driver = JitDriver(greens = ['n', 'm'], reds = ['i'])
@@ -73,11 +73,11 @@
driver.jit_merge_point(n=n, m=m, i=i)
i += 1
- self.meta_interp(loop, [1, 4], policy=JitPolicy(portal))
+ self.meta_interp(loop, [1, 4], policy=JitPolicy(iface))
assert called == [#("trace", 4, 1, "loop"),
("optimize", 4, 1, "loop"),
("compile", 4, 1, "loop")]
- self.meta_interp(loop, [2, 4], policy=JitPolicy(portal))
+ self.meta_interp(loop, [2, 4], policy=JitPolicy(iface))
assert called == [#("trace", 4, 1, "loop"),
("optimize", 4, 1, "loop"),
("compile", 4, 1, "loop"),
@@ -88,7 +88,7 @@
def test_on_compile_bridge(self):
called = []
- class MyJitPortal(JitPortal):
+ class MyJitIface(JitHookInterface):
def after_compile(self, jitdriver, logger, looptoken, operations,
type, greenkey, ops_offset, asmaddr, asmlen):
assert asmaddr == 0
@@ -114,7 +114,7 @@
i += 2
i += 1
- self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal()))
+ self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface()))
assert called == ["compile", "before_compile_bridge", "compile_bridge"]
def test_resop_interface(self):
diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py
--- a/pypy/jit/metainterp/warmspot.py
+++ b/pypy/jit/metainterp/warmspot.py
@@ -210,13 +210,12 @@
vrefinfo = VirtualRefInfo(self)
self.codewriter.setup_vrefinfo(vrefinfo)
#
- self.portal = policy.portal
+ self.hooks = policy.jithookiface
self.make_virtualizable_infos()
self.make_exception_classes()
self.make_driverhook_graphs()
self.make_enter_functions()
self.rewrite_jit_merge_points(policy)
- self.portal = policy.portal
verbose = False # not self.cpu.translate_support_code
self.rewrite_access_helpers()
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -725,11 +725,11 @@
return hop.genop('jit_marker', vlist,
resulttype=lltype.Void)
-class JitPortal(object):
+class JitHookInterface(object):
""" This is the main connector between the JIT and the interpreter.
- Several methods on portal will be invoked at various stages of JIT running
- like JIT loops compiled, aborts etc.
- An instance of this class will be available as policy.portal.
+ Several methods on this class will be invoked at various stages
+ of JIT running like JIT loops compiled, aborts etc.
+ An instance of this class will be available as policy.jithookiface.
each hook will accept some of the following args:
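The renamed `JitPolicy.__init__` above also shows a null-object pattern: when no hook interface is supplied, a do-nothing default is installed so callers never need `None` checks. A simplified sketch (class names suffixed `Sketch` to mark them as illustrations, not the real PyPy classes):

```python
# Null-object default: every hook is a no-op unless a subclass overrides it.

class JitHookInterfaceSketch:
    def on_abort(self, reason, jitdriver, greenkey):
        pass  # overridden by users who want abort notifications

class JitPolicySketch:
    def __init__(self, jithookiface=None):
        if jithookiface is None:
            # Install the no-op interface so hook call sites stay branch-free.
            jithookiface = JitHookInterfaceSketch()
        self.jithookiface = jithookiface
```

Call sites can then invoke `policy.jithookiface.on_abort(...)` unconditionally, whether or not the embedder registered real hooks.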
From noreply at buildbot.pypy.org Mon Jan 9 18:20:44 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 18:20:44 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: update the pypyjit module as
well
Message-ID: <20120109172044.A198A82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51177:666eb3524b3c
Date: 2012-01-09 19:20 +0200
http://bitbucket.org/pypy/pypy/changeset/666eb3524b3c/
Log: update the pypyjit module as well
diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py
--- a/pypy/module/pypyjit/__init__.py
+++ b/pypy/module/pypyjit/__init__.py
@@ -17,11 +17,11 @@
def setup_after_space_initialization(self):
# force the __extend__ hacks to occur early
from pypy.module.pypyjit.interp_jit import pypyjitdriver
- from pypy.module.pypyjit.policy import pypy_portal
+ from pypy.module.pypyjit.policy import pypy_hooks
# add the 'defaults' attribute
from pypy.rlib.jit import PARAMETERS
space = self.space
pypyjitdriver.space = space
w_obj = space.wrap(PARAMETERS)
space.setattr(space.wrap(self), space.wrap('defaults'), w_obj)
- pypy_portal.space = space
+ pypy_hooks.space = space
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -1,12 +1,12 @@
from pypy.jit.codewriter.policy import JitPolicy
-from pypy.rlib.jit import JitPortal
+from pypy.rlib.jit import JitHookInterface
from pypy.rlib import jit_hooks
from pypy.interpreter.error import OperationError
from pypy.jit.metainterp.jitprof import counter_names
from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\
WrappedOp
-class PyPyPortal(JitPortal):
+class PyPyJitIface(JitHookInterface):
def on_abort(self, reason, jitdriver, greenkey):
space = self.space
cache = space.fromcache(Cache)
@@ -104,7 +104,7 @@
finally:
cache.in_recursion = False
-pypy_portal = PyPyPortal()
+pypy_hooks = PyPyJitIface()
class PyPyJitPolicy(JitPolicy):
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -11,7 +11,7 @@
from pypy.rpython.lltypesystem import lltype, llmemory
from pypy.rpython.lltypesystem.rclass import OBJECT
from pypy.module.pypyjit.interp_jit import pypyjitdriver
-from pypy.module.pypyjit.policy import pypy_portal
+from pypy.module.pypyjit.policy import pypy_hooks
from pypy.jit.tool.oparser import parse
from pypy.jit.metainterp.typesystem import llhelper
from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG
@@ -61,21 +61,21 @@
offset[op] = i
def interp_on_compile():
- pypy_portal.after_compile(pypyjitdriver, logger, JitCellToken(),
+ pypy_hooks.after_compile(pypyjitdriver, logger, JitCellToken(),
cls.oplist, 'loop', greenkey, offset,
0, 0)
def interp_on_compile_bridge():
- pypy_portal.after_compile_bridge(pypyjitdriver, logger,
+ pypy_hooks.after_compile_bridge(pypyjitdriver, logger,
JitCellToken(), cls.oplist, 0,
offset, 0, 0)
def interp_on_optimize():
- pypy_portal.before_compile(pypyjitdriver, logger, JitCellToken(),
+ pypy_hooks.before_compile(pypyjitdriver, logger, JitCellToken(),
cls.oplist, 'loop', greenkey)
def interp_on_abort():
- pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey)
+ pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey)
cls.w_on_compile = space.wrap(interp2app(interp_on_compile))
cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge))
diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py
--- a/pypy/translator/goal/targetpypystandalone.py
+++ b/pypy/translator/goal/targetpypystandalone.py
@@ -226,8 +226,8 @@
return self.get_entry_point(config)
def jitpolicy(self, driver):
- from pypy.module.pypyjit.policy import PyPyJitPolicy, pypy_portal
- return PyPyJitPolicy(pypy_portal)
+ from pypy.module.pypyjit.policy import PyPyJitPolicy, pypy_hooks
+ return PyPyJitPolicy(pypy_hooks)
def get_entry_point(self, config):
from pypy.tool.lib_pypy import import_from_lib_pypy
From noreply at buildbot.pypy.org Mon Jan 9 18:49:56 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 18:49:56 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: improve the situation with
arguments of the hooks
Message-ID: <20120109174956.E165382110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51178:b3dd81a62153
Date: 2012-01-09 19:49 +0200
http://bitbucket.org/pypy/pypy/changeset/b3dd81a62153/
Log: improve the situation with arguments of the hooks
diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py
--- a/pypy/jit/backend/llgraph/runner.py
+++ b/pypy/jit/backend/llgraph/runner.py
@@ -141,7 +141,6 @@
self._compile_loop_or_bridge(c, inputargs, operations, clt)
old, oldindex = faildescr._compiled_fail
llimpl.compile_redirect_fail(old, oldindex, c)
- return None, 0, 0
def compile_loop(self, inputargs, operations, jitcell_token,
log=True, name=''):
@@ -156,7 +155,6 @@
clt.compiled_version = c
jitcell_token.compiled_loop_token = clt
self._compile_loop_or_bridge(c, inputargs, operations, clt)
- return None, 0, 0
def free_loop_and_bridges(self, compiled_loop_token):
for c in compiled_loop_token.loop_and_bridges:
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py
--- a/pypy/jit/backend/x86/assembler.py
+++ b/pypy/jit/backend/x86/assembler.py
@@ -7,6 +7,7 @@
from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory
from pypy.rpython.lltypesystem.lloperation import llop
from pypy.rpython.annlowlevel import llhelper
+from pypy.rlib.jit import AsmInfo
from pypy.jit.backend.model import CompiledLoopToken
from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale,
gpr_reg_mgr_cls, _valid_addressing_size)
@@ -477,8 +478,8 @@
name = "Loop # %s: %s" % (looptoken.number, loopname)
self.cpu.profile_agent.native_code_written(name,
rawstart, full_size)
- return (ops_offset, rawstart + looppos,
- size_excluding_failure_stuff - looppos)
+ return AsmInfo(ops_offset, rawstart + looppos,
+ size_excluding_failure_stuff - looppos)
def assemble_bridge(self, faildescr, inputargs, operations,
original_loop_token, log):
@@ -492,7 +493,7 @@
except ValueError:
debug_print("Bridge out of guard", descr_number,
"was already compiled!")
- raise
+ return
self.setup(original_loop_token)
if log:
@@ -540,7 +541,7 @@
name = "Bridge # %s" % (descr_number,)
self.cpu.profile_agent.native_code_written(name,
rawstart, fullsize)
- return ops_offset, startpos + rawstart, codeendpos - startpos
+ return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos)
def write_pending_failure_recoveries(self):
# for each pending guard, generate the code of the recovery stub
diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py
--- a/pypy/jit/metainterp/compile.py
+++ b/pypy/jit/metainterp/compile.py
@@ -5,6 +5,7 @@
from pypy.rlib.objectmodel import we_are_translated
from pypy.rlib.debug import debug_start, debug_stop, debug_print
from pypy.rlib import rstack
+from pypy.rlib.jit import JitDebugInfo
from pypy.conftest import option
from pypy.tool.sourcetools import func_with_new_name
@@ -307,32 +308,36 @@
if metainterp_sd.warmrunnerdesc is not None:
hooks = metainterp_sd.warmrunnerdesc.hooks
- hooks.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
- original_jitcell_token, loop.operations, type,
- greenkey)
+ debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops,
+ original_jitcell_token, loop.operations,
+ type, greenkey)
+ hooks.before_compile(debug_info)
else:
+ debug_info = None
hooks = None
operations = get_deep_immutable_oplist(loop.operations)
metainterp_sd.profiler.start_backend()
debug_start("jit-backend")
try:
- tp = metainterp_sd.cpu.compile_loop(loop.inputargs, operations,
- original_jitcell_token,
- name=loopname)
- ops_offset, asmstart, asmlen = tp
+ asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations,
+ original_jitcell_token,
+ name=loopname)
finally:
debug_stop("jit-backend")
metainterp_sd.profiler.end_backend()
if hooks is not None:
- hooks.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops,
- original_jitcell_token, loop.operations, type,
- greenkey, ops_offset, asmstart, asmlen)
+ debug_info.asminfo = asminfo
+ hooks.after_compile(debug_info)
metainterp_sd.stats.add_new_loop(loop)
if not we_are_translated():
metainterp_sd.stats.compiled()
metainterp_sd.log("compiled new " + type)
#
loopname = jitdriver_sd.warmstate.get_location_str(greenkey)
+ if asminfo is not None:
+ ops_offset = asminfo.ops_offset
+ else:
+ ops_offset = None
metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n,
type, ops_offset,
name=loopname)
@@ -349,31 +354,34 @@
TreeLoop.check_consistency_of_branch(operations, seen)
if metainterp_sd.warmrunnerdesc is not None:
hooks = metainterp_sd.warmrunnerdesc.hooks
- hooks.before_compile_bridge(jitdriver_sd.jitdriver,
- metainterp_sd.logger_ops,
- original_loop_token, operations, n)
+ debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops,
+ original_loop_token, operations, 'bridge',
+ fail_descr_no=n)
+ hooks.before_compile_bridge(debug_info)
else:
hooks = None
+ debug_info = None
operations = get_deep_immutable_oplist(operations)
metainterp_sd.profiler.start_backend()
debug_start("jit-backend")
try:
- tp = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations,
- original_loop_token)
- ops_offset, asmstart, asmlen = tp
+ asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs,
+ operations,
+ original_loop_token)
finally:
debug_stop("jit-backend")
metainterp_sd.profiler.end_backend()
if hooks is not None:
- hooks.after_compile_bridge(jitdriver_sd.jitdriver,
- metainterp_sd.logger_ops,
- original_loop_token, operations, n,
- ops_offset,
- asmstart, asmlen)
+ debug_info.asminfo = asminfo
+ hooks.after_compile_bridge(debug_info)
if not we_are_translated():
metainterp_sd.stats.compiled()
metainterp_sd.log("compiled new bridge")
#
+ if asminfo is not None:
+ ops_offset = asminfo.ops_offset
+ else:
+ ops_offset = None
metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset)
#
#if metainterp_sd.warmrunnerdesc is not None: # for tests
diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py
--- a/pypy/jit/metainterp/test/test_jitiface.py
+++ b/pypy/jit/metainterp/test/test_jitiface.py
@@ -45,22 +45,18 @@
called = []
class MyJitIface(JitHookInterface):
- def after_compile(self, jitdriver, logger, looptoken, operations,
- type, greenkey, ops_offset, asmaddr, asmlen):
- assert asmaddr == 0
- assert asmlen == 0
- called.append(("compile", greenkey[1].getint(),
- greenkey[0].getint(), type))
+ def after_compile(self, di):
+ called.append(("compile", di.greenkey[1].getint(),
+ di.greenkey[0].getint(), di.type))
- def before_compile(self, jitdriver, logger, looptoken, oeprations,
- type, greenkey):
- called.append(("optimize", greenkey[1].getint(),
- greenkey[0].getint(), type))
+ def before_compile(self, di):
+ called.append(("optimize", di.greenkey[1].getint(),
+ di.greenkey[0].getint(), di.type))
- def before_optimize(self, jitdriver, logger, looptoken, oeprations,
- type, greenkey):
- called.append(("trace", greenkey[1].getint(),
- greenkey[0].getint(), type))
+ #def before_optimize(self, jitdriver, logger, looptoken, oeprations,
+ # type, greenkey):
+ # called.append(("trace", greenkey[1].getint(),
+ # greenkey[0].getint(), type))
iface = MyJitIface()
@@ -89,18 +85,13 @@
called = []
class MyJitIface(JitHookInterface):
- def after_compile(self, jitdriver, logger, looptoken, operations,
- type, greenkey, ops_offset, asmaddr, asmlen):
- assert asmaddr == 0
- assert asmlen == 0
+ def after_compile(self, di):
called.append("compile")
- def after_compile_bridge(self, jitdriver, logger, orig_token,
- operations, n, ops_offset, asmstart, asmlen):
+ def after_compile_bridge(self, di):
called.append("compile_bridge")
- def before_compile_bridge(self, jitdriver, logger, orig_token,
- operations, n):
+ def before_compile_bridge(self, di):
called.append("before_compile_bridge")
driver = JitDriver(greens = ['n', 'm'], reds = ['i'])
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -107,9 +107,15 @@
cache.in_recursion = NonConstant(False)
def wrap_oplist(space, logops, operations, ops_offset):
- return [WrappedOp(jit_hooks._cast_to_gcref(op),
- ops_offset.get(op, 0),
- logops.repr_of_resop(op)) for op in operations]
+ l_w = []
+ for op in operations:
+ if ops_offset is None:
+ ofs = -1
+ else:
+ ofs = ops_offset.get(op, 0)
+ l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs,
+ logops.repr_of_resop(op)))
+ return l_w
class WrappedBox(Wrappable):
""" A class representing a single box
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -725,72 +725,104 @@
return hop.genop('jit_marker', vlist,
resulttype=lltype.Void)
+class AsmInfo(object):
+ """ An addition to JitDebugInfo concerning assembler. Attributes:
+
+ ops_offset - dict of offsets of operations or None
+ asmaddr - (int) raw address of assembler block
+ asmlen - assembler block length
+ """
+ def __init__(self, ops_offset, asmaddr, asmlen):
+ self.ops_offset = ops_offset
+ self.asmaddr = asmaddr
+ self.asmlen = asmlen
+
+class JitDebugInfo(object):
+ """ An object representing debug info. Attributes meanings:
+
+ greenkey - a list of green boxes or None for bridge
+ logger - an instance of jit.metainterp.logger.LogOperations
+ type - either 'loop', 'entry bridge' or 'bridge'
+ looptoken - description of a loop
+ fail_descr_no - number of failing descr for bridges, -1 otherwise
+ asminfo - extra assembler information
+ """
+
+ asminfo = None
+ def __init__(self, jitdriver_sd, logger, looptoken, operations, type,
+ greenkey=None, fail_descr_no=-1):
+ self.jitdriver_sd = jitdriver_sd
+ self.logger = logger
+ self.looptoken = looptoken
+ self.operations = operations
+ self.type = type
+ if type == 'bridge':
+ assert fail_descr_no != -1
+ else:
+ assert greenkey is not None
+ self.greenkey = greenkey
+ self.fail_descr_no = fail_descr_no
+
+ def get_jitdriver(self):
+ """ Return where the jitdriver on which the jitting started
+ """
+ return self.jitdriver_sd.jitdriver
+
+ def get_greenkey_repr(self):
+ """ Return the string repr of a greenkey
+ """
+ return self.jitdriver_sd.warmstate.get_location_str(self.greenkey)
+
class JitHookInterface(object):
""" This is the main connector between the JIT and the interpreter.
Several methods on this class will be invoked at various stages
of JIT running like JIT loops compiled, aborts etc.
An instance of this class will be available as policy.jithookiface.
-
- each hook will accept some of the following args:
-
-
- greenkey - a list of green boxes
- jitdriver - an instance of jitdriver where tracing started
- logger - an instance of jit.metainterp.logger.LogOperations
- ops_offset
- asmaddr - (int) raw address of assembler block
- asmlen - assembler block length
- type - either 'loop' or 'entry bridge'
"""
def on_abort(self, reason, jitdriver, greenkey):
""" A hook called each time a loop is aborted with jitdriver and
greenkey where it started, reason is a string why it got aborted
"""
- #def before_optimize(self, jitdriver, logger, looptoken, operations,
- # type, greenkey):
- # """ A hook called before optimizer is run, args described in class
- # docstring. Overwrite for custom behavior
+ #def before_optimize(self, debug_info):
+ # """ A hook called before optimizer is run, called with instance of
+ # JitDebugInfo. Overwrite for custom behavior
# """
# DISABLED
- def before_compile(self, jitdriver, logger, looptoken, operations, type,
- greenkey):
+ def before_compile(self, debug_info):
""" A hook called after a loop is optimized, before compiling assembler,
- args described ni class docstring. Overwrite for custom behavior
+ called with JitDebugInfo instance. Overwrite for custom behavior
"""
- def after_compile(self, jitdriver, logger, looptoken, operations, type,
- greenkey, ops_offset, asmaddr, asmlen):
+ def after_compile(self, debug_info):
""" A hook called after a loop has compiled assembler,
- args described in class docstring. Overwrite for custom behavior
+ called with JitDebugInfo instance. Overwrite for custom behavior
"""
- #def before_optimize_bridge(self, jitdriver, logger, orig_looptoken,
+ #def before_optimize_bridge(self, debug_info):
# operations, fail_descr_no):
# """ A hook called before a bridge is optimized.
- # Args described in class docstring, Overwrite for
+ # Called with JitDebugInfo instance, overwrite for
# custom behavior
# """
# DISABLED
- def before_compile_bridge(self, jitdriver, logger, orig_looptoken,
- operations, fail_descr_no):
+ def before_compile_bridge(self, debug_info):
""" A hook called before a bridge is compiled, but after optimizations
- are performed. Args described in class docstring, Overwrite for
+ are performed. Called with instance of debug_info, overwrite for
custom behavior
"""
- def after_compile_bridge(self, jitdriver, logger, orig_looptoken,
- operations, fail_descr_no, ops_offset, asmaddr,
- asmlen):
- """ A hook called after a bridge is compiled, args described in class
- docstring, Overwrite for custom behavior
+ def after_compile_bridge(self, debug_info):
+ """ A hook called after a bridge is compiled, called with JitDebugInfo
+ instance, overwrite for custom behavior
"""
def get_stats(self):
""" Returns various statistics
"""
+ raise NotImplementedError
def record_known_class(value, cls):
"""
From noreply at buildbot.pypy.org Mon Jan 9 21:38:29 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Mon, 9 Jan 2012 21:38:29 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: zjit improvement
Message-ID: <20120109203829.BB3B682110@wyvern.cs.uni-duesseldorf.de>
Author: mattip
Branch: numpypy-axisops
Changeset: r51179:0722e568f060
Date: 2012-01-09 22:36 +0200
http://bitbucket.org/pypy/pypy/changeset/0722e568f060/
Log: zjit improvement
diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py
--- a/pypy/module/micronumpy/interp_iter.py
+++ b/pypy/module/micronumpy/interp_iter.py
@@ -22,6 +22,9 @@
def done(self):
raise NotImplementedError
+ def axis_done(self):
+ raise NotImplementedError
+
class ArrayIterator(BaseIterator):
def __init__(self, size):
self.offset = 0
@@ -120,7 +123,7 @@
self.shapelen = len(shape)
self.indices = [0] * len(shape)
self._done = False
- self.axis_done = False
+ self._axis_done = False
self.offset = arr_start
self.dim = dim
self.dim_order = []
@@ -136,13 +139,19 @@
def done(self):
return self._done
+ def axis_done(self):
+ return self._axis_done
+
+ @jit.unroll_safe
def next(self, shapelen):
#shapelen will always be one less than self.shapelen
offset = self.offset
- axis_done = False
- indices = [0] * self.shapelen
- for i in range(self.shapelen):
- indices[i] = self.indices[i]
+ _axis_done = False
+ done = False
+ #indices = [0] * self.shapelen
+ #for i in range(self.shapelen):
+ # indices[i] = self.indices[i]
+ indices = self.indices
for i in self.dim_order:
if indices[i] < self.shape[i] - 1:
indices[i] += 1
@@ -150,13 +159,13 @@
break
else:
if i == self.dim:
- axis_done = True
+ _axis_done = True
indices[i] = 0
offset -= self.backstrides[i]
else:
- self._done = True
+ done = True
res = instantiate(AxisIterator)
- res.axis_done = axis_done
+ res._axis_done = _axis_done
res.offset = offset
res.indices = indices
res.strides = self.strides
@@ -165,7 +174,7 @@
res.shape = self.shape
res.shapelen = self.shapelen
res.dim = self.dim
- res._done = self._done
+ res._done = done
return res
# ------ other iterators that are not part of the computation frame ----------
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -39,7 +39,7 @@
axisreduce_driver = jit.JitDriver(
greens=['shapelen', 'sig'],
virtualizables=['frame'],
- reds=['identity', 'self','result', 'ri', 'frame', 'nextval', 'dtype', 'value'],
+ reds=['identity', 'self','result', 'ri', 'frame', 'dtype', 'value'],
get_printable_location=signature.new_printable_location('axisreduce'),
)
@@ -758,8 +758,6 @@
class Reduce(VirtualArray):
- _immutable_fields_ = ['dim', 'binfunc', 'dtype', 'identity']
-
def __init__(self, binfunc, name, dim, res_dtype, values, identity=None):
shape = values.shape[0:dim] + values.shape[dim + 1:len(values.shape)]
VirtualArray.__init__(self, name, shape, res_dtype)
@@ -803,27 +801,27 @@
ri = ArrayIterator(result.size)
frame = sig.create_frame(self.values, dim=self.dim)
value = self.get_identity(sig, frame, shapelen)
- nextval = sig.eval(frame, self.values).convert_to(dtype)
+ assert isinstance(sig, signature.ReduceSignature)
while not frame.done():
axisreduce_driver.jit_merge_point(frame=frame, self=self,
value=value, sig=sig,
shapelen=shapelen, ri=ri,
- nextval=nextval, dtype=dtype,
+ dtype=dtype,
identity=identity,
result=result)
- if frame.iterators[0].axis_done:
+ if frame.axis_done():
+ result.dtype.setitem(result.storage, ri.offset, value)
if identity is None:
value = sig.eval(frame, self.values).convert_to(dtype)
frame.next(shapelen)
else:
value = identity.convert_to(dtype)
ri = ri.next(shapelen)
- assert isinstance(sig, signature.ReduceSignature)
- nextval = sig.eval(frame, self.values).convert_to(dtype)
- value = self.binfunc(dtype, value, nextval)
- result.dtype.setitem(result.storage, ri.offset, value)
+ value = self.binfunc(dtype, value,
+ sig.eval(frame, self.values).convert_to(dtype))
frame.next(shapelen)
assert ri.done
+ result.dtype.setitem(result.storage, ri.offset, value)
return result
diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py
--- a/pypy/module/micronumpy/signature.py
+++ b/pypy/module/micronumpy/signature.py
@@ -59,6 +59,12 @@
for i in range(len(self.iterators)):
self.iterators[i] = self.iterators[i].next(shapelen)
+ def axis_done(self):
+ final_iter = promote(self.final_iter)
+ if final_iter < 0:
+ return False
+ return self.iterators[final_iter].axis_done()
+
def _add_ptr_to_cache(ptr, cache):
i = 0
for p in cache:
diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py
--- a/pypy/module/micronumpy/test/test_zjit.py
+++ b/pypy/module/micronumpy/test/test_zjit.py
@@ -127,16 +127,17 @@
def test_axissum(self):
result = self.run("axissum")
assert result == 30
- self.check_simple_loop({'arraylen_gc': 1,
- 'call': 1,
- 'getfield_gc': 3,
- "getinteriorfield_raw": 1,
- "guard_class": 1,
- "guard_false": 2,
- 'guard_no_exception': 1,
- "float_add": 1,
- "jump": 1,
- 'setinteriorfield_raw': 1,
+ self.check_simple_loop({\
+ 'setarrayitem_gc': 1,
+ 'getarrayitem_gc': 5,
+ 'getinteriorfield_raw': 1,
+ 'arraylen_gc': 2,
+ 'guard_true': 1,
+ 'int_sub': 1,
+ 'int_lt': 1,
+ 'jump': 1,
+ 'float_add': 1,
+ 'int_add': 2,
})
def define_prod():
@@ -236,7 +237,8 @@
def test_ufunc(self):
result = self.run("ufunc")
assert result == -6
- self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1,
+ self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1,
+ "float_neg": 1,
"setinteriorfield_raw": 1, "int_add": 2,
"int_ge": 1, "guard_false": 1, "jump": 1,
'arraylen_gc': 1})
@@ -346,7 +348,7 @@
result = self.run("setslice")
assert result == 11.0
self.check_trace_count(1)
- self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1,
+ self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1,
'setinteriorfield_raw': 1, 'int_add': 3,
'int_lt': 1, 'guard_true': 1, 'jump': 1,
'arraylen_gc': 3})
@@ -363,11 +365,12 @@
result = self.run("virtual_slice")
assert result == 4
self.check_trace_count(1)
- self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1,
+ self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1,
'setinteriorfield_raw': 1, 'int_add': 2,
'int_ge': 1, 'guard_false': 1, 'jump': 1,
'arraylen_gc': 1})
+
class TestNumpyOld(LLJitMixin):
def setup_class(cls):
py.test.skip("old")
@@ -401,4 +404,3 @@
result = self.meta_interp(f, [5], listops=True, backendopt=True)
assert result == f(5)
-
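`AxisIterator.next` in the diff above advances a multi-dimensional index like an odometer: it bumps the first dimension that is not yet at its limit and resets the ones before it. A self-contained sketch of that carry logic (simplified: no strides, backstrides, or JIT hints):

```python
def next_index(indices, shape):
    """Return the next multi-index in odometer order, or None when done."""
    indices = list(indices)
    for i in range(len(shape)):
        if indices[i] < shape[i] - 1:
            indices[i] += 1          # bump this dimension and stop
            return indices
        indices[i] = 0               # wrapped: carry into the next dimension
    return None                      # every dimension wrapped: iteration done

# Walk a 2x3 shape the way the iterator would.
idx = [0, 0]
order = []
while idx is not None:
    order.append(tuple(idx))
    idx = next_index(idx, (2, 3))
```

Because the loop over dimensions has a small, constant bound, the real code can mark it `@jit.unroll_safe` so the tracer unrolls it instead of treating it as an opaque loop.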
From noreply at buildbot.pypy.org Mon Jan 9 22:06:57 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 22:06:57 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: update the interface on the
pypyjit side
Message-ID: <20120109210657.C373382110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51180:aed03c7eb163
Date: 2012-01-09 23:06 +0200
http://bitbucket.org/pypy/pypy/changeset/aed03c7eb163/
Log: update the interface on the pypyjit side
diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -1796,7 +1796,7 @@
jd_sd = self.jitdriver_sd
greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args]
self.staticdata.warmrunnerdesc.hooks.on_abort(reason, jd_sd.jitdriver,
- greenkey)
+ greenkey, jd_sd.warmstate.get_location_str(greenkey))
self.staticdata.stats.aborted()
def blackhole_if_trace_too_long(self):
diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py
--- a/pypy/jit/metainterp/test/test_jitiface.py
+++ b/pypy/jit/metainterp/test/test_jitiface.py
@@ -12,14 +12,16 @@
reasons = []
class MyJitIface(JitHookInterface):
- def on_abort(self, reason, jitdriver, greenkey):
+ def on_abort(self, reason, jitdriver, greenkey, greenkey_repr):
assert jitdriver is myjitdriver
assert len(greenkey) == 1
reasons.append(reason)
+ assert greenkey_repr == 'blah'
iface = MyJitIface()
- myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'])
+ myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'],
+ get_printable_location=lambda *args: 'blah')
class Foo:
_immutable_fields_ = ['a?']
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -19,8 +19,9 @@
self.w_abort_hook = space.w_None
self.w_optimize_hook = space.w_None
-def wrap_greenkey(space, jitdriver, greenkey):
- if jitdriver.name == 'pypyjit':
+def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr):
+ jitdriver_name = jitdriver.name
+ if jitdriver_name == 'pypyjit':
next_instr = greenkey[0].getint()
is_being_profiled = greenkey[1].getint()
ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT),
@@ -29,7 +30,7 @@
return space.newtuple([space.wrap(pycode), space.wrap(next_instr),
space.newbool(bool(is_being_profiled))])
else:
- return space.wrap('who knows?')
+ return space.wrap(greenkey_repr)
def set_compile_hook(space, w_hook):
""" set_compile_hook(hook)
@@ -106,7 +107,7 @@
cache.w_abort_hook = w_hook
cache.in_recursion = NonConstant(False)
-def wrap_oplist(space, logops, operations, ops_offset):
+def wrap_oplist(space, logops, operations, ops_offset=None):
l_w = []
for op in operations:
if ops_offset is None:
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -7,7 +7,7 @@
WrappedOp
class PyPyJitIface(JitHookInterface):
- def on_abort(self, reason, jitdriver, greenkey):
+ def on_abort(self, reason, jitdriver, greenkey, greenkey_repr):
space = self.space
cache = space.fromcache(Cache)
if cache.in_recursion:
@@ -18,73 +18,75 @@
try:
space.call_function(cache.w_abort_hook,
space.wrap(jitdriver.name),
- wrap_greenkey(space, jitdriver, greenkey),
+ wrap_greenkey(space, jitdriver,
+ greenkey, greenkey_repr),
space.wrap(counter_names[reason]))
except OperationError, e:
e.write_unraisable(space, "jit hook ", cache.w_abort_hook)
finally:
cache.in_recursion = False
- def after_compile(self, jitdriver, logger, looptoken, operations, type,
- greenkey, ops_offset, asmstart, asmlen):
- self._compile_hook(jitdriver, logger, operations, type,
- ops_offset, asmstart, asmlen,
- wrap_greenkey(self.space, jitdriver, greenkey))
+ def after_compile(self, debug_info):
+ w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(),
+ debug_info.greenkey,
+ debug_info.get_greenkey_repr())
+ self._compile_hook(debug_info, w_greenkey)
- def after_compile_bridge(self, jitdriver, logger, orig_looptoken,
- operations, n, ops_offset, asmstart, asmlen):
- self._compile_hook(jitdriver, logger, operations, 'bridge',
- ops_offset, asmstart, asmlen,
- self.space.wrap(n))
+ def after_compile_bridge(self, debug_info):
+ self._compile_hook(debug_info,
+ self.space.wrap(debug_info.fail_descr_no))
- def before_compile(self, jitdriver, logger, looptoken, operations, type,
- greenkey):
- self._optimize_hook(jitdriver, logger, operations, type,
- wrap_greenkey(self.space, jitdriver, greenkey))
+ def before_compile(self, debug_info):
+ w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(),
+ debug_info.greenkey,
+ debug_info.get_greenkey_repr())
+ self._optimize_hook(debug_info, w_greenkey)
- def before_compile_bridge(self, jitdriver, logger, orig_looptoken,
- operations, n):
- self._optimize_hook(jitdriver, logger, operations, 'bridge',
- self.space.wrap(n))
+ def before_compile_bridge(self, debug_info):
+ self._optimize_hook(debug_info,
+ self.space.wrap(debug_info.fail_descr_no))
- def _compile_hook(self, jitdriver, logger, operations, type,
- ops_offset, asmstart, asmlen, w_arg):
+ def _compile_hook(self, debug_info, w_arg):
space = self.space
cache = space.fromcache(Cache)
if cache.in_recursion:
return
if space.is_true(cache.w_compile_hook):
- logops = logger._make_log_operations()
- list_w = wrap_oplist(space, logops, operations, ops_offset)
+ logops = debug_info.logger._make_log_operations()
+ list_w = wrap_oplist(space, logops, debug_info.operations,
+ debug_info.asminfo.ops_offset)
cache.in_recursion = True
try:
try:
+ jd_name = debug_info.get_jitdriver().name
+ asminfo = debug_info.asminfo
space.call_function(cache.w_compile_hook,
- space.wrap(jitdriver.name),
- space.wrap(type),
+ space.wrap(jd_name),
+ space.wrap(debug_info.type),
w_arg,
space.newlist(list_w),
- space.wrap(asmstart),
- space.wrap(asmlen))
+ space.wrap(asminfo.asmaddr),
+ space.wrap(asminfo.asmlen))
except OperationError, e:
e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
finally:
cache.in_recursion = False
- def _optimize_hook(self, jitdriver, logger, operations, type, w_arg):
+ def _optimize_hook(self, debug_info, w_arg):
space = self.space
cache = space.fromcache(Cache)
if cache.in_recursion:
return
if space.is_true(cache.w_optimize_hook):
- logops = logger._make_log_operations()
- list_w = wrap_oplist(space, logops, operations, {})
+ logops = debug_info.logger._make_log_operations()
+ list_w = wrap_oplist(space, logops, debug_info.operations)
cache.in_recursion = True
try:
try:
+ jd_name = debug_info.get_jitdriver().name
w_res = space.call_function(cache.w_optimize_hook,
- space.wrap(jitdriver.name),
- space.wrap(type),
+ space.wrap(jd_name),
+ space.wrap(debug_info.type),
w_arg,
space.newlist(list_w))
if space.is_w(w_res, space.w_None):
@@ -93,12 +95,12 @@
for w_item in space.listview(w_res):
item = space.interp_w(WrappedOp, w_item)
l.append(jit_hooks._cast_to_resop(item.op))
- del operations[:] # modifying operations above is
+ del debug_info.operations[:] # modifying operations above is
# probably not a great idea since types may not work
# and we'll end up with half-working list and
# a segfault/fatal RPython error
for elem in l:
- operations.append(elem)
+ debug_info.operations.append(elem)
except OperationError, e:
e.write_unraisable(space, "jit hook ", cache.w_compile_hook)
finally:
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -15,6 +15,7 @@
from pypy.jit.tool.oparser import parse
from pypy.jit.metainterp.typesystem import llhelper
from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG
+from pypy.rlib.jit import JitDebugInfo, AsmInfo
class MockJitDriverSD(object):
class warmstate(object):
@@ -25,6 +26,9 @@
pycode = cast_base_ptr_to_instance(PyCode, ll_code)
return pycode.co_name
+ jitdriver = pypyjitdriver
+
+
class MockSD(object):
class cpu(object):
ts = llhelper
@@ -47,7 +51,7 @@
code_gcref = lltype.cast_opaque_ptr(llmemory.GCREF, ll_code)
logger = Logger(MockSD())
- cls.origoplist = parse("""
+ oplist = parse("""
[i1, i2, p2]
i3 = int_add(i1, i2)
debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0))
@@ -56,35 +60,43 @@
""", namespace={'ptr0': code_gcref}).operations
greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)]
offset = {}
- for i, op in enumerate(cls.origoplist):
+ for i, op in enumerate(oplist):
if i != 1:
offset[op] = i
+ di_loop = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(),
+ oplist, 'loop', greenkey)
+ di_loop_optimize = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(),
+ oplist, 'loop', greenkey)
+ di_loop.asminfo = AsmInfo(offset, 0, 0)
+ di_bridge = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(),
+ oplist, 'bridge', fail_descr_no=0)
+ di_bridge.asminfo = AsmInfo(offset, 0, 0)
+
def interp_on_compile():
- pypy_hooks.after_compile(pypyjitdriver, logger, JitCellToken(),
- cls.oplist, 'loop', greenkey, offset,
- 0, 0)
+ di_loop.oplist = cls.oplist
+ pypy_hooks.after_compile(di_loop)
def interp_on_compile_bridge():
- pypy_hooks.after_compile_bridge(pypyjitdriver, logger,
- JitCellToken(), cls.oplist, 0,
- offset, 0, 0)
+ pypy_hooks.after_compile_bridge(di_bridge)
def interp_on_optimize():
- pypy_hooks.before_compile(pypyjitdriver, logger, JitCellToken(),
- cls.oplist, 'loop', greenkey)
+ di_loop_optimize.oplist = cls.oplist
+ pypy_hooks.before_compile(di_loop_optimize)
def interp_on_abort():
- pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey)
+ pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey,
+ 'blah')
cls.w_on_compile = space.wrap(interp2app(interp_on_compile))
cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge))
cls.w_on_abort = space.wrap(interp2app(interp_on_abort))
cls.w_int_add_num = space.wrap(rop.INT_ADD)
cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize))
+ cls.orig_oplist = oplist
def setup_method(self, meth):
- self.__class__.oplist = self.origoplist
+ self.__class__.oplist = self.orig_oplist[:]
def test_on_compile(self):
import pypyjit
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -779,7 +779,7 @@
of JIT running like JIT loops compiled, aborts etc.
An instance of this class will be available as policy.jithookiface.
"""
- def on_abort(self, reason, jitdriver, greenkey):
+ def on_abort(self, reason, jitdriver, greenkey, greenkey_repr):
""" A hook called each time a loop is aborted with jitdriver and
greenkey where it started, reason is a string why it got aborted
"""
From noreply at buildbot.pypy.org Mon Jan 9 23:15:58 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 23:15:58 +0100 (CET)
Subject: [pypy-commit] pypy look-into-thread: a test - look into thread
module
Message-ID: <20120109221558.8E13882110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: look-into-thread
Changeset: r51181:283df4b51997
Date: 2012-01-10 00:13 +0200
http://bitbucket.org/pypy/pypy/changeset/283df4b51997/
Log: a test - look into thread module
diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -17,7 +17,7 @@
'imp', 'sys', 'array', '_ffi', 'itertools', 'operator',
'posix', '_socket', '_sre', '_lsprof', '_weakref',
'__pypy__', 'cStringIO', '_collections', 'struct',
- 'mmap', 'marshal']:
+ 'mmap', 'marshal', 'thread']:
return True
return False
From noreply at buildbot.pypy.org Mon Jan 9 23:22:50 2012
From: noreply at buildbot.pypy.org (boemmels)
Date: Mon, 9 Jan 2012 23:22:50 +0100 (CET)
Subject: [pypy-commit] lang-scheme default: Implement not function
Message-ID: <20120109222250.9A8CE82110@wyvern.cs.uni-duesseldorf.de>
Author: Juergen Boemmels
Branch:
Changeset: r32:2f84a6d52477
Date: 2011-12-29 22:05 +0100
http://bitbucket.org/pypy/lang-scheme/changeset/2f84a6d52477/
Log: Implement not function
diff --git a/scheme/procedure.py b/scheme/procedure.py
--- a/scheme/procedure.py
+++ b/scheme/procedure.py
@@ -3,7 +3,7 @@
W_Number, W_Real, W_Integer, W_List, W_Character, W_Vector, \
Body, W_Procedure, W_String, W_Promise, plst2lst, w_undefined, \
SchemeSyntaxError, SchemeQuit, WrongArgType, WrongArgsNumber, \
- w_nil
+ w_nil, w_true, w_false
##
# operations
@@ -534,6 +534,19 @@
def predicate(self, w_obj):
return w_obj is w_nil
+class Not(W_Procedure):
+ _symbol_name = "not"
+
+ def procedure(self, ctx, lst):
+ if len(lst) != 1:
+ raise WrongArgsNumber
+
+ w_bool = lst[0]
+ if w_bool.to_boolean():
+ return w_false
+ else:
+ return w_true
+
##
# Input/Output procedures
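In Scheme, only `#f` is false: `(not x)` returns `#t` exactly when `x` is `#f`, and every other value — including `'()` and `0` — counts as true. The same semantics in plain Python (a sketch, not the RPython implementation above):

```python
SCHEME_FALSE = object()  # stand-in for Scheme's #f singleton

def scheme_not(w_obj):
    # Only #f is falsy in Scheme; '() and 0 are both true values.
    return w_obj is SCHEME_FALSE
```

This is why the tests below assert that `(not '())` and `(not 0)` are both false.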
From noreply at buildbot.pypy.org Mon Jan 9 23:22:51 2012
From: noreply at buildbot.pypy.org (boemmels)
Date: Mon, 9 Jan 2012 23:22:51 +0100 (CET)
Subject: [pypy-commit] lang-scheme default: Implement all numerical
comparisons (< <= > >=)
Message-ID: <20120109222251.A295F82110@wyvern.cs.uni-duesseldorf.de>
Author: Juergen Boemmels
Branch:
Changeset: r33:82753c10ee59
Date: 2011-12-29 22:37 +0100
http://bitbucket.org/pypy/lang-scheme/changeset/82753c10ee59/
Log: Implement all numerical comparisons (< <= > >=)
diff --git a/scheme/procedure.py b/scheme/procedure.py
--- a/scheme/procedure.py
+++ b/scheme/procedure.py
@@ -94,9 +94,7 @@
Mul = create_op_class('*', '', "Mul", 1)
Div = create_op_class('/', '1 /', "Div")
-class Equal(W_Procedure):
- _symbol_name = "="
-
+class NumberComparison(W_Procedure):
def procedure(self, ctx, lst):
if len(lst) < 2:
return W_Boolean(True)
@@ -109,12 +107,43 @@
if not isinstance(arg, W_Number):
raise WrongArgType(arg, "Number")
- if prev.to_number() != arg.to_number():
+ if not self.relation(prev.to_number(), arg.to_number()):
return W_Boolean(False)
prev = arg
return W_Boolean(True)
+class Equal(NumberComparison):
+ _symbol_name = "="
+
+ def relation(self, a, b):
+ return a == b
+
+class LessThen(NumberComparison):
+ _symbol_name = "<"
+
+ def relation(self, a, b):
+ return a < b
+
+class LessEqual(NumberComparison):
+ _symbol_name = "<="
+
+ def relation(self, a, b):
+ return a <= b
+
+class GreaterThen(NumberComparison):
+ _symbol_name = ">"
+
+ def relation(self, a, b):
+ return a > b
+
+class GreaterEqual(NumberComparison):
+ _symbol_name = ">="
+
+ def relation(self, a, b):
+ return a >= b
+
+
class List(W_Procedure):
_symbol_name = "list"
diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py
--- a/scheme/test/test_eval.py
+++ b/scheme/test/test_eval.py
@@ -199,6 +199,21 @@
py.test.raises(WrongArgType, eval_noctx, "(= 'a 1)")
+ w_bool = eval_noctx("(< 1 2 3)")
+ assert w_bool.to_boolean() is True
+
+ w_bool = eval_noctx("(< 1 3 2)")
+ assert w_bool.to_boolean() is False
+
+ w_bool = eval_noctx("(> 3 2 1)")
+ assert w_bool.to_boolean() is True
+
+ w_bool = eval_noctx("(<= 1 1 2 2 3)")
+ assert w_bool.to_boolean() is True
+
+ w_bool = eval_noctx("(>= 3 3 1)")
+ assert w_bool.to_boolean() is True
+
def test_comparison_heteronums():
w_bool = eval_noctx("(= 1 1.0 1.1)")
assert w_bool.to_boolean() is False
@@ -839,4 +854,10 @@
py.test.raises(WrongArgType, eval_, ctx, "(append 'a '())")
py.test.raises(WrongArgType, eval_, ctx, "(append 1 2 3)")
- py.test.raises(WrongArgType, eval_, ctx, "(append! (cons 1 2) '(3 4))")
\ No newline at end of file
+ py.test.raises(WrongArgType, eval_, ctx, "(append! (cons 1 2) '(3 4))")
+
+def test_not():
+ assert not eval_noctx("(not #t)").to_boolean()
+ assert eval_noctx("(not #f)").to_boolean()
+ assert not eval_noctx("(not '())").to_boolean()
+ assert not eval_noctx("(not 0)").to_boolean()
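The `NumberComparison` refactoring above factors the four orderings into one loop over adjacent pairs: `(< 1 2 3)` holds iff each neighbouring pair satisfies the relation, and fewer than two arguments is trivially true. A compact Python sketch of that pairwise check:

```python
import operator

def chained(relation, args):
    """True iff relation holds for every adjacent pair, as in (< 1 2 3)."""
    if len(args) < 2:
        return True  # mirrors the W_Boolean(True) short-circuit above
    return all(relation(a, b) for a, b in zip(args, args[1:]))
```

Each subclass then only supplies the two-argument `relation`, which is exactly what `Equal`, `LessThen`, `LessEqual`, `GreaterThen`, and `GreaterEqual` do in the diff.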
From noreply at buildbot.pypy.org Mon Jan 9 23:22:52 2012
From: noreply at buildbot.pypy.org (boemmels)
Date: Mon, 9 Jan 2012 23:22:52 +0100 (CET)
Subject: [pypy-commit] lang-scheme default: Implement Assoc*-functions
Message-ID: <20120109222252.AA20382110@wyvern.cs.uni-duesseldorf.de>
Author: Juergen Boemmels
Branch:
Changeset: r34:19a17e0790e6
Date: 2011-12-29 23:37 +0100
http://bitbucket.org/pypy/lang-scheme/changeset/19a17e0790e6/
Log: Implement Assoc*-functions
diff --git a/scheme/procedure.py b/scheme/procedure.py
--- a/scheme/procedure.py
+++ b/scheme/procedure.py
@@ -380,6 +380,52 @@
return W_String(w_char.to_string() * w_number.to_fixnum())
##
+# Association lists
+##
+class AssocX(W_Procedure):
+ def procedure(self, ctx, lst):
+ if len(lst) != 2:
+ raise WrongArgsNumber
+
+ (w_obj, w_alst) = lst
+
+ w_iter = w_alst
+ while w_iter is not w_nil:
+ if not isinstance(w_iter, W_Pair):
+ raise WrongArgType(w_alst, "AList")
+
+ w_item = w_iter.car
+
+ if not isinstance(w_item, W_Pair):
+ raise WrongArgType(w_alst, "AList")
+
+ if self.compare(w_obj, w_item.car):
+ return w_item
+
+ w_iter = w_iter.cdr
+
+ return w_false
+
+class Assq(AssocX):
+ _symbol_name = "assq"
+
+ def compare(self, w_obj1, w_obj2):
+ return w_obj1.eq(w_obj2)
+
+class Assv(AssocX):
+ _symbol_name = "assv"
+
+ def compare(self, w_obj1, w_obj2):
+ return w_obj1.eqv(w_obj2)
+
+class Assoc(AssocX):
+ _symbol_name = "assoc"
+
+ def compare(self, w_obj1, w_obj2):
+ return w_obj1.equal(w_obj2)
+
+
+##
# Equivalnece Predicates
##
class EquivalnecePredicate(W_Procedure):
diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py
--- a/scheme/test/test_eval.py
+++ b/scheme/test/test_eval.py
@@ -861,3 +861,30 @@
assert eval_noctx("(not #f)").to_boolean()
assert not eval_noctx("(not '())").to_boolean()
assert not eval_noctx("(not 0)").to_boolean()
+
+def test_assoc():
+ w_res = eval_noctx("(assq 'b '((a 1) (b 2) (c 3)))")
+ assert isinstance(w_res, W_Pair)
+ assert w_res.equal(parse_("(b 2)"))
+
+ w_res = eval_noctx("(assq 'x '((a 1) (b 2) (c 3)))")
+ assert w_res is w_false
+
+ w_res = eval_noctx("(assv (+ 1 2) '((1 a) (2 b) (3 c)))")
+ assert isinstance(w_res, W_Pair)
+ assert w_res.equal(parse_("(3 c)"))
+
+ w_res = eval_noctx("(assq (list 'a) '(((a)) ((b)) ((c))))")
+ assert w_res is w_false
+
+ w_res = eval_noctx("(assoc (list 'a) '(((a)) ((b)) ((c))))")
+ assert isinstance(w_res, W_Pair)
+ assert w_res.equal(parse_("((a))"))
+
+ w_res = eval_noctx("(assq 'a '())")
+ assert w_res is w_false
+
+ py.test.raises(WrongArgType, eval_noctx, "(assq 'a '(a b c))")
+ py.test.raises(WrongArgType, eval_noctx, "(assq 1 2)")
+ py.test.raises(WrongArgsNumber, eval_noctx, "(assq 1 '(1 2) '(3 4))")
+
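`assq`, `assv`, and `assoc` differ only in the equality predicate applied to each pair's car. A pure-Python sketch of the shared lookup shape over a list of `(key, value)` pairs — the real code walks `W_Pair` cells and distinguishes `eq`/`eqv`/`equal`, which is approximated here with identity vs. structural equality:

```python
def assoc_generic(obj, alist, compare):
    """Return the first pair whose key matches obj, or False (like #f)."""
    for pair in alist:
        if compare(obj, pair[0]):
            return pair
    return False

# assq compares by identity (eq?); assoc by structural equality (equal?).
assq  = lambda obj, alist: assoc_generic(obj, alist, lambda a, b: a is b)
assoc = lambda obj, alist: assoc_generic(obj, alist, lambda a, b: a == b)
```

This mirrors the test below where `(assq (list 'a) ...)` fails but `(assoc (list 'a) ...)` succeeds: a freshly built list is never `eq?` to the one in the alist, but it is `equal?`.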
From noreply at buildbot.pypy.org Mon Jan 9 23:22:53 2012
From: noreply at buildbot.pypy.org (boemmels)
Date: Mon, 9 Jan 2012 23:22:53 +0100 (CET)
Subject: [pypy-commit] lang-scheme default: Implement 'cadr' and friends
(all 28 versions)
Message-ID: <20120109222253.B452382110@wyvern.cs.uni-duesseldorf.de>
Author: Juergen Boemmels
Branch:
Changeset: r35:d254d5ae04cf
Date: 2012-01-09 00:47 +0100
http://bitbucket.org/pypy/lang-scheme/changeset/d254d5ae04cf/
Log: Implement 'cadr' and friends (all 28 versions)
diff --git a/scheme/procedure.py b/scheme/procedure.py
--- a/scheme/procedure.py
+++ b/scheme/procedure.py
@@ -177,6 +177,89 @@
raise WrongArgType(w_pair, "Pair")
return w_pair.cdr
+class CarCdrCombination(W_Procedure):
+ def procedure(self, ctx, lst):
+ if len(lst) != 1:
+ raise WrongArgsNumber
+ w_pair = lst[0]
+ return self.do_oper(w_pair)
+
+ def do_oper(self, w_pair):
+ raise NotImplementedError
+
+def gen_cxxxr_class(proc_name, oper_lst):
+ class Cxxxr(CarCdrCombination):
+ pass
+
+ src_block = """
+ w_iter = w_pair
+ """
+ oper_lst.reverse()
+ for oper in oper_lst:
+ src_block += """
+ if not isinstance(w_iter, W_Pair):
+ raise WrongArgType(w_iter, "Pair")
+ """
+ if oper == "car":
+ src_block += """
+ w_iter = w_iter.car
+ """
+ elif oper == "cdr":
+ src_block += """
+ w_iter = w_iter.cdr
+ """
+ else:
+ raise ValueError("oper must 'car' or 'cdr'")
+
+ src_block += """
+ return w_iter
+ """
+
+ local_locals = {}
+ attr_name = "do_oper"
+
+ code = py.code.Source(("""
+ def %s(self, w_pair):
+ from scheme.object import W_Pair, WrongArgType
+ """ % attr_name) + src_block)
+
+ exec code.compile() in local_locals
+ local_locals[attr_name]._annspecialcase_ = 'specialize:argtype(1)'
+ setattr(Cxxxr, attr_name, local_locals[attr_name])
+
+ Cxxxr._symbol_name = proc_name
+ Cxxxr.__name__ = proc_name.capitalize()
+ return Cxxxr
+
+Caar = gen_cxxxr_class("caar", ['car', 'car'])
+Cadr = gen_cxxxr_class("cadr", ['car', 'cdr'])
+Cdar = gen_cxxxr_class("cdar", ['cdr', 'car'])
+Cddr = gen_cxxxr_class("cddr", ['cdr', 'cdr'])
+Caaar = gen_cxxxr_class("caaar", ['car', 'car', 'car'])
+Caadr = gen_cxxxr_class("caadr", ['car', 'car', 'cdr'])
+Cadar = gen_cxxxr_class("cadar", ['car', 'cdr', 'car'])
+Caddr = gen_cxxxr_class("caddr", ['car', 'cdr', 'cdr'])
+Cdaar = gen_cxxxr_class("cdaar", ['cdr', 'car', 'car'])
+Cdadr = gen_cxxxr_class("cdadr", ['cdr', 'car', 'cdr'])
+Cddar = gen_cxxxr_class("cddar", ['cdr', 'cdr', 'car'])
+Cdddr = gen_cxxxr_class("cdddr", ['cdr', 'cdr', 'cdr'])
+Caaaar = gen_cxxxr_class("caaaar", ['car', 'car', 'car', 'car'])
+Caaadr = gen_cxxxr_class("caaadr", ['car', 'car', 'car', 'cdr'])
+Caadar = gen_cxxxr_class("caadar", ['car', 'car', 'cdr', 'car'])
+Caaddr = gen_cxxxr_class("caaddr", ['car', 'car', 'cdr', 'cdr'])
+Cadaar = gen_cxxxr_class("cadaar", ['car', 'cdr', 'car', 'car'])
+Cadadr = gen_cxxxr_class("cadadr", ['car', 'cdr', 'car', 'cdr'])
+Caddar = gen_cxxxr_class("caddar", ['car', 'cdr', 'cdr', 'car'])
+Cadddr = gen_cxxxr_class("cadddr", ['car', 'cdr', 'cdr', 'cdr'])
+Cdaaar = gen_cxxxr_class("cdaaar", ['cdr', 'car', 'car', 'car'])
+Cdaadr = gen_cxxxr_class("cdaadr", ['cdr', 'car', 'car', 'cdr'])
+Cdadar = gen_cxxxr_class("cdadar", ['cdr', 'car', 'cdr', 'car'])
+Cdaddr = gen_cxxxr_class("cdaddr", ['cdr', 'car', 'cdr', 'cdr'])
+Cddaar = gen_cxxxr_class("cddaar", ['cdr', 'cdr', 'car', 'car'])
+Cddadr = gen_cxxxr_class("cddadr", ['cdr', 'cdr', 'car', 'cdr'])
+Cdddar = gen_cxxxr_class("cdddar", ['cdr', 'cdr', 'cdr', 'car'])
+Cddddr = gen_cxxxr_class("cddddr", ['cdr', 'cdr', 'cdr', 'cdr'])
+
class SetCar(W_Procedure):
_symbol_name = "set-car!"
@@ -270,9 +353,11 @@
(w_procedure, w_lst) = lst
if not isinstance(w_procedure, W_Procedure):
+ #print w_procedure.to_repr(), "is not a procedure"
raise WrongArgType(w_procedure, "Procedure")
if not isinstance(w_lst, W_List):
+ #print w_lst.to_repr(), "is not a list"
raise WrongArgType(w_lst, "List")
return w_procedure.call_tr(ctx, w_lst)
diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py
--- a/scheme/test/test_eval.py
+++ b/scheme/test/test_eval.py
@@ -888,3 +888,100 @@
py.test.raises(WrongArgType, eval_noctx, "(assq 1 2)")
py.test.raises(WrongArgsNumber, eval_noctx, "(assq 1 '(1 2) '(3 4))")
+def test_cxxxr():
+ w_res = eval_noctx("(caar '((a b) c d))")
+ assert w_res.equal(parse_("a"))
+
+ w_res = eval_noctx("(cadr '((a b) c d))")
+ assert w_res.equal(parse_("c"))
+
+ w_res = eval_noctx("(cdar '((a b) c d))")
+ assert w_res.equal(parse_("(b)"))
+
+ w_res = eval_noctx("(cddr '((a b) c d))")
+ assert w_res.equal(parse_("(d)"))
+
+ w_res = eval_noctx("(caaar '(((a b) c d) (e f) g h))")
+ assert w_res.equal(parse_("a"))
+
+ w_res = eval_noctx("(caadr '(((a b) c d) (e f) g h))")
+ assert w_res.equal(parse_("e"))
+
+ w_res = eval_noctx("(cadar '(((a b) c d) (e f) g h))")
+ assert w_res.equal(parse_("c"))
+
+ w_res = eval_noctx("(caddr '(((a b) c d) (e f) g h))")
+ assert w_res.equal(parse_("g"))
+
+ w_res = eval_noctx("(cdaar '(((a b) c d) (e f) g h))")
+ assert w_res.equal(parse_("(b)"))
+
+ w_res = eval_noctx("(cdadr '(((a b) c d) (e f) g h))")
+ assert w_res.equal(parse_("(f)"))
+
+ w_res = eval_noctx("(cddar '(((a b) c d) (e f) g h))")
+ assert w_res.equal(parse_("(d)"))
+
+ w_res = eval_noctx("""(caaaar '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("a"))
+
+ w_res = eval_noctx("""(caaadr '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("i"))
+
+ w_res = eval_noctx("""(caadar '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("e"))
+
+ w_res = eval_noctx("""(caaddr '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("m"))
+
+ w_res = eval_noctx("""(cadaar '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("c"))
+
+ w_res = eval_noctx("""(cadadr '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("k"))
+
+ w_res = eval_noctx("""(caddar '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("g"))
+
+ w_res = eval_noctx("""(cadddr '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("o"))
+
+ w_res = eval_noctx("""(cdaaar '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("(b)"))
+
+ w_res = eval_noctx("""(cdaadr '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("(j)"))
+
+ w_res = eval_noctx("""(cdadar '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("(f)"))
+
+ w_res = eval_noctx("""(cdaddr '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("(n)"))
+
+ w_res = eval_noctx("""(cddaar '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("(d)"))
+
+ w_res = eval_noctx("""(cddadr '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("(l)"))
+
+ w_res = eval_noctx("""(cdddar '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("(h)"))
+
+ w_res = eval_noctx("""(cddddr '((((a b) c d) (e f) g h)
+ ((i j) k l) (m n) o p))""")
+ assert w_res.equal(parse_("(p)"))
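The generated accessors above (e.g. `gen_cxxxr_class("cdddar", ['cdr', 'cdr', 'cdr', 'car'])`) compose `car`/`cdr` steps with the rightmost letter applied first. A minimal Python sketch of that composition, using hypothetical `cons`/`make_cxr` helpers rather than the actual lang-scheme API:

```python
# Pairs as 2-tuples; None plays the role of '() (the empty list).
def cons(a, d):
    return (a, d)

def car(p):
    return p[0]

def cdr(p):
    return p[1]

def make_cxr(ops):
    # ops mirrors the letters of the name, e.g. ['car', 'cdr'] for cadr;
    # the rightmost op is applied first, matching (cadr x) = (car (cdr x))
    steps = {'car': car, 'cdr': cdr}
    def accessor(p):
        for op in reversed(ops):
            p = steps[op](p)
        return p
    return accessor

caar = make_cxr(['car', 'car'])
cadr = make_cxr(['car', 'cdr'])

# '((a b) c d) as nested pairs
lst = cons(cons('a', cons('b', None)), cons('c', cons('d', None)))
```

With this representation `caar(lst)` yields `'a'` and `cadr(lst)` yields `'c'`, matching the first two assertions in `test_cxxxr` above.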
From noreply at buildbot.pypy.org Mon Jan 9 23:22:54 2012
From: noreply at buildbot.pypy.org (boemmels)
Date: Mon, 9 Jan 2012 23:22:54 +0100 (CET)
Subject: [pypy-commit] lang-scheme default: Implement member & friends
Message-ID: <20120109222254.BD14482110@wyvern.cs.uni-duesseldorf.de>
Author: Juergen Boemmels
Branch:
Changeset: r36:a93db4dbd6b0
Date: 2012-01-09 21:33 +0100
http://bitbucket.org/pypy/lang-scheme/changeset/a93db4dbd6b0/
Log: Implement member & friends
diff --git a/scheme/procedure.py b/scheme/procedure.py
--- a/scheme/procedure.py
+++ b/scheme/procedure.py
@@ -491,6 +491,9 @@
return w_false
+ def compare(self, w_obj1, w_obj2):
+ raise NotImplementedError
+
class Assq(AssocX):
_symbol_name = "assq"
@@ -511,6 +514,49 @@
##
+# Member function
+##
+class MemX(W_Procedure):
+ def procedure(self, ctx, lst):
+ if len(lst) != 2:
+ raise WrongArgsNumber
+
+ (w_obj, w_lst) = lst
+
+ w_iter = w_lst
+ while w_iter is not w_nil:
+ if not isinstance(w_iter, W_Pair):
+ raise WrongArgType(w_lst, "List")
+
+ if self.compare(w_obj, w_iter.car):
+ return w_iter
+
+ w_iter = w_iter.cdr
+
+ return w_false
+
+ def compare(self, w_obj1, w_obj2):
+ raise NotImplementedError
+
+class Memq(MemX):
+ _symbol_name = "memq"
+
+ def compare(self, w_obj1, w_obj2):
+ return w_obj1.eq(w_obj2)
+
+class Memv(MemX):
+ _symbol_name = "memv"
+
+ def compare(self, w_obj1, w_obj2):
+ return w_obj1.eqv(w_obj2)
+
+class Member(MemX):
+ _symbol_name = "member"
+
+ def compare(self, w_obj1, w_obj2):
+ return w_obj1.equal(w_obj2)
+
+##
# Equivalence Predicates
##
class EquivalnecePredicate(W_Procedure):
diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py
--- a/scheme/test/test_eval.py
+++ b/scheme/test/test_eval.py
@@ -888,7 +888,27 @@
py.test.raises(WrongArgType, eval_noctx, "(assq 1 2)")
py.test.raises(WrongArgsNumber, eval_noctx, "(assq 1 '(1 2) '(3 4))")
-def test_cxxxr():
+def test_member():
+ w_res = eval_noctx("(memq 'a '(a b c))")
+ assert w_res.equal(parse_("(a b c)"))
+
+ w_res = eval_noctx("(memq 'b '(a b c))")
+ assert w_res.equal(parse_("(b c)"))
+
+ w_res = eval_noctx("(memq 'd '(a b c))")
+ assert w_res.eq(w_false)
+
+ w_res = eval_noctx("(memv 10 (list 11 10 9))")
+ assert w_res.equal(parse_("(10 9)"))
+
+ w_res = eval_noctx("(member '(c d) '((a b) (c d) (e f)))")
+ assert w_res.equal(parse_("((c d) (e f))"))
+
+ py.test.raises(WrongArgType, eval_noctx, "(member 1 2)")
+ py.test.raises(WrongArgsNumber, eval_noctx, "(memq 1)")
+ py.test.raises(WrongArgsNumber, eval_noctx, "(memq 1 2 3)")
+
+def test_cadadr():
w_res = eval_noctx("(caar '((a b) c d))")
assert w_res.equal(parse_("a"))
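The `MemX` hierarchy in the patch above shares one scanning loop and varies only the comparison predicate. A sketch of the same shape in Python, with lists standing in for Scheme pairs and illustrative names (Python's structural `==` plays the role of both `eqv?` and `equal?` here; `memq` would use identity):

```python
def mem_x(compare, obj, lst):
    # return the first tail of lst whose head matches obj, else False,
    # mirroring memq/memv/member returning the matching sublist or #f
    while lst:
        if compare(obj, lst[0]):
            return lst
        lst = lst[1:]
    return False

def memv(obj, lst):
    # eqv? approximated by == on atoms
    return mem_x(lambda a, b: a == b, obj, lst)

def member(obj, lst):
    # equal? is structural; Python's == on lists is already structural,
    # so memv and member coincide in this toy model
    return mem_x(lambda a, b: a == b, obj, lst)
```

For example, `memv(10, [11, 10, 9])` returns `[10, 9]` and a miss returns `False`, matching the test cases above.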
From noreply at buildbot.pypy.org Mon Jan 9 23:22:55 2012
From: noreply at buildbot.pypy.org (boemmels)
Date: Mon, 9 Jan 2012 23:22:55 +0100 (CET)
Subject: [pypy-commit] lang-scheme default: Stubbing of case,
basic tests work
Message-ID: <20120109222255.C5E8782110@wyvern.cs.uni-duesseldorf.de>
Author: Juergen Boemmels
Branch:
Changeset: r37:046b82d2ef4c
Date: 2012-01-09 22:07 +0100
http://bitbucket.org/pypy/lang-scheme/changeset/046b82d2ef4c/
Log: Stubbing of case, basic tests work
diff --git a/scheme/r5rs_derived_expr.ss b/scheme/r5rs_derived_expr.ss
--- a/scheme/r5rs_derived_expr.ss
+++ b/scheme/r5rs_derived_expr.ss
@@ -39,3 +39,18 @@
(let ((x test1))
(if x x (or test2 ...))))))
+(define-syntax case
+ (syntax-rules (else)
+;;; XXX this check does not work yet
+; ((case (key ...) clauses ...)
+; (let ((atom-key (key ...)))
+; (case atom-key clauses ...)))
+ ((case key (else expr1 expr2 ...))
+ (begin expr1 expr2 ...))
+ ((case key ((atoms ...) expr1 expr2 ...))
+ (if (memv key '(atoms ...))
+ (begin expr1 expr2 ...)))
+ ((case key ((atoms ...) expr1 expr2 ...) clause2 clause3 ...)
+ (if (memv key '(atoms ...))
+ (begin expr1 expr2 ...)
+ (case key clause2 clause3 ...)))))
diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py
--- a/scheme/test/test_eval.py
+++ b/scheme/test/test_eval.py
@@ -1005,3 +1005,20 @@
w_res = eval_noctx("""(cddddr '((((a b) c d) (e f) g h)
((i j) k l) (m n) o p))""")
assert w_res.equal(parse_("(p)"))
+
+def test_case():
+ w_res = eval_noctx("""
+ (case (* 2 3)
+ ((2 3 5 7) 'prime)
+ ((1 4 6 8 9) 'composite))
+ """)
+ assert w_res.eq(symbol("composite"))
+
+ w_res = eval_noctx("""
+ (case (car '(c d))
+ ((a e i o u) 'vowel)
+ ((w y) 'semivowel)
+ (else 'consonant))
+ """)
+ assert w_res.eq(symbol("consonant"))
+
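The `syntax-rules` definition above expands `case` into a chain of `memv` tests with `else` as a catch-all. A sketch of the resulting runtime behaviour in Python (names and clause encoding are illustrative):

```python
def scheme_case(key, clauses):
    # clauses is a list of (atoms, thunk); the atom 'else' always matches,
    # mirroring the (else expr1 expr2 ...) clause of the macro
    for atoms, thunk in clauses:
        if atoms == 'else' or key in atoms:
            return thunk()
    return None  # unspecified in R5RS when no clause matches

result = scheme_case(2 * 3, [
    ((2, 3, 5, 7), lambda: 'prime'),
    ((1, 4, 6, 8, 9), lambda: 'composite'),
])
```

This reproduces the first test above: the key `6` falls into the second clause, giving `'composite'`.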
From noreply at buildbot.pypy.org Mon Jan 9 23:22:56 2012
From: noreply at buildbot.pypy.org (boemmels)
Date: Mon, 9 Jan 2012 23:22:56 +0100 (CET)
Subject: [pypy-commit] lang-scheme default: Bugfix,
 apply evaluated its argument twice.
Message-ID: <20120109222256.CEC7082110@wyvern.cs.uni-duesseldorf.de>
Author: Juergen Boemmels
Branch:
Changeset: r38:2f31b68cba35
Date: 2012-01-09 22:39 +0100
http://bitbucket.org/pypy/lang-scheme/changeset/2f31b68cba35/
Log: Bugfix, apply evaluated its argument twice.
diff --git a/scheme/object.py b/scheme/object.py
--- a/scheme/object.py
+++ b/scheme/object.py
@@ -625,7 +625,7 @@
w_iter = w_list
while w_iter is not w_nil:
if not isinstance(w_iter, W_Pair):
- raise WrongArg(w_list, "List")
+ raise WrongArgType(w_list, "List")
lst.append(w_iter.car)
w_iter = w_iter.cdr
diff --git a/scheme/procedure.py b/scheme/procedure.py
--- a/scheme/procedure.py
+++ b/scheme/procedure.py
@@ -3,7 +3,7 @@
W_Number, W_Real, W_Integer, W_List, W_Character, W_Vector, \
Body, W_Procedure, W_String, W_Promise, plst2lst, w_undefined, \
SchemeSyntaxError, SchemeQuit, WrongArgType, WrongArgsNumber, \
- w_nil, w_true, w_false
+ w_nil, w_true, w_false, lst2plst
##
# operations
@@ -360,7 +360,7 @@
#print w_lst.to_repr(), "is not a list"
raise WrongArgType(w_lst, "List")
- return w_procedure.call_tr(ctx, w_lst)
+ return w_procedure.procedure_tr(ctx, lst2plst(w_lst))
class Quit(W_Procedure):
_symbol_name = "quit"
diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py
--- a/scheme/test/test_eval.py
+++ b/scheme/test/test_eval.py
@@ -817,6 +817,10 @@
assert w_result.to_number() == 64
assert eval_(ctx, "(apply + '())").to_number() == 0
+
+ w_result = eval_(ctx, "(apply list '((+ 2 3) (* 3 4)))")
+ assert w_result.equal(parse_("((+ 2 3) (* 3 4))"))
+
py.test.raises(WrongArgsNumber, eval_, ctx, "(apply 1)")
py.test.raises(WrongArgType, eval_, ctx, "(apply 1 '(1))")
py.test.raises(WrongArgType, eval_, ctx, "(apply + 42)")
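The bug fixed above matters because `apply`'s list argument has already been evaluated; sending it back through the evaluating call path runs each element a second time. A toy sketch of the difference, with illustrative `evaluate`/`apply_*` functions rather than the lang-scheme code:

```python
def evaluate(expr, env):
    # a tiny Lisp-ish evaluator: lists are calls, atoms are looked up
    if isinstance(expr, list):
        f, *args = [evaluate(e, env) for e in expr]
        return f(*args)
    return env.get(expr, expr)

def apply_buggy(proc, args, env):
    # re-evaluates each already-evaluated argument (the old call_tr path)
    return proc(*[evaluate(a, env) for a in args])

def apply_fixed(proc, args, env):
    # passes the arguments through untouched (the lst2plst path)
    return proc(*args)

env = {'+': lambda *xs: sum(xs), 'list': lambda *xs: [x for x in xs]}
quoted = [['+', 2, 3]]  # the data '((+ 2 3)): a list containing a list
```

Here `apply_fixed(env['list'], quoted, env)` keeps the inner list as data, returning `[['+', 2, 3]]`, while `apply_buggy` collapses it to `[5]` exactly like the regression the new test `(apply list '((+ 2 3) (* 3 4)))` guards against.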
From noreply at buildbot.pypy.org Mon Jan 9 23:30:30 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 23:30:30 +0100 (CET)
Subject: [pypy-commit] pypy default: document JIT parameters
Message-ID: <20120109223030.C809082110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51182:3c58c0bd8803
Date: 2012-01-10 00:28 +0200
http://bitbucket.org/pypy/pypy/changeset/3c58c0bd8803/
Log: document JIT parameters
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -386,6 +386,18 @@
class JitHintError(Exception):
"""Inconsistency in the JIT hints."""
+PARAMETER_DOCS = {
+ 'threshold': 'number of times a loop has to run for it to become hot',
+ 'function_threshold': 'number of times a function must run for it to become traced from start',
+ 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge',
+ 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TRACE_TOO_LONG',
+ 'inlining': 'inline python functions or not (1/0)',
+ 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate',
+ 'retrace_limit': 'how many times we can try retracing before giving up',
+ 'max_retrace_guards': 'number of extra guards a retrace can cause',
+ 'enable_opts': 'optimizations to enable, or all; INTERNAL USE ONLY'
+ }
+
PARAMETERS = {'threshold': 1039, # just above 1024, prime
'function_threshold': 1619, # slightly more than one above, also prime
'trace_eagerness': 200,
From noreply at buildbot.pypy.org Mon Jan 9 23:30:32 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 23:30:32 +0100 (CET)
Subject: [pypy-commit] pypy default: merge
Message-ID: <20120109223032.04F7F82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51183:5f4b16c8ec98
Date: 2012-01-10 00:29 +0200
http://bitbucket.org/pypy/pypy/changeset/5f4b16c8ec98/
Log: merge
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py
--- a/pypy/annotation/description.py
+++ b/pypy/annotation/description.py
@@ -257,7 +257,8 @@
try:
inputcells = args.match_signature(signature, defs_s)
except ArgErr, e:
- raise TypeError, "signature mismatch: %s" % e.getmsg(self.name)
+ raise TypeError("signature mismatch: %s() %s" %
+ (self.name, e.getmsg()))
return inputcells
def specialize(self, inputcells, op=None):
diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py
--- a/pypy/interpreter/argument.py
+++ b/pypy/interpreter/argument.py
@@ -428,8 +428,8 @@
return self._match_signature(w_firstarg,
scope_w, signature, defaults_w, 0)
except ArgErr, e:
- raise OperationError(self.space.w_TypeError,
- self.space.wrap(e.getmsg(fnname)))
+ raise operationerrfmt(self.space.w_TypeError,
+ "%s() %s", fnname, e.getmsg())
def _parse(self, w_firstarg, signature, defaults_w, blindargs=0):
"""Parse args and kwargs according to the signature of a code object,
@@ -450,8 +450,8 @@
try:
return self._parse(w_firstarg, signature, defaults_w, blindargs)
except ArgErr, e:
- raise OperationError(self.space.w_TypeError,
- self.space.wrap(e.getmsg(fnname)))
+ raise operationerrfmt(self.space.w_TypeError,
+ "%s() %s", fnname, e.getmsg())
@staticmethod
def frompacked(space, w_args=None, w_kwds=None):
@@ -626,7 +626,7 @@
class ArgErr(Exception):
- def getmsg(self, fnname):
+ def getmsg(self):
raise NotImplementedError
class ArgErrCount(ArgErr):
@@ -642,11 +642,10 @@
self.num_args = got_nargs
self.num_kwds = nkwds
- def getmsg(self, fnname):
+ def getmsg(self):
n = self.expected_nargs
if n == 0:
- msg = "%s() takes no arguments (%d given)" % (
- fnname,
+ msg = "takes no arguments (%d given)" % (
self.num_args + self.num_kwds)
else:
defcount = self.num_defaults
@@ -672,8 +671,7 @@
msg2 = " non-keyword"
else:
msg2 = ""
- msg = "%s() takes %s %d%s argument%s (%d given)" % (
- fnname,
+ msg = "takes %s %d%s argument%s (%d given)" % (
msg1,
n,
msg2,
@@ -686,9 +684,8 @@
def __init__(self, argname):
self.argname = argname
- def getmsg(self, fnname):
- msg = "%s() got multiple values for keyword argument '%s'" % (
- fnname,
+ def getmsg(self):
+ msg = "got multiple values for keyword argument '%s'" % (
self.argname)
return msg
@@ -722,13 +719,11 @@
break
self.kwd_name = name
- def getmsg(self, fnname):
+ def getmsg(self):
if self.num_kwds == 1:
- msg = "%s() got an unexpected keyword argument '%s'" % (
- fnname,
+ msg = "got an unexpected keyword argument '%s'" % (
self.kwd_name)
else:
- msg = "%s() got %d unexpected keyword arguments" % (
- fnname,
+ msg = "got %d unexpected keyword arguments" % (
self.num_kwds)
return msg
diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py
--- a/pypy/interpreter/test/test_argument.py
+++ b/pypy/interpreter/test/test_argument.py
@@ -393,8 +393,8 @@
class FakeArgErr(ArgErr):
- def getmsg(self, fname):
- return "msg "+fname
+ def getmsg(self):
+ return "msg"
def _match_signature(*args):
raise FakeArgErr()
@@ -404,7 +404,7 @@
excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo",
Signature(["a", "b"], None, None))
assert excinfo.value.w_type is TypeError
- assert excinfo.value._w_value == "msg foo"
+ assert excinfo.value.get_w_value(space) == "foo() msg"
def test_args_parsing_into_scope(self):
@@ -448,8 +448,8 @@
class FakeArgErr(ArgErr):
- def getmsg(self, fname):
- return "msg "+fname
+ def getmsg(self):
+ return "msg"
def _match_signature(*args):
raise FakeArgErr()
@@ -460,7 +460,7 @@
"obj", [None, None], "foo",
Signature(["a", "b"], None, None))
assert excinfo.value.w_type is TypeError
- assert excinfo.value._w_value == "msg foo"
+ assert excinfo.value.get_w_value(space) == "foo() msg"
def test_topacked_frompacked(self):
space = DummySpace()
@@ -493,35 +493,35 @@
# got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg,
# defaults_w, missing_args
err = ArgErrCount(1, 0, 0, False, False, None, 0)
- s = err.getmsg('foo')
- assert s == "foo() takes no arguments (1 given)"
+ s = err.getmsg()
+ assert s == "takes no arguments (1 given)"
err = ArgErrCount(0, 0, 1, False, False, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 argument (0 given)"
err = ArgErrCount(3, 0, 2, False, False, [], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 2 arguments (3 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 2 arguments (3 given)"
err = ArgErrCount(3, 0, 2, False, False, ['a'], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes at most 2 arguments (3 given)"
+ s = err.getmsg()
+ assert s == "takes at most 2 arguments (3 given)"
err = ArgErrCount(1, 0, 2, True, False, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 2 arguments (1 given)"
+ s = err.getmsg()
+ assert s == "takes at least 2 arguments (1 given)"
err = ArgErrCount(0, 1, 2, True, False, ['a'], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes at least 1 non-keyword argument (0 given)"
err = ArgErrCount(2, 1, 1, False, True, [], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 non-keyword argument (2 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 non-keyword argument (2 given)"
err = ArgErrCount(0, 1, 1, False, True, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 non-keyword argument (0 given)"
err = ArgErrCount(0, 1, 1, True, True, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes at least 1 non-keyword argument (0 given)"
err = ArgErrCount(2, 1, 1, False, True, ['a'], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes at most 1 non-keyword argument (2 given)"
+ s = err.getmsg()
+ assert s == "takes at most 1 non-keyword argument (2 given)"
def test_bad_type_for_star(self):
space = self.space
@@ -543,12 +543,12 @@
def test_unknown_keywords(self):
space = DummySpace()
err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None)
- s = err.getmsg('foo')
- assert s == "foo() got an unexpected keyword argument 'b'"
+ s = err.getmsg()
+ assert s == "got an unexpected keyword argument 'b'"
err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'],
[True, False, False], None)
- s = err.getmsg('foo')
- assert s == "foo() got 2 unexpected keyword arguments"
+ s = err.getmsg()
+ assert s == "got 2 unexpected keyword arguments"
def test_unknown_unicode_keyword(self):
class DummySpaceUnicode(DummySpace):
@@ -558,13 +558,13 @@
err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'],
[True, False, True, True],
[unichr(0x1234), u'b', u'c'])
- s = err.getmsg('foo')
- assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'"
+ s = err.getmsg()
+ assert s == "got an unexpected keyword argument '\xe1\x88\xb4'"
def test_multiple_values(self):
err = ArgErrMultipleValues('bla')
- s = err.getmsg('foo')
- assert s == "foo() got multiple values for keyword argument 'bla'"
+ s = err.getmsg()
+ assert s == "got multiple values for keyword argument 'bla'"
class AppTestArgument:
def test_error_message(self):
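The refactor above moves the function name out of the `ArgErr` subclasses: `getmsg()` now returns a name-free message and the caller prepends `"name() "` via `operationerrfmt`. A simplified sketch of the resulting split of responsibilities (class and function names here are stand-ins):

```python
class ArgErr(Exception):
    def getmsg(self):
        raise NotImplementedError

class ArgErrNoArgs(ArgErr):
    # one concrete error: wrong argument count for a zero-arg function
    def __init__(self, given):
        self.given = given

    def getmsg(self):
        # note: no function name here any more
        return "takes no arguments (%d given)" % self.given

def format_type_error(fnname, err):
    # mirrors operationerrfmt(space.w_TypeError, "%s() %s", fnname, e.getmsg())
    return "%s() %s" % (fnname, err.getmsg())
```

So `format_type_error('foo', ArgErrNoArgs(1))` produces `"foo() takes no arguments (1 given)"`, the same shape the updated tests assert.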
From noreply at buildbot.pypy.org Mon Jan 9 23:35:15 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 23:35:15 +0100 (CET)
Subject: [pypy-commit] pypy default: include some actually useful info in
--help
Message-ID: <20120109223515.9A57982110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51184:d16e4f017733
Date: 2012-01-10 00:34 +0200
http://bitbucket.org/pypy/pypy/changeset/d16e4f017733/
Log: include some actually useful info in --help
diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py
--- a/pypy/translator/goal/app_main.py
+++ b/pypy/translator/goal/app_main.py
@@ -139,8 +139,8 @@
items = pypyjit.defaults.items()
items.sort()
for key, value in items:
- print ' --jit %s=N %slow-level JIT parameter (default %s)' % (
- key, ' '*(18-len(key)), value)
+ print ' --jit %s=N %s%s (default %s)' % (
+ key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value)
print ' --jit off turn off the JIT'
def print_version(*args):
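The `--help` change above pairs each JIT parameter with its `PARAMETER_DOCS` entry instead of the generic "low-level JIT parameter" text. A sketch of the formatting loop with trimmed stand-in dicts (the real code reads `pypyjit.defaults` and `pypyjit.PARAMETER_DOCS`):

```python
# stand-ins for pypyjit.PARAMETER_DOCS / pypyjit.defaults
param_docs = {
    'threshold': 'number of times a loop has to run for it to become hot',
    'inlining': 'inline python functions or not (1/0)',
}
defaults = {'threshold': 1039, 'inlining': 1}

lines = []
for key in sorted(defaults):
    # pad the key to a fixed column, then the doc string and default value
    lines.append(' --jit %s=N %s%s (default %s)' % (
        key, ' ' * (18 - len(key)), param_docs[key], defaults[key]))
```

Each line now reads like ` --jit threshold=N  number of times a loop has to run ... (default 1039)`, which is the "actually useful info" the commit message refers to.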
From noreply at buildbot.pypy.org Mon Jan 9 23:38:43 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Mon, 9 Jan 2012 23:38:43 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: merged default
Message-ID: <20120109223843.9A61E82110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51185:9e69f381ba7e
Date: 2012-01-09 16:34 -0600
http://bitbucket.org/pypy/pypy/changeset/9e69f381ba7e/
Log: merged default
diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py
--- a/lib_pypy/numpypy/__init__.py
+++ b/lib_pypy/numpypy/__init__.py
@@ -1,1 +1,2 @@
from _numpypy import *
+from fromnumeric import *
diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/fromnumeric.py
@@ -0,0 +1,2400 @@
+######################################################################
+# This is a copy of numpy/core/fromnumeric.py modified for numpypy
+######################################################################
+# Each name in __all__ was a function in 'numeric' that is now
+# a method in 'numpy'.
+# When the corresponding method is added to numpypy BaseArray
+# each function should be added as a module function
+# at the applevel
+# This can be as simple as doing the following
+#
+# def func(a, ...):
+# if not hasattr(a, 'func')
+# a = numpypy.array(a)
+# return a.func(...)
+#
+######################################################################
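The pattern this header describes (a module-level function that coerces plain sequences and forwards to the array method) can be sketched generically. `make_module_function` and `Toy` are illustrative stand-ins; the real module coerces with `numpypy.array` and forwards to `BaseArray` methods:

```python
def make_module_function(name, coerce):
    # wrap method `name` as a module-level function, coercing inputs
    # that lack the method (numpypy.array in the real module)
    def func(a, *args):
        if not hasattr(a, name):
            a = coerce(a)
        return getattr(a, name)(*args)
    return func

class Toy(object):
    # minimal stand-in for an array type that carries the methods
    def __init__(self, data):
        self.data = data

    def total(self):
        return sum(self.data)

total = make_module_function('total', Toy)
```

Usage: `total([1, 2, 3])` coerces the plain list to a `Toy` first, while `total(Toy([4, 5]))` forwards directly, just as `reshape` below coerces with `numpypy.array` only when needed.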
+
+import numpypy
+
+# Module containing non-deprecated functions borrowed from Numeric.
+__docformat__ = "restructuredtext en"
+
+# functions that are now methods
+__all__ = ['take', 'reshape', 'choose', 'repeat', 'put',
+ 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin',
+ 'searchsorted', 'alen',
+ 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape',
+ 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue',
+ 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim',
+ 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze',
+ 'amax', 'amin',
+ ]
+
+def take(a, indices, axis=None, out=None, mode='raise'):
+ """
+ Take elements from an array along an axis.
+
+ This function does the same thing as "fancy" indexing (indexing arrays
+ using arrays); however, it can be easier to use if you need elements
+ along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ The source array.
+ indices : array_like
+ The indices of the values to extract.
+ axis : int, optional
+ The axis over which to select values. By default, the flattened
+ input array is used.
+ out : ndarray, optional
+ If provided, the result will be placed in this array. It should
+ be of the appropriate shape and dtype.
+ mode : {'raise', 'wrap', 'clip'}, optional
+ Specifies how out-of-bounds indices will behave.
+
+ * 'raise' -- raise an error (default)
+ * 'wrap' -- wrap around
+ * 'clip' -- clip to the range
+
+ 'clip' mode means that all indices that are too large are replaced
+ by the index that addresses the last element along that axis. Note
+ that this disables indexing with negative numbers.
+
+ Returns
+ -------
+ subarray : ndarray
+ The returned array has the same type as `a`.
+
+ See Also
+ --------
+ ndarray.take : equivalent method
+
+ Examples
+ --------
+ >>> a = [4, 3, 5, 7, 6, 8]
+ >>> indices = [0, 1, 4]
+ >>> np.take(a, indices)
+ array([4, 3, 6])
+
+ In this example if `a` is an ndarray, "fancy" indexing can be used.
+
+ >>> a = np.array(a)
+ >>> a[indices]
+ array([4, 3, 6])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+# not deprecated --- copy if necessary, view otherwise
+def reshape(a, newshape, order='C'):
+ """
+ Gives a new shape to an array without changing its data.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be reshaped.
+ newshape : int or tuple of ints
+ The new shape should be compatible with the original shape. If
+ an integer, then the result will be a 1-D array of that length.
+ One shape dimension can be -1. In this case, the value is inferred
+ from the length of the array and remaining dimensions.
+ order : {'C', 'F', 'A'}, optional
+ Determines whether the array data should be viewed as in C
+ (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN
+ order should be preserved.
+
+ Returns
+ -------
+ reshaped_array : ndarray
+ This will be a new view object if possible; otherwise, it will
+ be a copy.
+
+
+ See Also
+ --------
+ ndarray.reshape : Equivalent method.
+
+ Notes
+ -----
+
+ It is not always possible to change the shape of an array without
+ copying the data. If you want an error to be raised if the data is copied,
+ you should assign the new shape to the shape attribute of the array::
+
+ >>> a = np.zeros((10, 2))
+ # A transpose make the array non-contiguous
+ >>> b = a.T
+ # Taking a view makes it possible to modify the shape without modifying the
+ # initial object.
+ >>> c = b.view()
+ >>> c.shape = (20)
+ AttributeError: incompatible shape for a non-contiguous array
+
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> np.reshape(a, 6)
+ array([1, 2, 3, 4, 5, 6])
+ >>> np.reshape(a, 6, order='F')
+ array([1, 4, 2, 5, 3, 6])
+
+ >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2
+ array([[1, 2],
+ [3, 4],
+ [5, 6]])
+
+ """
+ if not hasattr(a, 'reshape'):
+ a = numpypy.array(a)
+ return a.reshape(newshape)
+
+
+def choose(a, choices, out=None, mode='raise'):
+ """
+ Construct an array from an index array and a set of arrays to choose from.
+
+ First of all, if confused or uncertain, definitely look at the Examples -
+ in its full generality, this function is less simple than it might
+ seem from the following code description (below ndi =
+ `numpy.lib.index_tricks`):
+
+ ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``.
+
+ But this omits some subtleties. Here is a fully general summary:
+
+ Given an "index" array (`a`) of integers and a sequence of `n` arrays
+ (`choices`), `a` and each choice array are first broadcast, as necessary,
+ to arrays of a common shape; calling these *Ba* and *Bchoices[i], i =
+ 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape``
+ for each `i`. Then, a new array with shape ``Ba.shape`` is created as
+ follows:
+
+ * if ``mode=raise`` (the default), then, first of all, each element of
+ `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that
+ `i` (in that range) is the value at the `(j0, j1, ..., jm)` position
+ in `Ba` - then the value at the same position in the new array is the
+ value in `Bchoices[i]` at that same position;
+
+ * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed)
+ integer; modular arithmetic is used to map integers outside the range
+ `[0, n-1]` back into that range; and then the new array is constructed
+ as above;
+
+ * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed)
+ integer; negative integers are mapped to 0; values greater than `n-1`
+ are mapped to `n-1`; and then the new array is constructed as above.
+
+ Parameters
+ ----------
+ a : int array
+ This array must contain integers in `[0, n-1]`, where `n` is the number
+ of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any
+ integers are permissible.
+ choices : sequence of arrays
+ Choice arrays. `a` and all of the choices must be broadcastable to the
+ same shape. If `choices` is itself an array (not recommended), then
+ its outermost dimension (i.e., the one corresponding to
+ ``choices.shape[0]``) is taken as defining the "sequence".
+ out : array, optional
+ If provided, the result will be inserted into this array. It should
+ be of the appropriate shape and dtype.
+ mode : {'raise' (default), 'wrap', 'clip'}, optional
+ Specifies how indices outside `[0, n-1]` will be treated:
+
+ * 'raise' : an exception is raised
+ * 'wrap' : value becomes value mod `n`
+ * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1
+
+ Returns
+ -------
+ merged_array : array
+ The merged result.
+
+ Raises
+ ------
+ ValueError: shape mismatch
+ If `a` and each choice array are not all broadcastable to the same
+ shape.
+
+ See Also
+ --------
+ ndarray.choose : equivalent method
+
+ Notes
+ -----
+ To reduce the chance of misinterpretation, even though the following
+ "abuse" is nominally supported, `choices` should neither be, nor be
+ thought of as, a single array, i.e., the outermost sequence-like container
+ should be either a list or a tuple.
+
+ Examples
+ --------
+
+ >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13],
+ ... [20, 21, 22, 23], [30, 31, 32, 33]]
+ >>> np.choose([2, 3, 1, 0], choices
+ ... # the first element of the result will be the first element of the
+ ... # third (2+1) "array" in choices, namely, 20; the second element
+ ... # will be the second element of the fourth (3+1) choice array, i.e.,
+ ... # 31, etc.
+ ... )
+ array([20, 31, 12, 3])
+ >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1)
+ array([20, 31, 12, 3])
+ >>> # because there are 4 choice arrays
+ >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4)
+ array([20, 1, 12, 3])
+ >>> # i.e., 0
+
+ A couple examples illustrating how choose broadcasts:
+
+ >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
+ >>> choices = [-10, 10]
+ >>> np.choose(a, choices)
+ array([[ 10, -10, 10],
+ [-10, 10, -10],
+ [ 10, -10, 10]])
+
+ >>> # With thanks to Anne Archibald
+ >>> a = np.array([0, 1]).reshape((2,1,1))
+ >>> c1 = np.array([1, 2, 3]).reshape((1,3,1))
+ >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5))
+ >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2
+ array([[[ 1, 1, 1, 1, 1],
+ [ 2, 2, 2, 2, 2],
+ [ 3, 3, 3, 3, 3]],
+ [[-1, -2, -3, -4, -5],
+ [-1, -2, -3, -4, -5],
+ [-1, -2, -3, -4, -5]]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def repeat(a, repeats, axis=None):
+ """
+ Repeat elements of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ repeats : {int, array of ints}
+ The number of repetitions for each element. `repeats` is broadcasted
+ to fit the shape of the given axis.
+ axis : int, optional
+ The axis along which to repeat values. By default, use the
+ flattened input array, and return a flat output array.
+
+ Returns
+ -------
+ repeated_array : ndarray
+ Output array which has the same shape as `a`, except along
+ the given axis.
+
+ See Also
+ --------
+ tile : Tile an array.
+
+ Examples
+ --------
+ >>> x = np.array([[1,2],[3,4]])
+ >>> np.repeat(x, 2)
+ array([1, 1, 2, 2, 3, 3, 4, 4])
+ >>> np.repeat(x, 3, axis=1)
+ array([[1, 1, 1, 2, 2, 2],
+ [3, 3, 3, 4, 4, 4]])
+ >>> np.repeat(x, [1, 2], axis=0)
+ array([[1, 2],
+ [3, 4],
+ [3, 4]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def put(a, ind, v, mode='raise'):
+ """
+ Replaces specified elements of an array with given values.
+
+ The indexing works on the flattened target array. `put` is roughly
+ equivalent to:
+
+ ::
+
+ a.flat[ind] = v
+
+ Parameters
+ ----------
+ a : ndarray
+ Target array.
+ ind : array_like
+ Target indices, interpreted as integers.
+ v : array_like
+ Values to place in `a` at target indices. If `v` is shorter than
+ `ind` it will be repeated as necessary.
+ mode : {'raise', 'wrap', 'clip'}, optional
+ Specifies how out-of-bounds indices will behave.
+
+ * 'raise' -- raise an error (default)
+ * 'wrap' -- wrap around
+ * 'clip' -- clip to the range
+
+ 'clip' mode means that all indices that are too large are replaced
+ by the index that addresses the last element along that axis. Note
+ that this disables indexing with negative numbers.
+
+ See Also
+ --------
+ putmask, place
+
+ Examples
+ --------
+ >>> a = np.arange(5)
+ >>> np.put(a, [0, 2], [-44, -55])
+ >>> a
+ array([-44, 1, -55, 3, 4])
+
+ >>> a = np.arange(5)
+ >>> np.put(a, 22, -5, mode='clip')
+ >>> a
+ array([ 0, 1, 2, 3, -5])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def swapaxes(a, axis1, axis2):
+ """
+ Interchange two axes of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis1 : int
+ First axis.
+ axis2 : int
+ Second axis.
+
+ Returns
+ -------
+ a_swapped : ndarray
+ If `a` is an ndarray, then a view of `a` is returned; otherwise
+ a new array is created.
+
+ Examples
+ --------
+ >>> x = np.array([[1,2,3]])
+ >>> np.swapaxes(x,0,1)
+ array([[1],
+ [2],
+ [3]])
+
+ >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]])
+ >>> x
+ array([[[0, 1],
+ [2, 3]],
+ [[4, 5],
+ [6, 7]]])
+
+ >>> np.swapaxes(x,0,2)
+ array([[[0, 4],
+ [2, 6]],
+ [[1, 5],
+ [3, 7]]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def transpose(a, axes=None):
+ """
+ Permute the dimensions of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axes : list of ints, optional
+ By default, reverse the dimensions, otherwise permute the axes
+ according to the values given.
+
+ Returns
+ -------
+ p : ndarray
+ `a` with its axes permuted. A view is returned whenever
+ possible.
+
+ See Also
+ --------
+ rollaxis
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.transpose(x)
+ array([[0, 2],
+ [1, 3]])
+
+ >>> x = np.ones((1, 2, 3))
+ >>> np.transpose(x, (1, 0, 2)).shape
+ (2, 1, 3)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sort(a, axis=-1, kind='quicksort', order=None):
+ """
+ Return a sorted copy of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be sorted.
+ axis : int or None, optional
+ Axis along which to sort. If None, the array is flattened before
+ sorting. The default is -1, which sorts along the last axis.
+ kind : {'quicksort', 'mergesort', 'heapsort'}, optional
+ Sorting algorithm. Default is 'quicksort'.
+ order : list, optional
+ When `a` is a structured array, this argument specifies which fields
+ to compare first, second, and so on. This list does not need to
+ include all of the fields.
+
+ Returns
+ -------
+ sorted_array : ndarray
+ Array of the same type and shape as `a`.
+
+ See Also
+ --------
+ ndarray.sort : Method to sort an array in-place.
+ argsort : Indirect sort.
+ lexsort : Indirect stable sort on multiple keys.
+ searchsorted : Find elements in a sorted array.
+
+ Notes
+ -----
+ The various sorting algorithms are characterized by their average speed,
+ worst case performance, work space size, and whether they are stable. A
+ stable sort keeps items with the same key in the same relative
+ order. The three available algorithms have the following
+ properties:
+
+ =========== ======= ============= ============ =======
+ kind speed worst case work space stable
+ =========== ======= ============= ============ =======
+ 'quicksort' 1 O(n^2) 0 no
+ 'mergesort' 2 O(n*log(n)) ~n/2 yes
+ 'heapsort' 3 O(n*log(n)) 0 no
+ =========== ======= ============= ============ =======
+
+ All the sort algorithms make temporary copies of the data when
+ sorting along any but the last axis. Consequently, sorting along
+ the last axis is faster and uses less space than sorting along
+ any other axis.
+
+ The sort order for complex numbers is lexicographic. If both the real
+ and imaginary parts are non-nan then the order is determined by the
+ real parts except when they are equal, in which case the order is
+ determined by the imaginary parts.
+
+ Previous to numpy 1.4.0 sorting real and complex arrays containing nan
+ values led to undefined behaviour. In numpy versions >= 1.4.0 nan
+ values are sorted to the end. The extended sort order is:
+
+ * Real: [R, nan]
+ * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj]
+
+ where R is a non-nan real value. Complex values with the same nan
+ placements are sorted according to the non-nan part if it exists.
+ Non-nan values are sorted as before.
+
+ Examples
+ --------
+ >>> a = np.array([[1,4],[3,1]])
+ >>> np.sort(a) # sort along the last axis
+ array([[1, 4],
+ [1, 3]])
+ >>> np.sort(a, axis=None) # sort the flattened array
+ array([1, 1, 3, 4])
+ >>> np.sort(a, axis=0) # sort along the first axis
+ array([[1, 1],
+ [3, 4]])
+
+ Use the `order` keyword to specify a field to use when sorting a
+ structured array:
+
+ >>> dtype = [('name', 'S10'), ('height', float), ('age', int)]
+ >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38),
+ ... ('Galahad', 1.7, 38)]
+ >>> a = np.array(values, dtype=dtype) # create a structured array
+ >>> np.sort(a, order='height') # doctest: +SKIP
+ array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41),
+ ('Lancelot', 1.8999999999999999, 38)],
+ dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
+
+ Sort by age, then height if ages are equal:
+
+ >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP
+ array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38),
+ ('Arthur', 1.8, 41)],
+ dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def argsort(a, axis=-1, kind='quicksort', order=None):
+ """
+ Returns the indices that would sort an array.
+
+ Perform an indirect sort along the given axis using the algorithm
+ specified by the `kind` keyword. It returns an array of indices of the
+ same shape as `a` that index data along the given axis in sorted order.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to sort.
+ axis : int or None, optional
+ Axis along which to sort. The default is -1 (the last axis). If
+ None, the flattened array is used.
+ kind : {'quicksort', 'mergesort', 'heapsort'}, optional
+ Sorting algorithm.
+ order : list, optional
+ When `a` is an array with fields defined, this argument specifies
+ which fields to compare first, second, etc. Not all fields need be
+ specified.
+
+ Returns
+ -------
+ index_array : ndarray, int
+ Array of indices that sort `a` along the specified axis.
+ In other words, ``a[index_array]`` yields a sorted `a`.
+
+ See Also
+ --------
+ sort : Describes sorting algorithms used.
+ lexsort : Indirect stable sort with multiple keys.
+ ndarray.sort : Inplace sort.
+
+ Notes
+ -----
+ See `sort` for notes on the different sorting algorithms.
+
+ Examples
+ --------
+ One dimensional array:
+
+ >>> x = np.array([3, 1, 2])
+ >>> np.argsort(x)
+ array([1, 2, 0])
+
+ Two-dimensional array:
+
+ >>> x = np.array([[0, 3], [2, 2]])
+ >>> x
+ array([[0, 3],
+ [2, 2]])
+
+ >>> np.argsort(x, axis=0)
+ array([[0, 1],
+ [1, 0]])
+
+ >>> np.argsort(x, axis=1)
+ array([[0, 1],
+ [0, 1]])
+
+ Sorting with keys:
+
+ >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
+ >>> x
+ array([(1, 0), (0, 1)],
+ dtype=[('x', '<i4'), ('y', '<i4')])
+
+ >>> np.argsort(x, order=('x','y'))
+ array([1, 0])
+
+ >>> np.argsort(x, order=('y','x'))
+ array([0, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def argmax(a, axis=None):
+ """
+ Indices of the maximum values along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ By default, the index is into the flattened array, otherwise
+ along the specified axis.
+
+ Returns
+ -------
+ index_array : ndarray of ints
+ Array of indices into the array. It has the same shape as `a.shape`
+ with the dimension along `axis` removed.
+
+ See Also
+ --------
+ ndarray.argmax, argmin
+ amax : The maximum value along a given axis.
+ unravel_index : Convert a flat index into an index tuple.
+
+ Notes
+ -----
+ In case of multiple occurrences of the maximum values, the indices
+ corresponding to the first occurrence are returned.
+
+ Examples
+ --------
+ >>> a = np.arange(6).reshape(2,3)
+ >>> a
+ array([[0, 1, 2],
+ [3, 4, 5]])
+ >>> np.argmax(a)
+ 5
+ >>> np.argmax(a, axis=0)
+ array([1, 1, 1])
+ >>> np.argmax(a, axis=1)
+ array([2, 2])
+
+ >>> b = np.arange(6)
+ >>> b[1] = 5
+ >>> b
+ array([0, 5, 2, 3, 4, 5])
+ >>> np.argmax(b) # Only the first occurrence is returned.
+ 1
+
+ """
+ if axis is not None:
+ raise NotImplementedError('axis argument is not supported yet')
+ if not hasattr(a, 'argmax'):
+ a = numpypy.array(a)
+ return a.argmax()
+
+
+def argmin(a, axis=None):
+ """
+ Return the indices of the minimum values along an axis.
+
+ See Also
+ --------
+ argmax : Similar function. Please refer to `numpy.argmax` for detailed
+ documentation.
+
+ """
+ if axis is not None:
+ raise NotImplementedError('axis argument is not supported yet')
+ if not hasattr(a, 'argmin'):
+ a = numpypy.array(a)
+ return a.argmin()
+
+
+def searchsorted(a, v, side='left'):
+ """
+ Find indices where elements should be inserted to maintain order.
+
+ Find the indices into a sorted array `a` such that, if the corresponding
+ elements in `v` were inserted before the indices, the order of `a` would
+ be preserved.
+
+ Parameters
+ ----------
+ a : 1-D array_like
+ Input array, sorted in ascending order.
+ v : array_like
+ Values to insert into `a`.
+ side : {'left', 'right'}, optional
+ If 'left', the index of the first suitable location found is given. If
+ 'right', return the last such index. If there is no suitable
+ index, return either 0 or N (where N is the length of `a`).
+
+ Returns
+ -------
+ indices : array of ints
+ Array of insertion points with the same shape as `v`.
+
+ See Also
+ --------
+ sort : Return a sorted copy of an array.
+ histogram : Produce histogram from 1-D data.
+
+ Notes
+ -----
+ Binary search is used to find the required insertion points.
+
+ As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing
+ `nan` values. The enhanced sort order is documented in `sort`.
+
+ Examples
+ --------
+ >>> np.searchsorted([1,2,3,4,5], 3)
+ 2
+ >>> np.searchsorted([1,2,3,4,5], 3, side='right')
+ 3
+ >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3])
+ array([0, 5, 1, 2])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
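The binary search mentioned in the Notes above can be sketched in plain Python with the standard-library `bisect` module; `searchsorted_sketch` is a hypothetical helper for illustration, not part of this patch:

```python
from bisect import bisect_left, bisect_right

def searchsorted_sketch(a, v, side='left'):
    # Find, for each value in v, the insertion point in the sorted
    # list a that keeps a sorted; 'left' gives the first suitable
    # index, 'right' the last.
    find = bisect_left if side == 'left' else bisect_right
    if isinstance(v, (list, tuple)):
        return [find(a, x) for x in v]
    return find(a, v)

print(searchsorted_sketch([1, 2, 3, 4, 5], 3))                # 2
print(searchsorted_sketch([1, 2, 3, 4, 5], 3, side='right'))  # 3
print(searchsorted_sketch([1, 2, 3, 4, 5], [-10, 10, 2, 3]))  # [0, 5, 1, 2]
```

The printed values match the docstring examples above.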
+def resize(a, new_shape):
+ """
+ Return a new array with the specified shape.
+
+ If the new array is larger than the original array, then the new
+ array is filled with repeated copies of `a`. Note that this behavior
+ is different from a.resize(new_shape) which fills with zeros instead
+ of repeated copies of `a`.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be resized.
+
+ new_shape : int or tuple of int
+ Shape of resized array.
+
+ Returns
+ -------
+ reshaped_array : ndarray
+ The new array is formed from the data in the old array, repeated
+ if necessary to fill out the required number of elements. The
+ data are repeated in the order that they are stored in memory.
+
+ See Also
+ --------
+ ndarray.resize : resize an array in-place.
+
+ Examples
+ --------
+ >>> a=np.array([[0,1],[2,3]])
+ >>> np.resize(a,(1,4))
+ array([[0, 1, 2, 3]])
+ >>> np.resize(a,(2,4))
+ array([[0, 1, 2, 3],
+ [0, 1, 2, 3]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
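The repeat-to-fill behaviour documented above can be sketched for the 1-D case in plain Python; `resize_sketch` is a hypothetical helper for illustration, not part of this patch:

```python
def resize_sketch(seq, n):
    # 1-D sketch: cycle through the data in storage order until the
    # requested number of elements is filled.
    return [seq[i % len(seq)] for i in range(n)]

print(resize_sketch([0, 1, 2, 3], 8))  # [0, 1, 2, 3, 0, 1, 2, 3]
print(resize_sketch([0, 1, 2, 3], 3))  # [0, 1, 2]
```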
+def squeeze(a):
+ """
+ Remove single-dimensional entries from the shape of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+
+ Returns
+ -------
+ squeezed : ndarray
+ The input array, but with all dimensions of length 1
+ removed. Whenever possible, a view on `a` is returned.
+
+ Examples
+ --------
+ >>> x = np.array([[[0], [1], [2]]])
+ >>> x.shape
+ (1, 3, 1)
+ >>> np.squeeze(x).shape
+ (3,)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def diagonal(a, offset=0, axis1=0, axis2=1):
+ """
+ Return specified diagonals.
+
+ If `a` is 2-D, returns the diagonal of `a` with the given offset,
+ i.e., the collection of elements of the form ``a[i, i+offset]``. If
+ `a` has more than two dimensions, then the axes specified by `axis1`
+ and `axis2` are used to determine the 2-D sub-array whose diagonal is
+ returned. The shape of the resulting array can be determined by
+ removing `axis1` and `axis2` and appending an index to the right equal
+ to the size of the resulting diagonals.
+
+ Parameters
+ ----------
+ a : array_like
+ Array from which the diagonals are taken.
+ offset : int, optional
+ Offset of the diagonal from the main diagonal. Can be positive or
+ negative. Defaults to main diagonal (0).
+ axis1 : int, optional
+ Axis to be used as the first axis of the 2-D sub-arrays from which
+ the diagonals should be taken. Defaults to first axis (0).
+ axis2 : int, optional
+ Axis to be used as the second axis of the 2-D sub-arrays from
+ which the diagonals should be taken. Defaults to second axis (1).
+
+ Returns
+ -------
+ array_of_diagonals : ndarray
+ If `a` is 2-D, a 1-D array containing the diagonal is returned.
+ If the dimension of `a` is larger, then an array of diagonals is
+ returned, "packed" from left-most dimension to right-most (e.g.,
+ if `a` is 3-D, then the diagonals are "packed" along rows).
+
+ Raises
+ ------
+ ValueError
+ If the dimension of `a` is less than 2.
+
+ See Also
+ --------
+ diag : MATLAB work-a-like for 1-D and 2-D arrays.
+ diagflat : Create diagonal arrays.
+ trace : Sum along diagonals.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape(2,2)
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> a.diagonal()
+ array([0, 3])
+ >>> a.diagonal(1)
+ array([1])
+
+ A 3-D example:
+
+ >>> a = np.arange(8).reshape(2,2,2); a
+ array([[[0, 1],
+ [2, 3]],
+ [[4, 5],
+ [6, 7]]])
+ >>> a.diagonal(0, # Main diagonals of two arrays created by skipping
+ ... 0, # across the outer(left)-most axis last and
+ ... 1) # the "middle" (row) axis first.
+ array([[0, 6],
+ [1, 7]])
+
+ The sub-arrays whose main diagonals we just obtained; note that each
+ corresponds to fixing the right-most (column) axis, and that the
+ diagonals are "packed" in rows.
+
+ >>> a[:,:,0] # main diagonal is [0 6]
+ array([[0, 2],
+ [4, 6]])
+ >>> a[:,:,1] # main diagonal is [1 7]
+ array([[1, 3],
+ [5, 7]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None):
+ """
+ Return the sum along diagonals of the array.
+
+ If `a` is 2-D, the sum along its diagonal with the given offset
+ is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i.
+
+ If `a` has more than two dimensions, then the axes specified by axis1 and
+ axis2 are used to determine the 2-D sub-arrays whose traces are returned.
+ The shape of the resulting array is the same as that of `a` with `axis1`
+ and `axis2` removed.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array, from which the diagonals are taken.
+ offset : int, optional
+ Offset of the diagonal from the main diagonal. Can be both positive
+ and negative. Defaults to 0.
+ axis1, axis2 : int, optional
+ Axes to be used as the first and second axis of the 2-D sub-arrays
+ from which the diagonals should be taken. Defaults are the first two
+ axes of `a`.
+ dtype : dtype, optional
+ Determines the data-type of the returned array and of the accumulator
+ where the elements are summed. If dtype has the value None and `a` is
+ of integer type of precision less than the default integer
+ precision, then the default integer precision is used. Otherwise,
+ the precision is the same as that of `a`.
+ out : ndarray, optional
+ Array into which the output is placed. Its type is preserved and
+ it must be of the right shape to hold the output.
+
+ Returns
+ -------
+ sum_along_diagonals : ndarray
+ If `a` is 2-D, the sum along the diagonal is returned. If `a` has
+ larger dimensions, then an array of sums along diagonals is returned.
+
+ See Also
+ --------
+ diag, diagonal, diagflat
+
+ Examples
+ --------
+ >>> np.trace(np.eye(3))
+ 3.0
+ >>> a = np.arange(8).reshape((2,2,2))
+ >>> np.trace(a)
+ array([6, 8])
+
+ >>> a = np.arange(24).reshape((2,2,2,3))
+ >>> np.trace(a).shape
+ (2, 3)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
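The 2-D case documented above (sum of ``a[i, i+offset]``) can be sketched in plain Python on lists of lists; `trace_sketch` is a hypothetical helper for illustration, not part of this patch:

```python
def trace_sketch(m, offset=0):
    # 2-D sketch: sum the elements m[i][i + offset] that fall inside
    # the matrix, i.e. the diagonal shifted right by `offset`.
    n_rows, n_cols = len(m), len(m[0])
    return sum(m[i][i + offset]
               for i in range(n_rows)
               if 0 <= i + offset < n_cols)

print(trace_sketch([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 3
print(trace_sketch([[0, 1], [2, 3]], offset=1))         # 1
```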
+def ravel(a, order='C'):
+ """
+ Return a flattened array.
+
+ A 1-D array, containing the elements of the input, is returned. A copy is
+ made only if needed.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. The elements in ``a`` are read in the order specified by
+ `order`, and packed as a 1-D array.
+ order : {'C','F', 'A', 'K'}, optional
+ The elements of ``a`` are read in this order. 'C' means to view
+ the elements in C (row-major) order. 'F' means to view the elements
+ in Fortran (column-major) order. 'A' means to view the elements
+ in 'F' order if a is Fortran contiguous, 'C' order otherwise.
+ 'K' means to view the elements in the order they occur in memory,
+ except for reversing the data when strides are negative.
+ By default, 'C' order is used.
+
+ Returns
+ -------
+ 1d_array : ndarray
+ Output of the same dtype as `a`, and of shape ``(a.size,)``.
+
+ See Also
+ --------
+ ndarray.flat : 1-D iterator over an array.
+ ndarray.flatten : 1-D array copy of the elements of an array
+ in row-major order.
+
+ Notes
+ -----
+ In row-major order, the row index varies the slowest, and the column
+ index the quickest. This can be generalized to multiple dimensions,
+ where row-major order implies that the index along the first axis
+ varies slowest, and the index along the last quickest. The opposite holds
+ for Fortran-, or column-major, mode.
+
+ Examples
+ --------
+ It is equivalent to ``reshape(-1, order=order)``.
+
+ >>> x = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> print np.ravel(x)
+ [1 2 3 4 5 6]
+
+ >>> print x.reshape(-1)
+ [1 2 3 4 5 6]
+
+ >>> print np.ravel(x, order='F')
+ [1 4 2 5 3 6]
+
+ When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering:
+
+ >>> print np.ravel(x.T)
+ [1 4 2 5 3 6]
+ >>> print np.ravel(x.T, order='A')
+ [1 2 3 4 5 6]
+
+ When ``order`` is 'K', it will preserve orderings that are neither 'C'
+ nor 'F', but won't reverse axes:
+
+ >>> a = np.arange(3)[::-1]; a
+ array([2, 1, 0])
+ >>> a.ravel(order='C')
+ array([2, 1, 0])
+ >>> a.ravel(order='K')
+ array([2, 1, 0])
+
+ >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a
+ array([[[ 0, 2, 4],
+ [ 1, 3, 5]],
+ [[ 6, 8, 10],
+ [ 7, 9, 11]]])
+ >>> a.ravel(order='C')
+ array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11])
+ >>> a.ravel(order='K')
+ array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def nonzero(a):
+ """
+ Return the indices of the elements that are non-zero.
+
+ Returns a tuple of arrays, one for each dimension of `a`, containing
+ the indices of the non-zero elements in that dimension. The
+ corresponding non-zero values can be obtained with::
+
+ a[nonzero(a)]
+
+ To group the indices by element, rather than dimension, use::
+
+ transpose(nonzero(a))
+
+ The result of this is always a 2-D array, with a row for
+ each non-zero element.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ tuple_of_arrays : tuple
+ Indices of elements that are non-zero.
+
+ See Also
+ --------
+ flatnonzero :
+ Return indices that are non-zero in the flattened version of the input
+ array.
+ ndarray.nonzero :
+ Equivalent ndarray method.
+ count_nonzero :
+ Counts the number of non-zero elements in the input array.
+
+ Examples
+ --------
+ >>> x = np.eye(3)
+ >>> x
+ array([[ 1., 0., 0.],
+ [ 0., 1., 0.],
+ [ 0., 0., 1.]])
+ >>> np.nonzero(x)
+ (array([0, 1, 2]), array([0, 1, 2]))
+
+ >>> x[np.nonzero(x)]
+ array([ 1., 1., 1.])
+ >>> np.transpose(np.nonzero(x))
+ array([[0, 0],
+ [1, 1],
+ [2, 2]])
+
+ A common use for ``nonzero`` is to find the indices of an array, where
+ a condition is True. Given an array `a`, the condition `a` > 3 is a
+ boolean array and since False is interpreted as 0, np.nonzero(a > 3)
+ yields the indices of the `a` where the condition is true.
+
+ >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
+ >>> a > 3
+ array([[False, False, False],
+ [ True, True, True],
+ [ True, True, True]], dtype=bool)
+ >>> np.nonzero(a > 3)
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ The ``nonzero`` method of the boolean array can also be called.
+
+ >>> (a > 3).nonzero()
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
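The tuple-of-index-arrays result documented above can be sketched for the 1-D case in plain Python; `nonzero_sketch` is a hypothetical helper for illustration, not part of this patch:

```python
def nonzero_sketch(seq):
    # 1-D sketch: a one-element tuple holding the indices whose
    # values are non-zero, mirroring the tuple-of-arrays result.
    return ([i for i, x in enumerate(seq) if x != 0],)

print(nonzero_sketch([0, 3, 0, 5]))  # ([1, 3],)
```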
+def shape(a):
+ """
+ Return the shape of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ shape : tuple of ints
+ The elements of the shape tuple give the lengths of the
+ corresponding array dimensions.
+
+ See Also
+ --------
+ alen
+ ndarray.shape : Equivalent array method.
+
+ Examples
+ --------
+ >>> np.shape(np.eye(3))
+ (3, 3)
+ >>> np.shape([[1, 2]])
+ (1, 2)
+ >>> np.shape([0])
+ (1,)
+ >>> np.shape(0)
+ ()
+
+ >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+ >>> np.shape(a)
+ (2,)
+ >>> a.shape
+ (2,)
+
+ """
+ if not hasattr(a, 'shape'):
+ a = numpypy.array(a)
+ return a.shape
+
+
+def compress(condition, a, axis=None, out=None):
+ """
+ Return selected slices of an array along given axis.
+
+ When working along a given axis, a slice along that axis is returned in
+ `output` for each index where `condition` evaluates to True. When
+ working on a 1-D array, `compress` is equivalent to `extract`.
+
+ Parameters
+ ----------
+ condition : 1-D array of bools
+ Array that selects which entries to return. If len(condition)
+ is less than the size of `a` along the given axis, then output is
+ truncated to the length of the condition array.
+ a : array_like
+ Array from which to extract a part.
+ axis : int, optional
+ Axis along which to take slices. If None (default), work on the
+ flattened array.
+ out : ndarray, optional
+ Output array. Its type is preserved and it must be of the right
+ shape to hold the output.
+
+ Returns
+ -------
+ compressed_array : ndarray
+ A copy of `a` without the slices along axis for which `condition`
+ is false.
+
+ See Also
+ --------
+ take, choose, diag, diagonal, select
+ ndarray.compress : Equivalent method.
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4], [5, 6]])
+ >>> a
+ array([[1, 2],
+ [3, 4],
+ [5, 6]])
+ >>> np.compress([0, 1], a, axis=0)
+ array([[3, 4]])
+ >>> np.compress([False, True, True], a, axis=0)
+ array([[3, 4],
+ [5, 6]])
+ >>> np.compress([False, True], a, axis=1)
+ array([[2],
+ [4],
+ [6]])
+
+ Working on the flattened array does not return slices along an axis but
+ selects elements.
+
+ >>> np.compress([False, True], a)
+ array([2])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def clip(a, a_min, a_max, out=None):
+ """
+ Clip (limit) the values in an array.
+
+ Given an interval, values outside the interval are clipped to
+ the interval edges. For example, if an interval of ``[0, 1]``
+ is specified, values smaller than 0 become 0, and values larger
+ than 1 become 1.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing elements to clip.
+ a_min : scalar or array_like
+ Minimum value.
+ a_max : scalar or array_like
+ Maximum value. If `a_min` or `a_max` are array_like, then they will
+ be broadcasted to the shape of `a`.
+ out : ndarray, optional
+ The results will be placed in this array. It may be the input
+ array for in-place clipping. `out` must be of the right shape
+ to hold the output. Its type is preserved.
+
+ Returns
+ -------
+ clipped_array : ndarray
+ An array with the elements of `a`, but where values
+ < `a_min` are replaced with `a_min`, and those > `a_max`
+ with `a_max`.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.arange(10)
+ >>> np.clip(a, 1, 8)
+ array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, 3, 6, out=a)
+ array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
+ >>> a = np.arange(10)
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
+ array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
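The scalar-bounds case documented above can be sketched in plain Python; `clip_sketch` is a hypothetical helper for illustration, not part of this patch:

```python
def clip_sketch(seq, a_min, a_max):
    # Scalar-bounds sketch: values below a_min become a_min,
    # values above a_max become a_max.
    return [min(max(x, a_min), a_max) for x in seq]

print(clip_sketch(range(10), 1, 8))  # [1, 1, 2, 3, 4, 5, 6, 7, 8, 8]
```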
+def sum(a, axis=None, dtype=None, out=None):
+ """
+ Sum of array elements over a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Elements to sum.
+ axis : integer, optional
+ Axis over which the sum is taken. By default `axis` is None,
+ and all elements are summed.
+ dtype : dtype, optional
+ The type of the returned array and of the accumulator in which
+ the elements are summed. By default, the dtype of `a` is used.
+ An exception is when `a` has an integer type with less precision
+ than the default platform integer. In that case, the default
+ platform integer is used instead.
+ out : ndarray, optional
+ Array into which the output is placed. By default, a new array is
+ created. If `out` is given, it must be of the appropriate shape
+ (the shape of `a` with `axis` removed, i.e.,
+ ``numpy.delete(a.shape, axis)``). Its type is preserved. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ sum_along_axis : ndarray
+ An array with the same shape as `a`, with the specified
+ axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar
+ is returned. If an output array is specified, a reference to
+ `out` is returned.
+
+ See Also
+ --------
+ ndarray.sum : Equivalent method.
+
+ cumsum : Cumulative sum of array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ mean, average
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> np.sum([0.5, 1.5])
+ 2.0
+ >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+ 1
+ >>> np.sum([[0, 1], [0, 5]])
+ 6
+ >>> np.sum([[0, 1], [0, 5]], axis=0)
+ array([0, 6])
+ >>> np.sum([[0, 1], [0, 5]], axis=1)
+ array([1, 5])
+
+ If the accumulator is too small, overflow occurs:
+
+ >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+ -128
+
+ """
+ if axis is not None or dtype is not None or out is not None:
+ raise NotImplementedError('axis, dtype and out arguments are not supported yet')
+ if not hasattr(a, "sum"):
+ a = numpypy.array(a)
+ return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ See Also
+ --------
+ prod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+ """
+ Check whether some values are true.
+
+ Refer to `any` for full documentation.
+
+ See Also
+ --------
+ any : equivalent function
+
+ """
+ if axis is not None or out is not None:
+ raise NotImplementedError('axis and out arguments are not supported yet')
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+ """
+ Check if all elements of input array are true.
+
+ See Also
+ --------
+ numpy.all : Equivalent function; see for details.
+
+ """
+ if axis is not None or out is not None:
+ raise NotImplementedError('axis and out arguments are not supported yet')
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+def any(a, axis=None, out=None):
+ """
+ Test whether any array element along a given axis evaluates to True.
+
+ Returns single boolean unless `axis` is not ``None``
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical OR is performed. The default
+ (`axis` = `None`) is to perform a logical OR over a flattened
+ input array. `axis` may be negative, in which case it counts
+ from the last to the first axis.
+ out : ndarray, optional
+ Alternate output array in which to place the result. It must have
+ the same shape as the expected output and its type is preserved
+ (e.g., if it is of type float, then it will remain so, returning
+ 1.0 for True and 0.0 for False, regardless of the type of `a`).
+ See `doc.ufuncs` (Section "Output arguments") for details.
+
+ Returns
+ -------
+ any : bool or ndarray
+ A new boolean or `ndarray` is returned unless `out` is specified,
+ in which case a reference to `out` is returned.
+
+ See Also
+ --------
+ ndarray.any : equivalent method
+
+ all : Test whether all elements along a given axis evaluate to True.
+
+ Notes
+ -----
+ Not a Number (NaN), positive infinity and negative infinity evaluate
+ to `True` because these are not equal to zero.
+
+ Examples
+ --------
+ >>> np.any([[True, False], [True, True]])
+ True
+
+ >>> np.any([[True, False], [False, False]], axis=0)
+ array([ True, False], dtype=bool)
+
+ >>> np.any([-1, 0, 5])
+ True
+
+ >>> np.any(np.nan)
+ True
+
+ >>> o=np.array([False])
+ >>> z=np.any([-1, 4, 5], out=o)
+ >>> z, o
+ (array([ True], dtype=bool), array([ True], dtype=bool))
+ >>> # Check now that z is a reference to o
+ >>> z is o
+ True
+ >>> id(z), id(o) # identity of z and o # doctest: +SKIP
+ (191614240, 191614240)
+
+ """
+ if axis is not None or out is not None:
+ raise NotImplementedError('axis and out arguments are not supported yet')
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def all(a, axis=None, out=None):
+ """
+ Test whether all array elements along a given axis evaluate to True.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical AND is performed.
+ The default (`axis` = `None`) is to perform a logical AND
+ over a flattened input array. `axis` may be negative, in which
+ case it counts from the last to the first axis.
+ out : ndarray, optional
+ Alternate output array in which to place the result.
+ It must have the same shape as the expected output and its
+ type is preserved (e.g., if ``dtype(out)`` is float, the result
+ will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section
+ "Output arguments") for more details.
+
+ Returns
+ -------
+ all : ndarray, bool
+ A new boolean or array is returned unless `out` is specified,
+ in which case a reference to `out` is returned.
+
+ See Also
+ --------
+ ndarray.all : equivalent method
+
+ any : Test whether any element along a given axis evaluates to True.
+
+ Notes
+ -----
+ Not a Number (NaN), positive infinity and negative infinity
+ evaluate to `True` because these are not equal to zero.
+
+ Examples
+ --------
+ >>> np.all([[True,False],[True,True]])
+ False
+
+ >>> np.all([[True,False],[True,True]], axis=0)
+ array([ True, False], dtype=bool)
+
+ >>> np.all([-1, 4, 5])
+ True
+
+ >>> np.all([1.0, np.nan])
+ True
+
+ >>> o=np.array([False])
+ >>> z=np.all([-1, 4, 5], out=o)
+ >>> id(z), id(o), z # doctest: +SKIP
+ (28293632, 28293632, array([ True], dtype=bool))
+
+ """
+ if axis is not None or out is not None:
+ raise NotImplementedError('axis and out arguments are not supported yet')
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+
+def cumsum(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative sum of the elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ Axis along which the cumulative sum is computed. The default
+ (None) is to compute the cumsum over the flattened array.
+ dtype : dtype, optional
+ Type of the returned array and of the accumulator in which the
+ elements are summed. If `dtype` is not specified, it defaults
+ to the dtype of `a`, unless `a` has an integer dtype with a
+ precision less than that of the default platform integer. In
+ that case, the default platform integer is used.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type will be cast if necessary. See `doc.ufuncs`
+ (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ cumsum_along_axis : ndarray.
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to `out` is returned. The
+ result has the same size as `a`, and the same shape as `a` if
+ `axis` is not None or `a` is a 1-d array.
+
+
+ See Also
+ --------
+ sum : Sum array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> a
+ array([[1, 2, 3],
+ [4, 5, 6]])
+ >>> np.cumsum(a)
+ array([ 1, 3, 6, 10, 15, 21])
+ >>> np.cumsum(a, dtype=float) # specifies type of output value(s)
+ array([ 1., 3., 6., 10., 15., 21.])
+
+ >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns
+ array([[1, 2, 3],
+ [5, 7, 9]])
+ >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows
+ array([[ 1, 3, 6],
+ [ 4, 9, 15]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
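The flattened case documented above (a running total over the elements) can be sketched in plain Python; `cumsum_sketch` is a hypothetical helper for illustration, not part of this patch:

```python
def cumsum_sketch(seq):
    # Flattened-case sketch: the running total of the elements.
    out, total = [], 0
    for x in seq:
        total += x
        out.append(total)
    return out

print(cumsum_sketch([1, 2, 3, 4, 5, 6]))  # [1, 3, 6, 10, 15, 21]
```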
+def cumproduct(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product over the given axis.
+
+
+ See Also
+ --------
+ cumprod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+ """
+ Range of values (maximum - minimum) along an axis.
+
+ The name of the function comes from the acronym for 'peak to peak'.
+
+ Parameters
+ ----------
+ a : array_like
+ Input values.
+ axis : int, optional
+ Axis along which to find the peaks. By default, flatten the
+ array.
+ out : array_like
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output,
+ but the type of the output values will be cast if necessary.
+
+ Returns
+ -------
+ ptp : ndarray
+ A new array holding the result, unless `out` was
+ specified, in which case a reference to `out` is returned.
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.ptp(x, axis=0)
+ array([2, 2])
+
+ >>> np.ptp(x, axis=1)
+ array([1, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
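The flattened case documented above (maximum minus minimum) can be sketched in plain Python; `ptp_sketch` is a hypothetical helper for illustration, not part of this patch:

```python
def ptp_sketch(seq):
    # Flattened-case sketch: the peak-to-peak range, max(seq) - min(seq).
    return max(seq) - min(seq)

print(ptp_sketch([0, 1, 2, 3]))  # 3
```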
+def amax(a, axis=None, out=None):
+ """
+ Return the maximum of an array or maximum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default flattened input is used.
+ out : ndarray, optional
+ Alternate output array in which to place the result. Must be of
+ the same shape and buffer length as the expected output. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amax : ndarray or scalar
+ Maximum of `a`. If `axis` is None, the result is a scalar value.
+ If `axis` is given, the result is an array of dimension
+ ``a.ndim - 1``.
+
+ See Also
+ --------
+ nanmax : NaN values are ignored instead of being propagated.
+ fmax : same behavior as the C99 fmax function.
+ argmax : indices of the maximum values.
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is NaN, the
+ corresponding max value will be NaN as well. To ignore NaN values
+ (MATLAB behavior), please use nanmax.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape((2,2))
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> np.amax(a)
+ 3
+ >>> np.amax(a, axis=0)
+ array([2, 3])
+ >>> np.amax(a, axis=1)
+ array([1, 3])
+
+ >>> b = np.arange(5, dtype=np.float)
+ >>> b[2] = np.NaN
+ >>> np.amax(b)
+ nan
+ >>> np.nanmax(b)
+ 4.0
+
+ """
+ if axis is not None or out is not None:
+ raise NotImplementedError('axis and out arguments are not supported yet')
+ if not hasattr(a, "max"):
+ a = numpypy.array(a)
+ return a.max()
+
+
+def amin(a, axis=None, out=None):
+ """
+ Return the minimum of an array or minimum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default a flattened input is used.
+ out : ndarray, optional
+ Alternative output array in which to place the result. Must
+ be of the same shape and buffer length as the expected output.
+ See `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amin : ndarray
+ A new array or a scalar array with the result.
+
+ See Also
+ --------
+ nanmin : NaN values are ignored instead of being propagated.
+ fmin : same behavior as the C99 fmin function.
+ argmin : indices of the minimum values.
+
+ amax, nanmax, fmax
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is NaN, the
+ corresponding min value will be NaN as well. To ignore NaN values
+ (MATLAB behavior), please use nanmin.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape((2,2))
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> np.amin(a) # Minimum of the flattened array
+ 0
+ >>> np.amin(a, axis=0) # Minima along the first axis
+ array([0, 1])
+ >>> np.amin(a, axis=1) # Minima along the second axis
+ array([0, 2])
+
+ >>> b = np.arange(5, dtype=np.float)
+ >>> b[2] = np.NaN
+ >>> np.amin(b)
+ nan
+ >>> np.nanmin(b)
+ 0.0
+
+ """
+ # amin() is equivalent to min()
+ if not hasattr(a, 'min'):
+ a = numpypy.array(a)
+ return a.min()
+
+def alen(a):
+ """
+ Return the length of the first dimension of the input array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ l : int
+ Length of the first dimension of `a`.
+
+ See Also
+ --------
+ shape, size
+
+ Examples
+ --------
+ >>> a = np.zeros((7,4,5))
+ >>> a.shape[0]
+ 7
+ >>> np.alen(a)
+ 7
+
+ """
+ if not hasattr(a, 'shape'):
+ a = numpypy.array(a)
+ return a.shape[0]
+
+
+def prod(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis over which the product is taken. By default, the product
+ of all elements is calculated.
+ dtype : data-type, optional
+ The data-type of the returned array, as well as of the accumulator
+ in which the elements are multiplied. By default, if `a` is of
+ integer type, `dtype` is the default platform integer. (Note: if
+ the type of `a` is unsigned, then so is `dtype`.) Otherwise,
+ the dtype is the same as that of `a`.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output, but the type of the
+ output values will be cast if necessary.
+
+ Returns
+ -------
+ product_along_axis : ndarray, see `dtype` parameter above.
+ An array shaped as `a` but with the specified axis removed.
+ Returns a reference to `out` if specified.
+
+ See Also
+ --------
+ ndarray.prod : equivalent method
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow. That means that, on a 32-bit platform:
+
+ >>> x = np.array([536870910, 536870910, 536870910, 536870910])
+ >>> np.prod(x) #random
+ 16
+
+ Examples
+ --------
+ By default, calculate the product of all elements:
+
+ >>> np.prod([1.,2.])
+ 2.0
+
+ Even when the input array is two-dimensional:
+
+ >>> np.prod([[1.,2.],[3.,4.]])
+ 24.0
+
+ But we can also specify the axis over which to multiply:
+
+ >>> np.prod([[1.,2.],[3.,4.]], axis=1)
+ array([ 2., 12.])
+
+ If the type of `x` is unsigned, then the output type is
+ the unsigned platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.uint8)
+ >>> np.prod(x).dtype == np.uint
+ True
+
+ If `x` is of a signed integer type, then the output type
+ is the default platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.int8)
+ >>> np.prod(x).dtype == np.int
+ True
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def cumprod(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ Axis along which the cumulative product is computed. By default
+ the input is flattened.
+ dtype : dtype, optional
+ Type of the returned array, as well as of the accumulator in which
+ the elements are multiplied. If *dtype* is not specified, it
+ defaults to the dtype of `a`, unless `a` has an integer dtype with
+ a precision less than that of the default platform integer. In
+ that case, the default platform integer is used instead.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type of the resulting values will be cast if necessary.
+
+ Returns
+ -------
+ cumprod : ndarray
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to out is returned.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([1,2,3])
+ >>> np.cumprod(a) # intermediate results 1, 1*2
+ ... # total product 1*2*3 = 6
+ array([1, 2, 6])
+ >>> a = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> np.cumprod(a, dtype=float) # specify type of output
+ array([ 1., 2., 6., 24., 120., 720.])
+
+ The cumulative product for each column (i.e., over the rows) of `a`:
+
+ >>> np.cumprod(a, axis=0)
+ array([[ 1, 2, 3],
+ [ 4, 10, 18]])
+
+ The cumulative product for each row (i.e. over the columns) of `a`:
+
+ >>> np.cumprod(a,axis=1)
+ array([[ 1, 2, 6],
+ [ 4, 20, 120]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ndim(a):
+ """
+ Return the number of dimensions of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. If it is not already an ndarray, a conversion is
+ attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in `a`. Scalars are zero-dimensional.
+
+ See Also
+ --------
+ ndarray.ndim : equivalent method
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Examples
+ --------
+ >>> np.ndim([[1,2,3],[4,5,6]])
+ 2
+ >>> np.ndim(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.ndim(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def rank(a):
+ """
+ Return the number of dimensions of an array.
+
+ If `a` is not already an array, a conversion is attempted.
+ Scalars are zero dimensional.
+
+ Parameters
+ ----------
+ a : array_like
+ Array whose number of dimensions is desired. If `a` is not an array,
+ a conversion is attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in the array.
+
+ See Also
+ --------
+ ndim : equivalent function
+ ndarray.ndim : equivalent property
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Notes
+ -----
+ In the old Numeric package, `rank` was the term used for the number of
+ dimensions, but in Numpy `ndim` is used instead.
+
+ Examples
+ --------
+ >>> np.rank([1,2,3])
+ 1
+ >>> np.rank(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.rank(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def size(a, axis=None):
+ """
+ Return the number of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which the elements are counted. By default, give
+ the total number of elements.
+
+ Returns
+ -------
+ element_count : int
+ Number of elements along the specified axis.
+
+ See Also
+ --------
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+ ndarray.size : number of elements in array
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3],[4,5,6]])
+ >>> np.size(a)
+ 6
+ >>> np.size(a,1)
+ 3
+ >>> np.size(a,0)
+ 2
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def around(a, decimals=0, out=None):
+ """
+ Evenly round to the given number of decimals.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ decimals : int, optional
+ Number of decimal places to round to (default: 0). If
+ decimals is negative, it specifies the number of positions to
+ the left of the decimal point.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output, but the type of the output
+ values will be cast if necessary. See `doc.ufuncs` (Section
+ "Output arguments") for details.
+
+ Returns
+ -------
+ rounded_array : ndarray
+ An array of the same type as `a`, containing the rounded values.
+ Unless `out` was specified, a new array is created. A reference to
+ the result is returned.
+
+ The real and imaginary parts of complex numbers are rounded
+ separately. The result of rounding a float is a float.
+
+ See Also
+ --------
+ ndarray.round : equivalent method
+
+ ceil, fix, floor, rint, trunc
+
+
+ Notes
+ -----
+ For values exactly halfway between rounded decimal values, Numpy
+ rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0,
+ -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due
+ to the inexact representation of decimal fractions in the IEEE
+ floating point standard [1]_ and errors introduced when scaling
+ by powers of ten.
+
+ References
+ ----------
+ .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
+ .. [2] "How Futile are Mindless Assessments of
+ Roundoff in Floating-Point Computation?", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
+
+ Examples
+ --------
+ >>> np.around([0.37, 1.64])
+ array([ 0., 2.])
+ >>> np.around([0.37, 1.64], decimals=1)
+ array([ 0.4, 1.6])
+ >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value
+ array([ 0., 2., 2., 4., 4.])
+ >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned
+ array([ 1, 2, 3, 11])
+ >>> np.around([1,2,3,11], decimals=-1)
+ array([ 0, 0, 0, 10])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
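The round-half-to-even rule described in the `around` notes is the same one Python 3's built-in `round` applies to floats, so it can be illustrated without numpy at all:

```python
# Round-half-to-even ("banker's rounding"): exact halves go to the
# nearest even integer, matching the np.around doctest above.
halves = [0.5, 1.5, 2.5, 3.5, 4.5]
print([round(h) for h in halves])  # [0, 2, 2, 4, 4]
```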
+
+def round_(a, decimals=0, out=None):
+ """
+ Round an array to the given number of decimals.
+
+ Refer to `around` for full documentation.
+
+ See Also
+ --------
+ around : equivalent function
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def mean(a, axis=None, dtype=None, out=None):
+ """
+ Compute the arithmetic mean along the specified axis.
+
+ Returns the average of the array elements. The average is taken over
+ the flattened array by default, otherwise over the specified axis.
+ `float64` intermediate and return values are used for integer inputs.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing numbers whose mean is desired. If `a` is not an
+ array, a conversion is attempted.
+ axis : int, optional
+ Axis along which the means are computed. The default is to compute
+ the mean of the flattened array.
+ dtype : data-type, optional
+ Type to use in computing the mean. For integer inputs, the default
+ is `float64`; for floating point inputs, it is the same as the
+ input dtype.
+ out : ndarray, optional
+ Alternate output array in which to place the result. The default
+ is ``None``; if provided, it must have the same shape as the
+ expected output, but the type will be cast if necessary.
+ See `doc.ufuncs` for details.
+
+ Returns
+ -------
+ m : ndarray, see dtype parameter above
+ If `out=None`, returns a new array containing the mean values,
+ otherwise a reference to the output array is returned.
+
+ See Also
+ --------
+ average : Weighted average
+
+ Notes
+ -----
+ The arithmetic mean is the sum of the elements along the axis divided
+ by the number of elements.
+
+ Note that for floating-point input, the mean is computed using the
+ same precision the input has. Depending on the input data, this can
+ cause the results to be inaccurate, especially for `float32` (see
+ example below). Specifying a higher-precision accumulator using the
+ `dtype` keyword can alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4]])
+ >>> np.mean(a)
+ 2.5
+ >>> np.mean(a, axis=0)
+ array([ 2., 3.])
+ >>> np.mean(a, axis=1)
+ array([ 1.5, 3.5])
+
+ In single precision, `mean` can be inaccurate:
+
+ >>> a = np.zeros((2, 512*512), dtype=np.float32)
+ >>> a[0, :] = 1.0
+ >>> a[1, :] = 0.1
+ >>> np.mean(a)
+ 0.546875
+
+ Computing the mean in float64 is more accurate:
+
+ >>> np.mean(a, dtype=np.float64)
+ 0.55000000074505806
+
+ """
+ if not hasattr(a, "mean"):
+ a = numpypy.array(a)
+ return a.mean()
+
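The single-precision pitfall in the `mean` notes comes from accumulating at the input's precision. A rough pure-Python illustration, simulating a float32 accumulator by rounding through `struct` after every addition (the `f32` helper exists only for this sketch):

```python
import struct

def f32(x):
    # Round a Python float to the nearest IEEE-754 single.
    return struct.unpack('f', struct.pack('f', x))[0]

data = [1.0] * (512 * 512) + [0.1] * (512 * 512)

acc = 0.0
for x in data:
    acc = f32(acc + f32(x))     # float32 accumulator
mean32 = acc / len(data)

mean64 = sum(data) / len(data)  # float64 accumulator

print(mean32)  # drifts visibly from 0.55 (0.546875 here)
print(mean64)  # ~0.55
```

Once the running sum is large, each added 0.1 falls below the float32 spacing and is rounded down, which is exactly the error the docstring's `dtype=np.float64` example avoids.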
+
+def std(a, axis=None, dtype=None, out=None, ddof=0):
+ """
+ Compute the standard deviation along the specified axis.
+
+ Returns the standard deviation, a measure of the spread of a distribution,
+ of the array elements. The standard deviation is computed for the
+ flattened array by default, otherwise over the specified axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Calculate the standard deviation of these values.
+ axis : int, optional
+ Axis along which the standard deviation is computed. The default is
+ to compute the standard deviation of the flattened array.
+ dtype : dtype, optional
+ Type to use in computing the standard deviation. For arrays of
+ integer type the default is float64, for arrays of float types it is
+ the same as the array type.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output but the type (of the calculated
+ values) will be cast if necessary.
+ ddof : int, optional
+ Means Delta Degrees of Freedom. The divisor used in calculations
+ is ``N - ddof``, where ``N`` represents the number of elements.
+ By default `ddof` is zero.
+
+ Returns
+ -------
+ standard_deviation : ndarray, see dtype parameter above.
+ If `out` is None, return a new array containing the standard deviation,
+ otherwise return a reference to the output array.
+
+ See Also
+ --------
+ var, mean
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ The standard deviation is the square root of the average of the squared
+ deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``.
+
+ The average squared deviation is normally calculated as ``x.sum() / N``, where
+ ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof``
+ is used instead. In standard statistical practice, ``ddof=1`` provides an
+ unbiased estimator of the variance of the infinite population. ``ddof=0``
+ provides a maximum likelihood estimate of the variance for normally
+ distributed variables. The standard deviation computed in this function
+ is the square root of the estimated variance, so even with ``ddof=1``, it
+ will not be an unbiased estimate of the standard deviation per se.
+
+ Note that, for complex numbers, `std` takes the absolute
+ value before squaring, so that the result is always real and nonnegative.
+
+ For floating-point input, the *std* is computed using the same
+ precision the input has. Depending on the input data, this can cause
+ the results to be inaccurate, especially for float32 (see example below).
+ Specifying a higher-accuracy accumulator using the `dtype` keyword can
+ alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4]])
+ >>> np.std(a)
+ 1.1180339887498949
+ >>> np.std(a, axis=0)
+ array([ 1., 1.])
+ >>> np.std(a, axis=1)
+ array([ 0.5, 0.5])
+
+ In single precision, std() can be inaccurate:
+
+ >>> a = np.zeros((2,512*512), dtype=np.float32)
+ >>> a[0,:] = 1.0
+ >>> a[1,:] = 0.1
+ >>> np.std(a)
+ 0.45172946707416706
+
+ Computing the standard deviation in float64 is more accurate:
+
+ >>> np.std(a, dtype=np.float64)
+ 0.44999999925552653
+
+ """
+ if not hasattr(a, "std"):
+ a = numpypy.array(a)
+ return a.std()
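As the notes explain, the divisor is ``N - ddof`` (note that the fallback `a.std()` in the diff drops the `ddof` argument). A pure-Python sketch of the formula:

```python
import math

def std_with_ddof(xs, ddof=0):
    # Divisor N - ddof: ddof=0 gives the maximum-likelihood estimate,
    # ddof=1 the Bessel-corrected (unbiased-variance) estimate.
    n = len(xs)
    m = sum(xs) / float(n)
    var = sum((x - m) ** 2 for x in xs) / (n - ddof)
    return math.sqrt(var)

print(std_with_ddof([1.0, 2.0, 3.0, 4.0]))          # ~1.1180
print(std_with_ddof([1.0, 2.0, 3.0, 4.0], ddof=1))  # ~1.2910
```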
+
+
+def var(a, axis=None, dtype=None, out=None, ddof=0):
+ """
+ Compute the variance along the specified axis.
+
+ Returns the variance of the array elements, a measure of the spread of a
+ distribution. The variance is computed for the flattened array by
+ default, otherwise over the specified axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing numbers whose variance is desired. If `a` is not an
+ array, a conversion is attempted.
+ axis : int, optional
+ Axis along which the variance is computed. The default is to compute
+ the variance of the flattened array.
+ dtype : data-type, optional
+ Type to use in computing the variance. For arrays of integer type
+ the default is `float64`; for arrays of float types it is the same as
+ the array type.
+ out : ndarray, optional
+ Alternate output array in which to place the result. It must have
+ the same shape as the expected output, but the type is cast if
+ necessary.
+ ddof : int, optional
+ "Delta Degrees of Freedom": the divisor used in the calculation is
+ ``N - ddof``, where ``N`` represents the number of elements. By
+ default `ddof` is zero.
+
+ Returns
+ -------
+ variance : ndarray, see dtype parameter above
+ If ``out=None``, returns a new array containing the variance;
+ otherwise, a reference to the output array is returned.
+
+ See Also
+ --------
+ std : Standard deviation
+ mean : Average
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ The variance is the average of the squared deviations from the mean,
+ i.e., ``var = mean(abs(x - x.mean())**2)``.
+
+ The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``.
+ If, however, `ddof` is specified, the divisor ``N - ddof`` is used
+ instead. In standard statistical practice, ``ddof=1`` provides an
+ unbiased estimator of the variance of a hypothetical infinite population.
+ ``ddof=0`` provides a maximum likelihood estimate of the variance for
+ normally distributed variables.
+
+ Note that for complex numbers, the absolute value is taken before
+ squaring, so that the result is always real and nonnegative.
+
+ For floating-point input, the variance is computed using the same
+ precision the input has. Depending on the input data, this can cause
+ the results to be inaccurate, especially for `float32` (see example
+ below). Specifying a higher-accuracy accumulator using the ``dtype``
+ keyword can alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2],[3,4]])
+ >>> np.var(a)
+ 1.25
+ >>> np.var(a,0)
+ array([ 1., 1.])
+ >>> np.var(a,1)
+ array([ 0.25, 0.25])
+
+ In single precision, var() can be inaccurate:
+
+ >>> a = np.zeros((2,512*512), dtype=np.float32)
+ >>> a[0,:] = 1.0
+ >>> a[1,:] = 0.1
+ >>> np.var(a)
+ 0.20405951142311096
+
+ Computing the variance in float64 is more accurate:
+
+ >>> np.var(a, dtype=np.float64)
+ 0.20249999932997387
+ >>> ((1-0.55)**2 + (0.1-0.55)**2)/2
+ 0.20250000000000001
+
+ """
+ if not hasattr(a, "var"):
+ a = numpypy.array(a)
+ return a.var()
diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/test/test_fromnumeric.py
@@ -0,0 +1,109 @@
+
+from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest
+
+class AppTestFromNumeric(BaseNumpyAppTest):
+ def test_argmax(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, argmax
+ a = arange(6).reshape((2,3))
+ assert argmax(a) == 5
+ # assert (argmax(a, axis=0) == array([1, 1, 1])).all()
+ # assert (argmax(a, axis=1) == array([2, 2])).all()
+ b = arange(6)
+ b[1] = 5
+ assert argmax(b) == 1
+
+ def test_argmin(self):
+ # tests adapted from test_argmax
+ from numpypy import array, arange, argmin
+ a = arange(6).reshape((2,3))
+ assert argmin(a) == 0
+ # assert (argmin(a, axis=0) == array([0, 0, 0])).all()
+ # assert (argmin(a, axis=1) == array([0, 0])).all()
+ b = arange(6)
+ b[1] = 0
+ assert argmin(b) == 0
+
+ def test_shape(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, identity, shape
+ assert shape(identity(3)) == (3, 3)
+ assert shape([[1, 2]]) == (1, 2)
+ assert shape([0]) == (1,)
+ assert shape(0) == ()
+ # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+ # assert shape(a) == (2,)
+
+ def test_sum(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, sum, ones
+ assert sum([0.5, 1.5]) == 2.0
+ assert sum([[0, 1], [0, 5]]) == 6
+ # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1
+ # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all()
+ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all()
+ # If the accumulator is too small, overflow occurs:
+ # assert ones(128, dtype=int8).sum(dtype=int8) == -128
+
+ def test_amin(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, amin
+ a = arange(4).reshape((2,2))
+ assert amin(a) == 0
+ # # Minima along the first axis
+ # assert (amin(a, axis=0) == array([0, 1])).all()
+ # # Minima along the second axis
+ # assert (amin(a, axis=1) == array([0, 2])).all()
+ # # NaN behaviour
+ # b = arange(5, dtype=float)
+ # b[2] = NaN
+ # assert amin(b) == nan
+ # assert nanmin(b) == 0.0
+
+ def test_amax(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, amax
+ a = arange(4).reshape((2,2))
+ assert amax(a) == 3
+ # assert (amax(a, axis=0) == array([2, 3])).all()
+ # assert (amax(a, axis=1) == array([1, 3])).all()
+ # # NaN behaviour
+ # b = arange(5, dtype=float)
+ # b[2] = NaN
+ # assert amax(b) == nan
+ # assert nanmax(b) == 4.0
+
+ def test_alen(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, zeros, alen
+ a = zeros((7,4,5))
+ assert a.shape[0] == 7
+ assert alen(a) == 7
+
+ def test_ndim(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, ndim
+ assert ndim([[1,2,3],[4,5,6]]) == 2
+ assert ndim(array([[1,2,3],[4,5,6]])) == 2
+ assert ndim(1) == 0
+
+ def test_rank(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, rank
+ assert rank([[1,2,3],[4,5,6]]) == 2
+ assert rank(array([[1,2,3],[4,5,6]])) == 2
+ assert rank(1) == 0
+
+ def test_var(self):
+ from numpypy import array, var
+ a = array([[1,2],[3,4]])
+ assert var(a) == 1.25
+ # assert (np.var(a,0) == array([ 1., 1.])).all()
+ # assert (np.var(a,1) == array([ 0.25, 0.25])).all()
+
+ def test_std(self):
+ from numpypy import array, std
+ a = array([[1, 2], [3, 4]])
+ assert std(a) == 1.1180339887498949
+ # assert (std(a, axis=0) == array([ 1., 1.])).all()
+ # assert (std(a, axis=1) == array([ 0.5, 0.5])).all()
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py
--- a/pypy/annotation/description.py
+++ b/pypy/annotation/description.py
@@ -257,7 +257,8 @@
try:
inputcells = args.match_signature(signature, defs_s)
except ArgErr, e:
- raise TypeError, "signature mismatch: %s" % e.getmsg(self.name)
+ raise TypeError("signature mismatch: %s() %s" %
+ (self.name, e.getmsg()))
return inputcells
def specialize(self, inputcells, op=None):
diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py
--- a/pypy/interpreter/argument.py
+++ b/pypy/interpreter/argument.py
@@ -428,8 +428,8 @@
return self._match_signature(w_firstarg,
scope_w, signature, defaults_w, 0)
except ArgErr, e:
- raise OperationError(self.space.w_TypeError,
- self.space.wrap(e.getmsg(fnname)))
+ raise operationerrfmt(self.space.w_TypeError,
+ "%s() %s", fnname, e.getmsg())
def _parse(self, w_firstarg, signature, defaults_w, blindargs=0):
"""Parse args and kwargs according to the signature of a code object,
@@ -450,8 +450,8 @@
try:
return self._parse(w_firstarg, signature, defaults_w, blindargs)
except ArgErr, e:
- raise OperationError(self.space.w_TypeError,
- self.space.wrap(e.getmsg(fnname)))
+ raise operationerrfmt(self.space.w_TypeError,
+ "%s() %s", fnname, e.getmsg())
@staticmethod
def frompacked(space, w_args=None, w_kwds=None):
@@ -626,7 +626,7 @@
class ArgErr(Exception):
- def getmsg(self, fnname):
+ def getmsg(self):
raise NotImplementedError
class ArgErrCount(ArgErr):
@@ -642,11 +642,10 @@
self.num_args = got_nargs
self.num_kwds = nkwds
- def getmsg(self, fnname):
+ def getmsg(self):
n = self.expected_nargs
if n == 0:
- msg = "%s() takes no arguments (%d given)" % (
- fnname,
+ msg = "takes no arguments (%d given)" % (
self.num_args + self.num_kwds)
else:
defcount = self.num_defaults
@@ -672,8 +671,7 @@
msg2 = " non-keyword"
else:
msg2 = ""
- msg = "%s() takes %s %d%s argument%s (%d given)" % (
- fnname,
+ msg = "takes %s %d%s argument%s (%d given)" % (
msg1,
n,
msg2,
@@ -686,9 +684,8 @@
def __init__(self, argname):
self.argname = argname
- def getmsg(self, fnname):
- msg = "%s() got multiple values for keyword argument '%s'" % (
- fnname,
+ def getmsg(self):
+ msg = "got multiple values for keyword argument '%s'" % (
self.argname)
return msg
@@ -722,13 +719,11 @@
break
self.kwd_name = name
- def getmsg(self, fnname):
+ def getmsg(self):
if self.num_kwds == 1:
- msg = "%s() got an unexpected keyword argument '%s'" % (
- fnname,
+ msg = "got an unexpected keyword argument '%s'" % (
self.kwd_name)
else:
- msg = "%s() got %d unexpected keyword arguments" % (
- fnname,
+ msg = "got %d unexpected keyword arguments" % (
self.num_kwds)
return msg
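The `argument.py` hunks above move the function name out of every `getmsg` implementation and prepend it once at the raise site. A minimal sketch of that refactoring (class and function names here are illustrative, not PyPy's actual ones):

```python
class ArgErr(Exception):
    def getmsg(self):
        raise NotImplementedError

class CountErr(ArgErr):
    def __init__(self, given, expected):
        self.given = given
        self.expected = expected
    def getmsg(self):
        # No function name embedded here any more...
        return "takes exactly %d arguments (%d given)" % (
            self.expected, self.given)

def format_error(fnname, err):
    # ...it is stitched in once, where the error is reported.
    return "%s() %s" % (fnname, err.getmsg())

print(format_error("foo", CountErr(3, 2)))
# foo() takes exactly 2 arguments (3 given)
```

Centralizing the `"%s() %s"` formatting is what lets the diff switch from `OperationError` plus `space.wrap` to the cheaper `operationerrfmt` call.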
diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py
--- a/pypy/interpreter/test/test_argument.py
+++ b/pypy/interpreter/test/test_argument.py
@@ -393,8 +393,8 @@
class FakeArgErr(ArgErr):
- def getmsg(self, fname):
- return "msg "+fname
+ def getmsg(self):
+ return "msg"
def _match_signature(*args):
raise FakeArgErr()
@@ -404,7 +404,7 @@
excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo",
Signature(["a", "b"], None, None))
assert excinfo.value.w_type is TypeError
- assert excinfo.value._w_value == "msg foo"
+ assert excinfo.value.get_w_value(space) == "foo() msg"
def test_args_parsing_into_scope(self):
@@ -448,8 +448,8 @@
class FakeArgErr(ArgErr):
- def getmsg(self, fname):
- return "msg "+fname
+ def getmsg(self):
+ return "msg"
def _match_signature(*args):
raise FakeArgErr()
@@ -460,7 +460,7 @@
"obj", [None, None], "foo",
Signature(["a", "b"], None, None))
assert excinfo.value.w_type is TypeError
- assert excinfo.value._w_value == "msg foo"
+ assert excinfo.value.get_w_value(space) == "foo() msg"
def test_topacked_frompacked(self):
space = DummySpace()
@@ -493,35 +493,35 @@
# got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg,
# defaults_w, missing_args
err = ArgErrCount(1, 0, 0, False, False, None, 0)
- s = err.getmsg('foo')
- assert s == "foo() takes no arguments (1 given)"
+ s = err.getmsg()
+ assert s == "takes no arguments (1 given)"
err = ArgErrCount(0, 0, 1, False, False, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 argument (0 given)"
err = ArgErrCount(3, 0, 2, False, False, [], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 2 arguments (3 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 2 arguments (3 given)"
err = ArgErrCount(3, 0, 2, False, False, ['a'], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes at most 2 arguments (3 given)"
+ s = err.getmsg()
+ assert s == "takes at most 2 arguments (3 given)"
err = ArgErrCount(1, 0, 2, True, False, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 2 arguments (1 given)"
+ s = err.getmsg()
+ assert s == "takes at least 2 arguments (1 given)"
err = ArgErrCount(0, 1, 2, True, False, ['a'], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes at least 1 non-keyword argument (0 given)"
err = ArgErrCount(2, 1, 1, False, True, [], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 non-keyword argument (2 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 non-keyword argument (2 given)"
err = ArgErrCount(0, 1, 1, False, True, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 non-keyword argument (0 given)"
err = ArgErrCount(0, 1, 1, True, True, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes at least 1 non-keyword argument (0 given)"
err = ArgErrCount(2, 1, 1, False, True, ['a'], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes at most 1 non-keyword argument (2 given)"
+ s = err.getmsg()
+ assert s == "takes at most 1 non-keyword argument (2 given)"
def test_bad_type_for_star(self):
space = self.space
@@ -543,12 +543,12 @@
def test_unknown_keywords(self):
space = DummySpace()
err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None)
- s = err.getmsg('foo')
- assert s == "foo() got an unexpected keyword argument 'b'"
+ s = err.getmsg()
+ assert s == "got an unexpected keyword argument 'b'"
err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'],
[True, False, False], None)
- s = err.getmsg('foo')
- assert s == "foo() got 2 unexpected keyword arguments"
+ s = err.getmsg()
+ assert s == "got 2 unexpected keyword arguments"
def test_unknown_unicode_keyword(self):
class DummySpaceUnicode(DummySpace):
@@ -558,13 +558,13 @@
err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'],
[True, False, True, True],
[unichr(0x1234), u'b', u'c'])
- s = err.getmsg('foo')
- assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'"
+ s = err.getmsg()
+ assert s == "got an unexpected keyword argument '\xe1\x88\xb4'"
def test_multiple_values(self):
err = ArgErrMultipleValues('bla')
- s = err.getmsg('foo')
- assert s == "foo() got multiple values for keyword argument 'bla'"
+ s = err.getmsg()
+ assert s == "got multiple values for keyword argument 'bla'"
class AppTestArgument:
def test_error_message(self):
diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py
--- a/pypy/jit/backend/x86/regalloc.py
+++ b/pypy/jit/backend/x86/regalloc.py
@@ -741,7 +741,7 @@
self.xrm.possibly_free_var(op.getarg(0))
def consider_cast_int_to_float(self, op):
- loc0 = self.rm.loc(op.getarg(0))
+ loc0 = self.rm.force_allocate_reg(op.getarg(0))
loc1 = self.xrm.force_allocate_reg(op.result)
self.Perform(op, [loc0], loc1)
self.rm.possibly_free_var(op.getarg(0))
diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py
--- a/pypy/jit/metainterp/resoperation.py
+++ b/pypy/jit/metainterp/resoperation.py
@@ -16,17 +16,15 @@
# debug
name = ""
pc = 0
+ opnum = 0
_attrs_ = ('result',)
def __init__(self, result):
self.result = result
- # methods implemented by each concrete class
- # ------------------------------------------
-
def getopnum(self):
- raise NotImplementedError
+ return self.opnum
# methods implemented by the arity mixins
# ---------------------------------------
@@ -592,12 +590,9 @@
baseclass = PlainResOp
mixin = arity2mixin.get(arity, N_aryOp)
- def getopnum(self):
- return opnum
-
cls_name = '%s_OP' % name
bases = (get_base_class(mixin, baseclass),)
- dic = {'getopnum': getopnum}
+ dic = {'opnum': opnum}
return type(cls_name, bases, dic)
setup(__name__ == '__main__') # print out the table when run directly
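The `resoperation.py` change replaces a per-class generated `getopnum` closure with a plain class attribute read by one shared method. Sketched in ordinary Python (the opcode numbers below are made up):

```python
class ResOp(object):
    opnum = 0  # overridden in each generated subclass

    def getopnum(self):
        # Single shared implementation: just read the class attribute.
        return self.opnum

def make_op_class(name, opnum, base=ResOp):
    # Mirrors the diff: store opnum in the class dict instead of
    # synthesizing a fresh getopnum function for every opcode.
    return type('%s_OP' % name, (base,), {'opnum': opnum})

IntAdd = make_op_class('INT_ADD', 7)
IntMul = make_op_class('INT_MUL', 8)
print(IntAdd().getopnum())  # 7
print(IntMul().getopnum())  # 8
```

This is also why the test diff changes `getopnum.im_func(None)` to `getopnum.im_func(cls)`: the shared method now actually dereferences its argument to find `opnum`.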
diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py
--- a/pypy/jit/metainterp/test/test_resoperation.py
+++ b/pypy/jit/metainterp/test/test_resoperation.py
@@ -30,17 +30,17 @@
cls = rop.opclasses[rop.rop.INT_ADD]
assert issubclass(cls, rop.PlainResOp)
assert issubclass(cls, rop.BinaryOp)
- assert cls.getopnum.im_func(None) == rop.rop.INT_ADD
+ assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD
cls = rop.opclasses[rop.rop.CALL]
assert issubclass(cls, rop.ResOpWithDescr)
assert issubclass(cls, rop.N_aryOp)
- assert cls.getopnum.im_func(None) == rop.rop.CALL
+ assert cls.getopnum.im_func(cls) == rop.rop.CALL
cls = rop.opclasses[rop.rop.GUARD_TRUE]
assert issubclass(cls, rop.GuardResOp)
assert issubclass(cls, rop.UnaryOp)
- assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE
+ assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE
def test_mixins_in_common_base():
INT_ADD = rop.opclasses[rop.rop.INT_ADD]
diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py
--- a/pypy/module/_lsprof/interp_lsprof.py
+++ b/pypy/module/_lsprof/interp_lsprof.py
@@ -19,8 +19,9 @@
# cpu affinity settings
srcdir = py.path.local(pypydir).join('translator', 'c', 'src')
-eci = ExternalCompilationInfo(separate_module_files=
- [srcdir.join('profiling.c')])
+eci = ExternalCompilationInfo(
+ separate_module_files=[srcdir.join('profiling.c')],
+ export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling'])
c_setup_profiling = rffi.llexternal('pypy_setup_profiling',
[], lltype.Void,
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -48,6 +48,7 @@
'int_': 'interp_boxes.W_LongBox',
'inexact': 'interp_boxes.W_InexactBox',
'floating': 'interp_boxes.W_FloatingBox',
+ 'float_': 'interp_boxes.W_Float64Box',
'float32': 'interp_boxes.W_Float32Box',
'float64': 'interp_boxes.W_Float64Box',
}
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py
--- a/pypy/module/micronumpy/interp_boxes.py
+++ b/pypy/module/micronumpy/interp_boxes.py
@@ -78,6 +78,7 @@
descr_sub = _binop_impl("subtract")
descr_mul = _binop_impl("multiply")
descr_div = _binop_impl("divide")
+ descr_pow = _binop_impl("power")
descr_eq = _binop_impl("equal")
descr_ne = _binop_impl("not_equal")
descr_lt = _binop_impl("less")
@@ -103,7 +104,7 @@
_attrs_ = ()
class W_IntegerBox(W_NumberBox):
- pass
+ descr__new__, get_dtype = new_dtype_getter("long")
class W_SignedIntegerBox(W_IntegerBox):
pass
@@ -170,6 +171,7 @@
__sub__ = interp2app(W_GenericBox.descr_sub),
__mul__ = interp2app(W_GenericBox.descr_mul),
__div__ = interp2app(W_GenericBox.descr_div),
+ __pow__ = interp2app(W_GenericBox.descr_pow),
__radd__ = interp2app(W_GenericBox.descr_radd),
__rsub__ = interp2app(W_GenericBox.descr_rsub),
@@ -198,6 +200,7 @@
)
W_IntegerBox.typedef = TypeDef("integer", W_NumberBox.typedef,
+ __new__ = interp2app(W_IntegerBox.descr__new__.im_func),
__module__ = "numpypy",
)
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -568,6 +568,18 @@
def descr_mean(self, space):
return space.div(self.descr_sum(space), space.wrap(self.size))
+ def descr_var(self, space):
+ ''' var = mean( (values - mean(values))**2 ) '''
+ w_res = self.descr_sub(space, self.descr_mean(space))
+ assert isinstance(w_res, BaseArray)
+ w_res = w_res.descr_pow(space, space.wrap(2))
+ assert isinstance(w_res, BaseArray)
+ return w_res.descr_mean(space)
+
+ def descr_std(self, space):
+ ''' std(v) = sqrt(var(v)) '''
+ return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)] )
+
def descr_nonzero(self, space):
if self.size > 1:
raise OperationError(space.w_ValueError, space.wrap(
@@ -1209,6 +1221,8 @@
all = interp2app(BaseArray.descr_all),
any = interp2app(BaseArray.descr_any),
dot = interp2app(BaseArray.descr_dot),
+ var = interp2app(BaseArray.descr_var),
+ std = interp2app(BaseArray.descr_std),
copy = interp2app(BaseArray.descr_copy),
reshape = interp2app(BaseArray.descr_reshape),
diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py
--- a/pypy/module/micronumpy/test/test_dtypes.py
+++ b/pypy/module/micronumpy/test/test_dtypes.py
@@ -166,6 +166,15 @@
# You can't subclass dtype
raises(TypeError, type, "Foo", (dtype,), {})
+ def test_new(self):
+ import _numpypy as np
+ assert np.int_(4) == 4
+ assert np.float_(3.4) == 3.4
+
+ def test_pow(self):
+ from _numpypy import int_
+ assert int_(4) ** 2 == 16
+
class AppTestTypes(BaseNumpyAppTest):
def test_abstract_types(self):
import _numpypy as numpy
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -978,6 +978,20 @@
assert a[:, 0].tolist() == [17.1, 40.3]
assert a[0].tolist() == [17.1, 27.2]
+ def test_var(self):
+ from _numpypy import array
+ a = array(range(10))
+ assert a.var() == 8.25
+ a = array([5.0])
+ assert a.var() == 0.0
+
+ def test_std(self):
+ from _numpypy import array
+ a = array(range(10))
+ assert a.std() == 2.8722813232690143
+ a = array([5.0])
+ assert a.std() == 0.0
+
class AppTestMultiDim(BaseNumpyAppTest):
def test_init(self):
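The new `var` and `std` tested above follow the formulas from the earlier commit: `var = mean((values - mean(values)) ** 2)` and `std(v) = sqrt(var(v))`. A standalone sketch of the same arithmetic in plain Python (not the interp-level code), matching the population-variance convention the tests assert:

```python
import math

def var(values):
    # var = mean((values - mean(values)) ** 2)
    m = sum(values) / float(len(values))
    return sum((v - m) ** 2 for v in values) / float(len(values))

def std(values):
    # std(v) = sqrt(var(v))
    return math.sqrt(var(values))
```

For `range(10)` this gives 8.25 and about 2.8722813232690143, the values asserted in `test_var` and `test_std`.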
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -385,6 +385,18 @@
class JitHintError(Exception):
"""Inconsistency in the JIT hints."""
+PARAMETER_DOCS = {
+ 'threshold': 'number of times a loop has to run for it to become hot',
+ 'function_threshold': 'number of times a function must run for it to become traced from start',
+ 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge',
+ 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TRACE_TOO_LONG',
+ 'inlining': 'inline python functions or not (1/0)',
+ 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate',
+ 'retrace_limit': 'how many times we can try retracing before giving up',
+ 'max_retrace_guards': 'number of extra guards a retrace can cause',
+ 'enable_opts': 'optimizations to enabled or all, INTERNAL USE ONLY'
+ }
+
PARAMETERS = {'threshold': 1039, # just above 1024, prime
'function_threshold': 1619, # slightly more than one above, also prime
'trace_eagerness': 200,
diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c
--- a/pypy/translator/c/src/profiling.c
+++ b/pypy/translator/c/src/profiling.c
@@ -29,6 +29,35 @@
profiling_setup = 0;
}
}
+
+#elif defined(_WIN32)
+#include <windows.h>
+
+DWORD_PTR base_affinity_mask;
+int profiling_setup = 0;
+
+void pypy_setup_profiling() {
+ if (!profiling_setup) {
+ DWORD_PTR affinity_mask, system_affinity_mask;
+ GetProcessAffinityMask(GetCurrentProcess(),
+ &base_affinity_mask, &system_affinity_mask);
+ affinity_mask = 1;
+ /* Pick one cpu allowed by the system */
+ if (system_affinity_mask)
+ while ((affinity_mask & system_affinity_mask) == 0)
+ affinity_mask <<= 1;
+ SetProcessAffinityMask(GetCurrentProcess(), affinity_mask);
+ profiling_setup = 1;
+ }
+}
+
+void pypy_teardown_profiling() {
+ if (profiling_setup) {
+ SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask);
+ profiling_setup = 0;
+ }
+}
+
#else
void pypy_setup_profiling() { }
void pypy_teardown_profiling() { }
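The Win32 branch above pins the process to a single CPU by scanning the system affinity mask for the lowest permitted bit. The scan can be sketched in isolation like this (a pure-Python rendering of the C loop; the function name is illustrative):

```python
def pick_one_cpu(system_affinity_mask):
    # Pick one cpu allowed by the system: shift a single bit left
    # until it lands on a CPU the system mask permits.
    affinity_mask = 1
    if system_affinity_mask:
        while (affinity_mask & system_affinity_mask) == 0:
            affinity_mask <<= 1
    return affinity_mask
```

For example, with a system mask of `0b1100` (CPUs 2 and 3 allowed) the loop settles on `0b0100`; with a mask of zero the initial `1` is returned unchanged, matching the C code's guard.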
From noreply at buildbot.pypy.org Mon Jan 9 23:38:44 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Mon, 9 Jan 2012 23:38:44 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: typo fix
Message-ID: <20120109223844.C098682110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51186:66f1a9fb79c9
Date: 2012-01-09 16:38 -0600
http://bitbucket.org/pypy/pypy/changeset/66f1a9fb79c9/
Log: typo fix
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -63,7 +63,7 @@
cache.in_recursion = NonConstant(False)
def set_optimize_hook(space, w_hook):
- """ set_compile_hook(hook)
+ """ set_optimize_hook(hook)
Set a compiling hook that will be called each time a loop is optimized,
but before assembler compilation. This allows to add additional
From noreply at buildbot.pypy.org Mon Jan 9 23:39:44 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Mon, 9 Jan 2012 23:39:44 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: remove dead file
Message-ID: <20120109223944.6462482110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51187:6a26fcde567b
Date: 2012-01-09 16:39 -0600
http://bitbucket.org/pypy/pypy/changeset/6a26fcde567b/
Log: remove dead file
diff --git a/REVIEW.rst b/REVIEW.rst
deleted file mode 100644
--- a/REVIEW.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-REVIEW NOTES
-============
-
-* ``namespace=locals()``, can we please not use ``locals()``, even in tests? I find it super hard to read, and it's bad for the JIT.
-* Don't we already have a thing named portal (portal call maybe?) is the name confusing?
-* ``interp_reso.pyp:wrap_greenkey()`` should do something useful on non-pypyjit jds.
-* The ``WrappedOp`` constructor doesn't make much sense, it can only create an op with integer args?
-* Let's at least expose ``name`` on ``WrappedOp``.
-* DebugMergePoints don't appears to get their metadata.
-* Someone else should review the annotator magic.
-* Are entry_bridge's compiled seperately anymore? (``set_compile_hook`` docstring)
-
From noreply at buildbot.pypy.org Mon Jan 9 23:46:59 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 23:46:59 +0100 (CET)
Subject: [pypy-commit] pypy look-into-thread: don't look into those llops
Message-ID: <20120109224659.638E082110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: look-into-thread
Changeset: r51188:d2fe92d73a1f
Date: 2012-01-10 00:46 +0200
http://bitbucket.org/pypy/pypy/changeset/d2fe92d73a1f/
Log: don't look into those llops
diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py
--- a/pypy/module/thread/ll_thread.py
+++ b/pypy/module/thread/ll_thread.py
@@ -192,6 +192,7 @@
# Thread integration.
# These are six completely ad-hoc operations at the moment.
+@jit.dont_look_inside
def gc_thread_prepare():
"""To call just before thread.start_new_thread(). This
allocates a new shadow stack to be used by the future
@@ -202,6 +203,7 @@
if we_are_translated():
llop.gc_thread_prepare(lltype.Void)
+@jit.dont_look_inside
def gc_thread_run():
"""To call whenever the current thread (re-)acquired the GIL.
"""
@@ -209,12 +211,14 @@
llop.gc_thread_run(lltype.Void)
gc_thread_run._always_inline_ = True
+@jit.dont_look_inside
def gc_thread_start():
"""To call at the beginning of a new thread.
"""
if we_are_translated():
llop.gc_thread_start(lltype.Void)
+@jit.dont_look_inside
def gc_thread_die():
"""To call just before the final GIL release done by a dying
thread. After a thread_die(), no more gc operation should
@@ -224,6 +228,7 @@
llop.gc_thread_die(lltype.Void)
gc_thread_die._always_inline_ = True
+@jit.dont_look_inside
def gc_thread_before_fork():
"""To call just before fork(). Prepares for forking, after
which only the current thread will be alive.
@@ -233,6 +238,7 @@
else:
return llmemory.NULL
+@jit.dont_look_inside
def gc_thread_after_fork(result_of_fork, opaqueaddr):
"""To call just after fork().
"""
From noreply at buildbot.pypy.org Mon Jan 9 23:55:32 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Mon, 9 Jan 2012 23:55:32 +0100 (CET)
Subject: [pypy-commit] pypy default: stylistic cleanups
Message-ID: <20120109225532.0505D82110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch:
Changeset: r51189:409a8b279f54
Date: 2012-01-09 16:55 -0600
http://bitbucket.org/pypy/pypy/changeset/409a8b279f54/
Log: stylistic cleanups
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py
--- a/pypy/module/micronumpy/interp_boxes.py
+++ b/pypy/module/micronumpy/interp_boxes.py
@@ -104,7 +104,7 @@
_attrs_ = ()
class W_IntegerBox(W_NumberBox):
- descr__new__, get_dtype = new_dtype_getter("long")
+ pass
class W_SignedIntegerBox(W_IntegerBox):
pass
@@ -200,7 +200,6 @@
)
W_IntegerBox.typedef = TypeDef("integer", W_NumberBox.typedef,
- __new__ = interp2app(W_IntegerBox.descr__new__.im_func),
__module__ = "numpypy",
)
@@ -248,6 +247,7 @@
long_name = "int64"
W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,),
__module__ = "numpypy",
+ __new__ = interp2app(W_LongBox.descr__new__.im_func),
)
W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef,
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -564,7 +564,7 @@
return space.div(self.descr_sum(space), space.wrap(self.size))
def descr_var(self, space):
- ''' var = mean( (values - mean(values))**2 ) '''
+ # var = mean((values - mean(values)) ** 2)
w_res = self.descr_sub(space, self.descr_mean(space))
assert isinstance(w_res, BaseArray)
w_res = w_res.descr_pow(space, space.wrap(2))
@@ -572,8 +572,8 @@
return w_res.descr_mean(space)
def descr_std(self, space):
- ''' std(v) = sqrt(var(v)) '''
- return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)] )
+ # std(v) = sqrt(var(v))
+ return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)])
def descr_nonzero(self, space):
if self.size > 1:
From noreply at buildbot.pypy.org Tue Jan 10 00:05:48 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Tue, 10 Jan 2012 00:05:48 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: test for sum_promote fails
miserably, signature.dtype is not arr.dtype
Message-ID: <20120109230548.C411E82110@wyvern.cs.uni-duesseldorf.de>
Author: mattip
Branch: numpypy-axisops
Changeset: r51190:e00f14813b9e
Date: 2012-01-10 01:04 +0200
http://bitbucket.org/pypy/pypy/changeset/e00f14813b9e/
Log: test for sum_promote fails miserably, signature.dtype is not
arr.dtype
diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py
--- a/pypy/module/micronumpy/app_numpy.py
+++ b/pypy/module/micronumpy/app_numpy.py
@@ -19,12 +19,12 @@
a[i][i] = 1
return a
-def mean(a):
+def mean(a, axis=None):
if not hasattr(a, "mean"):
a = numpypy.array(a)
- return a.mean()
+ return a.mean(axis)
-def sum(a):
+def sum(a,axis=None):
'''sum(a, axis=None)
Sum of array elements over a given axis.
@@ -51,12 +51,12 @@
# TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements.
if not hasattr(a, "sum"):
a = numpypy.array(a)
- return a.sum()
+ return a.sum(axis)
-def min(a):
+def min(a, axis=None):
if not hasattr(a, "min"):
a = numpypy.array(a)
- return a.min()
+ return a.min(axis)
def max(a, axis=None):
if not hasattr(a, "max"):
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -724,10 +724,13 @@
assert d[1] == 12
def test_mean(self):
- from numpypy import array
+ from numpypy import array,mean
a = array(range(5))
assert a.mean() == 2.0
assert a[:4].mean() == 1.5
+ a = array(range(105)).reshape(3, 5, 7)
+ assert (mean(a, axis=0) == array(range(35, 70)).reshape(5, 7)).all()
+ assert (mean(a, 2) == array(range(0, 15)).reshape(3, 5) * 7 + 3).all()
def test_sum(self):
from numpypy import array, arange
From noreply at buildbot.pypy.org Tue Jan 10 10:47:05 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 10:47:05 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: add test to ensure that
arguments are passed correctly
Message-ID: <20120110094705.9B5F082110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51191:dd765153417e
Date: 2012-01-10 10:46 +0100
http://bitbucket.org/pypy/pypy/changeset/dd765153417e/
Log: add test to ensure that arguments are passed correctly
diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py
--- a/pypy/jit/backend/ppc/test/test_runner.py
+++ b/pypy/jit/backend/ppc/test/test_runner.py
@@ -1,5 +1,16 @@
from pypy.jit.backend.test.runner_test import LLtypeBackendTest
from pypy.jit.backend.ppc.runner import PPC_64_CPU
+from pypy.jit.tool.oparser import parse
+from pypy.jit.metainterp.history import (AbstractFailDescr,
+ AbstractDescr,
+ BasicFailDescr,
+ BoxInt, Box, BoxPtr,
+ JitCellToken, TargetToken,
+ ConstInt, ConstPtr,
+ BoxObj, Const,
+ ConstObj, BoxFloat, ConstFloat)
+from pypy.rpython.lltypesystem import lltype, llmemory, rstr, rffi, rclass
+from pypy.jit.codewriter.effectinfo import EffectInfo
import py
class FakeStats(object):
@@ -13,3 +24,34 @@
def test_cond_call_gc_wb_array_card_marking_fast_path(self):
py.test.skip("unsure what to do here")
+
+ def test_compile_loop_many_int_args(self):
+ for numargs in range(1, 16):
+ for _ in range(numargs):
+ self.cpu.reserve_some_free_fail_descr_number()
+ ops = []
+ arglist = "[%s]\n" % ", ".join(["i%d" % i for i in range(numargs)])
+ ops.append(arglist)
+
+ arg1 = 0
+ arg2 = 1
+ res = numargs
+ for i in range(numargs - 1):
+ op = "i%d = int_add(i%d, i%d)\n" % (res, arg1, arg2)
+ arg1 = res
+ res += 1
+ arg2 += 1
+ ops.append(op)
+ ops.append("finish(i%d)" % (res - 1))
+
+ ops = "".join(ops)
+ loop = parse(ops)
+ looptoken = JitCellToken()
+ done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr())
+ self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken)
+ ARGS = [lltype.Signed] * numargs
+ RES = lltype.Signed
+ args = [i+1 for i in range(numargs)]
+ res = self.cpu.execute_token(looptoken, *args)
+ assert self.cpu.get_latest_value_int(0) == sum(args)
+
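The test added above assembles its trace as a string: the input arguments `i0..i{n-1}`, then a chain of `int_add` operations that sums them all, then a `finish`. Extracted as a helper (hypothetical name, same string-building logic as the test):

```python
def build_chained_add_trace(numargs):
    # Inputs i0..i{numargs-1}, then a chain of int_add ops summing them.
    ops = ["[%s]\n" % ", ".join(["i%d" % i for i in range(numargs)])]
    arg1, arg2, res = 0, 1, numargs
    for _ in range(numargs - 1):
        ops.append("i%d = int_add(i%d, i%d)\n" % (res, arg1, arg2))
        arg1 = res
        res += 1
        arg2 += 1
    ops.append("finish(i%d)" % (res - 1))
    return "".join(ops)
```

For `numargs=3` this produces `[i0, i1, i2]`, `i3 = int_add(i0, i1)`, `i4 = int_add(i3, i2)`, `finish(i4)` — so executing the loop with arguments `1, 2, 3` should leave `sum(args) == 6` in the fail boxes, which is exactly what the assertion checks.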
From noreply at buildbot.pypy.org Tue Jan 10 11:24:41 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 11:24:41 +0100 (CET)
Subject: [pypy-commit] pypy default: argh, I'm stupid, use the correct API
Message-ID: <20120110102441.28E0782110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch:
Changeset: r51192:e6f379da6e7c
Date: 2012-01-10 12:24 +0200
http://bitbucket.org/pypy/pypy/changeset/e6f379da6e7c/
Log: argh, I'm stupid, use the correct API
diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py
--- a/pypy/jit/backend/x86/regalloc.py
+++ b/pypy/jit/backend/x86/regalloc.py
@@ -741,7 +741,7 @@
self.xrm.possibly_free_var(op.getarg(0))
def consider_cast_int_to_float(self, op):
- loc0 = self.rm.force_allocate_reg(op.getarg(0))
+ loc0 = self.rm.make_sure_var_in_reg(op.getarg(0))
loc1 = self.xrm.force_allocate_reg(op.result)
self.Perform(op, [loc0], loc1)
self.rm.possibly_free_var(op.getarg(0))
From noreply at buildbot.pypy.org Tue Jan 10 11:37:40 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 11:37:40 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: rename test,
start with 2 arguments
Message-ID: <20120110103740.7AA8182110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51193:768f640c18b7
Date: 2012-01-10 11:36 +0100
http://bitbucket.org/pypy/pypy/changeset/768f640c18b7/
Log: rename test, start with 2 arguments
diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py
--- a/pypy/jit/backend/ppc/test/test_runner.py
+++ b/pypy/jit/backend/ppc/test/test_runner.py
@@ -26,7 +26,7 @@
py.test.skip("unsure what to do here")
def test_compile_loop_many_int_args(self):
- for numargs in range(1, 16):
+ for numargs in range(2, 16):
for _ in range(numargs):
self.cpu.reserve_some_free_fail_descr_number()
ops = []
From noreply at buildbot.pypy.org Tue Jan 10 11:37:41 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 11:37:41 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: (bivab,
hager): fix offset to stack parameters
Message-ID: <20120110103741.B9B5882110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51194:d1b7f8e3b929
Date: 2012-01-10 11:37 +0100
http://bitbucket.org/pypy/pypy/changeset/d1b7f8e3b929/
Log: (bivab, hager): fix offset to stack parameters
diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py
--- a/pypy/jit/backend/ppc/ppcgen/regalloc.py
+++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py
@@ -161,7 +161,7 @@
arg_index = 0
count = 0
n_register_args = len(r.PARAM_REGS)
- cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD + 1
+ cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD
for box in inputargs:
assert isinstance(box, Box)
# handle inputargs in argument registers
From noreply at buildbot.pypy.org Tue Jan 10 11:41:48 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Tue, 10 Jan 2012 11:41:48 +0100 (CET)
Subject: [pypy-commit] pypy default: Fix the docstrings.
Message-ID: <20120110104148.B446082110@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch:
Changeset: r51195:799b4c3164db
Date: 2012-01-10 11:41 +0100
http://bitbucket.org/pypy/pypy/changeset/799b4c3164db/
Log: Fix the docstrings.
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -390,12 +390,12 @@
'threshold': 'number of times a loop has to run for it to become hot',
'function_threshold': 'number of times a function must run for it to become traced from start',
'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge',
- 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TRACE_TOO_LONG',
+ 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG',
'inlining': 'inline python functions or not (1/0)',
'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate',
'retrace_limit': 'how many times we can try retracing before giving up',
'max_retrace_guards': 'number of extra guards a retrace can cause',
- 'enable_opts': 'optimizations to enabled or all, INTERNAL USE ONLY'
+ 'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY'
}
PARAMETERS = {'threshold': 1039, # just above 1024, prime
From noreply at buildbot.pypy.org Tue Jan 10 11:49:40 2012
From: noreply at buildbot.pypy.org (timo_jbo)
Date: Tue, 10 Jan 2012 11:49:40 +0100 (CET)
Subject: [pypy-commit] pypy strbuf_by_default: turn on the strbuf (strjoin
v2) objspace optimisation by default
Message-ID: <20120110104940.1F84582110@wyvern.cs.uni-duesseldorf.de>
Author: Timo Paulssen
Branch: strbuf_by_default
Changeset: r51196:211606889b44
Date: 2012-01-10 11:47 +0100
http://bitbucket.org/pypy/pypy/changeset/211606889b44/
Log: turn on the strbuf (strjoin v2) objspace optimisation by default
diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py
--- a/pypy/config/pypyoption.py
+++ b/pypy/config/pypyoption.py
@@ -237,7 +237,7 @@
default=False),
BoolOption("withstrbuf", "use strings optimized for addition (ver 2)",
- default=False),
+ default=True),
BoolOption("withprebuiltchar",
"use prebuilt single-character string objects",
From noreply at buildbot.pypy.org Tue Jan 10 11:52:50 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 11:52:50 +0100 (CET)
Subject: [pypy-commit] pypy look-into-thread: don't look into a function
that does add_memory_pressure. We should fix it
Message-ID: <20120110105250.27CDD82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: look-into-thread
Changeset: r51197:a72a6f955660
Date: 2012-01-10 12:52 +0200
http://bitbucket.org/pypy/pypy/changeset/a72a6f955660/
Log: don't look into a function that does add_memory_pressure. We should
fix it one day
diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py
--- a/pypy/module/thread/ll_thread.py
+++ b/pypy/module/thread/ll_thread.py
@@ -156,6 +156,7 @@
null_ll_lock = lltype.nullptr(TLOCKP.TO)
+@jit.dont_look_inside
def allocate_ll_lock():
# track_allocation=False here; be careful to lltype.free() it. The
# reason it is set to False is that we get it from all app-level
From noreply at buildbot.pypy.org Tue Jan 10 12:19:11 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 12:19:11 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: (bivab,
hager): fix off-by-one bug in computation of offset to stack
locations
Message-ID: <20120110111911.86B9382110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51198:308dd2d5e89f
Date: 2012-01-10 12:18 +0100
http://bitbucket.org/pypy/pypy/changeset/308dd2d5e89f/
Log: (bivab, hager): fix off-by-one bug in computation of offset to stack
locations
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -709,14 +709,14 @@
# move immediate value to memory
elif loc.is_stack():
self.mc.alloc_scratch_reg()
- offset = loc.as_key() * WORD - WORD
+ offset = loc.as_key() * WORD
self.mc.load_imm(r.SCRATCH.value, value)
self.mc.store(r.SCRATCH.value, r.SPP.value, offset)
self.mc.free_scratch_reg()
return
assert 0, "not supported location"
elif prev_loc.is_stack():
- offset = prev_loc.as_key() * WORD - WORD
+ offset = prev_loc.as_key() * WORD
# move from memory to register
if loc.is_reg():
reg = loc.as_key()
@@ -724,7 +724,7 @@
return
# move in memory
elif loc.is_stack():
- target_offset = loc.as_key() * WORD - WORD
+ target_offset = loc.as_key() * WORD
self.mc.alloc_scratch_reg()
self.mc.load(r.SCRATCH.value, r.SPP.value, offset)
self.mc.store(r.SCRATCH.value, r.SPP.value, target_offset)
@@ -740,7 +740,7 @@
return
# move to memory
elif loc.is_stack():
- offset = loc.as_key() * WORD - WORD
+ offset = loc.as_key() * WORD
self.mc.store(reg, r.SPP.value, offset)
return
assert 0, "not supported location"
diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py
--- a/pypy/jit/backend/ppc/ppcgen/regalloc.py
+++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py
@@ -161,7 +161,7 @@
arg_index = 0
count = 0
n_register_args = len(r.PARAM_REGS)
- cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD
+ cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD + 1
for box in inputargs:
assert isinstance(box, Box)
# handle inputargs in argument registers
From noreply at buildbot.pypy.org Tue Jan 10 12:37:53 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 12:37:53 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: fix wrong initialisation of
StackLocation in regalloc_push/regalloc_pop
Message-ID: <20120110113753.6FA9782110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51199:e1dea1c15227
Date: 2012-01-10 12:37 +0100
http://bitbucket.org/pypy/pypy/changeset/e1dea1c15227/
Log: fix wrong initialisation of StackLocation in
regalloc_push/regalloc_pop
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -39,6 +39,7 @@
from pypy.rpython.annlowlevel import llhelper
from pypy.rlib.objectmodel import we_are_translated
from pypy.rpython.lltypesystem.lloperation import llop
+from pypy.jit.backend.ppc.ppcgen.locations import StackLocation
memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address,
rffi.SIZE_T], lltype.Void,
@@ -757,7 +758,7 @@
assert 0, "not implemented yet"
# XXX this code has to be verified
assert not self.stack_in_use
- target = StackLocation(self.ENCODING_AREA) # write to force index field
+ target = StackLocation(self.ENCODING_AREA // WORD) # write to ENCODING AREA
self.regalloc_mov(loc, target)
self.stack_in_use = True
elif loc.is_reg():
@@ -782,7 +783,7 @@
assert 0, "not implemented yet"
# XXX this code has to be verified
assert self.stack_in_use
- from_loc = StackLocation(self.ENCODING_AREA)
+ from_loc = StackLocation(self.ENCODING_AREA // WORD) # read from ENCODING AREA
self.regalloc_mov(from_loc, loc)
self.stack_in_use = False
elif loc.is_reg():
From noreply at buildbot.pypy.org Tue Jan 10 13:04:35 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 13:04:35 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: adjust
_build_propagate_exception_path to new interface
Message-ID: <20120110120435.072BD82110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51200:46750704d638
Date: 2012-01-10 13:03 +0100
http://bitbucket.org/pypy/pypy/changeset/46750704d638/
Log: adjust _build_propagate_exception_path to new interface
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -282,7 +282,8 @@
mc = PPCBuilder()
with Saved_Volatiles(mc):
- addr = self.cpu.get_on_leave_jitted_int(save_exception=True)
+ addr = self.cpu.get_on_leave_jitted_int(save_exception=True,
+ default_to_memoryerror=True)
mc.call(addr)
mc.load_imm(r.RES, self.cpu.propagate_exception_v)
From noreply at buildbot.pypy.org Tue Jan 10 13:59:42 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 13:59:42 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: add a draft
Message-ID: <20120110125942.4A43A82110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r4005:07cb0fa35b28
Date: 2012-01-10 14:56 +0200
http://bitbucket.org/pypy/extradoc/changeset/07cb0fa35b28/
Log: add a draft
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
new file mode 100644
--- /dev/null
+++ b/blog/draft/laplace.rst
@@ -0,0 +1,165 @@
+NumPyPy progress report - running benchmarks
+============================================
+
+Hello.
+
+I'm pleased to report on the progress we have made on NumPyPy, both in terms
+of completeness and performance. This post mostly deals with the performance
+side and how far we have got so far. **Word of warning:** the performance
+work on the numpy side is not done - we're maybe half way through, and there
+are both trivial and not-so-trivial optimizations still to be performed. In
+fact, we haven't even started to implement some optimizations, such as
+vectorization.
+
+Benchmark
+---------
+
+We chose a laplace transform, which is also used on scipy's
+`PerformancePython`_ wiki. The problem with the implementation on the
+performance python wiki page is that two different algorithms are used, which
+have not only different convergence but also very different performance
+characteristics on modern machines. Instead, we implemented our own versions
+in C and a set of various Python versions, using numpy or not. The full source
+is available in `fijal's hack`_ repo and the exact revision used is
+18502dbbcdb3.
+
+Let me describe the various algorithms used. Note that some of them contain
+pypy-specific hacks to work around current limitations in the implementation.
+Those hacks will go away eventually and the performance should improve and
+not decrease. It's worth noting that while numerically the algorithms used
+are identical, the exact data layout is not and differs between methods.
+
+**Note on all the benchmarks:** they're all run once, but the performance
+is very stable across runs.
+
+Starting from the C version: it implements a dead-simple laplace transform
+using two loops and double-reference memory (an array of ``int**``). The
+double reference does not matter for performance, and two algorithms are
+implemented, in ``inline-laplace.c`` and ``laplace.c``. They're both compiled
+with ``gcc 4.4.5`` and ``-O3``.
+
+A straightforward version of those in python
+is implemented in ``laplace.py`` using respectively ``inline_slow_time_step``
+and ``slow_time_step``. ``slow_2_time_step`` does the same thing, except
+it copies arrays in-place instead of creating new copies.
+
++-----------------------+----------------------+--------------------+
+| bench | number of iterations | time per iteration |
++-----------------------+----------------------+--------------------+
+| laplace C | 219 | 6.3ms |
++-----------------------+----------------------+--------------------+
+| inline-laplace C | 278 | 20ms |
++-----------------------+----------------------+--------------------+
+| slow python | 219 | 17ms |
++-----------------------+----------------------+--------------------+
+| slow 2 python | 219 | 14ms |
++-----------------------+----------------------+--------------------+
+| inline_slow python | 278 | 23.7ms |
++-----------------------+----------------------+--------------------+
+
+The important thing to notice here is that the data dependency in the inline
+version causes a huge slowdown. Note that this is already **not too bad**:
+yes, the braindead python version of the same algorithm takes longer, and pypy
+is not able to use as much info about the data being independent, but it is
+within the same ballpark - **15% - 170%** slower than C - and it definitely
+matters more which algorithm you choose than which language. For comparison,
+the slow versions take about **5.75s per iteration** each on CPython 2.6,
+so, estimating, they're about **200x** slower than the PyPy equivalent.
+I didn't measure a full run though :)
+
+The next step is to use numpy expressions. The first problem we run into is
+that computing the error walks the entire array again. This is fairly
+inefficient in terms of cache access, so I took the liberty of computing the
+error only every 15 steps. This rounds convergence up to the nearest 15
+iterations, but speeds things up anyway. ``numeric_time_step`` takes the most
+braindead approach, replacing the array with itself, like this::
+
+ u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 +
+ (u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv
+
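The every-15-steps convergence check mentioned above amounts to a driver loop
like this (a hedged sketch; ``time_step`` and ``compute_error`` stand in for
whichever variant is being benchmarked and are not names from the benchmark
itself):

```python
def solve(time_step, compute_error, eps, check_every=15):
    """Iterate until converged, paying for the cache-unfriendly error
    computation only once every `check_every` steps.  The reported
    iteration count is therefore rounded up to a multiple of the
    checking interval."""
    iterations = 0
    while True:
        for _ in range(check_every):
            time_step()
            iterations += 1
        if compute_error() < eps:
            return iterations
```

This is why the numeric methods below report 226 steps rather than exactly
219: the loop only notices convergence at the next error check.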
+We need three arrays here: one for an intermediate (PyPy does not
+automatically create intermediates for expressions), one for a copy used to
+compute the error, and one for the result. This works a bit by chance, since
+numpy's ``+`` and ``*`` create an intermediate and PyPy simulates that
+behavior when necessary.
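
To see where those intermediates come from, here is what a chained numpy
expression does step by step (a sketch of plain numpy semantics, not of
PyPy's internals; the grid and constants are made up for illustration):

```python
import numpy as np

u = np.arange(25, dtype=float).reshape(5, 5)
v = u.copy()
dx2 = dy2 = 1.0
dnr_inv = 0.25

# Each `+` and `*` on the right-hand side materializes a temporary array:
t1 = u[0:-2, 1:-1] + u[2:, 1:-1]        # temporary
t2 = u[1:-1, 0:-2] + u[1:-1, 2:]        # another temporary
rhs = (t1 * dy2 + t2 * dx2) * dnr_inv   # two more temporaries
u[1:-1, 1:-1] = rhs

# Because the right-hand side is fully evaluated before the assignment,
# the chained one-liner that reads and writes the same array is safe:
v[1:-1, 1:-1] = ((v[0:-2, 1:-1] + v[2:, 1:-1]) * dy2 +
                 (v[1:-1, 0:-2] + v[1:-1, 2:]) * dx2) * dnr_inv
```

The fully materialized temporaries are exactly what makes "replacing the
array with itself" work, and exactly what the later variants try to avoid
allocating.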
+
+``numeric_2_time_step`` works pretty much the same::
+
+ src = self.u
+ self.u = src.copy()
+ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
+ (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
+
+except the copy is now explicit rather than implicit.
+
+``numeric_3_time_step`` does the same thing, but notices that you don't have
+to copy the entire array: it's enough to copy the border pieces and fill the
+rest with zeros::
+
+ src = self.u
+ self.u = numpy.zeros((self.nx, self.ny), 'd')
+ self.u[0] = src[0]
+ self.u[-1] = src[-1]
+ self.u[:, 0] = src[:, 0]
+ self.u[:, -1] = src[:, -1]
+ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
+ (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
+
+``numeric_4_time_step`` is the one that most closely resembles the C version.
+Instead of copying the array each step, it notices that you can alternate
+between two arrays, which is exactly what the C version does. Note the
+``remove_invalidates`` call, a PyPy-specific hack that we hope to remove in
+the near future; in short, it promises "I don't have any unbuilt
+intermediates that depend on the value of the argument", which means you
+don't have to compute expressions you're not actually using::
+
+ remove_invalidates(self.old_u)
+ remove_invalidates(self.u)
+ self.old_u[:,:] = self.u
+ src = self.old_u
+ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
+ (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
+
+This one is the most equivalent to the C version.
+
+``numeric_5_time_step`` does the same thing, but notices that you don't have
+to copy the entire array: it's enough to copy just the edges. This is an
+optimization that was not done in the C version::
+
+ remove_invalidates(self.old_u)
+ remove_invalidates(self.u)
+ src = self.u
+ self.old_u, self.u = self.u, self.old_u
+ self.u[0] = src[0]
+ self.u[-1] = src[-1]
+ self.u[:, 0] = src[:, 0]
+ self.u[:, -1] = src[:, -1]
+ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
+ (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
+
+Let's look at the table of runs. As above: ``gcc 4.4.5`` with ``-O3``, PyPy
+nightly 7bb8b38d8563, 64-bit platform. All of the numeric methods run 226
+steps each, slightly more than the 219 above, since convergence is rounded up
+to the next error check. Comparison of PyPy and CPython:
+
++-----------------------+-------------+----------------+
+| benchmark             | PyPy        | CPython        |
++-----------------------+-------------+----------------+
+| numeric               | 21ms        | 35ms           |
++-----------------------+-------------+----------------+
+| numeric 2             | 14ms        | 37ms           |
++-----------------------+-------------+----------------+
+| numeric 3             | 13ms        | 29ms           |
++-----------------------+-------------+----------------+
+| numeric 4             | 11ms        | 31ms           |
++-----------------------+-------------+----------------+
+| numeric 5             | 9.3ms       | 21ms           |
++-----------------------+-------------+----------------+
+
+So, I can say that these preliminary results are pretty good. They're not as
+fast as the C version, but we're already much faster than CPython - almost
+always more than 2x on this relatively real-world example. This is not the
+end, though. As work continues, we hope to exploit the much richer high-level
+information we have about the operations to eventually outperform C,
+hopefully in 2012. Stay tuned.
+
+Cheers,
+fijal
+
+.. _`PerformancePython`: http://www.scipy.org/PerformancePython
From noreply at buildbot.pypy.org Tue Jan 10 13:59:44 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 13:59:44 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: merge
Message-ID: <20120110125944.2983082110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r4006:642dcd49d458
Date: 2012-01-10 14:59 +0200
http://bitbucket.org/pypy/extradoc/changeset/642dcd49d458/
Log: merge
diff --git a/blog/draft/pycon-2012-teaser.rst b/blog/draft/pycon-2012-teaser.rst
--- a/blog/draft/pycon-2012-teaser.rst
+++ b/blog/draft/pycon-2012-teaser.rst
@@ -13,7 +13,7 @@
perform much better. In this tutorial we'll give you insights on how to push
PyPy to it's limits. We'll focus on understanding the performance
characteristics of PyPy, and learning the analysis tools in order to maximize
- your applications performance.
+ your applications performance. *This is the tutorial.*
* **Why PyPy by example**, by Maciej Fijalkowski, Alex Gaynor and Armin Rigo:
One of the goals of PyPy is to make existing Python code faster, however an
diff --git a/planning/jit.txt b/planning/jit.txt
--- a/planning/jit.txt
+++ b/planning/jit.txt
@@ -86,8 +86,6 @@
- ((turn max(x, y)/min(x, y) into MAXSD, MINSD instructions when x and y are
floats.)) (a mess, MAXSD/MINSD have different semantics WRT nan)
-- list.pop() (with no arguments) calls into delitem, rather than recognizing that
- no items need to be moved
BACKEND TASKS
-------------
diff --git a/sprintinfo/leysin-winter-2011/announcement.txt b/sprintinfo/leysin-winter-2012/announcement.txt
copy from sprintinfo/leysin-winter-2011/announcement.txt
copy to sprintinfo/leysin-winter-2012/announcement.txt
--- a/sprintinfo/leysin-winter-2011/announcement.txt
+++ b/sprintinfo/leysin-winter-2012/announcement.txt
@@ -1,30 +1,23 @@
=====================================================================
- PyPy Leysin Winter Sprint (16-22nd January 2011)
+ PyPy Leysin Winter Sprint (15-22nd January 2012)
=====================================================================
The next PyPy sprint will be in Leysin, Switzerland, for the
-seventh time. This is a fully public sprint: newcomers and topics
+eighth time. This is a fully public sprint: newcomers and topics
other than those proposed below are welcome.
------------------------------
Goals and topics of the sprint
------------------------------
-* Now that we have released 1.4, and plan to release 1.4.1 soon
- (possibly before the sprint), the sprint itself is going to be
- mainly working on fixing issues reported by various users. Of
- course this does not prevent people from showing up with a more
- precise interest in mind. If there are newcomers, we will gladly
- give introduction talks.
+* Py3k: work towards supporting Python 3 in PyPy
-* We will also work on polishing and merging the long-standing
- branches that are around, which could eventually lead to the
- next PyPy release. These branches are notably:
+* NumPyPy: work towards supporting the numpy module in PyPy
- - fast-forward (Python 2.7 support, by Benjamin, Amaury, and others)
- - jit-unroll-loops (improve JITting of smaller loops, by Hakan)
- - arm-backend (a JIT backend for ARM, by David)
- - jitypes2 (fast ctypes calls with the JIT, by Antonio).
+* JIT backends: integrate tests for ARM; look at the PowerPC 64;
+ maybe try again to write an LLVM- or GCC-based one
+
+* STM and STM-related topics; or the Concurrent Mark-n-Sweep GC
* And as usual, the main side goal is to have fun in winter sports :-)
We can take a day off for ski.
@@ -33,8 +26,9 @@
Exact times
-----------
-The work days should be 16-22 January 2011. People may arrive on
-the 15th already and/or leave on the 23rd.
+The work days should be 15-21 January 2012 (Sunday-Saturday). The
+official plans are for people to arrive on the 14th or the 15th, and to
+leave on the 22nd.
-----------------------
Location & Accomodation
@@ -56,13 +50,14 @@
expensive) and maybe the possibility to get a single room if you really want
to.
-Please register by svn:
+Please register by Mercurial::
- http://codespeak.net/svn/pypy/extradoc/sprintinfo/leysin-winter-2011/people.txt
+ https://bitbucket.org/pypy/extradoc/
+ https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/leysin-winter-2012
-or on the pypy-sprint mailing list if you do not yet have check-in rights:
+or on the pypy-dev mailing list if you do not yet have check-in rights:
- http://codespeak.net/mailman/listinfo/pypy-sprint
+ http://mail.python.org/mailman/listinfo/pypy-dev
You need a Swiss-to-(insert country here) power adapter. There will be
some Swiss-to-EU adapters around -- bring a EU-format power strip if you
diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt
new file mode 100644
--- /dev/null
+++ b/sprintinfo/leysin-winter-2012/people.txt
@@ -0,0 +1,60 @@
+
+People coming to the Leysin sprint Winter 2012
+==================================================
+
+People who have a ``?`` in their arrive/depart or accomodation
+column are known to be coming but there are no details
+available yet from them.
+
+
+==================== ============== =======================
+ Name Arrive/Depart Accomodation
+==================== ============== =======================
+Armin Rigo private
+David Schneider 17/22 ermina
+Antonio Cuni 16/22 ermina, might arrive on the 15th
+Romain Guillebert 15/22 ermina
+==================== ============== =======================
+
+
+People on the following list were present at previous sprints:
+
+==================== ============== =====================
+ Name Arrive/Depart Accomodation
+==================== ============== =====================
+Antonio Cuni ? ?
+Michael Foord ? ?
+Maciej Fijalkowski ? ?
+David Schneider ? ?
+Jacob Hallen ? ?
+Laura Creighton ? ?
+Hakan Ardo ? ?
+Carl Friedrich Bolz ? ?
+Samuele Pedroni ? ?
+Anders Hammarquist ? ?
+Christian Tismer ? ?
+Niko Matsakis ? ?
+Toby Watson ? ?
+Paul deGrandis ? ?
+Michael Hudson ? ?
+Anders Lehmann ? ?
+Niklaus Haldimann ? ?
+Lene Wagner ? ?
+Amaury Forgeot d'Arc ? ?
+Valentino Volonghi ? ?
+Boris Feigin ? ?
+Andrew Thompson ? ?
+Bert Freudenberg ? ?
+Beatrice Duering ? ?
+Richard Emslie ? ?
+Johan Hahn ? ?
+Stephan Diehl ? ?
+Alexander Schremmer ? ?
+Anders Chrigstroem ? ?
+Eric van Riet Paap ? ?
+Holger Krekel ? ?
+Guido Wesdorp ? ?
+Leonardo Santagada ? ?
+Alexandre Fayolle ? ?
+Sylvain Thénault ? ?
+==================== ============== =====================
diff --git a/talk/dagstuhl2012/figures/all_numbers.png b/talk/dagstuhl2012/figures/all_numbers.png
new file mode 100644
index 0000000000000000000000000000000000000000..9076ac193fc9ba1954e24e2ae372ec7e1e1f44e6
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/figures/metatrace01.pdf b/talk/dagstuhl2012/figures/metatrace01.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0b7181b5a476093c16ff1233f37535378ef7bf8a
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/figures/telco.png b/talk/dagstuhl2012/figures/telco.png
new file mode 100644
index 0000000000000000000000000000000000000000..56033389dab8bfe8211ffd4de5bfcb23cdc94b0f
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/figures/trace-levels-metatracing.svg b/talk/dagstuhl2012/figures/trace-levels-metatracing.svg
new file mode 100644
--- /dev/null
+++ b/talk/dagstuhl2012/figures/trace-levels-metatracing.svg
@@ -0,0 +1,833 @@
+
+
+
+
diff --git a/talk/dagstuhl2012/figures/trace-levels-tracing.svg b/talk/dagstuhl2012/figures/trace-levels-tracing.svg
new file mode 100644
--- /dev/null
+++ b/talk/dagstuhl2012/figures/trace-levels-tracing.svg
@@ -0,0 +1,991 @@
+
+
+
+
diff --git a/talk/dagstuhl2012/figures/trace01.pdf b/talk/dagstuhl2012/figures/trace01.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..252b5089e72d3626e636cd02397204a464c7ca22
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/figures/trace02.pdf b/talk/dagstuhl2012/figures/trace02.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ece12fe0c3f96856afea26c49d92ade630db9328
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/figures/trace03.pdf b/talk/dagstuhl2012/figures/trace03.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..04b38b8996eb2c297214c017bbe1cce1f8f64bdb
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/figures/trace04.pdf b/talk/dagstuhl2012/figures/trace04.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..472b798aeae005652fc0d749ed6571117c5819d9
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/figures/trace05.pdf b/talk/dagstuhl2012/figures/trace05.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..977e3bbda8d4d349f27f06fcdaa3bd100e95e1a7
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/meta-tracing-pypy.pdf b/talk/dagstuhl2012/meta-tracing-pypy.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..910ebeedd1a286e8104d7f59848d1bfa80eed050
GIT binary patch
[cut]
diff --git a/talk/dagstuhl2012/talk.tex b/talk/dagstuhl2012/talk.tex
new file mode 100644
--- /dev/null
+++ b/talk/dagstuhl2012/talk.tex
@@ -0,0 +1,417 @@
+\documentclass[utf8x]{beamer}
+
+% This file is a solution template for:
+
+% - Talk at a conference/colloquium.
+% - Talk length is about 20min.
+% - Style is ornate.
+
+\mode<presentation>
+{
+ \usetheme{Warsaw}
+ % or ...
+
+ %\setbeamercovered{transparent}
+ % or whatever (possibly just delete it)
+}
+
+
+\usepackage[english]{babel}
+\usepackage{listings}
+\usepackage{fancyvrb}
+\usepackage{ulem}
+\usepackage{color}
+\usepackage{alltt}
+\usepackage{hyperref}
+
+\usepackage[utf8x]{inputenc}
+
+
+\newcommand\redsout[1]{{\color{red}\sout{\hbox{\color{black}{#1}}}}}
+\newcommand{\noop}{}
+
+% or whatever
+
+% Or whatever. Note that the encoding and the font should match. If T1
+% does not look nice, try deleting the line with the fontenc.
+
+
+\title{Meta-Tracing in the PyPy Project}
+
+\author[Carl Friedrich Bolz et al.]{\emph{Carl Friedrich Bolz}\inst{1} \and Antonio Cuni\inst{1} \and Maciej Fijałkowski\inst{2} \and Michael Leuschel\inst{1} \and Samuele Pedroni\inst{3} \and Armin Rigo\inst{1} \and many~more}
+% - Give the names in the same order as the appear in the paper.
+% - Use the \inst{?} command only if the authors have different
+% affiliation.
+
+\institute[Heinrich-Heine-Universität Düsseldorf]
+{$^1$Heinrich-Heine-Universität Düsseldorf, STUPS Group, Germany \and
+
+ $^2$merlinux GmbH, Hildesheim, Germany \and
+
+ $^3$Canonical
+}
+
+\date{Foundations of Scripting Languages, Dagstuhl, 5th January 2012}
+% - Either use conference name or its abbreviation.
+% - Not really informative to the audience, more for people (including
+% yourself) who are reading the slides online
+
+
+% If you have a file called "university-logo-filename.xxx", where xxx
+% is a graphic format that can be processed by latex or pdflatex,
+% resp., then you can add a logo as follows:
+
+
+
+
+% Delete this, if you do not want the table of contents to pop up at
+% the beginning of each subsection:
+%\AtBeginSubsection[]
+%{
+% \begin{frame}
+% \frametitle{Outline}
+% \tableofcontents[currentsection,currentsubsection]
+% \end{frame}
+%}
+
+
+% If you wish to uncover everything in a step-wise fashion, uncomment
+% the following command:
+
+%\beamerdefaultoverlayspecification{<+->}
+
+
+\begin{document}
+
+\begin{frame}
+ \titlepage
+\end{frame}
+
+\begin{frame}
+ \frametitle{Good JIT Compilers for Scripting Languages are Hard}
+ \begin{itemize}
+ \item recent languages like Python, Ruby, JS, PHP have complex core semantics
+ \item many corner cases, even hard to interpret correctly
+ \item particularly in contexts where you have limited resources (like
+ academic, Open Source)
+ \end{itemize}
+ \pause
+ \begin{block}{Problems}
+ \begin{enumerate}
+ \item implement all corner-cases of semantics correctly
+ \item ... and the common cases efficiently
+ \item while maintaining reasonable simplicity in the implementation
+ \end{enumerate}
+ \end{block}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Example: Attribute Reads in Python}
+ What happens when an attribute \texttt{x.m} is read? (simplified)
+ \pause
+ \begin{itemize}
+ \item check for \texttt{x.\_\_getattribute\_\_}, if there, call it
+ \pause
+ \item look for the attribute in the object's dictionary, if it's there, return it
+ \pause
+ \item walk up the MRO and look in each class' dictionary for the attribute
+ \pause
+ \item if the attribute is found, call its \texttt{\_\_get\_\_} attribute and return the result
+ \pause
+ \item if the attribute is not found, look for \texttt{x.\_\_getattr\_\_}, if there, call it
+ \pause
+ \item raise an \texttt{AttributeError}
+ \end{itemize}
+\end{frame}
+
+\begin{frame}
+ \frametitle{An Interpreter}
+ \includegraphics[scale=0.5]{figures/trace01.pdf}
+\end{frame}
+
+\begin{frame}
+ \frametitle{A Tracing JIT}
+ \includegraphics[scale=0.5]{figures/trace02.pdf}
+\end{frame}
+
+\begin{frame}
+ \frametitle{A Tracing JIT}
+ \includegraphics[scale=0.5]{figures/trace03.pdf}
+\end{frame}
+
+\begin{frame}
+ \frametitle{A Tracing JIT}
+ \includegraphics[scale=0.5]{figures/trace04.pdf}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Tracing JITs}
+ Advantages:
+ \begin{itemize}
+ \item can be added to existing VM
+ \item interpreter does a lot of work
+ \item can fall back to interpreter for uncommon paths
+ \end{itemize}
+ \pause
+ \begin{block}{Problems}
+ \begin{itemize}
+ \item traces typically contain bytecodes
+ \item many scripting languages have bytecodes that contain complex logic
+ \item need to expand the bytecode in the trace into something more explicit
+ \item this duplicates the language semantics in the tracer/optimizer
+ \end{itemize}
+ \end{block}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Idea of Meta-Tracing}
+ \includegraphics[scale=0.5]{figures/trace05.pdf}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Meta-Tracing}
+ \includegraphics[scale=0.5]{figures/metatrace01.pdf}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Meta-Tracing JITs}
+ \begin{block}{Advantages:}
+ \begin{itemize}
+ \item semantics are always like that of the interpreter
+ \item trace fully contains language semantics
+ \item meta-tracers can be reused for various interpreters
+ \end{itemize}
+ \end{block}
+ \pause
+ a few meta-tracing systems have been built:
+ \begin{itemize}
+ \item Sullivan et al. describe a meta-tracer using the DynamoRIO system
+ \item Yermolovich et al. run a Lua implementation on top of a tracing JS implementation
+ \item SPUR is a tracing JIT for CLR bytecodes, which is used to speed up a JS implementation in C\#
+ \end{itemize}
+\end{frame}
+
+\begin{frame}
+ \frametitle{PyPy}
+ A general environment for implementing scripting languages
+ \pause
+ \begin{block}{Approach}
+ \begin{itemize}
+ \item write an interpreter for the language in RPython
+ \item compilable to an efficient C-based VM
+ \pause
+ \item (RPython is a restricted subset of Python)
+ \end{itemize}
+ \end{block}
+ \pause
+\end{frame}
+
+\begin{frame}
+ \frametitle{PyPy's Meta-Tracing JIT}
+ \begin{itemize}
+ \item PyPy contains a meta-tracing JIT for interpreters in RPython
+ \item needs a few source-code hints (or annotations) \emph{in the interpreter}
+ \item allows interpreter-author to express language specific type feedback
+ \item contains powerful general optimizations
+ \pause
+ \item general techniques to deal with reified frames
+ \end{itemize}
+\end{frame}
+
+
+
+\begin{frame}
+ \frametitle{Language Implementations Done with PyPy}
+ \begin{itemize}
+ \item Most complete language implemented: Python
+ \item regular expression matcher of Python standard library
+ \item A reasonably complete Prolog
+ \item Converge (previous talk)
+ \item lots of experiments (Squeak, Gameboy emulator, JS, start of a PHP, Haskell, ...)
+ \end{itemize}
+\end{frame}
+
+
+\begin{frame}
+ \frametitle{Some Benchmarks for Python}
+ \begin{itemize}
+ \item benchmarks done using PyPy's Python interpreter
+ \item about 30'000 lines of code
+ \end{itemize}
+\end{frame}
+
+\begin{frame}
+ \includegraphics[scale=0.3]{figures/all_numbers.png}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Telco Benchmark}
+ \includegraphics[scale=0.3]{figures/telco.png}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Conclusion}
+ \begin{itemize}
+ \item writing good JITs for recent scripting languages is too hard!
+ \item only reasonable if the language is exceptionally simple
+ \item or if somebody has a lot of money
+ \item PyPy is one point in a large design space of meta-solutions
+ \item uses tracing on the level of the interpreter (meta-tracing) to get speed
+ \pause
+ \item \textbf{In a way, the exact approach is not too important: let's write more meta-tools!}
+ \end{itemize}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Thank you! Questions?}
+ \begin{itemize}
+ \item writing good JITs for recent scripting languages is too hard!
+ \item only reasonable if the language is exceptionally simple
+ \item or if somebody has a lot of money
+ \item PyPy is one point in a large design space of meta-solutions
+ \item uses tracing on the level of the interpreter (meta-tracing) to get speed
+ \item \textbf{In a way, the exact approach is not too important: let's write more meta-tools!}
+ \end{itemize}
+\end{frame}
+
+\begin{frame}
+ \frametitle{Possible Further Slides}
+ \hyperlink{necessary-hints}{\beamergotobutton{}} Getting Meta-Tracing to Work
+
+ \hyperlink{feedback}{\beamergotobutton{}} Language-Specific Runtime Feedback
+
+ \hyperlink{optimizations}{\beamergotobutton{}} Powerful General Optimizations
+
+ \hyperlink{virtualizables}{\beamergotobutton{}} Optimizing Reified Frames
+
+ \hyperlink{which-langs}{\beamergotobutton{}} Which Languages Can Meta-Tracing be Used With?
+
+ \hyperlink{OOVM}{\beamergotobutton{}} Using OO VMs as an implementation substrate
+
+ \hyperlink{PE}{\beamergotobutton{}} Comparison with Partial Evaluation
+
+\end{frame}
+
+\begin{frame}[label=necessary-hints]
+ \frametitle{Getting Meta-Tracing to Work}
+ \begin{itemize}
+ \item Interpreter author needs to add some hints to the interpreter
+ \item one hint to identify the bytecode dispatch loop
+ \item one hint to identify the jump bytecode
+ \item with these in place, meta-tracing works
+ \item but produces non-optimal code
+ \end{itemize}
+\end{frame}
+
+
+\begin{frame}[label=feedback]
+ \frametitle{Language-Specific Runtime Feedback}
+ Problems of Naive Meta-Tracing:
+ \begin{itemize}
+ \item user-level types are normal instances on the implementation level
+ \item thus no runtime feedback of user-level types
+ \item tracer does not know about invariants in the interpreter
+ \end{itemize}
+ \pause
+ \begin{block}{Solution in PyPy}
+ \begin{itemize}
+ \item introduce more hints that the interpreter-author can use
+ \item hints are annotations in the interpreter
+ \item they give information to the meta-tracer
+ \pause
+ \item one to induce runtime feedback of arbitrary information (typically types)
+ \item the second one to influence constant folding
+ \end{itemize}
+ \end{block}
+\end{frame}
+
+
+\begin{frame}[label=optimizations]
+ \frametitle{Powerful General Optimizations}
+ \begin{itemize}
+ \item Very powerful general optimizations on traces
+ \pause
+ \begin{block}{Heap Optimizations}
+ \begin{itemize}
+ \item escape analysis/allocation removal
+ \item remove short-lived objects
+ \item gets rid of the overhead of boxing primitive types
+ \item also reduces overhead of constant heap accesses
+ \end{itemize}
+ \end{block}
+ \end{itemize}
+\end{frame}
+
+\begin{frame}[label=virtualizables]
+ \frametitle{Optimizing Reified Frames}
+ \begin{itemize}
+ \item Common problem in scripting languages
+ \item frames are reified in the language, i.e. can be accessed via reflection
+ \item used to implement the debugger in the language itself
+ \item or for more advanced use cases (backtracking in Smalltalk)
+ \item when using a JIT, quite expensive to keep them up-to-date
+ \pause
+ \begin{block}{Solution in PyPy}
+ \begin{itemize}
+ \item General mechanism for updating reified frames lazily
+ \item use deoptimization when frame objects are accessed by the program
+ \item interpreter just needs to mark the frame class
+ \end{itemize}
+ \end{block}
+ \end{itemize}
+\end{frame}
+
+
+\begin{frame}[label=which-langs]
+ \frametitle{Bonus: Which Languages Can Meta-Tracing be Used With?}
+ \begin{itemize}
+ \item To make meta-tracing useful, there needs to be some kind of runtime variability
+ \item that means it definitely works for all dynamically typed languages
+ \item ... but also for other languages with polymorphism that is not resolvable at compile time
+ \item most languages that have any kind of runtime work
+ \end{itemize}
+\end{frame}
+
+\begin{frame}[label=OOVM]
+ \frametitle{Bonus: Using OO VMs as an implementation substrate}
+ \begin{block}{Benefits}
+ \begin{itemize}
+ \item higher level of implementation
+ \item the VM supplies a GC and mostly a JIT
+ \item better interoperability than what the C level provides
+ \item \texttt{invokedynamic} should make it possible to get language-specific runtime feedback
+ \end{itemize}
+ \end{block}
+ \pause
+ \begin{block}{Problems}
+ \begin{itemize}
+ \item can be hard to map concepts of the scripting language to
+ the host OO VM
+ \item performance is often not improved, and can be very bad, because of this
+ semantic mismatch
+ \item getting good performance needs a huge amount of tweaking
+ \item tools not really prepared to deal with people that care about
+ the shape of the generated assembler
+ \end{itemize}
+ \end{block}
+ \pause
+\end{frame}
+
+\begin{frame}[label=PE]
+ \frametitle{Bonus: Comparison with Partial Evaluation}
+ \begin{itemize}
+ \pause
+ \item the only difference between meta-tracing and partial evaluation is that meta-tracing works
+ \pause
+ \item ... mostly kidding
+ \pause
+ \item very similar from the motivation and ideas
+ \item PE was never scaled up to perform well on large interpreters
+ \item classical PE mostly ahead of time
+ \item PE tried very carefully to select the right paths to inline and optimize
+ \item quite often this fails and inlines too much or too little
+ \item tracing is much more pragmatic: simply look what happens
+ \end{itemize}
+\end{frame}
+
+\end{document}
diff --git a/talk/icooolps2011/talk/talk.tex b/talk/icooolps2011/talk/talk.tex
--- a/talk/icooolps2011/talk/talk.tex
+++ b/talk/icooolps2011/talk/talk.tex
@@ -437,7 +437,7 @@
|{\color{gray}$index_1$ = Map.getindex($map_1$, "a")}|
|{\color{gray}guard($index_1$ != -1)}|
$storage_1$ = $inst_1$.storage
-$result_1$ = $storage_1$[$index_1$}]
+$result_1$ = $storage_1$[$index_1$]
# $inst_1$.getfield("b")
|{\color{gray}$map_2$ = $inst_1$.map|
diff --git a/talk/iwtc11/benchmarks/image/io.py b/talk/iwtc11/benchmarks/image/io.py
--- a/talk/iwtc11/benchmarks/image/io.py
+++ b/talk/iwtc11/benchmarks/image/io.py
@@ -1,4 +1,6 @@
import os, re, array
+from subprocess import Popen, PIPE, STDOUT
+
def mplayer(Image, fn='tv://', options=''):
f = os.popen('mplayer -really-quiet -noframedrop ' + options + ' '
@@ -19,18 +21,18 @@
def view(self, img):
assert img.typecode == 'B'
if not self.width:
- self.mplayer = os.popen('mplayer -really-quiet -noframedrop - ' +
- '2> /dev/null ', 'w')
- self.mplayer.write('YUV4MPEG2 W%d H%d F100:1 Ip A1:1\n' %
- (img.width, img.height))
+ w, h = img.width, img.height
+ self.mplayer = Popen(['mplayer', '-', '-benchmark',
+ '-demuxer', 'rawvideo',
+ '-rawvideo', 'w=%d:h=%d:format=y8' % (w, h),
+ '-really-quiet'],
+ stdin=PIPE, stdout=PIPE, stderr=PIPE)
+
self.width = img.width
self.height = img.height
- self.color_data = array.array('B', [127]) * (img.width * img.height / 2)
assert self.width == img.width
assert self.height == img.height
- self.mplayer.write('FRAME\n')
- img.tofile(self.mplayer)
- self.color_data.tofile(self.mplayer)
+ img.tofile(self.mplayer.stdin)
default_viewer = MplayerViewer()
From noreply at buildbot.pypy.org Tue Jan 10 14:03:54 2012
From: noreply at buildbot.pypy.org (timo_jbo)
Date: Tue, 10 Jan 2012 14:03:54 +0100 (CET)
Subject: [pypy-commit] pypy strbuf_by_default: fix whitebox test that checks
for W_StringObject, rather than W_AbstractStringObject.
Message-ID: <20120110130354.8F39682110@wyvern.cs.uni-duesseldorf.de>
Author: Timo Paulssen
Branch: strbuf_by_default
Changeset: r51201:9014cd34145f
Date: 2012-01-10 13:56 +0100
http://bitbucket.org/pypy/pypy/changeset/9014cd34145f/
Log: fix whitebox test that checks for W_StringObject, rather than
W_AbstractStringObject.
diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py
--- a/pypy/objspace/std/test/test_stdobjspace.py
+++ b/pypy/objspace/std/test/test_stdobjspace.py
@@ -48,13 +48,13 @@
assert space.sliceindices(w_obj, w(3)) == (1,2,3)
def test_fastpath_isinstance(self):
- from pypy.objspace.std.stringobject import W_StringObject
+ from pypy.objspace.std.stringobject import W_AbstractStringObject, W_StringObject
from pypy.objspace.std.intobject import W_IntObject
from pypy.objspace.std.iterobject import W_AbstractSeqIterObject
from pypy.objspace.std.iterobject import W_SeqIterObject
space = self.space
- assert space._get_interplevel_cls(space.w_str) is W_StringObject
+ assert space._get_interplevel_cls(space.w_str) is W_AbstractStringObject
assert space._get_interplevel_cls(space.w_int) is W_IntObject
class X(W_StringObject):
def __init__(self):
From noreply at buildbot.pypy.org Tue Jan 10 14:11:31 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Tue, 10 Jan 2012 14:11:31 +0100 (CET)
Subject: [pypy-commit] pypy default: reintroduce changes done in
b6390a34f261 to push_arg_as_ffiptr in clibffi.py,
somehow lost in a731ffd298b4
Message-ID: <20120110131131.20B4482110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch:
Changeset: r51202:e2f82a5d9f5e
Date: 2012-01-10 14:09 +0100
http://bitbucket.org/pypy/pypy/changeset/e2f82a5d9f5e/
Log: reintroduce changes done in b6390a34f261 to push_arg_as_ffiptr in
clibffi.py, somehow lost in a731ffd298b4
diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py
--- a/pypy/rlib/clibffi.py
+++ b/pypy/rlib/clibffi.py
@@ -30,6 +30,9 @@
_MAC_OS = platform.name == "darwin"
_FREEBSD_7 = platform.name == "freebsd7"
+_LITTLE_ENDIAN = sys.byteorder == 'little'
+_BIG_ENDIAN = sys.byteorder == 'big'
+
if _WIN32:
from pypy.rlib import rwin32
@@ -360,12 +363,36 @@
cast_type_to_ffitype._annspecialcase_ = 'specialize:memo'
def push_arg_as_ffiptr(ffitp, arg, ll_buf):
- # this is for primitive types. For structures and arrays
- # would be something different (more dynamic)
+ # This is for primitive types. Note that the exact type of 'arg' may be
+ # different from the expected 'c_size'. To cope with that, we fall back
+ # to a byte-by-byte copy.
TP = lltype.typeOf(arg)
TP_P = lltype.Ptr(rffi.CArray(TP))
- buf = rffi.cast(TP_P, ll_buf)
- buf[0] = arg
+ TP_size = rffi.sizeof(TP)
+ c_size = intmask(ffitp.c_size)
+ # if both types have the same size, we can directly write the
+ # value to the buffer
+ if c_size == TP_size:
+ buf = rffi.cast(TP_P, ll_buf)
+ buf[0] = arg
+ else:
+ # needs byte-by-byte copying. Make sure 'arg' is an integer type.
+ # Note that this won't work for rffi.FLOAT/rffi.DOUBLE.
+ assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE
+ if TP_size <= rffi.sizeof(lltype.Signed):
+ arg = rffi.cast(lltype.Unsigned, arg)
+ else:
+ arg = rffi.cast(lltype.UnsignedLongLong, arg)
+ if _LITTLE_ENDIAN:
+ for i in range(c_size):
+ ll_buf[i] = chr(arg & 0xFF)
+ arg >>= 8
+ elif _BIG_ENDIAN:
+ for i in range(c_size-1, -1, -1):
+ ll_buf[i] = chr(arg & 0xFF)
+ arg >>= 8
+ else:
+ raise AssertionError
push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)'
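The byte-by-byte fallback added above is easy to check in plain Python; a minimal sketch (`push_int_bytes` is a hypothetical helper for illustration, not part of clibffi) of the same little-/big-endian write loop:

```python
import sys

def push_int_bytes(value, c_size, byteorder=sys.byteorder):
    # Write `value` into a buffer of c_size bytes: least-significant byte
    # first on little-endian, most-significant byte first on big-endian,
    # mirroring the fallback loop in push_arg_as_ffiptr.
    buf = bytearray(c_size)
    if byteorder == 'little':
        for i in range(c_size):              # lowest byte goes to buf[0]
            buf[i] = value & 0xFF
            value >>= 8
    elif byteorder == 'big':
        for i in range(c_size - 1, -1, -1):  # lowest byte goes to buf[-1]
            buf[i] = value & 0xFF
            value >>= 8
    else:
        raise AssertionError(byteorder)
    return bytes(buf)
```

This also shows why the argument is cast to an unsigned type first: the shifts then behave the same regardless of the original signedness.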
From noreply at buildbot.pypy.org Tue Jan 10 14:13:53 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Tue, 10 Jan 2012 14:13:53 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: merge default
Message-ID: <20120110131353.889EE82110@wyvern.cs.uni-duesseldorf.de>
Author: David Schneider
Branch: ppc-jit-backend
Changeset: r51203:d094b25960ad
Date: 2012-01-10 14:13 +0100
http://bitbucket.org/pypy/pypy/changeset/d094b25960ad/
Log: merge default
diff --git a/LICENSE b/LICENSE
--- a/LICENSE
+++ b/LICENSE
@@ -27,7 +27,7 @@
DEALINGS IN THE SOFTWARE.
-PyPy Copyright holders 2003-2011
+PyPy Copyright holders 2003-2012
-----------------------------------
Except when otherwise stated (look for LICENSE files or information at
diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/__init__.py
@@ -0,0 +1,2 @@
+from _numpypy import *
+from fromnumeric import *
diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/fromnumeric.py
@@ -0,0 +1,2400 @@
+######################################################################
+# This is a copy of numpy/core/fromnumeric.py modified for numpypy
+######################################################################
+# Each name in __all__ was a function in 'numeric' that is now
+# a method in 'numpy'.
+# When the corresponding method is added to numpypy BaseArray
+# each function should be added as a module function
+# at the applevel
+# This can be as simple as doing the following
+#
+# def func(a, ...):
+# if not hasattr(a, 'func'):
+# a = numpypy.array(a)
+# return a.func(...)
+#
+######################################################################
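The delegation pattern the comment above sketches can be made concrete; a minimal stand-alone sketch (`FakeArray` and `_delegate` are illustrative names standing in for numpypy's array type, not actual numpypy code):

```python
class FakeArray(object):
    # Minimal stand-in for numpypy's array, for demonstration only.
    def __init__(self, data):
        self.data = list(data)
    def argmax(self):
        return self.data.index(max(self.data))

def _delegate(name):
    # Build a module-level function that forwards to the method `name`,
    # converting plain sequences to arrays first -- the same shape as the
    # `func` sketch in the comment above.
    def func(a, *args, **kwargs):
        if not hasattr(a, name):
            a = FakeArray(a)          # stand-in for numpypy.array(a)
        return getattr(a, name)(*args, **kwargs)
    func.__name__ = name
    return func

argmax = _delegate('argmax')
```

Once the corresponding method exists on BaseArray, each `raise` placeholder below collapses into a three-line function of this shape.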
+
+import numpypy
+
+# Module containing non-deprecated functions borrowed from Numeric.
+__docformat__ = "restructuredtext en"
+
+# functions that are now methods
+__all__ = ['take', 'reshape', 'choose', 'repeat', 'put',
+ 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin',
+ 'searchsorted', 'alen',
+ 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape',
+ 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue',
+ 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim',
+ 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze',
+ 'amax', 'amin',
+ ]
+
+def take(a, indices, axis=None, out=None, mode='raise'):
+ """
+ Take elements from an array along an axis.
+
+ This function does the same thing as "fancy" indexing (indexing arrays
+ using arrays); however, it can be easier to use if you need elements
+ along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ The source array.
+ indices : array_like
+ The indices of the values to extract.
+ axis : int, optional
+ The axis over which to select values. By default, the flattened
+ input array is used.
+ out : ndarray, optional
+ If provided, the result will be placed in this array. It should
+ be of the appropriate shape and dtype.
+ mode : {'raise', 'wrap', 'clip'}, optional
+ Specifies how out-of-bounds indices will behave.
+
+ * 'raise' -- raise an error (default)
+ * 'wrap' -- wrap around
+ * 'clip' -- clip to the range
+
+ 'clip' mode means that all indices that are too large are replaced
+ by the index that addresses the last element along that axis. Note
+ that this disables indexing with negative numbers.
+
+ Returns
+ -------
+ subarray : ndarray
+ The returned array has the same type as `a`.
+
+ See Also
+ --------
+ ndarray.take : equivalent method
+
+ Examples
+ --------
+ >>> a = [4, 3, 5, 7, 6, 8]
+ >>> indices = [0, 1, 4]
+ >>> np.take(a, indices)
+ array([4, 3, 6])
+
+ In this example if `a` is an ndarray, "fancy" indexing can be used.
+
+ >>> a = np.array(a)
+ >>> a[indices]
+ array([4, 3, 6])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+# not deprecated --- copy if necessary, view otherwise
+def reshape(a, newshape, order='C'):
+ """
+ Gives a new shape to an array without changing its data.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be reshaped.
+ newshape : int or tuple of ints
+ The new shape should be compatible with the original shape. If
+ an integer, then the result will be a 1-D array of that length.
+ One shape dimension can be -1. In this case, the value is inferred
+ from the length of the array and remaining dimensions.
+ order : {'C', 'F', 'A'}, optional
+ Determines whether the array data should be viewed as in C
+ (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN
+ order should be preserved.
+
+ Returns
+ -------
+ reshaped_array : ndarray
+ This will be a new view object if possible; otherwise, it will
+ be a copy.
+
+
+ See Also
+ --------
+ ndarray.reshape : Equivalent method.
+
+ Notes
+ -----
+
+ It is not always possible to change the shape of an array without
+ copying the data. If you want an error to be raised if the data is copied,
+ you should assign the new shape to the shape attribute of the array::
+
+ >>> a = np.zeros((10, 2))
+ # A transpose makes the array non-contiguous
+ >>> b = a.T
+ # Taking a view makes it possible to modify the shape without modifying the
+ # initial object.
+ >>> c = b.view()
+ >>> c.shape = (20)
+ AttributeError: incompatible shape for a non-contiguous array
+
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> np.reshape(a, 6)
+ array([1, 2, 3, 4, 5, 6])
+ >>> np.reshape(a, 6, order='F')
+ array([1, 4, 2, 5, 3, 6])
+
+ >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2
+ array([[1, 2],
+ [3, 4],
+ [5, 6]])
+
+ """
+ if not hasattr(a, 'reshape'):
+ a = numpypy.array(a)
+ return a.reshape(newshape)
+
+
+def choose(a, choices, out=None, mode='raise'):
+ """
+ Construct an array from an index array and a set of arrays to choose from.
+
+ First of all, if confused or uncertain, definitely look at the Examples -
+ in its full generality, this function is less simple than it might
+ seem from the following code description (below ndi =
+ `numpy.lib.index_tricks`):
+
+ ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``.
+
+ But this omits some subtleties. Here is a fully general summary:
+
+ Given an "index" array (`a`) of integers and a sequence of `n` arrays
+ (`choices`), `a` and each choice array are first broadcast, as necessary,
+ to arrays of a common shape; calling these *Ba* and *Bchoices[i], i =
+ 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape``
+ for each `i`. Then, a new array with shape ``Ba.shape`` is created as
+ follows:
+
+ * if ``mode=raise`` (the default), then, first of all, each element of
+ `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that
+ `i` (in that range) is the value at the `(j0, j1, ..., jm)` position
+ in `Ba` - then the value at the same position in the new array is the
+ value in `Bchoices[i]` at that same position;
+
+ * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed)
+ integer; modular arithmetic is used to map integers outside the range
+ `[0, n-1]` back into that range; and then the new array is constructed
+ as above;
+
+ * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed)
+ integer; negative integers are mapped to 0; values greater than `n-1`
+ are mapped to `n-1`; and then the new array is constructed as above.
+
+ Parameters
+ ----------
+ a : int array
+ This array must contain integers in `[0, n-1]`, where `n` is the number
+ of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any
+ integers are permissible.
+ choices : sequence of arrays
+ Choice arrays. `a` and all of the choices must be broadcastable to the
+ same shape. If `choices` is itself an array (not recommended), then
+ its outermost dimension (i.e., the one corresponding to
+ ``choices.shape[0]``) is taken as defining the "sequence".
+ out : array, optional
+ If provided, the result will be inserted into this array. It should
+ be of the appropriate shape and dtype.
+ mode : {'raise' (default), 'wrap', 'clip'}, optional
+ Specifies how indices outside `[0, n-1]` will be treated:
+
+ * 'raise' : an exception is raised
+ * 'wrap' : value becomes value mod `n`
+ * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1
+
+ Returns
+ -------
+ merged_array : array
+ The merged result.
+
+ Raises
+ ------
+ ValueError: shape mismatch
+ If `a` and each choice array are not all broadcastable to the same
+ shape.
+
+ See Also
+ --------
+ ndarray.choose : equivalent method
+
+ Notes
+ -----
+ To reduce the chance of misinterpretation, even though the following
+ "abuse" is nominally supported, `choices` should neither be, nor be
+ thought of as, a single array, i.e., the outermost sequence-like container
+ should be either a list or a tuple.
+
+ Examples
+ --------
+
+ >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13],
+ ... [20, 21, 22, 23], [30, 31, 32, 33]]
+ >>> np.choose([2, 3, 1, 0], choices
+ ... # the first element of the result will be the first element of the
+ ... # third (2+1) "array" in choices, namely, 20; the second element
+ ... # will be the second element of the fourth (3+1) choice array, i.e.,
+ ... # 31, etc.
+ ... )
+ array([20, 31, 12, 3])
+ >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1)
+ array([20, 31, 12, 3])
+ >>> # because there are 4 choice arrays
+ >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4)
+ array([20, 1, 12, 3])
+ >>> # i.e., 0
+
+ A couple examples illustrating how choose broadcasts:
+
+ >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
+ >>> choices = [-10, 10]
+ >>> np.choose(a, choices)
+ array([[ 10, -10, 10],
+ [-10, 10, -10],
+ [ 10, -10, 10]])
+
+ >>> # With thanks to Anne Archibald
+ >>> a = np.array([0, 1]).reshape((2,1,1))
+ >>> c1 = np.array([1, 2, 3]).reshape((1,3,1))
+ >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5))
+ >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2
+ array([[[ 1, 1, 1, 1, 1],
+ [ 2, 2, 2, 2, 2],
+ [ 3, 3, 3, 3, 3]],
+ [[-1, -2, -3, -4, -5],
+ [-1, -2, -3, -4, -5],
+ [-1, -2, -3, -4, -5]]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
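The raise/wrap/clip index handling described at length in the docstring above can be sketched for the 1-D case in plain Python (illustration only, not the interp-level implementation):

```python
def choose_1d(index, choices, mode='raise'):
    # Pure-Python sketch of the 1-D case of np.choose: for each position,
    # pick the element at that position from the choice array selected by
    # the index, applying raise/wrap/clip semantics to out-of-range indices.
    n = len(choices)
    out = []
    for pos, i in enumerate(index):
        if mode == 'raise':
            if not 0 <= i < n:
                raise ValueError("invalid entry in choice array")
        elif mode == 'wrap':
            i = i % n                 # modular arithmetic maps into [0, n-1]
        elif mode == 'clip':
            i = min(max(i, 0), n - 1) # negatives -> 0, large values -> n-1
        out.append(choices[i][pos])
    return out
```

This reproduces the first docstring example: `choose_1d([2, 3, 1, 0], choices)` picks element 0 of choice array 2, element 1 of choice array 3, and so on.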
+def repeat(a, repeats, axis=None):
+ """
+ Repeat elements of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ repeats : {int, array of ints}
+ The number of repetitions for each element. `repeats` is broadcasted
+ to fit the shape of the given axis.
+ axis : int, optional
+ The axis along which to repeat values. By default, use the
+ flattened input array, and return a flat output array.
+
+ Returns
+ -------
+ repeated_array : ndarray
+ Output array which has the same shape as `a`, except along
+ the given axis.
+
+ See Also
+ --------
+ tile : Tile an array.
+
+ Examples
+ --------
+ >>> x = np.array([[1,2],[3,4]])
+ >>> np.repeat(x, 2)
+ array([1, 1, 2, 2, 3, 3, 4, 4])
+ >>> np.repeat(x, 3, axis=1)
+ array([[1, 1, 1, 2, 2, 2],
+ [3, 3, 3, 4, 4, 4]])
+ >>> np.repeat(x, [1, 2], axis=0)
+ array([[1, 2],
+ [3, 4],
+ [3, 4]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def put(a, ind, v, mode='raise'):
+ """
+ Replaces specified elements of an array with given values.
+
+ The indexing works on the flattened target array. `put` is roughly
+ equivalent to:
+
+ ::
+
+ a.flat[ind] = v
+
+ Parameters
+ ----------
+ a : ndarray
+ Target array.
+ ind : array_like
+ Target indices, interpreted as integers.
+ v : array_like
+ Values to place in `a` at target indices. If `v` is shorter than
+ `ind` it will be repeated as necessary.
+ mode : {'raise', 'wrap', 'clip'}, optional
+ Specifies how out-of-bounds indices will behave.
+
+ * 'raise' -- raise an error (default)
+ * 'wrap' -- wrap around
+ * 'clip' -- clip to the range
+
+ 'clip' mode means that all indices that are too large are replaced
+ by the index that addresses the last element along that axis. Note
+ that this disables indexing with negative numbers.
+
+ See Also
+ --------
+ putmask, place
+
+ Examples
+ --------
+ >>> a = np.arange(5)
+ >>> np.put(a, [0, 2], [-44, -55])
+ >>> a
+ array([-44, 1, -55, 3, 4])
+
+ >>> a = np.arange(5)
+ >>> np.put(a, 22, -5, mode='clip')
+ >>> a
+ array([ 0, 1, 2, 3, -5])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def swapaxes(a, axis1, axis2):
+ """
+ Interchange two axes of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis1 : int
+ First axis.
+ axis2 : int
+ Second axis.
+
+ Returns
+ -------
+ a_swapped : ndarray
+ If `a` is an ndarray, then a view of `a` is returned; otherwise
+ a new array is created.
+
+ Examples
+ --------
+ >>> x = np.array([[1,2,3]])
+ >>> np.swapaxes(x,0,1)
+ array([[1],
+ [2],
+ [3]])
+
+ >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]])
+ >>> x
+ array([[[0, 1],
+ [2, 3]],
+ [[4, 5],
+ [6, 7]]])
+
+ >>> np.swapaxes(x,0,2)
+ array([[[0, 4],
+ [2, 6]],
+ [[1, 5],
+ [3, 7]]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def transpose(a, axes=None):
+ """
+ Permute the dimensions of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axes : list of ints, optional
+ By default, reverse the dimensions, otherwise permute the axes
+ according to the values given.
+
+ Returns
+ -------
+ p : ndarray
+ `a` with its axes permuted. A view is returned whenever
+ possible.
+
+ See Also
+ --------
+ rollaxis
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.transpose(x)
+ array([[0, 2],
+ [1, 3]])
+
+ >>> x = np.ones((1, 2, 3))
+ >>> np.transpose(x, (1, 0, 2)).shape
+ (2, 1, 3)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sort(a, axis=-1, kind='quicksort', order=None):
+ """
+ Return a sorted copy of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be sorted.
+ axis : int or None, optional
+ Axis along which to sort. If None, the array is flattened before
+ sorting. The default is -1, which sorts along the last axis.
+ kind : {'quicksort', 'mergesort', 'heapsort'}, optional
+ Sorting algorithm. Default is 'quicksort'.
+ order : list, optional
+ When `a` is a structured array, this argument specifies which fields
+ to compare first, second, and so on. This list does not need to
+ include all of the fields.
+
+ Returns
+ -------
+ sorted_array : ndarray
+ Array of the same type and shape as `a`.
+
+ See Also
+ --------
+ ndarray.sort : Method to sort an array in-place.
+ argsort : Indirect sort.
+ lexsort : Indirect stable sort on multiple keys.
+ searchsorted : Find elements in a sorted array.
+
+ Notes
+ -----
+ The various sorting algorithms are characterized by their average speed,
+ worst case performance, work space size, and whether they are stable. A
+ stable sort keeps items with the same key in the same relative
+ order. The three available algorithms have the following
+ properties:
+
+ =========== ======= ============= ============ =======
+ kind speed worst case work space stable
+ =========== ======= ============= ============ =======
+ 'quicksort' 1 O(n^2) 0 no
+ 'mergesort' 2 O(n*log(n)) ~n/2 yes
+ 'heapsort' 3 O(n*log(n)) 0 no
+ =========== ======= ============= ============ =======
+
+ All the sort algorithms make temporary copies of the data when
+ sorting along any but the last axis. Consequently, sorting along
+ the last axis is faster and uses less space than sorting along
+ any other axis.
+
+ The sort order for complex numbers is lexicographic. If both the real
+ and imaginary parts are non-nan then the order is determined by the
+ real parts except when they are equal, in which case the order is
+ determined by the imaginary parts.
+
+ Previous to numpy 1.4.0 sorting real and complex arrays containing nan
+ values led to undefined behaviour. In numpy versions >= 1.4.0 nan
+ values are sorted to the end. The extended sort order is:
+
+ * Real: [R, nan]
+ * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj]
+
+ where R is a non-nan real value. Complex values with the same nan
+ placements are sorted according to the non-nan part if it exists.
+ Non-nan values are sorted as before.
+
+ Examples
+ --------
+ >>> a = np.array([[1,4],[3,1]])
+ >>> np.sort(a) # sort along the last axis
+ array([[1, 4],
+ [1, 3]])
+ >>> np.sort(a, axis=None) # sort the flattened array
+ array([1, 1, 3, 4])
+ >>> np.sort(a, axis=0) # sort along the first axis
+ array([[1, 1],
+ [3, 4]])
+
+ Use the `order` keyword to specify a field to use when sorting a
+ structured array:
+
+ >>> dtype = [('name', 'S10'), ('height', float), ('age', int)]
+ >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38),
+ ... ('Galahad', 1.7, 38)]
+ >>> a = np.array(values, dtype=dtype) # create a structured array
+ >>> np.sort(a, order='height') # doctest: +SKIP
+ array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41),
+        ('Lancelot', 1.8999999999999999, 38)],
+       dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
+ >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP
+ array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38),
+        ('Arthur', 1.8, 41)],
+       dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def argsort(a, axis=-1, kind='quicksort', order=None):
+ """
+ Returns the indices that would sort an array.
+
+ Perform an indirect sort along the given axis using the algorithm
+ specified by the `kind` keyword. It returns an array of indices of the
+ same shape as `a` that index data along the given axis in sorted order.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to sort.
+ axis : int or None, optional
+ Axis along which to sort. The default is -1 (the last axis). If
+ None, the flattened array is used.
+ kind : {'quicksort', 'mergesort', 'heapsort'}, optional
+ Sorting algorithm.
+ order : list, optional
+ When `a` is an array with fields defined, this argument specifies
+ which fields to compare first, second, etc. Not all fields need be
+ specified.
+
+ Returns
+ -------
+ index_array : ndarray, int
+ Array of indices that sort `a` along the specified axis.
+ In other words, ``a[index_array]`` yields a sorted `a`.
+
+ See Also
+ --------
+ sort : Describes sorting algorithms used.
+ lexsort : Indirect stable sort with multiple keys.
+ ndarray.sort : Inplace sort.
+
+ Notes
+ -----
+ See `sort` for notes on the different sorting algorithms.
+
+ As of NumPy 1.4.0 `argsort` works with real/complex arrays containing
+ nan values. The enhanced sort order is documented in `sort`.
+
+ Examples
+ --------
+ One dimensional array:
+
+ >>> x = np.array([3, 1, 2])
+ >>> np.argsort(x)
+ array([1, 2, 0])
+
+ Two-dimensional array:
+
+ >>> x = np.array([[0, 3], [2, 2]])
+ >>> x
+ array([[0, 3],
+ [2, 2]])
+
+ >>> np.argsort(x, axis=0)
+ array([[0, 1],
+ [1, 0]])
+
+ >>> np.argsort(x, axis=1)
+ array([[0, 1],
+ [0, 1]])
+
+ Sorting with keys:
+
+ >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
+ >>> x
+ array([(1, 0), (0, 1)],
+       dtype=[('x', '<i4'), ('y', '<i4')])
+
+ >>> np.argsort(x, order=('x','y'))
+ array([1, 0])
+
+ >>> np.argsort(x, order=('y','x'))
+ array([0, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def argmax(a, axis=None):
+ """
+ Indices of the maximum values along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ By default, the index is into the flattened array, otherwise
+ along the specified axis.
+
+ Returns
+ -------
+ index_array : ndarray of ints
+ Array of indices into the array. It has the same shape as `a.shape`
+ with the dimension along `axis` removed.
+
+ See Also
+ --------
+ ndarray.argmax, argmin
+ amax : The maximum value along a given axis.
+ unravel_index : Convert a flat index into an index tuple.
+
+ Notes
+ -----
+ In case of multiple occurrences of the maximum values, the indices
+ corresponding to the first occurrence are returned.
+
+ Examples
+ --------
+ >>> a = np.arange(6).reshape(2,3)
+ >>> a
+ array([[0, 1, 2],
+ [3, 4, 5]])
+ >>> np.argmax(a)
+ 5
+ >>> np.argmax(a, axis=0)
+ array([1, 1, 1])
+ >>> np.argmax(a, axis=1)
+ array([2, 2])
+
+ >>> b = np.arange(6)
+ >>> b[1] = 5
+ >>> b
+ array([0, 5, 2, 3, 4, 5])
+ >>> np.argmax(b) # Only the first occurrence is returned.
+ 1
+
+ """
+ if not hasattr(a, 'argmax'):
+ a = numpypy.array(a)
+ return a.argmax()
+
+
+def argmin(a, axis=None):
+ """
+ Return the indices of the minimum values along an axis.
+
+ See Also
+ --------
+ argmax : Similar function. Please refer to `numpy.argmax` for detailed
+ documentation.
+
+ """
+ if not hasattr(a, 'argmin'):
+ a = numpypy.array(a)
+ return a.argmin()
+
+
+def searchsorted(a, v, side='left'):
+ """
+ Find indices where elements should be inserted to maintain order.
+
+ Find the indices into a sorted array `a` such that, if the corresponding
+ elements in `v` were inserted before the indices, the order of `a` would
+ be preserved.
+
+ Parameters
+ ----------
+ a : 1-D array_like
+ Input array, sorted in ascending order.
+ v : array_like
+ Values to insert into `a`.
+ side : {'left', 'right'}, optional
+ If 'left', the index of the first suitable location found is given. If
+ 'right', return the last such index. If there is no suitable
+ index, return either 0 or N (where N is the length of `a`).
+
+ Returns
+ -------
+ indices : array of ints
+ Array of insertion points with the same shape as `v`.
+
+ See Also
+ --------
+ sort : Return a sorted copy of an array.
+ histogram : Produce histogram from 1-D data.
+
+ Notes
+ -----
+ Binary search is used to find the required insertion points.
+
+ As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing
+ `nan` values. The enhanced sort order is documented in `sort`.
+
+ Examples
+ --------
+ >>> np.searchsorted([1,2,3,4,5], 3)
+ 2
+ >>> np.searchsorted([1,2,3,4,5], 3, side='right')
+ 3
+ >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3])
+ array([0, 5, 1, 2])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
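The binary search mentioned in `searchsorted`'s Notes maps directly onto the stdlib `bisect` module; a pure-Python sketch (an illustration, not the eventual interp-level method):

```python
import bisect

def searchsorted_py(a, v, side='left'):
    # a: a sorted list; v: a scalar or a list of values to insert.
    # bisect_left returns the first suitable insertion point, bisect_right
    # the last, matching side='left' / side='right'.
    insert = bisect.bisect_left if side == 'left' else bisect.bisect_right
    if isinstance(v, (list, tuple)):
        return [insert(a, x) for x in v]
    return insert(a, v)
```

Values smaller than every element map to 0 and values larger than every element map to `len(a)`, as the docstring describes.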
+def resize(a, new_shape):
+ """
+ Return a new array with the specified shape.
+
+ If the new array is larger than the original array, then the new
+ array is filled with repeated copies of `a`. Note that this behavior
+ is different from a.resize(new_shape) which fills with zeros instead
+ of repeated copies of `a`.
+
+ Parameters
+ ----------
+ a : array_like
+ Array to be resized.
+
+ new_shape : int or tuple of int
+ Shape of resized array.
+
+ Returns
+ -------
+ reshaped_array : ndarray
+ The new array is formed from the data in the old array, repeated
+ if necessary to fill out the required number of elements. The
+ data are repeated in the order that they are stored in memory.
+
+ See Also
+ --------
+ ndarray.resize : resize an array in-place.
+
+ Examples
+ --------
+ >>> a=np.array([[0,1],[2,3]])
+ >>> np.resize(a,(1,4))
+ array([[0, 1, 2, 3]])
+ >>> np.resize(a,(2,4))
+ array([[0, 1, 2, 3],
+ [0, 1, 2, 3]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def squeeze(a):
+ """
+ Remove single-dimensional entries from the shape of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+
+ Returns
+ -------
+ squeezed : ndarray
+ The input array, but with all dimensions of length 1
+ removed. Whenever possible, a view on `a` is returned.
+
+ Examples
+ --------
+ >>> x = np.array([[[0], [1], [2]]])
+ >>> x.shape
+ (1, 3, 1)
+ >>> np.squeeze(x).shape
+ (3,)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def diagonal(a, offset=0, axis1=0, axis2=1):
+ """
+ Return specified diagonals.
+
+ If `a` is 2-D, returns the diagonal of `a` with the given offset,
+ i.e., the collection of elements of the form ``a[i, i+offset]``. If
+ `a` has more than two dimensions, then the axes specified by `axis1`
+ and `axis2` are used to determine the 2-D sub-array whose diagonal is
+ returned. The shape of the resulting array can be determined by
+ removing `axis1` and `axis2` and appending an index to the right equal
+ to the size of the resulting diagonals.
+
+ Parameters
+ ----------
+ a : array_like
+ Array from which the diagonals are taken.
+ offset : int, optional
+ Offset of the diagonal from the main diagonal. Can be positive or
+ negative. Defaults to main diagonal (0).
+ axis1 : int, optional
+ Axis to be used as the first axis of the 2-D sub-arrays from which
+ the diagonals should be taken. Defaults to first axis (0).
+ axis2 : int, optional
+ Axis to be used as the second axis of the 2-D sub-arrays from
+ which the diagonals should be taken. Defaults to second axis (1).
+
+ Returns
+ -------
+ array_of_diagonals : ndarray
+ If `a` is 2-D, a 1-D array containing the diagonal is returned.
+ If the dimension of `a` is larger, then an array of diagonals is
+ returned, "packed" from left-most dimension to right-most (e.g.,
+ if `a` is 3-D, then the diagonals are "packed" along rows).
+
+ Raises
+ ------
+ ValueError
+ If the dimension of `a` is less than 2.
+
+ See Also
+ --------
+ diag : MATLAB work-a-like for 1-D and 2-D arrays.
+ diagflat : Create diagonal arrays.
+ trace : Sum along diagonals.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape(2,2)
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> a.diagonal()
+ array([0, 3])
+ >>> a.diagonal(1)
+ array([1])
+
+ A 3-D example:
+
+ >>> a = np.arange(8).reshape(2,2,2); a
+ array([[[0, 1],
+ [2, 3]],
+ [[4, 5],
+ [6, 7]]])
+ >>> a.diagonal(0, # Main diagonals of two arrays created by skipping
+ ... 0, # across the outer(left)-most axis last and
+ ... 1) # the "middle" (row) axis first.
+ array([[0, 6],
+ [1, 7]])
+
+ The sub-arrays whose main diagonals we just obtained; note that each
+ corresponds to fixing the right-most (column) axis, and that the
+ diagonals are "packed" in rows.
+
+ >>> a[:,:,0] # main diagonal is [0 6]
+ array([[0, 2],
+ [4, 6]])
+ >>> a[:,:,1] # main diagonal is [1 7]
+ array([[1, 3],
+ [5, 7]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None):
+ """
+ Return the sum along diagonals of the array.
+
+ If `a` is 2-D, the sum along its diagonal with the given offset
+ is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i.
+
+ If `a` has more than two dimensions, then the axes specified by axis1 and
+ axis2 are used to determine the 2-D sub-arrays whose traces are returned.
+ The shape of the resulting array is the same as that of `a` with `axis1`
+ and `axis2` removed.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array, from which the diagonals are taken.
+ offset : int, optional
+ Offset of the diagonal from the main diagonal. Can be both positive
+ and negative. Defaults to 0.
+ axis1, axis2 : int, optional
+ Axes to be used as the first and second axis of the 2-D sub-arrays
+ from which the diagonals should be taken. Defaults are the first two
+ axes of `a`.
+ dtype : dtype, optional
+ Determines the data-type of the returned array and of the accumulator
+ where the elements are summed. If dtype has the value None and `a` is
+ of integer type of precision less than the default integer
+ precision, then the default integer precision is used. Otherwise,
+ the precision is the same as that of `a`.
+ out : ndarray, optional
+ Array into which the output is placed. Its type is preserved and
+ it must be of the right shape to hold the output.
+
+ Returns
+ -------
+ sum_along_diagonals : ndarray
+ If `a` is 2-D, the sum along the diagonal is returned. If `a` has
+ larger dimensions, then an array of sums along diagonals is returned.
+
+ See Also
+ --------
+ diag, diagonal, diagflat
+
+ Examples
+ --------
+ >>> np.trace(np.eye(3))
+ 3.0
+ >>> a = np.arange(8).reshape((2,2,2))
+ >>> np.trace(a)
+ array([6, 8])
+
+ >>> a = np.arange(24).reshape((2,2,2,3))
+ >>> np.trace(a).shape
+ (2, 3)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+def ravel(a, order='C'):
+ """
+ Return a flattened array.
+
+ A 1-D array, containing the elements of the input, is returned. A copy is
+ made only if needed.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. The elements in ``a`` are read in the order specified by
+ `order`, and packed as a 1-D array.
+ order : {'C','F', 'A', 'K'}, optional
+ The elements of ``a`` are read in this order. 'C' means to view
+ the elements in C (row-major) order. 'F' means to view the elements
+ in Fortran (column-major) order. 'A' means to view the elements
+ in 'F' order if a is Fortran contiguous, 'C' order otherwise.
+ 'K' means to view the elements in the order they occur in memory,
+ except for reversing the data when strides are negative.
+ By default, 'C' order is used.
+
+ Returns
+ -------
+ 1d_array : ndarray
+ Output of the same dtype as `a`, and of shape ``(a.size(),)``.
+
+ See Also
+ --------
+ ndarray.flat : 1-D iterator over an array.
+ ndarray.flatten : 1-D array copy of the elements of an array
+ in row-major order.
+
+ Notes
+ -----
+ In row-major order, the row index varies the slowest, and the column
+ index the quickest. This can be generalized to multiple dimensions,
+ where row-major order implies that the index along the first axis
+ varies slowest, and the index along the last quickest. The opposite holds
+ for Fortran-, or column-major, mode.
+
+ Examples
+ --------
+ It is equivalent to ``reshape(-1, order=order)``.
+
+ >>> x = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> print np.ravel(x)
+ [1 2 3 4 5 6]
+
+ >>> print x.reshape(-1)
+ [1 2 3 4 5 6]
+
+ >>> print np.ravel(x, order='F')
+ [1 4 2 5 3 6]
+
+ When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering:
+
+ >>> print np.ravel(x.T)
+ [1 4 2 5 3 6]
+ >>> print np.ravel(x.T, order='A')
+ [1 2 3 4 5 6]
+
+ When ``order`` is 'K', it will preserve orderings that are neither 'C'
+ nor 'F', but won't reverse axes:
+
+ >>> a = np.arange(3)[::-1]; a
+ array([2, 1, 0])
+ >>> a.ravel(order='C')
+ array([2, 1, 0])
+ >>> a.ravel(order='K')
+ array([2, 1, 0])
+
+ >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a
+ array([[[ 0, 2, 4],
+ [ 1, 3, 5]],
+ [[ 6, 8, 10],
+ [ 7, 9, 11]]])
+ >>> a.ravel(order='C')
+ array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11])
+ >>> a.ravel(order='K')
+ array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def nonzero(a):
+ """
+ Return the indices of the elements that are non-zero.
+
+ Returns a tuple of arrays, one for each dimension of `a`, containing
+ the indices of the non-zero elements in that dimension. The
+ corresponding non-zero values can be obtained with::
+
+ a[nonzero(a)]
+
+ To group the indices by element, rather than dimension, use::
+
+ transpose(nonzero(a))
+
+ The result of this is always a 2-D array, with a row for
+ each non-zero element.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ tuple_of_arrays : tuple
+ Indices of elements that are non-zero.
+
+ See Also
+ --------
+ flatnonzero :
+ Return indices that are non-zero in the flattened version of the input
+ array.
+ ndarray.nonzero :
+ Equivalent ndarray method.
+ count_nonzero :
+ Counts the number of non-zero elements in the input array.
+
+ Examples
+ --------
+ >>> x = np.eye(3)
+ >>> x
+ array([[ 1., 0., 0.],
+ [ 0., 1., 0.],
+ [ 0., 0., 1.]])
+ >>> np.nonzero(x)
+ (array([0, 1, 2]), array([0, 1, 2]))
+
+ >>> x[np.nonzero(x)]
+ array([ 1., 1., 1.])
+ >>> np.transpose(np.nonzero(x))
+ array([[0, 0],
+ [1, 1],
+ [2, 2]])
+
+ A common use for ``nonzero`` is to find the indices of an array, where
+ a condition is True. Given an array `a`, the condition `a` > 3 is a
+ boolean array and since False is interpreted as 0, np.nonzero(a > 3)
+ yields the indices of the `a` where the condition is true.
+
+ >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
+ >>> a > 3
+ array([[False, False, False],
+ [ True, True, True],
+ [ True, True, True]], dtype=bool)
+ >>> np.nonzero(a > 3)
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ The ``nonzero`` method of the boolean array can also be called.
+
+ >>> (a > 3).nonzero()
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def shape(a):
+ """
+ Return the shape of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ shape : tuple of ints
+ The elements of the shape tuple give the lengths of the
+ corresponding array dimensions.
+
+ See Also
+ --------
+ alen
+ ndarray.shape : Equivalent array method.
+
+ Examples
+ --------
+ >>> np.shape(np.eye(3))
+ (3, 3)
+ >>> np.shape([[1, 2]])
+ (1, 2)
+ >>> np.shape([0])
+ (1,)
+ >>> np.shape(0)
+ ()
+
+ >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+ >>> np.shape(a)
+ (2,)
+ >>> a.shape
+ (2,)
+
+ """
+ if not hasattr(a, 'shape'):
+ a = numpypy.array(a)
+ return a.shape
+
+
+def compress(condition, a, axis=None, out=None):
+ """
+ Return selected slices of an array along given axis.
+
+ When working along a given axis, a slice along that axis is returned in
+ `output` for each index where `condition` evaluates to True. When
+ working on a 1-D array, `compress` is equivalent to `extract`.
+
+ Parameters
+ ----------
+ condition : 1-D array of bools
+ Array that selects which entries to return. If len(condition)
+ is less than the size of `a` along the given axis, then output is
+ truncated to the length of the condition array.
+ a : array_like
+ Array from which to extract a part.
+ axis : int, optional
+ Axis along which to take slices. If None (default), work on the
+ flattened array.
+ out : ndarray, optional
+ Output array. Its type is preserved and it must be of the right
+ shape to hold the output.
+
+ Returns
+ -------
+ compressed_array : ndarray
+ A copy of `a` without the slices along axis for which `condition`
+ is false.
+
+ See Also
+ --------
+ take, choose, diag, diagonal, select
+ ndarray.compress : Equivalent method.
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4], [5, 6]])
+ >>> a
+ array([[1, 2],
+ [3, 4],
+ [5, 6]])
+ >>> np.compress([0, 1], a, axis=0)
+ array([[3, 4]])
+ >>> np.compress([False, True, True], a, axis=0)
+ array([[3, 4],
+ [5, 6]])
+ >>> np.compress([False, True], a, axis=1)
+ array([[2],
+ [4],
+ [6]])
+
+ Working on the flattened array does not return slices along an axis but
+ selects elements.
+
+ >>> np.compress([False, True], a)
+ array([2])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
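While this module waits on the interp-level ``compress``, its 1-D behaviour can be sketched in plain Python. ``compress_flat`` below is a hypothetical helper, not part of numpypy:

```python
def compress_flat(condition, seq):
    # Keep seq[i] wherever condition[i] is true; zip() truncates to the
    # shorter input, matching the documented truncation behaviour when
    # len(condition) < len(seq).
    return [x for c, x in zip(condition, seq) if c]

print(compress_flat([False, True, True], [1, 2, 3]))  # [2, 3]
```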
+
+def clip(a, a_min, a_max, out=None):
+ """
+ Clip (limit) the values in an array.
+
+ Given an interval, values outside the interval are clipped to
+ the interval edges. For example, if an interval of ``[0, 1]``
+ is specified, values smaller than 0 become 0, and values larger
+ than 1 become 1.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing elements to clip.
+ a_min : scalar or array_like
+ Minimum value.
+ a_max : scalar or array_like
+ Maximum value. If `a_min` or `a_max` are array_like, then they will
+ be broadcasted to the shape of `a`.
+ out : ndarray, optional
+ The results will be placed in this array. It may be the input
+ array for in-place clipping. `out` must be of the right shape
+ to hold the output. Its type is preserved.
+
+ Returns
+ -------
+ clipped_array : ndarray
+ An array with the elements of `a`, but where values
+ < `a_min` are replaced with `a_min`, and those > `a_max`
+ with `a_max`.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.arange(10)
+ >>> np.clip(a, 1, 8)
+ array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, 3, 6, out=a)
+ array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
+ >>> a = np.arange(10)
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
+ array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
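Until the interp-level version lands, the scalar clipping rule described in the docstring can be sketched as follows (``clip_scalar`` is a hypothetical name, not part of this module):

```python
def clip_scalar(x, lo, hi):
    # Values below lo are raised to lo, values above hi are lowered to hi;
    # everything in [lo, hi] passes through unchanged.
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

print([clip_scalar(x, 1, 8) for x in range(10)])
# [1, 1, 2, 3, 4, 5, 6, 7, 8, 8]
```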
+
+def sum(a, axis=None, dtype=None, out=None):
+ """
+ Sum of array elements over a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Elements to sum.
+ axis : integer, optional
+ Axis over which the sum is taken. By default `axis` is None,
+ and all elements are summed.
+ dtype : dtype, optional
+ The type of the returned array and of the accumulator in which
+ the elements are summed. By default, the dtype of `a` is used.
+ An exception is when `a` has an integer type with less precision
+ than the default platform integer. In that case, the default
+ platform integer is used instead.
+ out : ndarray, optional
+ Array into which the output is placed. By default, a new array is
+ created. If `out` is given, it must be of the appropriate shape
+ (the shape of `a` with `axis` removed, i.e.,
+ ``numpy.delete(a.shape, axis)``). Its type is preserved. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ sum_along_axis : ndarray
+ An array with the same shape as `a`, with the specified
+ axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar
+ is returned. If an output array is specified, a reference to
+ `out` is returned.
+
+ See Also
+ --------
+ ndarray.sum : Equivalent method.
+
+ cumsum : Cumulative sum of array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ mean, average
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> np.sum([0.5, 1.5])
+ 2.0
+ >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+ 1
+ >>> np.sum([[0, 1], [0, 5]])
+ 6
+ >>> np.sum([[0, 1], [0, 5]], axis=0)
+ array([0, 6])
+ >>> np.sum([[0, 1], [0, 5]], axis=1)
+ array([1, 5])
+
+ If the accumulator is too small, overflow occurs:
+
+ >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+ -128
+
+ """
+ if not hasattr(a, "sum"):
+ a = numpypy.array(a)
+ return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ See Also
+ --------
+ prod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+ """
+ Check whether some values are true.
+
+ Refer to `any` for full documentation.
+
+ See Also
+ --------
+ any : equivalent function
+
+ """
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+ """
+ Check if all elements of input array are true.
+
+ See Also
+ --------
+ numpy.all : Equivalent function; see for details.
+
+ """
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+def any(a, axis=None, out=None):
+ """
+ Test whether any array element along a given axis evaluates to True.
+
+ Returns a single boolean unless `axis` is not ``None``.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical OR is performed. The default
+ (`axis` = `None`) is to perform a logical OR over a flattened
+ input array. `axis` may be negative, in which case it counts
+ from the last to the first axis.
+ out : ndarray, optional
+ Alternate output array in which to place the result. It must have
+ the same shape as the expected output and its type is preserved
+ (e.g., if it is of type float, then it will remain so, returning
+ 1.0 for True and 0.0 for False, regardless of the type of `a`).
+ See `doc.ufuncs` (Section "Output arguments") for details.
+
+ Returns
+ -------
+ any : bool or ndarray
+ A new boolean or `ndarray` is returned unless `out` is specified,
+ in which case a reference to `out` is returned.
+
+ See Also
+ --------
+ ndarray.any : equivalent method
+
+ all : Test whether all elements along a given axis evaluate to True.
+
+ Notes
+ -----
+ Not a Number (NaN), positive infinity and negative infinity evaluate
+ to `True` because these are not equal to zero.
+
+ Examples
+ --------
+ >>> np.any([[True, False], [True, True]])
+ True
+
+ >>> np.any([[True, False], [False, False]], axis=0)
+ array([ True, False], dtype=bool)
+
+ >>> np.any([-1, 0, 5])
+ True
+
+ >>> np.any(np.nan)
+ True
+
+ >>> o=np.array([False])
+ >>> z=np.any([-1, 4, 5], out=o)
+ >>> z, o
+ (array([ True], dtype=bool), array([ True], dtype=bool))
+ >>> # Check now that z is a reference to o
+ >>> z is o
+ True
+ >>> id(z), id(o) # identity of z and o # doctest: +SKIP
+ (191614240, 191614240)
+
+ """
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def all(a, axis=None, out=None):
+ """
+ Test whether all array elements along a given axis evaluate to True.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical AND is performed.
+ The default (`axis` = `None`) is to perform a logical AND
+ over a flattened input array. `axis` may be negative, in which
+ case it counts from the last to the first axis.
+ out : ndarray, optional
+ Alternate output array in which to place the result.
+ It must have the same shape as the expected output and its
+ type is preserved (e.g., if ``dtype(out)`` is float, the result
+ will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section
+ "Output arguments") for more details.
+
+ Returns
+ -------
+ all : ndarray, bool
+ A new boolean or array is returned unless `out` is specified,
+ in which case a reference to `out` is returned.
+
+ See Also
+ --------
+ ndarray.all : equivalent method
+
+ any : Test whether any element along a given axis evaluates to True.
+
+ Notes
+ -----
+ Not a Number (NaN), positive infinity and negative infinity
+ evaluate to `True` because these are not equal to zero.
+
+ Examples
+ --------
+ >>> np.all([[True,False],[True,True]])
+ False
+
+ >>> np.all([[True,False],[True,True]], axis=0)
+ array([ True, False], dtype=bool)
+
+ >>> np.all([-1, 4, 5])
+ True
+
+ >>> np.all([1.0, np.nan])
+ True
+
+ >>> o=np.array([False])
+ >>> z=np.all([-1, 4, 5], out=o)
+ >>> id(z), id(o), z # doctest: +SKIP
+ (28293632, 28293632, array([ True], dtype=bool))
+
+ """
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+
+def cumsum(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative sum of the elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ Axis along which the cumulative sum is computed. The default
+ (None) is to compute the cumsum over the flattened array.
+ dtype : dtype, optional
+ Type of the returned array and of the accumulator in which the
+ elements are summed. If `dtype` is not specified, it defaults
+ to the dtype of `a`, unless `a` has an integer dtype with a
+ precision less than that of the default platform integer. In
+ that case, the default platform integer is used.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type will be cast if necessary. See `doc.ufuncs`
+ (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ cumsum_along_axis : ndarray.
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to `out` is returned. The
+ result has the same size as `a`, and the same shape as `a` if
+ `axis` is not None or `a` is a 1-d array.
+
+
+ See Also
+ --------
+ sum : Sum array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> a
+ array([[1, 2, 3],
+ [4, 5, 6]])
+ >>> np.cumsum(a)
+ array([ 1, 3, 6, 10, 15, 21])
+ >>> np.cumsum(a, dtype=float) # specifies type of output value(s)
+ array([ 1., 3., 6., 10., 15., 21.])
+
+ >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns
+ array([[1, 2, 3],
+ [5, 7, 9]])
+ >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows
+ array([[ 1, 3, 6],
+ [ 4, 9, 15]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
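The flattened-array case that the docstring's first example shows can be sketched with a running total; ``cumsum_flat`` is a hypothetical helper standing in for the awaited interp-level method:

```python
def cumsum_flat(seq):
    # Running total over a flat sequence, mirroring cumsum(a) with axis=None.
    total, out = 0, []
    for x in seq:
        total += x
        out.append(total)
    return out

print(cumsum_flat([1, 2, 3, 4, 5, 6]))  # [1, 3, 6, 10, 15, 21]
```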
+
+def cumproduct(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product over the given axis.
+
+
+ See Also
+ --------
+ cumprod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+ """
+ Range of values (maximum - minimum) along an axis.
+
+ The name of the function comes from the acronym for 'peak to peak'.
+
+ Parameters
+ ----------
+ a : array_like
+ Input values.
+ axis : int, optional
+ Axis along which to find the peaks. By default, flatten the
+ array.
+ out : array_like
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output,
+ but the type of the output values will be cast if necessary.
+
+ Returns
+ -------
+ ptp : ndarray
+ A new array holding the result, unless `out` was
+ specified, in which case a reference to `out` is returned.
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.ptp(x, axis=0)
+ array([2, 2])
+
+ >>> np.ptp(x, axis=1)
+ array([1, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
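The peak-to-peak computation over a flattened input reduces to a max-minus-min; ``ptp_flat`` below is a hypothetical sketch of the axis=None case, not the real implementation:

```python
def ptp_flat(seq):
    # Peak-to-peak: maximum minus minimum over the flattened input.
    return max(seq) - min(seq)

print(ptp_flat([0, 1, 2, 3]))  # 3
```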
+
+def amax(a, axis=None, out=None):
+ """
+ Return the maximum of an array or maximum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default flattened input is used.
+ out : ndarray, optional
+ Alternate output array in which to place the result. Must be of
+ the same shape and buffer length as the expected output. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amax : ndarray or scalar
+ Maximum of `a`. If `axis` is None, the result is a scalar value.
+ If `axis` is given, the result is an array of dimension
+ ``a.ndim - 1``.
+
+ See Also
+ --------
+ nanmax : NaN values are ignored instead of being propagated.
+ fmax : same behavior as the C99 fmax function.
+ argmax : indices of the maximum values.
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is NaN, the
+ corresponding max value will be NaN as well. To ignore NaN values
+ (MATLAB behavior), please use nanmax.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape((2,2))
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> np.amax(a)
+ 3
+ >>> np.amax(a, axis=0)
+ array([2, 3])
+ >>> np.amax(a, axis=1)
+ array([1, 3])
+
+ >>> b = np.arange(5, dtype=np.float)
+ >>> b[2] = np.NaN
+ >>> np.amax(b)
+ nan
+ >>> np.nanmax(b)
+ 4.0
+
+ """
+ if not hasattr(a, "max"):
+ a = numpypy.array(a)
+ return a.max()
+
+
+def amin(a, axis=None, out=None):
+ """
+ Return the minimum of an array or minimum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default a flattened input is used.
+ out : ndarray, optional
+ Alternative output array in which to place the result. Must
+ be of the same shape and buffer length as the expected output.
+ See `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amin : ndarray
+ A new array or a scalar array with the result.
+
+ See Also
+ --------
+ nanmin: nan values are ignored instead of being propagated
+ fmin: same behavior as the C99 fmin function
+ argmin: Return the indices of the minimum values.
+
+ amax, nanmax, fmax
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is nan, the
+ corresponding min value will be nan as well. To ignore NaN values (matlab
+ behavior), please use nanmin.
+
+ Examples
+ --------
+ >>> a = np.arange(4).reshape((2,2))
+ >>> a
+ array([[0, 1],
+ [2, 3]])
+ >>> np.amin(a) # Minimum of the flattened array
+ 0
+ >>> np.amin(a, axis=0) # Minima along the first axis
+ array([0, 1])
+ >>> np.amin(a, axis=1) # Minima along the second axis
+ array([0, 2])
+
+ >>> b = np.arange(5, dtype=np.float)
+ >>> b[2] = np.NaN
+ >>> np.amin(b)
+ nan
+ >>> np.nanmin(b)
+ 0.0
+
+ """
+ # amin() is equivalent to min()
+ if not hasattr(a, 'min'):
+ a = numpypy.array(a)
+ return a.min()
+
+def alen(a):
+ """
+ Return the length of the first dimension of the input array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ l : int
+ Length of the first dimension of `a`.
+
+ See Also
+ --------
+ shape, size
+
+ Examples
+ --------
+ >>> a = np.zeros((7,4,5))
+ >>> a.shape[0]
+ 7
+ >>> np.alen(a)
+ 7
+
+ """
+ if not hasattr(a, 'shape'):
+ a = numpypy.array(a)
+ return a.shape[0]
+
+
+def prod(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis over which the product is taken. By default, the product
+ of all elements is calculated.
+ dtype : data-type, optional
+ The data-type of the returned array, as well as of the accumulator
+ in which the elements are multiplied. By default, if `a` is of
+ integer type, `dtype` is the default platform integer. (Note: if
+ the type of `a` is unsigned, then so is `dtype`.) Otherwise,
+ the dtype is the same as that of `a`.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output, but the type of the
+ output values will be cast if necessary.
+
+ Returns
+ -------
+ product_along_axis : ndarray, see `dtype` parameter above.
+ An array shaped as `a` but with the specified axis removed.
+ Returns a reference to `out` if specified.
+
+ See Also
+ --------
+ ndarray.prod : equivalent method
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow. That means that, on a 32-bit platform:
+
+ >>> x = np.array([536870910, 536870910, 536870910, 536870910])
+ >>> np.prod(x) #random
+ 16
+
+ Examples
+ --------
+ By default, calculate the product of all elements:
+
+ >>> np.prod([1.,2.])
+ 2.0
+
+ Even when the input array is two-dimensional:
+
+ >>> np.prod([[1.,2.],[3.,4.]])
+ 24.0
+
+ But we can also specify the axis over which to multiply:
+
+ >>> np.prod([[1.,2.],[3.,4.]], axis=1)
+ array([ 2., 12.])
+
+ If the type of `x` is unsigned, then the output type is
+ the unsigned platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.uint8)
+ >>> np.prod(x).dtype == np.uint
+ True
+
+ If `x` is of a signed integer type, then the output type
+ is the default platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.int8)
+ >>> np.prod(x).dtype == np.int
+ True
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
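For the axis=None case the product is a single fold over the flattened elements; ``prod_flat`` is a hypothetical pure-Python sketch (it does not reproduce the dtype/overflow behaviour described above):

```python
def prod_flat(seq):
    # Multiply all elements of a flat sequence together, starting from 1.
    result = 1
    for x in seq:
        result *= x
    return result

print(prod_flat([1.0, 2.0]))  # 2.0
```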
+
+def cumprod(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ Axis along which the cumulative product is computed. By default
+ the input is flattened.
+ dtype : dtype, optional
+ Type of the returned array, as well as of the accumulator in which
+ the elements are multiplied. If *dtype* is not specified, it
+ defaults to the dtype of `a`, unless `a` has an integer dtype with
+ a precision less than that of the default platform integer. In
+ that case, the default platform integer is used instead.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type of the resulting values will be cast if necessary.
+
+ Returns
+ -------
+ cumprod : ndarray
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to out is returned.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([1,2,3])
+ >>> np.cumprod(a) # intermediate results 1, 1*2
+ ... # total product 1*2*3 = 6
+ array([1, 2, 6])
+ >>> a = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> np.cumprod(a, dtype=float) # specify type of output
+ array([ 1., 2., 6., 24., 120., 720.])
+
+ The cumulative product for each column (i.e., over the rows) of `a`:
+
+ >>> np.cumprod(a, axis=0)
+ array([[ 1, 2, 3],
+ [ 4, 10, 18]])
+
+ The cumulative product for each row (i.e. over the columns) of `a`:
+
+ >>> np.cumprod(a,axis=1)
+ array([[ 1, 2, 6],
+ [ 4, 20, 120]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ndim(a):
+ """
+ Return the number of dimensions of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. If it is not already an ndarray, a conversion is
+ attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in `a`. Scalars are zero-dimensional.
+
+ See Also
+ --------
+ ndarray.ndim : equivalent method
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Examples
+ --------
+ >>> np.ndim([[1,2,3],[4,5,6]])
+ 2
+ >>> np.ndim(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.ndim(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def rank(a):
+ """
+ Return the number of dimensions of an array.
+
+ If `a` is not already an array, a conversion is attempted.
+ Scalars are zero dimensional.
+
+ Parameters
+ ----------
+ a : array_like
+ Array whose number of dimensions is desired. If `a` is not an array,
+ a conversion is attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in the array.
+
+ See Also
+ --------
+ ndim : equivalent function
+ ndarray.ndim : equivalent property
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Notes
+ -----
+ In the old Numeric package, `rank` was the term used for the number of
+ dimensions, but in Numpy `ndim` is used instead.
+
+ Examples
+ --------
+ >>> np.rank([1,2,3])
+ 1
+ >>> np.rank(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.rank(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def size(a, axis=None):
+ """
+ Return the number of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which the elements are counted. By default, give
+ the total number of elements.
+
+ Returns
+ -------
+ element_count : int
+ Number of elements along the specified axis.
+
+ See Also
+ --------
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+ ndarray.size : number of elements in array
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3],[4,5,6]])
+ >>> np.size(a)
+ 6
+ >>> np.size(a,1)
+ 3
+ >>> np.size(a,0)
+ 2
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def around(a, decimals=0, out=None):
+ """
+ Evenly round to the given number of decimals.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ decimals : int, optional
+ Number of decimal places to round to (default: 0). If
+ decimals is negative, it specifies the number of positions to
+ the left of the decimal point.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output, but the type of the output
+ values will be cast if necessary. See `doc.ufuncs` (Section
+ "Output arguments") for details.
+
+ Returns
+ -------
+ rounded_array : ndarray
+ An array of the same type as `a`, containing the rounded values.
+ Unless `out` was specified, a new array is created. A reference to
+ the result is returned.
+
+ The real and imaginary parts of complex numbers are rounded
+ separately. The result of rounding a float is a float.
+
+ See Also
+ --------
+ ndarray.round : equivalent method
+
+ ceil, fix, floor, rint, trunc
+
+
+ Notes
+ -----
+ For values exactly halfway between rounded decimal values, Numpy
+ rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0,
+ -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due
+ to the inexact representation of decimal fractions in the IEEE
+ floating point standard [1]_ and errors introduced when scaling
+ by powers of ten.
+
+ References
+ ----------
+ .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
+ .. [2] "How Futile are Mindless Assessments of
+ Roundoff in Floating-Point Computation?", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
+
+ Examples
+ --------
+ >>> np.around([0.37, 1.64])
+ array([ 0., 2.])
+ >>> np.around([0.37, 1.64], decimals=1)
+ array([ 0.4, 1.6])
+ >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value
+ array([ 0., 2., 2., 4., 4.])
+ >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned
+ array([ 1, 2, 3, 11])
+ >>> np.around([1,2,3,11], decimals=-1)
+ array([ 0, 0, 0, 10])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
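The round-half-to-even rule from the Notes section (for decimals=0) matches what Python 3's built-in ``round`` does, so it can be illustrated without numpy; note the built-in returns ints here, whereas ``around`` keeps the float type:

```python
# Python 3's round() rounds exact halves to the nearest even value,
# the same tie-breaking rule numpy's around() documents.
print([round(v) for v in [0.5, 1.5, 2.5, 3.5, 4.5]])  # [0, 2, 2, 4, 4]
```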
+
+def round_(a, decimals=0, out=None):
+ """
+ Round an array to the given number of decimals.
+
+ Refer to `around` for full documentation.
+
+ See Also
+ --------
+ around : equivalent function
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def mean(a, axis=None, dtype=None, out=None):
+ """
+ Compute the arithmetic mean along the specified axis.
+
+ Returns the average of the array elements. The average is taken over
+ the flattened array by default, otherwise over the specified axis.
+ `float64` intermediate and return values are used for integer inputs.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing numbers whose mean is desired. If `a` is not an
+ array, a conversion is attempted.
+ axis : int, optional
+ Axis along which the means are computed. The default is to compute
+ the mean of the flattened array.
+ dtype : data-type, optional
+ Type to use in computing the mean. For integer inputs, the default
+ is `float64`; for floating point inputs, it is the same as the
+ input dtype.
+ out : ndarray, optional
+ Alternate output array in which to place the result. The default
+ is ``None``; if provided, it must have the same shape as the
+ expected output, but the type will be cast if necessary.
+ See `doc.ufuncs` for details.
+
+ Returns
+ -------
+ m : ndarray, see dtype parameter above
+ If `out=None`, returns a new array containing the mean values,
+ otherwise a reference to the output array is returned.
+
+ See Also
+ --------
+ average : Weighted average
+
+ Notes
+ -----
+ The arithmetic mean is the sum of the elements along the axis divided
+ by the number of elements.
+
+ Note that for floating-point input, the mean is computed using the
+ same precision the input has. Depending on the input data, this can
+ cause the results to be inaccurate, especially for `float32` (see
+ example below). Specifying a higher-precision accumulator using the
+ `dtype` keyword can alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4]])
+ >>> np.mean(a)
+ 2.5
+ >>> np.mean(a, axis=0)
+ array([ 2., 3.])
+ >>> np.mean(a, axis=1)
+ array([ 1.5, 3.5])
+
+ In single precision, `mean` can be inaccurate:
+
+ >>> a = np.zeros((2, 512*512), dtype=np.float32)
+ >>> a[0, :] = 1.0
+ >>> a[1, :] = 0.1
+ >>> np.mean(a)
+ 0.546875
+
+ Computing the mean in float64 is more accurate:
+
+ >>> np.mean(a, dtype=np.float64)
+ 0.55000000074505806
+
+ """
+ if not hasattr(a, "mean"):
+ a = numpypy.array(a)
+ return a.mean()
+
+
+def std(a, axis=None, dtype=None, out=None, ddof=0):
+ """
+ Compute the standard deviation along the specified axis.
+
+ Returns the standard deviation, a measure of the spread of a distribution,
+ of the array elements. The standard deviation is computed for the
+ flattened array by default, otherwise over the specified axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Calculate the standard deviation of these values.
+ axis : int, optional
+ Axis along which the standard deviation is computed. The default is
+ to compute the standard deviation of the flattened array.
+ dtype : dtype, optional
+ Type to use in computing the standard deviation. For arrays of
+ integer type the default is float64, for arrays of float types it is
+ the same as the array type.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output but the type (of the calculated
+ values) will be cast if necessary.
+ ddof : int, optional
+ Means Delta Degrees of Freedom. The divisor used in calculations
+ is ``N - ddof``, where ``N`` represents the number of elements.
+ By default `ddof` is zero.
+
+ Returns
+ -------
+ standard_deviation : ndarray, see dtype parameter above.
+ If `out` is None, return a new array containing the standard deviation,
+ otherwise return a reference to the output array.
+
+ See Also
+ --------
+ var, mean
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ The standard deviation is the square root of the average of the squared
+ deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``.
+
+ The average squared deviation is normally calculated as ``x.sum() / N``, where
+ ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof``
+ is used instead. In standard statistical practice, ``ddof=1`` provides an
+ unbiased estimator of the variance of the infinite population. ``ddof=0``
+ provides a maximum likelihood estimate of the variance for normally
+ distributed variables. The standard deviation computed in this function
+ is the square root of the estimated variance, so even with ``ddof=1``, it
+ will not be an unbiased estimate of the standard deviation per se.
+
+ Note that, for complex numbers, `std` takes the absolute
+ value before squaring, so that the result is always real and nonnegative.
+
+ For floating-point input, the *std* is computed using the same
+ precision the input has. Depending on the input data, this can cause
+ the results to be inaccurate, especially for float32 (see example below).
+ Specifying a higher-accuracy accumulator using the `dtype` keyword can
+ alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4]])
+ >>> np.std(a)
+ 1.1180339887498949
+ >>> np.std(a, axis=0)
+ array([ 1., 1.])
+ >>> np.std(a, axis=1)
+ array([ 0.5, 0.5])
+
+ In single precision, std() can be inaccurate:
+
+ >>> a = np.zeros((2,512*512), dtype=np.float32)
+ >>> a[0,:] = 1.0
+ >>> a[1,:] = 0.1
+ >>> np.std(a)
+ 0.45172946707416706
+
+ Computing the standard deviation in float64 is more accurate:
+
+ >>> np.std(a, dtype=np.float64)
+ 0.44999999925552653
+
+ """
+ if not hasattr(a, "std"):
+ a = numpypy.array(a)
+ return a.std()
+
+
+def var(a, axis=None, dtype=None, out=None, ddof=0):
+ """
+ Compute the variance along the specified axis.
+
+ Returns the variance of the array elements, a measure of the spread of a
+ distribution. The variance is computed for the flattened array by
+ default, otherwise over the specified axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing numbers whose variance is desired. If `a` is not an
+ array, a conversion is attempted.
+ axis : int, optional
+ Axis along which the variance is computed. The default is to compute
+ the variance of the flattened array.
+ dtype : data-type, optional
+ Type to use in computing the variance. For arrays of integer type
+ the default is `float64`; for arrays of float types it is the same as
+ the array type.
+ out : ndarray, optional
+ Alternate output array in which to place the result. It must have
+ the same shape as the expected output, but the type is cast if
+ necessary.
+ ddof : int, optional
+ "Delta Degrees of Freedom": the divisor used in the calculation is
+ ``N - ddof``, where ``N`` represents the number of elements. By
+ default `ddof` is zero.
+
+ Returns
+ -------
+ variance : ndarray, see dtype parameter above
+ If ``out=None``, returns a new array containing the variance;
+ otherwise, a reference to the output array is returned.
+
+ See Also
+ --------
+ std : Standard deviation
+ mean : Average
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ The variance is the average of the squared deviations from the mean,
+ i.e., ``var = mean(abs(x - x.mean())**2)``.
+
+ The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``.
+ If, however, `ddof` is specified, the divisor ``N - ddof`` is used
+ instead. In standard statistical practice, ``ddof=1`` provides an
+ unbiased estimator of the variance of a hypothetical infinite population.
+ ``ddof=0`` provides a maximum likelihood estimate of the variance for
+ normally distributed variables.
+
+ Note that for complex numbers, the absolute value is taken before
+ squaring, so that the result is always real and nonnegative.
+
+ For floating-point input, the variance is computed using the same
+ precision the input has. Depending on the input data, this can cause
+ the results to be inaccurate, especially for `float32` (see example
+ below). Specifying a higher-accuracy accumulator using the ``dtype``
+ keyword can alleviate this issue.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2],[3,4]])
+ >>> np.var(a)
+ 1.25
+ >>> np.var(a,0)
+ array([ 1., 1.])
+ >>> np.var(a,1)
+ array([ 0.25, 0.25])
+
+ In single precision, var() can be inaccurate:
+
+ >>> a = np.zeros((2,512*512), dtype=np.float32)
+ >>> a[0,:] = 1.0
+ >>> a[1,:] = 0.1
+ >>> np.var(a)
+ 0.20405951142311096
+
+ Computing the variance in float64 is more accurate:
+
+ >>> np.var(a, dtype=np.float64)
+ 0.20249999932997387
+ >>> ((1-0.55)**2 + (0.1-0.55)**2)/2
+ 0.20250000000000001
+
+ """
+ if not hasattr(a, "var"):
+ a = numpypy.array(a)
+ return a.var()
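The wrapper above simply delegates to the array's own ``var`` method, converting plain sequences first. A minimal standalone sketch of the same pattern, using ``numpy`` in place of ``numpypy`` (which only exists inside PyPy):

```python
import numpy as np

def var(a):
    # Mirror of the numpypy wrapper above: delegate to the array's own
    # .var() method, converting plain sequences to arrays first.
    if not hasattr(a, "var"):
        a = np.array(a)
    return a.var()

print(var([[1, 2], [3, 4]]))  # 1.25
```

Anything with a ``var`` attribute is passed through untouched, which is why the wrapper works both on arrays and on nested lists.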
diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/test/test_fromnumeric.py
@@ -0,0 +1,109 @@
+
+from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest
+
+class AppTestFromNumeric(BaseNumpyAppTest):
+ def test_argmax(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, argmax
+ a = arange(6).reshape((2,3))
+ assert argmax(a) == 5
+ # assert (argmax(a, axis=0) == array([1, 1, 1])).all()
+ # assert (argmax(a, axis=1) == array([2, 2])).all()
+ b = arange(6)
+ b[1] = 5
+ assert argmax(b) == 1
+
+ def test_argmin(self):
+ # tests adapted from test_argmax
+ from numpypy import array, arange, argmin
+ a = arange(6).reshape((2,3))
+ assert argmin(a) == 0
+ # assert (argmin(a, axis=0) == array([0, 0, 0])).all()
+ # assert (argmin(a, axis=1) == array([0, 0])).all()
+ b = arange(6)
+ b[1] = 0
+ assert argmin(b) == 0
+
+ def test_shape(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, identity, shape
+ assert shape(identity(3)) == (3, 3)
+ assert shape([[1, 2]]) == (1, 2)
+ assert shape([0]) == (1,)
+ assert shape(0) == ()
+ # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+ # assert shape(a) == (2,)
+
+ def test_sum(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, sum, ones
+ assert sum([0.5, 1.5]) == 2.0
+ assert sum([[0, 1], [0, 5]]) == 6
+ # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1
+ # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all()
+ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all()
+ # If the accumulator is too small, overflow occurs:
+ # assert ones(128, dtype=int8).sum(dtype=int8) == -128
+
+ def test_amin(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, amin
+ a = arange(4).reshape((2,2))
+ assert amin(a) == 0
+ # # Minima along the first axis
+ # assert (amin(a, axis=0) == array([0, 1])).all()
+ # # Minima along the second axis
+ # assert (amin(a, axis=1) == array([0, 2])).all()
+ # # NaN behaviour
+ # b = arange(5, dtype=float)
+ # b[2] = NaN
+ # assert amin(b) == nan
+ # assert nanmin(b) == 0.0
+
+ def test_amax(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, arange, amax
+ a = arange(4).reshape((2,2))
+ assert amax(a) == 3
+ # assert (amax(a, axis=0) == array([2, 3])).all()
+ # assert (amax(a, axis=1) == array([1, 3])).all()
+ # # NaN behaviour
+ # b = arange(5, dtype=float)
+ # b[2] = NaN
+ # assert amax(b) == nan
+ # assert nanmax(b) == 4.0
+
+ def test_alen(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, zeros, alen
+ a = zeros((7,4,5))
+ assert a.shape[0] == 7
+ assert alen(a) == 7
+
+ def test_ndim(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, ndim
+ assert ndim([[1,2,3],[4,5,6]]) == 2
+ assert ndim(array([[1,2,3],[4,5,6]])) == 2
+ assert ndim(1) == 0
+
+ def test_rank(self):
+ # tests taken from numpy/core/fromnumeric.py docstring
+ from numpypy import array, rank
+ assert rank([[1,2,3],[4,5,6]]) == 2
+ assert rank(array([[1,2,3],[4,5,6]])) == 2
+ assert rank(1) == 0
+
+ def test_var(self):
+ from numpypy import array, var
+ a = array([[1,2],[3,4]])
+ assert var(a) == 1.25
+ # assert (var(a, 0) == array([ 1., 1.])).all()
+ # assert (var(a, 1) == array([ 0.25, 0.25])).all()
+
+ def test_std(self):
+ from numpypy import array, std
+ a = array([[1, 2], [3, 4]])
+ assert std(a) == 1.1180339887498949
+ # assert (std(a, axis=0) == array([ 1., 1.])).all()
+ # assert (std(a, axis=1) == array([ 0.5, 0.5])).all()
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py
--- a/pypy/annotation/description.py
+++ b/pypy/annotation/description.py
@@ -257,7 +257,8 @@
try:
inputcells = args.match_signature(signature, defs_s)
except ArgErr, e:
- raise TypeError, "signature mismatch: %s" % e.getmsg(self.name)
+ raise TypeError("signature mismatch: %s() %s" %
+ (self.name, e.getmsg()))
return inputcells
def specialize(self, inputcells, op=None):
diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst
--- a/pypy/doc/coding-guide.rst
+++ b/pypy/doc/coding-guide.rst
@@ -175,15 +175,15 @@
RPython
=================
-RPython Definition, not
------------------------
+RPython Definition
+------------------
-The list and exact details of the "RPython" restrictions are a somewhat
-evolving topic. In particular, we have no formal language definition
-as we find it more practical to discuss and evolve the set of
-restrictions while working on the whole program analysis. If you
-have any questions about the restrictions below then please feel
-free to mail us at pypy-dev at codespeak net.
+RPython is a restricted subset of Python that is amenable to static analysis.
+Although there are additions to the language and some things might surprisingly
+work, this is a rough list of restrictions that should be considered. Note
+that there are tons of special-cased restrictions that you'll encounter
+as you go. The exact definition is "RPython is everything that our translation
+toolchain can accept" :)
.. _`wrapped object`: coding-guide.html#wrapping-rules
@@ -198,7 +198,7 @@
contain both a string and a int must be avoided. It is allowed to
mix None (basically with the role of a null pointer) with many other
types: `wrapped objects`, class instances, lists, dicts, strings, etc.
- but *not* with int and floats.
+ but *not* with int, floats or tuples.
**constants**
@@ -209,9 +209,12 @@
have this restriction, so if you need mutable global state, store it
in the attributes of some prebuilt singleton instance.
+
+
**control structures**
- all allowed but yield, ``for`` loops restricted to builtin types
+ all allowed; ``for`` loops are restricted to builtin types, and
+ generators are very restricted.
**range**
@@ -226,7 +229,8 @@
**generators**
- generators are not supported.
+ generators are supported, but their exact scope is very limited. You can't
+ merge two different generators at one control point.
**exceptions**
@@ -245,22 +249,27 @@
**strings**
- a lot of, but not all string methods are supported. Indexes can be
+ many, but not all, string methods are supported, and those that are
+ supported do not necessarily accept all arguments. Indexes can be
negative. In case they are not, then you get slightly more efficient
code if the translator can prove that they are non-negative. When
slicing a string it is necessary to prove that the slice start and
- stop indexes are non-negative.
+ stop indexes are non-negative. There is no implicit str-to-unicode cast
+ anywhere.
**tuples**
no variable-length tuples; use them to store or return pairs or n-tuples of
- values. Each combination of types for elements and length constitute a separate
- and not mixable type.
+ values. Each combination of element types and length constitutes
+ a separate, non-mixable type.
**lists**
lists are used as an allocated array. Lists are over-allocated, so list.append()
- is reasonably fast. Negative or out-of-bound indexes are only allowed for the
+ is reasonably fast. However, if you use a fixed-size list, the code
+ is more efficient. The annotator can usually figure out that your
+ list is fixed-size, even when you use a list comprehension.
+ Negative or out-of-bound indexes are only allowed for the
most common operations, as follows:
- *indexing*:
@@ -287,16 +296,14 @@
**dicts**
- dicts with a unique key type only, provided it is hashable.
- String keys have been the only allowed key types for a while, but this was generalized.
- After some re-optimization,
- the implementation could safely decide that all string dict keys should be interned.
+ dicts with a unique key type only, provided it is hashable. Custom
+ hash functions and custom equality will not be honored.
+ Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions.
**list comprehensions**
- may be used to create allocated, initialized arrays.
- After list over-allocation was introduced, there is no longer any restriction.
+ May be used to create allocated, initialized arrays.
**functions**
@@ -334,9 +341,7 @@
**objects**
- in PyPy, wrapped objects are borrowed from the object space. Just like
- in CPython, code that needs e.g. a dictionary can use a wrapped dict
- and the object space operations on it.
+ Normal rules apply.
This layout makes the number of types to take care about quite limited.
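The **dicts** entry above points at ``pypy.rlib.objectmodel.r_dict`` for custom hashing. As a rough plain-Python model of that idea (the class below is illustrative only, not the real RPython API), the equality and hash functions are supplied explicitly instead of being taken from the key objects:

```python
class RDict(object):
    """Toy model of r_dict: key equality and hashing are passed in
    explicitly rather than read from the key objects' methods."""

    def __init__(self, key_eq, key_hash):
        self._key_eq = key_eq
        self._key_hash = key_hash
        self._buckets = {}

    def __setitem__(self, key, value):
        bucket = self._buckets.setdefault(self._key_hash(key), [])
        for i, (k, _) in enumerate(bucket):
            if self._key_eq(k, key):
                bucket[i] = (key, value)  # replace existing entry
                return
        bucket.append((key, value))

    def __getitem__(self, key):
        for k, v in self._buckets.get(self._key_hash(key), []):
            if self._key_eq(k, key):
                return v
        raise KeyError(key)

# Example: case-insensitive string keys.
d = RDict(lambda a, b: a.lower() == b.lower(), lambda s: hash(s.lower()))
d["Foo"] = 1
print(d["FOO"])  # 1
```

The real ``r_dict`` is a dict-like RPython object built the same way: you hand it the two functions at construction time, since RPython dicts never consult custom ``__eq__``/``__hash__``.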
diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py
--- a/pypy/interpreter/argument.py
+++ b/pypy/interpreter/argument.py
@@ -428,8 +428,8 @@
return self._match_signature(w_firstarg,
scope_w, signature, defaults_w, 0)
except ArgErr, e:
- raise OperationError(self.space.w_TypeError,
- self.space.wrap(e.getmsg(fnname)))
+ raise operationerrfmt(self.space.w_TypeError,
+ "%s() %s", fnname, e.getmsg())
def _parse(self, w_firstarg, signature, defaults_w, blindargs=0):
"""Parse args and kwargs according to the signature of a code object,
@@ -450,8 +450,8 @@
try:
return self._parse(w_firstarg, signature, defaults_w, blindargs)
except ArgErr, e:
- raise OperationError(self.space.w_TypeError,
- self.space.wrap(e.getmsg(fnname)))
+ raise operationerrfmt(self.space.w_TypeError,
+ "%s() %s", fnname, e.getmsg())
@staticmethod
def frompacked(space, w_args=None, w_kwds=None):
@@ -626,7 +626,7 @@
class ArgErr(Exception):
- def getmsg(self, fnname):
+ def getmsg(self):
raise NotImplementedError
class ArgErrCount(ArgErr):
@@ -642,11 +642,10 @@
self.num_args = got_nargs
self.num_kwds = nkwds
- def getmsg(self, fnname):
+ def getmsg(self):
n = self.expected_nargs
if n == 0:
- msg = "%s() takes no arguments (%d given)" % (
- fnname,
+ msg = "takes no arguments (%d given)" % (
self.num_args + self.num_kwds)
else:
defcount = self.num_defaults
@@ -672,8 +671,7 @@
msg2 = " non-keyword"
else:
msg2 = ""
- msg = "%s() takes %s %d%s argument%s (%d given)" % (
- fnname,
+ msg = "takes %s %d%s argument%s (%d given)" % (
msg1,
n,
msg2,
@@ -686,9 +684,8 @@
def __init__(self, argname):
self.argname = argname
- def getmsg(self, fnname):
- msg = "%s() got multiple values for keyword argument '%s'" % (
- fnname,
+ def getmsg(self):
+ msg = "got multiple values for keyword argument '%s'" % (
self.argname)
return msg
@@ -722,13 +719,11 @@
break
self.kwd_name = name
- def getmsg(self, fnname):
+ def getmsg(self):
if self.num_kwds == 1:
- msg = "%s() got an unexpected keyword argument '%s'" % (
- fnname,
+ msg = "got an unexpected keyword argument '%s'" % (
self.kwd_name)
else:
- msg = "%s() got %d unexpected keyword arguments" % (
- fnname,
+ msg = "got %d unexpected keyword arguments" % (
self.num_kwds)
return msg
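The refactoring above moves the function name out of every ``getmsg()`` implementation and into the single catch site, which formats ``"%s() %s"`` once. A small sketch of the resulting pattern (class and method names simplified from the real ones):

```python
class ArgErr(Exception):
    def getmsg(self):
        raise NotImplementedError

class ArgErrCount(ArgErr):
    def __init__(self, given):
        self.given = given

    def getmsg(self):
        # Note: no function name here; the caller prepends it.
        return "takes no arguments (%d given)" % self.given

def call_site(fnname, err):
    # The catch site adds the name exactly once, for every error class.
    return "%s() %s" % (fnname, err.getmsg())

print(call_site("foo", ArgErrCount(1)))  # foo() takes no arguments (1 given)
```

This is why the tests in this diff change from ``err.getmsg('foo')`` to ``err.getmsg()`` and assert the name-free message.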
diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py
--- a/pypy/interpreter/baseobjspace.py
+++ b/pypy/interpreter/baseobjspace.py
@@ -1591,12 +1591,15 @@
'ArithmeticError',
'AssertionError',
'AttributeError',
+ 'BaseException',
+ 'DeprecationWarning',
'EOFError',
'EnvironmentError',
'Exception',
'FloatingPointError',
'IOError',
'ImportError',
+ 'ImportWarning',
'IndentationError',
'IndexError',
'KeyError',
@@ -1617,7 +1620,10 @@
'TabError',
'TypeError',
'UnboundLocalError',
+ 'UnicodeDecodeError',
'UnicodeError',
+ 'UnicodeEncodeError',
+ 'UnicodeTranslateError',
'ValueError',
'ZeroDivisionError',
'UnicodeEncodeError',
diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py
--- a/pypy/interpreter/test/test_argument.py
+++ b/pypy/interpreter/test/test_argument.py
@@ -393,8 +393,8 @@
class FakeArgErr(ArgErr):
- def getmsg(self, fname):
- return "msg "+fname
+ def getmsg(self):
+ return "msg"
def _match_signature(*args):
raise FakeArgErr()
@@ -404,7 +404,7 @@
excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo",
Signature(["a", "b"], None, None))
assert excinfo.value.w_type is TypeError
- assert excinfo.value._w_value == "msg foo"
+ assert excinfo.value.get_w_value(space) == "foo() msg"
def test_args_parsing_into_scope(self):
@@ -448,8 +448,8 @@
class FakeArgErr(ArgErr):
- def getmsg(self, fname):
- return "msg "+fname
+ def getmsg(self):
+ return "msg"
def _match_signature(*args):
raise FakeArgErr()
@@ -460,7 +460,7 @@
"obj", [None, None], "foo",
Signature(["a", "b"], None, None))
assert excinfo.value.w_type is TypeError
- assert excinfo.value._w_value == "msg foo"
+ assert excinfo.value.get_w_value(space) == "foo() msg"
def test_topacked_frompacked(self):
space = DummySpace()
@@ -493,35 +493,35 @@
# got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg,
# defaults_w, missing_args
err = ArgErrCount(1, 0, 0, False, False, None, 0)
- s = err.getmsg('foo')
- assert s == "foo() takes no arguments (1 given)"
+ s = err.getmsg()
+ assert s == "takes no arguments (1 given)"
err = ArgErrCount(0, 0, 1, False, False, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 argument (0 given)"
err = ArgErrCount(3, 0, 2, False, False, [], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 2 arguments (3 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 2 arguments (3 given)"
err = ArgErrCount(3, 0, 2, False, False, ['a'], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes at most 2 arguments (3 given)"
+ s = err.getmsg()
+ assert s == "takes at most 2 arguments (3 given)"
err = ArgErrCount(1, 0, 2, True, False, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 2 arguments (1 given)"
+ s = err.getmsg()
+ assert s == "takes at least 2 arguments (1 given)"
err = ArgErrCount(0, 1, 2, True, False, ['a'], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes at least 1 non-keyword argument (0 given)"
err = ArgErrCount(2, 1, 1, False, True, [], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 non-keyword argument (2 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 non-keyword argument (2 given)"
err = ArgErrCount(0, 1, 1, False, True, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes exactly 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes exactly 1 non-keyword argument (0 given)"
err = ArgErrCount(0, 1, 1, True, True, [], 1)
- s = err.getmsg('foo')
- assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+ s = err.getmsg()
+ assert s == "takes at least 1 non-keyword argument (0 given)"
err = ArgErrCount(2, 1, 1, False, True, ['a'], 0)
- s = err.getmsg('foo')
- assert s == "foo() takes at most 1 non-keyword argument (2 given)"
+ s = err.getmsg()
+ assert s == "takes at most 1 non-keyword argument (2 given)"
def test_bad_type_for_star(self):
space = self.space
@@ -543,12 +543,12 @@
def test_unknown_keywords(self):
space = DummySpace()
err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None)
- s = err.getmsg('foo')
- assert s == "foo() got an unexpected keyword argument 'b'"
+ s = err.getmsg()
+ assert s == "got an unexpected keyword argument 'b'"
err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'],
[True, False, False], None)
- s = err.getmsg('foo')
- assert s == "foo() got 2 unexpected keyword arguments"
+ s = err.getmsg()
+ assert s == "got 2 unexpected keyword arguments"
def test_unknown_unicode_keyword(self):
class DummySpaceUnicode(DummySpace):
@@ -558,13 +558,13 @@
err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'],
[True, False, True, True],
[unichr(0x1234), u'b', u'c'])
- s = err.getmsg('foo')
- assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'"
+ s = err.getmsg()
+ assert s == "got an unexpected keyword argument '\xe1\x88\xb4'"
def test_multiple_values(self):
err = ArgErrMultipleValues('bla')
- s = err.getmsg('foo')
- assert s == "foo() got multiple values for keyword argument 'bla'"
+ s = err.getmsg()
+ assert s == "got multiple values for keyword argument 'bla'"
class AppTestArgument:
def test_error_message(self):
diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py
--- a/pypy/jit/backend/x86/regalloc.py
+++ b/pypy/jit/backend/x86/regalloc.py
@@ -683,7 +683,7 @@
self.xrm.possibly_free_var(op.getarg(0))
def consider_cast_int_to_float(self, op):
- loc0 = self.rm.loc(op.getarg(0))
+ loc0 = self.rm.make_sure_var_in_reg(op.getarg(0))
loc1 = self.xrm.force_allocate_reg(op.result)
self.Perform(op, [loc0], loc1)
self.rm.possibly_free_var(op.getarg(0))
diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py
--- a/pypy/jit/backend/x86/test/test_runner.py
+++ b/pypy/jit/backend/x86/test/test_runner.py
@@ -420,8 +420,8 @@
debug._log = None
#
assert ops_offset is looptoken._x86_ops_offset
- # getfield_raw/int_add/setfield_raw + ops + None
- assert len(ops_offset) == 3 + len(operations) + 1
+ # 2*(getfield_raw/int_add/setfield_raw) + ops + None
+ assert len(ops_offset) == 2*3 + len(operations) + 1
assert (ops_offset[operations[0]] <=
ops_offset[operations[1]] <=
ops_offset[operations[2]] <=
diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py
--- a/pypy/jit/metainterp/optimizeopt/fficall.py
+++ b/pypy/jit/metainterp/optimizeopt/fficall.py
@@ -234,11 +234,11 @@
# longlongs are treated as floats, see
# e.g. llsupport/descr.py:getDescrClass
is_float = True
- elif kind == 'u':
+ elif kind == 'u' or kind == 's':
# they're all False
pass
else:
- assert False, "unsupported ffitype or kind"
+ raise NotImplementedError("unsupported ffitype or kind: %s" % kind)
#
fieldsize = rffi.getintfield(ffitype, 'c_size')
return self.optimizer.cpu.interiorfielddescrof_dynamic(
diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
--- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
+++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
@@ -442,6 +442,22 @@
"""
self.optimize_loop(ops, expected)
+ def test_optimizer_renaming_boxes_not_imported(self):
+ ops = """
+ [p1]
+ i1 = strlen(p1)
+ label(p1)
+ jump(p1)
+ """
+ expected = """
+ [p1]
+ i1 = strlen(p1)
+ label(p1, i1)
+ i11 = same_as(i1)
+ jump(p1, i11)
+ """
+ self.optimize_loop(ops, expected)
+
class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin):
diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py
--- a/pypy/jit/metainterp/optimizeopt/unroll.py
+++ b/pypy/jit/metainterp/optimizeopt/unroll.py
@@ -271,6 +271,10 @@
if newresult is not op.result and not newvalue.is_constant():
op = ResOperation(rop.SAME_AS, [op.result], newresult)
self.optimizer._newoperations.append(op)
+ if self.optimizer.loop.logops:
+ debug_print(' Falling back to add extra: ' +
+ self.optimizer.loop.logops.repr_of_resop(op))
+
self.optimizer.flush()
self.optimizer.emitting_dissabled = False
@@ -435,7 +439,13 @@
return
for a in op.getarglist():
if not isinstance(a, Const) and a not in seen:
- self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen)
+ self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer,
+ seen)
+
+ if self.optimizer.loop.logops:
+ debug_print(' Emitting short op: ' +
+ self.optimizer.loop.logops.repr_of_resop(op))
+
optimizer.send_extra_operation(op)
seen[op.result] = True
if op.is_ovf():
diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py
--- a/pypy/jit/metainterp/resoperation.py
+++ b/pypy/jit/metainterp/resoperation.py
@@ -16,15 +16,13 @@
# debug
name = ""
pc = 0
+ opnum = 0
def __init__(self, result):
self.result = result
- # methods implemented by each concrete class
- # ------------------------------------------
-
def getopnum(self):
- raise NotImplementedError
+ return self.opnum
# methods implemented by the arity mixins
# ---------------------------------------
@@ -590,12 +588,9 @@
baseclass = PlainResOp
mixin = arity2mixin.get(arity, N_aryOp)
- def getopnum(self):
- return opnum
-
cls_name = '%s_OP' % name
bases = (get_base_class(mixin, baseclass),)
- dic = {'getopnum': getopnum}
+ dic = {'opnum': opnum}
return type(cls_name, bases, dic)
setup(__name__ == '__main__') # print out the table when run directly
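The change above replaces a per-opcode ``getopnum`` closure with a plain class attribute read by one shared method. A minimal sketch of that metaprogramming pattern (the opcode number below is an arbitrary example value):

```python
class AbstractResOp(object):
    opnum = 0  # overridden as a class attribute in each generated subclass

    def getopnum(self):
        # One shared method; no per-opcode closure needed.
        return self.opnum

def make_opclass(name, opnum):
    # Build the opcode class with opnum stored in the class dict,
    # as the diff does with: dic = {'opnum': opnum}
    return type('%s_OP' % name, (AbstractResOp,), {'opnum': opnum})

INT_ADD = make_opclass('INT_ADD', 7)
print(INT_ADD().getopnum())  # 7
```

Because ``opnum`` is now a class attribute, ``getopnum`` also works when called unbound on the class itself, which is what the updated ``im_func(cls)`` tests exercise.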
diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py
--- a/pypy/jit/metainterp/test/test_fficall.py
+++ b/pypy/jit/metainterp/test/test_fficall.py
@@ -148,28 +148,38 @@
self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4,
'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2})
- def test_array_getitem_uint8(self):
+ def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE):
+ reds = ["n", "i", "s", "data"]
+ if COMPUTE_TYPE is lltype.Float:
+ # Move the float var to the back.
+ reds.remove("s")
+ reds.append("s")
myjitdriver = JitDriver(
greens = [],
- reds = ["n", "i", "s", "data"],
+ reds = reds,
)
def f(data, n):
- i = s = 0
+ i = 0
+ s = rffi.cast(COMPUTE_TYPE, 0)
while i < n:
myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data)
- s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0))
+ s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0))
i += 1
return s
+ def main(n):
+ with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data:
+ data[0] = rffi.cast(TYPE, 200)
+ return f(data, n)
+ assert self.meta_interp(main, [10]) == 2000
- def main(n):
- with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data:
- data[0] = rffi.cast(rffi.UCHAR, 200)
- return f(data, n)
-
- assert self.meta_interp(main, [10]) == 2000
+ def test_array_getitem_uint8(self):
+ self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed)
self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2,
'guard_true': 2, 'int_add': 4})
+ def test_array_getitem_float(self):
+ self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float)
+
class TestFfiCall(FfiCallTests, LLJitMixin):
supports_all = False
diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py
--- a/pypy/jit/metainterp/test/test_resoperation.py
+++ b/pypy/jit/metainterp/test/test_resoperation.py
@@ -30,17 +30,17 @@
cls = rop.opclasses[rop.rop.INT_ADD]
assert issubclass(cls, rop.PlainResOp)
assert issubclass(cls, rop.BinaryOp)
- assert cls.getopnum.im_func(None) == rop.rop.INT_ADD
+ assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD
cls = rop.opclasses[rop.rop.CALL]
assert issubclass(cls, rop.ResOpWithDescr)
assert issubclass(cls, rop.N_aryOp)
- assert cls.getopnum.im_func(None) == rop.rop.CALL
+ assert cls.getopnum.im_func(cls) == rop.rop.CALL
cls = rop.opclasses[rop.rop.GUARD_TRUE]
assert issubclass(cls, rop.GuardResOp)
assert issubclass(cls, rop.UnaryOp)
- assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE
+ assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE
def test_mixins_in_common_base():
INT_ADD = rop.opclasses[rop.rop.INT_ADD]
diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py
--- a/pypy/module/_lsprof/interp_lsprof.py
+++ b/pypy/module/_lsprof/interp_lsprof.py
@@ -19,8 +19,9 @@
# cpu affinity settings
srcdir = py.path.local(pypydir).join('translator', 'c', 'src')
-eci = ExternalCompilationInfo(separate_module_files=
- [srcdir.join('profiling.c')])
+eci = ExternalCompilationInfo(
+ separate_module_files=[srcdir.join('profiling.c')],
+ export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling'])
c_setup_profiling = rffi.llexternal('pypy_setup_profiling',
[], lltype.Void,
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py
--- a/pypy/module/cpyext/api.py
+++ b/pypy/module/cpyext/api.py
@@ -23,6 +23,7 @@
from pypy.interpreter.function import StaticMethod
from pypy.objspace.std.sliceobject import W_SliceObject
from pypy.module.__builtin__.descriptor import W_Property
+from pypy.module.__builtin__.interp_memoryview import W_MemoryView
from pypy.rlib.entrypoint import entrypoint
from pypy.rlib.unroll import unrolling_iterable
from pypy.rlib.objectmodel import specialize
@@ -387,6 +388,8 @@
"Float": "space.w_float",
"Long": "space.w_long",
"Complex": "space.w_complex",
+ "ByteArray": "space.w_bytearray",
+ "MemoryView": "space.gettypeobject(W_MemoryView.typedef)",
"BaseObject": "space.w_object",
'None': 'space.type(space.w_None)',
'NotImplemented': 'space.type(space.w_NotImplemented)',
diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py
--- a/pypy/module/cpyext/buffer.py
+++ b/pypy/module/cpyext/buffer.py
@@ -1,6 +1,36 @@
+from pypy.interpreter.error import OperationError
from pypy.rpython.lltypesystem import rffi, lltype
from pypy.module.cpyext.api import (
cpython_api, CANNOT_FAIL, Py_buffer)
+from pypy.module.cpyext.pyobject import PyObject
+
+ at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyObject_CheckBuffer(space, w_obj):
+ """Return 1 if obj supports the buffer interface otherwise 0."""
+ return 0 # the bf_getbuffer field is never filled by cpyext
+
+ at cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real],
+ rffi.INT_real, error=-1)
+def PyObject_GetBuffer(space, w_obj, view, flags):
+ """Export obj into a Py_buffer, view. These arguments must
+ never be NULL. The flags argument is a bit field indicating what
+ kind of buffer the caller is prepared to deal with and therefore what
+ kind of buffer the exporter is allowed to return. The buffer interface
+ allows for complicated memory sharing possibilities, but some caller may
+ not be able to handle all the complexity but may want to see if the
+ exporter will let them take a simpler view to its memory.
+
+ Some exporters may not be able to share memory in every possible way and
+ may need to raise errors to signal to some consumers that something is
+ just not possible. These errors should be a BufferError unless
+ there is another error that is actually causing the problem. The
+ exporter can use flags information to simplify how much of the
+ Py_buffer structure is filled in with non-default values and/or
+ raise an error if the object can't support a simpler view of its memory.
+
+ 0 is returned on success and -1 on error."""
+ raise OperationError(space.w_TypeError, space.wrap(
+ 'PyPy does not yet implement the new buffer interface'))
@cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL)
def PyBuffer_IsContiguous(space, view, fortran):
diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h
--- a/pypy/module/cpyext/include/object.h
+++ b/pypy/module/cpyext/include/object.h
@@ -123,10 +123,6 @@
typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *);
typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **);
-typedef int (*objobjproc)(PyObject *, PyObject *);
-typedef int (*visitproc)(PyObject *, void *);
-typedef int (*traverseproc)(PyObject *, visitproc, void *);
-
/* Py3k buffer interface */
typedef struct bufferinfo {
void *buf;
@@ -153,6 +149,41 @@
typedef int (*getbufferproc)(PyObject *, Py_buffer *, int);
typedef void (*releasebufferproc)(PyObject *, Py_buffer *);
+ /* Flags for getting buffers */
+#define PyBUF_SIMPLE 0
+#define PyBUF_WRITABLE 0x0001
+/* we used to include an E, backwards compatible alias */
+#define PyBUF_WRITEABLE PyBUF_WRITABLE
+#define PyBUF_FORMAT 0x0004
+#define PyBUF_ND 0x0008
+#define PyBUF_STRIDES (0x0010 | PyBUF_ND)
+#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES)
+#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES)
+#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES)
+#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES)
+
+#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE)
+#define PyBUF_CONTIG_RO (PyBUF_ND)
+
+#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE)
+#define PyBUF_STRIDED_RO (PyBUF_STRIDES)
+
+#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT)
+
+#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT)
+
+
+#define PyBUF_READ 0x100
+#define PyBUF_WRITE 0x200
+#define PyBUF_SHADOW 0x400
+/* end Py3k buffer interface */
+
+typedef int (*objobjproc)(PyObject *, PyObject *);
+typedef int (*visitproc)(PyObject *, void *);
+typedef int (*traverseproc)(PyObject *, visitproc, void *);
+
typedef struct {
/* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all
arguments are guaranteed to be of the object's type (modulo
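The ``PyBUF_*`` flags added to the header above form a small lattice: compound requests are ORs of simpler ones (e.g. ``PyBUF_STRIDES`` implies ``PyBUF_ND``). A quick Python mirror of a few values copied from the header makes the implication check explicit:

```python
# Values copied from the PyBUF_* defines in the header above.
PyBUF_WRITABLE = 0x0001
PyBUF_FORMAT   = 0x0004
PyBUF_ND       = 0x0008
PyBUF_STRIDES  = 0x0010 | PyBUF_ND
PyBUF_RECORDS  = PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT

def implies(flags, requirement):
    # A flag combination satisfies a requirement iff all of the
    # requirement's bits are set in it.
    return flags & requirement == requirement

print(implies(PyBUF_RECORDS, PyBUF_ND))   # True
print(implies(PyBUF_ND, PyBUF_STRIDES))   # False
```

An exporter can use exactly this kind of bit test on the ``flags`` argument of ``PyObject_GetBuffer`` to decide which fields of the ``Py_buffer`` struct it must fill in.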
diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h
--- a/pypy/module/cpyext/include/pystate.h
+++ b/pypy/module/cpyext/include/pystate.h
@@ -5,7 +5,7 @@
struct _is; /* Forward */
typedef struct _is {
- int _foo;
+ struct _is *next;
} PyInterpreterState;
typedef struct _ts {
diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py
--- a/pypy/module/cpyext/pystate.py
+++ b/pypy/module/cpyext/pystate.py
@@ -2,7 +2,10 @@
cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct)
from pypy.rpython.lltypesystem import rffi, lltype
-PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ()))
+PyInterpreterStateStruct = lltype.ForwardReference()
+PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct)
+cpython_struct(
+ "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct)
PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)]))
@cpython_api([], PyThreadState, error=CANNOT_FAIL)
@@ -54,7 +57,8 @@
class InterpreterState(object):
def __init__(self, space):
- self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True)
+ self.interpreter_state = lltype.malloc(
+ PyInterpreterState.TO, flavor='raw', zero=True, immortal=True)
def new_thread_state(self):
capsule = ThreadStateCapsule()
diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py
--- a/pypy/module/cpyext/stubs.py
+++ b/pypy/module/cpyext/stubs.py
@@ -34,141 +34,6 @@
@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
def PyObject_CheckBuffer(space, obj):
- """Return 1 if obj supports the buffer interface otherwise 0."""
- raise NotImplementedError
-
- at cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1)
-def PyObject_GetBuffer(space, obj, view, flags):
- """Export obj into a Py_buffer, view. These arguments must
- never be NULL. The flags argument is a bit field indicating what
- kind of buffer the caller is prepared to deal with and therefore what
- kind of buffer the exporter is allowed to return. The buffer interface
- allows for complicated memory sharing possibilities, but some caller may
- not be able to handle all the complexity but may want to see if the
- exporter will let them take a simpler view to its memory.
-
- Some exporters may not be able to share memory in every possible way and
- may need to raise errors to signal to some consumers that something is
- just not possible. These errors should be a BufferError unless
- there is another error that is actually causing the problem. The
- exporter can use flags information to simplify how much of the
- Py_buffer structure is filled in with non-default values and/or
- raise an error if the object can't support a simpler view of its memory.
-
- 0 is returned on success and -1 on error.
-
- The following table gives possible values to the flags arguments.
-
- Flag
-
- Description
-
- PyBUF_SIMPLE
-
- This is the default flag state. The returned
- buffer may or may not have writable memory. The
- format of the data will be assumed to be unsigned
- bytes. This is a "stand-alone" flag constant. It
- never needs to be '|'d to the others. The exporter
- will raise an error if it cannot provide such a
- contiguous buffer of bytes.
-
- PyBUF_WRITABLE
-
- The returned buffer must be writable. If it is
- not writable, then raise an error.
-
- PyBUF_STRIDES
-
- This implies PyBUF_ND. The returned
- buffer must provide strides information (i.e. the
- strides cannot be NULL). This would be used when
- the consumer can handle strided, discontiguous
- arrays. Handling strides automatically assumes
- you can handle shape. The exporter can raise an
- error if a strided representation of the data is
- not possible (i.e. without the suboffsets).
-
- PyBUF_ND
-
- The returned buffer must provide shape
- information. The memory will be assumed C-style
- contiguous (last dimension varies the
- fastest). The exporter may raise an error if it
- cannot provide this kind of contiguous buffer. If
- this is not given then shape will be NULL.
-
- PyBUF_C_CONTIGUOUS
- PyBUF_F_CONTIGUOUS
- PyBUF_ANY_CONTIGUOUS
-
- These flags indicate that the contiguity returned
- buffer must be respectively, C-contiguous (last
- dimension varies the fastest), Fortran contiguous
- (first dimension varies the fastest) or either
- one. All of these flags imply
- PyBUF_STRIDES and guarantee that the
- strides buffer info structure will be filled in
- correctly.
-
- PyBUF_INDIRECT
-
- This flag indicates the returned buffer must have
- suboffsets information (which can be NULL if no
- suboffsets are needed). This can be used when
- the consumer can handle indirect array
- referencing implied by these suboffsets. This
- implies PyBUF_STRIDES.
-
- PyBUF_FORMAT
-
- The returned buffer must have true format
- information if this flag is provided. This would
- be used when the consumer is going to be checking
- for what 'kind' of data is actually stored. An
- exporter should always be able to provide this
- information if requested. If format is not
- explicitly requested then the format must be
- returned as NULL (which means 'B', or
- unsigned bytes)
-
- PyBUF_STRIDED
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_WRITABLE).
-
- PyBUF_STRIDED_RO
-
- This is equivalent to (PyBUF_STRIDES).
-
- PyBUF_RECORDS
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_FORMAT | PyBUF_WRITABLE).
-
- PyBUF_RECORDS_RO
-
- This is equivalent to (PyBUF_STRIDES |
- PyBUF_FORMAT).
-
- PyBUF_FULL
-
- This is equivalent to (PyBUF_INDIRECT |
- PyBUF_FORMAT | PyBUF_WRITABLE).
-
- PyBUF_FULL_RO
-
- This is equivalent to (PyBUF_INDIRECT |
- PyBUF_FORMAT).
-
- PyBUF_CONTIG
-
- This is equivalent to (PyBUF_ND |
- PyBUF_WRITABLE).
-
- PyBUF_CONTIG_RO
-
- This is equivalent to (PyBUF_ND)."""
raise NotImplementedError
@cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL)
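The docstring deleted above enumerates the PyBUF request flags and the equivalences between the composite flags. Those relationships can be checked with a small Python sketch; the numeric values are an assumption taken from CPython's Include/object.h, not from the PyPy sources in this diff:

```python
# PyBUF_* request flags, with values as defined in CPython's Include/object.h
PyBUF_SIMPLE         = 0
PyBUF_WRITABLE       = 0x0001
PyBUF_FORMAT         = 0x0004
PyBUF_ND             = 0x0008
PyBUF_STRIDES        = 0x0010 | PyBUF_ND
PyBUF_C_CONTIGUOUS   = 0x0020 | PyBUF_STRIDES
PyBUF_F_CONTIGUOUS   = 0x0040 | PyBUF_STRIDES
PyBUF_ANY_CONTIGUOUS = 0x0080 | PyBUF_STRIDES
PyBUF_INDIRECT       = 0x0100 | PyBUF_STRIDES

# Composite flags, combined exactly as the removed docstring describes them
PyBUF_CONTIG     = PyBUF_ND | PyBUF_WRITABLE
PyBUF_CONTIG_RO  = PyBUF_ND
PyBUF_STRIDED    = PyBUF_STRIDES | PyBUF_WRITABLE
PyBUF_STRIDED_RO = PyBUF_STRIDES
PyBUF_RECORDS    = PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT
PyBUF_RECORDS_RO = PyBUF_STRIDES | PyBUF_FORMAT
PyBUF_FULL       = PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT
PyBUF_FULL_RO    = PyBUF_INDIRECT | PyBUF_FORMAT

# Every strided/indirect flag implies PyBUF_STRIDES, which implies PyBUF_ND
assert PyBUF_C_CONTIGUOUS & PyBUF_STRIDES == PyBUF_STRIDES
assert PyBUF_INDIRECT & PyBUF_ND == PyBUF_ND
```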
diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py
--- a/pypy/module/cpyext/test/test_pystate.py
+++ b/pypy/module/cpyext/test/test_pystate.py
@@ -37,6 +37,7 @@
def test_thread_state_interp(self, space, api):
ts = api.PyThreadState_Get()
assert ts.c_interp == api.PyInterpreterState_Head()
+ assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO)
def test_basic_threadstate_dance(self, space, api):
# Let extension modules call these functions,
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -9,7 +9,7 @@
appleveldefs = {}
class Module(MixedModule):
- applevel_name = 'numpypy'
+ applevel_name = '_numpypy'
submodules = {
'pypy': PyPyModule
@@ -48,6 +48,7 @@
'int_': 'interp_boxes.W_LongBox',
'inexact': 'interp_boxes.W_InexactBox',
'floating': 'interp_boxes.W_FloatingBox',
+ 'float_': 'interp_boxes.W_Float64Box',
'float32': 'interp_boxes.W_Float32Box',
'float64': 'interp_boxes.W_Float64Box',
}
diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py
--- a/pypy/module/micronumpy/app_numpy.py
+++ b/pypy/module/micronumpy/app_numpy.py
@@ -1,6 +1,6 @@
import math
-import numpypy
+import _numpypy
inf = float("inf")
@@ -14,29 +14,29 @@
return mean(a)
def identity(n, dtype=None):
- a = numpypy.zeros((n,n), dtype=dtype)
+ a = _numpypy.zeros((n,n), dtype=dtype)
for i in range(n):
a[i][i] = 1
return a
def mean(a):
if not hasattr(a, "mean"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.mean()
def sum(a):
if not hasattr(a, "sum"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.sum()
def min(a):
if not hasattr(a, "min"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.min()
def max(a):
if not hasattr(a, "max"):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.max()
def arange(start, stop=None, step=1, dtype=None):
@@ -47,9 +47,9 @@
stop = start
start = 0
if dtype is None:
- test = numpypy.array([start, stop, step, 0])
+ test = _numpypy.array([start, stop, step, 0])
dtype = test.dtype
- arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype)
+ arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype)
i = start
for j in range(arr.size):
arr[j] = i
@@ -90,5 +90,5 @@
you should assign the new shape to the shape attribute of the array
'''
if not hasattr(a, 'reshape'):
- a = numpypy.array(a)
+ a = _numpypy.array(a)
return a.reshape(shape)
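The `arange` patched above sizes the result with `ceil((stop - start) / step)` and then walks forward by `step`. A minimal NumPy-free sketch of that same loop (plain lists stand in for `_numpypy` arrays, and the dtype probing is omitted):

```python
import math

def arange(start, stop=None, step=1):
    # Mirrors the loop in app_numpy.py: size the output, then fill by stepping.
    if stop is None:
        stop = start
        start = 0
    n = int(math.ceil((stop - start) / float(step)))
    arr = [0] * max(n, 0)
    i = start
    for j in range(len(arr)):
        arr[j] = i
        i += step
    return arr
```

Note that the `ceil` sizing also handles negative steps and non-integer steps, which is why the real implementation probes `[start, stop, step, 0]` for a common dtype first.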
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py
--- a/pypy/module/micronumpy/interp_boxes.py
+++ b/pypy/module/micronumpy/interp_boxes.py
@@ -78,6 +78,7 @@
descr_sub = _binop_impl("subtract")
descr_mul = _binop_impl("multiply")
descr_div = _binop_impl("divide")
+ descr_pow = _binop_impl("power")
descr_eq = _binop_impl("equal")
descr_ne = _binop_impl("not_equal")
descr_lt = _binop_impl("less")
@@ -170,6 +171,7 @@
__sub__ = interp2app(W_GenericBox.descr_sub),
__mul__ = interp2app(W_GenericBox.descr_mul),
__div__ = interp2app(W_GenericBox.descr_div),
+ __pow__ = interp2app(W_GenericBox.descr_pow),
__radd__ = interp2app(W_GenericBox.descr_radd),
__rsub__ = interp2app(W_GenericBox.descr_rsub),
@@ -245,6 +247,7 @@
long_name = "int64"
W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,),
__module__ = "numpypy",
+ __new__ = interp2app(W_LongBox.descr__new__.im_func),
)
W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef,
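The hunk above wires `descr_pow = _binop_impl("power")` and exposes it as `__pow__`, so scalar boxes gain exponentiation through the same ufunc-dispatch pattern as the other binops. The generation pattern can be sketched in plain Python; `UFUNCS` and `Box` here are illustrative stand-ins, not PyPy's actual helpers:

```python
import operator

# Hypothetical stand-in for the interp_ufuncs registry: ufunc name -> impl
UFUNCS = {
    "add": operator.add,
    "multiply": operator.mul,
    "power": operator.pow,
}

def _binop_impl(ufunc_name):
    # Returns a method that dispatches to the named ufunc, mirroring how
    # W_GenericBox.descr_pow is generated from _binop_impl("power") above.
    def impl(self, other):
        return type(self)(UFUNCS[ufunc_name](self.value, other.value))
    return impl

class Box(object):
    def __init__(self, value):
        self.value = value
    __add__ = _binop_impl("add")
    __mul__ = _binop_impl("multiply")
    __pow__ = _binop_impl("power")
```

The design keeps each dunder a one-line declaration; adding a new operator only requires naming the ufunc, which is exactly what the one-line `descr_pow` addition does.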
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -380,6 +380,9 @@
def descr_get_dtype(self, space):
return space.wrap(self.find_dtype())
+ def descr_get_ndim(self, space):
+ return space.wrap(len(self.shape))
+
@jit.unroll_safe
def descr_get_shape(self, space):
return space.newtuple([space.wrap(i) for i in self.shape])
@@ -409,7 +412,7 @@
def descr_repr(self, space):
res = StringBuilder()
res.append("array(")
- concrete = self.get_concrete()
+ concrete = self.get_concrete_or_scalar()
dtype = concrete.find_dtype()
if not concrete.size:
res.append('[]')
@@ -422,8 +425,9 @@
else:
concrete.to_str(space, 1, res, indent=' ')
if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and
- dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \
- not self.size:
+ not (dtype.kind == interp_dtype.SIGNEDLTR and
+ dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or
+ not self.size):
res.append(", dtype=" + dtype.name)
res.append(")")
return space.wrap(res.build())
@@ -559,6 +563,18 @@
def descr_mean(self, space):
return space.div(self.descr_sum(space), space.wrap(self.size))
+ def descr_var(self, space):
+ # var = mean((values - mean(values)) ** 2)
+ w_res = self.descr_sub(space, self.descr_mean(space))
+ assert isinstance(w_res, BaseArray)
+ w_res = w_res.descr_pow(space, space.wrap(2))
+ assert isinstance(w_res, BaseArray)
+ return w_res.descr_mean(space)
+
+ def descr_std(self, space):
+ # std(v) = sqrt(var(v))
+ return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)])
+
def descr_nonzero(self, space):
if self.size > 1:
raise OperationError(space.w_ValueError, space.wrap(
@@ -840,80 +856,80 @@
each line will begin with indent.
'''
size = self.size
+ ccomma = ',' * comma
+ ncomma = ',' * (1 - comma)
+ dtype = self.find_dtype()
if size < 1:
builder.append('[]')
return
+ elif size == 1:
+ builder.append(dtype.itemtype.str_format(self.getitem(0)))
+ return
if size > 1000:
# Once this goes True it does not go back to False for recursive
# calls
use_ellipsis = True
- dtype = self.find_dtype()
ndims = len(self.shape)
i = 0
- start = True
builder.append('[')
if ndims > 1:
if use_ellipsis:
- for i in range(3):
- if start:
- start = False
- else:
- builder.append(',' * comma + '\n')
- if ndims == 3:
+ for i in range(min(3, self.shape[0])):
+ if i > 0:
+ builder.append(ccomma + '\n')
+ if ndims >= 3:
builder.append('\n' + indent)
else:
builder.append(indent)
- # create_slice requires len(chunks) > 1 in order to reduce
- # shape
- view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete()
- view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis)
- builder.append('\n' + indent + '..., ')
- i = self.shape[0] - 3
+ view = self.create_slice([(i, 0, 0, 1)]).get_concrete()
+ view.to_str(space, comma, builder, indent=indent + ' ',
+ use_ellipsis=use_ellipsis)
+ if i < self.shape[0] - 1:
+ builder.append(ccomma +'\n' + indent + '...' + ncomma)
+ i = self.shape[0] - 3
+ else:
+ i += 1
while i < self.shape[0]:
- if start:
- start = False
- else:
- builder.append(',' * comma + '\n')
- if ndims == 3:
+ if i > 0:
+ builder.append(ccomma + '\n')
+ if ndims >= 3:
builder.append('\n' + indent)
else:
builder.append(indent)
# create_slice requires len(chunks) > 1 in order to reduce
# shape
- view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete()
- view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis)
+ view = self.create_slice([(i, 0, 0, 1)]).get_concrete()
+ view.to_str(space, comma, builder, indent=indent + ' ',
+ use_ellipsis=use_ellipsis)
i += 1
elif ndims == 1:
- spacer = ',' * comma + ' '
+ spacer = ccomma + ' '
item = self.start
# An iterator would be a nicer way to walk along the 1d array, but
# how do I reset it if printing ellipsis? iterators have no
# "set_offset()"
i = 0
if use_ellipsis:
- for i in range(3):
- if start:
- start = False
- else:
+ for i in range(min(3, self.shape[0])):
+ if i > 0:
builder.append(spacer)
builder.append(dtype.itemtype.str_format(self.getitem(item)))
item += self.strides[0]
- # Add a comma only if comma is False - this prevents adding two
- # commas
- builder.append(spacer + '...' + ',' * (1 - comma))
- # Ugly, but can this be done with an iterator?
- item = self.start + self.backstrides[0] - 2 * self.strides[0]
- i = self.shape[0] - 3
+ if i < self.shape[0] - 1:
+ # Add a comma only if comma is False - this prevents adding
+ # two commas
+ builder.append(spacer + '...' + ncomma)
+ # Ugly, but can this be done with an iterator?
+ item = self.start + self.backstrides[0] - 2 * self.strides[0]
+ i = self.shape[0] - 3
+ else:
+ i += 1
while i < self.shape[0]:
- if start:
- start = False
- else:
+ if i > 0:
builder.append(spacer)
builder.append(dtype.itemtype.str_format(self.getitem(item)))
item += self.strides[0]
i += 1
- else:
- builder.append('[')
builder.append(']')
@jit.unroll_safe
@@ -1185,6 +1201,7 @@
shape = GetSetProperty(BaseArray.descr_get_shape,
BaseArray.descr_set_shape),
size = GetSetProperty(BaseArray.descr_get_size),
+ ndim = GetSetProperty(BaseArray.descr_get_ndim),
T = GetSetProperty(BaseArray.descr_get_transpose),
flat = GetSetProperty(BaseArray.descr_get_flatiter),
@@ -1199,6 +1216,8 @@
all = interp2app(BaseArray.descr_all),
any = interp2app(BaseArray.descr_any),
dot = interp2app(BaseArray.descr_dot),
+ var = interp2app(BaseArray.descr_var),
+ std = interp2app(BaseArray.descr_std),
copy = interp2app(BaseArray.descr_copy),
reshape = interp2app(BaseArray.descr_reshape),
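The new `descr_var` and `descr_std` above compute `var = mean((values - mean(values)) ** 2)` and `std = sqrt(var)` by composing the existing array operations. The same arithmetic in a NumPy-free sketch:

```python
import math

def mean(values):
    return sum(values) / float(len(values))

def var(values):
    # var = mean((values - mean(values)) ** 2), as in descr_var above
    m = mean(values)
    return mean([(v - m) ** 2 for v in values])

def std(values):
    # std(v) = sqrt(var(v)), as in descr_std above
    return math.sqrt(var(values))
```

This is the population (biased) variance, divided by `n` rather than `n - 1`, matching the `descr_mean`-based composition in the diff.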
diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py
--- a/pypy/module/micronumpy/test/test_dtypes.py
+++ b/pypy/module/micronumpy/test/test_dtypes.py
@@ -3,7 +3,7 @@
class AppTestDtypes(BaseNumpyAppTest):
def test_dtype(self):
- from numpypy import dtype
+ from _numpypy import dtype
d = dtype('?')
assert d.num == 0
@@ -14,7 +14,7 @@
raises(TypeError, dtype, 1042)
def test_dtype_with_types(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert dtype(bool).num == 0
assert dtype(int).num == 7
@@ -22,13 +22,13 @@
assert dtype(float).num == 12
def test_array_dtype_attr(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), long)
assert a.dtype is dtype(long)
def test_repr_str(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert repr(dtype) == "<type '_numpypy.dtype'>"
d = dtype('?')
@@ -36,7 +36,7 @@
assert str(d) == "bool"
def test_bool_array(self):
- from numpypy import array, False_, True_
+ from _numpypy import array, False_, True_
a = array([0, 1, 2, 2.5], dtype='?')
assert a[0] is False_
@@ -44,7 +44,7 @@
assert a[i] is True_
def test_copy_array_with_dtype(self):
- from numpypy import array, False_, True_, int64
+ from _numpypy import array, False_, True_, int64
a = array([0, 1, 2, 3], dtype=long)
# int on 64-bit, long in 32-bit
@@ -58,35 +58,35 @@
assert b[0] is False_
def test_zeros_bool(self):
- from numpypy import zeros, False_
+ from _numpypy import zeros, False_
a = zeros(10, dtype=bool)
for i in range(10):
assert a[i] is False_
def test_ones_bool(self):
- from numpypy import ones, True_
+ from _numpypy import ones, True_
a = ones(10, dtype=bool)
for i in range(10):
assert a[i] is True_
def test_zeros_long(self):
- from numpypy import zeros, int64
+ from _numpypy import zeros, int64
a = zeros(10, dtype=long)
for i in range(10):
assert isinstance(a[i], int64)
assert a[1] == 0
def test_ones_long(self):
- from numpypy import ones, int64
+ from _numpypy import ones, int64
a = ones(10, dtype=long)
for i in range(10):
assert isinstance(a[i], int64)
assert a[1] == 1
def test_overflow(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
assert array([128], 'b')[0] == -128
assert array([256], 'B')[0] == 0
assert array([32768], 'h')[0] == -32768
@@ -98,7 +98,7 @@
raises(OverflowError, "array([2**64], 'Q')")
def test_bool_binop_types(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
types = [
'?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd'
]
@@ -107,7 +107,7 @@
assert (a + array([0], t)).dtype is dtype(t)
def test_binop_types(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'),
('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'),
('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'),
@@ -129,7 +129,7 @@
assert (array([1], d1) + array([1], d2)).dtype is dtype(dout)
def test_add_int8(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="int8")
b = a + a
@@ -138,7 +138,7 @@
assert b[i] == i * 2
def test_add_int16(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="int16")
b = a + a
@@ -147,7 +147,7 @@
assert b[i] == i * 2
def test_add_uint32(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5), dtype="I")
b = a + a
@@ -156,19 +156,28 @@
assert b[i] == i * 2
def test_shape(self):
- from numpypy import dtype
+ from _numpypy import dtype
assert dtype(long).shape == ()
def test_cant_subclass(self):
- from numpypy import dtype
+ from _numpypy import dtype
# You can't subclass dtype
raises(TypeError, type, "Foo", (dtype,), {})
+ def test_new(self):
+ import _numpypy as np
+ assert np.int_(4) == 4
+ assert np.float_(3.4) == 3.4
+
+ def test_pow(self):
+ from _numpypy import int_
+ assert int_(4) ** 2 == 16
+
class AppTestTypes(BaseNumpyAppTest):
def test_abstract_types(self):
- import numpypy as numpy
+ import _numpypy as numpy
raises(TypeError, numpy.generic, 0)
raises(TypeError, numpy.number, 0)
raises(TypeError, numpy.integer, 0)
@@ -181,7 +190,7 @@
raises(TypeError, numpy.inexact, 0)
def test_bool(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object]
assert numpy.bool_(3) is numpy.True_
@@ -196,7 +205,7 @@
assert numpy.bool_("False") is numpy.True_
def test_int8(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -218,7 +227,7 @@
assert numpy.int8('128') == -128
def test_uint8(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -241,7 +250,7 @@
assert numpy.uint8('256') == 0
def test_int16(self):
- import numpypy as numpy
+ import _numpypy as numpy
x = numpy.int16(3)
assert x == 3
@@ -251,7 +260,7 @@
assert numpy.int16('32768') == -32768
def test_uint16(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint16(65535) == 65535
assert numpy.uint16(65536) == 0
@@ -260,7 +269,7 @@
def test_int32(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
x = numpy.int32(23)
assert x == 23
@@ -275,7 +284,7 @@
def test_uint32(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint32(10) == 10
@@ -286,14 +295,14 @@
assert numpy.uint32('4294967296') == 0
def test_int_(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.int_ is numpy.dtype(int).type
assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object]
def test_int64(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
if sys.maxint == 2 ** 63 -1:
assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object]
@@ -315,7 +324,7 @@
def test_uint64(self):
import sys
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object]
@@ -330,7 +339,7 @@
raises(OverflowError, numpy.uint64(18446744073709551616))
def test_float32(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object]
@@ -339,7 +348,7 @@
raises(ValueError, numpy.float32, '23.2df')
def test_float64(self):
- import numpypy as numpy
+ import _numpypy as numpy
assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object]
@@ -352,7 +361,7 @@
raises(ValueError, numpy.float64, '23.2df')
def test_subclass_type(self):
- import numpypy as numpy
+ import _numpypy as numpy
class X(numpy.float64):
def m(self):
diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py
--- a/pypy/module/micronumpy/test/test_module.py
+++ b/pypy/module/micronumpy/test/test_module.py
@@ -3,33 +3,33 @@
class AppTestNumPyModule(BaseNumpyAppTest):
def test_mean(self):
- from numpypy import array, mean
+ from _numpypy import array, mean
assert mean(array(range(5))) == 2.0
assert mean(range(5)) == 2.0
def test_average(self):
- from numpypy import array, average
+ from _numpypy import array, average
assert average(range(10)) == 4.5
assert average(array(range(10))) == 4.5
def test_sum(self):
- from numpypy import array, sum
+ from _numpypy import array, sum
assert sum(range(10)) == 45
assert sum(array(range(10))) == 45
def test_min(self):
- from numpypy import array, min
+ from _numpypy import array, min
assert min(range(10)) == 0
assert min(array(range(10))) == 0
def test_max(self):
- from numpypy import array, max
+ from _numpypy import array, max
assert max(range(10)) == 9
assert max(array(range(10))) == 9
def test_constants(self):
import math
- from numpypy import inf, e, pi
+ from _numpypy import inf, e, pi
assert type(inf) is float
assert inf == float("inf")
assert e == math.e
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -158,9 +158,10 @@
assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None
assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2]
+
class AppTestNumArray(BaseNumpyAppTest):
def test_ndarray(self):
- from numpypy import ndarray, array, dtype
+ from _numpypy import ndarray, array, dtype
assert type(ndarray) is type
assert type(array) is not type
@@ -175,12 +176,26 @@
assert a.dtype is dtype(int)
def test_type(self):
- from numpypy import array
+ from _numpypy import array
ar = array(range(5))
assert type(ar) is type(ar + ar)
+ def test_ndim(self):
+ from _numpypy import array
+ x = array(0.2)
+ assert x.ndim == 0
+ x = array([1, 2])
+ assert x.ndim == 1
+ x = array([[1, 2], [3, 4]])
+ assert x.ndim == 2
+ x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
+ assert x.ndim == 3
+ # numpy actually raises an AttributeError, but _numpypy raises a
+ # TypeError
+ raises(TypeError, 'x.ndim = 3')
+
def test_init(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros(15)
# Check that storage was actually zero'd.
assert a[10] == 0.0
@@ -189,7 +204,7 @@
assert a[13] == 5.3
def test_size(self):
- from numpypy import array
+ from _numpypy import array
assert array(3).size == 1
a = array([1, 2, 3])
assert a.size == 3
@@ -200,13 +215,13 @@
Test that empty() works.
"""
- from numpypy import empty
+ from _numpypy import empty
a = empty(2)
a[1] = 1.0
assert a[1] == 1.0
def test_ones(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones(3)
assert len(a) == 3
assert a[0] == 1
@@ -215,7 +230,7 @@
assert a[2] == 4
def test_copy(self):
- from numpypy import arange, array
+ from _numpypy import arange, array
a = arange(5)
b = a.copy()
for i in xrange(5):
@@ -232,12 +247,12 @@
assert (c == b).all()
def test_iterator_init(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a[3] == 3
def test_getitem(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[5]")
a = a + a
@@ -246,7 +261,7 @@
raises(IndexError, "a[-6]")
def test_getitem_tuple(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[(1,2)]")
for i in xrange(5):
@@ -256,7 +271,7 @@
assert a[i] == b[i]
def test_setitem(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
a[-1] = 5.0
assert a[4] == 5.0
@@ -264,7 +279,7 @@
raises(IndexError, "a[-6] = 3.0")
def test_setitem_tuple(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
raises(IndexError, "a[(1,2)] = [0,1]")
for i in xrange(5):
@@ -275,7 +290,7 @@
assert a[i] == i
def test_setslice_array(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array(range(2))
a[1:4:2] = b
@@ -286,7 +301,7 @@
assert b[1] == 0.
def test_setslice_of_slice_array(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = zeros(5)
a[::2] = array([9., 10., 11.])
assert a[0] == 9.
@@ -305,7 +320,7 @@
assert a[0] == 3.
def test_setslice_list(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = [0., 1.]
a[1:4:2] = b
@@ -313,14 +328,14 @@
assert a[3] == 1.
def test_setslice_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
a[1:4:2] = 0.
assert a[1] == 0.
assert a[3] == 0.
def test_scalar(self):
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(3)
raises(IndexError, "a[0]")
raises(IndexError, "a[0] = 5")
@@ -329,13 +344,13 @@
assert a.dtype is dtype(int)
def test_len(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert len(a) == 5
assert len(a + a) == 5
def test_shape(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.shape == (5,)
b = a + a
@@ -344,7 +359,7 @@
assert c.shape == (3,)
def test_set_shape(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array([])
a.shape = []
a = array(range(12))
@@ -364,7 +379,7 @@
a.shape = (1,)
def test_reshape(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(12))
exc = raises(ValueError, "b = a.reshape((3, 10))")
assert str(exc.value) == "total size of new array must be unchanged"
@@ -377,7 +392,7 @@
a.shape = (12, 2)
def test_slice_reshape(self):
- from numpypy import zeros, arange
+ from _numpypy import zeros, arange
a = zeros((4, 2, 3))
b = a[::2, :, :]
b.shape = (2, 6)
@@ -413,13 +428,13 @@
raises(ValueError, arange(10).reshape, (5, -1, -1))
def test_reshape_varargs(self):
- from numpypy import arange
+ from _numpypy import arange
z = arange(96).reshape(12, -1)
y = z.reshape(4, 3, 8)
assert y.shape == (4, 3, 8)
def test_add(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a + a
for i in range(5):
@@ -432,7 +447,7 @@
assert c[i] == bool(a[i] + b[i])
def test_add_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([i for i in reversed(range(5))])
c = a + b
@@ -440,20 +455,20 @@
assert c[i] == 4
def test_add_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a + 5
for i in range(5):
assert b[i] == i + 5
def test_radd(self):
- from numpypy import array
+ from _numpypy import array
r = 3 + array(range(3))
for i in range(3):
assert r[i] == i + 3
def test_add_list(self):
- from numpypy import array, ndarray
+ from _numpypy import array, ndarray
a = array(range(5))
b = list(reversed(range(5)))
c = a + b
@@ -462,14 +477,14 @@
assert c[i] == 4
def test_subtract(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - a
for i in range(5):
assert b[i] == 0
def test_subtract_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([1, 1, 1, 1, 1])
c = a - b
@@ -477,34 +492,34 @@
assert c[i] == i - 1
def test_subtract_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - 5
for i in range(5):
assert b[i] == i - 5
def test_scalar_subtract(self):
- from numpypy import int32
+ from _numpypy import int32
assert int32(2) - 1 == 1
assert 1 - int32(2) == -1
def test_mul(self):
- import numpypy
+ import _numpypy
- a = numpypy.array(range(5))
+ a = _numpypy.array(range(5))
b = a * a
for i in range(5):
assert b[i] == i * i
- a = numpypy.array(range(5), dtype=bool)
+ a = _numpypy.array(range(5), dtype=bool)
b = a * a
- assert b.dtype is numpypy.dtype(bool)
- assert b[0] is numpypy.False_
+ assert b.dtype is _numpypy.dtype(bool)
+ assert b[0] is _numpypy.False_
for i in range(1, 5):
- assert b[i] is numpypy.True_
+ assert b[i] is _numpypy.True_
def test_mul_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a * 5
for i in range(5):
@@ -512,7 +527,7 @@
def test_div(self):
from math import isnan
- from numpypy import array, dtype, inf
+ from _numpypy import array, dtype, inf
a = array(range(1, 6))
b = a / a
@@ -544,7 +559,7 @@
assert c[2] == -inf
def test_div_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([2, 2, 2, 2, 2], float)
c = a / b
@@ -552,14 +567,14 @@
assert c[i] == i / 2.0
def test_div_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a / 5.0
for i in range(5):
assert b[i] == i / 5.0
def test_pow(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = a ** a
for i in range(5):
@@ -569,7 +584,7 @@
assert (a ** 2 == a * a).all()
def test_pow_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = array([2, 2, 2, 2, 2])
c = a ** b
@@ -577,14 +592,14 @@
assert c[i] == i ** 2
def test_pow_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5), float)
b = a ** 2
for i in range(5):
assert b[i] == i ** 2
def test_mod(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(1, 6))
b = a % a
for i in range(5):
@@ -597,7 +612,7 @@
assert b[i] == 1
def test_mod_other(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = array([2, 2, 2, 2, 2])
c = a % b
@@ -605,14 +620,14 @@
assert c[i] == i % 2
def test_mod_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a % 2
for i in range(5):
assert b[i] == i % 2
def test_pos(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = +a
for i in range(5):
@@ -623,7 +638,7 @@
assert a[i] == i
def test_neg(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = -a
for i in range(5):
@@ -634,7 +649,7 @@
assert a[i] == -i
def test_abs(self):
- from numpypy import array
+ from _numpypy import array
a = array([1., -2., 3., -4., -5.])
b = abs(a)
for i in range(5):
@@ -645,7 +660,7 @@
assert a[i + 5] == abs(i)
def test_auto_force(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a - 1
a[2] = 3
@@ -659,7 +674,7 @@
assert c[1] == 4
def test_getslice(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[1:5]
assert len(s) == 4
@@ -673,7 +688,7 @@
assert s[0] == 5
def test_getslice_step(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(10))
s = a[1:9:2]
assert len(s) == 4
@@ -681,7 +696,7 @@
assert s[i] == a[2 * i + 1]
def test_slice_update(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[0:3]
s[1] = 10
@@ -691,7 +706,7 @@
def test_slice_invaidate(self):
# check that slice shares invalidation list with
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
s = a[0:2]
b = array([10, 11])
@@ -705,13 +720,13 @@
assert d[1] == 12
def test_mean(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.mean() == 2.0
assert a[:4].mean() == 1.5
def test_sum(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.sum() == 10.0
assert a[:4].sum() == 6.0
@@ -720,52 +735,52 @@
assert a.sum() == 5
def test_identity(self):
- from numpypy import identity, array
- from numpypy import int32, float64, dtype
+ from _numpypy import identity, array
+ from _numpypy import int32, float64, dtype
a = identity(0)
assert len(a) == 0
assert a.dtype == dtype('float64')
- assert a.shape == (0,0)
+ assert a.shape == (0, 0)
b = identity(1, dtype=int32)
assert len(b) == 1
assert b[0][0] == 1
- assert b.shape == (1,1)
+ assert b.shape == (1, 1)
assert b.dtype == dtype('int32')
c = identity(2)
- assert c.shape == (2,2)
- assert (c == [[1,0],[0,1]]).all()
+ assert c.shape == (2, 2)
+ assert (c == [[1, 0], [0, 1]]).all()
d = identity(3, dtype='int32')
- assert d.shape == (3,3)
+ assert d.shape == (3, 3)
assert d.dtype == dtype('int32')
- assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all()
+ assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all()
def test_prod(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(1, 6))
assert a.prod() == 120.0
assert a[:4].prod() == 24.0
def test_max(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.max() == 5.7
b = array([])
raises(ValueError, "b.max()")
def test_max_add(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert (a + a).max() == 11.4
def test_min(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.min() == -3.0
b = array([])
raises(ValueError, "b.min()")
def test_argmax(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
r = a.argmax()
assert r == 2
@@ -786,14 +801,14 @@
assert a.argmax() == 2
def test_argmin(self):
- from numpypy import array
+ from _numpypy import array
a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
assert a.argmin() == 3
b = array([])
raises(ValueError, "b.argmin()")
def test_all(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.all() == False
a[0] = 3.0
@@ -802,7 +817,7 @@
assert b.all() == True
def test_any(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5))
assert a.any() == True
b = zeros(5)
@@ -811,7 +826,7 @@
assert c.any() == False
def test_dot(self):
- from numpypy import array, dot
+ from _numpypy import array, dot
a = array(range(5))
assert a.dot(a) == 30.0
@@ -821,14 +836,14 @@
assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all()
def test_dot_constant(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
b = a.dot(2.5)
for i in xrange(5):
assert b[i] == 2.5 * a[i]
def test_dtype_guessing(self):
- from numpypy import array, dtype, float64, int8, bool_
+ from _numpypy import array, dtype, float64, int8, bool_
assert array([True]).dtype is dtype(bool)
assert array([True, False]).dtype is dtype(bool)
@@ -845,7 +860,7 @@
def test_comparison(self):
import operator
- from numpypy import array, dtype
+ from _numpypy import array, dtype
a = array(range(5))
b = array(range(5), float)
@@ -864,7 +879,7 @@
assert c[i] == func(b[i], 3)
def test_nonzero(self):
- from numpypy import array
+ from _numpypy import array
a = array([1, 2])
raises(ValueError, bool, a)
raises(ValueError, bool, a == a)
@@ -874,7 +889,7 @@
assert not bool(array([0]))
def test_slice_assignment(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
a[::-1] = a
assert (a == [0, 1, 2, 1, 0]).all()
@@ -884,8 +899,8 @@
assert (a == [8, 6, 4, 2, 0]).all()
def test_debug_repr(self):
- from numpypy import zeros, sin
- from numpypy.pypy import debug_repr
+ from _numpypy import zeros, sin
+ from _numpypy.pypy import debug_repr
a = zeros(1)
assert debug_repr(a) == 'Array'
assert debug_repr(a + a) == 'Call2(add, Array, Array)'
@@ -899,8 +914,8 @@
assert debug_repr(b) == 'Array'
def test_remove_invalidates(self):
- from numpypy import array
- from numpypy.pypy import remove_invalidates
+ from _numpypy import array
+ from _numpypy.pypy import remove_invalidates
a = array([1, 2, 3])
b = a + a
remove_invalidates(a)
@@ -908,7 +923,7 @@
assert b[0] == 28
def test_virtual_views(self):
- from numpypy import arange
+ from _numpypy import arange
a = arange(15)
c = (a + a)
d = c[::2]
@@ -926,7 +941,7 @@
assert b[1] == 2
def test_tolist_scalar(self):
- from numpypy import int32, bool_
+ from _numpypy import int32, bool_
x = int32(23)
assert x.tolist() == 23
assert type(x.tolist()) is int
@@ -934,13 +949,13 @@
assert y.tolist() is True
def test_tolist_zerodim(self):
- from numpypy import array
+ from _numpypy import array
x = array(3)
assert x.tolist() == 3
assert type(x.tolist()) is int
def test_tolist_singledim(self):
- from numpypy import array
+ from _numpypy import array
a = array(range(5))
assert a.tolist() == [0, 1, 2, 3, 4]
assert type(a.tolist()[0]) is int
@@ -948,41 +963,55 @@
assert b.tolist() == [0.2, 0.4, 0.6]
def test_tolist_multidim(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4]])
assert a.tolist() == [[1, 2], [3, 4]]
def test_tolist_view(self):
- from numpypy import array
- a = array([[1,2],[3,4]])
+ from _numpypy import array
+ a = array([[1, 2], [3, 4]])
assert (a + a).tolist() == [[2, 4], [6, 8]]
def test_tolist_slice(self):
- from numpypy import array
+ from _numpypy import array
a = array([[17.1, 27.2], [40.3, 50.3]])
- assert a[:,0].tolist() == [17.1, 40.3]
+ assert a[:, 0].tolist() == [17.1, 40.3]
assert a[0].tolist() == [17.1, 27.2]
+ def test_var(self):
+ from _numpypy import array
+ a = array(range(10))
+ assert a.var() == 8.25
+ a = array([5.0])
+ assert a.var() == 0.0
+
+ def test_std(self):
+ from _numpypy import array
+ a = array(range(10))
+ assert a.std() == 2.8722813232690143
+ a = array([5.0])
+ assert a.std() == 0.0
+
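The expected values in the new `test_var` and `test_std` checks follow from the population-variance formula (mean of squared deviations, ddof=0). A minimal pure-Python sketch, not the micronumpy implementation:

```python
import math

def var(xs):
    # population variance: mean of squared deviations from the mean (ddof=0)
    xs = list(xs)
    m = sum(xs) / float(len(xs))
    return sum((x - m) ** 2 for x in xs) / float(len(xs))

def std(xs):
    # standard deviation is the square root of the variance
    return math.sqrt(var(xs))
```

For `range(10)` the mean is 4.5 and the squared deviations sum to 82.5, giving 8.25 and sqrt(8.25) == 2.8722813232690143, matching the asserts above.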
class AppTestMultiDim(BaseNumpyAppTest):
def test_init(self):
- import numpypy
- a = numpypy.zeros((2, 2))
+ import _numpypy
+ a = _numpypy.zeros((2, 2))
assert len(a) == 2
def test_shape(self):
- import numpypy
- assert numpypy.zeros(1).shape == (1,)
- assert numpypy.zeros((2, 2)).shape == (2, 2)
- assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
- assert numpypy.array([[1], [2], [3]]).shape == (3, 1)
- assert len(numpypy.zeros((3, 1, 2))) == 3
- raises(TypeError, len, numpypy.zeros(()))
- raises(ValueError, numpypy.array, [[1, 2], 3])
+ import _numpypy
+ assert _numpypy.zeros(1).shape == (1,)
+ assert _numpypy.zeros((2, 2)).shape == (2, 2)
+ assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
+ assert _numpypy.array([[1], [2], [3]]).shape == (3, 1)
+ assert len(_numpypy.zeros((3, 1, 2))) == 3
+ raises(TypeError, len, _numpypy.zeros(()))
+ raises(ValueError, _numpypy.array, [[1, 2], 3])
def test_getsetitem(self):
- import numpypy
- a = numpypy.zeros((2, 3, 1))
+ import _numpypy
+ a = _numpypy.zeros((2, 3, 1))
raises(IndexError, a.__getitem__, (2, 0, 0))
raises(IndexError, a.__getitem__, (0, 3, 0))
raises(IndexError, a.__getitem__, (0, 0, 1))
@@ -993,8 +1022,8 @@
assert a[1, -1, 0] == 3
def test_slices(self):
- import numpypy
- a = numpypy.zeros((4, 3, 2))
+ import _numpypy
+ a = _numpypy.zeros((4, 3, 2))
raises(IndexError, a.__getitem__, (4,))
raises(IndexError, a.__getitem__, (3, 3))
raises(IndexError, a.__getitem__, (slice(None), 3))
@@ -1027,51 +1056,51 @@
assert a[1][2][1] == 15
def test_init_2(self):
- import numpypy
- raises(ValueError, numpypy.array, [[1], 2])
- raises(ValueError, numpypy.array, [[1, 2], [3]])
- raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]])
- raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]])
- a = numpypy.array([[1, 2], [4, 5]])
+ import _numpypy
+ raises(ValueError, _numpypy.array, [[1], 2])
+ raises(ValueError, _numpypy.array, [[1, 2], [3]])
+ raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]])
+ raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]])
+ a = _numpypy.array([[1, 2], [4, 5]])
assert a[0, 1] == 2
assert a[0][1] == 2
- a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
+ a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
assert (a[0, 1] == [3, 4]).all()
def test_setitem_slice(self):
- import numpypy
- a = numpypy.zeros((3, 4))
+ import _numpypy
+ a = _numpypy.zeros((3, 4))
a[1] = [1, 2, 3, 4]
assert a[1, 2] == 3
raises(TypeError, a[1].__setitem__, [1, 2, 3])
- a = numpypy.array([[1, 2], [3, 4]])
+ a = _numpypy.array([[1, 2], [3, 4]])
assert (a == [[1, 2], [3, 4]]).all()
- a[1] = numpypy.array([5, 6])
+ a[1] = _numpypy.array([5, 6])
assert (a == [[1, 2], [5, 6]]).all()
- a[:, 1] = numpypy.array([8, 10])
+ a[:, 1] = _numpypy.array([8, 10])
assert (a == [[1, 8], [5, 10]]).all()
- a[0, :: -1] = numpypy.array([11, 12])
+ a[0, :: -1] = _numpypy.array([11, 12])
assert (a == [[12, 11], [5, 10]]).all()
def test_ufunc(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
assert ((a + a) == \
array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all()
def test_getitem_add(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
assert (a + a)[1, 1] == 8
def test_ufunc_negative(self):
- from numpypy import array, negative
+ from _numpypy import array, negative
a = array([[1, 2], [3, 4]])
b = negative(a + a)
assert (b == [[-2, -4], [-6, -8]]).all()
def test_getitem_3(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6], [7, 8],
[9, 10], [11, 12], [13, 14]])
b = a[::2]
@@ -1082,37 +1111,37 @@
assert c[1][1] == 12
def test_multidim_ones(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones((1, 2, 3))
assert a[0, 1, 2] == 1.0
def test_multidim_setslice(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((3, 3))
b = ones((3, 3))
- a[:,1:3] = b[:,1:3]
+ a[:, 1:3] = b[:, 1:3]
assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all()
a = zeros((3, 3))
b = ones((3, 3))
- a[:,::2] = b[:,::2]
+ a[:, ::2] = b[:, ::2]
assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all()
def test_broadcast_ufunc(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
b = array([5, 6])
c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]])
assert c.all()
def test_broadcast_setslice(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((10, 10))
b = ones(10)
a[:, :] = b
assert a[3, 5] == 1
def test_broadcast_shape_agreement(self):
- from numpypy import zeros, array
+ from _numpypy import zeros, array
a = zeros((3, 1, 3))
b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32)))
c = ((a + b) == [b, b, b])
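The shape-agreement rule these broadcast tests exercise can be sketched independently of micronumpy: shapes are compared from the trailing dimension, and two sizes agree when they are equal or one of them is 1. A minimal sketch under that assumption:

```python
def broadcast_shape(shape1, shape2):
    # Align shapes from the trailing dimension; a dimension matches if the
    # sizes are equal or one of them is 1 (which is then stretched).
    result = []
    for d1, d2 in zip(reversed(shape1), reversed(shape2)):
        if d1 == d2 or d1 == 1 or d2 == 1:
            result.append(max(d1, d2))
        else:
            raise ValueError("operands could not be broadcast together")
    # the longer shape contributes its remaining leading dimensions as-is
    longer = shape1 if len(shape1) >= len(shape2) else shape2
    result.extend(reversed(longer[:abs(len(shape1) - len(shape2))]))
    return tuple(reversed(result))
```

This reproduces the cases above: `(3, 1, 3)` with `(3, 3)` gives `(3, 3, 3)`, while `(4, 3, 2)` with `(4, 2)` raises ValueError as in `test_broadcast_wrong_shapes`.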
@@ -1126,7 +1155,7 @@
assert c.all()
def test_broadcast_scalar(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((4, 5), 'd')
a[:, 1] = 3
assert a[2, 1] == 3
@@ -1137,14 +1166,14 @@
assert a[3, 2] == 0
def test_broadcast_call2(self):
- from numpypy import zeros, ones
+ from _numpypy import zeros, ones
a = zeros((4, 1, 5))
b = ones((4, 3, 5))
b[:] = (a + a)
assert (b == zeros((4, 3, 5))).all()
def test_broadcast_virtualview(self):
- from numpypy import arange, zeros
+ from _numpypy import arange, zeros
a = arange(8).reshape([2, 2, 2])
b = (a + a)[1, 1]
c = zeros((2, 2, 2))
@@ -1152,13 +1181,13 @@
assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all()
def test_argmax(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2], [3, 4], [5, 6]])
assert a.argmax() == 5
assert a[:2, ].argmax() == 3
def test_broadcast_wrong_shapes(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((4, 3, 2))
b = zeros((4, 2))
exc = raises(ValueError, lambda: a + b)
@@ -1166,7 +1195,7 @@
" together with shapes (4,3,2) (4,2)"
def test_reduce(self):
- from numpypy import array
+ from _numpypy import array
a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
assert a.sum() == (13 * 12) / 2
b = a[1:, 1::2]
@@ -1174,7 +1203,7 @@
assert c.sum() == (6 + 8 + 10 + 12) * 2
def test_transpose(self):
- from numpypy import array
+ from _numpypy import array
a = array(((range(3), range(3, 6)),
(range(6, 9), range(9, 12)),
(range(12, 15), range(15, 18)),
@@ -1193,7 +1222,7 @@
assert(b[:, 0] == a[0, :]).all()
def test_flatiter(self):
- from numpypy import array, flatiter
+ from _numpypy import array, flatiter
a = array([[10, 30], [40, 60]])
f_iter = a.flat
assert f_iter.next() == 10
@@ -1208,23 +1237,23 @@
assert s == 140
def test_flatiter_array_conv(self):
- from numpypy import array, dot
+ from _numpypy import array, dot
a = array([1, 2, 3])
assert dot(a.flat, a.flat) == 14
def test_flatiter_varray(self):
- from numpypy import ones
+ from _numpypy import ones
a = ones((2, 2))
assert list(((a + a).flat)) == [2, 2, 2, 2]
def test_slice_copy(self):
- from numpypy import zeros
+ from _numpypy import zeros
a = zeros((10, 10))
b = a[0].copy()
assert (b == zeros(10)).all()
def test_array_interface(self):
- from numpypy import array
+ from _numpypy import array
a = array([1, 2, 3])
i = a.__array_interface__
assert isinstance(i['data'][0], int)
@@ -1233,6 +1262,7 @@
assert isinstance(i['data'][0], int)
raises(TypeError, getattr, array(3), '__array_interface__')
+
class AppTestSupport(BaseNumpyAppTest):
def setup_class(cls):
import struct
@@ -1245,7 +1275,7 @@
def test_fromstring(self):
import sys
- from numpypy import fromstring, array, uint8, float32, int32
+ from _numpypy import fromstring, array, uint8, float32, int32
a = fromstring(self.data)
for i in range(4):
@@ -1275,17 +1305,17 @@
assert g[1] == 2
assert g[2] == 3
h = fromstring("1, , 2, 3", dtype=uint8, sep=",")
- assert (h == [1,0,2,3]).all()
+ assert (h == [1, 0, 2, 3]).all()
i = fromstring("1 2 3", dtype=uint8, sep=" ")
- assert (i == [1,2,3]).all()
+ assert (i == [1, 2, 3]).all()
j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t")
- assert (j == [1,2,3]).all()
+ assert (j == [1, 2, 3]).all()
k = fromstring("1,x,2,3", dtype=uint8, sep=",")
- assert (k == [1,0]).all()
+ assert (k == [1, 0]).all()
l = fromstring("1,x,2,3", dtype='float32', sep=",")
- assert (l == [1.0,-1.0]).all()
+ assert (l == [1.0, -1.0]).all()
m = fromstring("1,,2,3", sep=",")
- assert (m == [1.0,-1.0,2.0,3.0]).all()
+ assert (m == [1.0, -1.0, 2.0, 3.0]).all()
n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ")
assert (n == [3]).all()
o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ")
@@ -1309,7 +1339,7 @@
assert (u == [1, 0]).all()
def test_fromstring_types(self):
- from numpypy import (fromstring, int8, int16, int32, int64, uint8,
+ from _numpypy import (fromstring, int8, int16, int32, int64, uint8,
uint16, uint32, float32, float64)
a = fromstring('\xFF', dtype=int8)
@@ -1333,9 +1363,8 @@
j = fromstring(self.ulongval, dtype='L')
assert j[0] == 12
-
def test_fromstring_invalid(self):
- from numpypy import fromstring, uint16, uint8, int32
+ from _numpypy import fromstring, uint16, uint8, int32
#default dtype is 64-bit float, so 3 bytes should fail
raises(ValueError, fromstring, "\x01\x02\x03")
#3 bytes is not modulo 2 bytes (int16)
@@ -1346,7 +1375,8 @@
class AppTestRepr(BaseNumpyAppTest):
def test_repr(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
+ int_size = array(5).dtype.itemsize
a = array(range(5), float)
assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])"
a = array([], float)
@@ -1354,14 +1384,26 @@
a = zeros(1001)
assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])"
a = array(range(5), long)
- assert repr(a) == "array([0, 1, 2, 3, 4])"
+ if a.dtype.itemsize == int_size:
+ assert repr(a) == "array([0, 1, 2, 3, 4])"
+ else:
+ assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)"
+ a = array(range(5), 'int32')
+ if a.dtype.itemsize == int_size:
+ assert repr(a) == "array([0, 1, 2, 3, 4])"
+ else:
+ assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)"
a = array([], long)
assert repr(a) == "array([], dtype=int64)"
a = array([True, False, True, False], "?")
assert repr(a) == "array([True, False, True, False], dtype=bool)"
+ a = zeros([])
+ assert repr(a) == "array(0.0)"
+ a = array(0.2)
+ assert repr(a) == "array(0.2)"
def test_repr_multi(self):
- from numpypy import array, zeros
+ from _numpypy import arange, zeros
a = zeros((3, 4))
assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
@@ -1374,9 +1416,19 @@
[[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0]]])'''
+ a = arange(1002).reshape((2, 501))
+ assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500],
+ [501, 502, 503, ..., 999, 1000, 1001]])'''
+ assert repr(a.T) == '''array([[0, 501],
+ [1, 502],
+ [2, 503],
+ ...,
+ [498, 999],
+ [499, 1000],
+ [500, 1001]])'''
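The new repr expectations show the summarization format for long arrays: the first and last three items with an ellipsis in between once the array exceeds roughly 1000 elements. A rough 1-D sketch of that behavior; the `threshold` and `edgeitems` names are assumptions here, not micronumpy's:

```python
def summarize_repr(values, threshold=1000, edgeitems=3):
    # For sequences longer than `threshold`, show only the first and last
    # `edgeitems` entries separated by an ellipsis, as the tests expect.
    items = [str(v) for v in values]
    if len(items) > threshold:
        items = items[:edgeitems] + ['...'] + items[-edgeitems:]
    return 'array([%s])' % ', '.join(items)
```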
def test_repr_slice(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
b = a[1::2]
assert repr(b) == "array([1.0, 3.0])"
@@ -1391,7 +1443,7 @@
assert repr(b) == "array([], shape=(0, 5), dtype=int16)"
def test_str(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
assert str(a) == "[0.0 1.0 2.0 3.0 4.0]"
assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]"
@@ -1417,14 +1469,14 @@
a = zeros((400, 400), dtype=int)
assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \
- " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \
+ " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \
" [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]"
a = zeros((2, 2, 2))
r = str(a)
assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]'
def test_str_slice(self):
- from numpypy import array, zeros
+ from _numpypy import array, zeros
a = array(range(5), float)
b = a[1::2]
assert str(b) == "[1.0 3.0]"
@@ -1440,7 +1492,7 @@
class AppTestRanges(BaseNumpyAppTest):
def test_arange(self):
- from numpypy import arange, array, dtype
+ from _numpypy import arange, array, dtype
a = arange(3)
assert (a == [0, 1, 2]).all()
assert a.dtype is dtype(int)
@@ -1462,7 +1514,7 @@
class AppTestRanges(BaseNumpyAppTest):
def test_app_reshape(self):
- from numpypy import arange, array, dtype, reshape
+ from _numpypy import arange, array, dtype, reshape
a = arange(12)
b = reshape(a, (3, 4))
assert b.shape == (3, 4)
diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py
--- a/pypy/module/micronumpy/test/test_ufuncs.py
+++ b/pypy/module/micronumpy/test/test_ufuncs.py
@@ -4,14 +4,14 @@
class AppTestUfuncs(BaseNumpyAppTest):
def test_ufunc_instance(self):
- from numpypy import add, ufunc
+ from _numpypy import add, ufunc
assert isinstance(add, ufunc)
assert repr(add) == "<ufunc 'add'>"
assert repr(ufunc) == "<type 'numpy.ufunc'>"

def test_ufunc_attrs(self):
- from numpypy import add, multiply, sin
+ from _numpypy import add, multiply, sin
assert add.identity == 0
assert multiply.identity == 1
@@ -22,7 +22,7 @@
assert sin.nin == 1
def test_wrong_arguments(self):
- from numpypy import add, sin
+ from _numpypy import add, sin
raises(ValueError, add, 1)
raises(TypeError, add, 1, 2, 3)
@@ -30,14 +30,14 @@
raises(ValueError, sin)
def test_single_item(self):
- from numpypy import negative, sign, minimum
+ from _numpypy import negative, sign, minimum
assert negative(5.0) == -5.0
assert sign(-0.0) == 0.0
assert minimum(2.0, 3.0) == 2.0
def test_sequence(self):
- from numpypy import array, ndarray, negative, minimum
+ from _numpypy import array, ndarray, negative, minimum
a = array(range(3))
b = [2.0, 1.0, 0.0]
c = 1.0
@@ -71,7 +71,7 @@
assert min_c_b[i] == min(b[i], c)
def test_negative(self):
- from numpypy import array, negative
+ from _numpypy import array, negative
a = array([-5.0, 0.0, 1.0])
b = negative(a)
@@ -86,7 +86,7 @@
assert negative(a + a)[3] == -6
def test_abs(self):
- from numpypy import array, absolute
+ from _numpypy import array, absolute
a = array([-5.0, -0.0, 1.0])
b = absolute(a)
@@ -94,7 +94,7 @@
assert b[i] == abs(a[i])
def test_add(self):
- from numpypy import array, add
+ from _numpypy import array, add
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -103,7 +103,7 @@
assert c[i] == a[i] + b[i]
def test_divide(self):
- from numpypy import array, divide
+ from _numpypy import array, divide
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -114,7 +114,7 @@
assert (divide(array([-10]), array([2])) == array([-5])).all()
def test_fabs(self):
- from numpypy import array, fabs
+ from _numpypy import array, fabs
from math import fabs as math_fabs
a = array([-5.0, -0.0, 1.0])
@@ -123,7 +123,7 @@
assert b[i] == math_fabs(a[i])
def test_minimum(self):
- from numpypy import array, minimum
+ from _numpypy import array, minimum
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -132,7 +132,7 @@
assert c[i] == min(a[i], b[i])
def test_maximum(self):
- from numpypy import array, maximum
+ from _numpypy import array, maximum
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -145,7 +145,7 @@
assert isinstance(x, (int, long))
def test_multiply(self):
- from numpypy import array, multiply
+ from _numpypy import array, multiply
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -154,7 +154,7 @@
assert c[i] == a[i] * b[i]
def test_sign(self):
- from numpypy import array, sign, dtype
+ from _numpypy import array, sign, dtype
reference = [-1.0, 0.0, 0.0, 1.0]
a = array([-5.0, -0.0, 0.0, 6.0])
@@ -173,7 +173,7 @@
assert a[1] == 0
def test_reciporocal(self):
- from numpypy import array, reciprocal
+ from _numpypy import array, reciprocal
reference = [-0.2, float("inf"), float("-inf"), 2.0]
a = array([-5.0, 0.0, -0.0, 0.5])
@@ -182,7 +182,7 @@
assert b[i] == reference[i]
def test_subtract(self):
- from numpypy import array, subtract
+ from _numpypy import array, subtract
a = array([-5.0, -0.0, 1.0])
b = array([ 3.0, -2.0,-3.0])
@@ -191,7 +191,7 @@
assert c[i] == a[i] - b[i]
def test_floor(self):
- from numpypy import array, floor
+ from _numpypy import array, floor
reference = [-2.0, -1.0, 0.0, 1.0, 1.0]
a = array([-1.4, -1.0, 0.0, 1.0, 1.4])
@@ -200,7 +200,7 @@
assert b[i] == reference[i]
def test_copysign(self):
- from numpypy import array, copysign
+ from _numpypy import array, copysign
reference = [5.0, -0.0, 0.0, -6.0]
a = array([-5.0, 0.0, 0.0, 6.0])
@@ -216,7 +216,7 @@
def test_exp(self):
import math
- from numpypy import array, exp
+ from _numpypy import array, exp
a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"),
-float('inf'), -12343424.0])
@@ -230,7 +230,7 @@
def test_sin(self):
import math
- from numpypy import array, sin
+ from _numpypy import array, sin
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = sin(a)
@@ -243,7 +243,7 @@
def test_cos(self):
import math
- from numpypy import array, cos
+ from _numpypy import array, cos
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = cos(a)
@@ -252,7 +252,7 @@
def test_tan(self):
import math
- from numpypy import array, tan
+ from _numpypy import array, tan
a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
b = tan(a)
@@ -262,7 +262,7 @@
def test_arcsin(self):
import math
- from numpypy import array, arcsin
+ from _numpypy import array, arcsin
a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1])
b = arcsin(a)
@@ -276,7 +276,7 @@
def test_arccos(self):
import math
- from numpypy import array, arccos
+ from _numpypy import array, arccos
a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1])
b = arccos(a)
@@ -291,7 +291,7 @@
def test_arctan(self):
import math
- from numpypy import array, arctan
+ from _numpypy import array, arctan
a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')])
b = arctan(a)
@@ -304,7 +304,7 @@
def test_arcsinh(self):
import math
- from numpypy import arcsinh, inf
+ from _numpypy import arcsinh, inf
for v in [inf, -inf, 1.0, math.e]:
assert math.asinh(v) == arcsinh(v)
@@ -312,7 +312,7 @@
def test_arctanh(self):
import math
- from numpypy import arctanh
+ from _numpypy import arctanh
for v in [.99, .5, 0, -.5, -.99]:
assert math.atanh(v) == arctanh(v)
@@ -323,7 +323,7 @@
def test_sqrt(self):
import math
- from numpypy import sqrt
+ from _numpypy import sqrt
nan, inf = float("nan"), float("inf")
data = [1, 2, 3, inf]
@@ -333,13 +333,13 @@
assert math.isnan(sqrt(nan))
def test_reduce_errors(self):
- from numpypy import sin, add
+ from _numpypy import sin, add
raises(ValueError, sin.reduce, [1, 2, 3])
raises(TypeError, add.reduce, 1)
def test_reduce(self):
- from numpypy import add, maximum
+ from _numpypy import add, maximum
assert add.reduce([1, 2, 3]) == 6
assert maximum.reduce([1]) == 1
@@ -348,7 +348,7 @@
def test_comparisons(self):
import operator
- from numpypy import equal, not_equal, less, less_equal, greater, greater_equal
+ from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal
for ufunc, func in [
(equal, operator.eq),
diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py
--- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py
+++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py
@@ -8,10 +8,12 @@
from pypy.tool import logparser
from pypy.jit.tool.jitoutput import parse_prof
from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range,
- find_ids, TraceWithIds,
+ find_ids,
OpMatcher, InvalidMatch)
class BaseTestPyPyC(object):
+ log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary'
+
def setup_class(cls):
if '__pypy__' not in sys.builtin_module_names:
py.test.skip("must run this test with pypy")
@@ -52,8 +54,7 @@
cmdline += ['--jit', ','.join(jitcmdline)]
cmdline.append(str(self.filepath))
#
- print cmdline, logfile
- env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)}
+ env={'PYPYLOG': self.log_string + ':' + str(logfile)}
pipe = subprocess.Popen(cmdline,
env=env,
stdout=subprocess.PIPE,
diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py
--- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py
+++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py
@@ -98,7 +98,8 @@
end = time.time()
return end - start
#
- log = self.run(main, [get_libc_name(), 200], threshold=150)
+ log = self.run(main, [get_libc_name(), 200], threshold=150,
+ import_site=True)
assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead
loops = log.loops_by_id('sleep')
assert len(loops) == 1 # make sure that we actually JITted the loop
@@ -121,7 +122,7 @@
return fabs._ptr.getaddr(), x
libm_name = get_libm_name(sys.platform)
- log = self.run(main, [libm_name])
+ log = self.run(main, [libm_name], import_site=True)
fabs_addr, res = log.result
assert res == -4.0
loop, = log.loops_by_filename(self.filepath)
diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py
--- a/pypy/module/pypyjit/test_pypy_c/test_string.py
+++ b/pypy/module/pypyjit/test_pypy_c/test_string.py
@@ -15,7 +15,7 @@
i += letters[i % len(letters)] == uletters[i % len(letters)]
return i
- log = self.run(main, [300])
+ log = self.run(main, [300], import_site=True)
assert log.result == 300
loop, = log.loops_by_filename(self.filepath)
assert loop.match("""
@@ -55,7 +55,7 @@
i += int(long(string.digits[i % len(string.digits)], 16))
return i
- log = self.run(main, [1100])
+ log = self.run(main, [1100], import_site=True)
assert log.result == main(1100)
loop, = log.loops_by_filename(self.filepath)
assert loop.match("""
diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py
--- a/pypy/module/sys/__init__.py
+++ b/pypy/module/sys/__init__.py
@@ -42,7 +42,7 @@
'argv' : 'state.get(space).w_argv',
'py3kwarning' : 'space.w_False',
'warnoptions' : 'state.get(space).w_warnoptions',
- 'builtin_module_names' : 'state.w_None',
+ 'builtin_module_names' : 'space.w_None',
'pypy_getudir' : 'state.pypy_getudir', # not translated
'pypy_initial_path' : 'state.pypy_initial_path',
diff --git a/pypy/module/sys/app.py b/pypy/module/sys/app.py
--- a/pypy/module/sys/app.py
+++ b/pypy/module/sys/app.py
@@ -66,11 +66,11 @@
return None
copyright_str = """
-Copyright 2003-2011 PyPy development team.
+Copyright 2003-2012 PyPy development team.
All Rights Reserved.
For further information, see
-Portions Copyright (c) 2001-2008 Python Software Foundation.
+Portions Copyright (c) 2001-2012 Python Software Foundation.
All Rights Reserved.
Portions Copyright (c) 2000 BeOpen.com.
diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py
--- a/pypy/objspace/fake/checkmodule.py
+++ b/pypy/objspace/fake/checkmodule.py
@@ -1,8 +1,10 @@
from pypy.objspace.fake.objspace import FakeObjSpace, W_Root
+from pypy.config.pypyoption import get_pypy_config
def checkmodule(modname):
- space = FakeObjSpace()
+ config = get_pypy_config(translating=True)
+ space = FakeObjSpace(config)
mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__'])
# force computation and record what we wrap
module = mod.Module(space, W_Root())
diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py
--- a/pypy/objspace/fake/objspace.py
+++ b/pypy/objspace/fake/objspace.py
@@ -93,9 +93,9 @@
class FakeObjSpace(ObjSpace):
- def __init__(self):
+ def __init__(self, config=None):
self._seen_extras = []
- ObjSpace.__init__(self)
+ ObjSpace.__init__(self, config=config)
def float_w(self, w_obj):
is_root(w_obj)
@@ -135,6 +135,9 @@
def newfloat(self, x):
return w_some_obj()
+ def newcomplex(self, x, y):
+ return w_some_obj()
+
def marshal_w(self, w_obj):
"NOT_RPYTHON"
raise NotImplementedError
@@ -215,6 +218,10 @@
expected_length = 3
return [w_some_obj()] * expected_length
+ def unpackcomplex(self, w_complex):
+ is_root(w_complex)
+ return 1.1, 2.2
+
def allocate_instance(self, cls, w_subtype):
is_root(w_subtype)
return instantiate(cls)
@@ -232,6 +239,11 @@
def exec_(self, *args, **kwds):
pass
+ def createexecutioncontext(self):
+ ec = ObjSpace.createexecutioncontext(self)
+ ec._py_repr = None
+ return ec
+
# ----------
def translates(self, func=None, argtypes=None, **kwds):
@@ -267,18 +279,21 @@
ObjSpace.ExceptionTable +
['int', 'str', 'float', 'long', 'tuple', 'list',
'dict', 'unicode', 'complex', 'slice', 'bool',
- 'type', 'basestring']):
+ 'type', 'basestring', 'object']):
setattr(FakeObjSpace, 'w_' + name, w_some_obj())
#
for (name, _, arity, _) in ObjSpace.MethodTable:
args = ['w_%d' % i for i in range(arity)]
+ params = args[:]
d = {'is_root': is_root,
'w_some_obj': w_some_obj}
+ if name in ('get',):
+ params[-1] += '=None'
exec compile2("""\
def meth(self, %s):
%s
return w_some_obj()
- """ % (', '.join(args),
+ """ % (', '.join(params),
'; '.join(['is_root(%s)' % arg for arg in args]))) in d
meth = func_with_new_name(d['meth'], name)
setattr(FakeObjSpace, name, meth)
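The hunk above generates stub methods with `exec`, giving only the last parameter of `get` a `None` default. The pattern can be sketched on its own; `make_stub` is a hypothetical helper, not part of FakeObjSpace:

```python
def make_stub(name, arity, optional_last=False):
    # Build a stub method with `arity` positional parameters; optionally give
    # the last parameter a None default (as FakeObjSpace does for 'get').
    params = ['w_%d' % i for i in range(arity)]
    if optional_last and params:
        params[-1] += '=None'
    src = 'def %s(self, %s):\n    return None\n' % (name, ', '.join(params))
    d = {}
    exec(src, d)
    return d[name]
```

With `optional_last=True` the generated method accepts either arity, which is exactly what `test_default_values` checks for `space.get`.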
@@ -301,9 +316,12 @@
pass
FakeObjSpace.default_compiler = FakeCompiler()
-class FakeModule(object):
+class FakeModule(Wrappable):
+ def __init__(self):
+ self.w_dict = w_some_obj()
def get(self, name):
name + "xx" # check that it's a string
return w_some_obj()
FakeObjSpace.sys = FakeModule()
FakeObjSpace.sys.filesystemencoding = 'foobar'
+FakeObjSpace.builtin = FakeModule()
diff --git a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py
--- a/pypy/objspace/fake/test/test_objspace.py
+++ b/pypy/objspace/fake/test/test_objspace.py
@@ -40,7 +40,7 @@
def test_constants(self):
space = self.space
space.translates(lambda: (space.w_None, space.w_True, space.w_False,
- space.w_int, space.w_str,
+ space.w_int, space.w_str, space.w_object,
space.w_TypeError))
def test_wrap(self):
@@ -72,3 +72,9 @@
def test_newlist(self):
self.space.newlist([W_Root(), W_Root()])
+
+ def test_default_values(self):
+ # the __get__ method takes either 2 or 3 arguments
+ space = self.space
+ space.translates(lambda: (space.get(W_Root(), W_Root()),
+ space.get(W_Root(), W_Root(), W_Root())))
diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py
--- a/pypy/rlib/clibffi.py
+++ b/pypy/rlib/clibffi.py
@@ -30,6 +30,9 @@
_MAC_OS = platform.name == "darwin"
_FREEBSD_7 = platform.name == "freebsd7"
+_LITTLE_ENDIAN = sys.byteorder == 'little'
+_BIG_ENDIAN = sys.byteorder == 'big'
+
if _WIN32:
from pypy.rlib import rwin32
@@ -360,12 +363,36 @@
cast_type_to_ffitype._annspecialcase_ = 'specialize:memo'
def push_arg_as_ffiptr(ffitp, arg, ll_buf):
- # this is for primitive types. For structures and arrays
- # would be something different (more dynamic)
+ # This is for primitive types. Note that the exact type of 'arg' may be
+ # different from the expected 'c_size'. To cope with that, we fall back
+ # to a byte-by-byte copy.
TP = lltype.typeOf(arg)
TP_P = lltype.Ptr(rffi.CArray(TP))
- buf = rffi.cast(TP_P, ll_buf)
- buf[0] = arg
+ TP_size = rffi.sizeof(TP)
+ c_size = intmask(ffitp.c_size)
+ # if both types have the same size, we can directly write the
+ # value to the buffer
+ if c_size == TP_size:
+ buf = rffi.cast(TP_P, ll_buf)
+ buf[0] = arg
+ else:
+ # needs byte-by-byte copying. Make sure 'arg' is an integer type.
+ # Note that this won't work for rffi.FLOAT/rffi.DOUBLE.
+ assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE
+ if TP_size <= rffi.sizeof(lltype.Signed):
+ arg = rffi.cast(lltype.Unsigned, arg)
+ else:
+ arg = rffi.cast(lltype.UnsignedLongLong, arg)
+ if _LITTLE_ENDIAN:
+ for i in range(c_size):
+ ll_buf[i] = chr(arg & 0xFF)
+ arg >>= 8
+ elif _BIG_ENDIAN:
+ for i in range(c_size-1, -1, -1):
+ ll_buf[i] = chr(arg & 0xFF)
+ arg >>= 8
+ else:
+ raise AssertionError
push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)'
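The endian-aware byte-by-byte fallback above can be sketched in plain Python (hypothetical helper name; the real code writes characters into a raw `ll_buf` rather than returning a bytes object):

```python
def push_int_bytes(value, size, byteorder):
    # Sketch of the fallback in push_arg_as_ffiptr: serialize an unsigned
    # integer into `size` bytes, least-significant byte first on
    # little-endian targets, most-significant first on big-endian ones.
    buf = [0] * size
    if byteorder == 'little':
        for i in range(size):
            buf[i] = value & 0xFF
            value >>= 8
    elif byteorder == 'big':
        for i in range(size - 1, -1, -1):
            buf[i] = value & 0xFF
            value >>= 8
    else:
        raise AssertionError(byteorder)
    return bytes(buf)
```

Both loops emit the same bytes, only the buffer traversal order differs, which is why the diff needs the two `_LITTLE_ENDIAN`/`_BIG_ENDIAN` branches.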
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -386,6 +386,18 @@
class JitHintError(Exception):
"""Inconsistency in the JIT hints."""
+PARAMETER_DOCS = {
+ 'threshold': 'number of times a loop has to run for it to become hot',
+ 'function_threshold': 'number of times a function must run for it to become traced from start',
+ 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge',
+ 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG',
+ 'inlining': 'inline python functions or not (1/0)',
+ 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate',
+ 'retrace_limit': 'how many times we can try retracing before giving up',
+ 'max_retrace_guards': 'number of extra guards a retrace can cause',
+ 'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY'
+ }
+
PARAMETERS = {'threshold': 1039, # just above 1024, prime
'function_threshold': 1619, # slightly more than one above, also prime
'trace_eagerness': 200,
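The `app_main.py` change further down in this changeset renders these two dicts into `--help` output; the formatting can be sketched roughly as follows (abridged copies of the dicts, hypothetical helper name):

```python
PARAMETER_DOCS = {
    'threshold': 'number of times a loop has to run for it to become hot',
    'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge',
}
PARAMETERS = {'threshold': 1039, 'trace_eagerness': 200}

def jit_help_lines(docs, defaults):
    # One aligned line per parameter, with its documentation and default,
    # mirroring the print statement in app_main.py.
    lines = []
    for key in sorted(defaults):
        lines.append('  --jit %s=N %s%s (default %s)' % (
            key, ' ' * (18 - len(key)), docs[key], defaults[key]))
    return lines
```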
diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py
--- a/pypy/tool/jitlogparser/parser.py
+++ b/pypy/tool/jitlogparser/parser.py
@@ -185,7 +185,10 @@
return self.code.map[self.bytecode_no]
def getlineno(self):
- return self.getopcode().lineno
+ code = self.getopcode()
+ if code is None:
+ return None
+ return code.lineno
lineno = property(getlineno)
def getline_starts_here(self):
diff --git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py
--- a/pypy/tool/jitlogparser/storage.py
+++ b/pypy/tool/jitlogparser/storage.py
@@ -6,7 +6,6 @@
import py
import os
from lib_pypy.disassembler import dis
-from pypy.tool.jitlogparser.parser import Function
from pypy.tool.jitlogparser.module_finder import gather_all_code_objs
class LoopStorage(object):
diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c
--- a/pypy/translator/c/src/profiling.c
+++ b/pypy/translator/c/src/profiling.c
@@ -29,6 +29,35 @@
profiling_setup = 0;
}
}
+
+#elif defined(_WIN32)
+#include <windows.h>
+
+DWORD_PTR base_affinity_mask;
+int profiling_setup = 0;
+
+void pypy_setup_profiling() {
+ if (!profiling_setup) {
+ DWORD_PTR affinity_mask, system_affinity_mask;
+ GetProcessAffinityMask(GetCurrentProcess(),
+ &base_affinity_mask, &system_affinity_mask);
+ affinity_mask = 1;
+ /* Pick one cpu allowed by the system */
+ if (system_affinity_mask)
+ while ((affinity_mask & system_affinity_mask) == 0)
+ affinity_mask <<= 1;
+ SetProcessAffinityMask(GetCurrentProcess(), affinity_mask);
+ profiling_setup = 1;
+ }
+}
+
+void pypy_teardown_profiling() {
+ if (profiling_setup) {
+ SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask);
+ profiling_setup = 0;
+ }
+}
+
#else
void pypy_setup_profiling() { }
void pypy_teardown_profiling() { }
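The Windows branch above pins the process to the lowest CPU the system allows. The bit-scan that picks that CPU can be sketched in Python (hypothetical function name; the real code manipulates `DWORD_PTR` masks in C):

```python
def pick_allowed_cpu(system_affinity_mask):
    # Sketch of the loop in pypy_setup_profiling: starting from CPU 0,
    # shift a single-bit mask left until it lands on a CPU that the
    # system affinity mask allows. Returns the chosen one-bit mask,
    # or 0 if no CPU is allowed at all.
    if not system_affinity_mask:
        return 0
    affinity_mask = 1
    while (affinity_mask & system_affinity_mask) == 0:
        affinity_mask <<= 1
    return affinity_mask
```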
diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py
--- a/pypy/translator/goal/app_main.py
+++ b/pypy/translator/goal/app_main.py
@@ -139,8 +139,8 @@
items = pypyjit.defaults.items()
items.sort()
for key, value in items:
- print ' --jit %s=N %slow-level JIT parameter (default %s)' % (
- key, ' '*(18-len(key)), value)
+ print ' --jit %s=N %s%s (default %s)' % (
+ key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value)
print ' --jit off turn off the JIT'
def print_version(*args):
From noreply at buildbot.pypy.org Tue Jan 10 14:56:56 2012
From: noreply at buildbot.pypy.org (stefanor)
Date: Tue, 10 Jan 2012 14:56:56 +0100 (CET)
Subject: [pypy-commit] pypy default: Add pypy.1 manpage to sphinx docs
Message-ID: <20120110135656.44BC082110@wyvern.cs.uni-duesseldorf.de>
Author: Stefano Rivera
Branch:
Changeset: r51204:e8239b6167fa
Date: 2012-01-10 15:56 +0200
http://bitbucket.org/pypy/pypy/changeset/e8239b6167fa/
Log: Add pypy.1 manpage to sphinx docs
diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile
--- a/pypy/doc/Makefile
+++ b/pypy/doc/Makefile
@@ -97,3 +97,9 @@
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
+
+manpage:
+ $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+ @echo
+ @echo "Build finished; the man pages are in $(BUILDDIR)/man"
+
diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py
--- a/pypy/doc/conf.py
+++ b/pypy/doc/conf.py
@@ -197,3 +197,10 @@
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'http://docs.python.org/': None}
+# -- Options for manpage output-------------------------------------------------
+
+man_pages = [
+ ('man/pypy.1', 'pypy',
+ u'fast, compliant alternative implementation of the Python language',
+ u'The PyPy Project', 1)
+]
diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst
new file mode 100644
--- /dev/null
+++ b/pypy/doc/man/pypy.1.rst
@@ -0,0 +1,89 @@
+======
+ pypy
+======
+
+SYNOPSIS
+========
+
+``pypy`` [*options*]
+[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ]
+[*arg*\ ...]
+
+OPTIONS
+=======
+
+-i
+ Inspect interactively after running script.
+
+-O
+ Dummy optimization flag for compatibility with CPython.
+
+-c *cmd*
+ Program passed in as CMD (terminates option list).
+
+-S
+ Do not ``import site`` on initialization.
+
+-u
+ Unbuffered binary ``stdout`` and ``stderr``.
+
+-h, --help
+ Show a help message and exit.
+
+-m *mod*
+ Library module to be run as a script (terminates option list).
+
+-W *arg*
+ Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*).
+
+-E
+ Ignore environment variables (such as ``PYTHONPATH``).
+
+--version
+ Print the PyPy version.
+
+--info
+ Print translation information about this PyPy executable.
+
+--jit *arg*
+ Low level JIT parameters. Format is *arg*\ ``=``\ *value*.
+
+ ``off``
+ Disable the JIT.
+
+ ``threshold=``\ *value*
+ Number of times a loop has to run for it to become hot.
+
+ ``function_threshold=``\ *value*
+ Number of times a function must run for it to become traced from
+ start.
+
+ ``inlining=``\ *value*
+ Inline python functions or not (``1``/``0``).
+
+ ``loop_longevity=``\ *value*
+ A parameter controlling how long loops will be kept before being
+ freed, an estimate.
+
+ ``max_retrace_guards=``\ *value*
+ Number of extra guards a retrace can cause.
+
+ ``retrace_limit=``\ *value*
+ How many times we can try retracing before giving up.
+
+ ``trace_eagerness=``\ *value*
+ Number of times a guard has to fail before we start compiling a
+ bridge.
+
+ ``trace_limit=``\ *value*
+ Number of recorded operations before we abort tracing with
+ ``ABORT_TRACE_TOO_LONG``.
+
+ ``enable_opts=``\ *value*
+ Optimizations to enable, or ``all``.
+ Warning, this option is dangerous, and should be avoided.
+
+SEE ALSO
+========
+
+**python**\ (1)
From noreply at buildbot.pypy.org Tue Jan 10 15:00:56 2012
From: noreply at buildbot.pypy.org (stefanor)
Date: Tue, 10 Jan 2012 15:00:56 +0100 (CET)
Subject: [pypy-commit] pypy default: pypy manpage: Format for multiple --jit
arguments
Message-ID: <20120110140056.2340782110@wyvern.cs.uni-duesseldorf.de>
Author: Stefano Rivera
Branch:
Changeset: r51205:2f90612495e2
Date: 2012-01-10 16:00 +0200
http://bitbucket.org/pypy/pypy/changeset/2f90612495e2/
Log: pypy manpage: Format for multiple --jit arguments
diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst
--- a/pypy/doc/man/pypy.1.rst
+++ b/pypy/doc/man/pypy.1.rst
@@ -46,7 +46,8 @@
Print translation information about this PyPy executable.
--jit *arg*
- Low level JIT parameters. Format is *arg*\ ``=``\ *value*.
+ Low level JIT parameters. Format is
+ *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...]
``off``
Disable the JIT.
From noreply at buildbot.pypy.org Tue Jan 10 16:14:29 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 16:14:29 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: adjust
emit_guard_call_assembler and prepare_guard_call_assembler
Message-ID: <20120110151429.1D88382110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51206:a51d6a2b3e1d
Date: 2012-01-10 16:14 +0100
http://bitbucket.org/pypy/pypy/changeset/a51d6a2b3e1d/
Log: adjust emit_guard_call_assembler and prepare_guard_call_assembler
diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py
--- a/pypy/jit/backend/ppc/ppcgen/opassembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py
@@ -932,11 +932,11 @@
self._write_fail_index(fail_index)
descr = op.getdescr()
- assert isinstance(descr, LoopToken)
+ assert isinstance(descr, JitCellToken)
# XXX check this
- assert op.numargs() == len(descr._ppc_arglocs[0])
+ #assert op.numargs() == len(descr._ppc_arglocs[0])
resbox = TempInt()
- self._emit_call(fail_index, descr._ppc_direct_bootstrap_code, op.getarglist(),
+ self._emit_call(fail_index, descr._ppc_func_addr, op.getarglist(),
regalloc, result=resbox)
if op.result is None:
value = self.cpu.done_with_this_frame_void_v
diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py
--- a/pypy/jit/backend/ppc/ppcgen/regalloc.py
+++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py
@@ -877,10 +877,11 @@
def prepare_guard_call_assembler(self, op, guard_op):
descr = op.getdescr()
- assert isinstance(descr, LoopToken)
+ assert isinstance(descr, JitCellToken)
jd = descr.outermost_jitdriver_sd
assert jd is not None
- size = jd.portal_calldescr.get_result_size(self.cpu.translate_support_code)
+ #size = jd.portal_calldescr.get_result_size(self.cpu.translate_support_code)
+ size = jd.portal_calldescr.get_result_size()
vable_index = jd.index_of_virtualizable
if vable_index >= 0:
self._sync_var(op.getarg(vable_index))
From noreply at buildbot.pypy.org Tue Jan 10 16:54:05 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Tue, 10 Jan 2012 16:54:05 +0100 (CET)
Subject: [pypy-commit] pypy default: Add two papers.
Message-ID: <20120110155405.12F6982110@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch:
Changeset: r51207:71d3d24c92d1
Date: 2012-01-10 16:53 +0100
http://bitbucket.org/pypy/pypy/changeset/71d3d24c92d1/
Log: Add two papers.
diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst
--- a/pypy/doc/extradoc.rst
+++ b/pypy/doc/extradoc.rst
@@ -8,6 +8,9 @@
*Articles about PyPy published so far, most recent first:* (bibtex_ file)
+* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_,
+ C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo
+
* `Allocation Removal by Partial Evaluation in a Tracing JIT`_,
C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo
@@ -50,6 +53,9 @@
*Other research using PyPy (as far as we know it):*
+* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_,
+ N. Riley and C. Zilles
+
* `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_,
C. Bruni and T. Verwaest
@@ -65,6 +71,7 @@
.. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib
+.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf
.. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf
.. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf
.. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf
@@ -74,6 +81,7 @@
.. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf
.. _`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07
.. _`EU Reports`: index-report.html
+.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf
.. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf
.. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz
.. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7
From noreply at buildbot.pypy.org Tue Jan 10 17:05:43 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Tue, 10 Jan 2012 17:05:43 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: grammar changes all over
Message-ID: <20120110160543.77D1182110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: extradoc
Changeset: r4007:a65754d300a3
Date: 2012-01-10 10:05 -0600
http://bitbucket.org/pypy/extradoc/changeset/a65754d300a3/
Log: grammar changes all over
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -3,43 +3,45 @@
Hello.
-I'm pleased to inform about progress we made on NumPyPy both in terms of
-completeness and performance. This post mostly deals with the performance
-side and how far we got by now. **Word of warning:** It's worth noting that
-the performance work on the numpy side is not done - we're maybe half way
-through and there are trivial and not so trivial optimizations to be performed.
-In fact we didn't even start to implement some optimizations like vectorization.
+I'm pleased to inform you about the progress we have made on NumPyPy, both in
+terms of completeness and performance. This post mostly deals with the
+performance side and how far we have come so far. **Word of warning:** It's worth noting that the performance work on NumPyPy isn't done - we're maybe half way
+to where we want to be and there are many trivial and not so trivial
+optimizations to be performed. In fact we haven't even started to implement
+important optimizations, like vectorization.
Benchmark
---------
-We choose a laplace transform, which is also used on scipy's
-`PerformancePython`_ wiki. The problem with the implementation on the
-performance python wiki page is that there are two algorithms used which
-has different convergence, but also very different performance characteristics
-on modern machines. Instead we implemented our own versions in C and a set
-of various Python versions using numpy or not. The full source is available
-on `fijal's hack`_ repo and the exact revision used is 18502dbbcdb3.
+We chose a Laplace transform, based on SciPy's `PerformancePython`_ wiki.
+Unfortunately, the different implementations on the wiki page accidentally use
+two different algorithms, which have different convergences, and very different
+performance characteristics on modern computers. As a result, we implemented
+our own versions in C and Python (both with and without NumPy). The full source
+can be found in `fijal's hack`_ repo, all these benchmarks were performed at
+revision 18502dbbcdb3.
-Let me describe various algorithms used. Note that some of them contain
-pypy-specific hacks to work around current limitations in the implementation.
-Those hacks will go away eventually and the performance should improve and
-not decrease. It's worth noting that while numerically the algorithms used
-are identical, the exact data layout is not and differs between methods.
+First, let me describe various algorithms used. Note that some of them contain
+PyPy-specific hacks to work around limitations in the current implementation.
+These hacks will go away eventually and the performance should improve. It's
+worth noting that while numerically the algorithms used are identical, the
+exact data layout in memory differs between them.
-**Note on all the benchmarks:** they're all run once, but the performance
-is very stable across runs.
+**Note on all the benchmarks:** they were all run once, but the performance is
+very stable across runs.
-So, starting from the C version, it implements dead simple laplace transform
-using two loops and a double-reference memory (array of ``int**``). The double
-reference does not matter for performance and two algorithms are implemented
-in ``inline-laplace.c`` and ``laplace.c``. They're both compiled with
-``gcc 4.4.5`` and ``-O3``.
+Starting with the C version, it implements a dead simple laplace transform
+using two loops and a double-reference memory (array of ``int*``). The double
+reference does not matter for performance and two algorithms are implemented in
+``inline-laplace.c`` and ``laplace.c``. They're both compiled with
+``gcc 4.4.5`` at ``-O3``.
-A straightforward version of those in python
-is implemented in ``laplace.py`` using respectively ``inline_slow_time_step``
-and ``slow_time_step``. ``slow_2_time_step`` does the same thing, except
-it copies arrays in-place instead of creating new copies.
+A straightforward version of those in Python is implemented in ``laplace.py``
+using respectively ``inline_slow_time_step`` and ``slow_time_step``.
+``slow_2_time_step`` does the same thing, except it copies arrays in-place
+instead of creating new copies.
+
+(XXX: these are timed under PyPy?)
+-----------------------+----------------------+--------------------+
| bench | number of iterations | time per iteration |
@@ -55,42 +57,44 @@
| inline_slow python | 278 | 23.7 |
+-----------------------+----------------------+--------------------+
-The important thing to notice here that data dependency in the inline version
-is causing a huge slowdown. Note that this is already **not too bad**,
-as in yes, the braindead python version of the same algorithm takes longer
-and pypy is not able to use as much info about data being independent, but this
-is within the same ballpark - **15% - 170%** slower than C, but it definitely
-matters more which algorithm you choose than which language. For a comparison,
-slow versions take about **5.75s** each on CPython 2.6 **per iteration**,
-so estimating, they're about **200x** slower than the PyPy equivalent.
-I didn't measure full run though :)
+An important thing to notice here is that the data dependency in the inline
+version causes a huge slowdown for the C versions. This is already not too bad
+for us, the braindead Python version takes longer and PyPy is not able to take
+advantage of the knowledge that the data is independent, but it is in the same
+ballpark - **15% - 170%** slower than C, but the algorithm you choose matters
+more than the language. By comparison, the slow versions take about **5.75s**
+each on CPython 2.6 **per iteration**, and by estimating, are about **200x**
+slower than the PyPy equivalent. I didn't measure the full run though :)
-Next step is to use numpy expressions. The first problem we run into is that
-computing the error walks again the entire array. This is fairly inefficient
-in terms of cache access, so I took a liberty of computing errors every 15
-steps. This makes convergence rounded to the nearest 15 iterations, but
-speeds things up anyway. ``numeric_time_step`` takes the most braindead
-approach of replacing the array with itself, like this::
+The next step is to use NumPy expressions. The first problem we run into is
+that computing the error requires walking the entire array a second time. This
+is fairly inefficient in terms of cache access, so I took the liberty of
+computing the errors every 15 steps. This results in the convergence being
+rounded to the nearest 15 iterations, but speeds things up considerably (XXX:
+is this true?). ``numeric_time_step`` takes the most braindead approach of
+replacing the array with itself, like this::
- u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 +
+ u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 +
(u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv
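For readers without NumPy at hand, the slicing expression above corresponds to the following plain-Python stencil on nested lists (a sketch for illustration, not the benchmarked code):

```python
def laplace_step(u, dx2, dy2):
    # One Jacobi-style update of the interior points of grid `u`
    # (a list of lists), leaving the border rows/columns untouched --
    # the loop form of the u[1:-1, 1:-1] = ... slice assignment.
    nx, ny = len(u), len(u[0])
    dnr_inv = 0.5 / (dx2 + dy2)
    new = [row[:] for row in u]
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            new[i][j] = ((u[i-1][j] + u[i+1][j]) * dy2 +
                         (u[i][j-1] + u[i][j+1]) * dx2) * dnr_inv
    return new
```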
-We need 3 arrays here - one for an intermediate (pypy does not automatically
-create intermediates for expressions), one for a copy to compute error and
-one for the result. This works a bit by chance, since numpy ``+`` or
-``*`` creates an intermediate and pypy simulates the behavior if necessary.
+We need 3 arrays here - one is an intermediate (PyPy does not automatically
+create intermediates for expressions), one is a copy for computing the error,
+and one is the result. This works by chance, since in NumPy ``+`` or ``*``
+creates an intermediate, while NumPyPy avoids allocating the intermediate if
+possible.
``numeric_2_time_step`` works pretty much the same::
src = self.u
self.u = src.copy()
- self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
+ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
(src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
except the copy is now explicit rather than implicit.
``numeric_3_time_step`` does the same thing, but notices you don't have to copy
-the entire array, it's enough to copy border pieces and fill rest with zeros::
+the entire array, it's enough to copy the border pieces and fill the rest with
+zeros::
src = self.u
self.u = numpy.zeros((self.nx, self.ny), 'd')
@@ -98,29 +102,29 @@
self.u[-1] = src[-1]
self.u[:, 0] = src[:, 0]
self.u[:, -1] = src[:, -1]
- self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
+ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
(src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
``numeric_4_time_step`` is the one that tries to resemble the C version more.
Instead of doing an array copy, it actually notices that you can alternate
-between two arrays. This is exactly what C version does.
-Note the ``remove_invalidates`` call that's a pypy specific hack - we hope
-to remove this call in the near future, but in short it promises "I don't
-have any unbuilt intermediates that depend on the value of the argument",
-which means you don't have to compute expressions you're not actually using::
+between two arrays. This is exactly what C version does. The
+``remove_invalidates`` call is a PyPy specific hack - we hope to remove this
+call in the near future, but in short it promises "I don't have any unbuilt
+intermediates that depend on the value of the argument", which means you don't
+have to compute sub-expressions you're not actually using::
remove_invalidates(self.old_u)
remove_invalidates(self.u)
self.old_u[:,:] = self.u
src = self.old_u
- self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
+ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
(src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
This one is the most equivalent to the C version.
-``numeric_5_time_step`` does the same thing, but notices you don't have to
-copy the entire array, it's enough to just copy edges. This is an optimization
-that was not done in the C version::
+``numeric_5_time_step`` does the same thing, but notices you don't have to copy
+the entire array, it's enough to just copy edges. This is an optimization that
+was not done in the C version::
remove_invalidates(self.old_u)
remove_invalidates(self.u)
@@ -130,13 +134,13 @@
self.u[-1] = src[-1]
self.u[:, 0] = src[:, 0]
self.u[:, -1] = src[:, -1]
- self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
+ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
(src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
-Let's look at the table of runs. As above, ``gcc 4.4.5``, compiled with
-``-O3``, pypy nightly 7bb8b38d8563, 64bit platform. All of the numeric methods
-run 226 steps each, slightly more than 219, rounding to the next 15 when
-the error is computed. Comparison for PyPy and CPython:
+Let's look at the table of runs. As before, ``gcc 4.4.5``, compiled at ``-O3``,
+and PyPy nightly 7bb8b38d8563, on an x86-64 machine. All of the numeric methods
+run 226 steps each, slightly more than 219, rounding to the next 15 when the
+error is computed. Comparison for PyPy and CPython:
+-----------------------+-------------+----------------+
| benchmark | PyPy | CPython |
@@ -150,14 +154,15 @@
| numeric 4 | 11ms | 31ms |
+-----------------------+-------------+----------------+
| numeric 5 | 9.3ms | 21ms |
-+-----------------------+-------------+----------------+
++-----------------------+-------------+----------------+
-So, I can say that those preliminary results are pretty ok. They're not as
-fast as the C version, but we're already much faster than CPython, almost
-always more than 2x on this relatively real-world example. This is not the
-end though. As we continue work, we hope to use a much better high level
-information that we have about operations to eventually outperform C, hopefully
-in 2012. Stay tuned.
+We think that these preliminary results are pretty good, they're not as fast as
+the C version (or as fast as we'd like them to be), but we're already much
+faster than NumPy on CPython, almost always by more than 2x on this relatively
+real-world example. This is not the end though, in fact it's hardly the
+beginning: as we continue work, we hope to make even much better use of the
+high level information that we have, in order to eventually outperform C,
+hopefully in 2012. Stay tuned.
Cheers,
fijal
From noreply at buildbot.pypy.org Tue Jan 10 17:08:19 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Tue, 10 Jan 2012 17:08:19 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: another rewording
Message-ID: <20120110160819.2048C82110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: extradoc
Changeset: r4008:5dc64fda0ea7
Date: 2012-01-10 10:08 -0600
http://bitbucket.org/pypy/extradoc/changeset/5dc64fda0ea7/
Log: another rewording
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -5,10 +5,11 @@
I'm pleased to inform you about the progress we have made on NumPyPy, both in
terms of completeness and performance. This post mostly deals with the
-performance side and how far we have come so far. **Word of warning:** It's worth noting that the performance work on NumPyPy isn't done - we're maybe half way
-to where we want to be and there are many trivial and not so trivial
-optimizations to be performed. In fact we haven't even started to implement
-important optimizations, like vectorization.
+performance side and how far we have come so far. **Word of warning:** the
+performance work on NumPyPy isn't done - we're maybe half way to where we want
+to be and there are many trivial and not so trivial optimizations to be
+performed. In fact we haven't even started to implement important
+optimizations, like vectorization.
Benchmark
---------
From noreply at buildbot.pypy.org Tue Jan 10 17:12:35 2012
From: noreply at buildbot.pypy.org (stefanor)
Date: Tue, 10 Jan 2012 17:12:35 +0100 (CET)
Subject: [pypy-commit] pypy default: Rather use standard Sphinx 1.x target
Message-ID: <20120110161235.4952A82110@wyvern.cs.uni-duesseldorf.de>
Author: Stefano Rivera
Branch:
Changeset: r51208:0e67e4538c80
Date: 2012-01-10 18:11 +0200
http://bitbucket.org/pypy/pypy/changeset/0e67e4538c80/
Log: Rather use standard Sphinx 1.x target
diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile
--- a/pypy/doc/Makefile
+++ b/pypy/doc/Makefile
@@ -12,7 +12,7 @@
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest
+.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest
help:
@echo "Please use \`make <target>' where <target> is one of"
@@ -23,6 +23,7 @@
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+ @echo " man to make manual pages"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@@ -79,6 +80,11 @@
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
"run these through (pdf)latex."
+man:
+ $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+ @echo
+ @echo "Build finished. The manual pages are in $(BUILDDIR)/man"
+
changes:
python config/generate.py
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@@ -97,9 +103,3 @@
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
-
-manpage:
- $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
- @echo
- @echo "Build finished; the man pages are in $(BUILDDIR)/man"
-
From noreply at buildbot.pypy.org Tue Jan 10 17:24:08 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 17:24:08 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: (bivab,
hager): StackLocations now have a value field which stores the
offset to the SPP. It is used in regalloc_mov.
Message-ID: <20120110162408.63E2082110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51209:b5f5e48c3799
Date: 2012-01-10 17:22 +0100
http://bitbucket.org/pypy/pypy/changeset/b5f5e48c3799/
Log: (bivab, hager): StackLocations now have a value field which stores
the offset to the SPP. It is used in regalloc_mov.
diff --git a/pypy/jit/backend/ppc/ppcgen/locations.py b/pypy/jit/backend/ppc/ppcgen/locations.py
--- a/pypy/jit/backend/ppc/ppcgen/locations.py
+++ b/pypy/jit/backend/ppc/ppcgen/locations.py
@@ -88,11 +88,11 @@
def __init__(self, position, num_words=1, type=INT):
self.position = position
- self.width = num_words * WORD
self.type = type
+ self.value = get_spp_offset(position)
def __repr__(self):
- return 'FP(%s)+%d' % (self.type, self.position,)
+ return 'SPP(%s)+%d' % (self.type, self.value)
def location_code(self):
return 'b'
@@ -108,3 +108,9 @@
def imm(val):
return ImmLocation(val)
+
+def get_spp_offset(pos):
+ if pos < 0:
+ return -pos * WORD
+ else:
+ return -(pos + 1) * WORD
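As a sanity check, the mapping implemented by `get_spp_offset` can be reproduced standalone (assuming a 64-bit `WORD` of 8 bytes, as on a 64-bit PPC target):

```python
WORD = 8  # assumption: 64-bit word size

def get_spp_offset(pos):
    # Same mapping as the diff above: negative stack positions land at
    # positive offsets from SPP, non-negative positions below it.
    if pos < 0:
        return -pos * WORD
    else:
        return -(pos + 1) * WORD
```

So consecutive non-negative positions map to -8, -16, -24, ... while negative positions map to 8, 16, 24, ... with no two positions colliding on the same offset.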
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -711,14 +711,14 @@
# move immediate value to memory
elif loc.is_stack():
self.mc.alloc_scratch_reg()
- offset = loc.as_key() * WORD
+ offset = loc.value
self.mc.load_imm(r.SCRATCH.value, value)
self.mc.store(r.SCRATCH.value, r.SPP.value, offset)
self.mc.free_scratch_reg()
return
assert 0, "not supported location"
elif prev_loc.is_stack():
- offset = prev_loc.as_key() * WORD
+ offset = prev_loc.value
# move from memory to register
if loc.is_reg():
reg = loc.as_key()
@@ -726,7 +726,7 @@
return
# move in memory
elif loc.is_stack():
- target_offset = loc.as_key() * WORD
+ target_offset = loc.value
self.mc.alloc_scratch_reg()
self.mc.load(r.SCRATCH.value, r.SPP.value, offset)
self.mc.store(r.SCRATCH.value, r.SPP.value, target_offset)
@@ -742,7 +742,7 @@
return
# move to memory
elif loc.is_stack():
- offset = loc.as_key() * WORD
+ offset = loc.value
self.mc.store(reg, r.SPP.value, offset)
return
assert 0, "not supported location"
From noreply at buildbot.pypy.org Tue Jan 10 17:24:09 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 17:24:09 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: (bivab,
hager): we don't want to free the args here
Message-ID: <20120110162409.8818782110@wyvern.cs.uni-duesseldorf.de>
Author: hager
Branch: ppc-jit-backend
Changeset: r51210:f04c600f8177
Date: 2012-01-10 17:23 +0100
http://bitbucket.org/pypy/pypy/changeset/f04c600f8177/
Log: (bivab, hager): we don't want to free the args here
diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py
--- a/pypy/jit/backend/ppc/ppcgen/opassembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py
@@ -466,7 +466,6 @@
self.mc.call(adr)
self.mark_gc_roots(force_index)
- regalloc.possibly_free_vars(args)
# restore the arguments stored on the stack
if result is not None:
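The lifetime issue behind this change can be illustrated with a toy allocator: releasing the call's argument variables *before* the stack-saved arguments are restored lets their registers be handed out again, clobbering values the restore code still needs. `ToyRegAlloc` and its methods are a made-up illustration, not PyPy's real regalloc API:

```python
# Toy model of freeing argument variables too early.
class ToyRegAlloc:
    def __init__(self, regs):
        self.free_regs = list(regs)
        self.bindings = {}          # variable name -> register

    def bind(self, var):
        reg = self.free_regs.pop()
        self.bindings[var] = reg
        return reg

    def possibly_free_vars(self, variables):
        for v in variables:
            self.free_regs.append(self.bindings.pop(v))

ra = ToyRegAlloc(["r3"])
ra.bind("arg0")                     # argument lives in r3
ra.possibly_free_vars(["arg0"])     # freed too early...
ra.bind("tmp")                      # ...so r3 is immediately reused
```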
From noreply at buildbot.pypy.org Tue Jan 10 17:24:31 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Tue, 10 Jan 2012 17:24:31 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: Rewrite to remove the emphasis on **per iteration** --- all the other
Message-ID: <20120110162431.7450182110@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: extradoc
Changeset: r4009:749fa78eeb73
Date: 2012-01-10 17:24 +0100
http://bitbucket.org/pypy/extradoc/changeset/749fa78eeb73/
Log: Rewrite to remove the emphasis on **per iteration** --- all the other numbers are also per iteration.
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -64,8 +64,8 @@
advantage of the knowledge that the data is independent, but it is in the same
ballpark - **15% - 170%** slower than C, but the algorithm you choose matters
more than the language. By comparison, the slow versions take about **5.75s**
-each on CPython 2.6 **per iteration**, and by estimating, are about **200x**
-slower than the PyPy equivalent. I didn't measure the full run though :)
+each on CPython 2.6 per iteration, and by estimating, would be about **200x**
+slower than the PyPy equivalent if had the patience to measure the full run.
The next step is to use NumPy expressions. The first problem we run into is
that computing the error requires walking the entire array a second time. This
From noreply at buildbot.pypy.org Tue Jan 10 17:25:33 2012
From: noreply at buildbot.pypy.org (l.diekmann)
Date: Tue, 10 Jan 2012 17:25:33 +0100 (CET)
Subject: [pypy-commit] pypy set-strategies: optimization fix
Message-ID: <20120110162533.17BE482110@wyvern.cs.uni-duesseldorf.de>
Author: Lukas Diekmann
Branch: set-strategies
Changeset: r51211:498b6ee337e9
Date: 2012-01-10 17:22 +0100
http://bitbucket.org/pypy/pypy/changeset/498b6ee337e9/
Log: optimization fix
diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -691,6 +691,7 @@
for i in l:
if i == obj:
return True
+ return False
return ListStrategy.contains(self, w_list, w_obj)
def length(self, w_list):
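The one-line fix above is easy to model: without the added `return False`, a failed scan over a specialized list fell through to the generic strategy's `contains()`, re-scanning the whole list on the slow path. The classes below are a simplified sketch, not PyPy's actual strategy hierarchy:

```python
# Sketch of the set-strategies fix: stop after a failed specialized scan.
class ListStrategy:
    def contains(self, items, obj):
        # generic, slower fallback path
        return obj in items

class IntListStrategy(ListStrategy):
    def contains(self, items, obj):
        if isinstance(obj, int):
            for i in items:
                if i == obj:
                    return True
            return False            # the fix: don't fall through
        return ListStrategy.contains(self, items, obj)
```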
From noreply at buildbot.pypy.org Tue Jan 10 17:27:17 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Tue, 10 Jan 2012 17:27:17 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: less formal writing
Message-ID: <20120110162717.6899282110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: extradoc
Changeset: r4010:fc2925740080
Date: 2012-01-10 10:25 -0600
http://bitbucket.org/pypy/extradoc/changeset/fc2925740080/
Log: less formal writing
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -3,47 +3,44 @@
Hello.
-I'm pleased to inform you about the progress we have made on NumPyPy, both in
-terms of completeness and performance. This post mostly deals with the
-performance side and how far we have come so far. **Word of warning:** the
-performance work on NumPyPy isn't done - we're maybe half way to where we want
-to be and there are many trivial and not so trivial optimizations to be
-performed. In fact we haven't even started to implement important
-optimizations, like vectorization.
+We're excited to let you know about some of the great progress we've made on
+NumPyPy -- both completeness and performance. Here we'll mostly talk about the
+performance side and how far we have come so far. **Word of warning:** this
+work isn't done - we're maybe half way to where we want to be and there are
+many trivial and not so trivial optimizations to be written. (For example, we
+haven't even started to implement important optimizations, like vectorization.)
Benchmark
---------
-We choose a laplace transform, based on SciPy's `PerformancePython`_ wiki.
+We chose a laplace transform, based on SciPy's `PerformancePython`_ wiki.
Unfortunately, the different implementations on the wiki page accidentally use
two different algorithms, which have different convergences, and very different
performance characteristics on modern computers. As a result, we implemented
-our own versions in C and Python (both with and without NumPy). The full source
+our own versions in both C and Python (with and without NumPy). The full source
can be found in `fijal's hack`_ repo, all these benchmarks were performed at
revision 18502dbbcdb3.
First, let me describe various algorithms used. Note that some of them contain
PyPy-specific hacks to work around limitations in the current implementation.
-These hacks will go away eventually and the performance should improve. It's
-worth noting that while numerically the algorithms used are identical, the
-exact data layout in memory differs between them.
+These hacks will go away eventually and the performance will improve.
+Numerically the algorithms used are identical, however exact data layout in
+memory differs between them.
-**Note on all the benchmarks:** they were all run once, but the performance is
-very stable across runs.
+**A note about all the benchmarks:** they were each run once, but the
+performance is very stable across runs.
Starting with the C version, it implements a dead simple laplace transform
-using two loops and a double-reference memory (array of ``int*``). The double
-reference does not matter for performance and two algorithms are implemented in
-``inline-laplace.c`` and ``laplace.c``. They're both compiled with
-``gcc 4.4.5`` at ``-O3``.
+using two loops and double-reference memory (array of ``int*``). The double
+reference does not matter for performance and the two algorithms are
+implemented in ``inline-laplace.c`` and ``laplace.c``. They were both compiled
+with ``gcc 4.4.5`` at ``-O3``.
A straightforward version of those in Python is implemented in ``laplace.py``
using respectively ``inline_slow_time_step`` and ``slow_time_step``.
``slow_2_time_step`` does the same thing, except it copies arrays in-place
instead of creating new copies.
-(XXX: these are timed under PyPy?)
-
+-----------------------+----------------------+--------------------+
| bench | number of iterations | time per iteration |
+-----------------------+----------------------+--------------------+
@@ -60,31 +57,31 @@
An important thing to notice here is that the data dependency in the inline
version causes a huge slowdown for the C versions. This is already not too bad
-for us, the braindead Python version takes longer and PyPy is not able to take
-advantage of the knowledge that the data is independent, but it is in the same
-ballpark - **15% - 170%** slower than C, but the algorithm you choose matters
-more than the language. By comparison, the slow versions take about **5.75s**
-each on CPython 2.6 **per iteration**, and by estimating, are about **200x**
-slower than the PyPy equivalent. I didn't measure the full run though :)
+for us though, the braindead Python version takes longer and PyPy is not able
+to take advantage of the knowledge that the data is independent, but it is in
+the same ballpark as the C versions - **15% - 170%** slower, but the algorithm
+you choose matters more than the language. By comparison, the slow versions
+take about **5.75s** each on CPython 2.6 **per iteration**, and by estimating,
+are about **200x** slower than the PyPy equivalent. I didn't measure the full
+run though :)
The next step is to use NumPy expressions. The first problem we run into is
that computing the error requires walking the entire array a second time. This
is fairly inefficient in terms of cache access, so I took the liberty of
computing the errors every 15 steps. This results in the convergence being
-rounded to the nearest 15 iterations, but speeds things up considerably (XXX:
-is this true?). ``numeric_time_step`` takes the most braindead approach of
-replacing the array with itself, like this::
+rounded to the nearest 15 iterations, but speeds things up considerably.
+``numeric_time_step`` takes the most braindead approach of replacing the array
+with itself, like this::
u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 +
(u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv
-We need 3 arrays here - one is an intermediate (PyPy does not automatically
-create intermediates for expressions), one is a copy for computing the error,
-and one is the result. This works by chance, since in NumPy ``+`` or ``*``
-creates an intermediate, while NumPyPy avoids allocating the intermediate if
-possible.
+We need 3 arrays here - one is an intermediate (PyPy only needs one, for all of
+those subexpressions), one is a copy for computing the error, and one is the
+result. This works automatically, since in NumPy ``+`` or ``*`` creates an
+intermediate, while NumPyPy avoids allocating the intermediate if possible.
-``numeric_2_time_step`` works pretty much the same::
+``numeric_2_time_step`` works in pretty much the same way::
src = self.u
self.u = src.copy()
@@ -106,9 +103,9 @@
self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
(src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
-``numeric_4_time_step`` is the one that tries to resemble the C version more.
+``numeric_4_time_step`` is the one that tries hardest to resemble the C version.
Instead of doing an array copy, it actually notices that you can alternate
-between two arrays. This is exactly what C version does. The
+between two arrays. This is exactly what the C version does. The
``remove_invalidates`` call is a PyPy specific hack - we hope to remove this
call in the near future, but in short it promises "I don't have any unbuilt
intermediates that depend on the value of the argument", which means you don't
@@ -121,11 +118,11 @@
self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 +
(src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
-This one is the most equivalent to the C version.
+This one is the most comparable to the C version.
``numeric_5_time_step`` does the same thing, but notices you don't have to copy
-the entire array, it's enough to just copy edges. This is an optimization that
-was not done in the C version::
+the entire array, it's enough to just copy the edges. This is an optimization
+that was not done in the C version::
remove_invalidates(self.old_u)
remove_invalidates(self.u)
@@ -140,8 +137,8 @@
Let's look at the table of runs. As before, ``gcc 4.4.5``, compiled at ``-O3``,
and PyPy nightly 7bb8b38d8563, on an x86-64 machine. All of the numeric methods
-run 226 steps each, slightly more than 219, rounding to the next 15 when the
-error is computed. Comparison for PyPy and CPython:
+run for 226 steps, slightly more than the 219, rounding to the next 15 when the
+error is computed.
+-----------------------+-------------+----------------+
| benchmark | PyPy | CPython |
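The slice-based update discussed throughout the post runs as-is under NumPy. A minimal self-contained version of one Jacobi step (grid size, spacing, and boundary values here are arbitrary choices, not the benchmark's parameters):

```python
import numpy as np

# One step of the slice-based laplace update from the blog draft.
nx = ny = 6
dx = dy = 1.0
dx2, dy2 = dx * dx, dy * dy
dnr_inv = 0.5 / (dx2 + dy2)

u = np.zeros((nx, ny))
u[0, :] = 1.0                       # fixed boundary along one edge

# NumPy evaluates the whole right-hand side (building intermediates)
# before assigning, so the in-place update uses only old values.
u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1]) * dy2 +
                 (u[1:-1, 0:-2] + u[1:-1, 2:]) * dx2) * dnr_inv
```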
From noreply at buildbot.pypy.org Tue Jan 10 17:27:18 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Tue, 10 Jan 2012 17:27:18 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: resolved merge
Message-ID: <20120110162718.85F4A82110@wyvern.cs.uni-duesseldorf.de>
Author: Alex Gaynor
Branch: extradoc
Changeset: r4011:75aa1ba6d29f
Date: 2012-01-10 10:27 -0600
http://bitbucket.org/pypy/extradoc/changeset/75aa1ba6d29f/
Log: resolved merge
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -61,9 +61,9 @@
to take advantage of the knowledge that the data is independent, but it is in
the same ballpark as the C versions - **15% - 170%** slower, but the algorithm
you choose matters more than the language. By comparison, the slow versions
-take about **5.75s** each on CPython 2.6 **per iteration**, and by estimating,
-are about **200x** slower than the PyPy equivalent. I didn't measure the full
-run though :)
+take about **5.75s** each on CPython 2.6 per iteration, and by estimating,
+are about **200x** slower than the PyPy equivalent, if I had the patience to
+measure the full run.
The next step is to use NumPy expressions. The first problem we run into is
that computing the error requires walking the entire array a second time. This
From noreply at buildbot.pypy.org Tue Jan 10 17:31:10 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 17:31:10 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: quantify "faster than C"
Message-ID: <20120110163110.F093982110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r4012:ad6f9cb35d27
Date: 2012-01-10 18:30 +0200
http://bitbucket.org/pypy/extradoc/changeset/ad6f9cb35d27/
Log: quantify "faster than C"
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -159,8 +159,9 @@
faster than NumPy on CPython, almost always by more than 2x on this relatively
real-world example. This is not the end though, in fact it's hardly the
beginning: as we continue work, we hope to make even much better use of the
-high level information that we have, in order to eventually outperform C,
-hopefully in 2012. Stay tuned.
+high level information that we have. Looking at the generated assembler by
+gcc in this example it's pretty clear we can outperform it by having a much
+better aliasing information and hence a better possibilities for vectorization.
Cheers,
fijal
From noreply at buildbot.pypy.org Tue Jan 10 17:35:37 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 17:35:37 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: add link
Message-ID: <20120110163537.8E97482110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r4013:1f530d01ba87
Date: 2012-01-10 18:35 +0200
http://bitbucket.org/pypy/extradoc/changeset/1f530d01ba87/
Log: add link
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -167,3 +167,4 @@
fijal
.. _`PerformancePython`: http://www.scipy.org/PerformancePython
+.. _`fijal's hack`: https://bitbucket.org/fijal/hack2/src/default/bench/laplace
From noreply at buildbot.pypy.org Tue Jan 10 17:36:17 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Tue, 10 Jan 2012 17:36:17 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: wording.
Message-ID: <20120110163617.8EC9482110@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: extradoc
Changeset: r4014:9566e67df82c
Date: 2012-01-10 17:36 +0100
http://bitbucket.org/pypy/extradoc/changeset/9566e67df82c/
Log: wording.
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -160,8 +160,9 @@
real-world example. This is not the end though, in fact it's hardly the
beginning: as we continue work, we hope to make even much better use of the
high level information that we have. Looking at the generated assembler by
-gcc in this example it's pretty clear we can outperform it by having a much
-better aliasing information and hence a better possibilities for vectorization.
+gcc in this example it's pretty clear we can outperform it, thanks to better
+aliasing information and hence better possibilities for vectorization.
+Stay tuned.
Cheers,
fijal
From noreply at buildbot.pypy.org Tue Jan 10 17:53:45 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Tue, 10 Jan 2012 17:53:45 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: typo
Message-ID: <20120110165345.87D4482110@wyvern.cs.uni-duesseldorf.de>
Author: Armin Rigo
Branch: extradoc
Changeset: r4015:24ad6171712f
Date: 2012-01-10 17:53 +0100
http://bitbucket.org/pypy/extradoc/changeset/24ad6171712f/
Log: typo
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -152,7 +152,7 @@
| numeric 4 | 11ms | 31ms |
+-----------------------+-------------+----------------+
| numeric 5 | 9.3ms | 21ms |
-+-----------------------+-------------+-----------------
++-----------------------+-------------+----------------+
We think that these preliminary results are pretty good, they're not as fast as
the C version (or as fast as we'd like them to be), but we're already much
From noreply at buildbot.pypy.org Tue Jan 10 19:22:58 2012
From: noreply at buildbot.pypy.org (edelsohn)
Date: Tue, 10 Jan 2012 19:22:58 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: English language cleanups.
Message-ID: <20120110182258.A8F4E82110@wyvern.cs.uni-duesseldorf.de>
Author: edelsohn
Branch: extradoc
Changeset: r4016:0d508d74845b
Date: 2012-01-10 13:22 -0500
http://bitbucket.org/pypy/extradoc/changeset/0d508d74845b/
Log: English language cleanups.
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -4,9 +4,10 @@
Hello.
We're excited to let you know about some of the great progress we've made on
-NumPyPy -- both completeness and performance. Here we'll mostly talk about the
-performance side and how far we have come so far. **Word of warning:** this
-work isn't done - we're maybe half way to where we want to be and there are
+NumPyPy: both completeness and performance. In this blog entry we mostly
+will talk about performance and how much progress we have made so far.
+**Word of warning:** this
+work isn't done -- we're maybe half way to where we want to be and there are
many trivial and not so trivial optimizations to be written. (For example, we
haven't even started to implement important optimizations, like vectorization.)
@@ -27,10 +28,10 @@
Numerically the algorithms used are identical, however exact data layout in
memory differs between them.
-**A note about all the benchmarks:** they were each run once, but the
+**A note about all the benchmarks:** they each were run once, but the
performance is very stable across runs.
-Starting with the C version, it implements a dead simple laplace transform
+Starting with the C version, it implements a trivial laplace transform
using two loops and double-reference memory (array of ``int*``). The double
reference does not matter for performance and the two algorithms are
implemented in ``inline-laplace.c`` and ``laplace.c``. They were both compiled
@@ -55,13 +56,14 @@
| inline_slow python | 278 | 23.7 |
+-----------------------+----------------------+--------------------+
-An important thing to notice here is that the data dependency in the inline
-version causes a huge slowdown for the C versions. This is already not too bad
-for us though, the braindead Python version takes longer and PyPy is not able
-to take advantage of the knowledge that the data is independent, but it is in
-the same ballpark as the C versions - **15% - 170%** slower, but the algorithm
-you choose matters more than the language. By comparison, the slow versions
-take about **5.75s** each on CPython 2.6 per iteration, and by estimating,
+An important thing to notice is the data dependency of the inline
+version causes a huge slowdown for the C versions. This is not a severe
+disadvantage for us though -- the brain-dead Python version takes longer
+and PyPy is not able to take advantage of the knowledge that the data is
+independent. The results are in the same ballpark as the C versions --
+**15% - 170%** slower, but the algorithm
+one chooses matters more than the language. By comparison, the slow versions
+take about **5.75s** each on CPython 2.6 per iteration, and by estimation,
are about **200x** slower than the PyPy equivalent, if I had the patience to
measure the full run.
@@ -78,7 +80,7 @@
We need 3 arrays here - one is an intermediate (PyPy only needs one, for all of
those subexpressions), one is a copy for computing the error, and one is the
-result. This works automatically, since in NumPy ``+`` or ``*`` creates an
+result. This works automatically because in NumPy ``+`` or ``*`` creates an
intermediate, while NumPyPy avoids allocating the intermediate if possible.
``numeric_2_time_step`` works in pretty much the same way::
@@ -90,7 +92,7 @@
except the copy is now explicit rather than implicit.
-``numeric_3_time_step`` does the same thing, but notices you don't have to copy
+``numeric_3_time_step`` does the same thing, but notices one doesn't have to copy
the entire array, it's enough to copy the border pieces and fill rest with
zeros::
@@ -104,12 +106,12 @@
(src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv
``numeric_4_time_step`` is the one that tries hardest to resemble the C version.
-Instead of doing an array copy, it actually notices that you can alternate
+Instead of doing an array copy, it actually notices that one can alternate
between two arrays. This is exactly what the C version does. The
``remove_invalidates`` call is a PyPy specific hack - we hope to remove this
-call in the near future, but in short it promises "I don't have any unbuilt
-intermediates that depend on the value of the argument", which means you don't
-have to compute sub-expressions you're not actually using::
+call in the near future, but, in short, it promises "I don't have any unbuilt
+intermediates that depend on the value of the argument", which means one doesn't
+have to compute sub-expressions one is not actually using::
remove_invalidates(self.old_u)
remove_invalidates(self.u)
@@ -120,7 +122,7 @@
This one is the most comparable to the C version.
-``numeric_5_time_step`` does the same thing, but notices you don't have to copy
+``numeric_5_time_step`` does the same thing, but notices one doesn't have to copy
the entire array, it's enough to just copy the edges. This is an optimization
that was not done in the C version::
@@ -158,9 +160,9 @@
the C version (or as fast as we'd like them to be), but we're already much
faster than NumPy on CPython, almost always by more than 2x on this relatively
real-world example. This is not the end though, in fact it's hardly the
-beginning: as we continue work, we hope to make even much better use of the
+beginning! As we continue work, we hope to make even more use of the
high level information that we have. Looking at the generated assembler by
-gcc in this example it's pretty clear we can outperform it, thanks to better
+gcc in this example, it's pretty clear we can outperform it, thanks to better
aliasing information and hence better possibilities for vectorization.
Stay tuned.
From noreply at buildbot.pypy.org Tue Jan 10 19:45:00 2012
From: noreply at buildbot.pypy.org (edelsohn)
Date: Tue, 10 Jan 2012 19:45:00 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: More English improvements and a few commas.
Message-ID: <20120110184500.75A9082110@wyvern.cs.uni-duesseldorf.de>
Author: edelsohn
Branch: extradoc
Changeset: r4017:15a3491e715a
Date: 2012-01-10 13:44 -0500
http://bitbucket.org/pypy/extradoc/changeset/15a3491e715a/
Log: More English improvements and a few commas.
diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst
--- a/blog/draft/laplace.rst
+++ b/blog/draft/laplace.rst
@@ -38,7 +38,7 @@
with ``gcc 4.4.5`` at ``-O3``.
A straightforward version of those in Python is implemented in ``laplace.py``
-using respectively ``inline_slow_time_step`` and ``slow_time_step``.
+using, respectively, ``inline_slow_time_step`` and ``slow_time_step``.
``slow_2_time_step`` does the same thing, except it copies arrays in-place
instead of creating new copies.
@@ -63,7 +63,7 @@
independent. The results are in the same ballpark as the C versions --
**15% - 170%** slower, but the algorithm
one chooses matters more than the language. By comparison, the slow versions
-take about **5.75s** each on CPython 2.6 per iteration, and by estimation,
+take about **5.75s** each on CPython 2.6 per iteration and, by estimation,
are about **200x** slower than the PyPy equivalent, if I had the patience to
measure the full run.
@@ -78,7 +78,7 @@
u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 +
(u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv
-We need 3 arrays here - one is an intermediate (PyPy only needs one, for all of
+We need 3 arrays here -- one is an intermediate (PyPy only needs one, for all of
those subexpressions), one is a copy for computing the error, and one is the
result. This works automatically because in NumPy ``+`` or ``*`` creates an
intermediate, while NumPyPy avoids allocating the intermediate if possible.
@@ -92,7 +92,7 @@
except the copy is now explicit rather than implicit.
-``numeric_3_time_step`` does the same thing, but notices one doesn't have to copy
+``numeric_3_time_step`` does the same thing, but notice one doesn't have to copy
the entire array, it's enough to copy the border pieces and fill rest with
zeros::
@@ -156,13 +156,13 @@
| numeric 5 | 9.3ms | 21ms |
+-----------------------+-------------+----------------+
-We think that these preliminary results are pretty good, they're not as fast as
+We think that these preliminary results are pretty good. They're not as fast as
the C version (or as fast as we'd like them to be), but we're already much
-faster than NumPy on CPython, almost always by more than 2x on this relatively
-real-world example. This is not the end though, in fact it's hardly the
+faster than NumPy on CPython -- almost always by more than 2x on this relatively
+real-world example. This is not the end, though. In fact, it's hardly the
beginning! As we continue work, we hope to make even more use of the
-high level information that we have. Looking at the generated assembler by
-gcc in this example, it's pretty clear we can outperform it, thanks to better
+high level information that we have. Looking at the assembler generated by
+gcc for this example, it's pretty clear we can outperform it thanks to better
aliasing information and hence better possibilities for vectorization.
Stay tuned.
From noreply at buildbot.pypy.org Tue Jan 10 20:19:34 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 20:19:34 +0100 (CET)
Subject: [pypy-commit] pypy default: add a note about special methods
Message-ID: <20120110191934.D8F4582110@wyvern.cs.uni-duesseldorf.de>
Author: Maciej Fijalkowski