[pypy-commit] pypy space-newtext: merge default
cfbolz
pypy.commits at gmail.com
Fri Dec 16 06:12:30 EST 2016
Author: Carl Friedrich Bolz <cfbolz at gmx.de>
Branch: space-newtext
Changeset: r89093:b799e1db856f
Date: 2016-12-16 11:39 +0100
http://bitbucket.org/pypy/pypy/changeset/b799e1db856f/
Log: merge default
diff --git a/pypy/doc/contributor.rst b/pypy/doc/contributor.rst
--- a/pypy/doc/contributor.rst
+++ b/pypy/doc/contributor.rst
@@ -1,3 +1,9 @@
+#encoding utf-8
+
+Contributors
+------------
+::
+
Armin Rigo
Maciej Fijalkowski
Carl Friedrich Bolz
@@ -307,7 +313,7 @@
Mads Kiilerich
Antony Lee
Jason Madden
- Daniel Neuh�user
+ Daniel Neuhäuser
reubano at gmail.com
Yaroslav Fedevych
Jim Hunziker
diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst
--- a/pypy/doc/cppyy.rst
+++ b/pypy/doc/cppyy.rst
@@ -3,16 +3,17 @@
The cppyy module delivers dynamic Python-C++ bindings.
It is designed for automation, high performance, scale, interactivity, and
-handling all of modern C++.
+handling all of modern C++ (11, 14, etc.).
It is based on `Cling`_ which, through `LLVM`_/`clang`_, provides C++
reflection and interactivity.
Reflection information is extracted from C++ header files.
Cppyy itself is built into PyPy (an alternative exists for CPython), but
-it requires a backend, installable through pip, to interface with Cling.
+it requires a `backend`_, installable through pip, to interface with Cling.
.. _Cling: https://root.cern.ch/cling
.. _LLVM: http://llvm.org/
.. _clang: http://clang.llvm.org/
+.. _backend: https://pypi.python.org/pypi/PyPy-cppyy-backend
Installation
@@ -22,25 +23,39 @@
module, which is no longer supported.
Both the tooling and user-facing Python codes are very backwards compatible,
however.
-Further dependencies are cmake (for general build) and Python2.7 (for LLVM).
+Further dependencies are cmake (for general build), Python2.7 (for LLVM), and
+a modern C++ compiler (one that supports at least C++11).
Assuming you have a recent enough version of PyPy installed, use pip to
complete the installation of cppyy::
- $ pypy-c -m pip install PyPy-cppyy-backend
+ $ MAKE_NPROCS=4 pypy-c -m pip install --verbose PyPy-cppyy-backend
+Set the number of parallel builds ('4' in this example, through the MAKE_NPROCS
+environment variable) to a number appropriate for your machine.
The building process may take quite some time as it includes a customized
-version of LLVM as part of Cling.
+version of LLVM as part of Cling, which is why --verbose is recommended so that
+you can see the build progress.
+
+The default installation will be under
+$PYTHONHOME/site-packages/cppyy_backend/lib,
+which needs to be added to your dynamic loader path (LD_LIBRARY_PATH).
+If you need the dictionary and class map generation tools (used in the examples
+below), you need to add $PYTHONHOME/site-packages/cppyy_backend/bin to your
+executable path (PATH).
Basic bindings example
----------------------
-Now test with a trivial example whether all packages are properly installed
-and functional.
-First, create a C++ header file with some class in it (note that all functions
-are made inline for convenience; a real-world example would of course have a
-corresponding source file)::
+These examples assume that cppyy_backend is pointed to by the environment
+variable CPPYYHOME, and that CPPYYHOME/lib is added to LD_LIBRARY_PATH and
+CPPYYHOME/bin to PATH.
+
+Let's first test with a trivial example whether all packages are properly
+installed and functional.
+Create a C++ header file with some class in it (all functions are made inline
+for convenience; if you have out-of-line code, link with it as appropriate)::
$ cat MyClass.h
class MyClass {
@@ -54,11 +69,11 @@
int m_myint;
};
-Then, generate the bindings using ``genreflex`` (part of ROOT), and compile the
-code::
+Then, generate the bindings using ``genreflex`` (installed under
+cppyy_backend/bin in site-packages), and compile the code::
$ genreflex MyClass.h
- $ g++ -fPIC -rdynamic -O2 -shared -I$REFLEXHOME/include MyClass_rflx.cpp -o libMyClassDict.so -L$REFLEXHOME/lib -lReflex
+ $ g++ -std=c++11 -fPIC -rdynamic -O2 -shared -I$CPPYYHOME/include MyClass_rflx.cpp -o libMyClassDict.so -L$CPPYYHOME/lib -lCling
Next, make sure that the library can be found through the dynamic lookup path
(the ``LD_LIBRARY_PATH`` environment variable on Linux, ``PATH`` on Windows),
@@ -110,7 +125,7 @@
For example::
$ genreflex MyClass.h --rootmap=libMyClassDict.rootmap --rootmap-lib=libMyClassDict.so
- $ g++ -fPIC -rdynamic -O2 -shared -I$REFLEXHOME/include MyClass_rflx.cpp -o libMyClassDict.so -L$REFLEXHOME/lib -lReflex
+ $ g++ -std=c++11 -fPIC -rdynamic -O2 -shared -I$CPPYYHOME/include MyClass_rflx.cpp -o libMyClassDict.so -L$CPPYYHOME/lib -lCling
where the first option (``--rootmap``) specifies the output file name, and the
second option (``--rootmap-lib``) the name of the reflection library where
@@ -212,7 +227,7 @@
Now the reflection info can be generated and compiled::
$ genreflex MyAdvanced.h --selection=MyAdvanced.xml
- $ g++ -fPIC -rdynamic -O2 -shared -I$REFLEXHOME/include MyAdvanced_rflx.cpp -o libAdvExDict.so -L$REFLEXHOME/lib -lReflex
+ $ g++ -std=c++11 -fPIC -rdynamic -O2 -shared -I$CPPYYHOME/include MyAdvanced_rflx.cpp -o libAdvExDict.so -L$CPPYYHOME/lib -lCling
and subsequently be used from PyPy::
@@ -271,7 +286,7 @@
bound using::
$ genreflex example.h --deep --rootmap=libexampleDict.rootmap --rootmap-lib=libexampleDict.so
- $ g++ -fPIC -rdynamic -O2 -shared -I$REFLEXHOME/include example_rflx.cpp -o libexampleDict.so -L$REFLEXHOME/lib -lReflex
+ $ g++ -std=c++11 -fPIC -rdynamic -O2 -shared -I$CPPYYHOME/include example_rflx.cpp -o libexampleDict.so -L$CPPYYHOME/lib -lCling
* **abstract classes**: Are represented as python classes, since they are
needed to complete the inheritance hierarchies, but will raise an exception
@@ -567,13 +582,10 @@
Templates
---------
-A bit of special care needs to be taken for the use of templates.
-For a templated class to be completely available, it must be guaranteed that
-said class is fully instantiated, and hence all executable C++ code is
-generated and compiled in.
-The easiest way to fulfill that guarantee is by explicit instantiation in the
-header file that is handed to ``genreflex``.
-The following example should make that clear::
+Templates can be automatically instantiated, assuming the appropriate header
+files have been loaded or are accessible to the class loader.
+This is the case, for example, for all of the STL.
+For example::
$ cat MyTemplate.h
#include <vector>
@@ -587,68 +599,10 @@
int m_i;
};
- #ifdef __GCCXML__
- template class std::vector<MyClass>; // explicit instantiation
- #endif
-
-If you know for certain that all symbols will be linked in from other sources,
-you can also declare the explicit template instantiation ``extern``.
-An alternative is to add an object to an unnamed namespace::
-
- namespace {
- std::vector<MyClass> vmc;
- } // unnamed namespace
-
-Unfortunately, this is not always enough for gcc.
-The iterators of vectors, if they are going to be used, need to be
-instantiated as well, as do the comparison operators on those iterators, as
-these live in an internal namespace, rather than in the iterator classes.
-Note that you do NOT need this iterators to iterator over a vector.
-You only need them if you plan to explicitly call e.g. ``begin`` and ``end``
-methods, and do comparisons of iterators.
-One way to handle this, is to deal with this once in a macro, then reuse that
-macro for all ``vector`` classes.
-Thus, the header above needs this (again protected with
-``#ifdef __GCCXML__``), instead of just the explicit instantiation of the
-``vector<MyClass>``::
-
- #define STLTYPES_EXPLICIT_INSTANTIATION_DECL(STLTYPE, TTYPE) \
- template class std::STLTYPE< TTYPE >; \
- template class __gnu_cxx::__normal_iterator<TTYPE*, std::STLTYPE< TTYPE > >; \
- template class __gnu_cxx::__normal_iterator<const TTYPE*, std::STLTYPE< TTYPE > >;\
- namespace __gnu_cxx { \
- template bool operator==(const std::STLTYPE< TTYPE >::iterator&, \
- const std::STLTYPE< TTYPE >::iterator&); \
- template bool operator!=(const std::STLTYPE< TTYPE >::iterator&, \
- const std::STLTYPE< TTYPE >::iterator&); \
- }
-
- STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, MyClass)
-
-Then, still for gcc, the selection file needs to contain the full hierarchy as
-well as the global overloads for comparisons for the iterators::
-
- $ cat MyTemplate.xml
- <lcgdict>
- <class pattern="std::vector<*>" />
- <class pattern="std::vector<*>::iterator" />
- <function name="__gnu_cxx::operator=="/>
- <function name="__gnu_cxx::operator!="/>
-
- <class name="MyClass" />
- </lcgdict>
-
Run the normal ``genreflex`` and compilation steps::
$ genreflex MyTemplate.h --selection=MyTemplate.xml
- $ g++ -fPIC -rdynamic -O2 -shared -I$REFLEXHOME/include MyTemplate_rflx.cpp -o libTemplateDict.so -L$REFLEXHOME/lib -lReflex
-
-Note: this is a dirty corner that clearly could do with some automation,
-even if the macro already helps.
-Such automation is planned.
-In fact, in the Cling world, the backend can perform the template
-instantations and generate the reflection info on the fly, and none of the
-above will any longer be necessary.
+ $ g++ -std=c++11 -fPIC -rdynamic -O2 -shared -I$CPPYYHOME/include MyTemplate_rflx.cpp -o libTemplateDict.so -L$CPPYYHOME/lib -lCling
Subsequent use should be as expected.
Note the meta-class style of "instantiating" the template::
@@ -665,8 +619,6 @@
1 2 3
>>>>
-Other templates work similarly, but are typically simpler, as there are no
-similar issues with iterators for e.g. ``std::list``.
The arguments to the template instantiation can either be a string with the
full list of arguments, or the explicit classes.
The latter makes for easier code writing if the classes passed to the
@@ -676,95 +628,40 @@
The fast lane
-------------
-The following is an experimental feature of cppyy.
-It mostly works, but there are some known issues (e.g. with return-by-value).
-Soon it should be the default mode, however.
+By default, cppyy will use direct function pointers through `CFFI`_ whenever
+possible. If this causes problems for you, you can disable it by setting the
+CPPYY_DISABLE_FASTPATH environment variable.
-With a slight modification of Reflex, it can provide function pointers for
-C++ methods, and hence allow PyPy to call those pointers directly, rather than
-calling C++ through a Reflex stub.
+.. _CFFI: https://cffi.readthedocs.io/en/latest/
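As a minimal sketch of the knob described above: the variable only needs to be present in the process environment before ``cppyy`` is first imported (the import itself is commented out here because it requires a PyPy build with the module):

```python
import os

# cppyy checks this environment variable when the module is first
# imported, so it must be set beforehand.
os.environ["CPPYY_DISABLE_FASTPATH"] = "1"

# import cppyy  # calls would now go through the generic (slower) path
```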
-The standalone version of Reflex `provided`_ has been patched, but if you get
-Reflex from another source (most likely with a ROOT distribution), locate the
-file `genreflex-methptrgetter.patch`_ in pypy/module/cppyy and apply it to
-the genreflex python scripts found in ``$ROOTSYS/lib``::
-
- $ cd $ROOTSYS/lib
- $ patch -p2 < genreflex-methptrgetter.patch
-
-With this patch, ``genreflex`` will have grown the ``--with-methptrgetter``
-option.
-Use this option when running ``genreflex``, and add the
-``-Wno-pmf-conversions`` option to ``g++`` when compiling.
-The rest works the same way: the fast path will be used transparently (which
-also means that you can't actually find out whether it is in use, other than
-by running a micro-benchmark or a JIT test).
-
-.. _provided: http://cern.ch/wlav/reflex-2014-10-20.tar.bz2
-.. _genreflex-methptrgetter.patch: https://bitbucket.org/pypy/pypy/src/default/pypy/module/cppyy/genreflex-methptrgetter.patch
CPython
-------
-Most of the ideas in cppyy come originally from the `PyROOT`_ project.
-Although PyROOT does not support Reflex directly, it has an alter ego called
-"PyCintex" that, in a somewhat roundabout way, does.
-If you installed ROOT, rather than just Reflex, PyCintex should be available
-immediately if you add ``$ROOTSYS/lib`` to the ``PYTHONPATH`` environment
-variable.
+Most of the ideas in cppyy come originally from the `PyROOT`_ project, which
+contains a CPython-based cppyy.py module (with similar dependencies as the
+one that comes with PyPy).
+A standalone pip-installable version is planned, but for now you can install
+ROOT through your favorite distribution installer (available in the science
+section).
.. _PyROOT: https://root.cern.ch/pyroot
-There are a couple of minor differences between PyCintex and cppyy, most to do
-with naming.
-The one that you will run into directly, is that PyCintex uses a function
-called ``loadDictionary`` rather than ``load_reflection_info`` (it has the
-same rootmap-based class loader functionality, though, making this point
-somewhat moot).
-The reason for this is that Reflex calls the shared libraries that contain
-reflection info "dictionaries."
-However, in python, the name `dictionary` already has a well-defined meaning,
-so a more descriptive name was chosen for cppyy.
-In addition, PyCintex requires that the names of shared libraries so loaded
-start with "lib" in their name.
-The basic example above, rewritten for PyCintex thus goes like this::
-
- $ python
- >>> import PyCintex
- >>> PyCintex.loadDictionary("libMyClassDict.so")
- >>> myinst = PyCintex.gbl.MyClass(42)
- >>> print myinst.GetMyInt()
- 42
- >>> myinst.SetMyInt(33)
- >>> print myinst.m_myint
- 33
- >>> myinst.m_myint = 77
- >>> print myinst.GetMyInt()
- 77
- >>> help(PyCintex.gbl.MyClass) # shows that normal python introspection works
-
-Other naming differences are such things as taking an address of an object.
-In PyCintex, this is done with ``AddressOf`` whereas in cppyy the choice was
-made to follow the naming as in ``ctypes`` and hence use ``addressof``
-(PyROOT/PyCintex predate ``ctypes`` by several years, and the ROOT project
-follows camel-case, hence the differences).
-
-Of course, this is python, so if any of the naming is not to your liking, all
-you have to do is provide a wrapper script that you import instead of
-importing the ``cppyy`` or ``PyCintex`` modules directly.
-In that wrapper script you can rename methods exactly the way you need it.
-
-In the cling world, all these differences will be resolved.
+There are a couple of minor differences between the two versions of cppyy
+(the CPython version has a few more features).
+Work is ongoing to integrate the nightly tests of both to make sure their
+feature sets are equalized.
Python3
-------
-To change versions of CPython (to Python3, another version of Python, or later
-to the `Py3k`_ version of PyPy), the only part that requires recompilation is
-the bindings module, be it ``cppyy`` or ``libPyROOT.so`` (in PyCintex).
-Although ``genreflex`` is indeed a Python tool, the generated reflection
-information is completely independent of Python.
+The CPython version of cppyy supports Python3, assuming your packager has
+built the backend for it.
+The cppyy module has not been tested with the `Py3k`_ version of PyPy.
+Note that the generated reflection information (from ``genreflex``) is fully
+independent of Python, and does not need to be rebuilt when switching versions
+or interpreters.
.. _Py3k: https://bitbucket.org/pypy/pypy/src/py3k
@@ -772,5 +669,4 @@
.. toctree::
:hidden:
- cppyy_backend
cppyy_example
diff --git a/pypy/doc/cppyy_backend.rst b/pypy/doc/cppyy_backend.rst
deleted file mode 100644
--- a/pypy/doc/cppyy_backend.rst
+++ /dev/null
@@ -1,45 +0,0 @@
-Backends for cppyy
-==================
-
-The cppyy module needs a backend to provide the C++ reflection information on
-which the Python bindings are build.
-The backend is called through a C-API, which can be found in the PyPy sources
-in: :source:`pypy/module/cppyy/include/capi.h`.
-There are two kinds of API calls: querying about reflection information, which
-are used during the creation of Python-side constructs, and making the actual
-calls into C++.
-The objects passed around are all opaque: cppyy does not make any assumptions
-about them, other than that the opaque handles can be copied.
-Their definition, however, appears in two places: in the C code (in capi.h),
-and on the RPython side (in :source:`capi_types.py <pypy/module/cppyy/capi/capi_types.py>`), so if they are changed, they
-need to be changed on both sides.
-
-There are two places where selections in the RPython code affect the choice
-(and use) of the backend.
-The first is in :source:`pypy/module/cppyy/capi/__init__.py`::
-
- # choose C-API access method:
- from pypy.module.cppyy.capi.loadable_capi import *
- #from pypy.module.cppyy.capi.builtin_capi import *
-
-The default is the loadable C-API.
-Comment it and uncomment the builtin C-API line, to use the builtin version.
-
-Next, if the builtin C-API is chosen, the specific backend needs to be set as
-well (default is Reflex).
-This second choice is in :source:`pypy/module/cppyy/capi/builtin_capi.py`::
-
- import reflex_capi as backend
- #import cint_capi as backend
-
-After those choices have been made, built pypy-c as usual.
-
-When building pypy-c from source, keep the following in mind.
-If the loadable_capi is chosen, no further prerequisites are needed.
-However, for the build of the builtin_capi to succeed, the ``ROOTSYS``
-environment variable must point to the location of your ROOT (or standalone
-Reflex in the case of the Reflex backend) installation, or the ``root-config``
-utility must be accessible through ``$PATH`` (e.g. by adding ``$ROOTSYS/bin``
-to ``PATH``).
-In case of the former, include files are expected under ``$ROOTSYS/include``
-and libraries under ``$ROOTSYS/lib``.
diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst
--- a/pypy/doc/extending.rst
+++ b/pypy/doc/extending.rst
@@ -12,7 +12,7 @@
* Write them in pure Python and use ctypes_.
-* Write them in C++ and bind them through Reflex_.
+* Write them in C++ and bind them through :doc:`cppyy <cppyy>` using Cling.
* Write them in as `RPython mixed modules`_.
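The ctypes route mentioned above can be as small as the following sketch, which binds the C math library; note that library name resolution is platform-dependent (e.g. ``libm.so.6`` on Linux), which is what ``find_library`` papers over:

```python
import ctypes
import ctypes.util

# Locate and load the C math library; the name differs per platform.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature of cos() so ctypes converts correctly.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

assert abs(libm.cos(0.0) - 1.0) < 1e-9
```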
@@ -61,11 +61,11 @@
.. _libffi: http://sourceware.org/libffi/
-Reflex
-------
+Cling and cppyy
+---------------
The builtin :doc:`cppyy <cppyy>` module uses reflection information, provided by
-`Reflex`_ (which needs to be `installed separately`_), of C/C++ code to
+`Cling`_ (which needs to be `installed separately`_), of C/C++ code to
automatically generate bindings at runtime.
In Python, classes and functions are always runtime structures, so when they
are generated matters not for performance.
@@ -76,11 +76,14 @@
The :doc:`cppyy <cppyy>` module is written in RPython, thus PyPy's JIT is able to remove
most cross-language call overhead.
-:doc:`Full details <cppyy>` are `available here <cppyy>`.
+Full details are :doc:`available here <cppyy>`.
-.. _installed separately: http://cern.ch/wlav/reflex-2013-08-14.tar.bz2
-.. _Reflex: https://root.cern.ch/how/how-use-reflex
+.. _installed separately: https://pypi.python.org/pypi/PyPy-cppyy-backend
+.. _Cling: https://root.cern.ch/cling
+.. toctree::
+
+ cppyy
RPython Mixed Modules
---------------------
@@ -94,7 +97,3 @@
This is how the numpy module is being developed.
-.. toctree::
- :hidden:
-
- cppyy
diff --git a/pypy/doc/index-of-release-notes.rst b/pypy/doc/index-of-release-notes.rst
--- a/pypy/doc/index-of-release-notes.rst
+++ b/pypy/doc/index-of-release-notes.rst
@@ -59,6 +59,7 @@
.. toctree::
+ release-pypy3.3-v5.5.0.rst
release-pypy3.3-v5.2-alpha1.rst
CPython 3.2 compatible versions
diff --git a/pypy/doc/whatsnew-head.rst b/pypy/doc/whatsnew-head.rst
--- a/pypy/doc/whatsnew-head.rst
+++ b/pypy/doc/whatsnew-head.rst
@@ -45,3 +45,14 @@
Assign ``tp_doc`` to the new TypeObject's type dictionary ``__doc__`` key
so it will be picked up by app-level objects of that type
+
+.. branch: cling-support
+
+Module cppyy now uses cling as its backend (Reflex has been removed). The
+user-facing interface and main developer tools (genreflex, selection files,
+class loader, etc.) remain the same. A libcppyy_backend.so library is still
+needed but is now available through PyPI with pip: PyPy-cppyy-backend.
+
+The Cling-backend brings support for modern C++ (11, 14, etc.), dynamic
+template instantiations, and improved integration with CFFI for better
+performance. It also provides interactive C++ (and bindings to that).
diff --git a/pypy/module/cppyy/test/conftest.py b/pypy/module/cppyy/test/conftest.py
--- a/pypy/module/cppyy/test/conftest.py
+++ b/pypy/module/cppyy/test/conftest.py
@@ -1,7 +1,13 @@
-import py
+import py, sys
@py.test.mark.tryfirst
def pytest_runtest_setup(item):
+ if 'linux' in sys.platform:
+ # tests require minimally std=c++11
+ cc_info = py.process.cmdexec('gcc -v --help')
+ if '-std=c++11' not in cc_info:
+ py.test.skip('skipping tests because gcc does not support C++11')
+
if py.path.local.sysfind('genreflex') is None:
import pypy.module.cppyy.capi.loadable_capi as lcapi
if 'dummy' in lcapi.reflection_library:
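The flag detection added above can be modelled in plain Python; ``supports_cxx11`` is a hypothetical helper, and the sample strings stand in for real ``gcc -v --help`` output:

```python
def supports_cxx11(cc_help_output):
    # Mirror the conftest check: scan the compiler's help text for
    # the -std=c++11 flag.
    return '-std=c++11' in cc_help_output

# Stand-in snippets in place of real `gcc -v --help` output:
assert supports_cxx11("-std=c++11  Conform to the 2011 ISO C++ standard")
assert not supports_cxx11("-std=c++98  Conform to the 1998 ISO C++ standard")
```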
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py
--- a/pypy/module/cpyext/api.py
+++ b/pypy/module/cpyext/api.py
@@ -379,103 +379,97 @@
if error is _NOT_SPECIFIED:
raise ValueError("function %s has no return value for exceptions"
% func)
- def make_unwrapper(catch_exception):
- # ZZZ is this whole logic really needed??? It seems to be only
- # for RPython code calling PyXxx() functions directly. I would
- # think that usually directly calling the function is clean
- # enough now
- names = api_function.argnames
- types_names_enum_ui = unrolling_iterable(enumerate(
- zip(api_function.argtypes,
- [tp_name.startswith("w_") for tp_name in names])))
+ names = api_function.argnames
+ types_names_enum_ui = unrolling_iterable(enumerate(
+ zip(api_function.argtypes,
+ [tp_name.startswith("w_") for tp_name in names])))
- @specialize.ll()
- def unwrapper(space, *args):
- from pypy.module.cpyext.pyobject import Py_DecRef, is_pyobj
- from pypy.module.cpyext.pyobject import from_ref, as_pyobj
- newargs = ()
- keepalives = ()
- assert len(args) == len(api_function.argtypes)
- for i, (ARG, is_wrapped) in types_names_enum_ui:
- input_arg = args[i]
- if is_PyObject(ARG) and not is_wrapped:
- # build a 'PyObject *' (not holding a reference)
- if not is_pyobj(input_arg):
- keepalives += (input_arg,)
- arg = rffi.cast(ARG, as_pyobj(space, input_arg))
- else:
- arg = rffi.cast(ARG, input_arg)
- elif ARG == rffi.VOIDP and not is_wrapped:
- # unlike is_PyObject case above, we allow any kind of
- # argument -- just, if it's an object, we assume the
- # caller meant for it to become a PyObject*.
- if input_arg is None or isinstance(input_arg, W_Root):
- keepalives += (input_arg,)
- arg = rffi.cast(ARG, as_pyobj(space, input_arg))
- else:
- arg = rffi.cast(ARG, input_arg)
- elif (is_PyObject(ARG) or ARG == rffi.VOIDP) and is_wrapped:
- # build a W_Root, possibly from a 'PyObject *'
- if is_pyobj(input_arg):
- arg = from_ref(space, input_arg)
- else:
- arg = input_arg
+ @specialize.ll()
+ def unwrapper(space, *args):
+ from pypy.module.cpyext.pyobject import Py_DecRef, is_pyobj
+ from pypy.module.cpyext.pyobject import from_ref, as_pyobj
+ newargs = ()
+ keepalives = ()
+ assert len(args) == len(api_function.argtypes)
+ for i, (ARG, is_wrapped) in types_names_enum_ui:
+ input_arg = args[i]
+ if is_PyObject(ARG) and not is_wrapped:
+ # build a 'PyObject *' (not holding a reference)
+ if not is_pyobj(input_arg):
+ keepalives += (input_arg,)
+ arg = rffi.cast(ARG, as_pyobj(space, input_arg))
+ else:
+ arg = rffi.cast(ARG, input_arg)
+ elif ARG == rffi.VOIDP and not is_wrapped:
+ # unlike is_PyObject case above, we allow any kind of
+ # argument -- just, if it's an object, we assume the
+ # caller meant for it to become a PyObject*.
+ if input_arg is None or isinstance(input_arg, W_Root):
+ keepalives += (input_arg,)
+ arg = rffi.cast(ARG, as_pyobj(space, input_arg))
+ else:
+ arg = rffi.cast(ARG, input_arg)
+ elif (is_PyObject(ARG) or ARG == rffi.VOIDP) and is_wrapped:
+ # build a W_Root, possibly from a 'PyObject *'
+ if is_pyobj(input_arg):
+ arg = from_ref(space, input_arg)
+ else:
+ arg = input_arg
- ## ZZZ: for is_pyobj:
- ## try:
- ## arg = from_ref(space,
- ## rffi.cast(PyObject, input_arg))
- ## except TypeError, e:
- ## err = oefmt(space.w_TypeError,
- ## "could not cast arg to PyObject")
- ## if not catch_exception:
- ## raise err
- ## state = space.fromcache(State)
- ## state.set_exception(err)
- ## if is_PyObject(restype):
- ## return None
- ## else:
- ## return api_function.error_value
- else:
- # arg is not declared as PyObject, no magic
- arg = input_arg
- newargs += (arg, )
- if not catch_exception:
- try:
- res = func(space, *newargs)
- finally:
- keepalive_until_here(*keepalives)
+ ## ZZZ: for is_pyobj:
+ ## try:
+ ## arg = from_ref(space,
+ ## rffi.cast(PyObject, input_arg))
+ ## except TypeError, e:
+ ## err = oefmt(space.w_TypeError,
+ ## "could not cast arg to PyObject")
+ ## if not catch_exception:
+ ## raise err
+ ## state = space.fromcache(State)
+ ## state.set_exception(err)
+ ## if is_PyObject(restype):
+ ## return None
+ ## else:
+ ## return api_function.error_value
else:
- # non-rpython variant
- assert not we_are_translated()
- try:
- res = func(space, *newargs)
- except OperationError as e:
- if not hasattr(api_function, "error_value"):
- raise
- state = space.fromcache(State)
- state.set_exception(e)
- if is_PyObject(restype):
- return None
- else:
- return api_function.error_value
- # 'keepalives' is alive here (it's not rpython)
- got_integer = isinstance(res, (int, long, float))
- assert got_integer == expect_integer, (
- 'got %r not integer' % (res,))
- return res
- unwrapper.func = func
- unwrapper.api_func = api_function
- return unwrapper
+ # arg is not declared as PyObject, no magic
+ arg = input_arg
+ newargs += (arg, )
+ try:
+ return func(space, *newargs)
+ finally:
+ keepalive_until_here(*keepalives)
- unwrapper_catch = make_unwrapper(True)
- unwrapper_raise = make_unwrapper(False)
+ unwrapper.func = func
+ unwrapper.api_func = api_function
+
+ # ZZZ is this whole logic really needed??? It seems to be only
+ # for RPython code calling PyXxx() functions directly. I would
+ # think that usually directly calling the function is clean
+ # enough now
+ def unwrapper_catch(space, *args):
+ try:
+ res = unwrapper(space, *args)
+ except OperationError as e:
+ if not hasattr(api_function, "error_value"):
+ raise
+ state = space.fromcache(State)
+ state.set_exception(e)
+ if is_PyObject(restype):
+ return None
+ else:
+ return api_function.error_value
+ got_integer = isinstance(res, (int, long, float))
+ assert got_integer == expect_integer, (
+ 'got %r not integer' % (res,))
+ return res
+
if header is not None:
if header == DEFAULT_HEADER:
FUNCTIONS[func_name] = api_function
FUNCTIONS_BY_HEADER.setdefault(header, {})[func_name] = api_function
- INTERPLEVEL_API[func_name] = unwrapper_catch # used in tests
- return unwrapper_raise # used in 'normal' RPython code.
+ INTERPLEVEL_API[func_name] = unwrapper_catch # used in tests
+ return unwrapper # used in 'normal' RPython code.
return decorate
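A pure-Python model of the refactor above may help; ``make_api_function`` and ``ApiError`` are hypothetical stand-ins, not cpyext names. The point of the change is that the argument-unwrapping function is now generated once and lets exceptions propagate, while the exception-catching variant used by tests becomes a thin wrapper around it instead of a second generated copy:

```python
class ApiError(Exception):
    """Stand-in for OperationError."""

def make_api_function(func, error_value):
    def unwrapper(*args):
        # argument conversion would happen here
        return func(*args)          # exceptions propagate to the caller

    def unwrapper_catch(*args):     # used in tests
        try:
            return unwrapper(*args)
        except ApiError:
            return error_value
    unwrapper.catching = unwrapper_catch
    return unwrapper                # used in 'normal' code

def failing(x):
    raise ApiError()

api_func = make_api_function(failing, error_value=-1)
assert api_func.catching(7) == -1   # the test path swallows the error
```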
def cpython_struct(name, fields, forward=None, level=1):
diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py
--- a/pypy/module/cpyext/pyobject.py
+++ b/pypy/module/cpyext/pyobject.py
@@ -7,7 +7,7 @@
from pypy.module.cpyext.api import (
cpython_api, bootstrap_function, PyObject, PyObjectP, ADDR,
CANNOT_FAIL, Py_TPFLAGS_HEAPTYPE, PyTypeObjectPtr, is_PyObject,
- INTERPLEVEL_API, PyVarObject)
+ PyVarObject)
from pypy.module.cpyext.state import State
from pypy.objspace.std.typeobject import W_TypeObject
from pypy.objspace.std.objectobject import W_ObjectObject
@@ -245,12 +245,10 @@
else:
return lltype.nullptr(PyObject.TO)
as_pyobj._always_inline_ = 'try'
-INTERPLEVEL_API['as_pyobj'] = as_pyobj
def pyobj_has_w_obj(pyobj):
w_obj = rawrefcount.to_obj(W_Root, pyobj)
return w_obj is not None and w_obj is not w_marker_deallocating
-INTERPLEVEL_API['pyobj_has_w_obj'] = staticmethod(pyobj_has_w_obj)
def is_pyobj(x):
@@ -260,7 +258,6 @@
return True
else:
raise TypeError(repr(type(x)))
-INTERPLEVEL_API['is_pyobj'] = staticmethod(is_pyobj)
class Entry(ExtRegistryEntry):
_about_ = is_pyobj
@@ -286,7 +283,6 @@
if not is_pyobj(obj):
keepalive_until_here(obj)
return pyobj
-INTERPLEVEL_API['make_ref'] = make_ref
@specialize.ll()
@@ -307,13 +303,11 @@
assert pyobj.c_ob_refcnt >= rawrefcount.REFCNT_FROM_PYPY
keepalive_until_here(w_obj)
return w_obj
-INTERPLEVEL_API['get_w_obj_and_decref'] = get_w_obj_and_decref
@specialize.ll()
def incref(space, obj):
make_ref(space, obj)
-INTERPLEVEL_API['incref'] = incref
@specialize.ll()
def decref(space, obj):
@@ -326,7 +320,6 @@
_Py_Dealloc(space, obj)
else:
get_w_obj_and_decref(space, obj)
-INTERPLEVEL_API['decref'] = decref
@cpython_api([PyObject], lltype.Void)
diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py
--- a/pypy/module/cpyext/stubs.py
+++ b/pypy/module/cpyext/stubs.py
@@ -2210,21 +2210,3 @@
it causes an exception to immediately be thrown; this is used for the
throw() methods of generator objects."""
raise NotImplementedError
-
-@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
-def PyWeakref_Check(space, ob):
- """Return true if ob is either a reference or proxy object.
- """
- raise NotImplementedError
-
-@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
-def PyWeakref_CheckRef(space, ob):
- """Return true if ob is a reference object.
- """
- raise NotImplementedError
-
-@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
-def PyWeakref_CheckProxy(space, ob):
- """Return true if ob is a proxy object.
- """
- raise NotImplementedError
diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py
--- a/pypy/module/cpyext/test/test_typeobject.py
+++ b/pypy/module/cpyext/test/test_typeobject.py
@@ -1183,9 +1183,19 @@
Base1->tp_basicsize = sizeof(PyHeapTypeObject);
Base2->tp_basicsize = sizeof(PyHeapTypeObject);
Base12->tp_basicsize = sizeof(PyHeapTypeObject);
+ #ifndef PYPY_VERSION /* PyHeapTypeObject has no ht_qualname on PyPy */
+ #if PY_MAJOR_VERSION >= 3 && PY_MINOR_VERSION >= 3
+ {
+ PyObject * dummyname = PyBytes_FromString("dummy name");
+ ((PyHeapTypeObject*)Base1)->ht_qualname = dummyname;
+ ((PyHeapTypeObject*)Base2)->ht_qualname = dummyname;
+ ((PyHeapTypeObject*)Base12)->ht_qualname = dummyname;
+ }
+ #endif
+ #endif
Base1->tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HEAPTYPE;
Base2->tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HEAPTYPE;
- Base12->tp_flags = Py_TPFLAGS_DEFAULT;
+ Base12->tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HEAPTYPE;
Base12->tp_base = Base1;
Base12->tp_bases = PyTuple_Pack(2, Base1, Base2);
Base12->tp_doc = "The Base12 type or object";
diff --git a/pypy/module/cpyext/test/test_weakref.py b/pypy/module/cpyext/test/test_weakref.py
--- a/pypy/module/cpyext/test/test_weakref.py
+++ b/pypy/module/cpyext/test/test_weakref.py
@@ -56,3 +56,30 @@
)
])
module.test_macro_cast()
+
+ def test_weakref_check(self):
+ module = self.import_extension('foo', [
+ ("test_weakref_cast", "METH_O",
+ """
+ return Py_BuildValue("iiii",
+ (int)PyWeakref_Check(args),
+ (int)PyWeakref_CheckRef(args),
+ (int)PyWeakref_CheckRefExact(args),
+ (int)PyWeakref_CheckProxy(args));
+ """
+ )
+ ])
+ import weakref
+ def foo(): pass
+ class Bar(object):
+ pass
+ bar = Bar()
+ assert module.test_weakref_cast([]) == (0, 0, 0, 0)
+ assert module.test_weakref_cast(weakref.ref(foo)) == (1, 1, 1, 0)
+ assert module.test_weakref_cast(weakref.ref(bar)) == (1, 1, 1, 0)
+ assert module.test_weakref_cast(weakref.proxy(foo)) == (1, 0, 0, 1)
+ assert module.test_weakref_cast(weakref.proxy(bar)) == (1, 0, 0, 1)
+ class X(weakref.ref):
+ pass
+ assert module.test_weakref_cast(X(foo)) == (1, 1, 0, 0)
+ assert module.test_weakref_cast(X(bar)) == (1, 1, 0, 0)
diff --git a/pypy/module/cpyext/weakrefobject.py b/pypy/module/cpyext/weakrefobject.py
--- a/pypy/module/cpyext/weakrefobject.py
+++ b/pypy/module/cpyext/weakrefobject.py
@@ -1,6 +1,7 @@
from pypy.module.cpyext.api import cpython_api
-from pypy.module.cpyext.pyobject import PyObject
+from pypy.module.cpyext.pyobject import PyObject, CANNOT_FAIL
from pypy.module._weakref.interp__weakref import W_Weakref, proxy
+from pypy.module._weakref.interp__weakref import W_Proxy, W_CallableProxy
from rpython.rtyper.lltypesystem import rffi
@cpython_api([PyObject, PyObject], PyObject)
@@ -54,3 +55,34 @@
PyWeakref_GetObject() and Py_INCREF().)
"""
return space.call_function(w_ref)
+
+@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyWeakref_CheckRef(space, w_obj):
+ """Return true if ob is a reference object.
+ """
+ w_obj_type = space.type(w_obj)
+ w_type = space.gettypeobject(W_Weakref.typedef)
+ return (space.is_w(w_obj_type, w_type) or
+ space.issubtype_w(w_obj_type, w_type))
+
+@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyWeakref_CheckRefExact(space, w_obj):
+ w_obj_type = space.type(w_obj)
+ w_type = space.gettypeobject(W_Weakref.typedef)
+ return space.is_w(w_obj_type, w_type)
+
+@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyWeakref_CheckProxy(space, w_obj):
+ """Return true if ob is a proxy object.
+ """
+ w_obj_type = space.type(w_obj)
+ w_type1 = space.gettypeobject(W_Proxy.typedef)
+ w_type2 = space.gettypeobject(W_CallableProxy.typedef)
+ return space.is_w(w_obj_type, w_type1) or space.is_w(w_obj_type, w_type2)
+
+@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyWeakref_Check(space, w_obj):
+ """Return true if ob is either a reference or proxy object.
+ """
+ return (PyWeakref_CheckRef(space, w_obj) or
+ PyWeakref_CheckProxy(space, w_obj))
diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py
--- a/pypy/module/posix/interp_posix.py
+++ b/pypy/module/posix/interp_posix.py
@@ -1332,8 +1332,9 @@
Return a string of n random bytes suitable for cryptographic use.
"""
context = get(space).random_context
+ signal_checker = space.getexecutioncontext().checksignals
try:
- return space.newbytes(rurandom.urandom(context, n))
+ return space.newbytes(rurandom.urandom(context, n, signal_checker))
except OSError as e:
raise wrap_oserror(space, e)
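The `signal_checker` threaded through here lets the interpreter run pending signal handlers while a blocking call is retried on `EINTR`. The general retry shape can be sketched in plain Python (the callback name is hypothetical, standing in for `space.getexecutioncontext().checksignals`):

```python
import errno

def retry_on_eintr(func, signal_checker=None):
    """Call func(), retrying when it is interrupted by a signal.

    If the call fails with EINTR, invoke signal_checker (if given) so
    pending Python-level signal handlers get a chance to run, then retry.
    Any other OSError propagates unchanged.
    """
    while True:
        try:
            return func()
        except OSError as e:
            if e.errno != errno.EINTR:
                raise
            if signal_checker is not None:
                signal_checker()
```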
diff --git a/rpython/rlib/rurandom.py b/rpython/rlib/rurandom.py
--- a/rpython/rlib/rurandom.py
+++ b/rpython/rlib/rurandom.py
@@ -7,12 +7,12 @@
from rpython.rtyper.lltypesystem import lltype, rffi
from rpython.rlib.objectmodel import not_rpython
+from rpython.translator.tool.cbuild import ExternalCompilationInfo
+from rpython.rtyper.tool import rffi_platform
if sys.platform == 'win32':
from rpython.rlib import rwin32
- from rpython.translator.tool.cbuild import ExternalCompilationInfo
- from rpython.rtyper.tool import rffi_platform
eci = ExternalCompilationInfo(
includes = ['windows.h', 'wincrypt.h'],
@@ -56,7 +56,7 @@
return lltype.malloc(rffi.CArray(HCRYPTPROV), 1,
immortal=True, zero=True)
- def urandom(context, n):
+ def urandom(context, n, signal_checker=None):
provider = context[0]
if not provider:
# This handle is never explicitly released. The operating
@@ -80,11 +80,71 @@
def init_urandom():
return None
- def urandom(context, n):
+ SYS_getrandom = None
+
+ if sys.platform.startswith('linux'):
+ eci = ExternalCompilationInfo(includes=['sys/syscall.h'])
+ class CConfig:
+ _compilation_info_ = eci
+ SYS_getrandom = rffi_platform.DefinedConstantInteger(
+ 'SYS_getrandom')
+ globals().update(rffi_platform.configure(CConfig))
+
+ if SYS_getrandom is not None:
+ from rpython.rlib.rposix import get_saved_errno, handle_posix_error
+ import errno
+
+ eci = eci.merge(ExternalCompilationInfo(includes=['linux/random.h']))
+ class CConfig:
+ _compilation_info_ = eci
+ GRND_NONBLOCK = rffi_platform.ConstantInteger('GRND_NONBLOCK')
+ globals().update(rffi_platform.configure(CConfig))
+
+ # On Linux, use the syscall() function because the GNU libc doesn't
+ # expose the Linux getrandom() syscall yet.
+ syscall = rffi.llexternal(
+ 'syscall',
+ [lltype.Signed, rffi.CCHARP, rffi.LONG, rffi.INT],
+ lltype.Signed,
+ compilation_info=eci,
+ save_err=rffi.RFFI_SAVE_ERRNO)
+
+ class Works:
+ status = True
+ getrandom_works = Works()
+
+ def _getrandom(n, result, signal_checker):
+ if not getrandom_works.status:
+ return n
+ while n > 0:
+ with rffi.scoped_alloc_buffer(n) as buf:
+ got = syscall(SYS_getrandom, buf.raw, n, GRND_NONBLOCK)
+ if got >= 0:
+ s = buf.str(got)
+ result.append(s)
+ n -= len(s)
+ continue
+ err = get_saved_errno()
+ if (err == errno.ENOSYS or err == errno.EPERM or
+ err == errno.EAGAIN): # see CPython 3.5
+ getrandom_works.status = False
+ return n
+ if err == errno.EINTR:
+ if signal_checker is not None:
+ signal_checker()
+ continue
+ handle_posix_error("getrandom", got)
+ raise AssertionError("unreachable")
+ return n
+
+ def urandom(context, n, signal_checker=None):
"Read n bytes from /dev/urandom."
- result = ''
- if n == 0:
- return result
+ result = []
+ if SYS_getrandom is not None:
+ n = _getrandom(n, result, signal_checker)
+ if n <= 0:
+ return ''.join(result)
+
# XXX should somehow cache the file descriptor. It's a mess.
# CPython has a 99% solution and hopes for the remaining 1%
# not to occur. For now, we just don't cache the file
@@ -98,8 +158,8 @@
if e.errno != errno.EINTR:
raise
data = ''
- result += data
+ result.append(data)
n -= len(data)
finally:
os.close(fd)
- return result
+ return ''.join(result)
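One detail of this hunk worth noting: `result` changes from a string built with `+=` to a list joined once at the end, so partial reads from `getrandom` and `/dev/urandom` can be accumulated cheaply. A minimal sketch of that pattern, using `os.urandom` in place of the raw syscall:

```python
import os

def read_n_random_bytes(n, chunk=1024):
    # Accumulate chunks in a list and join once; a single join is linear
    # in the total size, whereas repeated str += can be quadratic.
    result = []
    while n > 0:
        data = os.urandom(min(n, chunk))
        result.append(data)
        n -= len(data)
    return b''.join(result)
```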
diff --git a/rpython/rlib/test/test_rurandom.py b/rpython/rlib/test/test_rurandom.py
new file mode 100644
--- /dev/null
+++ b/rpython/rlib/test/test_rurandom.py
@@ -0,0 +1,12 @@
+from rpython.rlib import rurandom
+
+def test_rurandom():
+ context = rurandom.init_urandom()
+ s = rurandom.urandom(context, 5000)
+ assert type(s) is str and len(s) == 5000
+ for x in [1, 11, 111, 222]:
+ assert s.count(chr(x)) >= 1
+
+def test_rurandom_no_syscall(monkeypatch):
+ monkeypatch.setattr(rurandom, 'SYS_getrandom', None)
+ test_rurandom()