From wlavrijsen at lbl.gov Fri Aug 1 02:12:07 2014
From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov)
Date: Thu, 31 Jul 2014 17:12:07 -0700 (PDT)
Subject: [pypy-dev] cppyy and out arguments
In-Reply-To: References: Message-ID:

Hi Armin,

> ...I wish I could make sense of what you said, but I'm probably just
> too tired at the moment. I don't understand what the low-level
> difference between "int*" and "int&" is.

at low level, sure. But to first order, I deal with wrappers (even in cling, as it could be an inline function, which, although having external linkage, is usually removed from the code section).

> More importantly I don't understand why you would mention cffi/ctypes,
> given that these two are not about C++ at all.

No, but they actually do have ints. Right now, to get an int*, one would have to do:

    import array
    flag = array.array('i', [0])
    set_flag(flag)

But here's what I'd like the code to look like:

    flag = c_int(0)
    set_flag(flag)

and I can cook up a c_int myself, or re-use a data type representing a c_int, either from ctypes or cffi, that is already available. Re-using seems a better idea. For that to work, though, I need to be able to take the address of the actual payload of the c_int. This is not in the public interface of ctypes on CPython.

Best regards,
Wim
-- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net

From matti.picus at gmail.com Fri Aug 1 07:01:33 2014
From: matti.picus at gmail.com (Matti Picus)
Date: Fri, 01 Aug 2014 08:01:33 +0300
Subject: [pypy-dev] libdynd
In-Reply-To: References: Message-ID: <53DB1F2D.1010809@gmail.com>

Did Travis seem to indicate they would accept contributions from outside toward making a new ndarray implementation a reality? Until we can find a wagon to hitch our minimal efforts to, I will keep working slowly on micronumpy. It seems useful for some tasks, and will be even more so once linalg LAPACK/BLAS is supported (i.e. once we can invert a large matrix).
Matti

On 31/07/2014 6:29 PM, Armin Rigo wrote:
> Hi,
>
> On 26 July 2014 11:19, Antonio Cuni wrote:
>> this looks interesting, but from a quick look it seems they are only
>> offering a C++ API?
>> In that case, it might be better/easier to wrap it through cppyy than cffi.
> One or the other, yes.
>
>> Also, did Travis tell you what the plans for scipy are?
> No. As far as I know the basic library is still in development. It's
> just that I have somehow a feeling that the current speed at which
> numpypy progresses is rather slow, and it has a huge existing code
> base of expectations as well as messy backward-compatibility
> requirements. If we could instead throw that away and attach our
> wagon to the newer development, even if it takes another couple of
> years before it becomes usable, then it seems like a long-term win to
> me. Also, a cffi or cppyy version seems easier than an RPython version
> for third-party contributors to help maintain, too.
>
>
> A bientôt,
>
> Armin.
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

From arigo at tunes.org Fri Aug 1 16:07:59 2014
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 1 Aug 2014 16:07:59 +0200
Subject: [pypy-dev] cppyy and out arguments
In-Reply-To: References: Message-ID:

Hi Wim,

On 1 August 2014 02:12, wrote:
> flag = c_int(0)
> set_flag(flag)
>
> and I can cook up a c_int myself, or re-use a data type representing a
> c_int, either from ctypes or cffi, that is already available. Re-using
> seems a better idea.
> For that to work, though, I need to be able to
> take the address of the actual payload of the c_int. This is not in the
> public interface of ctypes on CPython.

Isn't it `ctypes.addressof(flag)`?

Armin

From arigo at tunes.org Fri Aug 1 16:12:24 2014
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 1 Aug 2014 16:12:24 +0200
Subject: [pypy-dev] libdynd
In-Reply-To: <53DB1F2D.1010809@gmail.com> References: <53DB1F2D.1010809@gmail.com> Message-ID:

Hi Matti,

On 1 August 2014 07:01, Matti Picus wrote:
> Did Travis seem to indicate they would accept contributions from outside
> toward making a new ndarray implementation a reality?

I didn't ask (and am not sure what you mean by) a new ndarray implementation. I did ask him about making a cffi interface to the C++ libdynd library that would be usable from both CPython and PyPy, and he was certainly interested and said it was in line with the foreseen usage of libdynd.

A bientôt,
Armin.

From wlavrijsen at lbl.gov Fri Aug 1 19:17:54 2014
From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov)
Date: Fri, 1 Aug 2014 10:17:54 -0700 (PDT)
Subject: [pypy-dev] cppyy and out arguments
In-Reply-To: References: Message-ID:

Hi Armin,

> Isn't it `ctypes.addressof(flag)`?

good idea; I didn't think of that, as addressof is not part of the public C interface, but I can get hold of the Python callable in the normal way.

Thanks,
Wim
-- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net
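For illustration, here is the pattern this exchange converges on, using only public ctypes calls that exist on both CPython and PyPy (a minimal sketch; `set_flag` is a hypothetical stand-in for a cppyy-wrapped C++ function taking an `int*`):

    import ctypes

    def set_flag(ptr):
        # hypothetical stand-in for a wrapped C++ function taking an int*;
        # it writes through the pointer the way the C++ side would
        ptr[0] = 1

    flag = ctypes.c_int(0)
    addr = ctypes.addressof(flag)    # public API: address of the int payload, as a plain integer
    set_flag(ctypes.pointer(flag))   # pass "&flag" as the out argument
    assert flag.value == 1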
From anton.gulenko at googlemail.com Sun Aug 3 12:57:20 2014
From: anton.gulenko at googlemail.com (Anton Gulenko)
Date: Sun, 3 Aug 2014 12:57:20 +0200
Subject: [pypy-dev] AssertionError in rpython_jit_metainterp_resume.c
Message-ID:

Hello,

I am working on the RSqueak VM (https://bitbucket.org/pypy/lang-smalltalk). Usually I debug assertion errors by executing the VM in interpreted mode. Recently I started getting an error where that doesn't work. See below for the only output on the console. I am working on Windows, so the VM is compiled with Visual Studio. The VM is translated using the pypy commit tagged "release-2.3.1".

    RPython traceback:
      File "rpython_jit_metainterp_compile.c", line 379, in ResumeGuardForcedDescr_handle_async_forcing
      File "rpython_jit_metainterp_resume.c", line 256, in force_from_resumedata
      File "rpython_jit_metainterp_virtualizable.c", line 426, in write_from_resume_data_partial
      File "rpython_jit_metainterp_resume.c", line 768, in ResumeDataDirectReader_decode_ref
    Fatal RPython error: AssertionError

Since this seems to be related to virtualizable objects: during translation we get 2 warnings that the "fresh_virtualizable" hints are ignored (there are 2 places where virtualizable frame objects are created). Also, we have one virtualizable list[*] field in our Context class, which should never be resized. I applied the make_sure_not_resized hint on it, but maybe I'm using it wrong... I'm just calling make_sure_not_resized once, after setting the list field.

Any ideas? Thanks in advance!

Best,
Anton

From arigo at tunes.org Sun Aug 3 14:26:38 2014
From: arigo at tunes.org (Armin Rigo)
Date: Sun, 3 Aug 2014 14:26:38 +0200
Subject: [pypy-dev] AssertionError in rpython_jit_metainterp_resume.c
In-Reply-To: References: Message-ID:

Hi Anton,

On 3 August 2014 12:57, Anton Gulenko wrote:
> Since this seems to be related to virtualizable objects: during translation
> we get 2 warnings that the "fresh_virtualizable" hints are ignored (there
> are 2 places where virtualizable frame objects are created).
> Also, we have one virtualizable list[*] field in our Context class, which
> should never be resized. I applied the make_sure_not_resized hint on it, but
> maybe I'm using it wrong... I'm just calling make_sure_not_resized once,
> after setting the list field.

It's not enough to not resize the list: you need to make sure that a given virtualizable object's list attribute is never re-assigned with a list of a different size. Is that true?

A bientôt,
Armin.

From anton.gulenko at googlemail.com Mon Aug 4 14:36:36 2014
From: anton.gulenko at googlemail.com (Anton Gulenko)
Date: Mon, 4 Aug 2014 14:36:36 +0200
Subject: [pypy-dev] AssertionError in rpython_jit_metainterp_resume.c
In-Reply-To: References: Message-ID:

Salut Armin,

I checked the code and indeed, it seems like the list field in the Context class _can_ be reassigned. However, it should only happen very soon after the constructor is finished, and the size of the list should not change. To be sure, I'll try to refactor this and make it impossible for the list to be reassigned after the constructor is finished. That would satisfy the requirements, right?

Is it illegal to reassign the list field at all, or is it just the size of the list that matters? Also, is it possible to have two different list fields in a virtualizable object (provided both are never reassigned after the constructor)?

Merci bien et à bientôt,
Anton

2014-08-03 14:26 GMT+02:00 Armin Rigo :
> Hi Anton,
>
> On 3 August 2014 12:57, Anton Gulenko wrote:
> > Since this seems to be related to virtualizable objects: during translation
> > we get 2 warnings that the "fresh_virtualizable" hints are ignored (there
> > are 2 places where virtualizable frame objects are created).
> > Also, we have one virtualizable list[*] field in our Context class, which
> > should never be resized. I applied the make_sure_not_resized hint on it, but
> > maybe I'm using it wrong... I'm just calling make_sure_not_resized once,
> > after setting the list field.
>
> It's not enough to not resize the list: you need to make sure that a
> given virtualizable object's list attribute is never re-assigned with
> a list of a different size. Is that true?
>
> A bientôt,
>
> Armin.

From arigo at tunes.org Mon Aug 4 16:00:05 2014
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 4 Aug 2014 16:00:05 +0200
Subject: [pypy-dev] AssertionError in rpython_jit_metainterp_resume.c
In-Reply-To: References: Message-ID:

Hi Anton,

On 4 August 2014 14:36, Anton Gulenko wrote:
> To be sure, I'll try to refactor this and make it impossible for the list to
> be reassigned after the constructor is finished. That would satisfy the
> requirements, right?

Yes, at least this one... To be able to help you more if needed, please provide step-by-step instructions about how to reproduce.

Armin
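To make the rule concrete, here is a minimal sketch of the discipline Armin describes, with hypothetical names (the real classes live in spyvm/storage_contexts.py, and depending on the RPython version the marker attribute may be spelled `_virtualizable2_` instead):

    from rpython.rlib.debug import make_sure_not_resized
    from rpython.rlib import jit

    class Context(object):
        _virtualizable_ = ['stack[*]']   # '[*]' declares a fixed-size list field

        def __init__(self, stack_size):
            self = jit.hint(self, access_directly=True,
                            fresh_virtualizable=True)
            # the field is assigned exactly once, and the list keeps the
            # size chosen here for the whole lifetime of the object
            self.stack = [None] * stack_size
            make_sure_not_resized(self.stack)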
From dimaqq at gmail.com Mon Aug 4 17:04:31 2014
From: dimaqq at gmail.com (Dima Tisnek)
Date: Mon, 4 Aug 2014 17:04:31 +0200
Subject: [pypy-dev] use as benchmark pypy vs python if you please
Message-ID:

Attached is an n-queens solver (pardon my naive algorithm). It runs:

    python 2.7.6:     17s
    pypy 2.4.0 alpha: 23s
    same nojit:       32s

I've tried a similar-looking algorithm for another problem before, and had similar results -- somehow pypy was slower.

Feel free to investigate / tweak or even use on speed.pypy.org

-------------- next part --------------
A non-text attachment was scrubbed...
Name: nq.py
Type: text/x-python
Size: 3354 bytes
Desc: not available

From toni.mattis at student.hpi.uni-potsdam.de Mon Aug 4 18:06:29 2014
From: toni.mattis at student.hpi.uni-potsdam.de (Toni Mattis)
Date: Mon, 4 Aug 2014 18:06:29 +0200
Subject: [pypy-dev] Calling lambdas inside loop causes significant slowdown
Message-ID: <53DFAF85.4080603@student.hpi.uni-potsdam.de>

Hi,

I'm trying to figure out the fastest way in PyPy to introduce abstractions into loops, e.g. refactoring the following code:

    def sum_direct(data):
        s = 0
        for i in data:
            if i < 5:
                s += i + 1
        return s

to something like:

    def sum_lambda(data):
        filter_func = lambda x: x < 5
        map_func = lambda x: x + 1
        s = 0
        for i in data:
            if filter_func(i):
                s += map_func(i)
        return s

and then turning both lambdas into arguments, class members and so on.

However, the refactoring mentioned above already introduces about 50% runtime overhead, and it is not getting better with further refactorings. Shouldn't the tracing/inlining eliminate most of this overhead, or is there a mistake on my part?

I timed both methods on a large array:

    from array import array
    import time

    data = array('i')
    for i in xrange(100000000):
        data.append(i % 10)

    t = time.time()
    result = sum_lambda(data)  # or sum_direct
    print result, time.time() - t

Calling sum_direct() takes about 0.43 seconds, sum_lambda() is at 0.64s on average. (I'm at changeset 72674:78d5d873a260 from Aug 3 2014, translated and run on Ubuntu 14.04.)

The JIT trace of the lambda code basically adds two force_token() operations and potentially more expensive guards. Is there any chance to avoid these without excessive metaprogramming? If not, which speedup tricks (speaking of jit hooks, code generation, etc.) can you recommend for implementing such APIs?

Thanks in advance,
Toni

From anton.gulenko at googlemail.com Tue Aug 5 18:24:04 2014
From: anton.gulenko at googlemail.com (Anton Gulenko)
Date: Tue, 5 Aug 2014 18:24:04 +0200
Subject: [pypy-dev] AssertionError in rpython_jit_metainterp_resume.c
In-Reply-To: References: Message-ID:

Hi Armin,

I made a refactoring that makes sure that the virtualizable list is only assigned once, but the issue still persists.

To reproduce this you'd have to build the VM... Fortunately that's way faster than PyPy ;)
The repo: https://bitbucket.org/pypy/lang-smalltalk
The relevant branch is 'storage', current revision 1012, commit hash d007ca0.
The requirements are to have the pypy repo on PYTHONPATH (to load rpython.rlib and so on), and to have rsdl installed.
The target file is targetimageloadingsmalltalk.py, so compiling should be something like:

    $ pypy ../pypy/rpython/bin/rpython -Ojit targetimageloadingsmalltalk.py

I am also compiling with --gc=minimark because I had some issues with the default gc.

The virtualizable classes are defined in spyvm/storage_contexts.py.
There's the root class ContextPartShadow and two subclasses, BlockContextShadow and MethodContextShadow. Both subclasses have a default constructor and static build() methods. It might be enough if you check out the definitions and constructors of those classes?

Let me know if it doesn't translate or if the code is weird... Thanks for your help!

Best,
Anton

2014-08-04 16:00 GMT+02:00 Armin Rigo :
> Hi Anton,
>
> On 4 August 2014 14:36, Anton Gulenko wrote:
> > To be sure, I'll try to refactor this and make it impossible for the list to
> > be reassigned after the constructor is finished. That would satisfy the
> > requirements, right?
>
> Yes, at least this one... To be able to help you more if needed,
> please provide step-by-step instructions about how to reproduce.
>
> Armin

From anton.gulenko at googlemail.com Tue Aug 5 18:32:15 2014
From: anton.gulenko at googlemail.com (Anton Gulenko)
Date: Tue, 5 Aug 2014 18:32:15 +0200
Subject: [pypy-dev] AssertionError in rpython_jit_metainterp_resume.c
In-Reply-To: References: Message-ID:

Sorry, I forgot to mention - to actually produce the error you have to run the Squeak image:

    ./rsqueak images/Squeak4.5-noBitBlt.image

Best,
Anton

2014-08-05 18:24 GMT+02:00 Anton Gulenko :
> Hi Armin,
>
> I made a refactoring that makes sure that the virtualizable list is only
> assigned once, but the issue still persists.
>
> To reproduce this you'd have to build the VM... Fortunately that's way
> faster than PyPy ;)
> The repo: https://bitbucket.org/pypy/lang-smalltalk
> The relevant branch is 'storage', current revision 1012, commit hash d007ca0.
> The requirements are to have the pypy repo on PYTHONPATH (to load
> rpython.rlib and so on), and to have rsdl installed.
> The target file is targetimageloadingsmalltalk.py, so compiling should be
> something like:
> $ pypy ../pypy/rpython/bin/rpython -Ojit targetimageloadingsmalltalk.py
> I am also compiling with --gc=minimark because I had some issues with the
> default gc.
>
> The virtualizable classes are defined in spyvm/storage_contexts.py.
> There's the root class ContextPartShadow and two
> subclasses, BlockContextShadow and MethodContextShadow.
> Both subclasses have a default constructor and static build() methods.
> It might be enough if you check out the definitions and constructors of
> those classes?
>
> Let me know if it doesn't translate or if the code is weird...
> Thanks for your help!
> Best,
> Anton
>
> 2014-08-04 16:00 GMT+02:00 Armin Rigo :
>
> Hi Anton,
>>
>> On 4 August 2014 14:36, Anton Gulenko wrote:
>> > To be sure, I'll try to refactor this and make it impossible for the list to
>> > be reassigned after the constructor is finished. That would satisfy the
>> > requirements, right?
>>
>> Yes, at least this one... To be able to help you more if needed,
>> please provide step-by-step instructions about how to reproduce.
>>
>> Armin

From russt at releasetools.org Sat Aug 9 22:27:31 2014
From: russt at releasetools.org (Russ Tremain)
Date: Sat, 9 Aug 2014 13:27:31 -0700
Subject: [pypy-dev] trouble compiling p4python api with pypy3
Message-ID:

Hi,

I'm trying to get p4python (the Perforce Python API) compiling with pypy, and I notice that the include headers in pypy are very different from the standard 3.2.5 Python headers. Is the C API different for pypy3 from Python 3.2.5?
For example, note the following diffs:

---------------------------------------------------------------------------------------------------
diff -c `find . -name moduleobject.h`
*** ./pypy3-2.3.1-linux-armhf-raspbian/include/moduleobject.h  2014-06-19 09:20:58.000000000 -0700
--- ./Python-3.2.5/Include/moduleobject.h  2013-05-15 09:33:41.000000000 -0700
***************
*** 1,3 ****
--- 1,4 ----
+ /* Module object interface */
  #ifndef Py_MODULEOBJECT_H
***************
*** 6,11 ****
--- 7,30 ----
  extern "C" {
  #endif
+ PyAPI_DATA(PyTypeObject) PyModule_Type;
+
+ #define PyModule_Check(op) PyObject_TypeCheck(op, &PyModule_Type)
+ #define PyModule_CheckExact(op) (Py_TYPE(op) == &PyModule_Type)
+
+ PyAPI_FUNC(PyObject *) PyModule_New(
+     const char *name            /* UTF-8 encoded string */
+     );
+ PyAPI_FUNC(PyObject *) PyModule_GetDict(PyObject *);
+ PyAPI_FUNC(const char *) PyModule_GetName(PyObject *);
+ PyAPI_FUNC(const char *) PyModule_GetFilename(PyObject *);
+ PyAPI_FUNC(PyObject *) PyModule_GetFilenameObject(PyObject *);
+ #ifndef Py_LIMITED_API
+ PyAPI_FUNC(void) _PyModule_Clear(PyObject *);
+ #endif
+ PyAPI_FUNC(struct PyModuleDef*) PyModule_GetDef(PyObject*);
+ PyAPI_FUNC(void*) PyModule_GetState(PyObject*);
+
  typedef struct PyModuleDef_Base {
    PyObject_HEAD
    PyObject* (*m_init)(void);
***************
*** 32,37 ****
--- 51,57 ----
    freefunc m_free;
  }PyModuleDef;
+
  #ifdef __cplusplus
  }
  #endif
---------------------------------------------------------------------------------------------------

When I compile against the distributed pypy3 headers, I get errors:

---------------------------------------------------------------------------------------------------
08/09/14.12:11:29: Building p4python ...
08/09/14.12:11:29: pypy setup.py build --apidir ../p4api-2014.1.886167.main/ --ssl
API Release 2014.1
running build
running build_py
creating build
creating build/lib.linux-armv6l-3.2
copying P4.py -> build/lib.linux-armv6l-3.2
running build_ext
building 'P4API' extension
creating build/temp.linux-armv6l-3.2
cc -O2 -fPIC -Wimplicit -DID_OS="LINUX31ARM" -DID_REL="2012.2" -DID_PATCH="549493" -DID_API="2014.1.main/886167" -DID_Y="2012" -DID_M="11" -DID_D="05" -I../p4api-2014.1.886167.main/ -I../p4api-2014.1.886167.main/include/p4 -I/usr/local/pypy3-2.3.1-linux-armhf-raspbian/include -c P4API.cpp -o build/temp.linux-armv6l-3.2/P4API.o -DOS_LINUX -DOS_LINUX31 -DOS_LINUXARM -DOS_LINUX31ARM
cc1plus: warning: command line option '-Wimplicit' is valid for C/ObjC but not for C++ [enabled by default]
P4API.cpp: In function 'int P4API_traverse(PyObject*, visitproc, void*)':
P4API.cpp:984:5: error: 'PyModule_GetState' was not declared in this scope
P4API.cpp: In function 'int P4API_clear(PyObject*)':
P4API.cpp:989:5: error: 'PyModule_GetState' was not declared in this scope
P4API.cpp: In function 'PyObject* PyInit_P4API()':
P4API.cpp:1045:30: error: 'PyModule_GetState' was not declared in this scope
error: command 'cc' failed with exit status 1

raspberrypi{gfinstall.5} pypy --version
Python 3.2.5 (986752d005bb, Jun 19 2014, 16:20:03)
[PyPy 2.3.1 with GCC 4.7.2 20120731 (prerelease)]
---------------------------------------------------------------------------------------------------

If I hack around the macro problems, I can get a compile, but then I get a runtime error:

---------------------------------------------------------------------------------------------------
raspberrypi{gfinstall.42} cat p4python_version_check.py
import P4
print(P4.P4.identify())
raspberrypi{gfinstall.43} pypy p4python_version_check.py
Traceback (most recent call last):
  File "p4python_version_check.py", line 1, in <module>
    import P4
  File "/home/pi/git-fusion/bin/p4python-2012.2.549493/P4.py", line 359, in <module>
    import P4API
ImportError: unable to load extension module '/home/pi/git-fusion/bin/p4python-2012.2.549493/P4API.pypy3-23.so': /home/pi/git-fusion/bin/p4python-2012.2.549493/P4API.pypy3-23.so: undefined symbol: __cxa_pure_virtual
---------------------------------------------------------------------------------------------------

Any help appreciated..

tia,
-Russ

From yury at shurup.com Sun Aug 10 13:03:15 2014
From: yury at shurup.com (Yury V. Zaytsev)
Date: Sun, 10 Aug 2014 13:03:15 +0200
Subject: [pypy-dev] trouble compiling p4python api with pypy3
In-Reply-To: References: Message-ID: <1407668595.2651.6.camel@newpride>

On Sat, 2014-08-09 at 13:27 -0700, Russ Tremain wrote:
>
> If I hack around the macro problems, I can get a compile, but then I
> get a runtime error:

Hi Russ,

The PyPy/C API (cpyext) is an incomplete implementation of the Python/C API, fitted to emulate the implementation details of CPython when possible, often at a performance cost.

It is not impossible that the functions that you are trying to use are simply not implemented; I've had such issues in the past.

How are you "hacking" around the macro problems? Simply declaring the macros will not help; most likely you would also have to implement the corresponding support code in PyPy.

Hope that helps,

--
Sincerely yours,
Yury V. Zaytsev

From russt at releasetools.org Sun Aug 10 19:39:53 2014
From: russt at releasetools.org (Russ Tremain)
Date: Sun, 10 Aug 2014 10:39:53 -0700
Subject: [pypy-dev] trouble compiling p4python api with pypy3
In-Reply-To: <1407668595.2651.6.camel@newpride> References: <1407668595.2651.6.camel@newpride> Message-ID:

Hi Yury,

Yes, I was afraid of that - I had expected a fuller emulation on the pypy side. It looks like the pypy C API is closer to the Python 2.x API and is still somewhat incomplete with respect to Python 3.x.

On the bright side, it appears that the older Perforce API (based on python 3.2.x) only needs PyByteArray_Type() and PyModule_GetState(). Here are my compile hacks:

----------------------------------------------------------------------------------
raspberrypi{pypylocal.72} diff -c include.orig/bytesobject.h include/bytesobject.h
*** include.orig/bytesobject.h  2014-06-19 09:20:58.000000000 -0700
--- include/bytesobject.h  2014-08-10 10:14:53.686066499 -0700
***************
*** 16,23 ****
      Py_ssize_t size;
  } PyBytesObject;

! #define PyByteArray_Check(obj) \
!     PyObject_IsInstance(obj, (PyObject *)&PyByteArray_Type)

  #ifdef __cplusplus
  }
--- 16,23 ----
      Py_ssize_t size;
  } PyBytesObject;

! #define PyByteArray_Check(self) PyObject_TypeCheck(self, &PyByteArray_Type)
! PyAPI_FUNC(PyObject *) PyByteArray_FromStringAndSize(const char *, Py_ssize_t);

  #ifdef __cplusplus
  }
raspberrypi{pypylocal.73} diff -c include.orig/moduleobject.h include/moduleobject.h
*** include.orig/moduleobject.h  2014-06-19 09:20:58.000000000 -0700
--- include/moduleobject.h  2014-08-09 12:24:55.280748682 -0700
***************
*** 32,37 ****
--- 32,39 ----
    freefunc m_free;
  }PyModuleDef;

+ PyAPI_FUNC(void*) PyModule_GetState(PyObject*);
+
  #ifdef __cplusplus
  }
  #endif
----------------------------------------------------------------------------------

How hard would that be to emulate those types? Would it be better to customize the API to comply with the "native" pypy interface? (Any pointers appreciated!)
For the current Perforce API, based on python 3.3, there are additional requirements in the unicode area, which I believe reflect changes in CPython itself between 3.2.x and 3.3.x.

thanks,
-Russ

At 1:03 PM +0200 8/10/14, Yury V. Zaytsev wrote:
>On Sat, 2014-08-09 at 13:27 -0700, Russ Tremain wrote:
>>
>> If I hack around the macro problems, I can get a compile, but then I
>> get a runtime error:
>
>Hi Russ,
>
>The PyPy/C API (cpyext) is an incomplete implementation of the Python/C API,
>fitted to emulate the implementation details of CPython when possible,
>often at a performance cost.
>
>It is not impossible that the functions that you are trying to use are
>simply not implemented; I've had such issues in the past.
>
>How are you "hacking" around the macro problems? Simply declaring the macros
>will not help; most likely you would also have to implement the
>corresponding support code in PyPy.
>
>Hope that helps,
>
>--
>Sincerely yours,
>Yury V. Zaytsev

From russt at releasetools.org Sun Aug 10 19:56:36 2014
From: russt at releasetools.org (Russ Tremain)
Date: Sun, 10 Aug 2014 10:56:36 -0700
Subject: [pypy-dev] failure installing pygit2 on pypy3 raspbian
Message-ID:

Hi,

Just wondering if a pip install of pygit2 is expected to work, or if this is a problem related to ARM. Also, is there a difference between using "pip" vs. pip3 or pip3.2?

thanks,
-Russ

Linux raspberrypi 3.12.22+ #691 PREEMPT Wed Jun 18 18:29:58 BST 2014 armv6l
...

pi at raspberrypi:~$ pip3 install pygit2
Downloading/unpacking pygit2
  Downloading pygit2-0.21.2.tar.gz (416kB): 416kB downloaded
  Running setup.py (path:/tmp/pip_build_pi/pygit2/setup.py) egg_info for package pygit2
    pygit2/__pycache__/_cffi__gab826d3dx2bbed4fa.c:26:34: error: unknown type name 'git_buf'
    pygit2/__pycache__/_cffi__gab826d3dx2bbed4fa.c: In function '_cffi_layout__git_buf':
    pygit2/__pycache__/_cffi__gab826d3dx2bbed4fa.c:35:10: error: unknown type name 'git_buf'
    pygit2/__pycache__/_cffi__gab826d3dx2bbed4fa.c:37:12: error: 'git_buf' undeclared (first use in this function)
    pygit2/__pycache__/_cffi__gab826d3dx2bbed4fa.c:1649:21: error: (near initialization for 'nums[13]')
    [snip - thousands more errors]

    Traceback (most recent call last):
      File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/unixccompiler.py", line 131, in _compile
        extra_postargs)
      File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/ccompiler.py", line 909, in spawn
        spawn(cmd, dry_run=self.dry_run)
      File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/spawn.py", line 32, in spawn
        _spawn_posix(cmd, search_path, dry_run=dry_run)
      File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/spawn.py", line 163, in _spawn_posix
        % (cmd[0], exit_status))
    distutils.errors.DistutilsExecError: command 'cc' failed with exit status 1

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib_pypy/cffi/ffiplatform.py", line 47, in _build
        dist.run_command('build_ext')
      File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/dist.py", line 936, in run_command
        cmd_obj.run()
      File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/site-packages/distribute-0.6.49-py3.2.egg/setuptools/command/build_ext.py", line 46, in run
        _build_ext.run(self)
      File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/command/build_ext.py", line 354, in run
        self.build_extensions()
"/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/command/build_ext.py", line 463, in build_extensions self.build_extension(ext) File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/site-packages/distribute-0.6.49-py3.2.egg/setuptools/command/build_ext.py", line 182, in build_extension _build_ext.build_extension(self,ext) File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/command/build_ext.py", line 518, in build_extension depends=ext.depends) File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/ccompiler.py", line 574, in compile self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts) File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib-python/3/distutils/unixccompiler.py", line 133, in _compile raise CompileError(msg) distutils.errors.CompileError: command 'cc' failed with exit status 1 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "", line 17, in File "/tmp/pip_build_pi/pygit2/setup.py", line 178, in from ffi import ffi File "pygit2/ffi.py", line 59, in include_dirs=include_dirs, library_dirs=library_dirs) File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib_pypy/cffi/api.py", line 341, in verify lib = self.verifier.load_library() File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib_pypy/cffi/verifier.py", line 74, in load_library self._compile_module() File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib_pypy/cffi/verifier.py", line 139, in _compile_module outputfilename = ffiplatform.compile(tmpdir, self.get_extension()) File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib_pypy/cffi/ffiplatform.py", line 25, in compile outputfilename = _build(tmpdir, ext) File "/usr/local/pypy3-2.3.1-linux-armhf-raspbian/lib_pypy/cffi/ffiplatform.py", line 50, in _build raise VerificationError('%s: %s' % (e.__class__.__name__, e)) cffi.ffiplatform.VerificationError: CompileError: command 'cc' failed with exit status 1 Complete output from command python setup.py egg_info: pygit2/__pycache__/_cffi__gab826d3dx2bbed4fa.c:26:34: error: unknown type name 'git_buf' [snip] ---------------------------------------- Cleaning up... Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_pi/pygit2 Storing debug log for failure in /home/pi/.pip/pip.log pi at raspberrypi:~$ From arigo at tunes.org Mon Aug 11 09:17:18 2014 From: arigo at tunes.org (Armin Rigo) Date: Mon, 11 Aug 2014 09:17:18 +0200 Subject: [pypy-dev] trouble compiling p4python api with pypy3 In-Reply-To: References: <1407668595.2651.6.camel@newpride> Message-ID: Hi Russ, On 10 August 2014 19:39, Russ Tremain wrote: > Would it be better to customize the api to comply with the "native" pypy interface? The "native" pypy interface is cffi (which also works on CPython, btw). It would definitely give better stability and performance on PyPy. It's more than just adapting some C code here and there, though. http://cffi.readthedocs.org/ Armin From russt at releasetools.org Mon Aug 11 21:20:34 2014 From: russt at releasetools.org (Russ Tremain) Date: Mon, 11 Aug 2014 12:20:34 -0700 Subject: [pypy-dev] trouble compiling p4python api with pypy3 In-Reply-To: References: <1407668595.2651.6.camel@newpride> Message-ID: Hi Armin, Looks like more of a project than what I was wanting to take on, but will give it a look. Would be nice to consolidate the interface for both pypy and Cpython.. 
From russt at releasetools.org Mon Aug 11 21:20:34 2014
From: russt at releasetools.org (Russ Tremain)
Date: Mon, 11 Aug 2014 12:20:34 -0700
Subject: [pypy-dev] trouble compiling p4python api with pypy3
In-Reply-To: References: <1407668595.2651.6.camel@newpride> Message-ID:

Hi Armin,

Looks like more of a project than what I was wanting to take on, but I will give it a look. Would be nice to consolidate the interface for both pypy and CPython.

thanks,
-Russ

At 9:17 AM +0200 8/11/14, Armin Rigo wrote:
>Hi Russ,
>
>On 10 August 2014 19:39, Russ Tremain wrote:
>> Would it be better to customize the api to comply with the "native" pypy interface?
>
>The "native" pypy interface is cffi (which also works on CPython,
>btw). It would definitely give better stability and performance on
>PyPy. It's more than just adapting some C code here and there,
>though. http://cffi.readthedocs.org/
>
>Armin

From arigo at tunes.org Thu Aug 14 09:26:41 2014
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 14 Aug 2014 09:26:41 +0200
Subject: [pypy-dev] Mini-sprint August 25-27 in Switzerland
Message-ID:

Hi all,

We're running a 3-day mini-sprint on the 25-27 of August at the University of Neuchatel, Switzerland, research group of Pascal Felber. This is not a general PyPy sprint, but one on the topic of STM specifically. Nevertheless, in case there is someone nearby who listens to this mailing list and who would like to show up (from one day to the full 3 days), he is welcome to.

More details: the mini-sprint is about anything related to PyPy-STM, but more precisely, now would be a good time to try to push the idea forward, now that we have an almost reasonable core working (however rough). This would involve in general the question: what does it mean for the user? Should we scrap atomic blocks and instead focus on lock elision (as a more backward-compatible idea with nicer fallbacks in case we try to do I/O in the atomic block)? We definitely should make dictionaries more aware of multithreading, but is that enough to avoid most "unexpected" conflicts? Can we try to adapt a larger single-threaded framework (my favourite is Twisted) to run on multiple threads transparently for the user's applications?

A bientôt,
Armin.

From anton.gulenko at googlemail.com Sun Aug 17 19:04:37 2014
From: anton.gulenko at googlemail.com (Anton Gulenko)
Date: Sun, 17 Aug 2014 19:04:37 +0200
Subject: [pypy-dev] AssertionError in rpython_jit_metainterp_resume.c
In-Reply-To: References: Message-ID:

Hi Armin,

just wanted to bump this... Do you maybe have any hints for me to debug this issue?

Thanks and best regards,
Anton

2014-08-05 18:32 GMT+02:00 Anton Gulenko :
> Sorry, I forgot to mention - to actually produce the error you have to run
> the Squeak image:
> ./rsqueak images/Squeak4.5-noBitBlt.image
>
> Best,
> Anton
>
> 2014-08-05 18:24 GMT+02:00 Anton Gulenko :
>
> Hi Armin,
>>
>> I made a refactoring that makes sure that the virtualizable list is only
>> assigned once, but the issue still persists.
>>
>> To reproduce this you'd have to build the VM... Fortunately that's way
>> faster than PyPy ;)
>> The repo: https://bitbucket.org/pypy/lang-smalltalk
>> The relevant branch is 'storage', current revision 1012, commit hash d007ca0.
>> The requirements are to have the pypy repo on PYTHONPATH (to load
>> rpython.rlib and so on), and to have rsdl installed.
>> The target file is targetimageloadingsmalltalk.py, so compiling should be
>> something like:
>> $ pypy ../pypy/rpython/bin/rpython -Ojit targetimageloadingsmalltalk.py
>> I am also compiling with --gc=minimark because I had some issues with the
>> default gc.
>>
>> The virtualizable classes are defined in spyvm/storage_contexts.py.
>> There's the root class ContextPartShadow and two
>> subclasses, BlockContextShadow and MethodContextShadow.
>> Both subclasses have a default constructor and static build() methods.
>> It might be enough if you check out the definitions and constructors of
>> those classes?
>>
>> Let me know if it doesn't translate or if the code is weird...
>> Thanks for your help!
>> Best,
>> Anton
>>
>> 2014-08-04 16:00 GMT+02:00 Armin Rigo :
>>
>> Hi Anton,
>>>
>>> On 4 August 2014 14:36, Anton Gulenko wrote:
>>> > To be sure, I'll try to refactor this and make it impossible for the list to
>>> > be reassigned after the constructor is finished. That would satisfy the
>>> > requirements, right?
>>>
>>> Yes, at least this one... To be able to help you more if needed,
>>> please provide step-by-step instructions about how to reproduce.
>>>
>>> Armin

From arigo at tunes.org Fri Aug 22 20:50:42 2014
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 22 Aug 2014 20:50:42 +0200
Subject: [pypy-dev] AssertionError in rpython_jit_metainterp_resume.c
In-Reply-To: References: Message-ID:

Hi Anton,

On 17 August 2014 19:04, Anton Gulenko wrote:
> just wanted to bump this... Do you maybe have any hints for me to debug this
> issue?

Sorry, I've exhausted my hint supply. The next thing to do is to debug, and I've not had much time to do that recently. Can you tell me precisely which PyPy revision you're using, too? We're busy changing a lot of things in the stmgc-c7 branch.

A bientôt,
Armin.

From numerodix at gmail.com Sat Aug 23 22:03:45 2014
From: numerodix at gmail.com (Martin Matusiak)
Date: Sat, 23 Aug 2014 22:03:45 +0200
Subject: [pypy-dev] Fwd: For py3k...
In-Reply-To: References: Message-ID:

Hi,

So just to get a quick sanity check before I start. We are missing lzma in py3.3 (or to be more precise, we have the beginnings of the module, but it's not passing any tests yet). Philip hinted to me recently that it would make sense to integrate lzmaffi. I would like to work on that, but I will probably get stuck a few times, so I'm planning to do some writeups on my blog as I go along (both for note taking and debugging).

Since it's an external project I was planning to follow the same procedure that we use for stdlib, which is described in stdlib-upgrade.txt. So start out with vendor/lzmaffi, then branch off to py3.3-lzmaffi for integration work, finally merge into py3.3. Does that seem like a good approach?

Martin

2014-07-23 19:20 GMT+02:00 Peter Cock :
> Thanks - that will be a useful companion to my backport
> for using lzma on C Python 2.6, 2.7 and early Python 3 :)
> https://pypi.python.org/pypi/backports.lzma
>
> Peter
>
> On Wed, Jul 23, 2014 at 5:14 PM, Armin Rigo wrote:
>> Hi all,
>>
>> A module mentioned today in a EuroPython lightning talk: "lzma"
>> reimplemented in cffi (compatible with the one from Python 3.3's
>> stdlib).
>>
>> https://pypi.python.org/pypi/lzmaffi
>>
>> A bientôt,
>>
>> Armin.
>> _______________________________________________
>> pypy-dev mailing list
>> pypy-dev at python.org
>> https://mail.python.org/mailman/listinfo/pypy-dev
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

From arigo at tunes.org Sun Aug 24 10:11:11 2014
From: arigo at tunes.org (Armin Rigo)
Date: Sun, 24 Aug 2014 10:11:11 +0200
Subject: [pypy-dev] Fwd: For py3k...
In-Reply-To: References: Message-ID:

Hi Martin,

On 23 August 2014 22:03, Martin Matusiak wrote:
> Since it's an external project I was
> planning to follow the same procedure that we use for stdlib, which is
> described in stdlib-upgrade.txt.
> So start out with vendor/lzmaffi,
> then branch off to py3.3-lzmaffi for integration work, finally merge
> into py3.3.

No, that's unnecessarily complicated. Depending on the level of completion of lzmaffi, you either don't need a lot more, or you do; if you do, then first complete it independently of PyPy, e.g. in a fork of lzmaffi or in your own repository. Import the py3.3 tests, test on top of CPython and PyPy, and so on. When it's done, you can simply copy it into pypy in the branch py3.3 and kill the beginnings of pypy/module/lzma/. It's probably just a pull request at this point.

A bientôt,
Armin.
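As a concrete starting point, this plan implies smoke tests of roughly the following shape --- assuming lzmaffi tracks the stdlib lzma API, which is its stated goal:

    import lzmaffi as lzma   # intended as a drop-in for the stdlib 'lzma' module

    data = b"hello pypy " * 1000
    compressed = lzma.compress(data)
    assert lzma.decompress(compressed) == data

Running the imported py3.3 test suite against both CPython and PyPy would then exercise the rest of the API.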
From mike.kaplinskiy at gmail.com Mon Aug 25 09:20:55 2014
From: mike.kaplinskiy at gmail.com (Mike Kaplinskiy)
Date: Mon, 25 Aug 2014 03:20:55 -0400
Subject: [pypy-dev] Adding a feature to re
Message-ID:

Hey folks,

One of the projects I'm working on in CPython is becoming a little CPU bound and I was hoping to use pypy. One problem though - one of the pieces uses the regex library (which claims to be CPython's re-next). Running regex through cpyext works, but is deadly slow.

From reading the docs it seems like I have a few options:
 - rewrite all of regex in Python - seems like a bad idea
 - rewrite regex to be non-Python-specific & use cppyy or cffi to interface with it. I actually looked into this & unfortunately the CPython API seems quite deep in there.
 - get rid of the dependency somehow. What I'm missing are named lists (basically "L", a=["1","2"] will match 1 or 2). Unfortunately creating one really long re string is out of the question - I have not seen compile() finish with that approach. Writing a custom DFA could be on the table, but I was hoping to avoid that error-prone step.
 - somehow factor out the part using regex and keep using CPython for it.
 - add the missing functionality to pypy's re. This seems like the path of least resistance.

I've started looking into the sre module and it looks like quite a few bits (parsing & compiling to byte code, mostly) are reused from CPython. I would have to change some of those bits. My question is then - is there any hope of getting these changes upstream? Do stdlib pieces have a "no touch" policy?

Thanks,
Mike.

From lac at openend.se Mon Aug 25 15:30:11 2014
From: lac at openend.se (Laura Creighton)
Date: Mon, 25 Aug 2014 15:30:11 +0200
Subject: [pypy-dev] Adding a feature to re
In-Reply-To: Message from Mike Kaplinskiy of "Mon, 25 Aug 2014 03:20:55 -0400."
References: Message-ID: <201408251330.s7PDUBIW021581@fido.openend.se>

In a message of Mon, 25 Aug 2014 03:20:55 -0400, Mike Kaplinskiy writes:
>Hey folks,
>
>One of the projects I'm working on in CPython is becoming a little CPU
>bound and I was hoping to use pypy.
>One problem though - one of the pieces
>uses the regex library (which claims to be CPython's re-next). Running
>regex through cpyext works, but is deadly slow.
>
>From reading the docs it seems like I have a few options:
> - rewrite all of regex in Python - seems like a bad idea
> - rewrite regex to be non-Python-specific & use cppyy or cffi to interface
>with it. I actually looked into this & unfortunately the CPython API seems
>quite deep in there.
> - get rid of the dependency somehow. What I'm missing are named lists
>(basically "L", a=["1","2"] will match 1 or 2). Unfortunately creating
>one really long re string is out of the question - I have not seen
>compile() finish with that approach. Writing a custom DFA could be on the
>table, but I was hoping to avoid that error-prone step.
> - somehow factor out the part using regex and keep using CPython for it.
> - add the missing functionality to pypy's re. This seems like the path of
>least resistance.
>
>I've started looking into the sre module and it looks like quite a few bits
>(parsing & compiling to byte code, mostly) are reused from CPython. I would
>have to change some of those bits. My question is then - is there any hope
>of getting these changes upstream? Do stdlib pieces have a "no touch"
>policy?
>
>Thanks,
>Mike.

Do you know about https://pypi.python.org/pypi/regex ?

If I were you, I would try to get the behaviour you want put into the new replacement version -- which would, of course, be easiest if you contributed the code. Then we can see about having pypy do the same ...

Laura

From me at wilfred.me.uk Mon Aug 25 15:32:57 2014
From: me at wilfred.me.uk (Wilfred Hughes)
Date: Mon, 25 Aug 2014 14:32:57 +0100
Subject: [pypy-dev] Pidigits performance without FFI
Message-ID:

Hi folks

I've been looking at the pidigits benchmark from the Computer Language Benchmarks, but measuring performance without FFI (so native Python numbers instead of GMP).

You can see my results here: https://github.com/Wilfred/the_end_times#preliminary-results

I'm seeing better performance for CPython than pypy3 on simple arithmetic code. Is this to be expected? Could I make pypy3 perform better?

Wilfred

From fijall at gmail.com Mon Aug 25 15:49:17 2014
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 25 Aug 2014 15:49:17 +0200
Subject: [pypy-dev] Pidigits performance without FFI
In-Reply-To: References: Message-ID:

pidigits is based on long numbers. We can make pypy be the same speed as cpython (porting missing optimizations), but not faster, or at least unlikely. Since the time is spent in the runtime, the JIT does not help at all. We tried to use GMP, but we found out that GMP semantics are not really suitable for Python - notably, there is no way to recover from an out-of-memory error (I think your program crashes as a result in GMP).

On Mon, Aug 25, 2014 at 3:32 PM, Wilfred Hughes wrote:
> Hi folks
>
> I've been looking at the pidigits benchmark from the Computer Language
> Benchmarks, but measuring performance without FFI (so native Python numbers
> instead of GMP).
>
> You can see my results here:
> https://github.com/Wilfred/the_end_times#preliminary-results
>
> I'm seeing better performance for CPython than pypy3 on simple arithmetic
> code. Is this to be expected? Could I make pypy3 perform better?
>
> Wilfred
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

From mike.kaplinskiy at gmail.com Tue Aug 26 04:36:24 2014
From: mike.kaplinskiy at gmail.com (Mike Kaplinskiy)
Date: Mon, 25 Aug 2014 22:36:24 -0400
Subject: [pypy-dev] Adding a feature to re
In-Reply-To: <201408251330.s7PDUBIW021581@fido.openend.se> References: <201408251330.s7PDUBIW021581@fido.openend.se> Message-ID:

The regex library I meant is that very one. Named lists are a feature there but not in cpython's or pypy's re.

On Monday, August 25, 2014, Laura Creighton wrote:
> In a message of Mon, 25 Aug 2014 03:20:55 -0400, Mike Kaplinskiy writes:
> >Hey folks,
> >
> >One of the projects I'm working on in CPython is becoming a little CPU
> >bound and I was hoping to use pypy. One problem though - one of the pieces
> >uses the regex library (which claims to be CPython's re-next). Running
> >regex through cpyext works, but is deadly slow.
> >
> >From reading the docs it seems like I have a few options:
> > - rewrite all of regex in Python - seems like a bad idea
> > - rewrite regex to be non-Python-specific & use cppyy or cffi to interface
> >with it. I actually looked into this & unfortunately the CPython API seems
> >quite deep in there.
> > - get rid of the dependency somehow. What I'm missing are named lists
> >(basically "L", a=["1","2"] will match 1 or 2). Unfortunately creating
> >one really long re string is out of the question - I have not seen
> >compile() finish with that approach. Writing a custom DFA could be on the
> >table, but I was hoping to avoid that error-prone step.
> > - somehow factor out the part using regex and keep using CPython for it.
> > - add the missing functionality to pypy's re. This seems like the path of
> >least resistance.
> >
> >I've started looking into the sre module and it looks like quite a few bits
> >(parsing & compiling to byte code, mostly) are reused from CPython. I would
> >have to change some of those bits. My question is then - is there any hope
> >of getting these changes upstream? Do stdlib pieces have a "no touch"
> >policy?
> >
> >Thanks,
> >Mike.
>
> Do you know about
> https://pypi.python.org/pypi/regex ?
>
> If I were you, I would try to get the behaviour you want put into the
> new replacement version -- which would, of course, be easiest if you
> contributed the code. Then we can see about having pypy do the same ...
>
> Laura

From arigo at tunes.org Thu Aug 28 10:10:18 2014
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 28 Aug 2014 10:10:18 +0200
Subject: [pypy-dev] Adding a feature to re
In-Reply-To: References: <201408251330.s7PDUBIW021581@fido.openend.se> Message-ID:

Hi Mike,

On 26 August 2014 04:36, Mike Kaplinskiy wrote:
> The regex library I meant is that very one. Named lists are a feature there
> but not in cpython's or pypy's re.

The regular expression library is a bit special inside PyPy: its core engine has to be written as RPython code in order to benefit from a regular-expression-aware JIT. (If we wrote it in pure Python, it would be significantly slower.) This core is a bytecode interpreter (a different one than Python's, obviously) in a module called "_sre" --- same name as the corresponding C module in CPython. When Python code does "import re", on either PyPy or CPython, it is also importing some pure Python code for the re.compile() part; only the execution of the compiled regular expressions is done by "_sre".

What would likely be the best approach would be to add new bytecodes to the same core engine, for example to support the named lists. These new bytecodes would never be produced by the pure Python parts of the "re" module, so they wouldn't have any impact on that. Then you can write or adapt a pure Python "regex" module. It would compile regex-compatible extended regular expressions down to a format that can be used by the same core engine --- using the extra bytecodes as well.
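For reference, the user-visible feature in question is regex's named lists; a pure-Python "regex" layer would have to compile uses like the following down to the new bytecodes (a sketch of the documented regex API, runnable today against the CPython regex package):

    import regex

    # \L<name> matches any string from the list passed as a keyword
    # argument; regex builds an efficient matcher rather than expanding
    # the list into a giant alternation
    pat = regex.compile(r"\L<animals>", animals=["cat", "dog", "mouse"])
    assert pat.match("mouse")
    assert pat.match("bird") is None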
If you end up supporting the complete "regex" syntax this way, then we'd be happy to distribute it included inside PyPy, as a pre-installed module (or, depending on how it turns out, as a separate module that needs to be pip-installed --- but it looks saner to include it with PyPy anyway, given that it depends on changes to PyPy's own built-in "_sre" module).

A bientôt,
Armin.

From phyo.arkarlwin at gmail.com Fri Aug 29 11:56:06 2014
From: phyo.arkarlwin at gmail.com (Phyo Arkar)
Date: Fri, 29 Aug 2014 16:26:06 +0630
Subject: [pypy-dev] pypy 2.3.1 json encoding performance is extremely slow (30x slower)
Message-ID:

$ pypy benchmark.py
Array with 256 doubles:
json encode     : 5044.67291 calls/sec
json decode     : 19591.44018 calls/sec
Array with 256 utf-8 strings:
json encode UTF : 71.03748 calls/sec
json decode UTF : 482.03748 calls/sec

$ /usr/bin/python benchmark.py
Array with 256 doubles:
json encode     : 4292.39818 calls/sec
json decode     : 15089.87792 calls/sec
Array with 256 utf-8 strings:
json encode UTF : 2062.16175 calls/sec
json decode UTF : 479.04892 calls/sec

Test using UltraJSON:

$ /usr/bin/python ujson/test/benchmark.py
Array with 256 doubles:
ujson encode      : 4386.51907 calls/sec
simplejson encode : 4269.30241 calls/sec
yajl encode       : 4268.15286 calls/sec
ujson decode      : 23814.23743 calls/sec
simplejson decode : 15375.76992 calls/sec
yajl decode       : 15388.19165 calls/sec
Array with 256 utf-8 strings:
ujson encode      : 4114.12586 calls/sec
simplejson encode : 1965.17111 calls/sec
yajl encode       : 1964.98007 calls/sec
ujson decode      : 1237.99751 calls/sec
simplejson decode : 440.96787 calls/sec
yajl decode       : 440.53785 calls/sec

Of course it is not fair to compare against UltraJSON, but there is no real performance increase vs vanilla Python's json either.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: benchmark.py
Type: text/x-python
Size: 2816 bytes
Desc: not available

From arigo at tunes.org Fri Aug 29 18:04:13 2014
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 29 Aug 2014 18:04:13 +0200
Subject: [pypy-dev] pypy 2.3.1 json encoding performance is extremely slow (30x slower)
In-Reply-To: References: Message-ID:

Hi Phyo,

Thanks for the report! I may have fixed it in e80c25f01061. Please try it out with tomorrow's nightly build.

A bientôt,
Armin.

From phyo.arkarlwin at gmail.com Fri Aug 29 18:52:46 2014
From: phyo.arkarlwin at gmail.com (Phyo Arkar)
Date: Fri, 29 Aug 2014 23:22:46 +0630
Subject: [pypy-dev] pypy 2.3.1 json encoding performance is extremely slow (30x slower)
In-Reply-To: References: Message-ID:

Thanks a lot! I am waiting to switch to PyPy.

Also there is one performance problem. I haven't profiled it properly yet, just from looking at the Chrome dev console.

A simple Tornado async (callback) + pymongo + motor setup, reading 100 records from the db and returning them as json:
in Python it takes only 1 ms;
in PyPy it takes 4-6 ms.

I will make a proper benchmark on it when I get time, and find which part is slowing down. But maybe you can guess where it is slowing?

On Fri, Aug 29, 2014 at 10:34 PM, Armin Rigo wrote:
> Hi Phyo,
>
> Thanks for the report! I may have fixed it in e80c25f01061. Please
> try it out with tomorrow's nightly build.
>
> A bientôt,
>
> Armin.

From alex.gaynor at gmail.com Sat Aug 30 00:09:44 2014
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Fri, 29 Aug 2014 15:09:44 -0700
Subject: [pypy-dev] pypy 2.3.1 json encoding performance is extremely slow (30x slower)
In-Reply-To: References: Message-ID:

FWIW, I'm not sure your commit helped; at least, it seems to be worse for some use cases (PyPy default vs 2.3.1):

$ ./pypy-c -mtimeit -s "import json" "json.dumps({u'': u'abcdef' * 10000})"
1000 loops, best of 3: 1.09 msec per loop
$ ./pypy-c -mtimeit -s "import json" "json.dumps({u'': u'abcdef' * 10000})"
1000 loops, best of 3: 1.1 msec per loop
$ ./pypy-c -mtimeit -s "import json" "json.dumps({u'': u'abcdef' * 10000})"
1000 loops, best of 3: 1.11 msec per loop
$ pypy -mtimeit -s "import json" "json.dumps({u'': u'abcdef' * 10000})"
1000 loops, best of 3: 503 usec per loop
$ pypy -mtimeit -s "import json" "json.dumps({u'': u'abcdef' * 10000})"
1000 loops, best of 3: 491 usec per loop
$ pypy -mtimeit -s "import json" "json.dumps({u'': u'abcdef' * 10000})"
1000 loops, best of 3: 498 usec per loop

Alex

On Fri, Aug 29, 2014 at 9:52 AM, Phyo Arkar wrote:
> Thanks a lot! I am waiting to switch to PyPy.
>
> Also there is one performance problem. I haven't profiled it properly
> yet, just from looking at the Chrome dev console.
>
> A simple Tornado async (callback) + pymongo + motor setup, reading 100
> records from the db and returning them as json:
> in Python it takes only 1 ms;
> in PyPy it takes 4-6 ms.
>
> I will make a proper benchmark on it when I get time, and find which
> part is slowing down. But maybe you can guess where it is slowing?
>
> On Fri, Aug 29, 2014 at 10:34 PM, Armin Rigo wrote:
> > Hi Phyo,
> >
> > Thanks for the report! I may have fixed it in e80c25f01061. Please
> > try it out with tomorrow's nightly build.
> >
> > A bientôt,
> >
> > Armin.
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

--
"I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084

From arigo at tunes.org Sat Aug 30 08:43:10 2014
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 30 Aug 2014 08:43:10 +0200
Subject: [pypy-dev] pypy 2.3.1 json encoding performance is extremely slow (30x slower)
In-Reply-To: References: Message-ID:

Hi Alex,

On 30 August 2014 00:09, Alex Gaynor wrote:
> FWIW, I'm not sure your commit helped; at least, it seems to be worse for
> some use cases (PyPy default vs 2.3.1):

Ah. Yes, my commit helped really a lot:

$ ./pypy-c -mtimeit -s"import json;x=u'\u1234'*10000" "json.dumps(x)"

went down from 5.8ms per loop to 208us. However, I see that running the same example with 10000 ASCII chars went up from 41.4us to 139us. Time to tweak.

Armin

From matti.picus at gmail.com Sat Aug 30 23:16:43 2014
From: matti.picus at gmail.com (Matti Picus)
Date: Sun, 31 Aug 2014 00:16:43 +0300
Subject: [pypy-dev] 2.4 release process has begun
Message-ID: <54023F3B.3020203@gmail.com>

An HTML attachment was scrubbed...