From issues-reply at bitbucket.org  Fri Apr  1 14:55:39 2016
From: issues-reply at bitbucket.org (uncaffeinated)
Date: Fri, 01 Apr 2016 18:55:39 -0000
Subject: [pypy-issue] Issue #2269: Huge performance regression in pypy3 for Enjarify (pypy/pypy)
Message-ID: <20160401185539.7689.71511@celery-worker-103.ash1.bb-inf.net>

New issue 2269: Huge performance regression in pypy3 for Enjarify
https://bitbucket.org/pypy/pypy/issues/2269/huge-performance-regression-in-pypy3-for

uncaffeinated:

Yesterday, I tried updating pypy3. My previous build was from ~Dec 8, and the current one is from March 31st. In the process, I saw a huge performance regression when running Enjarify: everything is 4-6x slower. For example, running the tests went from 15 seconds to over a minute. This essentially makes pypy unusable for Enjarify. It almost makes me wonder if I did something wrong somewhere.


From issues-reply at bitbucket.org  Sat Apr  2 02:10:10 2016
From: issues-reply at bitbucket.org (jeffery_chenn)
Date: Sat, 02 Apr 2016 06:10:10 -0000
Subject: [pypy-issue] Issue #2270: odoo 8.0 on pypy-5.0.1 cause segmentation fault. (pypy/pypy)
Message-ID: <20160402061010.19926.62198@celery-worker-102.ash1.bb-inf.net>

New issue 2270: odoo 8.0 on pypy-5.0.1 cause segmentation fault.
https://bitbucket.org/pypy/pypy/issues/2270/odoo-80-on-pypy-501-cause-segmentation

jeffery_chenn:

I tried to run odoo 8.0 on pypy 5.0.1, and it failed with a segmentation fault. I have installed these packages:
Babel==1.3
cffi==1.5.2
Cython==0.23.5
decorator==3.4.0
docutils==0.12
feedparser==5.1.3
gdata==2.0.18
gevent==1.1.0
greenlet==0.4.7
jcconv==0.2.3
Jinja2==2.7.3
lxml==3.6.0
Mako==1.0.0
MarkupSafe==0.23
mock==1.0.1
passlib==1.6.2
Pillow==2.5.1
psutil==2.1.1
psycogreen==1.0
psycopg2cffi==2.7.3
pydot==1.0.2
pyparsing==1.5.7
pyPdf==1.13
pyserial==2.7
Python-Chart==1.39
python-dateutil==1.5
python-ldap==2.4.15
python-openid==2.2.5
python-stdnum==1.3
pytz==2014.4
pyusb==1.0.0b1
PyYAML==3.11
qrcode==5.0.1
readline==6.2.4.1
reportlab==3.1.44
requests==2.6.0
simplejson==3.5.3
six==1.10.0
unittest2==0.5.1
vatnumber==1.2
vobject==0.6.6
Werkzeug==0.9.6
xlwt==0.7.5

Odoo with demo data can run, but it crashes when I click some menus, such as the messaging inbox and opportunities. I'm new to pypy and have no idea how to track down these problems.


From issues-reply at bitbucket.org  Thu Apr  7 08:04:53 2016
From: issues-reply at bitbucket.org (John Longinotto)
Date: Thu, 07 Apr 2016 12:04:53 -0000
Subject: [pypy-issue] Issue #2271: Decompressing takes significantly longer than on CPython (pypy/pypy)
Message-ID: <20160407120453.61438.37475@celery-worker-102.ash1.bb-inf.net>

New issue 2271: Decompressing takes significantly longer than on CPython
https://bitbucket.org/pypy/pypy/issues/2271/decompressing-takes-significantly-longer

John Longinotto:

The following code takes about 9.5 minutes on PyPy, but only 42 seconds on CPython:

```
#!python
import sys
import zlib
import struct

def bgzip(data, blocks_at_a_time=1):
    if type(data) == str:
        d = open(data, 'rb')
    else:
        d = data
    cache = ''
    bytes_read = 0
    magic = d.read(4)
    blocks_left_to_grab = blocks_at_a_time
    while magic:
        if not magic:
            break  # a child's heart
        bytes_read += 4
        if magic != "\x1f\x8b\x08\x04":
            print "ERROR: The input file is not in a format I understand :("
            exit()
        header_data = magic + d.read(8)
        header_size = 12
        # (reconstructed: the middle of this snippet was eaten by the
        # tracker's HTML escaping; this is standard BGZF block parsing)
        extra_len = struct.unpack("<H", header_data[-2:])[0]
        header_data += d.read(extra_len)
        header_size += extra_len
        bytes_read += 8 + extra_len
        # the BC subfield of the extra field stores (total block size - 1)
        block_size = struct.unpack("<H", header_data[16:18])[0] + 1
        zipped_data = d.read(block_size - header_size - 8)
        d.read(8)  # CRC32 and ISIZE trailer
        bytes_read += block_size - header_size
        unzipped_data = zlib.decompress(zipped_data, -15)
        if len(unzipped_data) > 0:
            cache += unzipped_data
        blocks_left_to_grab -= 1
        if blocks_left_to_grab == 0:
            yield cache
            cache = ''
            blocks_left_to_grab = blocks_at_a_time
        magic = d.read(4)
    if cache != '':
        yield cache
    d.close()

data_generator = bgzip(sys.argv[-1], blocks_at_a_time=300)
for block in data_generator:
    pass
```

Run via: `the_code.py ./ENCFF001LCU.bam`

The input file ENCFF001LCU.bam can be downloaded from https://www.encodeproject.org/files/ENCFF001LCU/@@download/ENCFF001LCU.bam


From issues-reply at bitbucket.org  Wed Apr 13 13:42:22 2016
From: issues-reply at bitbucket.org (Antonio Cuni)
Date: Wed, 13 Apr 2016 17:42:22 -0000
Subject: [pypy-issue] Issue #2272: socket._fileobject.read horribly slow (pypy/pypy)
Message-ID: <20160413174222.33146.87575@celery-worker-101.ash1.bb-inf.net>

New issue 2272: socket._fileobject.read horribly slow
https://bitbucket.org/pypy/pypy/issues/2272/socket_fileobjectread-horribly-slow

Antonio Cuni:

In theory, _fileobject is supposed to be a buffered layer on top of socket.recv/send. However, socket.py implements it in a way which completely disables buffering for read(), and it is also full of complicated, half-working code for handling the buffering which does not happen.

After digging in CPython's history, we found that buffering has been enabled/disabled/re-enabled/re-disabled many times, each time because of a different issue; some relevant CPython commits (hg commit ids) are: 8e062e572ea4, 54606ea9f4c7, 2729e977fdd9

Also, these issues:

https://mail.python.org/pipermail/python-dev/2008-April/078613.html
https://bugs.python.org/issue2632
https://bugs.python.org/issue2760

Moreover, even if it were buffered (as it was before 2729e977fdd9), the performance would still be bad because the "fast path" copies the StringIO buffer again and again.

So, apparently _fileobject was supposed to be buffered, but then buffering was disabled at some point in 2008 (around release 2.5). Now it's possible/likely that there is some code in the wild which incorrectly *relies* on it behaving as if unbuffered.
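The "copies the StringIO buffer again and again" fast path can be illustrated with a small sketch. This is an illustrative stand-in, not the actual socket.py code, and the helper name `read_n` is made up:

```python
from io import BytesIO

def read_n(rbuf, n):
    # illustrative only: every call copies the *whole* remaining buffer out
    # and writes the tail back, so k reads over an M-byte buffer cost
    # O(k*M) instead of O(M) -- the repeated-copy behaviour described above
    data = rbuf.getvalue()
    head, tail = data[:n], data[n:]
    rbuf.seek(0)
    rbuf.truncate()
    rbuf.write(tail)
    return head

rbuf = BytesIO(b'abcdefgh')
assert read_n(rbuf, 3) == b'abc'
assert read_n(rbuf, 3) == b'def'
assert read_n(rbuf, 3) == b'gh'
```

A proper buffered reader would instead keep a read offset into the buffer and only compact it occasionally, keeping the total cost linear.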
The conclusion is: _fileobject.read is horribly slow, but we risk breaking some code by fixing it. One possible plan:

1. fix _fileobject.read
2. emit a warning if you call sock.recv or sock.makefile *after* you have already called _fileobject.read (such code is likely to rely on the currently-unbuffered behaviour)
3. introduce a command-line flag to enable/disable this optimization

See also the relevant IRC discussion which started here: https://botbot.me/freenode/pypy/2016-04-13/?msg=64058470&page=3

Attached is a small benchmark which shows the problem (both on CPython and PyPy):

```
$ python try.py
recv    ( 4): 5000000 bytes, 0.83 seconds
read    ( 4): 5000000 bytes, 2.70 seconds
buffered( 4): 5000000 bytes, 0.87 seconds
stringio( 4): 5000000 bytes, 0.45 seconds
$ pypy try.py
recv    ( 4): 5000000 bytes, 0.44 seconds
read    ( 4): 5000000 bytes, 0.44 seconds
buffered( 4): 5000000 bytes, 0.12 seconds
stringio( 4): 5000000 bytes, 0.11 seconds
```


From issues-reply at bitbucket.org  Fri Apr 15 15:02:14 2016
From: issues-reply at bitbucket.org (=?utf-8?q?Piotr_Ma=C5=9Blanka?=)
Date: Fri, 15 Apr 2016 19:02:14 -0000
Subject: [pypy-issue] Issue #2273: pypy leaks memory when using cassandra-driver and JIT (pypy/pypy)
Message-ID: <20160415190214.14208.18493@celery-worker-103.ash1.bb-inf.net>

New issue 2273: pypy leaks memory when using cassandra-driver and JIT
https://bitbucket.org/pypy/pypy/issues/2273/pypy-leaks-memory-when-using-cassandra

Piotr Maślanka:

PyPy leaks memory when using the cassandra-driver library with the JIT enabled. There is no memory leak if the JIT is disabled (--jit off).

I don't really know whom to blame, so I posted this bug on [cassandra-driver](https://datastax-oss.atlassian.net/browse/PYTHON-545) too. The cassandra-driver bug report also hosts a script to reproduce it. I will try to pinpoint the minimal code needed to reproduce this, but until I do so, I'd just like to note this bug.
From issues-reply at bitbucket.org  Mon Apr 18 23:50:44 2016
From: issues-reply at bitbucket.org (Jesse Fang)
Date: Tue, 19 Apr 2016 03:50:44 -0000
Subject: [pypy-issue] Issue #2274: PyPyGILState_Ensure() crash (pypy/pypy)
Message-ID: <20160419035044.24616.29369@celery-worker-103.ash1.bb-inf.net>

New issue 2274: PyPyGILState_Ensure() crash
https://bitbucket.org/pypy/pypy/issues/2274/pypygilstate_ensure-crash

Jesse Fang:

pycares uses PyPyGILState_Ensure() in every C callback function, before calling the callback written in Python, e.g. https://github.com/saghul/pycares/blob/master/src/cares.c#L654

PyPyGILState_Ensure() crashes both on Windows and Linux, reporting:

*Fatal RPython error: a thread is trying to wait for the GIL, but the GIL was not initialized*


From issues-reply at bitbucket.org  Tue Apr 19 03:02:57 2016
From: issues-reply at bitbucket.org (FredStober)
Date: Tue, 19 Apr 2016 07:02:57 -0000
Subject: [pypy-issue] Issue #2275: pypy segfault (pypy/pypy)
Message-ID: <20160419070257.24404.64285@celery-worker-103.ash1.bb-inf.net>

New issue 2275: pypy segfault
https://bitbucket.org/pypy/pypy/issues/2275/pypy-segfault

FredStober:

Hi, I have some code that *sometimes* crashes with a segfault when run with pypy (tested with PyPy 2.2.1 from Ubuntu 14.04.4 LTS and PyPy 2.5.0 as installed on TravisCI). The issue does not appear when run with CPython. I've reduced the program to the following short snippet of code:

```
#!python
import os, threading

def fun():
    pass

if __name__ == '__main__':
    thread = threading.Thread(target=fun)
    thread.start()
    thread.join()
    pid, fd = os.forkpty()
    if pid == 0:
        os._exit(42)
    os.waitpid(pid, 0)
```

Creating and joining the thread seems to be essential to trigger the issue; replacing os._exit with sys.exit to run cleanup does not help. Am I overlooking some issue with this code?
Cheers, Fred


From issues-reply at bitbucket.org  Tue Apr 19 08:14:53 2016
From: issues-reply at bitbucket.org (mattip)
Date: Tue, 19 Apr 2016 12:14:53 -0000
Subject: [pypy-issue] Issue #2276: py3k importer failures on win32 (pypy/pypy)
Message-ID: <20160419121453.18586.1050@celery-worker-101.ash1.bb-inf.net>

New issue 2276: py3k importer failures on win32
https://bitbucket.org/pypy/pypy/issues/2276/py3k-importer-failures-on-win32

mattip:

Own tests (and translation) are failing on py3k win32; all the interpreter.astcompiler.test.test_ast tests fail when importing winreg. The actual failure is in `lib-python/3/importlib/_bootstrap.py`, where it calls `winreg_module = BuiltinImporter.load_module('winreg')`, but this line changed over two years ago and there have been successful builds since. Any hints?

Here is one of the many failing own tests: http://buildbot.pypy.org/summary/longrepr?testname=TestAstToObject.%28%29.test_attributes&builder=own-win-x86-32&build=934&mod=interpreter.astcompiler.test.test_ast


From issues-reply at bitbucket.org  Wed Apr 20 10:58:13 2016
From: issues-reply at bitbucket.org (Iain Buclaw)
Date: Wed, 20 Apr 2016 14:58:13 -0000
Subject: [pypy-issue] Issue #2277: zip doesn't call __iter__ on list objects (pypy/pypy)
Message-ID: <20160420145813.12753.40109@celery-worker-103.ash1.bb-inf.net>

New issue 2277: zip doesn't call __iter__ on list objects
https://bitbucket.org/pypy/pypy/issues/2277/zip-doesnt-call-__iter__-on-list-objects

Iain Buclaw:

```
#!python
class A(object):
    def __init__(self, values):
        self.values = values

    def __iter__(self):
        print "DEBUG: (A) __iter__ called"
        return list.__iter__(self.values)

class B(list):
    def __init__(self, values):
        list.__init__(self, values)

    def __iter__(self):
        print "DEBUG: (B) __iter__ called"
        return list.__iter__(self)

mylist = [1, 2, 3, 4]
a = A(mylist)
b = B(mylist)
c = zip(a, mylist)
d = zip(b, mylist)
```

When running this test program, I get two different results:

```
# python bin/test.py
DEBUG: (A) __iter__ called
DEBUG: (B) __iter__ called
# pypy bin/test.py
DEBUG: (A) __iter__ called
```

Using pypy 2.7.10 and python 2.7.6.


From issues-reply at bitbucket.org  Wed Apr 20 15:14:37 2016
From: issues-reply at bitbucket.org (Ronan Lamy)
Date: Wed, 20 Apr 2016 19:14:37 -0000
Subject: [pypy-issue] Issue #2278: 3.3: Precision loss in os.stat() (pypy/pypy)
Message-ID: <20160420191437.59725.4843@celery-worker-102.ash1.bb-inf.net>

New issue 2278: 3.3: Precision loss in os.stat()
https://bitbucket.org/pypy/pypy/issues/2278/33-precision-loss-in-osstat

Ronan Lamy:

`os.stat` returns time results with a theoretical nanosecond precision, but the current implementation loses precision by converting times to float (in `rposix.stat`) and back (in `interp_posix.stat`).


From issues-reply at bitbucket.org  Thu Apr 21 16:35:35 2016
From: issues-reply at bitbucket.org (Ronan Lamy)
Date: Thu, 21 Apr 2016 20:35:35 -0000
Subject: [pypy-issue] Issue #2279: 3.3: wrong encoding on sys.stdout when output is redirected to a file (pypy/pypy)
Message-ID: <20160421203535.21499.74127@celery-worker-103.ash1.bb-inf.net>

New issue 2279: 3.3: wrong encoding on sys.stdout when output is redirected to a file
https://bitbucket.org/pypy/pypy/issues/2279/33-wrong-encoding-on-sysstdout-when-output

Ronan Lamy:

```
$ pypy3 -c 'import sys; print(sys.stdout.encoding)' > xx
$ cat xx
ascii
$ pypy3 -c 'import sys; print(sys.stdout.encoding)'
UTF-8
```

On CPython, the result is `UTF-8` in both cases.
Responsible: rlamy


From issues-reply at bitbucket.org  Fri Apr 22 21:23:41 2016
From: issues-reply at bitbucket.org (Justin Cunningham)
Date: Sat, 23 Apr 2016 01:23:41 -0000
Subject: [pypy-issue] Issue #2280: syslog.syslog fails with unicode (pypy/pypy)
Message-ID: <20160423012341.58896.22622@celery-worker-102.ash1.bb-inf.net>

New issue 2280: syslog.syslog fails with unicode
https://bitbucket.org/pypy/pypy/issues/2280/syslogsyslog-fails-with-unicode

Justin Cunningham:

Making the call

```
#!python
syslog.syslog(unicode('test'))
```

fails with the error:

```
TypeError: initializer for ctype 'char *' must be a str or list or tuple, not unicode
```

This works under CPython 2.7.


From issues-reply at bitbucket.org  Sat Apr 23 17:37:35 2016
From: issues-reply at bitbucket.org (Armin Rigo)
Date: Sat, 23 Apr 2016 21:37:35 -0000
Subject: [pypy-issue] Issue #2281: 'exctrans' branch makes the C sources bigger (pypy/pypy)
Message-ID: <20160423213735.21359.58573@celery-worker-103.ash1.bb-inf.net>

New issue 2281: 'exctrans' branch makes the C sources bigger
https://bitbucket.org/pypy/pypy/issues/2281/exctrans-branch-makes-the-c-sources-bigger

Armin Rigo:

The .c files are nowadays generated with a ton of "struct pypy_header0" prebuilt constants which are never referenced. According to hg bisect, they were added by the branch 'exctrans'. Ronan, could you look at it? It shouldn't have an effect on the final pypy-c, as the linker should be able to remove all these 200'000+ structures, but it probably has a non-negligible translation-time overhead.
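The expectation that the linker removes unreferenced prebuilt structures can be checked on a toy example. This is a sketch assuming a GNU toolchain on Linux (`-fdata-sections` plus `--gc-sections`); the file name `demo.c` and the symbol `unused_prebuilt` are made up:

```shell
# create a tiny translation unit with a never-referenced prebuilt struct
cat > demo.c <<'EOF'
struct pypy_header0 { long h_tid; };
struct pypy_header0 unused_prebuilt = { 42 };  /* never referenced */
int main(void) { return 0; }
EOF

# put each datum in its own section, then let the linker drop unused ones
gcc -fdata-sections -ffunction-sections -c demo.c -o demo.o
gcc -Wl,--gc-sections demo.o -o demo

# the unreferenced struct should be absent from the final binary
nm demo | grep unused_prebuilt || echo "stripped by --gc-sections"
```

So even if the 200'000+ structs cost nothing at run time, they are still parsed, compiled, and emitted at translation time, which is where the overhead would show up.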
From issues-reply at bitbucket.org  Sun Apr 24 03:20:48 2016
From: issues-reply at bitbucket.org (mattip)
Date: Sun, 24 Apr 2016 07:20:48 -0000
Subject: [pypy-issue] Issue #2282: (micronumpy) cleanup-includes branch broke 3rd party C-API dependencies (pypy/pypy)
Message-ID: <20160424072048.54994.11791@celery-worker-102.ash1.bb-inf.net>

New issue 2282: (micronumpy) cleanup-includes branch broke 3rd party C-API dependencies
https://bitbucket.org/pypy/pypy/issues/2282/micronumpy-cleanup-includes-branch-broke

mattip:

Since we no longer package numpy-specific include files (after merging branch cleanup-includes), we need to support all the type definitions in arrayscalars.h and ndarraytypes.h and remove our cpyext functions provided as an alternative to the type definitions. It may be more prudent to revert the merge of cleanup-includes on default.

This issue is blocking the use of github.com/mattip/matplotlib on pypy 5.1.


From issues-reply at bitbucket.org  Sun Apr 24 11:04:07 2016
From: issues-reply at bitbucket.org (Henry)
Date: Sun, 24 Apr 2016 15:04:07 -0000
Subject: [pypy-issue] Issue #2283: Import error relating to __pycache__ (pypy/pypy)
Message-ID: <20160424150407.21160.78997@celery-worker-103.ash1.bb-inf.net>

New issue 2283: Import error relating to __pycache__
https://bitbucket.org/pypy/pypy/issues/2283/import-error-relating-to-__pycache__

Henry:

Volatility 2.5 (www.volatilityfoundation.org) is a digital forensics tool. I built and installed it with no problem, but when I ran it, there were many import errors. Copying __init__.py into __pycache__ solves this problem, but it would be better if pypy could handle this case.
Errors:

$ vol.py
Volatility Foundation Volatility Framework 2.5
*** Failed to import volatility.plugins.mac.__pycache__.bash_env.pypy-41 (ImportError: No module named volatility.plugins.mac.__pycache__)
*** Failed to import volatility.plugins.mac.__pycache__.check_sysctl.pypy-41 (ImportError: No module named volatility.plugins.mac.__pycache__)
*** Failed to import volatility.plugins.mac.__pycache__.dlyd_maps.pypy-41 (ImportError: No module named volatility.plugins.mac.__pycache__)
*** Failed to import volatility.plugins.mac.__pycache__.list_files.pypy-41 (ImportError: No module named volatility.plugins.mac.__pycache__)
*** Failed to import volatility.plugins.mac.__pycache__.common.pypy-41 (ImportError: No module named volatility.plugins.mac.__pycache__)
*** Failed to import volatility.plugins.mac.__pycache__.list_kauth_scopes.pypy-41 (ImportError: No module named volatility.plugins.mac.__pycache__)
*** Failed to import volatility.plugins.mac.__pycache__.print_boot_cmdline.pypy-41 (ImportError: No module named volatility.plugins.mac.__pycache__)
*** Failed to import volatility.plugins.mac.__pycache__.apihooks.pypy-41 (ImportError: No module named volatility.plugins.mac.__pycache__)
...
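The failure pattern above suggests the plugin loader enumerates every subdirectory as a package, including PyPy's bytecode cache directories. A minimal sketch of a walk-based scanner that side-steps the problem by skipping `__pycache__` (the function name `iter_plugin_modules` is made up; this is not Volatility's actual loader):

```python
import os

def iter_plugin_modules(root):
    # walk the plugin tree, but never descend into __pycache__ --
    # on PyPy it contains only .pypy-41.pyc bytecode, not importable source
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != '__pycache__']
        for name in sorted(filenames):
            if name.endswith('.py') and name != '__init__.py':
                yield os.path.join(dirpath, name)
```

A loader written this way would find `bash_env.py` and friends while ignoring the cached `*.pypy-41.pyc` files that trigger the ImportErrors above.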
From issues-reply at bitbucket.org  Sun Apr 24 23:31:35 2016
From: issues-reply at bitbucket.org (glyph)
Date: Mon, 25 Apr 2016 03:31:35 -0000
Subject: [pypy-issue] Issue #2284: C++ embedding documentation is a 404 (pypy/pypy)
Message-ID: <20160425033135.28900.91479@celery-worker-103.ash1.bb-inf.net>

New issue 2284: C++ embedding documentation is a 404
https://bitbucket.org/pypy/pypy/issues/2284/c-embedding-documentation-is-a-404

glyph:

The page for "reflex", linked from http://doc.pypy.org/en/latest/extending.html, is a 404; see https://root.cern.ch/reflex


From issues-reply at bitbucket.org  Tue Apr 26 19:13:40 2016
From: issues-reply at bitbucket.org (Nick Meharry)
Date: Tue, 26 Apr 2016 23:13:40 -0000
Subject: [pypy-issue] Issue #2285: Segfault in vmprof sigprof_handler on OS X in release-5.1 (pypy/pypy)
Message-ID: <20160426231340.23802.89395@celery-worker-102.ash1.bb-inf.net>

New issue 2285: Segfault in vmprof sigprof_handler on OS X in release-5.1
https://bitbucket.org/pypy/pypy/issues/2285/segfault-in-vmprof-sigprof_handler-on-os-x

Nick Meharry:

While evaluating PyPy for use on an existing project, I ran into a segfault. It appears to be caused by the use of thread-locals in `sigprof_handler`. Specifically, on line 158, as a parameter to `get_stack_trace`, `get_vmprof_stack()` is called, which is just a wrapper around `RPY_THREADLOCALREF_GET(vmprof_tl_stack)`. This macro (at least on my machine) unwraps to this:

```
#!c
((struct pypy_threadlocal_s *)pthread_getspecific(pypy_threadlocal_key))->vmprof_tl_stack
```

According to the manual for `pthread_getspecific`, this function can return `NULL`. This concern is noted on line 118 within this file, under the header "TERRIBLE HACK AHEAD". The calls to threadlocals on lines 130 (`pthread_self()`) and 131 (`get_current_thread_id()`) are guarded, but this one later on is not.
My C is a little rusty, but I think this could be solved by returning `NULL` from `get_vmprof_stack()` if `_RPy_ThreadLocals_Get()` returns `NULL`, and checking for that case in `sigprof_handler`. `get_vmprof_stack` doesn't appear to be used anywhere else.


From issues-reply at bitbucket.org  Thu Apr 28 05:09:19 2016
From: issues-reply at bitbucket.org (C. W.)
Date: Thu, 28 Apr 2016 09:09:19 -0000
Subject: [pypy-issue] Issue #2286: Segmentation fault: when running nosetests (pypy/pypy)
Message-ID: <20160428090919.31468.16211@celery-worker-102.ash1.bb-inf.net>

New issue 2286: Segmentation fault: when running nosetests
https://bitbucket.org/pypy/pypy/issues/2286/segmentation-fault-when-running-nosetests

C. W.:

Hi all,

[nosetests under pypy3](https://travis-ci.org/pyexcel/pyexcel/jobs/126315415) gives a segmentation fault, but the ones [under pypy](https://travis-ci.org/pyexcel/pyexcel/jobs/126315411) are fine. Any hints to help find out the cause?
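When a test run dies with a bare segmentation fault like the one above, one low-effort first step (assuming the pypy3 in use ships the stdlib `faulthandler` module, which appeared in Python 3.3) is to enable it before running the suite, so the crash at least prints the Python-level stack:

```python
import faulthandler

# dump a Python traceback to stderr on SIGSEGV/SIGFPE/SIGABRT/SIGBUS
faulthandler.enable()

assert faulthandler.is_enabled()

# the suite can then be run in-process, e.g. (nose entry point assumed):
#   import nose; nose.run(argv=['nosetests', 'tests/'])
```

The same effect is available without editing code via `pypy3 -X faulthandler` or `PYTHONFAULTHANDLER=1`, where supported; the traceback usually narrows the crash down to a single test or extension call.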