From issues-reply at bitbucket.org Thu Jun 1 04:32:40 2017 From: issues-reply at bitbucket.org (Tuom Larsen) Date: Thu, 01 Jun 2017 08:32:40 -0000 Subject: [pypy-issue] Issue #2567: Sorting small tuples slower than CPython (pypy/pypy) Message-ID: <20170601083240.38466.3264@celery-worker-106.ash1.bb-inf.net> New issue 2567: Sorting small tuples slower than CPython https://bitbucket.org/pypy/pypy/issues/2567/sorting-small-tuples-slower-than-cpython Tuom Larsen: The following program takes 12.24 seconds under PyPy 5.4.1, while under CPython 2.7.10 only 3.25 seconds: from random import random from time import time arrays = [[(random(), random()) for j in range(100)] for i in range(100000)] t = time() for array in arrays: array.sort() print time() - t This is similar to https://bitbucket.org/pypy/pypy/issues/2410 From issues-reply at bitbucket.org Fri Jun 2 03:51:02 2017 From: issues-reply at bitbucket.org (mattip) Date: Fri, 02 Jun 2017 07:51:02 -0000 Subject: [pypy-issue] Issue #2568: os.stat needs st_ino, st_dev on windows (pypy/pypy) Message-ID: <20170602075102.6480.1340@celery-worker-101.ash1.bb-inf.net> New issue 2568: os.stat needs st_ino, st_dev on windows https://bitbucket.org/pypy/pypy/issues/2568/osstat-needs-st_ino-st_dev-on-windows mattip: CPython 2.7.13 ``` #!python python2.exe -c "import os; print os.stat(r'c:\randomfile')" nt.stat_result(st_mode=33206, st_ino=0L, st_dev=0L, st_nlink=0, st_uid=0, st_gid=0, st_size=655L, st_atime=1494398674L, st_mtime=1496388302L, st_ctime=1494398674L) ``` CPython 3.6.0 ``` #!python python3.exe -c "import os; print(os.stat(r'c:\randomfile'))" os.stat_result(st_mode=33206, st_ino=562949954083371, st_dev=2623564768, st_nlink=1, st_uid=0, st_gid=0, st_size=655, st_atime=1494398674, st_mtime=1496388302, st_ctime=1494398674) ``` PyPy 3.5 behaves like python2, not python3. 
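For context, ``os.path.samestat`` decides file identity from exactly the two fields that are zeroed here, so the py2-style result makes every pair of files compare equal. A minimal sketch mirroring the stdlib semantics (not the actual implementation):

```python
import os

def samestat_sketch(s1, s2):
    # os.path.samestat: same inode number on the same device
    return s1.st_ino == s2.st_ino and s1.st_dev == s2.st_dev

# With st_ino and st_dev hard-coded to 0, two clearly distinct files
# still compare as "the same file":
a = os.stat_result((0o100666, 0, 0, 1, 0, 0, 655, 0, 0, 0))
b = os.stat_result((0o100666, 0, 0, 1, 0, 0, 123, 0, 0, 0))
assert samestat_sketch(a, b)  # false positive
```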
This causes ``os.path.samestat`` to fail, which is used by cffi, which is causing ``build_cffi_imports`` to fail during packaging, which is preventing us from shipping a win32 beta version From issues-reply at bitbucket.org Fri Jun 2 12:25:01 2017 From: issues-reply at bitbucket.org (Antonio Cuni) Date: Fri, 02 Jun 2017 16:25:01 -0000 Subject: [pypy-issue] Issue #2569: Teach the JIT about frozenset immutability (pypy/pypy) Message-ID: <20170602162501.8696.38837@celery-worker-105.ash1.bb-inf.net> New issue 2569: Teach the JIT about frozenset immutability https://bitbucket.org/pypy/pypy/issues/2569/teach-the-jit-about-frozenset-immutability Antonio Cuni: The JIT does not take advantage of frozenset immutability. E.g. consider this case: ``` TUP = ('foo', 'bar', 'baz') FROZ = frozenset(TUP) def main(): x = 0 for i in range(2000): x += 'foo' in TUP x += 'foo' in FROZ main() ``` The JIT can constant-fold the TUP lookup but not the FROZ one. Looking at the code, `W_FrozensetObject` lacks `_immutable_fields_`, but I'm not sure whether more is needed to achieve it. From issues-reply at bitbucket.org Sun Jun 4 06:16:06 2017 From: issues-reply at bitbucket.org (Andrew Stromnov) Date: Sun, 04 Jun 2017 10:16:06 -0000 Subject: [pypy-issue] Issue #2570: Index error in dis module (pypy/pypy) Message-ID: <20170604101606.34451.94267@celery-worker-107.ash1.bb-inf.net> New issue 2570: Index error in dis module https://bitbucket.org/pypy/pypy/issues/2570/index-error-in-dis-module Andrew Stromnov: ``` #!python Python 2.7.13 (1aa2d8e03cdfab54b7121e93fda7e98ea88a30bf, Apr 07 2017, 21:23:56) [PyPy 5.7.1 with GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>>> import dis >>>> dis.dis("{'key': 'value'}") Traceback (most recent call last): File "", line 1, in File "/opt/local/lib/pypy/lib-python/2.7/dis.py", line 45, in dis disassemble_string(x) File "/opt/local/lib/pypy/lib-python/2.7/dis.py", line 112, in disassemble_string labels = findlabels(code) File "/opt/local/lib/pypy/lib-python/2.7/dis.py", line 166, in findlabels oparg = ord(code[i]) + ord(code[i+1])*256 IndexError: string index out of range ``` From issues-reply at bitbucket.org Mon Jun 5 08:59:27 2017 From: issues-reply at bitbucket.org (Denis Sumin) Date: Mon, 05 Jun 2017 12:59:27 -0000 Subject: [pypy-issue] Issue #2571: Fatal RPython error: NotImplementedError (pypy/pypy) Message-ID: <20170605125927.1459.30937@celery-worker-101.ash1.bb-inf.net> New issue 2571: Fatal RPython error: NotImplementedError https://bitbucket.org/pypy/pypy/issues/2571/fatal-rpython-error-notimplementederror Denis Sumin: I'm trying to run a python3 project with pypy (5.7.1), and I'm getting the following: ``` #!bash RPython traceback: File "pypy_interpreter.c", line 38704, in BuiltinCodePassThroughArguments1_funcrun_obj File "pypy_module_cpyext_3.c", line 18, in wrap_del Fatal RPython error: NotImplementedError Aborted (core dumped) ``` How could I proceed with the debugging? From issues-reply at bitbucket.org Tue Jun 6 01:36:02 2017 From: issues-reply at bitbucket.org (Armin Rigo) Date: Tue, 06 Jun 2017 05:36:02 -0000 Subject: [pypy-issue] Issue #2572: Link-time optimization (LTO) disabled (pypy/pypy) Message-ID: <20170606053602.34993.10060@celery-worker-105.ash1.bb-inf.net> New issue 2572: Link-time optimization (LTO) disabled https://bitbucket.org/pypy/pypy/issues/2572/link-time-optimization-lto-disabled Armin Rigo: Link-time optimization (the gcc option ``-flto``) has been disabled again. With gcc 6.2.0 on Ubuntu 14.04, it produces code that really looks invalid after inspection in ``objdump``. To reproduce, take a recent version of trunk (e.g. 
0649d557369f), and translate it on Linux64 with ``gcc (Ubuntu 6.2.0-3ubuntu11~14.04) 6.2.0 20160901``. Then look in ``objdump`` at ``read.constfold.NNNN`` and its caller. This function contains only a jump to itself, creating an infinite loop. We need to investigate which versions of gcc this bug appears on, and possibly report it to gcc. Alternatively, if we're confident that the problem is really fixed in gcc 6.3, then we could add some checks and put the option ``-flto`` only on gcc >= 6.3... From issues-reply at bitbucket.org Wed Jun 7 05:44:35 2017 From: issues-reply at bitbucket.org (Nathaniel Smith) Date: Wed, 07 Jun 2017 09:44:35 -0000 Subject: [pypy-issue] Issue #2573: Reasonable looking asyncio program where PyPy3 is mysteriously ~5-10x slower than CPython (pypy/pypy) Message-ID: <20170607094435.22812.81797@celery-worker-107.ash1.bb-inf.net> New issue 2573: Reasonable looking asyncio program where PyPy3 is mysteriously ~5-10x slower than CPython https://bitbucket.org/pypy/pypy/issues/2573/reasonable-looking-asyncio-program-where Nathaniel Smith: This came up on IRC. I didn't write the test case, and I don't personally care about the specific issue at all, but it might be interesting for someone to look into, so filing an issue to capture it: https://gist.github.com/buhman/79fd0f9683e561de6ff129952a718bbc From issues-reply at bitbucket.org Sat Jun 10 12:52:28 2017 From: issues-reply at bitbucket.org (Samuel Villamonte) Date: Sat, 10 Jun 2017 16:52:28 -0000 Subject: [pypy-issue] Issue #2574: Dropping support for Ubuntu 12.04 (Precise), build against Ubuntu 14.04? (pypy/pypy) Message-ID: <20170610165228.22181.14526@celery-worker-108.ash1.bb-inf.net> New issue 2574: Dropping support for Ubuntu 12.04 (Precise), build against Ubuntu 14.04? 
https://bitbucket.org/pypy/pypy/issues/2574/dropping-support-for-ubuntu-1204-precise Samuel Villamonte: It was reported at the pyenv issue tracker that the PyPy3 binary as of version 5.7.1 no longer works because the minimum libc version required by libbz2 is greater than the one bundled in 12.04: https://github.com/pyenv/pyenv/pull/932#issuecomment-307567916 And given that Ubuntu 12.04 is considered EOL since April of this year https://lists.ubuntu.com/archives/ubuntu-announce/2017-March/000218.html and the 64-bit binaries seem to work (at least for me) under Ubuntu 16.04, I was wondering if you'd consider dropping support for Ubuntu 12.04 and building against 14.04 instead. From issues-reply at bitbucket.org Sat Jun 10 16:19:39 2017 From: issues-reply at bitbucket.org (Sven-Hendrik Haase) Date: Sat, 10 Jun 2017 20:19:39 -0000 Subject: [pypy-issue] Issue #2575: pypy3 5.8.0 doesn't compile (pypy/pypy) Message-ID: <20170610201939.39917.83582@celery-worker-105.ash1.bb-inf.net> New issue 2575: pypy3 5.8.0 doesn't compile https://bitbucket.org/pypy/pypy/issues/2575/pypy3-580-doesnt-compile Sven-Hendrik Haase: Log is attached. This is Arch Linux with gcc 7.1.1 and openssl 1.1.0f. From issues-reply at bitbucket.org Tue Jun 13 06:32:45 2017 From: issues-reply at bitbucket.org (Omer Katz) Date: Tue, 13 Jun 2017 10:32:45 -0000 Subject: [pypy-issue] Issue #2576: Core dumped when running MacroPy tests on PyPy2 5.7.1 (pypy/pypy) Message-ID: <20170613103245.25525.88565@celery-worker-105.ash1.bb-inf.net> New issue 2576: Core dumped when running MacroPy tests on PyPy2 5.7.1 https://bitbucket.org/pypy/pypy/issues/2576/core-dumped-when-running-macropy-tests-on Omer Katz: There's a consistent core dump when running [MacroPy](https://github.com/lihaoyi/macropy)'s test suite. 
``` #!python RPython traceback: File "rpython_jit_metainterp_11.c", line 16475, in send_loop_to_backend File "rpython_jit_backend_x86.c", line 1327, in Assembler386_assemble_loop File "rpython_jit_backend_x86.c", line 4220, in RegAlloc_prepare_loop File "rpython_jit_backend_x86.c", line 7455, in RegAlloc__prepare File "rpython_jit_backend_llsupport.c", line 17062, in compute_vars_longevity Fatal RPython error: AssertionError /home/travis/.travis/job_stages: line 54: 2414 Aborted (core dumped) python run_tests.py ``` In order to run the tests you currently need to install pyxl from git like so: pip install git+https://github.com/dropbox/pyxl I can provide more information on request. From issues-reply at bitbucket.org Tue Jun 13 06:44:42 2017 From: issues-reply at bitbucket.org (Omer Ben-Amram) Date: Tue, 13 Jun 2017 10:44:42 -0000 Subject: [pypy-issue] Issue #2577: Building C Extensions on MacOS (pypy/pypy) Message-ID: <20170613104442.22699.9122@celery-worker-107.ash1.bb-inf.net> New issue 2577: Building C Extensions on MacOS https://bitbucket.org/pypy/pypy/issues/2577/building-c-extensions-on-macos Omer Ben-Amram: When building C extensions on macOS Sierra, pypy3 doesn't seem to add the libpypy3-c shared object (dylib) to the link command. Setting LDFLAGS to `-L -lpypy3-c` solves the issue. From issues-reply at bitbucket.org Wed Jun 14 04:32:32 2017 From: issues-reply at bitbucket.org (Nathaniel Smith) Date: Wed, 14 Jun 2017 08:32:32 -0000 Subject: [pypy-issue] Issue #2578: Behavioral discrepancy between CPython and PyPy ssl modules (pypy/pypy) Message-ID: <20170614083232.34977.69397@celery-worker-101.ash1.bb-inf.net> New issue 2578: Behavioral discrepancy between CPython and PyPy ssl modules https://bitbucket.org/pypy/pypy/issues/2578/behavioral-discrepancy-between-cpython-and Nathaniel Smith: Here's a weird edge case where the CPython and PyPy ssl modules behave differently. 
I'm not entirely sure whether it's a bug or not, and it's easy enough for me to work around, but I found the PyPy behavior surprising and, who knows, it might point to some deeper issue, so I wanted to make a formal note. Scenario: * Client and server negotiate a TLS connection * Client sends `close_notify` then immediately closes socket * Server receives `close_notify` then attempts to send `close_notify` back In this scenario, on CPython the server gets a `BrokenPipeError`, which makes sense -- this is the usual error for trying to send data on a socket that the other side already closed. On PyPy, the server gets a `SSLEOFError: EOF occurred in violation of protocol`. This is (a) different, and (b) the text is not true, the EOF was totally legal. (Quoth RFC 5246: "Unless some other fatal alert has been transmitted, each party is required to send a close_notify alert before closing the write side of the connection. The other party MUST respond with a close_notify alert of its own and close down the connection immediately, discarding any pending writes. 
It is not required for the initiator of the close to wait for the responding close_notify alert before closing the read side of the connection.") Weirdly, running under strace I can see PyPy getting `EPIPE` before raising `SSLEOFError`, and then the next syscall it makes is to start looking up python files so it can print the traceback: ``` [pid 17390] write(4, "\25\3\3\0\32\211\327\250b\301+o\373$\363\32\t\r\231\261\256\322\f\230\362\205vr\224VD", 31) = -1 EPIPE (Broken pipe) [pid 17390] --- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=17387, si_uid=1000} --- [pid 17390] stat("/home/njs/pypy/pypy3.5-5.8-beta-linux_x86_64-portable/lib-python/3/threading.py", {st_mode=S_IFREG|0644, st_size=48919, ...}) = 0 ``` And looking at the definition of `pyssl_error` in lib_pypy/_cffi_ssl/_stdssl/error.py, it looks like the difference between raising `SSLEOFError` and an `OSError` like `BrokenPipeError` is the setting of a "did the BIO indicate an error" flag. If that flag is broken in general then that's definitely worrisome. Sample code here: https://gist.github.com/njsmith/7cf06383adca0392c862cfa22cb70804 CC: @alex_gaynor From issues-reply at bitbucket.org Thu Jun 15 12:08:18 2017 From: issues-reply at bitbucket.org (strombrg) Date: Thu, 15 Jun 2017 16:08:18 -0000 Subject: [pypy-issue] Issue #2579: pypy3.5 5.8.0 memory leak? (pypy/pypy) Message-ID: <20170615160818.7019.63082@celery-worker-106.ash1.bb-inf.net> New issue 2579: pypy3.5 5.8.0 memory leak? https://bitbucket.org/pypy/pypy/issues/2579/pypy35-580-memory-leak strombrg: I'm running http://stromberg.dnsalias.org/svn/backshift/trunk/ (that's an SVN URL) on a self-compiled pypy3.5 5.8.0. This results in a very slow run, sluggishness in other applications on the same machine, and eventually the message "Killed" and a terminated pypy3.5. I suspect this is a memory leak which is leading the Linux OOM Killer to kill the process. The code runs fine on CPython 2.x, CPython 3.x, Jython and Pypy2. 
It used to run fine on Pypy3, but does no longer. It's possible that the new lzma module is where the memory leak is. The new lzma module is also the main reason I want to run the code on Pypy3.5. Backshift uses lzma/xz heavily. Sure enough, if I hack backshift to use ctypes and liblzma instead of the lzma module, I don't get a memory leak. Any suggestions? Thanks! From issues-reply at bitbucket.org Thu Jun 15 17:41:43 2017 From: issues-reply at bitbucket.org (Stefano Rivera) Date: Thu, 15 Jun 2017 21:41:43 -0000 Subject: [pypy-issue] Issue #2580: asmgcc fails on i386 with gcc 6.3.0-18 (pypy/pypy) Message-ID: <20170615214143.15457.66827@celery-worker-106.ash1.bb-inf.net> New issue 2580: asmgcc fails on i386 with gcc 6.3.0-18 https://bitbucket.org/pypy/pypy/issues/2580/asmgcc-fails-on-i386-with-gcc-630-18 Stefano Rivera: Building pypy 5.8.0 on Debian sid i386: ``` gcc -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -Wno-address -fvisibility=hidden -DPYPY_USE_ASMGCC -DPYPY_JIT_CODEMAP -DPy_BUILD_CORE -DRPYTHON_VMPROF -O3 -DVMPROF_UNIX -DVMPROF_LINUX -DRPYTHON_VMPROF=1 -DPYPY_CPU_HAS_STANDARD_PRECISION -DPYPY_X86_CHECK_SSE2 -msse2 -mfpmath=sse -DRPYTHON_VMPROF=1 -DRPYTHON_VMPROF=1 -DPy_BUILD_CORE -DPy_BUILD_CORE -frandom-seed=elf.c -o elf.s -S elf.c -I"/home/stefanor/hg/pypy/rpython"/translator/c -I"/home/stefanor/hg/pypy/rpython"/rlib/rjitlog/src -I"/home/stefanor/hg/pypy/rpython"/rlib/rvmprof/src -I"/home/stefanor/hg/pypy/rpython"/../pypy/module/cpyext/include -I"/home/stefanor/hg/pypy/rpython"/../pypy/module/cpyext/parse -I.. 
-I"/home/stefanor/hg/pypy/rpython"/rlib/rvmprof/src/shared -I"/home/stefanor/hg/pypy/rpython"/rlib/rvmprof/src/shared/libbacktrace -I"/home/stefanor/hg/pypy/rpython"/../pypy/module/faulthandler -I"/home/stefanor/hg/pypy/rpython"/../pypy/module/_multibytecodec -I"/home/stefanor/hg/pypy/rpython"/../pypy/module/ operator -I"/home/stefanor/hg/pypy/rpython"/../pypy/module/_cffi_backend/src python "/home/stefanor/hg/pypy/rpython"/translator/c/gcc/trackgcroot.py -t elf.s > elf.gctmp Traceback (most recent call last): File "/home/stefanor/hg/pypy/rpython/translator/c/gcc/trackgcroot.py", line 2094, in tracker.process(f, g, filename=fn) File "/home/stefanor/hg/pypy/rpython/translator/c/gcc/trackgcroot.py", line 1987, in process tracker = parser.process_function(lines, filename) File "/home/stefanor/hg/pypy/rpython/translator/c/gcc/trackgcroot.py", line 1498, in process_function table = tracker.computegcmaptable(self.verbose) File "/home/stefanor/hg/pypy/rpython/translator/c/gcc/trackgcroot.py", line 59, in computegcmaptable self.findframesize() File "/home/stefanor/hg/pypy/rpython/translator/c/gcc/trackgcroot.py", line 295, in findframesize insn1.framesize = size_at_insn1 File "/home/stefanor/hg/pypy/rpython/translator/c/gcc/../../../../rpython/translator/c/gcc/instruction.py", line 262, in __setattr__ "unrecognized function prologue - " AssertionError: unrecognized function prologue - only supports push %ebp; movl %esp, %ebp Makefile:496: recipe for target 'elf.gcmap' failed make: *** [elf.gcmap] Error 1 ``` From issues-reply at bitbucket.org Sat Jun 17 12:46:24 2017 From: issues-reply at bitbucket.org (Matteo Dell'Amico) Date: Sat, 17 Jun 2017 16:46:24 -0000 Subject: [pypy-issue] Issue #2581: heapq.merge is slow in pypy3 (pypy/pypy) Message-ID: <20170617164624.38194.56678@celery-worker-108.ash1.bb-inf.net> New issue 2581: heapq.merge is slow in pypy3 https://bitbucket.org/pypy/pypy/issues/2581/heapqmerge-is-slow-in-pypy3 Matteo Dell'Amico: heapq.merge runs definitely 
slower in pypy3 than in python3. This script: ``` #!python import heapq data = [range(10000) for _ in range(100)] list(heapq.merge(*data)) ``` runs on my machine in around 1.2s on pypy3 and 0.6s on python3. Antonio is a friend of mine and asked me to report this :) From issues-reply at bitbucket.org Sun Jun 18 13:52:41 2017 From: issues-reply at bitbucket.org (David Naylor) Date: Sun, 18 Jun 2017 17:52:41 -0000 Subject: [pypy-issue] Issue #2582: [patch] add support for FreeBSD to rvmprof (pypy/pypy) Message-ID: <20170618175241.29653.75706@celery-worker-105.ash1.bb-inf.net> New issue 2582: [patch] add support for FreeBSD to rvmprof https://bitbucket.org/pypy/pypy/issues/2582/patch-add-support-for-freebsd-to-rvmprof David Naylor: - add 'freebsd' as a known identifier for 'vmp_machine_os_name' - return '-1' for vmp_fd_to_path() as FreeBSD does not (yet) support fcntl(F_GETPATH) From issues-reply at bitbucket.org Mon Jun 19 19:34:45 2017 From: issues-reply at bitbucket.org (Stefan Krah) Date: Mon, 19 Jun 2017 23:34:45 -0000 Subject: [pypy-issue] Issue #2583: decimal.py: overflow in fraction comparisons (pypy/pypy) Message-ID: <20170619233445.21267.51595@celery-worker-108.ash1.bb-inf.net> New issue 2583: decimal.py: overflow in fraction comparisons https://bitbucket.org/pypy/pypy/issues/2583/decimalpy-overflow-in-fraction-comparisons Stefan Krah: _convert_for_comparison() should use a maxcontext for a couple of internal operations in order to prevent overflow. 
Example: ``` >>>> from decimal import * >>>> from fractions import Fraction >>>> c = getcontext() >>>> c.Emax = 1 >>>> c.Emin = -1 >>>> Fraction(1000, 5) == Decimal(200) Traceback (most recent call last): File "", line 1, in File "/home/stefan/usr/pypy3-v5.8.0-linux64/lib_pypy/_decimal.py", line 607, in __eq__ r = self._cmp(other, 'eq') File "/home/stefan/usr/pypy3-v5.8.0-linux64/lib_pypy/_decimal.py", line 586, in _cmp a, b = self._convert_for_comparison(other, op) File "/home/stefan/usr/pypy3-v5.8.0-linux64/lib_pypy/_decimal.py", line 494, in _convert_for_comparison ctx, status_ptr) File "/home/stefan/usr/pypy3-v5.8.0-linux64/lib_pypy/_decimal.py", line 1628, in __exit__ self.context._add_status(status) File "/home/stefan/usr/pypy3-v5.8.0-linux64/lib_pypy/_decimal.py", line 1260, in _add_status raise exception decimal.Overflow ``` Relatedly, there is a comment "XXX probably a bug in _decimal.c" in that function. The multiplication is on two integers with a guaranteed exponent of 0. We have two cases: a) The result does not have an exponent of 0. In this case "status" is set and the ValueError is triggered. b) The result has an exponent of zero and the multiplication was exact. The previous exponent is restored, so that the result can have a combination of coefficient and exponent that *could* overflow, if finalized. But this result is only used in mpd_qcmp(), which is safe. So while the code is a bit tricky, there is no bug here. 
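A user-level workaround sketch for the overflow above, assuming the conversion honours the active thread context (untested on the affected PyPy build): run the comparison under a context wide enough that the internal scaling cannot overflow, mirroring the maxcontext idea.

```python
from decimal import (Decimal, Context, getcontext, localcontext,
                     MAX_PREC, MAX_EMAX, MIN_EMIN)
from fractions import Fraction

ctx = getcontext()
ctx.Emax = 1
ctx.Emin = -1

# Temporarily switch to a maximal context so that the exact conversion
# of the Fraction cannot hit the user's tiny Emax/Emin limits.
maxcontext = Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN)
with localcontext(maxcontext):
    assert Fraction(1000, 5) == Decimal(200)
```

On CPython this comparison succeeds even without the wrapper, because its C ``_decimal`` already performs the conversion under a maximal internal context.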
From issues-reply at bitbucket.org Tue Jun 20 10:44:27 2017 From: issues-reply at bitbucket.org (KoshkinAlex31) Date: Tue, 20 Jun 2017 14:44:27 -0000 Subject: [pypy-issue] Issue #2584: Segfault on os.write with --gc=minimark (pypy/pypy) Message-ID: <20170620144427.24623.20517@celery-worker-101.ash1.bb-inf.net> New issue 2584: Segfault on os.write with --gc=minimark https://bitbucket.org/pypy/pypy/issues/2584/segfault-on-oswrite-with-gc-minimark KoshkinAlex31: The problem discussed [there](https://stackoverflow.com/questions/44649591/rpython-segfaults-on-os-write) From issues-reply at bitbucket.org Tue Jun 20 12:49:03 2017 From: issues-reply at bitbucket.org (mattip) Date: Tue, 20 Jun 2017 16:49:03 -0000 Subject: [pypy-issue] Issue #2585: cpyext: running lldebug0 shows many leftover BaseCpyTypedescr.allocate (pypy/pypy) Message-ID: <20170620164903.4789.36822@celery-worker-107.ash1.bb-inf.net> New issue 2585: cpyext: running lldebug0 shows many leftover BaseCpyTypedescr.allocate https://bitbucket.org/pypy/pypy/issues/2585/cpyext-running-lldebug0-shows-many mattip: running this script ``test.py``: ```python import pandas as pd values = range(55109); data = pd.DataFrame.from_dict({'a': values, 'b': values, 'c': values, 'd': values}) ``` with a lldebug0 pypy like this ```shell pip install numpy pandas PYPY_ALLOC=1 ./pypy -c test.py 2>&1 1>/dev/null | cut -b 15-200 |sort | uniq -c ``` gives me the following output: ``` 1 mallocs left (most recent first) 1 pypy_g_allocate_ctxobj 232981 pypy_g_BaseCpyTypedescr_allocate 23 pypy_g_CPyListStorage___init__ 1 pypy_g_IncrementalMiniMarkGC_deal_with_objects_with_fin 1 pypy_g_IncrementalMiniMarkGC_deal_with_old_objects_with 1 pypy_g_IncrementalMiniMarkGC_invalidate_old_weakrefs 1 pypy_g_IncrementalMiniMarkGC__minor_collection 3 pypy_g_IncrementalMiniMarkGC_rawrefcount_init 2 pypy_g_IncrementalMiniMarkGC_rrc_major_collection_free 7 pypy_g_IncrementalMiniMarkGC_setup 1 pypy_g_InterpreterState_new_thread_state 1 
pypy_g__ll_dict_resize_to__DICTPtr_Signed 5 pypy_g_ll_newdict_size__Struct_DICTLlT_Signed 907 pypy_g_make_GetSet 127 pypy_g_PyObject_Malloc 1272 pypy_g_str2charp 93 pypy_g_type_alloc 111 pypy_g_update_all_slots 4 pypy_g_W_Array_allocate ``` Note the large number of ``pypy_g_BaseCpyTypedescr_allocate`` Shouldn't they be collected at some stage? From issues-reply at bitbucket.org Tue Jun 20 17:16:07 2017 From: issues-reply at bitbucket.org (Stefan Krah) Date: Tue, 20 Jun 2017 21:16:07 -0000 Subject: [pypy-issue] Issue #2586: decimal.py: compare_total_mag() results reversed (pypy/pypy) Message-ID: <20170620211607.5634.26552@celery-worker-107.ash1.bb-inf.net> New issue 2586: decimal.py: compare_total_mag() results reversed https://bitbucket.org/pypy/pypy/issues/2586/decimalpy-compare_total_mag-results Stefan Krah: There seems to be a glitch in compare_total_mag(): Expected: >>> Decimal(1).compare_total_mag(-2) Decimal('-1') Got: >>>> Decimal(1).compare_total_mag(-2) Decimal('1') From issues-reply at bitbucket.org Tue Jun 20 17:28:10 2017 From: issues-reply at bitbucket.org (Stefan Krah) Date: Tue, 20 Jun 2017 21:28:10 -0000 Subject: [pypy-issue] Issue #2587: decimal.py: compare_total(): unexpected result with NaN operand (pypy/pypy) Message-ID: <20170620212810.18926.59668@celery-worker-108.ash1.bb-inf.net> New issue 2587: decimal.py: compare_total(): unexpected result with NaN operand https://bitbucket.org/pypy/pypy/issues/2587/decimalpy-compare_total-unexpected-result Stefan Krah: NaN operands are handled differently from _pydecimal: Expected: ``` >>> Decimal('4367').compare_total(Decimal('NaN')) Decimal('-1') ``` Got: ``` >>>> Decimal('4367').compare_total(Decimal('NaN')) Decimal('NaN') ``` From issues-reply at bitbucket.org Tue Jun 20 21:40:45 2017 From: issues-reply at bitbucket.org (MilesCranmer) Date: Wed, 21 Jun 2017 01:40:45 -0000 Subject: [pypy-issue] Issue #2588: ctypes package error: "TypeError: expected a readable buffer object" (pypy/pypy) Message-ID: 
<20170621014045.8221.93674@celery-worker-106.ash1.bb-inf.net> New issue 2588: ctypes package error: "TypeError: expected a readable buffer object" https://bitbucket.org/pypy/pypy/issues/2588/ctypes-package-error-typeerror-expected-a MilesCranmer: I am trying out PyPy on the following python 2.7 package (written with ctypes calls): https://github.com/ledatelescope/bifrost. Everything installs correctly. However, in running the test suite, I get the following errors (about 2/3 of the tests have this error): ``` #!python Exception in thread Pipeline_0/SigprocSourceBlock_0: Traceback (most recent call last): File "/workspace/pypy2-v5.8.0-linux64/lib-python/2.7/threading.py", line 797, in __bootstrap_inner self.run() File "/workspace/pypy2-v5.8.0-linux64/lib-python/2.7/threading.py", line 750, in run self.__target(*self.__args, **self.__kwargs) File "build/bdist.linux-x86_64/egg/bifrost/pipeline.py", line 308, in run self.main(active_orings) File "build/bdist.linux-x86_64/egg/bifrost/pipeline.py", line 380, in main ostrides = self.on_data(ireader, ospans) File "build/bdist.linux-x86_64/egg/bifrost/blocks/sigproc.py", line 109, in on_data ospan.data[:nframe] = indata File "build/bdist.linux-x86_64/egg/bifrost/ring2.py", line 399, in data dtype=self.dtype) File "build/bdist.linux-x86_64/egg/bifrost/ndarray.py", line 224, in __new__ data_buffer, offset, strides) TypeError: expected a readable buffer object ``` The minimal reproducible code is: ``` #!python >>>> import bifrost >>>> from bifrost import blocks as blocks >>>> from bifrost import pipeline as bfp >>>> with bfp.Pipeline() as pipeline: .... blocks.read_sigproc(['data/2chan4bitNoDM.fil'], 4096) .... pipeline.run() ``` This reads in a file from "data" into the pipeline class, which calls a C++ backend to allocate a ring buffer and put the data on it. 
The relevant code where the error is: https://github.com/ledatelescope/bifrost/blob/master/python/bifrost/ndarray.py ``` #!python BufferType = ctypes.c_byte*nbyte data_buffer_ptr = ctypes.cast(buffer, ctypes.POINTER(BufferType)) data_buffer = data_buffer_ptr.contents obj = np.ndarray.__new__(cls, shape, dtype_np, data_buffer, offset, strides) obj.bf = BFArrayInfo(space, dtype, native, conjugated, ownbuffer) obj._update_BFarray() ``` Please let me know if you would like any other info. Here are some system stats: ``` ? test git:(master) ? pypy --version Python 2.7.13 (c925e7381036, Jun 05 2017, 21:20:51) [PyPy 5.8.0 with GCC 6.2.0 20160901] ``` ``` ? test git:(master) ? uname -a Linux 004a0c817a24 4.9.27-14.31.amzn1.x86_64 #1 SMP Wed May 10 01:58:40 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux ``` This is running inside a Docker (which is running Ubuntu 16.04). You can try this package with: ``` docker pull mcranmer/bifrost:cpu-base ``` Then, clone Bifrost and install it with pypy. From issues-reply at bitbucket.org Wed Jun 21 08:19:27 2017 From: issues-reply at bitbucket.org (Alex Kashirin) Date: Wed, 21 Jun 2017 12:19:27 -0000 Subject: [pypy-issue] Issue #2589: PyFile_FromFile, fclose(void *arg3) might need to be int (*close)(FILE*) (pypy/pypy) Message-ID: <20170621121927.21578.13197@celery-worker-106.ash1.bb-inf.net> New issue 2589: PyFile_FromFile, fclose(void *arg3) might need to be int (*close)(FILE*) https://bitbucket.org/pypy/pypy/issues/2589/pyfile_fromfile-fclose-void-arg3-might Alex Kashirin: The issue happens with pydoop pypy_pip install pydoop the fclose(*arg3) looks like need to be int (*close)(FILE*) ``` #!C building 'pydoop.sercore' extension creating build/temp.linux-x86_64-2.7 creating build/temp.linux-x86_64-2.7/src creating build/temp.linux-x86_64-2.7/src/serialize cc -pthread -DNDEBUG -O2 -fPIC -I/opt/pypy2/include -c src/serialize/protocol_codec.cc -o build/temp.linux-x86_64-2.7/src/serialize/protocol_codec.o -Wno-write-strings -O3 
src/serialize/protocol_codec.cc: In function PyObject* util_fdopen(PyObject*, PyObject*): src/serialize/protocol_codec.cc:424:54: error: invalid conversion from int (*)(FILE*) {aka int (*)(_IO_FILE*)} to void* [-fpermissive] return PyFile_FromFile(fp, "", mode, fclose); ^ In file included from /opt/pypy2/include/Python.h:141:0, from src/serialize/protocol_codec.cc:20: /opt/pypy2/include/pypy_decl.h:270:25: note: initializing argument 4 of PyObject* PyPyFile_FromFile(FILE*, const char*, const char*, void*) #define PyFile_FromFile PyPyFile_FromFile ^ /opt/pypy2/include/pypy_decl.h:271:24: note: in expansion of macro PyFile_FromFile PyAPI_FUNC(PyObject *) PyFile_FromFile(FILE *arg0, const char *arg1, const char *arg2, void *arg3); ^~~~~~~~~~~~~~~ error: command 'cc' failed with exit status 1 ``` Thank You, Kashirin Alex From issues-reply at bitbucket.org Thu Jun 22 01:02:37 2017 From: issues-reply at bitbucket.org (Philip Jenvey) Date: Thu, 22 Jun 2017 05:02:37 -0000 Subject: [pypy-issue] Issue #2590: finalizers heavily stress GC (pypy/pypy) Message-ID: <20170622050237.17655.70280@celery-worker-106.ash1.bb-inf.net> New issue 2590: finalizers heavily stress GC https://bitbucket.org/pypy/pypy/issues/2590/finalizers-heavily-stress-gc Philip Jenvey: @alex_gaynor's script demonstrates app level `__del__`s: ``` #!python import random import sys class A(object): def __del__(self): pass class B(object): pass cls = {"A": A, "B": B}[sys.argv[1]] while True: # Gibberish to make sure the JIT doesn't optimize the allocation away [cls()] * random.randint(1, 1) ``` "A" shooting RSS up to 1-2GB pretty quickly vs "B" holding steady =~ 20mb. 
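A possible mitigation sketch while the underlying behaviour is investigated (assumption: objects with ``__del__`` linger until an extra collection cycle runs, so forcing collections periodically bounds the backlog of not-yet-finalized objects, at the cost of extra pauses):

```python
import gc
import random

class A(object):
    def __del__(self):
        pass

# Same allocation pattern as the script above, but with periodic forced
# collections so the finalizer queue is drained regularly.
for i in range(200000):
    [A()] * random.randint(1, 1)
    if i % 10000 == 0:
        gc.collect()
```

Whether this actually caps RSS on the affected PyPy build is an open question; it only illustrates the direction of a workaround.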
massif+pypy2-5.6 on "A": https://pastebin.mozilla.org/9025519 From issues-reply at bitbucket.org Thu Jun 22 05:30:46 2017 From: issues-reply at bitbucket.org (Nathaniel Smith) Date: Thu, 22 Jun 2017 09:30:46 -0000 Subject: [pypy-issue] Issue #2591: Closed-over variable updates made in one thread are not always visible in another (pypy/pypy) Message-ID: <20170622093046.16641.75018@celery-worker-107.ash1.bb-inf.net> New issue 2591: Closed-over variable updates made in one thread are not always visible in another https://bitbucket.org/pypy/pypy/issues/2591/closed-over-variable-updates-made-in-one Nathaniel Smith: Unfortunately, I don't have a minimal reproducer here. But in the trio test suite, I have [some code](https://github.com/njsmith/trio/blob/29adfee7f2a041427eef80b1e30d3d2f77fd488c/trio/tests/test_threads.py#L282-L295) that schematically looks like: ```python def outer_scope(): lock = threading.Lock() ran = 0 def thread_fn(): nonlocal ran with lock: print("got lock; on entry ran is", ran) ran += 1 print("releasing lock; on exit ran is", ran) # ... spawn a bunch of threads running thread_fn() # after they all exit, assert ran == thread_count # ... it doesn't ``` What's going on? Well, here's output from an actual run of the code I linked above: ``` getting lock got lock, old values: 0 0 0 0 releasing lock, new values: 1 1 1 1 gate.wait() thread_fn start run_thread finished, cancelled: True getting lock got lock, old values: 1 1 1 1 releasing lock, new values: 2 2 2 2 gate.wait() thread_fn start getting lock got lock, old values: 1 1 1 1 releasing lock, new values: 2 2 2 2 gate.wait() run_thread finished, cancelled: True thread_fn start run_thread finished, cancelled: True getting lock got lock, old values: 2 2 2 2 releasing lock, new values: 3 3 3 3 ``` Notice that one thread took the lock, saw the values 1 1 1 1, updated them to 2 2 2 2, and then released the lock; then the next thread took the lock and... saw the values 1 1 1 1 again (!!!??!!). 
So if I start 10 threads, the final value ends up being 9, because one of the updates was lost. This was observed with both PyPy3 5.8.0 and with the latest PyPy35 nightly, on Linux x86-64. Here's my best attempt at a reproduction recipe: 1. Download Squeaky's PyPy3 5.8.0 build 2. `pip install pytest pytest-cov ipython pyOpenSSL attrs sortedcontainers async_generator idna` 3. `git clone https://github.com/njsmith/trio` 4. `cd trio` 5. `git checkout 29adfee7f2a041427eef80b1e30d3d2f77fd488c` 6. `pytest -W error -ra -k run_in_worker_thread trio -v --cov=trio -s` (you might have to repeat this a few times) Note: running with coverage enabled, `--cov=trio`, appears to be necessary to hit the bug. If I disable coverage then I haven't managed to reproduce it at all; with coverage enabled it happens more often than not when running locally, and apparently 100% of the time when running on travis. Here are two Travis logs showing the problem: [log 1](https://travis-ci.org/python-trio/trio/jobs/245702941), [log 2](https://travis-ci.org/python-trio/trio/jobs/245702942) The logs also show exactly what commands are being run, so might also be useful for reproduction. From issues-reply at bitbucket.org Thu Jun 22 08:44:56 2017 From: issues-reply at bitbucket.org (Alexey Popravka) Date: Thu, 22 Jun 2017 12:44:56 -0000 Subject: [pypy-issue] Issue #2592: list.pop() returns None for a list created in C-extension (pypy/pypy) Message-ID: <20170622124456.3151.4629@celery-worker-107.ash1.bb-inf.net> New issue 2592: list.pop() returns None for a list created in C-extension https://bitbucket.org/pypy/pypy/issues/2592/listpop-returns-none-for-a-list-created-in Alexey Popravka: I've updated PyPy3 from v5.7.1 to v5.8.0 and encountered the following error: `pop()`'ing a value from a list created in hiredis library returns `None` for the first call, subsequent calls return expected values. 
The code to reproduce is here: https://gist.github.com/popravich/f52cb9919b9ed19820aff0e2348df0ed

From issues-reply at bitbucket.org  Thu Jun 22 12:45:08 2017
From: issues-reply at bitbucket.org (Antonio Cuni)
Date: Thu, 22 Jun 2017 16:45:08 -0000
Subject: [pypy-issue] Issue #2593: distutil's runtime_library_dirs is broken (pypy/pypy)
Message-ID: <20170622164508.22483.6494@celery-worker-106.ash1.bb-inf.net>

New issue 2593: distutil's runtime_library_dirs is broken
https://bitbucket.org/pypy/pypy/issues/2593/distutils-runtime_library_dirs-is-broken

Antonio Cuni:

Currently, the `runtime_library_dirs` option seems to be broken:

```
$ cat bug.py
from cffi import FFI

ffibuilder = FFI()
ffibuilder.set_source("foo", '', runtime_library_dirs = ['/tmp'])
ffibuilder.cdef("")
ffibuilder.compile(verbose=True)

$ pypy bug.py
generating ./foo.c
the current directory is '/home/antocuni/tmp/pypybug'
running build_ext
building 'foo' extension
cc -pthread -DNDEBUG -O2 -fPIC -I/home/antocuni/pypy/default/include -c foo.c -o ./foo.o
cc -pthread -shared ./foo.o -R/tmp -o ./foo.pypy-41.so
cc: error: unrecognized command line option '-R'; did you mean '-R'?
...
```

The problem is in `distutils/unixccompiler.py:231`: if the value of `compiler` looks like a gcc, it inserts the option `-Wl,-R` (which is correct); if not, it inserts `-R` (which causes the error).

The value of `sysconfig.get_config_var('CC')` is defined in `distutils/sysconfig_pypy.py` and is hard-coded to `'cc'`. If I manually change it to `gcc`, the program above works ok.

I don't know what the best way to fix this is, though; if we simply hardcode `gcc`, we risk breaking setups in which the only way to invoke the compiler is actually `cc`.
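As a minimal sketch (my own simplification, not the actual distutils source), the branch described above looks roughly like this; with `CC` hard-coded to `'cc'`, the gcc spelling is never chosen even when `cc` really is gcc:

```python
# Simplified illustration of the decision in distutils' unixccompiler --
# NOT the real implementation. The real code inspects
# sysconfig.get_config_var('CC'); PyPy's distutils/sysconfig_pypy.py
# hard-codes that value to 'cc', so the first branch is never taken.

def runtime_library_dir_option(compiler_name, directory):
    if "gcc" in compiler_name or "g++" in compiler_name:
        # gcc-style driver: forward the rpath to the linker
        return "-Wl,-R" + directory
    else:
        # other compilers: assume a bare -R is understood
        return "-R" + directory

print(runtime_library_dir_option("cc", "/tmp"))   # -R/tmp (rejected by gcc-as-cc)
print(runtime_library_dir_option("gcc", "/tmp"))  # -Wl,-R/tmp (works)
```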
From issues-reply at bitbucket.org  Sun Jun 25 05:18:22 2017
From: issues-reply at bitbucket.org (Armin Rigo)
Date: Sun, 25 Jun 2017 09:18:22 -0000
Subject: [pypy-issue] Issue #2594: _lzma.py contains truncated error messages (pypy/pypy)
Message-ID: <20170625091822.13108.38244@celery-worker-105.ash1.bb-inf.net>

New issue 2594: _lzma.py contains truncated error messages
https://bitbucket.org/pypy/pypy/issues/2594/_lzmapy-contains-truncated-error-messages

Armin Rigo:

In the py3.5 branch, ``lib_pypy/_lzma.py`` is full of this kind of line: ``raise ValueError("Must...")``

Why the truncated error messages??

From issues-reply at bitbucket.org  Sun Jun 25 17:49:21 2017
From: issues-reply at bitbucket.org (Wojciech Kordalski)
Date: Sun, 25 Jun 2017 21:49:21 -0000
Subject: [pypy-issue] Issue #2595: [Stackless] Killing tasklet blocked on a channel (pypy/pypy)
Message-ID: <20170625214921.8984.42175@celery-worker-105.ash1.bb-inf.net>

New issue 2595: [Stackless] Killing tasklet blocked on a channel
https://bitbucket.org/pypy/pypy/issues/2595/stackless-killing-tasklet-blocked-on-a

Wojciech Kordalski:

Reproduction:

1. Kill a tasklet blocked on receiving from a channel
2. Send a message to the channel

Actual result: the channel tries to resume the killed tasklet

Expected result (one of the two):

* when a tasklet blocked on a channel is killed, it removes itself from the channel's queue
* during a send operation on the channel, killed tasklets are ignored

Example code:

```py
from stackless import tasklet, channel, getcurrent, run, schedule

c = channel()

def sender():
    print("Sending 1 by {}".format(getcurrent()))
    c.send(1)

def receiver():
    v = c.receive()
    print("Received {} by {}".format(v, getcurrent()))

def killer(tl):
    print("Killing {} by {}".format(tl, getcurrent()))
    tl.kill()

def main():
    trk = tasklet(receiver)()
    print("Receiver tasklet: {}".format(trk))
    schedule()
    print("Channel: {}".format(c))
    killer(trk)
    schedule()
    print("Channel: {}".format(c))
    tasklet(sender)()
    schedule()
    tasklet(receiver)()
    schedule()

tasklet(main)()
run()
```

Current output:

```
Receiver tasklet:
Channel: channel[](-1,deque([]))
Killing  by
Channel: channel[](-1,deque([]))
Sending 1 by
Traceback (most recent call last):
  File "bug.py", line 37, in
    run()
  File "/opt/pypy3/lib_pypy/stackless.py", line 484, in run
    schedule()
  File "/opt/pypy3/lib_pypy/stackless.py", line 524, in schedule
    _scheduler_switch(curr, task)
  File "/opt/pypy3/lib_pypy/stackless.py", line 150, in _scheduler_switch
    next.switch()
  File "/opt/pypy3/lib_pypy/stackless.py", line 65, in switch
    current._frame.switch(to=self._frame)
  File "/opt/pypy3/lib_pypy/stackless.py", line 53, in run
    return func(*argl, **argd)
  File "/opt/pypy3/lib_pypy/stackless.py", line 415, in _func
    func(*argl, **argd)
  File "bug.py", line 8, in sender
    c.send(1)
  File "/opt/pypy3/lib_pypy/stackless.py", line 340, in send
    return self._channel_action(msg, 1)
  File "/opt/pypy3/lib_pypy/stackless.py", line 307, in _channel_action
    schedule()
  File "/opt/pypy3/lib_pypy/stackless.py", line 524, in schedule
    _scheduler_switch(curr, task)
  File "/opt/pypy3/lib_pypy/stackless.py", line 150, in _scheduler_switch
    next.switch()
  File "/opt/pypy3/lib_pypy/stackless.py", line 65, in switch
    current._frame.switch(to=self._frame)
_continuation.error: continulet already finished
```

From issues-reply at bitbucket.org  Tue Jun 27 23:34:39 2017
From: issues-reply at bitbucket.org (Amber Brown)
Date: Wed, 28 Jun 2017 03:34:39 -0000
Subject: [pypy-issue] Issue #2596: cvxcanon from PyPI fails to build (pypy/pypy)
Message-ID: <20170628033439.4603.52596@celery-worker-108.ash1.bb-inf.net>

New issue 2596: cvxcanon from PyPI fails to build
https://bitbucket.org/pypy/pypy/issues/2596/cvxcanon-from-pypi-fails-to-build

Amber Brown:

OS: MacOS 10.12.5

PyPy versions:

Python 3.5.3 (a37ecfe5f142bc971a86d17305cc5d1d70abec64, Jun 10 2017, 18:23:14)
[PyPy 5.8.0-beta0 with GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)]

and

Python 2.7.13 (c925e73810367cd960a32592dd7f728f436c125c, Jun 09 2017, 01:01:49)
[PyPy 5.8.0 with GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)]

(both available through Homebrew)

Steps to reproduce:

- pip install numpy
- pip install cvxcanon

Relevant failure:

```
In file included from src/python/CVXcanon_wrap.cpp:3157:
In file included from /tmp/pypyvenv/site-packages/numpy/core/include/numpy/arrayobject.h:4:
In file included from /tmp/pypyvenv/site-packages/numpy/core/include/numpy/ndarrayobject.h:18:
In file included from /tmp/pypyvenv/site-packages/numpy/core/include/numpy/ndarraytypes.h:1809:
/tmp/pypyvenv/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it by " \
 ^
src/python/CVXcanon_wrap.cpp:5146:12: error: no matching function for call to 'PyPySlice_Check'
  if( !PySlice_Check(slice) ) {
      ^~~~~~~~~~~~~
/tmp/pypyvenv/include/pypy_decl.h:748:23: note: expanded from macro 'PySlice_Check'
#define PySlice_Check PyPySlice_Check
                      ^~~~~~~~~~~~~~~
/tmp/pypyvenv/include/pypy_decl.h:749:17: note: candidate function not viable: no known conversion from 'PySliceObject *' to 'PyObject *' (aka '_object *') for 1st argument
PyAPI_FUNC(int) PySlice_Check(PyObject *arg0);
                ^
/tmp/pypyvenv/include/pypy_decl.h:748:23: note: expanded from macro 'PySlice_Check'
#define PySlice_Check PyPySlice_Check
```

Full traceback attached (PyPy2); the error is identical on PyPy3.

From issues-reply at bitbucket.org  Fri Jun 30 08:09:11 2017
From: issues-reply at bitbucket.org (Marian Beermann)
Date: Fri, 30 Jun 2017 12:09:11 +0000 (UTC)
Subject: [pypy-issue] Issue #2597: C API "PyMemoryView_FromMemory" (Python 3.3+) missing (pypy/pypy)
Message-ID: <20170630120910.646.41713@celery-worker-107.ash1.bb-inf.net>

New issue 2597: C API "PyMemoryView_FromMemory" (Python 3.3+) missing
https://bitbucket.org/pypy/pypy/issues/2597/c-api-pymemoryview_frommemory-python-33

Marian Beermann:

I didn't find an existing issue about this, sorry if I overlooked something.