From zhao.meide at gmail.com Sun Jul 2 21:16:40 2017 From: zhao.meide at gmail.com (Meide Zhao) Date: Sun, 2 Jul 2017 18:16:40 -0700 Subject: [pypy-dev] Asking for help Message-ID: Hi all, I'm trying to build pypy from source on ubuntu 12.04 LTS but can't get it to work (see error messages below). Can someone help me and let me know the correct steps since I'm new to pypy? Thanks, Meide (1) The details for the ubuntu in virtualbox are as follows cat /etc/*-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.3 LTS" NAME="Ubuntu" VERSION="12.04.3 LTS, Precise Pangolin" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu precise (12.04.3 LTS)" VERSION_ID="12.04" uname -a Linux ubuntu 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux python -V Python 2.7.13 (2) install pypy dependencies via command: sudo apt-get install gcc make libffi-dev pkg-config zlib1g-dev libbz2-dev libsqlite3-dev libncurses5-dev libexpat1-dev libssl-dev libgdbm-dev tk-dev libgc-dev liblzma-dev mercurial (3) check out pypy source code: cd ~/pypy hg clone http://bitbucket.org/pypy/pypy pypy (4)build pypy cd ~/pypy/pypy/pypy/goal/ sudo ../../rpython/bin/rpython --opt=jit targetpypystandalone.py [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -Wno-address -DRPYTHON_VMPROF -O3 -DVMPROF_UNIX -DVMPROF_LINUX -DRPYTHON_LL2CTYPES -I/home/meide/pypy/pypy/rpython/rlib/rvmprof/src -I/home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared -I/home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c -o /tmp/usession-default-0/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.o [translation:info] Error: File "/home/meide/pypy/pypy/rpython/translator/goal/translate.py", line 284, in main default_goal='compile') File "/home/meide/pypy/pypy/rpython/translator/driver.py", line 566, in from_targetspec spec = 
target(driver, args) File "targetpypystandalone.py", line 337, in target return self.get_entry_point(config) File "targetpypystandalone.py", line 368, in get_entry_point space = make_objspace(config) File "/home/meide/pypy/pypy/pypy/tool/option.py", line 35, in make_objspace return Space(config) File "/home/meide/pypy/pypy/pypy/interpreter/baseobjspace.py", line 441, in __init__ self.initialize() File "/home/meide/pypy/pypy/pypy/objspace/std/objspace.py", line 105, in initialize self.make_builtins() File "/home/meide/pypy/pypy/pypy/interpreter/baseobjspace.py", line 637, in make_builtins self.install_mixedmodule(mixedname, installed_builtin_modules) File "/home/meide/pypy/pypy/pypy/interpreter/baseobjspace.py", line 668, in install_mixedmodule modname = self.setbuiltinmodule(mixedname) File "/home/meide/pypy/pypy/pypy/interpreter/baseobjspace.py", line 507, in setbuiltinmodule None, None, ["Module"]).Module File "/home/meide/pypy/pypy/pypy/module/_vmprof/__init__.py", line 30, in import pypy.module._vmprof.interp_vmprof File "/home/meide/pypy/pypy/pypy/module/_vmprof/interp_vmprof.py", line 14, in my_execute_frame = _decorator(PyFrame.execute_frame) File "/home/meide/pypy/pypy/rpython/rlib/rvmprof/rvmprof.py", line 196, in decorate _get_vmprof() File "/home/meide/pypy/pypy/rpython/rlib/rvmprof/rvmprof.py", line 243, in _get_vmprof _vmprof_instance = VMProf() File "/home/meide/pypy/pypy/rpython/rlib/rvmprof/rvmprof.py", line 49, in __init__ self.cintf = cintf.setup() File "/home/meide/pypy/pypy/rpython/rlib/rvmprof/cintf.py", line 82, in setup **eci_kwds)) File "/home/meide/pypy/pypy/rpython/rtyper/tool/rffi_platform.py", line 94, in verify_eci configure(CConfig) File "/home/meide/pypy/pypy/rpython/rtyper/tool/rffi_platform.py", line 223, in configure res[key] = value.question(writer.ask_gcc) File "/home/meide/pypy/pypy/rpython/rtyper/tool/rffi_platform.py", line 555, in question ask_gcc("") File "/home/meide/pypy/pypy/rpython/rtyper/tool/rffi_platform.py", line 
191, in ask_gcc try_compile_cache([self.path], self.eci) File "/home/meide/pypy/pypy/rpython/tool/gcc_cache.py", line 71, in try_compile_cache platform.compile(c_files, eci) File "/home/meide/pypy/pypy/rpython/translator/platform/__init__.py", line 54, in compile ofiles = self._compile_o_files(cfiles, eci, standalone) File "/home/meide/pypy/pypy/rpython/translator/platform/__init__.py", line 76, in _compile_o_files ofiles.append(self._compile_c_file(self.cc, cfile, compile_args)) File "/home/meide/pypy/pypy/rpython/translator/platform/posix.py", line 41, in _compile_c_file cwd=str(cfile.dirpath())) File "/home/meide/pypy/pypy/rpython/translator/platform/__init__.py", line 140, in _execute_c_compiler self._handle_error(returncode, stdout, stderr, outname) File "/home/meide/pypy/pypy/rpython/translator/platform/__init__.py", line 152, in _handle_error raise CompilationError(stdout, stderr) [translation:ERROR] CompilationError: CompilationError(err=""" /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c: In function 'elf_add_syminfo_data': /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c:448:8: warning: implicit declaration of function '__atomic_load_n' [-Wimplicit-function-declaration] /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c:448:12: error: '__ATOMIC_ACQUIRE' undeclared (first use in this function) /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c:448:12: note: each undeclared identifier is reported only once for each function it appears in /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c: In function 'elf_syminfo': /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c:493:12: error: '__ATOMIC_ACQUIRE' undeclared (first use in this function) /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c: In function 'backtrace_initialize': /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c:958:2: warning: implicit declaration of function '__atomic_store_n' [-Wimplicit-function-declaration] /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c:958:2: error: '__ATOMIC_RELEASE' undeclared (first use in this function) /home/meide/pypy/pypy/rpython/rlib/rvmprof/src/shared/libbacktrace/elf.c:973:20: error: '__ATOMIC_ACQUIRE' undeclared (first use in this function) """) [translation] start debugger... > /home/meide/pypy/pypy/rpython/translator/platform/__init__.py(152)_handle_error() -> raise CompilationError(stdout, stderr) (Pdb+) -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Mon Jul 3 01:24:05 2017 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Mon, 3 Jul 2017 15:24:05 +1000 Subject: [pypy-dev] Asking for help In-Reply-To: References: Message-ID: On 3 July 2017 at 11:16, Meide Zhao wrote: > Hi all, > > I'm trying to build pypy from source on ubuntu 12.04 LTS but can't get it > to work (see error messages below). Can someone help me and let me know the > correct steps since I'm new to pypy? > Hi! 12.04 reached End Of Life at the end of April. It has GCC 4.6.3 so I'm not surprised pypy isn't building on it. At a minimum you'll need a more recent GCC, but it would probably be best to update to an LTS version that still has support.
> [quoted build steps and compiler log snipped; they are reproduced in full in the original report above]
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
> -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Mon Jul 3 01:26:48 2017 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Mon, 3 Jul 2017 15:26:48 +1000 Subject: [pypy-dev] Asking for help In-Reply-To: References: Message-ID: Also: Please don't run compilers with sudo. On 3 July 2017 at 11:16, Meide Zhao wrote: > (4) build pypy > cd ~/pypy/pypy/pypy/goal/ > sudo ../../rpython/bin/rpython --opt=jit targetpypystandalone.py > > -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yury at shurup.com Mon Jul 3 01:20:15 2017 From: yury at shurup.com (Yury V.
Zaytsev) Date: Mon, 3 Jul 2017 07:20:15 +0200 (CEST) Subject: [pypy-dev] Asking for help In-Reply-To: References: Message-ID: On Sun, 2 Jul 2017, Meide Zhao wrote: > I'm trying to build pypy from source on ubuntu 12.04 LTS but can't get > it to work (see error messages below). Can someone help me and let me > know the correct steps since I'm new to pypy? It seems that GCC shipped with Ubuntu 12.04 LTS is way too old, but a better question would be *why* are you trying to build PyPy from source? Here are the portable PyPy binaries, if you are able to, just use that: https://github.com/squeaky-pl/portable-pypy Otherwise, you'd need to get a newer GCC, the easiest option being this PPA if you aren't very comfortable with bootstrapping compilers: https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/test?field.series_filter=precise -- Sincerely yours, Yury V. Zaytsev From william.leslie.ttg at gmail.com Mon Jul 3 02:09:23 2017 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Mon, 3 Jul 2017 16:09:23 +1000 Subject: [pypy-dev] Asking for help In-Reply-To: References: Message-ID: On 3 July 2017 at 15:47, Meide Zhao wrote: > Thanks a lot Leslie. > > Can I use the newer version of Ubuntu like 16.04 or 17.04? Which Linux and > which version do you use? > Tannit (which does the automatic linux-64 builds) has GCC 4.8.2, so even something that old should work. I'm not sure if it differs from 12.04 in any other way. I build pypy regularly on Ubuntu 16.04 LTS (has gcc 5.4.0) as well as Debian Jessie (with gcc 4.9.2). -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhao.meide at gmail.com Mon Jul 3 02:34:14 2017 From: zhao.meide at gmail.com (Meide Zhao) Date: Sun, 2 Jul 2017 23:34:14 -0700 Subject: [pypy-dev] Asking for help In-Reply-To: References: Message-ID: Thanks Leslie, will try. On Sun, Jul 2, 2017 at 11:09 PM, William ML Leslie < william.leslie.ttg at gmail.com> wrote: > On 3 July 2017 at 15:47, Meide Zhao wrote: > >> Thanks a lot Leslie. >> >> Can I use the newer version of Ubuntu like 16.04 or 17.04? Which Linux >> and which version do you use? >> > > Tannit (which does the automatic linux-64 builds) has GCC 4.8.2, so even > something that old should work. I'm not sure if it differs from 12.04 in > any other way. I build pypy regularly on Ubuntu 16.04 LTS (has gcc 5.4.0) > as well as Debian Jessie (with gcc 4.9.2). > > -- > William Leslie > > Notice: > Likely much of this email is, by the nature of copyright, covered under > copyright law. You absolutely MAY reproduce any part of it in accordance > with the copyright law of the nation you are reading this in. Any attempt > to DENY YOU THOSE RIGHTS would be illegal without prior contractual > agreement. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From santa at portaone.com Mon Jul 3 10:25:38 2017 From: santa at portaone.com (Pavlo Lavrenenko) Date: Mon, 3 Jul 2017 17:25:38 +0300 Subject: [pypy-dev] Emulate pypy SIGSEGV Message-ID: Hello, guys. Running PyPy 5.6.0 with GCC 4.8.5 20150623 (Red Hat 4.8.5-4) here. Occasionally PyPy terminates with signal 11 and a long stack consisting mostly of: #31 0x00007f0d4c26ce7d in ?? () from /lib64/libpypy-c.so If I had a proper code sample to reproduce the issue I would have submitted it but since I am literally chasing ghosts I am more interested in how to make PyPy throw SIGSEGV on purpose (the software hangs after one of its child processes dies with signal 11, I want to at least reproduce and workaround this). kill -11 doesn't give the result I want. 
Any way to make PyPy crash somewhere in /lib64/libpypy-c.so? -- Best regards, Pavlo Lavrenenko, PortaOne, Inc., Junior Software Developer Tel: +1-866-SIP VOIP (+1 866 747 8647) ext. 7624 PortaOne - VoIP Solutions Company Visit our Website: http://www.portaone.com From armin.rigo at gmail.com Mon Jul 3 11:04:15 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Mon, 3 Jul 2017 17:04:15 +0200 Subject: [pypy-dev] Emulate pypy SIGSEGV In-Reply-To: References: Message-ID: Hi Pavlo, On 3 July 2017 at 16:25, Pavlo Lavrenenko wrote: > Running PyPy 5.6.0 with GCC 4.8.5 20150623 (Red Hat 4.8.5-4) here. > Occasionally PyPy terminates with signal 11 and a long stack consisting > mostly of: #31 0x00007f0d4c26ce7d in ?? () from /lib64/libpypy-c.so I would try first upgrading to PyPy 5.8.0. > kill -11 doesn't give the result I want. Any way to make PyPy crash > somewhere in /lib64/libpypy-c.so? ``os.kill(os.getpid(), signal.SIGSEGV)``? If that's what you tried and it doesn't give the results you want, please explain in more details why not. A bientôt, Armin. From santa at portaone.com Tue Jul 4 06:47:27 2017 From: santa at portaone.com (Pavlo Lavrenenko) Date: Tue, 4 Jul 2017 13:47:27 +0300 Subject: [pypy-dev] Emulate pypy SIGSEGV In-Reply-To: References: Message-ID: On 07/03/2017 06:04 PM, Armin Rigo wrote: > Hi Pavlo, > > On 3 July 2017 at 16:25, Pavlo Lavrenenko wrote: >> Running PyPy 5.6.0 with GCC 4.8.5 20150623 (Red Hat 4.8.5-4) here. >> Occasionally PyPy terminates with signal 11 and a long stack consisting >> mostly of: #31 0x00007f0d4c26ce7d in ?? () from /lib64/libpypy-c.so > > I would try first upgrading to PyPy 5.8.0. > Hello Armin, any prominent changes between 5.6.0 and 5.8.0? Can I find the change logs somewhere? >> kill -11 doesn't give the result I want. Any way to make PyPy crash >> somewhere in /lib64/libpypy-c.so? > > ``os.kill(os.getpid(), signal.SIGSEGV)``?
If that's what you tried > and it doesn't give the results you want, please explain in more > details why not. > I don't understand why myself. The software works as: # there is a master process and N children # master periodically checks if children are alive (multiprocessing.Process.is_alive()) # if not, master process calls join() and creates a new multiprocessing.Process instance This whole machinery works fine if I send signal 11 to the child, but it gets stuck either after join() or later during process creation. There is not enough logging in the software to trace the issue, and the issue is quite rare. Moreover, when it comes to production and the issue is reproduced, engineers just restart everything manually. > A bientôt, > > Armin. > -- Best regards, Pavlo Lavrenenko, PortaOne, Inc., Junior Software Developer Tel: +1-866-SIP VOIP (+1 866 747 8647) ext. 7624 PortaOne - VoIP Solutions Company Visit our Website: http://www.portaone.com From santa.ssh at gmail.com Tue Jul 4 06:45:41 2017 From: santa.ssh at gmail.com (Santa) Date: Tue, 4 Jul 2017 13:45:41 +0300 Subject: [pypy-dev] Emulate pypy SIGSEGV In-Reply-To: References: Message-ID: <6055394e-27ba-c560-485b-fb7affaa2519@gmail.com> On 07/03/2017 06:04 PM, Armin Rigo wrote: > Hi Pavlo, > > On 3 July 2017 at 16:25, Pavlo Lavrenenko wrote: >> Running PyPy 5.6.0 with GCC 4.8.5 20150623 (Red Hat 4.8.5-4) here. >> Occasionally PyPy terminates with signal 11 and a long stack consisting >> mostly of: #31 0x00007f0d4c26ce7d in ?? () from /lib64/libpypy-c.so > > I would try first upgrading to PyPy 5.8.0. > Hello Armin, any prominent changes between 5.6.0 and 5.8.0? Can I find the change logs somewhere? >> kill -11 doesn't give the result I want. Any way to make PyPy crash >> somewhere in /lib64/libpypy-c.so? > > ``os.kill(os.getpid(), signal.SIGSEGV)``? If that's what you tried > and it doesn't give the results you want, please explain in more > details why not. > I don't understand why myself. The software works as: # there is a master process and N children # master periodically checks if children are alive (multiprocessing.Process.is_alive()) # if not, master process calls join() and creates a new multiprocessing.Process instance This whole machinery works fine if I send signal 11 to the child, but it gets stuck either after join() or later during process creation. There is not enough logging in the software to trace the issue, and the issue is quite rare. Moreover, when it comes to production and the issue is reproduced, engineers just restart everything manually. > A bientôt, > > Armin. > From armin.rigo at gmail.com Tue Jul 4 15:27:21 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Tue, 4 Jul 2017 21:27:21 +0200 Subject: [pypy-dev] Emulate pypy SIGSEGV In-Reply-To: References: Message-ID: Hi, On 4 July 2017 at 12:47, Pavlo Lavrenenko wrote: > any prominent changes between 5.6.0 and 5.8.0? Can I find the change logs > somewhere? There were fixes for very rare crashes, notably in the JIT. It's not possible to say "avoid doing this or that and it will avoid the bug". The JIT doesn't work at a high enough level for that. There is no better answer than: 5.8.0 contains (most probably) fewer of these very rare crashes. A bientôt, Armin. From rb at dustyfeet.com Sun Jul 9 20:53:31 2017 From: rb at dustyfeet.com (Rocky Bernstein) Date: Sun, 9 Jul 2017 20:53:31 -0400 Subject: [pypy-dev] lib_pypy/_marshal.py looks out of date for Python 3 Message-ID: A while back, in working on a cross-version python decompiler, I used PyPy's lib_pypy/_marshal.py. Recently, in writing a cross-platform Python bytecode assembler, I noticed what is in the py3.5 branch is a bit out of date for Python 3.
Specifically these types:

TYPE_REF = 'r' # since 3.4
TYPE_ASCII = 'a' # since 3.4
TYPE_ASCII_INTERNED = 'A' # since 3.4
TYPE_SMALL_TUPLE = ')' # since 3.4
TYPE_SHORT_ASCII = 'z' # since 3.4
TYPE_SHORT_ASCII_INTERNED = 'Z' # since 3.4

In xdis, the cross-version marshal/unmarshal and opcode routines, I've started adding some of these. Is lib_pypy/_marshal.py used? If so, shouldn't this be corrected? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rb at dustyfeet.com Sun Jul 9 21:10:46 2017 From: rb at dustyfeet.com (Rocky Bernstein) Date: Sun, 9 Jul 2017 21:10:46 -0400 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization Message-ID: In looking at Python bytecode over the years, I've noticed that it does very little to no traditional static-compiler optimization. Specifically:

* Dead code elimination (most of the time)
* Jumps to jumps, or jumps to the next instruction
* Constant propagation (most of the time)
* Common subexpression elimination
* Tail recursion
* Code hoisting/sinking

Yes, over the years more compiler optimization has been done, but it's still pretty sparse. The little that I've looked at pypy, it is pretty much the same thing at the Python bytecode level. That's why a python decompiler for say 2.7 will work largely unmodified for the corresponding pypy 2.7 version. Same is true for 3.2 versus pypy 3.2. I understand though that pypy can do and probably does some of the optimization when it JITs. But my question is: if traditional compiler optimization were done, would this hinder, be neutral, or help pypy's optimization? Of course, if there were static compiler optimization of the type described above, that might be a win when JITing doesn't kick in. (And perhaps then who cares.) But I'm interested in the other situation where both are in play. -------------- next part -------------- An HTML attachment was scrubbed...
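These observations are easy to check with the standard `dis` module: the bytecode compiler folds constant expressions but leaves duplicated subexpressions alone. A small sketch (the function names are just illustrative):

```python
import dis

def folded(x):
    return 2 * 3 + x      # the peephole pass folds 2 * 3 into the constant 6

def not_cse(y):
    a = y + 1
    b = y + 1             # the duplicate `y + 1` is NOT eliminated:
    return a + b          # y.__add__ could run arbitrary side effects

assert 6 in folded.__code__.co_consts   # constant folding happened
ops = [i.opname for i in dis.get_instructions(not_cse)]
adds = [op for op in ops if op in ("BINARY_ADD", "BINARY_OP")]
assert len(adds) == 3                   # all three additions survive
dis.dis(not_cse)
```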
URL: From rymg19 at gmail.com Sun Jul 9 21:41:32 2017 From: rymg19 at gmail.com (rymg19 at gmail.com) Date: Sun, 9 Jul 2017 21:41:32 -0400 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization In-Reply-To: <> Message-ID: Part of the problem is behavioral changes. If you implement tail call recursion, your tracebacks change. Common subexpression elimination will also have subtle behavioral changes. -- Ryan (????) Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone elsehttp://refi64.com On Jul 9, 2017 at 8:11 PM, > wrote: In looking at Python bytecode over the years, I've noticed that it does very little to no traditional static-compiler optimization. Specifically: * Dead code elmination (most of the time) * Jumps to Jumps or Jumps to the next instruction * Constant propagation (most of the time) * Common subexpression elimination * Tail recursion * Code hoisting/sinking Yes, over the years more compiler optimization has been done but it's still pretty sparse. The little that I've looked at pypy, it is pretty much the same thing at the Python bytecode level. That's why a python decompiler for say 2.7 will work largely unmodified for he corresponding pypy 2.7 version. Same is true 3.2 versus pypy 3.2. I understand though that pypy can do and probably does some of the optimization when it JITs. But my question is: if traditional compiler optimization were done would this hinder, be neutral, or help pypy's optimization. Of course, if there were static compiler optimization of the type described above, that might be a win when JITing doesn't kick in. (And perhaps then who cares) But I'm interested in the other situation where both are in play. _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... 
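The caveat about common subexpression elimination can be made concrete: because `+` dispatches to `__add__`/`__radd__` at runtime, two textually identical expressions can yield different values, so eliminating one is observable. A minimal sketch:

```python
class Counter:
    """An object whose + operator has an observable side effect."""
    def __init__(self):
        self.calls = 0
    def __add__(self, other):
        self.calls += 1
        return self.calls

y = Counter()
a = y + 1
b = y + 1   # a "common subexpression", but eliminating it would change b
assert (a, b) == (1, 2)   # the two evaluations produce different values
assert y.calls == 2       # under CSE, calls would stop at 1
```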
URL: From santa at portaone.com Mon Jul 10 02:26:03 2017 From: santa at portaone.com (Pavlo Lavrenenko) Date: Mon, 10 Jul 2017 09:26:03 +0300 Subject: [pypy-dev] Emulate pypy SIGSEGV In-Reply-To: <6055394e-27ba-c560-485b-fb7affaa2519@gmail.com> References: <6055394e-27ba-c560-485b-fb7affaa2519@gmail.com> Message-ID: <6e0cd096-187f-3f47-9bbd-8ef1d275694d@portaone.com> On 07/04/2017 01:45 PM, Santa wrote: > On 07/03/2017 06:04 PM, Armin Rigo wrote: > > I don't understand why myself. The software works as: > Anyway I wrote a CFFI module with a function that dereferences NULL and called the function in the application. Did the trick I wanted. -- Best regards, Pavlo Lavrenenko, PortaOne, Inc., Junior Software Developer Tel: +1-866-SIP VOIP (+1 866 747 8647) ext. 7624 PortaOne - VoIP Solutions Company Visit our Website: http://www.portaone.com From rb at dustyfeet.com Tue Jul 11 00:28:48 2017 From: rb at dustyfeet.com (Rocky Bernstein) Date: Tue, 11 Jul 2017 00:28:48 -0400 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization In-Reply-To: References: Message-ID: Yes, that's true, but I believe that such decisions should be offered to the programmers rather than dictated. There is also a whole other discussion on * how/where optimizations/transformations could be recorded * how tracebacks could be handled * debugging optimized code But here I wanted to focus on just the optimization interaction. If you are implying that pypy doesn't make such behavioral changes, then that would suggest there may be a lot of benefit in static compiler optimization. On Sun, Jul 9, 2017 at 9:41 PM, rymg19 at gmail.com wrote: > Part of the problem is behavioral changes. If you implement tail call > recursion, your tracebacks change. Common subexpression elimination will > also have subtle behavioral changes. > > > -- > Ryan (????)
> Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone elsehttp://refi64.com > > On Jul 9, 2017 at 8:11 PM, > wrote: > > In looking at Python bytecode over the years, I've noticed that it does > very little to no traditional static-compiler optimization. Specifically: > > * Dead code elmination (most of the time) > * Jumps to Jumps or Jumps to the next instruction > * Constant propagation (most of the time) > * Common subexpression elimination > * Tail recursion > * Code hoisting/sinking > > Yes, over the years more compiler optimization has been done but it's > still pretty sparse. > > The little that I've looked at pypy, it is pretty much the same thing at > the Python bytecode level. That's why a python decompiler for say 2.7 will > work largely unmodified for he corresponding pypy 2.7 version. Same is true > 3.2 versus pypy 3.2. > > I understand though that pypy can do and probably does some of the > optimization when it JITs. But my question is: if traditional compiler > optimization were done would this hinder, be neutral, or help pypy's > optimization. > > Of course, if there were static compiler optimization of the type > described above, that might be a win when JITing doesn't kick in. (And > perhaps then who cares) > > But I'm interested in the other situation where both are in play. > _______________________________________________ pypy-dev mailing list > pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Tue Jul 11 02:51:34 2017 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Tue, 11 Jul 2017 16:51:34 +1000 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) 
optimization In-Reply-To: References: Message-ID: On 10 July 2017 at 11:10, Rocky Bernstein wrote: > In looking at Python bytecode over the years, I've noticed that it does very > little to no traditional static-compiler optimization. Specifically: > > > Yes, over the years more compiler optimization has been done but it's still > pretty sparse. > > The little that I've looked at pypy, it is pretty much the same thing at the > Python bytecode level. That's why a python decompiler for say 2.7 will work > largely unmodified for he corresponding pypy 2.7 version. Same is true 3.2 > versus pypy 3.2. > > I understand though that pypy can do and probably does some of the > optimization when it JITs. But my question is: if traditional compiler > optimization were done would this hinder, be neutral, or help pypy's > optimization. > It'd likely be neutral with respect to the JIT optimisation, which operates on traces (what actually happens to the rpython objects) rather than bytecode. Within a trace, which is a fairly closed universe, all of those optimisations are very easy to make. As Ryan has mentioned, the bigger concern is going to be around the expectations that people have on how python code will behave, and a secondary concern is going to be people relying on optimisations that only work on pypy. It's one thing for people to tune their programs for pypy, and another for people to rely on certain optimisations being performed for correct execution. If you attempt this, here's a few things to keep in mind: > * Dead code elmination (most of the time) Mostly a code size optimisation and not so key for performance, but it is very strange that python generates different code for `return` at the end of a function and `else: return` also at the end of a function. On the other hand, `if 0:` statements are removed, which is handy. > * Jumps to Jumps or Jumps to the next instruction SGTM > * Constant propagation (most of the time) This is an interesting one. 
Consider beta reducing xs in this code (assume xs is otherwise unused):

    xs = [1, 7, 2]
    if x in xs:
        ....

Doing so would allow the list to be converted to a tuple and stored as a constant against the code object (mostly because the value is never used in an identity-dependent way). However, people can use the debugger to set a breakpoint on the assignment to xs, which they can't do if no evaluation happens on this line. They can even set xs to be a different value in the debugger before continuing. I'm a big fan of prebuilding objects at compile time; it's something that the pypy team wished the JVM could do back when the Java backend was written, because loading time was awful. Even now the JVM is making changes to the constant pool layout that improve that possibility. > * Common subexpression elimination This is not widely applicable in python, in fact the only subexpressions that can be eliminated are a small group of expressions that apply to objects constructed from literals. A notable example is that you can't remove a duplicated `x = 1 + y` because the object that `y` refers to may have overridden __radd__ and that method may even have side-effects. There can be some success when either whole-program optimisation, encoding preconditions into optimistically compiled module code, or intraprocedural effect analysis is performed, but there isn't much precedent for the first two in pypy and the third is the subject of ongoing research. > * Tail recursion For functions that the compiler has shown are total and have a reasonable and static bound on memory allocation and time? Any situation where asyncs need to be suspended, lest anyone hit C-c and generate a traceback / kill the operation because it is taking too long, need to be very tightly time bounded.
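The `x in xs` case above can be checked against CPython directly: when the list literal appears only in the membership test and is never bound to a name, the compiler already performs exactly this conversion to a tuple constant. A small demonstration (behaviour observed on CPython 3.x; details vary by version):

```python
import dis

def direct(x):
    # The list literal is used only for the membership test, so the
    # compiler folds it into the tuple constant (1, 7, 2).
    return x in [1, 7, 2]

# The folded tuple is stored against the code object, as described above:
print((1, 7, 2) in direct.__code__.co_consts)  # -> True
# dis shows a LOAD_CONST of the tuple rather than a BUILD_LIST:
dis.dis(direct)
```

Binding the list to `xs` on its own line first defeats the folding, which is precisely the debugger-visibility trade-off described above.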
> * Code hoisting/sinking Again, without a really solid type or effect system it's really hard to know you're not changing the behaviour of the program when performing such code movement, because almost all operations can delegate to user-defined methods. > Of course, if there were static compiler optimization of the type described > above, that might be a win when JITing doesn't kick in. (And perhaps then > who cares) > > But I'm interested in the other situation where both are in play. > A direction that might be interesting is seeing how far you can get by making reasonable assumptions (module bindings for certain functions have their original values, this value has this exact code object as its special method `__radd__` or method `append`, this module `foo` imported has this exact hash), checking them and branching to optimised code, which may or may not be pypy bytecode. The preconditions themselves can't be written in pypy bytecode - you don't want to be executing side effects when pre-emptively *looking up* methods, which can happen due to descriptors, so you'd need to go behind the scenes. Nevertheless, they can be platform-independent and could live in the .pyc for a first cut. At the moment, it's necessary to assume *nothing* on return from *any* user code, which can violate assumptions in remarkable ways, including "what if the cmp or key functions change the list being sorted" or "what if an __eq__ method removes the key being examined from the dict or re-assigns it with a different hash", two cases that the pypy and python runtimes handle without crashing. Another good source of problems is that python's concurrent memory model is far stricter than Java's, so an optimisation that might be fine in a single threaded situation may break sequential consistency. 
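The point about delegation to user-defined methods can be made concrete. A duplicated `1 + y` looks like a common subexpression, but each evaluation may dispatch to `__radd__` and run arbitrary code, so merging the two evaluations is observable (a sketch, with an illustrative class):

```python
class Noisy(object):
    def __init__(self):
        self.calls = 0

    def __radd__(self, other):
        # int.__add__(1, self) returns NotImplemented, so Python falls
        # back to this method: an observable side effect per evaluation.
        self.calls += 1
        return other

y = Noisy()
a = 1 + y   # first evaluation calls Noisy.__radd__
b = 1 + y   # the "common subexpression"; eliminating it would
            # silently drop the second call
print(y.calls)  # -> 2
```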
I had been thinking that this sort of thing is a huge engineering effort, but if you're just generating optimised bytecode and jumping to that with some custom precondition-checking operation rather than generating machine code, it might be achievable. Generating machine code is difficult because neither of rpython's compilers is well suited to the task of generating general-purpose position-independent machine code at app level - the JIT generates straight-line code only and this assumption is made throughout the codebase; it also assumes constants are in-memory during compilation and doesn't expect them to be reloaded at a different location, whereas the static rpython compiler expects to work on a whole program. -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. From armin.rigo at gmail.com Tue Jul 11 04:01:12 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Tue, 11 Jul 2017 10:01:12 +0200 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization In-Reply-To: References: Message-ID: Hi Rocky, On 11 July 2017 at 06:28, Rocky Bernstein wrote: > Yes, that's true, but I believe that such decisions should be offered to the > programmers rather than dictated. If the choice for the programmer is between "run my program exactly like I expect" and "run my program with some unexpected subtle effects and no noticeable performance gain", I think the choice need not be offered. A bientôt, Armin. From william.leslie.ttg at gmail.com Tue Jul 11 04:35:04 2017 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Tue, 11 Jul 2017 18:35:04 +1000 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?)
optimization In-Reply-To: References: Message-ID: On 11 July 2017 at 18:22, Rocky Bernstein wrote: > There's too much generalization and abstraction in the discussion at least > for me. So let me try to make this a little more concrete with real > examples. First note that, although Python, yes, does "optimize" "if 0:" > away, as soon as > you do something as simple as "debug = 0; if debug:" it is confounded. And > in my own experience I have done this. > Typically DEBUG is a global in the module, which means it can be set from /outside/ the module, so it's not constant as far as the runtime is concerned. Within a function it's a bit wacky sure, and is only kept around for interactive debugging reasons afaict. Here's a possible type for `self.a` which illustrates why CSE/Store-Sinking on a STORE_ATTR is not valid for the example function `foo`:

    class A(object):
        def set_b(self, value):
            print value
            self._b = value
        b = property(set_b)

    class Example(object):
        def __init__(self):
            self.a = A()

        def foo(self):
            self.a.b = 5
            x = 0
            if x:
                self.a.b = 2
            else:
                return

    Example().foo()

-- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. From rb at dustyfeet.com Tue Jul 11 04:37:16 2017 From: rb at dustyfeet.com (Rocky Bernstein) Date: Tue, 11 Jul 2017 04:37:16 -0400 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization In-Reply-To: References: Message-ID: On Tue, Jul 11, 2017 at 4:01 AM, Armin Rigo wrote: > Hi Rocky, > > On 11 July 2017 at 06:28, Rocky Bernstein wrote: > > Yes, that's true, but I believe that such decisions should be offered to the > > programmers rather than dictated.
> > If the choice for the programmer is between "run my program exactly > like I expect" and "run my program with some unexpected subtle effects > and no noticeable performance gain", I think the choice need not be > offered. > Sure, but that's a straw argument with a lot of packed opinions in it. A phrase like "run my program exactly like I expect" is loaded with opinions. As has already been noted, Python removes "if 0:" by default. When I first saw that, I thought it was cool, but honestly it wasn't "exactly as I expect" because I wanted my decompiler to recreate the source code as written, which helps me in testing, and lo and behold I was getting something different :-) You know what? I got used to it. I had to change my opinion of what "exactly as I expect" meant slightly. And if it *were* the case that everyone felt that no choice needed to be offered, assuming everyone agreed on what "some subtle effects" and "no noticeable performance gain" meant, then gcc and clang wouldn't offer -O2 and -O3 optimization levels. Maybe not even -O, or they'd compile everything at -O=-1. As I said before, there's a different discussion to be had regarding debugging optimized code, and I have thoughts on that too. Basically, if you have a decompiler and/or logs of how the program has been transformed, that might be acceptable to those who *choose* to use optimization. I suspect a large number of people who write code have tests as well. And those tests give one some confidence that the optimization performed is correct, for the optimizer as well as the programmer using the optimizer. In sum, for me, it is more about transparency in how the program has been worked on. If I can see that, I'm cool. Others may feel differently, but that's okay. Optimization is optional. > > > A bientôt, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed...
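For what it's worth, CPython itself already ships one opt-in level of exactly this kind: `-O` strips `assert` statements and `if __debug__:` blocks, an observable behavioural change that users accept because they asked for it. The same knob is reachable programmatically through `compile()`'s `optimize` parameter:

```python
src = "assert False, 'this assert is stripped under -O'"

# optimize=0 keeps asserts, so running the code raises AssertionError.
try:
    exec(compile(src, "<example>", "exec", optimize=0))
    raised = False
except AssertionError:
    raised = True
print(raised)  # -> True

# optimize=1 corresponds to -O: the assert is compiled away entirely,
# so the same source now runs without raising.
exec(compile(src, "<example>", "exec", optimize=1))
```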
URL: From rb at dustyfeet.com Tue Jul 11 04:53:44 2017 From: rb at dustyfeet.com (Rocky Bernstein) Date: Tue, 11 Jul 2017 04:53:44 -0400 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization In-Reply-To: References: Message-ID: On Tue, Jul 11, 2017 at 4:35 AM, William ML Leslie < william.leslie.ttg at gmail.com> wrote: > On 11 July 2017 at 18:22, Rocky Bernstein wrote: > > There's too much generalization and abstraction in the discussion at > least > > for me. So let me try to make this a little more concrete with real > > examples. First note that, although Python, yes, does "optimize" "if 0:" > > away, as soon as > > you do something as simple as "debug = 0; if debug:" it is confounded. > And > > in my own experience I have done this. > > > > Typically DEBUG is a global in the module, which means it can be set > from /outside/ the module, so it's not constant as far as the runtime > is concerned. Within a function it's a bit wacky sure, and is only > kept around for interactive debugging reasons afaict. > You presume to know much more about the programmer's intentions and behavior than I would care to venture into, or how and why such things could arise. > Here's a possible type for `self.a` which illustrates why > CSE/Store-Sinking on a STORE_ATTR is not valid for the example > function `foo`:
>
> class A(object):
>     def set_b(self, value):
>         print value
>         self._b = value
>     b = property(set_b)
>
> class Example(object):
>     def __init__(self):
>         self.a = A()
>
>     def foo(self):
>         self.a.b = 5
>         x = 0
>         if x:
>             self.a.b = 2
>         else:
>             return
>
> Example().foo()

Absolutely one can create pathological cases like this. I hope, though, that this isn't your normal mode of coding. I suspect that this kind of thing doesn't occur often. You know, I encountered something analogous when I first started using Python's code coverage tool, coverage, and the Python style checker flake8.
As per Mark Pilgrim's original version of Dive into Python I used:

    if __name__ == '__main__':

And I was annoyed that older versions of code coverage dinged me because that code was untested unless I took special care to run it in testing. (Which sometimes I would do.) And I suspect most people have run into *some* annoyance with flake8 and the way it dictates formatting things. So what people typically do *if* they choose to use coverage or flake8 is add comments that tell the tool not to consider that section of the code. As best as I can tell, it works out pretty well. -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Tue Jul 11 05:04:46 2017 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Tue, 11 Jul 2017 19:04:46 +1000 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization In-Reply-To: References: Message-ID: TLDR: Doing these sorts of optimisations on /python/ requires you to understand the evaluation model. You can't just do store-sinking like descriptors aren't a thing, and you can't go inlining functions like module globals aren't mutable bindings. #XXTiredCompilation On 11 July 2017 at 18:53, Rocky Bernstein wrote: > > > On Tue, Jul 11, 2017 at 4:35 AM, William ML Leslie > wrote: >> >> On 11 July 2017 at 18:22, Rocky Bernstein wrote: >> > There's too much generalization and abstraction in the discussion at >> > least >> > for me. So let me try to make this a little more concrete with real >> > examples.
First note that, although Python, yes, does "optimize" "if 0:" >> > away, as soon as >> > you do something as simple as "debug = 0; if debug:" it is confounded. >> > And >> > in my own experience I have done this. >> > >> >> Typically DEBUG is a global in the module, which means it can be set >> from /outside/ the module, so it's not constant as far as the runtime >> is concerned. Within a function it's a bit wacky sure, and is only >> kept around for interactive debugging reasons afaict. > > > You presume to know much more about the programmer's intentions and behavior > than I would care to venture into, or how and why such things could arise. > If the programmer assigns a module-level variable, then the variable should appear to be set in that module. What other intention do you suppose to give this operation? > >> >> Here's a possible type for `self.a` which illustrates why >> CSE/Store-Sinking on a STORE_ATTR is not valid for the example >> function `foo`. >> >> class A(object): >> def set_b(self, value): >> print value >> self._b = value >> b = property(set_b) >> >> class Example(object): >> def __init__(self): >> self.a = A() >> >> def foo(self): >> self.a.b = 5 >> x = 0 >> if x: >> self.a.b = 2 >> else: >> return >> >> Example().foo() > > > > Absolutely one can create pathological cases like this. I hope though this > isn't your normal mode of coding though. > Descriptors - including properties - are a language feature that people should be able to expect to work. You can't just compile their invocation out because you think it will make their program faster. > I suspect that this kind of thing doesn't occur often. I don't think you'll find many in industry that agree with you. Have you *seen* the level of magic behind django, flask, pandas? Have you ever had people bring their super-obscure bugs to you when they run their program with your runtime? 
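The descriptor point is easy to demonstrate without any framework magic: an attribute store that looks like a dead write is actually a method call, so "optimizing it away" is observable (class names here are illustrative):

```python
class Logged(object):
    def __init__(self):
        self.history = []

    def _set_b(self, value):
        # Every store to .b runs this method; STORE_ATTR is not
        # "just a memory write" once a descriptor is involved.
        self.history.append(value)

    b = property(fset=_set_b)

obj = Logged()
obj.b = 5   # looks dead: the value is immediately overwritten...
obj.b = 2   # ...but removing the first store would lose an append
print(obj.history)  # -> [5, 2]
```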
> You know, I > encountered something analogous when I first started using Python's code > coverage tool, coverage, and the Python style checker flake8. As per Mark > Pilgram's original version of Dive into Python I used: > > if __name__ == '__main__': > > And I was annoyed that older versions of code coverage dinged me because > that code was untested code unless I took special care to run it in testing. > (Which sometimes I would do). > PSA: don't run your tests directly, use a test runner, and say goodbye to all the awful hacks around getting your path setup correctly and bugs when the tests run against the installed version rather than the version you are editing. -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. From cfbolz at gmx.de Tue Jul 11 05:28:00 2017 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 11 Jul 2017 11:28:00 +0200 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization In-Reply-To: References: Message-ID: <0ae32617-c4c7-495a-6d66-621c0c6705db@gmx.de> On 11/07/17 10:37, Rocky Bernstein wrote: > Sure but that's a straw argument and has a lot of packed opinions in it. > Things like "run my program exactly like I expect" is loaded with > opinions. As has already been noted, Python removes "if 0:" by > default. When I first saw that, I thought it was cool, but honestly it > wasn't "exactly as I expect" because I wanted my decompiler to recreate > the source code as written which helps me in testing and lo and behold I > was getting something different :-) You know what? I got use to it. I > had to change my opinion of what "exactly as I expect" meant slightly. 
Actually, to me this would be an argument to remove the "if 0:" optimization ;-). Cheers, Carl Friedrich From william.leslie.ttg at gmail.com Tue Jul 11 05:36:38 2017 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Tue, 11 Jul 2017 19:36:38 +1000 Subject: [pypy-dev] Static bytecode instruction optimization and pypy (JIT?) optimization In-Reply-To: <0ae32617-c4c7-495a-6d66-621c0c6705db@gmx.de> References: <0ae32617-c4c7-495a-6d66-621c0c6705db@gmx.de> Message-ID: On 11 July 2017 at 19:28, Carl Friedrich Bolz wrote: > On 11/07/17 10:37, Rocky Bernstein wrote: >> Sure but that's a straw argument and has a lot of packed opinions in it. >> Things like "run my program exactly like I expect" is loaded with >> opinions. As has already been noted, Python removes "if 0:" by >> default. When I first saw that, I thought it was cool, but honestly it >> wasn't "exactly as I expect" because I wanted my decompiler to recreate >> the source code as written which helps me in testing and lo and behold I >> was getting something different :-) You know what? I got used to it. I >> had to change my opinion of what "exactly as I expect" meant slightly. > > Actually, to me this would be an argument to remove the "if 0:" > optimization ;-). > Not least of all because `if False:` is more idiomatic, but False can be bound to a value that is True, so the statement can't be folded! The lengths the compiler goes to in order to do what you told it (: and yet we still get bug reports that our built-in objects have methods that cpython's built-ins don't have and this makes some popular library give the wrong result. Oblig: https://twitter.com/shit_jit_says/status/446914805191172096 https://twitter.com/fijall/status/446913102190505984 -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in.
Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. From tinysnippets at gmail.com Mon Jul 10 10:03:17 2017 From: tinysnippets at gmail.com (Aleksandr Koshkin) Date: Mon, 10 Jul 2017 17:03:17 +0300 Subject: [pypy-dev] Call rpython from outside Message-ID: Sup, guys. I want my rpython function to be invokable from the outside world, specifically by Python. I have wrapped my function with entrypoint_highlevel and it appeared in the shared object. So far so good. As a first argument this function takes a pointer to a C struct, and there is a problem. I have precisely recreated this struct in RPython as a lltype.Struct (not rffi.CStruct) and annotated my entrypoint signature with this object, but it seems that some fields of the passed struct are messed up (shifted, basically). Could it be because I used Struct instead of CStruct? I am using CFFI as a binding generator. -- Kind regards, Aleksandr Koshkin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tinysnippets at gmail.com Mon Jul 10 10:15:39 2017 From: tinysnippets at gmail.com (Aleksandr Koshkin) Date: Mon, 10 Jul 2017 17:15:39 +0300 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: Here is a link to the function that bugs me: https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110 I am using these headers for CFFI: https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin : > Sup, guys. > I want my rpython function to be invokable from the outside world, specifically > by Python. I have wrapped my function with entrypoint_highlevel and it > appeared in the shared object. So far so good. As a first argument this > function takes a pointer to a C struct, and there is a problem.
I have > precisely recreated this struct in RPython as a lltypes.Struct (not > rffi.CStruct) and annotated by this object my entrypoint signature, but it > seams that some fields of the passed struct are messed up (shifted > basically). Could it be because I used Struct instead of CStruct? I am > using CFFI as a binding generator. > > -- > Kind regards, Aleksandr Koshkin. > -- Kind regards, Aleksandr Koshkin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Jul 11 10:49:53 2017 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 11 Jul 2017 18:49:53 +0400 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: lltype.Struct is a GC-managed struct, you don't want to have this as a part of API (use CStruct) On Mon, Jul 10, 2017 at 6:15 PM, Aleksandr Koshkin wrote: > Here is a link to a function that buggs me. > https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110 > I am using this headers for CFFI: > https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h > > 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin : >> >> Sup, guys. >> I want my rpython function to be invokable from outside world specifically >> be python. I have wrapped my function with entrypoint_highlevel and it >> appeared in shared object. So far so good. As a first argument this function >> takes a pointer to a C struct, and there is a problem. I have precisely >> recreated this struct in RPython as a lltypes.Struct (not rffi.CStruct) and >> annotated by this object my entrypoint signature, but it seams that some >> fields of the passed struct are messed up (shifted basically). Could it be >> because I used Struct instead of CStruct? I am using CFFI as a binding >> generator. >> >> -- >> Kind regards, Aleksandr Koshkin. > > > > > -- > Kind regards, Aleksandr Koshkin. 
> > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From fijall at gmail.com Tue Jul 11 11:20:36 2017 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 11 Jul 2017 19:20:36 +0400 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: Sorry wrong, lltype.GcStruct is GC managed, lltype.Struct should work. However, please use rffi.CStruct (as it's better defined) and especially rffi.CArray, since lltype.Array contains length field On Tue, Jul 11, 2017 at 6:49 PM, Maciej Fijalkowski wrote: > lltype.Struct is a GC-managed struct, you don't want to have this as a > part of API (use CStruct) > > On Mon, Jul 10, 2017 at 6:15 PM, Aleksandr Koshkin > wrote: >> Here is a link to a function that buggs me. >> https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110 >> I am using this headers for CFFI: >> https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h >> >> 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin : >>> >>> Sup, guys. >>> I want my rpython function to be invokable from outside world specifically >>> be python. I have wrapped my function with entrypoint_highlevel and it >>> appeared in shared object. So far so good. As a first argument this function >>> takes a pointer to a C struct, and there is a problem. I have precisely >>> recreated this struct in RPython as a lltypes.Struct (not rffi.CStruct) and >>> annotated by this object my entrypoint signature, but it seams that some >>> fields of the passed struct are messed up (shifted basically). Could it be >>> because I used Struct instead of CStruct? I am using CFFI as a binding >>> generator. >>> >>> -- >>> Kind regards, Aleksandr Koshkin. >> >> >> >> >> -- >> Kind regards, Aleksandr Koshkin. 
>> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> From fijall at gmail.com Tue Jul 11 12:15:31 2017 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 11 Jul 2017 20:15:31 +0400 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: you're missing #includes On Tue, Jul 11, 2017 at 8:00 PM, Aleksandr Koshkin wrote: > Thanks for your reply. I have reworked my code a bit - now it uses CStruct > instead of Struct. > https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L92 > Now it fails with a rather obscure error https://pastebin.com/MZkni9bU > Anyway, Maciej, see you at PyConRu 17) > > 2017-07-11 18:20 GMT+03:00 Maciej Fijalkowski : >> >> Sorry wrong, lltype.GcStruct is GC managed, lltype.Struct should work. >> >> However, please use rffi.CStruct (as it's better defined) and >> especially rffi.CArray, since lltype.Array contains length field >> >> On Tue, Jul 11, 2017 at 6:49 PM, Maciej Fijalkowski >> wrote: >> > lltype.Struct is a GC-managed struct, you don't want to have this as a >> > part of API (use CStruct) >> > >> > On Mon, Jul 10, 2017 at 6:15 PM, Aleksandr Koshkin >> > wrote: >> >> Here is a link to a function that buggs me. >> >> https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110 >> >> I am using this headers for CFFI: >> >> https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h >> >> >> >> 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin : >> >>> >> >>> Sup, guys. >> >>> I want my rpython function to be invokable from outside world >> >>> specifically >> >>> be python. I have wrapped my function with entrypoint_highlevel and it >> >>> appeared in shared object. So far so good. As a first argument this >> >>> function >> >>> takes a pointer to a C struct, and there is a problem. 
I have >> >>> precisely >> >>> recreated this struct in RPython as a lltypes.Struct (not >> >>> rffi.CStruct) and >> >>> annotated by this object my entrypoint signature, but it seams that >> >>> some >> >>> fields of the passed struct are messed up (shifted basically). Could >> >>> it be >> >>> because I used Struct instead of CStruct? I am using CFFI as a binding >> >>> generator. >> >>> >> >>> -- >> >>> Kind regards, Aleksandr Koshkin. >> >> >> >> >> >> >> >> >> >> -- >> >> Kind regards, Aleksandr Koshkin. >> >> >> >> _______________________________________________ >> >> pypy-dev mailing list >> >> pypy-dev at python.org >> >> https://mail.python.org/mailman/listinfo/pypy-dev >> >> > > > > > -- > Kind regards, Aleksandr Koshkin. From tinysnippets at gmail.com Tue Jul 11 12:00:24 2017 From: tinysnippets at gmail.com (Aleksandr Koshkin) Date: Tue, 11 Jul 2017 19:00:24 +0300 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: Thanks for your reply. I have reworked my code a bit - now it uses CStruct instead of Struct. https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L92 Now it fails with a rather obscure error https://pastebin.com/MZkni9bU Anyway, Maciej, see you at PyConRu 17) 2017-07-11 18:20 GMT+03:00 Maciej Fijalkowski : > Sorry wrong, lltype.GcStruct is GC managed, lltype.Struct should work. > > However, please use rffi.CStruct (as it's better defined) and > especially rffi.CArray, since lltype.Array contains length field > > On Tue, Jul 11, 2017 at 6:49 PM, Maciej Fijalkowski > wrote: > > lltype.Struct is a GC-managed struct, you don't want to have this as a > > part of API (use CStruct) > > > > On Mon, Jul 10, 2017 at 6:15 PM, Aleksandr Koshkin > > wrote: > >> Here is a link to a function that buggs me. 
> >> https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110 > >> I am using this headers for CFFI: > >> https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h > >> > >> 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin : > >>> > >>> Sup, guys. > >>> I want my rpython function to be invokable from outside world > specifically > >>> be python. I have wrapped my function with entrypoint_highlevel and it > >>> appeared in shared object. So far so good. As a first argument this > function > >>> takes a pointer to a C struct, and there is a problem. I have precisely > >>> recreated this struct in RPython as a lltypes.Struct (not > rffi.CStruct) and > >>> annotated by this object my entrypoint signature, but it seams that > some > >>> fields of the passed struct are messed up (shifted basically). Could > it be > >>> because I used Struct instead of CStruct? I am using CFFI as a binding > >>> generator. > >>> > >>> -- > >>> Kind regards, Aleksandr Koshkin. > >> > >> > >> > >> > >> -- > >> Kind regards, Aleksandr Koshkin. > >> > >> _______________________________________________ > >> pypy-dev mailing list > >> pypy-dev at python.org > >> https://mail.python.org/mailman/listinfo/pypy-dev > >> > -- Kind regards, Aleksandr Koshkin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tinysnippets at gmail.com Tue Jul 11 12:33:28 2017 From: tinysnippets at gmail.com (Aleksandr Koshkin) Date: Tue, 11 Jul 2017 19:33:28 +0300 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: So ok, I have to specify headers containing my structs and somehow push it to rpython toolchain, if I got you correctly. 0. Why? This structures are already described in the vm file as a bunch of ffi.CStruct objects. 1. If I have to, how would I do that, is there any example of embedding rpython into something? I saw an example "how to embed PyPy" but it looked a bit too PyPy specific. 
Thanks 2017-07-11 19:15 GMT+03:00 Maciej Fijalkowski : > you're missing #includes > > On Tue, Jul 11, 2017 at 8:00 PM, Aleksandr Koshkin > wrote: > > Thanks for your reply. I have reworked my code a bit - now it uses > CStruct > > instead of Struct. > > https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L92 > > Now it fails with a rather obscure error https://pastebin.com/MZkni9bU > > Anyway, Maciej, see you at PyConRu 17) > > > > 2017-07-11 18:20 GMT+03:00 Maciej Fijalkowski : > >> > >> Sorry wrong, lltype.GcStruct is GC managed, lltype.Struct should work. > >> > >> However, please use rffi.CStruct (as it's better defined) and > >> especially rffi.CArray, since lltype.Array contains length field > >> > >> On Tue, Jul 11, 2017 at 6:49 PM, Maciej Fijalkowski > >> wrote: > >> > lltype.Struct is a GC-managed struct, you don't want to have this as a > >> > part of API (use CStruct) > >> > > >> > On Mon, Jul 10, 2017 at 6:15 PM, Aleksandr Koshkin > >> > wrote: > >> >> Here is a link to a function that buggs me. > >> >> https://github.com/magniff/rere/blob/master/rere/vm/vm_main.py#L110 > >> >> I am using this headers for CFFI: > >> >> https://github.com/magniff/rere/blob/master/rere/build/vm_headers.h > >> >> > >> >> 2017-07-10 17:03 GMT+03:00 Aleksandr Koshkin >: > >> >>> > >> >>> Sup, guys. > >> >>> I want my rpython function to be invokable from outside world > >> >>> specifically > >> >>> be python. I have wrapped my function with entrypoint_highlevel and > it > >> >>> appeared in shared object. So far so good. As a first argument this > >> >>> function > >> >>> takes a pointer to a C struct, and there is a problem. I have > >> >>> precisely > >> >>> recreated this struct in RPython as a lltypes.Struct (not > >> >>> rffi.CStruct) and > >> >>> annotated by this object my entrypoint signature, but it seams that > >> >>> some > >> >>> fields of the passed struct are messed up (shifted basically). 
Could > >> >>> it be > >> >>> because I used Struct instead of CStruct? I am using CFFI as a > binding > >> >>> generator. > >> >>> > >> >>> -- > >> >>> Kind regards, Aleksandr Koshkin. > >> >> > >> >> > >> >> > >> >> > >> >> -- > >> >> Kind regards, Aleksandr Koshkin. > >> >> > >> >> _______________________________________________ > >> >> pypy-dev mailing list > >> >> pypy-dev at python.org > >> >> https://mail.python.org/mailman/listinfo/pypy-dev > >> >> > > > > > > > > > > -- > > Kind regards, Aleksandr Koshkin. > -- Kind regards, Aleksandr Koshkin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.rigo at gmail.com Sat Jul 15 02:02:01 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Sat, 15 Jul 2017 08:02:01 +0200 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: Hi Aleksandr, On 11 July 2017 at 18:33, Aleksandr Koshkin wrote: > So ok, I have to specify headers containing my structs and somehow push it > to rpython toolchain, if I got you correctly. > 0. Why? This structures are already described in the vm file as a bunch of > ffi.CStruct objects. rffi.CStruct() is used to declare the RPython interface for structs that are originally defined in C. You can use lltype.Struct(), but it's not recommended in your case because lltype.Struct() is meant to define structs in RPython where you *don't* need a precise C-level struct; for example, lltype.Struct() could rename and reorder the fields in C if it is more efficient. We don't have a direct way to declare the struct in RPython but also force it to generate exactly the C struct declaration you want, because we never needed it. You need to use rffi.CStruct() and write the struct in the .h file manually too. > 1. If I have to, how would I do that, is there any example of embedding > rpython into something? Not really. Look maybe at tests using rpython.rlib.entrypoint, like rpython/translator/c/test/test_genc:test_entrypoints. 
A bientôt, Armin. From fijall at gmail.com Sun Jul 16 07:31:55 2017 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 16 Jul 2017 13:31:55 +0200 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: Hi Armin We ended up (Aleksandr is here at pycon russia) using rffi_platform to get the exact shape of the structure from the header file. There were a few bugs in how exactly this got mapped, so it ended up being a good way to do it. On Sat, Jul 15, 2017 at 8:02 AM, Armin Rigo wrote: > Hi Aleksandr, > > On 11 July 2017 at 18:33, Aleksandr Koshkin wrote: >> So ok, I have to specify headers containing my structs and somehow push it >> to rpython toolchain, if I got you correctly. >> 0. Why? These structures are already described in the vm file as a bunch of >> ffi.CStruct objects. > > rffi.CStruct() is used to declare the RPython interface for structs > that are originally defined in C. > > You can use lltype.Struct(), but it's not recommended in your case > because lltype.Struct() is meant to define structs in RPython where > you *don't* need a precise C-level struct; for example, > lltype.Struct() could rename and reorder the fields in C if it is more > efficient. > > We don't have a direct way to declare the struct in RPython but also > force it to generate exactly the C struct declaration you want, > because we never needed it. You need to use rffi.CStruct() and write > the struct in the .h file manually too. > >> 1. If I have to, how would I do that, is there any example of embedding >> rpython into something? > > Not really. Look maybe at tests using rpython.rlib.entrypoint, like > rpython/translator/c/test/test_genc:test_entrypoints. > > > A bientôt, > > Armin. From tinysnippets at gmail.com Sun Jul 16 07:35:14 2017 From: tinysnippets at gmail.com (Aleksandr Koshkin) Date: Sun, 16 Jul 2017 14:35:14 +0300 Subject: [pypy-dev] Call rpython from outside In-Reply-To: References: Message-ID: Great! On 16 Jul 2017 at 2:32 PM,
"Maciej Fijalkowski" ???????: > Hi Armin > > We ended up (Aleksandr is here at pycon russia) using rffi_platform to > get the exact shape of the structure from the header file. There were > a few bugs how exactly this got mapped, so it ended up being a good > way to do it. > > On Sat, Jul 15, 2017 at 8:02 AM, Armin Rigo wrote: > > Hi Aleksandr, > > > > On 11 July 2017 at 18:33, Aleksandr Koshkin > wrote: > >> So ok, I have to specify headers containing my structs and somehow push > it > >> to rpython toolchain, if I got you correctly. > >> 0. Why? This structures are already described in the vm file as a bunch > of > >> ffi.CStruct objects. > > > > rffi.CStruct() is used to declare the RPython interface for structs > > that are originally defined in C. > > > > You can use lltype.Struct(), but it's not recommended in your case > > because lltype.Struct() is meant to define structs in RPython where > > you *don't* need a precise C-level struct; for example, > > lltype.Struct() could rename and reorder the fields in C if it is more > > efficient. > > > > We don't have a direct way to declare the struct in RPython but also > > force it to generate exactly the C struct declaration you want, > > because we never needed it. You need to use rffi.CStruct() and write > > the struct in the .h file manually too. > > > >> 1. If I have to, how would I do that, is there any example of embedding > >> rpython into something? > > > > Not really. Look maybe at tests using rpython.rlib.entrypoint, like > > rpython/translator/c/test/test_genc:test_entrypoints. > > > > > > A bient?t, > > > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlavrijsen at lbl.gov Tue Jul 18 17:33:03 2017 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Tue, 18 Jul 2017 14:33:03 -0700 (PDT) Subject: [pypy-dev] okay to rename cppyy -> _cppyy Message-ID: Hi, any objections to renaming cppyy into _cppyy? 
I want to be able to do a straight 'pip install cppyy' and then use it w/o further gymnastics (this works today for CPython), but then I can't have 'cppyy' be a built-in module. (You can pip install PyPy-cppyy-backend, but then you'd still have to deal with LD_LIBRARY_PATH and certain cppyy features are PyPy version dependent even as they need not be as they are pure Python.) The pip-installed cppyy will still use the built-in _cppyy for the PyPy specific parts (low-level manipulations etc.). I'm also moving the cppyy documentation out of the pypy documentation and placing it on its own (http://cppyy.readthedocs.io/), given that the CPython side of things now works, too. Yes, no, conditional? Thanks, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From fijall at gmail.com Tue Jul 18 17:56:39 2017 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 19 Jul 2017 00:56:39 +0300 Subject: [pypy-dev] okay to rename cppyy -> _cppyy In-Reply-To: References: Message-ID: that's how cffi works FYI. +1 from me On Wed, Jul 19, 2017 at 12:33 AM, wrote: > Hi, > > any objections to renaming cppyy into _cppyy? > > I want to be able to do a straight 'pip install cppyy' and then use it > w/o further gymnastics (this works today for CPython), but then I can't > have 'cppyy' be a built-in module. > > (You can pip install PyPy-cppyy-backend, but then you'd still have to deal > with LD_LIBRARY_PATH and certain cppyy features are PyPy version dependent > even as they need not be as they are pure Python.) > > The pip-installed cppyy will still use the built-in _cppyy for the PyPy > specific parts (low-level manipulations etc.). > > I'm also moving the cppyy documentation out of the pypy documentation and > placing it on its own (http://cppyy.readthedocs.io/), given that the > CPython side of things now works, too. > > Yes, no, conditional? 
> > Thanks, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From omer.drow at gmail.com Wed Jul 19 02:16:31 2017 From: omer.drow at gmail.com (Omer Katz) Date: Wed, 19 Jul 2017 06:16:31 +0000 Subject: [pypy-dev] okay to rename cppyy -> _cppyy In-Reply-To: References: Message-ID: +1. Great work! On Wed, 19 Jul 2017 at 0:58, Maciej Fijalkowski wrote: > that's how cffi works FYI. +1 from me > > On Wed, Jul 19, 2017 at 12:33 AM, wrote: > > Hi, > > > > any objections to renaming cppyy into _cppyy? > > > > I want to be able to do a straight 'pip install cppyy' and then use it > > w/o further gymnastics (this works today for CPython), but then I can't > > have 'cppyy' be a built-in module. > > > > (You can pip install PyPy-cppyy-backend, but then you'd still have to > deal > > with LD_LIBRARY_PATH and certain cppyy features are PyPy version > dependent > > even as they need not be as they are pure Python.) > > > > The pip-installed cppyy will still use the built-in _cppyy for the PyPy > > specific parts (low-level manipulations etc.). > > > > I'm also moving the cppyy documentation out of the pypy documentation and > > placing it on its own (http://cppyy.readthedocs.io/), given that the > > CPython side of things now works, too. > > > > Yes, no, conditional? > > > > Thanks, > > Wim > > -- > > WLavrijsen at lbl.gov -- +1 (510) 486 6411 > -- www.lavrijsen.net > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anto.cuni at gmail.com Wed Jul 19 05:25:17 2017 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 19 Jul 2017 11:25:17 +0200 Subject: [pypy-dev] okay to rename cppyy -> _cppyy In-Reply-To: References: Message-ID: +1 On Tue, Jul 18, 2017 at 11:33 PM, wrote: > Hi, > > any objections to renaming cppyy into _cppyy? > > I want to be able to do a straight 'pip install cppyy' and then use it > w/o further gymnastics (this works today for CPython), but then I can't > have 'cppyy' be a built-in module. > > (You can pip install PyPy-cppyy-backend, but then you'd still have to deal > with LD_LIBRARY_PATH and certain cppyy features are PyPy version dependent > even as they need not be as they are pure Python.) > > The pip-installed cppyy will still use the built-in _cppyy for the PyPy > specific parts (low-level manipulations etc.). > > I'm also moving the cppyy documentation out of the pypy documentation and > placing it on its own (http://cppyy.readthedocs.io/), given that the > CPython side of things now works, too. > > Yes, no, conditional? > > Thanks, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Wed Jul 19 06:15:51 2017 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 19 Jul 2017 12:15:51 +0200 Subject: [pypy-dev] cpyext performance Message-ID: Hello, recently I have been playing a bit with cpyext, to see if there is low-hanging fruit to be picked to improve the performance. I didn't get any real results but I think it's interesting to share my findings. The benchmark I'm using is here: https://github.com/antocuni/cpyext-benchmarks it contains a simple C extension defining three methods, one for each METH_NOARGS, METH_O and METH_VARARGS flags.
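The linked bench.py is not reproduced in the thread; a minimal stand-in with the same shape, timing each calling convention in a tight loop, might look like this (using built-in callables instead of the C extension's methods, so absolute numbers are not comparable to the ones in the thread):

```python
import time

def bench(label, fn, *args, n=100_000):
    # Call fn(*args) n times and report the wall-clock total,
    # mirroring the noargs/onearg/varargs rows in the thread.
    start = time.time()
    for _ in range(n):
        fn(*args)
    elapsed = time.time() - start
    print("%-8s: %.2f secs" % (label, elapsed))
    return elapsed

# Stand-ins for the extension's simple.noargs / onearg / varargs:
bench("noargs", dict)            # METH_NOARGS-style call
bench("onearg", abs, 1)          # METH_O-style call
bench("varargs", max, 1, 2, 3)   # METH_VARARGS-style call
```

The same harness run under both interpreters is what produces comparison tables like the ones quoted next.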
So first, the results with CPython and PyPy 5.8: $ python bench.py noargs : 0.78 secs onearg : 0.89 secs varargs: 1.05 secs $ pypy bench.py noargs : 1.67 secs onearg : 2.13 secs varargs: 4.89 secs Then, I tried my cpyext-jit branch; this branch does two things: 1) it makes cpyext visible to the JIT, and add enough @jit.dont_look_inside so that it actually compiles 2) merges part of the cpyext-callopt branch, up to rev 9cbc8bd76297 (more on this later): this adds fast paths for METH_NOARGS and METH_O to avoid going through the slow __args__.unpack(): $ pypy-cpyext-jit bench.py noargs : 0.30 secs onearg : 0.31 secs varargs: 4.90 secs So, apparently this is enough to greatly speedup the calls, and be even faster than CPython. Note that "onearg" calls "simple.onearg(None)". However, things become more complicated as soon as I start passing various kind of objects to onearg(): $ pypy bench_oneargs.py # pypy 5.8 onearg(None): 2.09 secs onearg(1) : 2.07 secs onearg(i) : 4.98 secs onearg(i%2) : 4.92 secs onearg(X) : 2.13 secs onearg((1,)): 2.30 secs onearg((i,)): 9.80 secs $ pypy-cpyext-jit bench_oneargs.py onearg(None): 0.30 secs onearg(1) : 0.30 secs onearg(i) : 2.52 secs onearg(i%2) : 2.56 secs onearg(X) : 0.30 secs onearg((1,)): 0.30 secs onearg((i,)): 7.45 secs so, the call optimization still helps, but as soon as we need to convert one object from pypy to cpython we are horribly slow. However, it is interesting to note that: 1) if we pass a constant object, we are fast: None, 1, (1,) 2) if we pass X (which is a global X=100), we are still fast 3) any other object which is created on the fly is slow Looking at the traces, they look more or less the same in the three cases, so I don't really understand what is the difference. 
Finally, about the branch cpyext-callopt, which was started in Leysin by Richard, Armin and me: I am not sure to fully understand the purpose of dbba78b270fd: apparently, the optimization done in 9cbc8bd76297 seems to work well, so what am I missing? ciao, Anto -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlavrijsen at lbl.gov Wed Jul 19 17:09:12 2017 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Wed, 19 Jul 2017 14:09:12 -0700 (PDT) Subject: [pypy-dev] okay to rename cppyy -> _cppyy In-Reply-To: References: Message-ID: Maciej, yes, I know. :) I used and use CFFI for inspiration. Now done. The package structure has become: _cppyy for PyPy / \ cppyy (shared) cppyy-backend (shared) \ / CPyCppyy for CPython For pip-purposes, the cppyy wheel is not shared (but it's cheap to install). The backend bdist_wheel can be re-used for PyPy, CPython2, and CPython3. $ virtualenv -p .../pypy/pypy/goal/pypy-c pypy-dev $ MAKE_NPROCS=32 pip install cppyy ... $ python Python 2.7.13 (0b40d2587588, Jul 19 2017, 00:43:13) [PyPy 5.9.0-alpha0 with GCC 5.4.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. using my private settings ... And now for something completely different: ``Therefore, specific information, I was in an ideal context, I had to realize the faith'' >>>> import cppyy >>>> cppyy.cppdef('void say_hello() { cout << "Hello, World!" << endl; }') >>>> cppyy.gbl.say_hello() Hello, World! >>>> The documentation and (shared) app-level tests live in cppyy. I have some more work to do there. For PyPy-5.7 and -5.8, this can be done: >>>> from cppyy_backend import loader >>>> c = loader.load_cpp_backend() >>>> import cppyy and things work from there (albeit not niceties such as cppdef() above, but all the genreflex etc. stuff works). Omer, Anto, thanks for pushing. :) I'm actually pretty pleased with this setup: it all works nicer than I expected. 
Life being what it is, the last time I looked into distutils was apparently Oct 9, 2007 (I just checked my archives). Things have improved quite a bit since then ... Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From me at manueljacob.de Wed Jul 19 19:23:37 2017 From: me at manueljacob.de (Manuel Jacob) Date: Thu, 20 Jul 2017 01:23:37 +0200 Subject: [pypy-dev] okay to rename cppyy -> _cppyy In-Reply-To: References: Message-ID: Sounds like a good idea. Since this is similar to what is done with cffi, I have a few questions: How do you make sure that the pure Python part remains compatible with the backend? Can we include the pure Python part in PyPy, in the way as it's done for cffi? On 2017-07-18 23:33, wlavrijsen at lbl.gov wrote: > Hi, > > any objections to renaming cppyy into _cppyy? > > I want to be able to do a straight 'pip install cppyy' and then use it > w/o further gymnastics (this works today for CPython), but then I can't > have 'cppyy' be a built-in module. > > (You can pip install PyPy-cppyy-backend, but then you'd still have to > deal > with LD_LIBRARY_PATH and certain cppyy features are PyPy version > dependent > even as they need not be as they are pure Python.) > > The pip-installed cppyy will still use the built-in _cppyy for the PyPy > specific parts (low-level manipulations etc.). > > I'm also moving the cppyy documentation out of the pypy documentation > and > placing it on its own (http://cppyy.readthedocs.io/), given that the > CPython side of things now works, too. > > Yes, no, conditional? > > Thanks, > Wim From wlavrijsen at lbl.gov Thu Jul 20 00:14:27 2017 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Wed, 19 Jul 2017 21:14:27 -0700 (PDT) Subject: [pypy-dev] okay to rename cppyy -> _cppyy In-Reply-To: References: Message-ID: Manuel, > How do you make sure that the pure Python part remains compatible with the > backend? 
I'm thinking of selecting the versions explicitly in the front cppyy package: it already has different dependencies selected based on interpreter: PyPy or CPython. Also, the backend "API" is small and stable, so I expect a large number of versions to work well, including older ones, moving forward. If you have a better idea, then I'm all ears! I'm new to this packaging ... > Can we include the pure Python part in PyPy, in the way as it's done for > cffi? Yes, albeit that it is still necessary to install cppyy_backend (contains a patched version of Clang/LLVM), so I don't see much net gain. And yes, it is technically possible to write a bindings generator that only depends on LLVM during offline bindings generation, not at run-time. But then you'd just have SWIG (albeit with a standards-compliant parser). Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From jaymin.dabhi at vivaldi.net Thu Jul 20 00:45:40 2017 From: jaymin.dabhi at vivaldi.net (JAYMIN DABHI) Date: Thu, 20 Jul 2017 04:45:40 +0000 Subject: [pypy-dev] (no subject) Message-ID: <0418e7696b984a78dd0d4b761799ca7a@webmail.vivaldi.net> Hi All, I am trying to integrate PyPy in my current Yocto (Jethro branch) build for my i.MX6UL processor based development board. The board has "armv7a" (cortexa7) architecture. I am using "meta-pypy" layer from here (https://github.com/mzakharo/meta-pypy (https://github.com/mzakharo/meta-pypy)) and inherit the pypy package in my local.conf. But, it gives me error during bitbaking. I am bitbaking the "core-image-base". The error is as below : ERROR: Fetcher failure: Unable to find file file://pypy-4.0.1-cortexa7.tar.bz2 (file://pypy-4.0.1-cortexa7.tar.bz2/) anywhere. I think, it can't find the package "pypy-4.0.1-cortexa7.tar.bz2" in pypy directory. What should be the other possible reason? and How it can be solved? Suggestions are welcome. Thanks, -JAYMIN -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaymin.dabhi at vivaldi.net Thu Jul 20 00:47:22 2017 From: jaymin.dabhi at vivaldi.net (JAYMIN DABHI) Date: Thu, 20 Jul 2017 04:47:22 +0000 Subject: [pypy-dev] Error while integrating PyPy module in Yocto (Jethro branch) build. Message-ID: <8ee4dc73add971a07fd2a5a9f31f12a9@webmail.vivaldi.net> Hi All, I am trying to integrate PyPy in my current Yocto (Jethro branch) build for my i.MX6UL processor based development board. The board has "armv7a" (cortexa7) architecture. I am using "meta-pypy" layer from here (https://github.com/mzakharo/meta-pypy (https://github.com/mzakharo/meta-pypy)) and inherit the pypy package in my local.conf. But, it gives me error during bitbaking. I am bitbaking the "core-image-base". The error is as below : ERROR: Fetcher failure: Unable to find file file://pypy-4.0.1-cortexa7.tar.bz2 (file://pypy-4.0.1-cortexa7.tar.bz2/) anywhere. I think, it can't find the package "pypy-4.0.1-cortexa7.tar.bz2" in pypy directory. What should be the other possible reason? and How it can be solved? Suggestions are welcome. Thanks, - JAYMIN -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.rigo at gmail.com Thu Jul 20 03:38:39 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Thu, 20 Jul 2017 09:38:39 +0200 Subject: [pypy-dev] okay to rename cppyy -> _cppyy In-Reply-To: References: Message-ID: Hi Wim, On 18 July 2017 at 23:33, wrote: > I want to be able to do a straight 'pip install cppyy' and then use it > w/o further gymnastics (this works today for CPython) Great, I'm +1 on that as well. If the newly-named '_cppyy' module is more minimal than '_cffi_backend' and less likely to change, then yes, I think it's a good plan to include only '_cppyy' with PyPy and distribute the rest with 'pip install'. 
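The cffi-style split being agreed on in the cppyy thread boils down to a pure-Python front end that picks up a low-level backend under a private name at import time. A rough sketch of that dispatch (the CPython-side name ``CPyCppyy`` is taken from the package diagram earlier in the thread; the helper itself is hypothetical, not the actual cppyy code):

```python
import importlib.util

def pick_backend():
    # On PyPy the low-level half ships as the built-in ``_cppyy``;
    # on CPython a pip-installed binding layer (``CPyCppyy`` in the
    # thread's diagram) fills the same role.
    if importlib.util.find_spec("_cppyy") is not None:
        return "_cppyy"
    return "CPyCppyy"
```

The pip-installed front end would then do importlib.import_module(pick_backend()) and expose one public cppyy API on top of either backend.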
A bient?t, Armin From cfbolz at gmx.de Thu Jul 20 04:14:02 2017 From: cfbolz at gmx.de (Carl Friedrich Bolz-Tereick) Date: Thu, 20 Jul 2017 10:14:02 +0200 Subject: [pypy-dev] Error while integrating PyPy module in Yocto (Jethro branch) build. In-Reply-To: <8ee4dc73add971a07fd2a5a9f31f12a9@webmail.vivaldi.net> References: <8ee4dc73add971a07fd2a5a9f31f12a9@webmail.vivaldi.net> Message-ID: <87D13D55-2AA3-40EC-A211-63BE2DB67114@gmx.de> Hi Jaymin, Welcome! This looks like an error that the meta-pypy build scripts produce? So it might be more productive to talk to them, since we don't know how that works. Cheers, Carl Friedrich On July 20, 2017 6:47:22 AM GMT+02:00, JAYMIN DABHI wrote: >Hi All, > > I am trying to integrate PyPy in my current Yocto (Jethro branch) >build for my i.MX6UL processor based development board. The board has >"armv7a" (cortexa7) architecture. > I am using "meta-pypy" layer from here >(https://github.com/mzakharo/meta-pypy >(https://github.com/mzakharo/meta-pypy)) and inherit the pypy package >in my local.conf. > But, it gives me error during bitbaking. I am bitbaking the >"core-image-base". > > The error is as below : > > ERROR: Fetcher failure: Unable to find file >file://pypy-4.0.1-cortexa7.tar.bz2 >(file://pypy-4.0.1-cortexa7.tar.bz2/) anywhere. > > I think, it can't find the package "pypy-4.0.1-cortexa7.tar.bz2" in >pypy directory. > > What should be the other possible reason? and How it can be solved? >Suggestions are welcome. Thanks, >- >JAYMIN -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Thu Jul 20 07:08:02 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 20 Jul 2017 13:08:02 +0200 Subject: [pypy-dev] Hide implementation details of the CPython C API Message-ID: Hi, I'm not sure that PyPy developers are following the python-ideas mailing list, so here is a copy of my email. I really want your feedback to know if my idea is stupid and if it's feasible. 
You have a much better experience with C API compatibility than me ;-) LWN already wrote an article on the python-ideas discussion: https://lwn.net/Articles/727973/ or if you are not subscribed: https://lwn.net/SubscriberLink/727973/9b2004c4fe039833/ My PEP draft: https://github.com/haypo/misc/blob/master/python/pep_c_api.rst -- Hi, This is the first draft of a big (?) project to prepare CPython to be able to "modernize" its implementation. The proposed changes should make it possible to make CPython more efficient in the future. The optimizations themselves are out of scope for the PEP, but some examples are listed to explain why these changes are needed. For the background, see also my talk at the previous Python Language Summit at Pycon US, Portland OR: "Keeping Python competitive" https://lwn.net/Articles/723752/#723949 "Python performance", slides (PDF): https://github.com/haypo/conf/raw/master/2017-PyconUS/summit.pdf Since this is really the first draft, I didn't assign a PEP number to it yet. I prefer to wait for a first feedback round. Victor PEP: xxx Title: Hide implementation details of the C API Version: $Revision$ Last-Modified: $Date$ Author: Victor Stinner , Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 31-May-2017 Abstract ======== Modify the C API to remove implementation details. Add an opt-in option to compile C extensions to get the old full API with implementation details. The modified C API makes it easier to experiment with new optimizations: * Indirect Reference Counting * Remove Reference Counting, New Garbage Collector * Remove the GIL * Tagged pointers Reference counting may be emulated in a future implementation for backward compatibility.
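Of the optimizations listed in the abstract, tagged pointers are the easiest to sketch in isolation: steal the low bit of an aligned pointer to store small integers inline, with no heap object at all. Illustrative C only, not CPython code, and it assumes the usual arithmetic right shift for signed integers:

```c
#include <assert.h>
#include <stdint.h>

/* Heap objects are at least 2-byte aligned, so bit 0 of a real object
 * pointer is always 0; setting bit 0 can therefore mark an integer
 * stored inline in the "pointer" itself, with no heap allocation. */
typedef uintptr_t tagged_t;

static tagged_t tag_int(intptr_t v)   { return ((uintptr_t)v << 1) | 1u; }
static int      is_tagged(tagged_t t) { return (int)(t & 1u); }
static intptr_t untag_int(tagged_t t) { return (intptr_t)t >> 1; }
```

Code that dereferences object pointers directly (as the current macros do) breaks under this scheme, which is exactly why the PEP wants the layout hidden behind functions first.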
Rationale ========= History of CPython forks ------------------------ Over the last 10 years, CPython was forked multiple times to attempt different CPython enhancements: * Unladen Swallow: add a JIT compiler based on LLVM * Pyston: add a JIT compiler based on LLVM (CPython 2.7 fork) * Pyjion: add a JIT compiler based on Microsoft CLR * Gilectomy: remove the Global Interpreter Lock nicknamed "GIL" * etc. Sadly, none of these projects has been merged back into CPython. Unladen Swallow lost its funding from Google, Pyston lost its funding from Dropbox, Pyjion is developed in the limited spare time of two Microsoft employees. One technically hard issue which blocked these projects from really unleashing their power is the C API of CPython. Many old technical choices of CPython are hardcoded in this API: * reference counting * garbage collector * C structures like PyObject which contain headers for reference counting and the garbage collector * specific memory allocators * etc. PyPy ---- PyPy uses more efficient structures and a more efficient garbage collector without reference counting. Thanks to that (but also many other optimizations), PyPy succeeded in running Python code up to 5x faster than CPython. Plan made of multiple small steps ================================= Step 1: split Include/ into subdirectories ------------------------------------------ Split the ``Include/`` directory of CPython: * ``python`` API: ``Include/Python.h`` remains the default C API * ``core`` API: ``Include/core/Python.h`` is a new C API designed for building Python * ``stable`` API: ``Include/stable/Python.h`` is the stable ABI Expect declarations to be duplicated on purpose: ``#include`` should not be used to include files from a different API to prevent mistakes. In the past, too many functions were exposed *by mistake*, especially symbols exported to the stable ABI by mistake. At this point, ``Include/Python.h`` is not changed at all: zero risk of backward incompatibility.
The ``core`` API is the most complete API, exposing *all* implementation details and using macros for best performance. XXX should we abandon the stable ABI? Never really used by anyone. Step 2: Add an opt-in API option to tools building packages ----------------------------------------------------------- Modify Python packaging tools (distutils, setuptools, flit, pip, etc.) to add an opt-in option to choose the API: ``python``, ``core`` or ``stable``. For example, debuggers like ``vmprof`` need the ``core`` API to get full access to implementation details. XXX handle backward compatibility for packaging tools. Step 3: first pass of implementation detail removal --------------------------------------------------- Modify the ``python`` API: * Add a new ``API`` subdirectory in the Python source code which will "implement" the Python C API * Replace macros with functions. The implementation of new functions will be written in the ``API/`` directory. For example, Py_INCREF() becomes the function ``void Py_INCREF(PyObject *op)`` and its implementation will be written in the ``API`` directory. * Slowly remove more and more implementation details from this API. Modifications of this API should be driven by tests of popular third party packages like: * Django with database drivers * numpy * scipy * Pillow * lxml * etc. Compilation errors on these extensions are expected. This step should help to draw a line for the backward incompatible change. Goal: remove a few implementation details but don't break numpy and lxml. Step 4 ------ Switch the default API to the new restricted ``python`` API. Help third party projects to patch their code: don't break the "Python world". Step 5 ------ Continue Step 3: remove even more implementation details. Long-term goal to complete this PEP: Remove *all* implementation details, remove all structures and macros.
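The macro-to-function change in Step 3 is mechanical, but it is what actually hides the struct layout: a macro expands inside each extension and bakes the offset of ob_refcnt into its binary, while a function call keeps the layout private to libpython. A toy model (simplified stand-in types, not the real CPython headers):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for CPython's object header; the real PyObject
 * has more fields, but only the refcount matters for this sketch. */
typedef struct { ptrdiff_t ob_refcnt; } obj_t;

/* Today's API: the expansion is compiled into every extension, so the
 * field's offset becomes part of the de-facto ABI. */
#define OBJ_INCREF_MACRO(op) ((op)->ob_refcnt++)

/* The PEP's Step 3: same semantics behind a call, so the layout (or
 * the whole refcounting scheme) can change underneath libpython. */
static void obj_incref(obj_t *op) { op->ob_refcnt++; }
```

With the macro, changing the object header breaks every compiled extension; with the function, only libpython needs rebuilding.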
Alternative: keep core as the default API ========================================= A smoother transition would be to not touch the existing API but work on a new API which would only be used as an opt-in option. Similar plan used by Gilectomy: opt-in option to get best performances. It would be at least two Python binaries per Python version: default compatible version, and a new faster but incompatible version. Idea: implementation of the C API supporting old Python versions? ================================================================= Open questions. Q: Would it be possible to design an external library which would work on Python 2.7, Python 3.4-3.6, and the future Python 3.7? Q: Should such library be linked to libpythonX.Y? Or even to a pythonX.Y binary which wasn't built with shared library? Q: Would it be easy to use it? How would it be downloaded and installed to build extensions? Collaboration with PyPy, IronPython, Jython and MicroPython =========================================================== XXX to be done Enhancements becoming possible thanks to a new C API ==================================================== Indirect Reference Counting --------------------------- * Replace ``Py_ssize_t ob_refcnt;`` (integer) with ``Py_ssize_t *ob_refcnt;`` (pointer to an integer). * Same change for GC headers? * Store all reference counters in a separated memory block (or maybe multiple memory blocks) Expected advantage: smaller memory footprint when using fork() on UNIX which is implemented with Copy-On-Write on physical memory pages. See also `Dismissing Python Garbage Collection at Instagram `_. Remove Reference Counting, New Garbage Collector ------------------------------------------------ If the new C API hides well all implementation details, it becomes possible to change fundamental features like how CPython tracks the lifetime of an object. 
* Remove ``Py_ssize_t ob_refcnt;`` from the PyObject structure
* Replace the current XXX garbage collector with a new tracing garbage
  collector
* Use new macros to define a variable storing an object and to set the
  value of an object
* Reimplement Py_INCREF() and Py_DECREF() on top of that using a hash
  table: object => reference counter.

XXX PyPy is only partially successful on that project; cpyext remains
very slow.

XXX Would it require an opt-in option to really limit backward
compatibility?

Remove the GIL
--------------

* Don't remove the GIL, but replace the GIL with smaller locks
* Builtin mutable types: list, set, dict
* Modules
* Classes
* etc.

Backward compatibility:

* Keep the GIL

Tagged pointers
---------------

https://en.wikipedia.org/wiki/Tagged_pointer

A common optimization, especially used for "small integers". The current
C API doesn't allow implementing tagged pointers.

Tagged pointers are used in MicroPython to reduce the memory footprint.

Note: ARM64 recently extended its address space to 48 bits, causing an
issue in LuaJIT: `47 bit address space restriction on ARM64 `_.

Misc ideas
----------

* Software Transactional Memory? See `PyPy STM `_

Idea: Multiple Python binaries
==============================

Instead of a single ``python3.7``, providing two or more binaries, as
PyPy does, would allow experimenting with changes more easily without
breaking backward compatibility.

For example, ``python3.7`` would remain the default binary with reference
counting and the current garbage collector, whereas ``fastpython3.7``
would not use reference counting and would use a new garbage collector.

It would allow "breaking backward compatibility" more quickly, and make
it even more explicit that only prepared C extensions will be compatible
with the new ``fastpython3.7``.

cffi
====

XXX

Long-term goal: "more cffi, less libpython".

Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

From wlavrijsen at lbl.gov  Thu Jul 20 11:46:38 2017
From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov)
Date: Thu, 20 Jul 2017 08:46:38 -0700 (PDT)
Subject: [pypy-dev] okay to rename cppyy -> _cppyy
In-Reply-To:
References:
Message-ID:

Armin,

> If the newly-named '_cppyy' module is more minimal than '_cffi_backend'

'more minimal' is a hard-to-define term, but a 'wc -l' shows that
_cffi_backend has 8498 lines of Python and _cppyy has 4265. Of course,
the latter uses the former, saving lots of code for it. :)

Further, _cppyy includes 470 lines in pythonify.py which could move out,
as it is not RPython, but that would make it harder to write tests, which
would have to be low-level (as in test_cppyy.py) instead of high level
(the rest).

I wrote some small benchmarks for the PyHPC '16 paper. If I find the time
to flesh that out further, I can better judge what can move out of
RPython (_cppyy) and into Python (cppyy), without loss of performance.

Best regards,
      Wim
--
WLavrijsen at lbl.gov  --  +1 (510) 486 6411  --  www.lavrijsen.net

From wlavrijsen at lbl.gov  Thu Jul 20 21:17:47 2017
From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov)
Date: Thu, 20 Jul 2017 18:17:47 -0700 (PDT)
Subject: [pypy-dev] okay to rename cppyy -> _cppyy
In-Reply-To:
References:
Message-ID:

Hi,

[replying to self]

> And yes, it is technically possible to write a bindings generator that only
> depends on LLVM during offline bindings generation, not at run-time. But
> then you'd just have SWIG (albeit with a standards-compliant parser).

it hit me yesterday that instead of generating something offline for
cppyy to read, you could also use cppyy_backend directly to generate a
pure Python module for use with cffi, as it knows about sizes, offsets,
name mangling, class relations etc., etc., etc.
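(Editorial aside: the name mangling such a generator relies on can be
sketched in a few lines. The toy mangler below follows the Itanium C++
ABI but covers only non-virtual members of a plain, non-nested,
non-template class taking int or no arguments; the ``mangle`` helper and
its "ctor" convention are inventions for illustration, not part of any
cppyy API. It reproduces exactly the symbols used in the proof of concept
that follows.)

```python
def mangle(cls, meth, int_args=0):
    """Toy Itanium C++ ABI mangler (hypothetical helper).

    Handles only: a plain class name, a method name or the
    complete-object constructor (spelled "ctor" here, encoded as C1),
    and zero or more int parameters ("i"); no parameters encodes as "v".
    """
    part = "C1" if meth == "ctor" else "%d%s" % (len(meth), meth)
    args = "i" * int_args if int_args else "v"
    return "_ZN%d%s%sE%s" % (len(cls), cls, part, args)


print(mangle("Simple", "ctor"))         # _ZN6SimpleC1Ev
print(mangle("Simple", "ctor", 1))      # _ZN6SimpleC1Ei
print(mangle("Simple", "get_data"))     # _ZN6Simple8get_dataEv
print(mangle("Simple", "set_data", 1))  # _ZN6Simple8set_dataEi
```

A real generator would of course query the backend for the mangled names
rather than recompute them, since the full mangling grammar is far more
involved.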
(The point is that cppyy_backend exposes a minimalistic C-API, which
itself can be bound with cffi, as _cppyy in fact does.)

(There are still wrappers needed for inlined functions, overloaded
operator new, virtual inheritance offsets, etc., but those could be
stuffed in a .cxx on the side of the cffi-based Python module. Can't
solve automatic template instantiation this way, though: cppyy will
always be superior.)

So, I wrote a quick proof of concept (https://bitbucket.org/wlav/quaff/src):

$ cat Simple.h
class Simple {
public:
    Simple();
    Simple(int);
    virtual ~Simple();

public:
    int get_data();
    void set_data(int);

private:
    int m_data;
};

$ cat Simple.cxx
#include "Simple.h"

Simple::Simple() : m_data(42) { }
Simple::Simple(int i) : m_data(i) { }
Simple::~Simple() { }

int  Simple::get_data()      { return m_data; }
void Simple::set_data(int i) { m_data = i; }

$ g++ -shared -o libsimple.so -fPIC Simple.cxx
$ python
>>>> from quaff import generate
>>>> generate.generate_class('simple', 'Simple', 'libsimple.so')
>>>> import simple
>>>> s1 = simple.Simple()
>>>> s1.get_data()
42
>>>> s1.set_data(13)
>>>> s1.get_data()
13
>>>> s2 = simple.Simple(99)
>>>> s2.get_data()
99
>>>>

What quaff generated looks like this:

$ cat simple.py
from cffi import FFI

_ffi = FFI()
_ffi.cdef("""
void _ZN6SimpleC1Ev(void*);
void _ZN6SimpleC1Ei(void*, int);
int _ZN6Simple8get_dataEv(void*);
void _ZN6Simple8set_dataEi(void*, int);
""")
_dll = _ffi.dlopen('libsimple.so')

class Simple(object):
    def __init__(self, *args):
        self._cpp_this = _ffi.new('char[]', 16)
        if len(args) == 0:
            _dll._ZN6SimpleC1Ev(self._cpp_this, *args)
        if len(args) == 1:
            _dll._ZN6SimpleC1Ei(self._cpp_this, *args)

    def get_data(self, *args):
        return _dll._ZN6Simple8get_dataEv(self._cpp_this, *args)

    def set_data(self, *args):
        return _dll._ZN6Simple8set_dataEi(self._cpp_this, *args)

which is nicely standalone for distribution (yes, the result is platform
specific, but no more so than a binary .so and way better than a Python
extension
module).

Best regards,
      Wim
--
WLavrijsen at lbl.gov  --  +1 (510) 486 6411  --  www.lavrijsen.net

From armin.rigo at gmail.com  Fri Jul 21 02:36:41 2017
From: armin.rigo at gmail.com (Armin Rigo)
Date: Fri, 21 Jul 2017 08:36:41 +0200
Subject: [pypy-dev] okay to rename cppyy -> _cppyy
In-Reply-To:
References:
Message-ID:

Hi Wim,

On 21 July 2017 at 03:17, wrote:
> _ffi.cdef("""
> void _ZN6SimpleC1Ev(void*);
> void _ZN6SimpleC1Ei(void*, int);
> int _ZN6Simple8get_dataEv(void*);
> void _ZN6Simple8set_dataEi(void*, int);
> """)

Ouch, I suppose :-)  Explicit C++ mangling.  The alternative would be
to use ffi.set_source() instead of ffi.dlopen(), and use a C++
compiler again to produce the cffi binding.  This reduces cppyy to a
generator emitting a cffi build script.

But as you mentioned there are additional issues, e.g. with
instantiation of templates.

A bientôt,

Armin.

From fijall at gmail.com  Fri Jul 21 02:58:36 2017
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 21 Jul 2017 08:58:36 +0200
Subject: [pypy-dev] Link to paper
Message-ID:

Hi Wim

There was a complaint on reddit, that, quoting:

"The link to the pyhpc paper does not work for me, does anyone have a
mirror? Would love to read it."

can you fix the link please?

Cheers,
fijal

From wlavrijsen at lbl.gov  Fri Jul 21 03:26:40 2017
From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov)
Date: Fri, 21 Jul 2017 00:26:40 -0700 (PDT)
Subject: [pypy-dev] Link to paper
In-Reply-To:
References:
Message-ID:

Fijal,

that's odd ... Well, I've put a copy at cern.ch and redirected both the
cppyy.readthedocs.org and README on bitbucket there:

   http://cern.ch/wlav/Cppyy_LavrijsenDutta_PyHPC16.pdf

That should do for now. The other alternative record is:

   http://dl.acm.org/citation.cfm?id=3019087

but that's not open access (at least, I don't see the pdf from home).
And slides are here: http://www.dlr.de/sc/Portaldata/15/Resources/dokumente/pyhpc2016/slides/PyHPC_2016_talk_9.pdf Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From wlavrijsen at lbl.gov Fri Jul 21 03:46:06 2017 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 21 Jul 2017 00:46:06 -0700 (PDT) Subject: [pypy-dev] okay to rename cppyy -> _cppyy In-Reply-To: References: Message-ID: Armin, > Ouch, I suppose :-) Explicit C++ mangling. The alternative would be > to use ffi.set_source() instead of ffi.dlopen(), and use a C++ > compiler again to produce the cffi binding. sure, but the mangled name saves a redirection and has less code to generate, was my point. Besides, who looks at generated code. :) Have you seen what comes out of the PyPy translator ... > But as you mentioned there are additional issues, e.g. with instantiation > of templates. And all other dynamic behaviors, incl. deriving Python classes from C++ ones for use in C++-land. (Is currently disabled, as I never implemented that for the PyPy side, only CPython (it uses the Python C-API).) Or general patch-ups (e.g. byte* <-> char*). Or auto-downcasting (favorite). Or class unique-ness. Or adding user-defined classes interactively. As said, in the end it'd be just SWIG but with a better parser (and no Python versioning issues, or new language to learn). I'd like to see the performance of a more complex case, though. E.g. when doing class matches using Python reflection such as isinstance. I've always had a hard time getting these things fast in RPython, and PyPy may have an easier time with proper Python code (another reason to reject C++ wrappers and live with mangled names). 
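(Editorial aside: a minimal harness for timing the kind of
isinstance-heavy class matching mentioned above, using only the standard
library. The class names are placeholders; absolute numbers will of
course differ between CPython and PyPy, and PyPy needs JIT warm-up before
the loop becomes representative.)

```python
import timeit


class Base(object):
    pass


class Derived(Base):
    pass


obj = Derived()

# Time a tight loop of isinstance checks against a base class; on PyPy
# the JIT should make this far cheaper than on CPython once warmed up.
seconds = timeit.timeit(lambda: isinstance(obj, Base), number=100000)
print("100000 isinstance checks: %.4f s" % seconds)
```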
Best regards,
      Wim
--
WLavrijsen at lbl.gov  --  +1 (510) 486 6411  --  www.lavrijsen.net

From armin.rigo at gmail.com  Fri Jul 21 11:03:58 2017
From: armin.rigo at gmail.com (Armin Rigo)
Date: Fri, 21 Jul 2017 17:03:58 +0200
Subject: [pypy-dev] okay to rename cppyy -> _cppyy
In-Reply-To:
References:
Message-ID:

Hi Wim,

On 21 July 2017 at 09:46, wrote:
> had a hard time getting these things fast in RPython, and PyPy may have an
> easier time with proper Python code (another reason to reject C++ wrappers
> and live with mangled names).

Comparing ABI mode (ffi.dlopen) and API mode (ffi.set_source) for
performance, people often fail to realize that the API mode is generally
much faster because calls don't go through libffi.  That is always the
case on CPython.  On PyPy, the JIT can compile all calls in API mode, but
only "simple enough" calls in ABI mode, making performance more
predictable.

For the cases where the call is "simple enough", you might think that the
API mode adds the marginal cost of a wrapper, but that is mostly wrong
too: the JIT replaces a call to a JMP instruction with a call to the jmp
target.

A bientôt,

Armin.

From wlavrijsen at lbl.gov  Fri Jul 21 12:08:32 2017
From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov)
Date: Fri, 21 Jul 2017 09:08:32 -0700 (PDT)
Subject: [pypy-dev] okay to rename cppyy -> _cppyy
In-Reply-To:
References:
Message-ID:

Armin,

> people often fail to realize

count me among those. :) Yes, that is surprising.

> For the cases where the call is "simple enough", you might think that the
> API mode adds the marginal cost of a wrapper

To be sure, if the code is really simple, it's typically inlined on the
C++ end and then a wrapper is needed anyway (shouldn't be, but linkers
tend to be too smart by half, as we found out).

Alright, let me try some micro-benches. If it can outperform
CPython/cppyy then I'm interested in pursuing this track.
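(Editorial aside: for readers unfamiliar with the ABI-mode idea being
discussed, binding to an already-compiled shared library at run time with
the signatures declared on the calling side, here is a rough stdlib
sketch using ctypes rather than cffi. The libm lookup is
platform-dependent; cffi's ``ffi.dlopen()`` works in the same spirit,
while API mode would instead compile a small C wrapper.)

```python
import ctypes
import ctypes.util

# ABI-style binding: open an existing shared library and declare the
# function's signature on the Python side; no compiler is involved at
# bind time, so every call goes through a generic FFI dispatch layer.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```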
(With cppyy, wrappers are already generated anyway and compiled by the LLVM JIT, but those are fully generic ones, leaving lots of (un)packing and some copying on each call.) Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From danielerolando90 at gmail.com Tue Jul 25 14:47:24 2017 From: danielerolando90 at gmail.com (Daniele Rolando) Date: Tue, 25 Jul 2017 18:47:24 +0000 Subject: [pypy-dev] Need to rebuild wheels for every pypy minor version Message-ID: Hi guys. Right now pypy wheels names include both the major and minor pypy version in them: e.g. uWSGI-2.0.14.0.*pp257*-pypy_41-linux_x86_64.whl This means that if we want to upgrade pypy from 5.7.1 to 5.8 we'd need to rebuild all our wheels and this is not scalable since there are new pypy releases every 3/4 months. Wouldn't it be enough to only include the major version in the wheel name? Are minor pypy versions really incompatible between them? Thanks, Daniele -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Tue Jul 25 15:15:47 2017 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 25 Jul 2017 21:15:47 +0200 Subject: [pypy-dev] Need to rebuild wheels for every pypy minor version In-Reply-To: References: Message-ID: Hi, note that this is not because of PyPy: it's the wheel package which chooses what to include in the wheel filename: https://bitbucket.org/pypa/wheel/src/5d49f8cf18679d1bc6f3e1e414a5df3a1e492644/wheel/pep425tags.py?at=default&fileviewer=file-view-default#pep425tags.py-39 PyPy reports only the ABI version, which is pypy_41. This is probably wrong for the opposite reasons, i.e. it claims it's backward compatible even when it's not: https://bitbucket.org/pypy/pypy/issues/2613/fix-the-abi-tag ciao, Antonio On Tue, Jul 25, 2017 at 8:47 PM, Daniele Rolando wrote: > Hi guys. > > Right now pypy wheels names include both the major and minor pypy version > in them: e.g. 
uWSGI-2.0.14.0.*pp257*-pypy_41-linux_x86_64.whl
> This means that if we want to upgrade pypy from 5.7.1 to 5.8 we'd need to
> rebuild all our wheels and this is not scalable since there are new pypy
> releases every 3/4 months.
>
> Wouldn't it be enough to only include the major version in the wheel name?
> Are minor pypy versions really incompatible between them?
>
> Thanks,
> Daniele
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com  Tue Jul 25 16:24:04 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 25 Jul 2017 13:24:04 -0700
Subject: [pypy-dev] Need to rebuild wheels for every pypy minor version
In-Reply-To:
References:
Message-ID:

On Jul 25, 2017 12:16 PM, "Antonio Cuni" wrote:

Hi,

note that this is not because of PyPy: it's the wheel package which
chooses what to include in the wheel filename:
https://bitbucket.org/pypa/wheel/src/5d49f8cf18679d1bc6f3e1e414a5df3a1e492644/wheel/pep425tags.py?at=default&fileviewer=file-view-default#pep425tags.py-39

Well, probably it is most correct to say that this happens because PyPy
has not come up with a coherent strategy for ABI compatibility and
communicated that to wheel/pip :-).

In the mean time, if you want to live dangerously, you can always rename
your wheel something like 'foo-py2-pypy_41-linux_x86_64.whl', and then
pip will happily install it on newer pypys. Whether it will work though
is hard to say...

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From danielerolando90 at gmail.com  Tue Jul 25 16:37:30 2017
From: danielerolando90 at gmail.com (Daniele Rolando)
Date: Tue, 25 Jul 2017 20:37:30 +0000
Subject: [pypy-dev] Need to rebuild wheels for every pypy minor version
In-Reply-To:
References:
Message-ID:

If I understand this correctly, the issue is that pip/wheel cannot rely
on the ABI version from pypy, so they have no way to tell whether a wheel
would still be compatible or not. So they just generate a different wheel
for every version to avoid having to deal with that problem at all. Is
that correct?

On Tue, Jul 25, 2017 at 1:24 PM Nathaniel Smith wrote:
> On Jul 25, 2017 12:16 PM, "Antonio Cuni" wrote:
>
> Hi,
>
> note that this is not because of PyPy: it's the wheel package which
> chooses what to include in the wheel filename:
>
> https://bitbucket.org/pypa/wheel/src/5d49f8cf18679d1bc6f3e1e414a5df3a1e492644/wheel/pep425tags.py?at=default&fileviewer=file-view-default#pep425tags.py-39
>
> Well, probably it is most correct to say that this happens because PyPy
> has not come up with a coherent strategy for ABI compatibility and
> communicated that to wheel/pip :-).
>
> In the mean time, if you want to live dangerously, you can always rename
> your wheel something like 'foo-py2-pypy_41-linux_x86_64.whl', and then pip
> will happily install it on newer pypys. Whether it will work though is hard
> to say...
>
> -n
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matti.picus at gmail.com  Tue Jul 25 17:30:24 2017
From: matti.picus at gmail.com (Matti Picus)
Date: Wed, 26 Jul 2017 00:30:24 +0300
Subject: [pypy-dev] Need to rebuild wheels for every pypy minor version
In-Reply-To:
References:
Message-ID:

Both the ABI version identifier pypy_41 and the python identifier pp257
need overhauling. I have submitted issues to pip and wheel for the Python
Tag pp257 (i.e. pypy python 2 version 5.7) which IMO should be pp27 (i.e.
pypy implementing python 2.7).
https://github.com/pypa/pip/issues/4631 https://bitbucket.org/pypa/wheel/issues/187 Matti On 25/07/17 22:15, Antonio Cuni wrote: > Hi, > > note that this is not because of PyPy: it's the wheel package which > chooses what to include in the wheel filename: > https://bitbucket.org/pypa/wheel/src/5d49f8cf18679d1bc6f3e1e414a5df3a1e492644/wheel/pep425tags.py?at=default&fileviewer=file-view-default#pep425tags.py-39 > > PyPy reports only the ABI version, which is pypy_41. This is probably > wrong for the opposite reasons, i.e. it claims it's backward > compatible even when it's not: > https://bitbucket.org/pypy/pypy/issues/2613/fix-the-abi-tag > > ciao, > Antonio > > On Tue, Jul 25, 2017 at 8:47 PM, Daniele Rolando > > wrote: > > Hi guys. > > Right now pypy wheels names include both the major and minor pypy > version in them: e.g. uWSGI-2.0.14.0.*pp257*-pypy_41-linux_x86_64.whl > This means that if we want to upgrade pypy from 5.7.1 to 5.8 we'd > need to rebuild all our wheels and this is not scalable since > there are new pypy releases every 3/4 months. > > Wouldn't it be enough to only include the major version in the > wheel name? Are minor pypy versions really incompatible between them? > > Thanks, > Daniele > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From njs at pobox.com Tue Jul 25 21:52:33 2017 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 25 Jul 2017 18:52:33 -0700 Subject: [pypy-dev] Need to rebuild wheels for every pypy minor version In-Reply-To: References: Message-ID: I think that the intended idea is: The first field says what variant of the Python language the wheel uses. So it can be py2, py33, etc. for generic "python 2", "python 3.3 or better". 
Or it can be cp33 to mean "we specifically need cpython 3.3, other language variants won't do". And then the second field says what variant of the C ABI the wheel uses. So it can be cp35dm to mean "CPython, 3.5, debug, pymalloc". In practice this distinction doesn't make a *ton* of sense for CPython wheels, because it's redundant -- pretty much the only reason why you might need the cpython implementation specifically is if you're using the C ABI, and if you need the cp35dm ABI then obviously only CPython can give you that. bdist_wheel by default uses both, because, uh... belt *and* suspenders, I guess? But in principle they could vary independently. For example, if some package is python 2, and depends on some API in __pypy__, but doesn't use cpyext, then it might make sense to tag that as 'pp2_57-none' or something to mean "I need the pypy 2 v5.7-or-better variant of the language, but I don't care about the ABI". In general pypy can come up with any system that makes sense here. A given release can tell pip that it's compatible with whatever mix of python tags and ABI tags that make sense, and bdist_wheel can have whatever defaults make sense as well (and in general users can override if they have a weird case). You could even rethink it a bit and make extension modules generally use tags like 'py27-pp2_58' ("the python part is vanilla python 2.7, but it also needs the pypy2 v5.8 ABI"), and you could have a single release of PyPy declare that it supports multiple ABIs (e.g. you could have a pp2_cffi${X} ABI that you keep relatively stable and increment occasionally, as well as a pp2_cpyext${X} ABI that gets revved more often.) I guess one thing is that the python tags and ABI tags should probably be distinct for the pypy2 and pypy3 branches. And a "pp27" tag doesn't make much sense to me, because the difference between pp27 and py27 would be that the former declares that it specifically needs some kind of pypy extension... 
but what version of pypy added that extension? There's no way to say. Anyway, I guess the main thing is to think ahead about what kind of ABI/stability/evolution strategy you actually want to use :-). -n On Tue, Jul 25, 2017 at 2:30 PM, Matti Picus wrote: > Both the ABI version identifier pypy_41 and the python identifier pp257 need > overhauling. I have submitted issues to pip and wheel for the Python Tag > pp257 (i.e. pypy python 2 version 5.7) which IMO should be pp27 (i.e. pypy > implementing python 2.7). > https://github.com/pypa/pip/issues/4631 > https://bitbucket.org/pypa/wheel/issues/187 > > Matti > > On 25/07/17 22:15, Antonio Cuni wrote: >> >> Hi, >> >> note that this is not because of PyPy: it's the wheel package which >> chooses what to include in the wheel filename: >> >> https://bitbucket.org/pypa/wheel/src/5d49f8cf18679d1bc6f3e1e414a5df3a1e492644/wheel/pep425tags.py?at=default&fileviewer=file-view-default#pep425tags.py-39 >> >> PyPy reports only the ABI version, which is pypy_41. This is probably >> wrong for the opposite reasons, i.e. it claims it's backward compatible even >> when it's not: https://bitbucket.org/pypy/pypy/issues/2613/fix-the-abi-tag >> >> ciao, >> Antonio >> >> On Tue, Jul 25, 2017 at 8:47 PM, Daniele Rolando >> > wrote: >> >> Hi guys. >> >> Right now pypy wheels names include both the major and minor pypy >> version in them: e.g. uWSGI-2.0.14.0.*pp257*-pypy_41-linux_x86_64.whl >> This means that if we want to upgrade pypy from 5.7.1 to 5.8 we'd >> need to rebuild all our wheels and this is not scalable since >> there are new pypy releases every 3/4 months. >> >> Wouldn't it be enough to only include the major version in the >> wheel name? Are minor pypy versions really incompatible between them? 
>> Thanks,
>> Daniele
>>
>> _______________________________________________
>> pypy-dev mailing list
>> pypy-dev at python.org
>> https://mail.python.org/mailman/listinfo/pypy-dev
>>
>> _______________________________________________
>> pypy-dev mailing list
>> pypy-dev at python.org
>> https://mail.python.org/mailman/listinfo/pypy-dev
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

--
Nathaniel J. Smith -- https://vorpus.org

From armin.rigo at gmail.com  Wed Jul 26 04:51:42 2017
From: armin.rigo at gmail.com (Armin Rigo)
Date: Wed, 26 Jul 2017 10:51:42 +0200
Subject: [pypy-dev] lib_pypy/_marshal.py looks out of date for Python 3
In-Reply-To:
References:
Message-ID:

Hi Rocky,

On 10 July 2017 at 02:53, Rocky Bernstein wrote:
> Is lib_pypy/_marshal.py used? If so, shouldn't this be corrected?

I think it is not used, unless you translate a pypy without the built-in
marshal module---which you can't really do any more, anyway... I guess we
should remove ``lib_pypy/_marshal.py`` and ``lib_pypy/marshal.py``.

Armin

From tinchester at gmail.com  Thu Jul 27 10:17:41 2017
From: tinchester at gmail.com (Tin Tvrtković)
Date: Thu, 27 Jul 2017 14:17:41 +0000
Subject: [pypy-dev] Change function closure cell contents
Message-ID:

Hello!

Here's a very simple example (Python 3):

>>>> class C:
....     def t(self):
....         return __class__
....
>>>> C.t.__closure__[0]

My question is, what's the easiest/most natural way of changing the
contents of this closure cell on PyPy? On CPython there's a way using
ctypes.

This is regarding https://github.com/python-attrs/attrs/issues/102, but
the issue boils down to this.

Cheers!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From armin.rigo at gmail.com  Thu Jul 27 17:59:02 2017
From: armin.rigo at gmail.com (Armin Rigo)
Date: Thu, 27 Jul 2017 23:59:02 +0200
Subject: [pypy-dev] Change function closure cell contents
In-Reply-To:
References:
Message-ID:

Hi Tin,

On 27 July 2017 at 16:17, Tin Tvrtković wrote:
> My question is, what's the easiest/most natural way of changing the contents
> of this closure cell on PyPy?

On PyPy you can do (tested on PyPy2):

    cell = g.func_closure[0]
    cell.__setstate__((43,))   # sets the 'cell_contents' attribute to 43

Armin

From tinchester at gmail.com  Thu Jul 27 18:10:41 2017
From: tinchester at gmail.com (Tin Tvrtković)
Date: Thu, 27 Jul 2017 22:10:41 +0000
Subject: [pypy-dev] Change function closure cell contents
In-Reply-To:
References:
Message-ID:

That seems to work!

I'd just like to add, Armin, that whenever I have a question you are
almost always the first to respond and your replies are always very
helpful and insightful. It is very much appreciated; thank you. :)

On Thu, Jul 27, 2017 at 11:59 PM Armin Rigo wrote:
> Hi Tin,
>
> On 27 July 2017 at 16:17, Tin Tvrtković wrote:
> > My question is, what's the easiest/most natural way of changing the
> contents
> > of this closure cell on PyPy?
>
> On PyPy you can do (tested on PyPy2):
>
>     cell = g.func_closure[0]
>     cell.__setstate__((43,))   # sets the 'cell_contents' attribute to 43
>
> Armin
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From michal.moiseev at firmitas.solutions  Sun Jul 30 07:53:25 2017
From: michal.moiseev at firmitas.solutions (Michal Moiseev)
Date: Sun, 30 Jul 2017 14:53:25 +0300
Subject: [pypy-dev] Pypy3 lxml support - please help
Message-ID:

Hi,

In our firm we are developing on Python 3.4, and for performance reasons
we have been trying to see if we can run our code with pypy3. I couldn't
seem to find a solution for the lxml package: the pypy implementation,
lxml-cffi, is only for Python 2.x, and when installing lxml, it also
seems to work only for Python 2.x; the following error is printed when
trying to import:

  from lxml import etree
  File "src/lxml/xmlschema.pxi", line 22, in init lxml.etree (src/lxml/lxml.etree.c:231895)
  File "src/lxml/xpath.pxi", line 420, in lxml.etree.XPath.__init__ (src/lxml/lxml.etree.c:173999)
  File "src/lxml/xpath.pxi", line 150, in lxml.etree._XPathEvaluatorBase.set_context (src/lxml/lxml.etree.c:170398)
  File "src/lxml/xpath.pxi", line 66, in lxml.etree._XPathContext.set_context (src/lxml/lxml.etree.c:169176)
  File "src/lxml/extensions.pxi", line 255, in lxml.etree._BaseContext.registerLocalFunctions (src/lxml/lxml.etree.c:160400)
AttributeError: 'dict' object has no attribute 'iteritems'

thank you!
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From armin.rigo at gmail.com  Sun Jul 30 10:09:51 2017
From: armin.rigo at gmail.com (Armin Rigo)
Date: Sun, 30 Jul 2017 16:09:51 +0200
Subject: [pypy-dev] Pypy3 lxml support - please help
In-Reply-To:
References:
Message-ID:

Hi Michal,

On 30 July 2017 at 13:53, Michal Moiseev wrote:
> I couldn't seem to find a solution for the lxml package: the pypy
> implementation, lxml-cffi, is only for Python 2.x, and when installing
> lxml, it also seems to work only for Python 2.x; the following error
> is printed when trying to import:

This used to be a known problem with Cython:
https://github.com/cython/cython/pull/1631 .  It has been fixed in
Cython 0.26.
The issue here is that lxml comes with pre-Cythonized files produced by
Cython 0.25.2.

Apart from waiting for the next release of lxml, you could manually
install lxml. To do so, download the source, and run:

    pypy3 -m pip install cython
    pypy3 setup.py build --with-cython    # forced re-Cythonize
    pypy3 setup.py install

A bientôt,

Armin.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: