From matti.picus at gmail.com Fri Jan 1 04:31:16 2016
From: matti.picus at gmail.com (Matti Picus)
Date: Fri, 1 Jan 2016 11:31:16 +0200
Subject: [pypy-dev] Leysin Winter sprint?
In-Reply-To: References: <9904363526be1c7bf53db0739b2719d4@indus.uberspace.de>
Message-ID: <56864764.6030800@gmail.com>

From arigo at tunes.org Fri Jan 1 05:54:11 2016
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 1 Jan 2016 11:54:11 +0100
Subject: [pypy-dev] Leysin Winter sprint?
In-Reply-To: <56864764.6030800@gmail.com>
References: <9904363526be1c7bf53db0739b2719d4@indus.uberspace.de> <56864764.6030800@gmail.com>
Message-ID:

Hi Matti,

Happy new year :-)

On Fri, Jan 1, 2016 at 10:31 AM, Matti Picus wrote:
> Are the dates firm enough that I can order flights? It is getting late...

I should have the definitive confirmation tomorrow.

Armin

From sergeymatyunin at gmail.com Fri Jan 1 16:43:46 2016
From: sergeymatyunin at gmail.com (Sergey Matyunin)
Date: Fri, 1 Jan 2016 22:43:46 +0100
Subject: [pypy-dev] numpypy unit tests
Message-ID:

Hello,

I am curious about the status of the unit tests of numpy for pypy. Where can I get up-to-date information about it?

I have installed pypy 4.0.1 and numpypy for it (tag 4.0.1), and launched the numpy tests using numpy.test(). Looks like there are plenty of failing tests:
> Ran 4157 tests in 15.592s
> FAILED (KNOWNFAIL=5, SKIP=116, errors=648, failures=149)

Is this the actual state, or maybe I did something wrong?

--
Sergey

From fijall at gmail.com Fri Jan 1 16:50:06 2016
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 1 Jan 2016 23:50:06 +0200
Subject: [pypy-dev] numpypy unit tests
In-Reply-To: References: Message-ID:

This is the actual state.

On Fri, Jan 1, 2016 at 11:43 PM, Sergey Matyunin wrote:
> Hello,
>
> I am curious about the status of the unit tests of numpy for pypy. Where
> can I get up-to-date information about it?
>
> I have installed pypy 4.0.1 and numpypy for it (tag 4.0.1), and launched
> the numpy tests using numpy.test(). Looks like there are plenty of failing tests:
>> Ran 4157 tests in 15.592s
>> FAILED (KNOWNFAIL=5, SKIP=116, errors=648, failures=149)
>
> Is this the actual state, or maybe I did something wrong?
>
> --
> Sergey

From sergeymatyunin at gmail.com Fri Jan 1 18:44:09 2016
From: sergeymatyunin at gmail.com (Sergey Matyunin)
Date: Sat, 2 Jan 2016 00:44:09 +0100
Subject: [pypy-dev] numpypy unit tests
In-Reply-To: References: Message-ID:

Dear Maciej,

Thanks for the answer!
Does it make sense to fix the failing tests? Or maybe they were left as-is because they are irrelevant?

Do numpypy developers use these tests or some other tests?
Do any guidelines exist for numpypy developers? Do I need to create an issue for each PR?

Is this mailing list the right place for asking these questions?

On Fri, Jan 1, 2016 at 10:50 PM, Maciej Fijalkowski wrote:
> This is the actual state.
>
> On Fri, Jan 1, 2016 at 11:43 PM, Sergey Matyunin wrote:
>> Hello,
>>
>> I am curious about the status of the unit tests of numpy for pypy. Where
>> can I get up-to-date information about it?
>>
>> I have installed pypy 4.0.1 and numpypy for it (tag 4.0.1), and launched
>> the numpy tests using numpy.test(). Looks like there are plenty of failing tests:
>>> Ran 4157 tests in 15.592s
>>> FAILED (KNOWNFAIL=5, SKIP=116, errors=648, failures=149)
>>
>> Is this the actual state, or maybe I did something wrong?
>>
>> --
>> Sergey

From fijall at gmail.com Sat Jan 2 02:22:56 2016
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sat, 2 Jan 2016 09:22:56 +0200
Subject: [pypy-dev] numpypy unit tests
In-Reply-To: References: Message-ID:

Yes, it makes sense to fix the failing tests; they're real problems. You don't need to create an issue, you can just open a PR. This is the appropriate mailing list, but we work a lot through IRC.

On Sat, Jan 2, 2016 at 1:44 AM, Sergey Matyunin wrote:
> Dear Maciej,
>
> Thanks for the answer!
> Does it make sense to fix the failing tests? Or maybe they were left
> as-is because they are irrelevant?
>
> Do numpypy developers use these tests or some other tests?
> Do any guidelines exist for numpypy developers? Do I need to create
> an issue for each PR?
>
> Is this mailing list the right place for asking these questions?
>
> --
> Sergey

From matti.picus at gmail.com Sat Jan 2 12:17:28 2016
From: matti.picus at gmail.com (Matti Picus)
Date: Sat, 2 Jan 2016 19:17:28 +0200
Subject: [pypy-dev] numpypy unit tests
In-Reply-To: References: Message-ID: <56880628.9090703@gmail.com>

On 02/01/16 01:44, Sergey Matyunin wrote:
> Dear Maciej,
>
> Thanks for the answer!
> Does it make sense to fix the failing tests? Or maybe they were left
> as-is because they are irrelevant?
>
> Do numpypy developers use these tests or some other tests?
> Do any guidelines exist for numpypy developers? Do I need to create
> an issue for each PR?
>
> Is this mailing list the right place for asking these questions?
>
The test suite in the pypy/numpy repo is forked directly from the upstream numpy repo. So far we consider our inability to run these tests to be failures in the pypy/pypy micronumpy module. For most of these issues you should not fix the tests or the pypy/numpy repo itself; rather, the essence of the failing test should be rewritten in pypy/pypy/module/micronumpy/test/test*.py and fixed there. Some of the failures may actually require modification of the pypy/numpy repo, but that would be only after we exhaust the options to make our micronumpy implementation 100% compatible.

And just to point out one such exception to the rule of modifying only micronumpy: one of the most prevalent test failures (as can be seen here https://gist.github.com/mattip/2e6f05f1900eb6a9fd99 ) is the lack of numpy.ndarray.partition. I have a plan to reimplement this in the pypy/numpy repo in cffi (see here for an outline https://gist.github.com/mattip/ab34268b049b859554ad ) rather than rewriting it in rpython, but have not been able to get around to actually doing it.
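To give you an idea of what such a rewritten test looks like, here is a minimal, untested sketch (the class and test names are made up; see the existing files in that directory for the real conventions). These tests are "app-level": the body runs on top of the interpreted pypy, which is why numpy is imported inside the test method:

    from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest

    class AppTestExample(BaseNumpyAppTest):
        def test_sort_simple(self):
            # this code runs app-level, as if typed at a pypy prompt
            from numpy import array
            a = array([3, 1, 2])
            a.sort()
            assert (a == array([1, 2, 3])).all()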
Feel free to ask here or on IRC for more help.

Matti

From marky1991 at gmail.com Sat Jan 2 13:20:49 2016
From: marky1991 at gmail.com (marky1991 .)
Date: Sat, 2 Jan 2016 13:20:49 -0500
Subject: [pypy-dev] Py3.3 Import Test Question
Message-ID:

(Sending an email instead of just asking in irc so I don't keep spamming the question.)

In the 3.3 branch, there is this failing test: http://buildbot.pypy.org/summary/longrepr?testname=AppTestMagic.%28%29.test_save_module_content_for_future_reload&builder=own-linux-x86-64&build=4372&mod=module.__pypy__.test.test_magic

I made this change to make it not fail: https://bitbucket.org/marky1991/pypy/commits/afb2b3cd9535399043722f3d822762e536660613

The call chain is thus (mostly pseudocode here):

    (test code)
        reload(module)
    lib-python/3/imp.py, in reload
        loader.load_module(module.name)
    lib-python/3/importlib/_bootstrap.py, in load_module
        init_builtin(module.name)
    pypy/module/imp/interp_imp.py, in init_builtin
        space.getbuiltinmodule(module.name)
    pypy/interpreter/baseobjspace.py
        sys.modules.get(module.name)

The last method in that chain, space.getbuiltinmodule, has a parameter force_init, which defaults to False. If you pass it as True, we avoid the (rpython equivalent of the) sys.modules.get(module_name) code, getting the module object and successfully calling init() on it. (The failing test needs reload() to invoke module.init().)

In the commit pasted above, I changed init_builtin to call space.getbuiltinmodule with force_init=True. This fixes reload, but I would think that this would be wrong for non-reload scenarios. (I've checked the py3k branch, which has this test (and it passes locally), and it passes force_init=True if the finder's modtype is C_BUILTIN.) However, I don't see a good way to conditionalize the passing of force_init.

Does anyone have any suggestions as to how to handle this issue? (Given what I've found in py3k's code, is my change actually a valid fix?) If anyone has time to reply, feel free to just ping me in irc if I'm on.

Thanks

From sergeymatyunin at gmail.com Sat Jan 2 20:07:43 2016
From: sergeymatyunin at gmail.com (Sergey Matyunin)
Date: Sun, 3 Jan 2016 02:07:43 +0100
Subject: [pypy-dev] numpypy unit tests
In-Reply-To: <56880628.9090703@gmail.com>
References: <56880628.9090703@gmail.com>
Message-ID:

Thanks Maciej, Matti! That was really helpful!

Could you please point me to a place where I could find an overview of the numpypy architecture (if it exists)?

I got some picture after looking at the code. I would be grateful if someone could say whether it's correct:

- The implementation of numpy for pypy is split into two parts: micronumpy and numpypy.
- micronumpy is the implementation of the basic concepts of numpy inside pypy. Source code is in https://bitbucket.org/pypy/pypy/ -> pypy/module/micronumpy.
-- micronumpy is compiled together with pypy itself and cannot be compiled separately.
-- micronumpy uses timsort instead of the quicksort of the original numpy.
- numpypy implements the rest of the library (not everything is implemented yet). Source code is in https://bitbucket.org/pypy/numpypy. This module must be installed manually.
-- numpypy uses the basic entities from micronumpy.
-- The source code of numpypy contains the source code of numpy with a bunch of fixes. The code is partially unused, because the versions from micronumpy are used instead. Looking at the code is the only way to find out which parts of the code are used.

Also I have a couple of naive questions about pypy in general.
I investigated very little here; feel free to skip.

- Compiling pypy for linux takes about 1 hour, right?
- Does a faster build mode exist? Probably with fewer optimizations (without -O3 etc.)
- Is there a way to rebuild only micronumpy?
- How do developers usually debug modules such as micronumpy? I am curious about tools and techniques.

On Sat, Jan 2, 2016 at 6:17 PM, Matti Picus wrote:
>
> On 02/01/16 01:44, Sergey Matyunin wrote:
>>
>> Dear Maciej,
>>
>> Thanks for the answer!
>> Does it make sense to fix the failing tests? Or maybe they were left
>> as-is because they are irrelevant?
>>
>> Do numpypy developers use these tests or some other tests?
>> Do any guidelines exist for numpypy developers? Do I need to create
>> an issue for each PR?
>>
>> Is this mailing list the right place for asking these questions?
>>
> The test suite in the pypy/numpy repo is forked directly from the upstream numpy repo. So far we consider our inability to run these tests to be failures in the pypy/pypy micronumpy module. For most of these issues you should not fix the tests or the pypy/numpy repo itself; rather, the essence of the failing test should be rewritten in pypy/pypy/module/micronumpy/test/test*.py and fixed there. Some of the failures may actually require modification of the pypy/numpy repo, but that would be only after we exhaust the options to make our micronumpy implementation 100% compatible.
>
> And just to point out one such exception to the rule of modifying only micronumpy:
> One of the most prevalent test failures (as can be seen here https://gist.github.com/mattip/2e6f05f1900eb6a9fd99 ) is the lack of numpy.ndarray.partition. I have a plan to reimplement this in the pypy/numpy repo in cffi (see here for an outline https://gist.github.com/mattip/ab34268b049b859554ad ) rather than rewriting it in rpython, but have not been able to get around to actually doing it.
>
> Feel free to ask here or on IRC for more help
>
> Matti

--
Sergey

From yury at shurup.com Sun Jan 3 02:19:14 2016
From: yury at shurup.com (Yury V. Zaytsev)
Date: Sun, 03 Jan 2016 11:19:14 +0400
Subject: [pypy-dev] numpypy unit tests
In-Reply-To: References: <56880628.9090703@gmail.com>
Message-ID: <1451805554.2787.22.camel@newpride>

On Sun, 2016-01-03 at 02:07 +0100, Sergey Matyunin wrote:
>
> - Compiling pypy for linux takes about 1 hour, right?

Something in this range, depending on your hardware.

> - Does a faster build mode exist? Probably with fewer optimizations
> (without -O3 etc.)

As you might have already noticed, most of the time goes into *translation* and not compilation itself. Unfortunately, translation is still single-threaded, and therefore doesn't benefit from having more cores. One thing that you definitely should do is to translate PyPy with the latest version of PyPy instead of CPython; this makes a huge difference. I think it is also still possible to disable the gc to speed the translation up a bit, but I'm not sure if this makes much sense (see below).

> - Is there a way to rebuild only micronumpy?
> - How do developers usually debug modules such as micronumpy? I am
> curious about tools and techniques.

PyPy has an interpreted mode, in which the interpreter is interpreted, rather than translated and compiled. This mode is way too slow for normal usage, but it's good enough to run most of the tests.

The developers usually write tests for the functionality they want to implement and make sure they fail, then implement it and make sure they pass in the interpreted mode (without doing full translations between the iterations).
Only then they run a full translation or wait for a nightly, and hopefully the tests still pass for the translated version as well.

P.S. As a side note, I found it curious and amusing that lots of people are talking about TDD, but apparently for a project of the complexity and scale of PyPy, there is simply no other practical way to do development, irrespective of whether you like it or not :-)

P.P.S. Feel free to re-use my email for a FAQ and such, if something along these lines isn't already in there...

--
Sincerely yours,
Yury V. Zaytsev

From sergeymatyunin at gmail.com Sun Jan 3 15:53:59 2016
From: sergeymatyunin at gmail.com (Sergey Matyunin)
Date: Sun, 3 Jan 2016 21:53:59 +0100
Subject: [pypy-dev] numpypy unit tests
In-Reply-To: <1451805554.2787.22.camel@newpride>
References: <56880628.9090703@gmail.com> <1451805554.2787.22.camel@newpride>
Message-ID:

Thank you, Yury. Looks like your lines are already in the FAQ. At least I couldn't find a reasonable way to update any chapter.

Things look clear in theory. However, I cannot get going in practice. How do I launch a test for micronumpy in interactive mode?

I suppose it should be possible to import some modules from micronumpy using the interactive mode of pypy. I checked out the branch release-4.0.x, then:

    ~/work/pypy/pypy_src$ python pypy/bin/pyinteractive.py --allworkingmodules -c "import pypy.module.micronumpy.MultiArrayModule"

It complains about the signal module and fails. The same happens when I use python 2.7 and pypy 4.0.1 for launching pyinteractive, and the same for the module micronumpy.ctor. The whole output is here: https://gist.github.com/serge-m/d3f9f9863e15fc5c6af2

What am I doing wrong?

In general I want to run a test for micronumpy, then make it debuggable to see how micronumpy works.

I also tried to use pytest. I extracted a single test from pypy/pypy_src/pypy/module/micronumpy/test/test_selection.py into test_selection_2.py to make things faster. The output is here: https://gist.github.com/serge-m/3c51f35c702cc57b00c2

On Sun, Jan 3, 2016 at 8:19 AM, Yury V. Zaytsev wrote:
> On Sun, 2016-01-03 at 02:07 +0100, Sergey Matyunin wrote:
>>
>> - Compiling pypy for linux takes about 1 hour, right?
>
> Something in this range, depending on your hardware.
>
>> - Does a faster build mode exist? Probably with fewer optimizations
>> (without -O3 etc.)
>
> As you might have already noticed, most of the time goes into
> *translation* and not compilation itself. Unfortunately, translation is
> still single-threaded, and therefore doesn't benefit from having more
> cores. One thing that you definitely should do is to translate PyPy
> with the latest version of PyPy instead of CPython; this makes a huge
> difference. I think it is also still possible to disable the gc to speed
> the translation up a bit, but I'm not sure if this makes much sense (see
> below).
>
>> - Is there a way to rebuild only micronumpy?
>> - How do developers usually debug modules such as micronumpy? I am
>> curious about tools and techniques.
>
> PyPy has an interpreted mode, in which the interpreter is interpreted,
> rather than translated and compiled. This mode is way too slow for
> normal usage, but it's good enough to run most of the tests.
>
> The developers usually write tests for the functionality they want to
> implement and make sure they fail, then implement it and make sure they
> pass in the interpreted mode (without doing full translations between
> the iterations).
Only then they run a full translation or wait for a > nightly, and hopefully the tests still pass for the translated version > as well. > > P.S. As a side note, I found it curious and amusing that lots of people > are talking TDD, but apparently for a project of complexity and scale of > PyPy, there is simply no other practical way to do development, > irrespectively of whether you like it or not :-) > > P.P.S. Feel free to re-use my email for a FAQ and such, if something > along these lines isn't already in there... > > -- > Sincerely yours, > Yury V. Zaytsev > > -- ?????? From vincent.legoll at gmail.com Sun Jan 3 17:38:12 2016 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Sun, 3 Jan 2016 23:38:12 +0100 Subject: [pypy-dev] numpypy unit tests In-Reply-To: References: <56880628.9090703@gmail.com> <1451805554.2787.22.camel@newpride> Message-ID: Hello, to launch a single test you can do it that way, assuming you're in pypy's top level and that py.test is installed on your system: py.test pypy/module/micronumpy/test/test_ndarray.py -k test_array_indexing_bool On Sun, Jan 3, 2016 at 9:53 PM, Sergey Matyunin wrote: > Thank you, Yury. > Looks like your lines are already in FAQ. At least I couldn't find > reasonable way to update any chapter. > > Thing look clear in theory. However I cannot go on in practice. How to > launch any test for micronumpy in interactive mode? > > I suppose it should be possible to import some modules from micronumpy > using interactive mode of pypy. > I check out branch release-4.0.x, then > ~/work/pypy/pypy_src$ python pypy/bin/pyinteractive.py > --allworkingmodules -c "import > pypy.module.micronumpy.MultiArrayModule" > I complains about signal module and fails. > The same happens when I use python 2.7 and pypy 4.0.1 for launching > pyinteractive. The same for module micronumpy.ctor. > Whole output is here: https://gist.github.com/serge-m/d3f9f9863e15fc5c6af2 > > What am I doing wrong? > > In general I want to run a test for micronumpy. Then make it > debuggable to see how micronumpy works. > > I also tried to use pytest and test. I extracted a single test from > pypy/pypy_src/pypy/module/micronumpy/test/test_selection.py into > test_selection_2.py to make things faster. > Output is here: > https://gist.github.com/serge-m/3c51f35c702cc57b00c2 > > > On Sun, Jan 3, 2016 at 8:19 AM, Yury V. Zaytsev wrote: >> On Sun, 2016-01-03 at 02:07 +0100, Sergey Matyunin wrote: >>> >>> -Compiling pypy for linux on takes about 1 hour, right? >> >> Something in this range, depending on your hardware. >> >>> -Does faster build mode exist? Probably with less optimizations >>> (without -O3 etc.) >> >> As you might have already noticed, most of the time goes into >> *translation* and not compilation itself. Unfortunately, translation is >> still single-threaded, and therefore doesn't benefit from having more >> cores. One thing that you definitively should do is to translate PyPy >> with the latest version of PyPy instead of CPython, this makes a huge >> difference. I think it is also still possible to disable gc to speed the >> translation up a bit, but I'm not sure if this makes much sense (see >> below). >> >>> -Is there way to rebuild only micronumpy? >>> -How do developers usually debug modules such as micronumpy? I am >>> curious about tools and techniques. >> >> PyPy has a interpreted mode, in which the interpreter is interpreted, >> rather than translated and compiled. This mode is way too slow for >> normal usage, but it's good enough to run most of the tests. 
>> >> The developers usually write tests for the functionality they want to >> implement and make sure they fail, then implement it and make sure they >> pass in the interpreted mode (without doing full translations between >> the iterations). Only then they run a full translation or wait for a >> nightly, and hopefully the tests still pass for the translated version >> as well. >> >> P.S. As a side note, I found it curious and amusing that lots of people >> are talking TDD, but apparently for a project of complexity and scale of >> PyPy, there is simply no other practical way to do development, >> irrespectively of whether you like it or not :-) >> >> P.P.S. Feel free to re-use my email for a FAQ and such, if something >> along these lines isn't already in there... >> >> -- >> Sincerely yours, >> Yury V. Zaytsev >> >> > > > > -- > ?????? > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev -- Vincent Legoll From arigo at tunes.org Mon Jan 4 07:15:03 2016 From: arigo at tunes.org (Armin Rigo) Date: Mon, 4 Jan 2016 13:15:03 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2016 Message-ID: Hi all, ===================================================================== PyPy Leysin Winter Sprint (20-27th February 2016) ===================================================================== The next PyPy sprint will be in Leysin, Switzerland, for the eleventh time. This is a fully public sprint: newcomers and topics other than those proposed below are welcome. ------------------------------ Goals and topics of the sprint ------------------------------ The details depend on who is here and ready to work. The list of topics is mostly the same as last year (did PyPy became a mature project with only long-term goals?): * cpyext (CPython C API emulation layer): various speed and completeness topics * cleaning up the optimization step in the JIT, change the register allocation done by the JIT's backend, or more improvements to the warm-up time * finish vmprof - a statistical profiler for CPython and PyPy * Py3k (Python 3.x support), NumPyPy (the numpy module) * STM (Software Transaction Memory), notably: try to come up with benchmarks, and measure them carefully in order to test and improve the conflict reporting tools, and more generally to figure out how practical it is in large projects to avoid conflicts * And as usual, the main side goal is to have fun in winter sports :-) We can take a day off for ski. ----------- Exact times ----------- I have booked the week from Saturday 20 to Saturday 27. It is fine to leave either the 27 or the 28, or even stay a few more days on either side. The plan is to work full days between the 21 and the 27. You are of course allowed to show up for a part of that time only, too. ----------------------- Location & Accomodation ----------------------- Leysin, Switzerland, "same place as before". Let me refresh your memory: both the sprint venue and the lodging will be in a pair of chalets built specifically for bed & breakfast: http://www.ermina.ch/. The place has a good ADSL Internet connection with wireless installed. You can also arrange your own lodging elsewhere (as long as you are in Leysin, you cannot be more than a 15 minutes walk away from the sprint venue). Please *confirm* that you are coming so that we can adjust the reservations as appropriate. 
The choice of rooms is a bit more limited than in previous years because the place for bed-and-breakfast is shrinking: what is guaranteed is only one double-bed room and a bigger room with 5-6 individual beds (the latter at 50-60 CHF per night, breakfast included). If there are more people that would prefer a single room, please contact me and we'll see what choices you have. There is a choice of hotels, many of them reasonably priced for Switzerland.

Please register by Mercurial::

    https://bitbucket.org/pypy/extradoc/
    https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/leysin-winter-2016

or on this mailing list if you do not yet have check-in rights.

You need a Swiss-to-(insert country here) power adapter. There will be some Swiss-to-EU adapters around, and at least one EU-format power strip.

-------Armin Rigo
From arigo at tunes.org Tue Jan 5 03:56:37 2016
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 5 Jan 2016 09:56:37 +0100
Subject: [pypy-dev] Py3.3 Import Test Question
In-Reply-To: References: Message-ID:

Hi Marky,

No clue why, but your e-mail written 3 days ago only shows up now on the mailing list... If you didn't get any answer on IRC in the meantime: that test is about this line:

    __pypy__.save_module_content_for_future_reload(sys)

The test is failing because that line didn't have any effect.

A bientôt,

Armin.

From arigo at tunes.org Tue Jan 5 04:04:46 2016
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 5 Jan 2016 10:04:46 +0100
Subject: [pypy-dev] Py3.3 Import Test Question
In-Reply-To: References: Message-ID:

Hi Marky,

On Tue, Jan 5, 2016 at 9:56 AM, Armin Rigo wrote:
> __pypy__.save_module_content_for_future_reload(sys)
>
> The test is failing because that line didn't have any effect.

Maybe I'm wrong, and the exact cause is more subtle. In general, you should first understand why it works in "default" and what is the difference in py3.3. In "default" I see that the function imp.init_builtin() calls space.getbuiltinmodule(name) without the force_init argument too, so the difference is somewhere else... Probably closer to wherever "reload()" is implemented?

A bientôt,

Armin.

From arigo at tunes.org Tue Jan 5 04:13:16 2016
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 5 Jan 2016 10:13:16 +0100
Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1
In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com>
Message-ID:

Hi Vincent,

On Sun, Dec 27, 2015 at 11:45 AM, Vincent Legoll wrote:
> So to stay on the safe side, you prefer to keep reopening every time?
> https://bitbucket.org/vincentlegoll/pypy/commits/branch/fix-urandom-closed

I'm not against a proper fix :-) The problem is that for the fix to be proper, it needs a bit more work than what you did: the calls to os.fstat() in your code can each release the GIL, whereas in CPython the corresponding calls to fstat() are done without releasing the GIL, and I believe that is important.

A bientôt,

Armin.

From vincent.legoll at gmail.com Tue Jan 5 05:10:07 2016
From: vincent.legoll at gmail.com (Vincent Legoll)
Date: Tue, 5 Jan 2016 11:10:07 +0100
Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1
In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com>
Message-ID:

Hello and happy new pypyear to all pypyers!
On Tue, Jan 5, 2016 at 10:13 AM, Armin Rigo wrote:
>> So to stay on the safe side, you prefer to keep reopening every time?
>> https://bitbucket.org/vincentlegoll/pypy/commits/branch/fix-urandom-closed
>
> I'm not against a proper fix :-) The problem is that for the fix to
> be proper, it needs a bit more work than what you did: the calls to
> os.fstat() in your code can each release the GIL, whereas in CPython
> the corresponding calls to fstat() are done without releasing the GIL,
> and I believe that is important.

OK, for a proper fix, how can we write a test that will check for that? Is there a doc for pypy's way to grab the GIL?

BTW, in the always-open()ing variant, isn't the race still there, because the GIL is released between open() and read(), no? So the fd can be closed there too... That is certainly a smaller window, but still...

--
Vincent Legoll

From phyo.arkarlwin at gmail.com Wed Jan 6 07:43:29 2016
From: phyo.arkarlwin at gmail.com (Phyo Arkar)
Date: Wed, 6 Jan 2016 19:13:29 +0630
Subject: [pypy-dev] More Push towards Python 3.x?
Message-ID:

PyPy's Python 3.x support is lagging behind a lot. Python 3.5 is the version we should be evolving towards, so is there any planned timeline for PyPy's Python 3.x?

From marky1991 at gmail.com Wed Jan 6 09:07:31 2016
From: marky1991 at gmail.com (marky1991 .)
Date: Wed, 6 Jan 2016 09:07:31 -0500
Subject: [pypy-dev] More Push towards Python 3.x?
In-Reply-To: References: Message-ID:

I don't have a timeline (and as far as I know pypy-dev doesn't either), but I do intend to help push this forward however I can. (I am brand new at this, though.) Work is progressing on 3.3. There's no point in trying to immediately work on 3.5, because almost all the changes needed for 3.3 will need to be made for 3.5 anyway. As always in open source, if you want it faster, feel free to jump in and help. : )

From me at manueljacob.de Wed Jan 6 12:22:24 2016
From: me at manueljacob.de (Manuel Jacob)
Date: Wed, 06 Jan 2016 18:22:24 +0100
Subject: [pypy-dev] More Push towards Python 3.x?
In-Reply-To: References: Message-ID:

Hi,

On 2016-01-06 13:43, Phyo Arkar wrote:
> PyPy's Python 3.x support is lagging behind a lot. Python 3.5 is the
> version we should be evolving towards, so is there any planned
> timeline for PyPy's Python 3.x?

There is no official timeline. My personal roadmap, as one of the main contributors in the last months, is the following:

1) Regularly merge default into the py3k branch, adapting code to Python 3.x as necessary, trying to keep the buildbots green (a sketch of one such iteration is below).
2) Implement missing features / fix bugs in the py3.3 branch when I have time and motivation.
3) In any case, the py3.3 branch will be merged back into py3k and closed before or at the beginning of the Leysin sprint. A new branch py3.5 will be opened to create many opportunities for newcomers at the sprint.

Many people have asked why we have a py3k branch, currently implementing Python 3.2, a version almost nobody uses. The point is to see easily whether a test failure resulted from the last merge from the default branch, or was failing a potentially long time before. This helps make sure we don't accumulate failing tests over time.
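For the record, one iteration of step 1 is roughly the following (an untested sketch; it assumes a clean working copy of the pypy repository, and the commit message is just the usual convention):

    hg update py3k
    hg merge default
    # adapt code to Python 3.x, run the affected tests in interpreted mode
    hg commit -m 'hg merge default'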
-Manuel

From planrichi at gmail.com Thu Jan 14 12:32:56 2016
From: planrichi at gmail.com (Richard Plangger)
Date: Thu, 14 Jan 2016 18:32:56 +0100
Subject: [pypy-dev] s390x libffi issue
In-Reply-To: References: Message-ID: <5697DBC8.2030700@gmail.com>

Hi,

so far this issue is not resolved. Sadly the argument is that I/we probably do not handle the return type of libffi correctly. Let me explain the problem (again). Take, for instance, in the test [1], the case lltype.FuncType([rffi.Short, rffi.Short], rffi.Short). Here is what happens, step by step (if not readable in the mail, see [3]):

    jit compiles trace:
      -- parameter (1213, 1213), both 64-bit
          |
          v
      ffi_closure_SYSV
          |
          v
      ffi_closure_helper_SYSV  (1)
          |
          v
      <enters ctypes> closure_fcn
          |
          v
      _CallPythonObject
          |
          v
      (2)
          |
          v
      (3)

    contents of the buffer of (1) before the call: 0xdeadbeefdeadbeef
    contents of the buffer of (1) after the call:  0xdeadbeefdead097a
          ||
          vv
    returns every frame back to and saves 0xdeadbeefdead097a in the
    variable, e.g. i42 = call_i(..., 1213, 1213)

    (1) provides the stack location
    (3) writes the result to (2)
    (2) leads into ll2ctypes.py internal_callback and returns 2426L

My understanding is that libffi expects that the caller should, just after the call returns the 64-bit value, cast the result to a 16-bit value. We cannot do that! AFAIK the jit has no notion of an integer type narrower than the machine register. The only exception is loading and storing from/to memory, where values are later sign extended.

Any suggestions?

Cheers,
Richard

[1] rpython/jit/backend/test/runner_test.py:test_call
[2] https://github.com/python/cpython/blob/1fe0fd9feb6a4472a9a1b186502eb9c0b2366326/Modules/_ctypes/cfield.c#L551
[3] http://paste.pound-python.org/show/hvcqL68eqUApEH0PhnuG/

From arigo at tunes.org Fri Jan 15 03:33:23 2016
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 15 Jan 2016 09:33:23 +0100
Subject: [pypy-dev] s390x libffi issue
In-Reply-To: <5697DBC8.2030700@gmail.com>
References: <5697DBC8.2030700@gmail.com>
Message-ID:

Hi Richard,

On Thu, Jan 14, 2016 at 6:32 PM, Richard Plangger wrote:
> -- parameter (1213, 1213), both 64-bit
> as 64bit integer> (2)

Do you mean in both cases 16-bit integer instead of 64-bit integer?

> returns every frame back to and saves 0xdeadbeefdead097a

> My understanding is that libffi expects that the caller should, just
> after the call returns the 64-bit value, cast the result to a 16-bit
> value. We cannot do that!

If I understand correctly, you can do exactly that. After a call instruction to a function that returns a 16-bit result, simply add another instruction to sign- or zero-extend the result to a full 64-bit value. Surely it is not a performance problem to add a single simple instruction after some rare calls?

In more detail: my point of view is that libffi is *documented* to return the value 0x000000000000097a, but instead it returns 0xdeadbeefdead097a. It's a bug in libffi, but maybe one that is not going to be fixed promptly. In that case, you can simply work around it. Specifically, after a "call_i" instruction that should return a 16-bit number: the official ABI says it should return 0x000000000000097a; when called via ctypes it returns 0xdeadbeefdead097a instead; so go for the careful solution, only assume that the last 16 bits are valid, and emit an instruction just after the call to sign- or zero-extend the result from 16 bits to 64 bits.
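(To spell out the workaround in pseudo-Python: this is only an illustration of the arithmetic, not actual backend code. For a signed 16-bit result, the backend keeps the low 16 bits and re-extends them:

    def sign_extend_16_to_64(value):
        # keep only the low 16 bits, then propagate bit 15 as the sign bit
        value &= 0xFFFF
        return (value ^ 0x8000) - 0x8000

    assert sign_extend_16_to_64(0xdeadbeefdead097a) == 0x097a
    assert sign_extend_16_to_64(0xdeadbeefdeadf97a) == -0x0686

On s390x I assume a single instruction does it, something like LGHR for the signed case and LLGHR for the unsigned case.)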
Then you can leave the issue in the hands of libffi for s390x and not be annoyed if it doesn't get fixed. A bient?t, Armin. From dje.gcc at gmail.com Fri Jan 15 08:28:35 2016 From: dje.gcc at gmail.com (David Edelsohn) Date: Fri, 15 Jan 2016 08:28:35 -0500 Subject: [pypy-dev] s390x libffi issue In-Reply-To: References: <5697DBC8.2030700@gmail.com> Message-ID: On Fri, Jan 15, 2016 at 3:33 AM, Armin Rigo wrote: > Hi Richard, > > On Thu, Jan 14, 2016 at 6:32 PM, Richard Plangger wrote: >> -- parameter (1213,1213) both 64-bit >> > as 64bit integer> (2) > > Do you mean in both cases 16-bit integer instead of 64-bit integer? > >> returns every frame back to and saves 0xdeadbeefdead097a > >> My understanding is that: libffi expects that should, just >> after returning the 64 bit value, cast the result to a 16 bit value. >> We cannot do that! > > If I understand correctly, you can do exactly that. After a call > instruction to a function that returns a 16 bits result, simply add > another instruction to sign- or zero-extend the result to a full > 64-bit value. Surely it is not a performance problem to add a > single simple instruction after some rare calls? > > In more details: my point of view is that libffi is *documented* to > return the value > 0x000000000000097a, but instead it returns 0xdeadbeefdead097a. It's a > bug in libffi, but maybe one that is not going to be fixed promptly. > In that case, you can simply work around it. Specifically, > after a "call_i" instruction that should return a 16-bit number: the > official ABI says it should return 0x000000000000097a; when called via > ctypes it returns 0xdeadbeefdead097a instead; so go for the careful > solution and only assume that the last 16 bits are valid, and emit an > instruction just after the call to sign- or zero-extend the result > from 16 bits to 64 bits. Then you can leave the issue in the hands of > libffi for s390x and not be annoyed if it doesn't get fixed. libffi is *documented* to return the non sign-extended value. - David From planrichi at gmail.com Fri Jan 15 08:59:59 2016 From: planrichi at gmail.com (Richard Plangger) Date: Fri, 15 Jan 2016 14:59:59 +0100 Subject: [pypy-dev] s390x libffi issue In-Reply-To: References: <5697DBC8.2030700@gmail.com> Message-ID: <5698FB5F.8090603@gmail.com> Hi, > libffi is *documented* to return the non sign-extended value. I have fixed this issue at the call site. The caller sign/zero extends narrower integer types. The reason I did not change it to that in the first place is: I thought that it is not easy to determine this information at the callsite (because it is not done in any other backend). Apparently it is available. Cheers, Richard -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From arigo at tunes.org Fri Jan 15 13:01:56 2016 From: arigo at tunes.org (Armin Rigo) Date: Fri, 15 Jan 2016 19:01:56 +0100 Subject: [pypy-dev] s390x libffi issue In-Reply-To: References: <5697DBC8.2030700@gmail.com> Message-ID: Hi David, On Fri, Jan 15, 2016 at 2:28 PM, David Edelsohn wrote: >> In more details: my point of view is that libffi is *documented* to >> return the value >> 0x000000000000097a, but instead it returns 0xdeadbeefdead097a. > > libffi is *documented* to return the non sign-extended value. Ok. I think my confusion came from the fact that we have to tell ffi whether values are signed or unsigned. 
As far as I can tell, this would be only useful in order to sign- or zero-extend the result value, if it did. (The docs included with libffi-3.2.1.tar.gz don't seem to say anything about that, which would make us both wrong---this specific behavior is not documented at all. I may be missing a point; please correct me in that case.) A bient?t, Armin. From arigo at tunes.org Fri Jan 15 13:06:43 2016 From: arigo at tunes.org (Armin Rigo) Date: Fri, 15 Jan 2016 19:06:43 +0100 Subject: [pypy-dev] s390x libffi issue In-Reply-To: <5698FB5F.8090603@gmail.com> References: <5697DBC8.2030700@gmail.com> <5698FB5F.8090603@gmail.com> Message-ID: Hi Richard, On Fri, Jan 15, 2016 at 2:59 PM, Richard Plangger wrote: > I have fixed this issue at the call site. The caller sign/zero extends > narrower integer types. The reason I did not change it to that in the > first place is: I thought that it is not easy to determine this > information at the callsite (because it is not done in any other > backend). Apparently it is available. It is done in the x86 backend. See load_result() in x86/callbuilder.py, which abuses load_from_mem() to emit an instruction MOV(S|Z)X(bits) for sign- or zero extension. It is also done in the arm backend, where load_result() calls self._ensure_result_bit_extension(). A bient?t, Armin. From siddhartha.gairola18 at gmail.com Sun Jan 17 03:44:09 2016 From: siddhartha.gairola18 at gmail.com (Siddhartha Gairola) Date: Sun, 17 Jan 2016 14:14:09 +0530 Subject: [pypy-dev] Need Assistance Message-ID: Dear Developers, I am new to this community and would like to get started. I have forked the pypy repository on bitbucket and have cloned it on my local machine. Would appreciate some guidance. Thank You. Regards, Sid -------------- next part -------------- An HTML attachment was scrubbed... URL: From planrichi at gmail.com Tue Jan 19 06:12:30 2016 From: planrichi at gmail.com (Richard Plangger) Date: Tue, 19 Jan 2016 12:12:30 +0100 Subject: [pypy-dev] s390x the last failing tests Message-ID: <569E1A1E.20901@gmail.com> hi, I wanted to give a quick update on the state of the implementation. Good news! I think there is not that much left to be done! I'm currently waiting for a bigger VM (already wrote an email to linux1 at us.ibm.com, 2 days ago? They are maybe on holiday?) to translate the full project. There are approx. 20 Failing tests that are left (own-linux-s390x). All other pass on my virtual machine. They are mostly related to big endian issues. Here are some questions: 1) Generally I got the impression that there are some tests that do not consider endianess (e.g. micronumpy). I guess it is time to change them to handle this? What about PPC? Did those not come up there? 2) It seems that the gcc on the build bot is quite old? It can for instance not assemble the instruction LAY (load address), but the VM I got (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)) is able to. As soon as I can get my hands on a Debian machine that is configured similarly I can say more (end of Jan?). Cheers, Richard -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From arigo at tunes.org Tue Jan 19 10:35:40 2016 From: arigo at tunes.org (Armin Rigo) Date: Tue, 19 Jan 2016 16:35:40 +0100 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: <569E1A1E.20901@gmail.com> References: <569E1A1E.20901@gmail.com> Message-ID: Hi Richard, On Tue, Jan 19, 2016 at 12:12 PM, Richard Plangger wrote: > 1) Generally I got the impression that there are some tests that do not > consider endianess (e.g. micronumpy). I guess it is time to change them > to handle this? What about PPC? Did those not come up there? I must admit I did not try to run the whole test suite of PyPy on PPC; only the JIT's. Yes, it is expected to discover such small issues. I expect most of the issues to be in tests not written with that case in mind. A bient?t, Armin. From dje.gcc at gmail.com Tue Jan 19 10:38:35 2016 From: dje.gcc at gmail.com (David Edelsohn) Date: Tue, 19 Jan 2016 10:38:35 -0500 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: <569E1A1E.20901@gmail.com> References: <569E1A1E.20901@gmail.com> Message-ID: On Tue, Jan 19, 2016 at 6:12 AM, Richard Plangger wrote: > hi, > > I wanted to give a quick update on the state of the implementation. Good > news! I think there is not that much left to be done! > > I'm currently waiting for a bigger VM (already wrote an email to > linux1 at us.ibm.com, 2 days ago? They are maybe on holiday?) to translate > the full project. > > There are approx. 20 Failing tests that are left (own-linux-s390x). All > other pass on my virtual machine. They are mostly related to big endian > issues. Here are some questions: > > 1) Generally I got the impression that there are some tests that do not > consider endianess (e.g. micronumpy). I guess it is time to change them > to handle this? What about PPC? Did those not come up there? > > 2) It seems that the gcc on the build bot is quite old? It can for > instance not assemble the instruction LAY (load address), but the VM I > got (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)) is able to. As soon > as I can get my hands on a Debian machine that is configured similarly I > can say more (end of Jan?). The default GCC on the buildbot is GCC 5.2 gcc version 5.2.1 20150911 (Debian 5.2.1-17) I just updated Binutils on the system in case something was out of date. I can try translating on the buildbot, if you wish. - David From dje.gcc at gmail.com Tue Jan 19 10:40:24 2016 From: dje.gcc at gmail.com (David Edelsohn) Date: Tue, 19 Jan 2016 10:40:24 -0500 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: <569E1A1E.20901@gmail.com> References: <569E1A1E.20901@gmail.com> Message-ID: $ as -v GNU assembler version 2.25.90 (s390x-linux-gnu) using BFD version (GNU Binutils for Debian) 2.25.90.20160101 $ ld -v GNU ld (GNU Binutils for Debian) 2.25.90.20160101 On Tue, Jan 19, 2016 at 6:12 AM, Richard Plangger wrote: > hi, > > I wanted to give a quick update on the state of the implementation. Good > news! I think there is not that much left to be done! > > I'm currently waiting for a bigger VM (already wrote an email to > linux1 at us.ibm.com, 2 days ago? They are maybe on holiday?) to translate > the full project. > > There are approx. 20 Failing tests that are left (own-linux-s390x). All > other pass on my virtual machine. They are mostly related to big endian > issues. 
Here are some questions: > > 1) Generally I got the impression that there are some tests that do not > consider endianess (e.g. micronumpy). I guess it is time to change them > to handle this? What about PPC? Did those not come up there? > > 2) It seems that the gcc on the build bot is quite old? It can for > instance not assemble the instruction LAY (load address), but the VM I > got (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)) is able to. As soon > as I can get my hands on a Debian machine that is configured similarly I > can say more (end of Jan?). > > Cheers, > Richard > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From planrichi at gmail.com Tue Jan 19 10:47:54 2016 From: planrichi at gmail.com (Richard Plangger) Date: Tue, 19 Jan 2016 16:47:54 +0100 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: References: <569E1A1E.20901@gmail.com> Message-ID: <569E5AAA.2020400@gmail.com> It seems that I'm using old software on the vm. :) I kicked the build bot to see if the update has any effect. Some of the failing tests (on the buildbot only) are very severe, and it is hard to find out the cause if they do not fail on the development machine... We could try to start a translation, but I'm unsure if it will really work. Cheers, Richard On 01/19/2016 04:40 PM, David Edelsohn wrote: > $ as -v > GNU assembler version 2.25.90 (s390x-linux-gnu) using BFD version (GNU > Binutils for Debian) 2.25.90.20160101 > > $ ld -v > GNU ld (GNU Binutils for Debian) 2.25.90.20160101 > > On Tue, Jan 19, 2016 at 6:12 AM, Richard Plangger wrote: >> hi, >> >> I wanted to give a quick update on the state of the implementation. Good >> news! I think there is not that much left to be done! >> >> I'm currently waiting for a bigger VM (already wrote an email to >> linux1 at us.ibm.com, 2 days ago? They are maybe on holiday?) to translate >> the full project. >> >> There are approx. 20 Failing tests that are left (own-linux-s390x). All >> other pass on my virtual machine. They are mostly related to big endian >> issues. Here are some questions: >> >> 1) Generally I got the impression that there are some tests that do not >> consider endianess (e.g. micronumpy). I guess it is time to change them >> to handle this? What about PPC? Did those not come up there? >> >> 2) It seems that the gcc on the build bot is quite old? It can for >> instance not assemble the instruction LAY (load address), but the VM I >> got (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)) is able to. As soon >> as I can get my hands on a Debian machine that is configured similarly I >> can say more (end of Jan?). >> >> Cheers, >> Richard >> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From arigo at tunes.org Tue Jan 19 10:54:00 2016 From: arigo at tunes.org (Armin Rigo) Date: Tue, 19 Jan 2016 16:54:00 +0100 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: <569E5AAA.2020400@gmail.com> References: <569E1A1E.20901@gmail.com> <569E5AAA.2020400@gmail.com> Message-ID: Hi Richard, On Tue, Jan 19, 2016 at 4:47 PM, Richard Plangger wrote: > Some of the failing tests (on the buildbot only) are very severe, and it > is hard to find out the cause if they do not fail on the development > machine... Note: at this point where a JIT backend is "mostly done", subtle bugs left in the JIT backend generate hard crashes that show up very rarely. I recommend that you use test_zll_stress_*.py in jit/backend/test/ --- and not just once. For example, leave them running repeatedly for 12 or 24 hours and be happy only if they pass successfully every time. These tests are good at figuring out such rare cases. A bient?t, Armin. From dje.gcc at gmail.com Tue Jan 19 14:01:54 2016 From: dje.gcc at gmail.com (David Edelsohn) Date: Tue, 19 Jan 2016 14:01:54 -0500 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: <569E5AAA.2020400@gmail.com> References: <569E1A1E.20901@gmail.com> <569E5AAA.2020400@gmail.com> Message-ID: GCC has to be invoked with -march=zEC12 Thanks, David On Tue, Jan 19, 2016 at 10:47 AM, Richard Plangger wrote: > It seems that I'm using old software on the vm. :) > > I kicked the build bot to see if the update has any effect. > Some of the failing tests (on the buildbot only) are very severe, and it > is hard to find out the cause if they do not fail on the development > machine... We could try to start a translation, but I'm unsure if it > will really work. > > Cheers, > Richard > > > > On 01/19/2016 04:40 PM, David Edelsohn wrote: >> $ as -v >> GNU assembler version 2.25.90 (s390x-linux-gnu) using BFD version (GNU >> Binutils for Debian) 2.25.90.20160101 >> >> $ ld -v >> GNU ld (GNU Binutils for Debian) 2.25.90.20160101 >> >> On Tue, Jan 19, 2016 at 6:12 AM, Richard Plangger wrote: >>> hi, >>> >>> I wanted to give a quick update on the state of the implementation. Good >>> news! I think there is not that much left to be done! >>> >>> I'm currently waiting for a bigger VM (already wrote an email to >>> linux1 at us.ibm.com, 2 days ago? They are maybe on holiday?) to translate >>> the full project. >>> >>> There are approx. 20 Failing tests that are left (own-linux-s390x). All >>> other pass on my virtual machine. They are mostly related to big endian >>> issues. Here are some questions: >>> >>> 1) Generally I got the impression that there are some tests that do not >>> consider endianess (e.g. micronumpy). I guess it is time to change them >>> to handle this? What about PPC? Did those not come up there? >>> >>> 2) It seems that the gcc on the build bot is quite old? It can for >>> instance not assemble the instruction LAY (load address), but the VM I >>> got (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)) is able to. As soon >>> as I can get my hands on a Debian machine that is configured similarly I >>> can say more (end of Jan?). 
>>> >>> Cheers, >>> Richard >>> >>> >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >>> > From dje.gcc at gmail.com Tue Jan 19 14:14:31 2016 From: dje.gcc at gmail.com (David Edelsohn) Date: Tue, 19 Jan 2016 14:14:31 -0500 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: <569E5AAA.2020400@gmail.com> References: <569E1A1E.20901@gmail.com> <569E5AAA.2020400@gmail.com> Message-ID: Debian apparently supports all z/Arch processors, so the toolchain defaults to much older processor model. For zEC12 and above, one must explicitly invoke GCC with -march=zEC12. I set CFLAGS and the translation is much happier. The s390x buildbot should explicitly use -march=zEC12. Thanks, David On Tue, Jan 19, 2016 at 10:47 AM, Richard Plangger wrote: > It seems that I'm using old software on the vm. :) > > I kicked the build bot to see if the update has any effect. > Some of the failing tests (on the buildbot only) are very severe, and it > is hard to find out the cause if they do not fail on the development > machine... We could try to start a translation, but I'm unsure if it > will really work. > > Cheers, > Richard > > > > On 01/19/2016 04:40 PM, David Edelsohn wrote: >> $ as -v >> GNU assembler version 2.25.90 (s390x-linux-gnu) using BFD version (GNU >> Binutils for Debian) 2.25.90.20160101 >> >> $ ld -v >> GNU ld (GNU Binutils for Debian) 2.25.90.20160101 >> >> On Tue, Jan 19, 2016 at 6:12 AM, Richard Plangger wrote: >>> hi, >>> >>> I wanted to give a quick update on the state of the implementation. Good >>> news! I think there is not that much left to be done! >>> >>> I'm currently waiting for a bigger VM (already wrote an email to >>> linux1 at us.ibm.com, 2 days ago? They are maybe on holiday?) to translate >>> the full project. >>> >>> There are approx. 20 Failing tests that are left (own-linux-s390x). All >>> other pass on my virtual machine. They are mostly related to big endian >>> issues. Here are some questions: >>> >>> 1) Generally I got the impression that there are some tests that do not >>> consider endianess (e.g. micronumpy). I guess it is time to change them >>> to handle this? What about PPC? Did those not come up there? >>> >>> 2) It seems that the gcc on the build bot is quite old? It can for >>> instance not assemble the instruction LAY (load address), but the VM I >>> got (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)) is able to. As soon >>> as I can get my hands on a Debian machine that is configured similarly I >>> can say more (end of Jan?). >>> >>> Cheers, >>> Richard >>> >>> >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >>> > From dje.gcc at gmail.com Tue Jan 19 14:52:23 2016 From: dje.gcc at gmail.com (David Edelsohn) Date: Tue, 19 Jan 2016 14:52:23 -0500 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: <569E5AAA.2020400@gmail.com> References: <569E1A1E.20901@gmail.com> <569E5AAA.2020400@gmail.com> Message-ID: Translation got pretty far, until ... 
***%**********++++++++++++++++[37a03] translation-task} [Timer] Timings: [Timer] annotate --- 485.9 s [Timer] rtype_lltype --- 822.1 s [Timer] pyjitpl_lltype --- 933.9 s [Timer] backendopt_lltype --- 255.8 s [Timer] =========================================== [Timer] Total: --- 2497.6 s [translation:info] Error: [translation:info] File "/home/dje/src/pypy/rpython/translator/goal/translate.py", line 318, in main [translation:info] drv.proceed(goals) [translation:info] File "/home/dje/src/pypy/rpython/translator/driver.py", line 549, in proceed [translation:info] result = self._execute(goals, task_skip = self._maybe_skip()) [translation:info] File "/home/dje/src/pypy/rpython/translator/tool/taskengine.py", line 114, in _execute [translation:info] res = self._do(goal, taskcallable, *args, **kwds) [translation:info] File "/home/dje/src/pypy/rpython/translator/driver.py", line 278, in _do [translation:info] res = func() [translation:info] File "/home/dje/src/pypy/rpython/translator/driver.py", line 384, in task_backendopt_lltype [translation:info] backend_optimizations(self.translator) [translation:info] File "/home/dje/src/pypy/rpython/translator/backendopt/all.py", line 142, in backend_optimizations [translation:info] gilanalysis.analyze(graphs, translator) [translation:info] File "/home/dje/src/pypy/rpython/translator/backendopt/gilanalysis.py", line 51, in analyze [translation:info] " %s\n%s" % (func, err.getvalue())) [translation:ERROR] Exception: 'no_release_gil' function can release the GIL: [translation:ERROR] [GilAnalyzer] analyze_direct_call((rpython.rtyper.lltypesystem.rffi:3)ccall_write__INT_arrayPtr_Unsigned): True [translation:ERROR] [GilAnalyzer] analyze_direct_call((rpython.rlib.rposix:446)write): True [translation:ERROR] [GilAnalyzer] analyze_direct_call((rpython.flowspace.specialcase:81)rpython_print_newline): True [translation:ERROR] [GilAnalyzer] analyze_indirect_call([<a long list of bound AssemblerZARCH emit methods; their reprs were stripped by the mail archiver>]): True [translation:ERROR] [GilAnalyzer] analyze_direct_call((rpython.jit.backend.zarch.regalloc:470)Regalloc.walk_operations): True [translation:ERROR] [GilAnalyzer] analyze_direct_call((rpython.jit.backend.zarch.assembler:840)AssemblerZARCH._assemble): True [translation:ERROR] [translation] start debugger... > /home/dje/src/pypy/rpython/translator/backendopt/gilanalysis.py(51)analyze() -> " %s\n%s" % (func, err.getvalue())) On Tue, Jan 19, 2016 at 10:47 AM, Richard Plangger wrote: > It seems that I'm using old software on the vm. :) > > I kicked the build bot to see if the update has any effect. > Some of the failing tests (on the buildbot only) are very severe, and it > is hard to find out the cause if they do not fail on the development > machine... We could try to start a translation, but I'm unsure if it > will really work. > > Cheers, > Richard > > > > On 01/19/2016 04:40 PM, David Edelsohn wrote: >> $ as -v >> GNU assembler version 2.25.90 (s390x-linux-gnu) using BFD version (GNU >> Binutils for Debian) 2.25.90.20160101 >> >> $ ld -v >> GNU ld (GNU Binutils for Debian) 2.25.90.20160101 >> >> On Tue, Jan 19, 2016 at 6:12 AM, Richard Plangger wrote: >>> hi, >>> >>> I wanted to give a quick update on the state of the implementation. Good >>> news! I think there is not that much left to be done! >>> >>> I'm currently waiting for a bigger VM (already wrote an email to >>> linux1 at us.ibm.com, 2 days ago?
They are maybe on holiday?) to translate >>> the full project. >>> >>> There are approx. 20 Failing tests that are left (own-linux-s390x). All >>> other pass on my virtual machine. They are mostly related to big endian >>> issues. Here are some questions: >>> >>> 1) Generally I got the impression that there are some tests that do not >>> consider endianess (e.g. micronumpy). I guess it is time to change them >>> to handle this? What about PPC? Did those not come up there? >>> >>> 2) It seems that the gcc on the build bot is quite old? It can for >>> instance not assemble the instruction LAY (load address), but the VM I >>> got (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)) is able to. As soon >>> as I can get my hands on a Debian machine that is configured similarly I >>> can say more (end of Jan?). >>> >>> Cheers, >>> Richard >>> >>> >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >>> > From me at manueljacob.de Tue Jan 19 23:30:25 2016 From: me at manueljacob.de (Manuel Jacob) Date: Wed, 20 Jan 2016 05:30:25 +0100 Subject: [pypy-dev] s390x the last failing tests In-Reply-To: References: <569E1A1E.20901@gmail.com> <569E5AAA.2020400@gmail.com> Message-ID: <4690490ed111590a29a106cb1325c1c2@indus.uberspace.de> This is caused by the print statement in the notimplemented_op() function (file rpython/jit/backend/zarch/assembler.py , line 1464). I'll look into improving the output of GraphAnalyzer later today, so it shows more clearly which function caused the problem. -Manuel On 2016-01-19 20:52, David Edelsohn wrote: > Translation got pretty far, until ... > > ***%**********++++++++++++++++[37a03] translation-task} > > [Timer] Timings: > [Timer] annotate --- 485.9 s > [Timer] rtype_lltype --- 822.1 s > [Timer] pyjitpl_lltype --- 933.9 s > [Timer] backendopt_lltype --- 255.8 s > [Timer] =========================================== > [Timer] Total: --- 2497.6 s > [translation:info] Error: > [translation:info] File > "/home/dje/src/pypy/rpython/translator/goal/translate.py", line 318, > in main > [translation:info] drv.proceed(goals) > [translation:info] File > "/home/dje/src/pypy/rpython/translator/driver.py", line 549, in > proceed > [translation:info] result = self._execute(goals, task_skip = > self._maybe_skip()) > [translation:info] File > "/home/dje/src/pypy/rpython/translator/tool/taskengine.py", line 114, > in _execute > [translation:info] res = self._do(goal, taskcallable, *args, > **kwds) > [translation:info] File > "/home/dje/src/pypy/rpython/translator/driver.py", line 278, in _do > [translation:info] res = func() > [translation:info] File > "/home/dje/src/pypy/rpython/translator/driver.py", line 384, in > task_backendopt_lltype > [translation:info] backend_optimizations(self.translator) > [translation:info] File > "/home/dje/src/pypy/rpython/translator/backendopt/all.py", line 142, > in backend_optimizations > [translation:info] gilanalysis.analyze(graphs, translator) > [translation:info] File > "/home/dje/src/pypy/rpython/translator/backendopt/gilanalysis.py", > line 51, in analyze > [translation:info] " %s\n%s" % (func, err.getvalue())) > [translation:ERROR] Exception: 'no_release_gil' function can release > the GIL: > [translation:ERROR] [GilAnalyzer] > analyze_direct_call((rpython.rtyper.lltypesystem.rffi:3)ccall_write__INT_arrayPtr_Unsigned): > True > [translation:ERROR] [GilAnalyzer] > analyze_direct_call((rpython.rlib.rposix:446)write): True > [translation:ERROR] 
[GilAnalyzer] > analyze_direct_call((rpython.flowspace.specialcase:81)rpython_print_newline): > True [translation:ERROR] [GilAnalyzer] > analyze_indirect_call([<the same long list of bound AssemblerZARCH emit methods, quoted back with reprs mangled by the mail archiver; elided here>]): True > [translation:ERROR] [GilAnalyzer] > analyze_direct_call((rpython.jit.backend.zarch.regalloc:470)Regalloc.walk_operations): > True [translation:ERROR] [GilAnalyzer] > analyze_direct_call((rpython.jit.backend.zarch.assembler:840)AssemblerZARCH._assemble): > True [translation:ERROR] > [translation] start debugger... >> /home/dje/src/pypy/rpython/translator/backendopt/gilanalysis.py(51)analyze() > -> " %s\n%s" % (func, err.getvalue())) > > On Tue, Jan 19, 2016 at 10:47 AM, Richard Plangger > wrote: >> It seems that I'm using old software on the vm. :) >> >> I kicked the build bot to see if the update has any effect. >> Some of the failing tests (on the buildbot only) are very severe, and >> it >> is hard to find out the cause if they do not fail on the development >> machine... We could try to start a translation, but I'm unsure if it >> will really work. >> >> Cheers, >> Richard >> >> >> >> On 01/19/2016 04:40 PM, David Edelsohn wrote: >>> $ as -v >>> GNU assembler version 2.25.90 (s390x-linux-gnu) using BFD version >>> (GNU >>> Binutils for Debian) 2.25.90.20160101 >>> >>> $ ld -v >>> GNU ld (GNU Binutils for Debian) 2.25.90.20160101 >>> >>> On Tue, Jan 19, 2016 at 6:12 AM, Richard Plangger >>> wrote: >>>> hi, >>>> >>>> I wanted to give a quick update on the state of the implementation. >>>> Good >>>> news! I think there is not that much left to be done! >>>> >>>> I'm currently waiting for a bigger VM (already wrote an email to >>>> linux1 at us.ibm.com, 2 days ago? They are maybe on holiday?) to >>>> translate >>>> the full project. >>>> >>>> There are approx. 20 Failing tests that are left (own-linux-s390x). >>>> All >>>> other pass on my virtual machine. They are mostly related to big >>>> endian >>>> issues.
Here are some questions: >>>> >>>> 1) Generally I got the impression that there are some tests that do >>>> not >>>> consider endianess (e.g. micronumpy). I guess it is time to change >>>> them >>>> to handle this? What about PPC? Did those not come up there? >>>> >>>> 2) It seems that the gcc on the build bot is quite old? It can for >>>> instance not assemble the instruction LAY (load address), but the VM >>>> I >>>> got (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4)) is able to. As >>>> soon >>>> as I can get my hands on a Debian machine that is configured >>>> similarly I >>>> can say more (end of Jan?). >>>> >>>> Cheers, >>>> Richard >>>> >>>> >>>> _______________________________________________ >>>> pypy-dev mailing list >>>> pypy-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pypy-dev >>>> >> > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From elmir at unity3d.com Wed Jan 20 05:05:22 2016 From: elmir at unity3d.com (Elmir Jagudin) Date: Wed, 20 Jan 2016 11:05:22 +0100 Subject: [pypy-dev] using python-ldap under pypy In-Reply-To: References: Message-ID: On Mon, Dec 14, 2015 at 10:09 AM, Armin Rigo wrote: > Hi again, > > On Mon, Dec 14, 2015 at 10:01 AM, Armin Rigo wrote: > > So it means it's really a bug of python-ldap, which just happens to > > crash more often on PyPy than on CPython. It should be fixed there. > > Actually it's a known issue. See the comment line 255: > > XXX the strings should live longer than the resulting attrs pointer. > > > A bientôt, > > Armin. > This bug has been fixed in the python-ldap package, as of version 2.4.25: http://python-ldap.cvs.sourceforge.net/viewvc/python-ldap/python-ldap/CHANGES?revision=1.370 Thanks again for the info regarding this problem. /Elmir From fijall at gmail.com Wed Jan 20 05:07:56 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 20 Jan 2016 11:07:56 +0100 Subject: [pypy-dev] using python-ldap under pypy In-Reply-To: References: Message-ID: great! thanks for letting us know On Wed, Jan 20, 2016 at 11:05 AM, Elmir Jagudin wrote: > > > On Mon, Dec 14, 2015 at 10:09 AM, Armin Rigo wrote: >> >> Hi again, >> >> On Mon, Dec 14, 2015 at 10:01 AM, Armin Rigo wrote: >> > So it means it's really a bug of python-ldap, which just happens to >> > crash more often on PyPy than on CPython. It should be fixed there. >> >> Actually it's a known issue. See the comment line 255: >> >> XXX the strings should live longer than the resulting attrs pointer. >> >> >> A bientôt, >> >> Armin. > > > This bug has been fixed in the python-ldap package, as of version 2.4.25: > > http://python-ldap.cvs.sourceforge.net/viewvc/python-ldap/python-ldap/CHANGES?revision=1.370 > > Thanks again for the info regarding this problem. > > /Elmir > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From bra at fsn.hu Thu Jan 21 04:47:27 2016 From: bra at fsn.hu (Nagy, Attila) Date: Thu, 21 Jan 2016 10:47:27 +0100 Subject: [pypy-dev] TypeError: expected string, got NoneType object with setuptools and co with pypy 4 Message-ID: <56A0A92F.3070300@fsn.hu> Hi, After installing pypy 4.0.1 on FreeBSD (from ports), I get the above exception when trying to install setuptools, or when installing anything with pip (after pypy -m ensurepip).
This all worked with the previous version (2.6). I could find this problem elsewhere: http://stackoverflow.com/questions/34566676/failed-to-install-pip-for-pypy-on-ubuntu so it doesn't seem to be related to FreeBSD. An example full trace: # pypy -m pip install --upgrade pip You are using pip version 6.1.1, however version 8.0.0 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Collecting pip Exception: Traceback (most recent call last): File "/usr/local/pypy-4.0/site-packages/pip/basecommand.py", line 246, in main status = self.run(options, args) File "/usr/local/pypy-4.0/site-packages/pip/commands/install.py", line 342, in run requirement_set.prepare_files(finder) File "/usr/local/pypy-4.0/site-packages/pip/req/req_set.py", line 345, in prepare_files functools.partial(self._prepare_file, finder)) File "/usr/local/pypy-4.0/site-packages/pip/req/req_set.py", line 290, in _walk_req_to_install more_reqs = handler(req_to_install) File "/usr/local/pypy-4.0/lib_pypy/_functools.py", line 42, in __call__ return self._func(*(self._args + fargs), **fkeywords) File "/usr/local/pypy-4.0/site-packages/pip/req/req_set.py", line 487, in _prepare_file download_dir, do_download, session=self.session, File "/usr/local/pypy-4.0/site-packages/pip/download.py", line 827, in unpack_url session, File "/usr/local/pypy-4.0/site-packages/pip/download.py", line 673, in unpack_http_url from_path, content_type = _download_http_url(link, session, temp_dir) File "/usr/local/pypy-4.0/site-packages/pip/download.py", line 887, in _download_http_url with open(file_path, 'wb') as content_file: TypeError: expected string, got NoneType object And the one for setuptools: # pypy setup.py install running install Checking .pth file support in /usr/local/pypy-4.0/site-packages/ /usr/local/bin/pypy -E -c pass TEST PASSED: /usr/local/pypy-4.0/site-packages/ appears to support .pth files running bdist_egg running egg_info writing setuptools.egg-info/PKG-INFO writing dependency_links to setuptools.egg-info/dependency_links.txt writing entry points to setuptools.egg-info/entry_points.txt writing requirements to setuptools.egg-info/requires.txt writing top-level names to setuptools.egg-info/top_level.txt Traceback (most recent call last): File "setup.py", line 169, in dist = setuptools.setup(**setup_params) File "/usr/local/pypy-4.0/lib-python/2.7/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/local/pypy-4.0/lib-python/2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/usr/local/pypy-4.0/lib-python/2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/tmp/setuptools-19.4/setuptools/command/install.py", line 67, in run self.do_egg_install() File "/tmp/setuptools-19.4/setuptools/command/install.py", line 109, in do_egg_install self.run_command('bdist_egg') File "/usr/local/pypy-4.0/lib-python/2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/usr/local/pypy-4.0/lib-python/2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/tmp/setuptools-19.4/setuptools/command/bdist_egg.py", line 152, in run self.run_command("egg_info") File "/usr/local/pypy-4.0/lib-python/2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/usr/local/pypy-4.0/lib-python/2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/tmp/setuptools-19.4/setuptools/command/egg_info.py", line 186, in run self.find_sources() File 
"/tmp/setuptools-19.4/setuptools/command/egg_info.py", line 209, in find_sources mm.run() File "/tmp/setuptools-19.4/setuptools/command/egg_info.py", line 293, in run self.add_defaults() File "/tmp/setuptools-19.4/setuptools/command/egg_info.py", line 322, in add_defaults sdist.add_defaults(self) File "/tmp/setuptools-19.4/setuptools/command/sdist.py", line 100, in add_defaults self.filelist.append(fn) File "/tmp/setuptools-19.4/setuptools/command/egg_info.py", line 236, in append if self._safe_path(path): File "/tmp/setuptools-19.4/setuptools/command/egg_info.py", line 256, in _safe_path u_path = unicode_utils.filesys_decode(path) File "/tmp/setuptools-19.4/setuptools/unicode_utils.py", line 31, in filesys_decode return path.decode(enc) TypeError: expected string, got NoneType object In this case the context is: def filesys_decode(path): """ Ensure that the given path is decoded, NONE when no expected encoding works """ fs_enc = sys.getfilesystemencoding() if isinstance(path, six.text_type): return path for enc in (fs_enc, "utf-8"): try: return path.decode(enc) except UnicodeDecodeError: continue On python 2.7, fs_enc here is: Python 2.7.11 (default, Dec 20 2015, 01:15:21) [GCC 4.2.1 Compatible FreeBSD Clang 3.4.1 (tags/RELEASE_34/dot1-final 208032)] on freebsd10 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> repr(sys.getfilesystemencoding()) "'US-ASCII'" While on pypy: Python 2.7.10 (5f8302b8bf9f53056e40426f10c72151564e5b19, Jan 16 2016, 01:16:36) [PyPy 4.0.1 with GCC 4.2.1 Compatible FreeBSD Clang 3.4.1 (tags/RELEASE_34/dot1-final 208032)] on freebsd10 Type "help", "copyright", "credits" or "license" for more information. >>>> import sys >>>> repr(sys.getfilesystemencoding()) 'None' Returning None here is fine according to the docs, but the above code snippet doesn't handle the TypeError, which it gets when doing path.decode(None). How this could work? On pypy 2.6 on the same machine: Python 2.7.9 (295ee98b69288471b0fcf2e0ede82ce5209eb90b, Jun 12 2015, 19:25:58) [PyPy 2.6.0] on freebsd10 Type "help", "copyright", "credits" or "license" for more information. >>>> 'test'.decode(None) u'test' No exception! On pypy 4.0.1: Python 2.7.10 (5f8302b8bf9f53056e40426f10c72151564e5b19, Dec 10 2015, 01:17:03) [PyPy 4.0.1 with GCC 4.2.1 Compatible FreeBSD Clang 3.4.1 (tags/RELEASE_34/dot1-final 208032)] on freebsd10 Type "help", "copyright", "credits" or "license" for more information. >>>> 'test'.decode(None) Traceback (most recent call last): File "", line 1, in TypeError: expected string, got NoneType object Unhandled TypeError exception! python 2.7 works the same way: >>> 'test'.decode(None) Traceback (most recent call last): File "", line 1, in TypeError: decode() argument 1 must be string, not None Which installs setuptools just fine (although it doesn't get to the above snippet, because it doesn't return None to sys.getfilesystemencoding()). If I set fs_enc to None in the above snippet, even python 2.7 fails. So the key here (if I'm not completely lost) seems to be not returning None to the getfilesystemencoding to fix broken(?) software. What do you think about this? 
From bra at fsn.hu Thu Jan 21 05:12:12 2016 From: bra at fsn.hu (Nagy, Attila) Date: Thu, 21 Jan 2016 11:12:12 +0100 Subject: [pypy-dev] TypeError: expected string, got NoneType object with setuptools and co with pypy 4 In-Reply-To: <56A0A92F.3070300@fsn.hu> References: <56A0A92F.3070300@fsn.hu> Message-ID: <56A0AEFC.7050002@fsn.hu> On 01/21/16 10:47, Nagy, Attila wrote: > > While on pypy: > Python 2.7.10 (5f8302b8bf9f53056e40426f10c72151564e5b19, Jan 16 2016, > 01:16:36) > [PyPy 4.0.1 with GCC 4.2.1 Compatible FreeBSD Clang 3.4.1 > (tags/RELEASE_34/dot1-final 208032)] on freebsd10 > Type "help", "copyright", "credits" or "license" for more information. >>>>> import sys >>>>> repr(sys.getfilesystemencoding()) > 'None' Also, trying to work around this by setting LC_CTYPE doesn't help, because it doesn't work: $ LC_CTYPE=en_US.ASCII python -c 'import sys; print sys.getfilesystemencoding()' US-ASCII $ LC_CTYPE=en_US.UTF-8 python -c 'import sys; print sys.getfilesystemencoding()' UTF-8 $ LC_CTYPE=en_US.ASCII pypy -c 'import sys; print sys.getfilesystemencoding()' None $ LC_CTYPE=en_US.UTF-8 pypy -c 'import sys; print sys.getfilesystemencoding()' None I guess this is where the problem lies. From sergeymatyunin at gmail.com Sat Jan 23 11:24:07 2016 From: sergeymatyunin at gmail.com (Sergey Matyunin) Date: Sat, 23 Jan 2016 17:24:07 +0100 Subject: [pypy-dev] partition in numpypy Message-ID: Hello. I need a little help with numpypy. I want to implement the partition method for numpy arrays. Let's say I can compile npy_partition.h.src and import it through CFFI. I can therefore write a python function my_partition(numpy_array, other_arguments...) that performs partitioning for a given numpy array. Now I want to create a partition method for ndarray. As far as I understand, methods of ndarray are defined in pypy/module/micronumpy/ndarray.py in a special way, and ndarray.partition = my_partition doesn't work. Is it possible to add a method to ndarray inside numpypy, not inside pypy's micronumpy module? -- Sergey From matti.picus at gmail.com Sat Jan 23 14:16:04 2016 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 23 Jan 2016 21:16:04 +0200 Subject: [pypy-dev] partition in numpypy In-Reply-To: References: Message-ID: <56A3D174.6060804@gmail.com> On 23/01/2016 6:24 PM, Sergey Matyunin wrote: > Hello. > > I need a little help with numpypy. I want to implement the partition method > for numpy arrays. Let's say I can compile npy_partition.h.src and import > it through CFFI. I can therefore write a python function > my_partition(numpy_array, other_arguments...) that performs > partitioning for a given numpy array. > > Now I want to create a partition method for ndarray. > As far as I understand, methods of ndarray are defined in > pypy/module/micronumpy/ndarray.py in a special way, and > ndarray.partition = my_partition doesn't work. > > Is it possible to add a method to ndarray inside numpypy, not inside > pypy's micronumpy module? Thanks for picking this up. I would suggest you first play around with implementing partition in cffi; it may take a while to get the interface just right, and you may decide that this implementation design is too unwieldy. Here is how I would add the app-level function to ndarray, similar to the tactic taken for set_string_function and ndarray.__repr__: - create a function in module/micronumpy/appbridge.py that accepts your partition function and stores it in a cache. - Expose this new function in _numpypy by adding it to module/micronumpy/__init__.py.
- create a default descr_partition() function in module/micronumpy/ndarray.py that raises a w_NotImplementedError if the cache entry is empty (see descr_repr for an example of how to use the cache function if it has been assigned) - Add the call to your new function from step 1 into numpy/core/multiarray.py, which is only used in pypy (multiarray is a compiled extension module in cpython) Matti From planrichi at gmail.com Thu Jan 28 08:37:44 2016 From: planrichi at gmail.com (Richard Plangger) Date: Thu, 28 Jan 2016 14:37:44 +0100 Subject: [pypy-dev] Stack limit in the jit backends Message-ID: <56AA19A8.8030800@gmail.com> Hi, the file rpython/translator/c/src/stack.h defines MAX_STACK_SIZE. PPC has a bigger limit than e.g. x86. I noticed that s390x likewise has higher memory consumption for stack frames (they are variable-sized, with a pretty high minimum size (160 bytes) imposed by the ABI). I have two questions: 1) The OS (i.e. linux) defines a stack limit (ulimit -s); does pypy override this value with MAX_STACK_SIZE? 2) How would I determine which size is best for s390x? Or how did we come up with 768 KB for x86, and 2.8 MB for ppc? Cheers, Richard From arigo at tunes.org Thu Jan 28 08:58:18 2016 From: arigo at tunes.org (Armin Rigo) Date: Thu, 28 Jan 2016 14:58:18 +0100 Subject: [pypy-dev] Stack limit in the jit backends In-Reply-To: <56AA19A8.8030800@gmail.com> References: <56AA19A8.8030800@gmail.com> Message-ID: Hi Richard, On Thu, Jan 28, 2016 at 2:37 PM, Richard Plangger wrote: > the file rpython/translator/c/src/stack.h defines MAX_STACK_SIZE. PPC > has a bigger limit than e.g. x86. I noticed that s390x likewise has > higher memory consumption for stack frames (they are variable-sized, > with a pretty high minimum size (160 bytes) imposed by the ABI). > > I have two questions: > > 1) The OS (i.e. linux) defines a stack limit (ulimit -s); does pypy > override this value with MAX_STACK_SIZE? > > 2) How would I determine which size is best for s390x? Or how did we > come up with 768 KB for x86, and 2.8 MB for ppc? The stack limit is some number that is chosen to correspond to a bit more than 1000 recursive levels of a typical Python program when run by PyPy without JIT, as compiled with a typical "gcc -O3". It's thus very much hand-waving. The limit of 768 KB was done in this way. It's much lower than the stack provided by the OS, so no "ulimit" is needed. I had to pick a higher limit for PPC because 768 KB was definitely too low there. Likely, you should measure more precisely how many levels you get on x86-64 (I think it was around 1400) and pick a value for s390x that gives a similar limit. A bientôt, Armin. From matti.picus at gmail.com Fri Jan 29 09:49:28 2016 From: matti.picus at gmail.com (Matti Picus) Date: Fri, 29 Jan 2016 16:49:28 +0200 Subject: [pypy-dev] windows and python27.lib Message-ID: <56AB7BF8.4050909@gmail.com> When linking to a dll with MSVC, one needs to provide not the dll itself but rather an import library. Also, the linker must be able to resolve all function definitions in order to run to completion, so it must be able to find the import library.
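As a rough illustration of that search, this is where build tools conventionally look for the import library on Windows — a sketch assuming the CPython-style layout described next, with a libs directory under sys.prefix:

    import os
    import sys

    def find_import_library(name='python27'):
        # distutils on Windows adds <sys.prefix>\libs to the linker's
        # /LIBPATH, so this is where name + '.lib' must end up.
        candidate = os.path.join(sys.prefix, 'libs', name + '.lib')
        return candidate if os.path.exists(candidate) else None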
On cpython, the import library used to link to python27.dll is called python27.lib; it is located in a 'libs' directory at the top level of a cpython distribution, together with things like pyexpat.lib, _socket.lib etc. distutils (and setuptools) do not need to know about the import library, since a #pragma is used to issue a link directive for the import library whenever pyconfig.h is included in a c file. A debug build will create and use python27_d.lib, which corresponds to python27_d.dll. What do we do in PyPy? At translation we build a libpypy-c.dll and an import library libpypy-c.lib. These are copied together with pypy-c.exe to the directory where translation occurred (on the build bots this is in pypy/goal). Then the package script copies the libpypy-c.lib to pypy's include directory as python27.lib. A debug build will use these same names. The same pragma is used as in cpython to force linking with the import library whenever pyconfig.h is included. So what, you ask? I think the exe should be created as pypy.exe, the dll should be called pypy27.dll, and the import library should be consistently named pypy27.lib. There should be no renaming in package.py. This has implications in the following places: - the exe_name in targetpypystandalone should drop the -%(backend) modifier - pyconfig.h and the package script should be modified to use pypy27 consistently - probably some tests will fail, they should be fixed - cffi/api.py needs tweaking in _apply_embedding_fix - package.py should not rename (what do we do on linux about pypy-c -> pypy?) What did I forget? We should also handle a debug build of pypy: we should be creating a pypy27_d.lib and pypy27_d.dll, whereas today the usual names are reused. Are there compelling reasons _not_ to make the naming consistent with cpython? As a fallback, we could just rename the import library to pypy27.lib. My current motivation to do this is that _apply_embedding_fix does not work for win32 pypy. Matti
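To make the proposal concrete, here is a sketch of what a no-rename packaging step could look like — pypy27.dll and pypy27.lib are the names proposed in this mail, not what translation currently produces:

    import os
    import shutil

    def copy_windows_artifacts(builddir, pypydir, base='pypy27'):
        # With consistent naming nothing is renamed: the dll sits next
        # to the exe, and the import library goes into include/,
        # mirroring where package.py puts python27.lib today.
        shutil.copy(os.path.join(builddir, base + '.dll'), pypydir)
        shutil.copy(os.path.join(builddir, base + '.lib'),
                    os.path.join(pypydir, 'include'))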