From matti.picus at gmail.com Fri Jan 2 10:11:43 2015 From: matti.picus at gmail.com (Matti Picus) Date: Fri, 02 Jan 2015 11:11:43 +0200 Subject: [pypy-dev] minor question about configure.Works() Message-ID: <54A660CF.50503@gmail.com> If Works() fails, it currently prints the compiler error to stderr. This is confusing since the error has no context to determine if the "fatal error" reported during translation is actually important or not. I started a "quieter-translation" branch where I silenced the error entirely; does anyone have opinions one way or another? Matti From arigo at tunes.org Fri Jan 2 16:29:42 2015 From: arigo at tunes.org (Armin Rigo) Date: Fri, 2 Jan 2015 16:29:42 +0100 Subject: [pypy-dev] minor question about configure.Works() In-Reply-To: <54A660CF.50503@gmail.com> References: <54A660CF.50503@gmail.com> Message-ID: Hi Matti, Do you mean ./rtyper/tool/rffi_platform.py, class Works? A bientôt, Armin. From pjenvey at underboss.org Fri Jan 2 21:43:39 2015 From: pjenvey at underboss.org (Philip Jenvey) Date: Fri, 2 Jan 2015 12:43:39 -0800 Subject: [pypy-dev] stdlib-2.7.9! In-Reply-To: References: Message-ID: I doubt the VCS can help here. I think the only sane way of dealing with this is to wrap said changes in feature flags, e.g. if PYTHON34_SSL: On Dec 15, 2014, at 2:01 AM, Amaury Forgeot d'Arc wrote: > I suspect that some 2.7.9 changes should not go in 3.2, but are only compatible with a 3.3 or 3.4 stdlib... > Is there a way to skip the merge so these changes directly go to the 3.3 branch? > > 2014-12-14 22:15 GMT+01:00 Alex Gaynor : > Hey all, > > Earlier today I created the 2.7.9 branch, with the copy of the 2.7.9 stdlib. > > http://buildbot.pypy.org/summary?branch=stdlib-2.7.9 is the branch summary. > > It's no surprise: the biggest work to be done is for the ssl module; 2.7.9 contains a complete backport of 3.4's ssl module. > > We have up through 3.2's version of the ssl module implemented on the py3k branch. 
I'd like some feedback from folks on how you think we should best handle finishing the 2.7.9 work. > > Should I copy the work from py3k, finish anything missing, and then when we get to python 3.4 on the py3k branch the work is just "already done"? Something else? > > Feedback please! > Alex > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > > > > -- > Amaury Forgeot d'Arc > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev -- Philip Jenvey From arigo at tunes.org Sat Jan 3 17:46:57 2015 From: arigo at tunes.org (Armin Rigo) Date: Sat, 3 Jan 2015 17:46:57 +0100 Subject: [pypy-dev] Fwd: [Python-Dev] More compact dictionaries with faster iteration In-Reply-To: References: <9BD2AD6A-125D-4A34-B6BF-A99B167554B6@gmail.com> <54A3F65A.1060406@gmail.com> Message-ID: Hi all, About ordered dictionaries: ---------- Forwarded message ---------- From: Armin Rigo Date: 3 January 2015 at 17:39 Subject: Re: [Python-Dev] More compact dictionaries with faster iteration To: Maciej Fijalkowski Cc: Serhiy Storchaka , "" Hi all, On 1 January 2015 at 14:52, Maciej Fijalkowski wrote: > PS. I wonder who came up with the idea first, PHP or rhettinger and > who implemented it first (I'm pretty sure it was used in hippy before > it was used in Zend PHP) We'd need to look more in detail to that question, but a quick look made me find this Java code from 2012: https://code.google.com/r/wassermanlouis-guava/source/browse/guava/src/com/google/common/collect/CompactHashMap.java?name=refs/remotes/gcode-clone/compact-maps which implements almost exactly the original idea of Raymond. (It has a twist because Java doesn't support arrays of (int, int, Object, Object), and instead encodes it as one array of long and one array of Objects. It also uses a chain of buckets instead of open addressing.) 
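To make the layout concrete, here is a rough Python sketch of the compact, insertion-ordered design described above (an illustration only, with hypothetical names; it omits resizing and deletion, and it is not how PyPy, CPython, or the Java code actually implement it):

```python
# Sketch of the "compact dict" idea: a small sparse index table whose
# slots hold integers pointing into a dense, insertion-ordered list of
# [hash, key, value] rows.  Resizing and deletion are omitted, so this
# toy version only holds a handful of items.
class CompactDict(object):
    FREE = -1

    def __init__(self, size=8):             # size must be a power of two
        self.indices = [self.FREE] * size   # sparse: slot -> entry index
        self.entries = []                   # dense: [hash, key, value]

    def _slot(self, key):
        # open addressing with linear probing (simplified)
        mask = len(self.indices) - 1
        i = hash(key) & mask
        while True:
            idx = self.indices[i]
            if idx == self.FREE or self.entries[idx][1] == key:
                return i
            i = (i + 1) & mask

    def __setitem__(self, key, value):
        i = self._slot(key)
        if self.indices[i] == self.FREE:
            self.indices[i] = len(self.entries)
            self.entries.append([hash(key), key, value])
        else:
            self.entries[self.indices[i]][2] = value

    def __getitem__(self, key):
        idx = self.indices[self._slot(key)]
        if idx == self.FREE:
            raise KeyError(key)
        return self.entries[idx][2]

    def keys(self):
        # iteration in insertion order falls out of the dense layout
        return [e[1] for e in self.entries]
```

The sparse table stores only small integers, so it stays cheap even when half empty, while the dense entries list gives compact storage and ordered iteration for free.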
A bientôt, Armin. From matti.picus at gmail.com Sat Jan 3 18:23:57 2015 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 03 Jan 2015 19:23:57 +0200 Subject: [pypy-dev] minor question about configure.Works() In-Reply-To: References: <54A660CF.50503@gmail.com> Message-ID: <54A825AD.1090906@gmail.com> No, I actually meant ctypes_configure/configure.py, which I modified on the quieter-translation branch. But now that you mention it, we do not need two implementations of Works() ... Matti On 02/01/15 17:29, Armin Rigo wrote: > Hi Matti, > > Do you mean ./rtyper/tool/rffi_platform.py, class Works? > > > A bientôt, > > Armin. From thomas.f.hahn2 at gmail.com Sun Jan 4 05:25:50 2015 From: thomas.f.hahn2 at gmail.com (thomas hahn) Date: Sat, 3 Jan 2015 22:25:50 -0600 Subject: [pypy-dev] Help with finding tutors for Python, Linux, R, Perl, Octave, MATLAB and/or Cytoscape for yeast microarray analysis, next generation sequencing and constructing gene interaction networks Message-ID: *Help with finding tutors for Python, Linux, R, Perl, Octave, MATLAB and/or Cytoscape for yeast microarray analysis, next generation sequencing and constructing gene interaction networks* Hi, I am a visually impaired bioinformatics graduate student using microarray data for my master's thesis aimed at deciphering the mechanism by which the yeast wild type can suppress the rise of free reactive oxygen species (ROS) induced by caloric restriction (CR) but the Atg15 and Erg6 knockout mutant cannot. Since my remaining vision is very limited I need very high magnification. But that makes my visual field very small. Therefore I need somebody to teach me how to use these programming environments, especially for microarray analysis, next generation sequencing and constructing gene and pathway interaction networks. 
This is very difficult for me to figure out without assistance because Zoomtext, my magnification and text to speech software, on which I am depending because I am almost blind, has problems reading out aloud many programming related websites to me. And even those websites it can read, it can only read sequentially from left to right and then from top to bottom. Unfortunately, this way of acquiring, finding, selecting and processing new information and answering questions is too tiresome, exhausting, ineffective and especially way too time consuming for graduating with a PhD in bioinformatics before my funding runs out despite being severely limited by my visual disability. I would also need help with writing a good literature review and applying the described techniques to my own yeast Affimetrix microarray dataset because I cannot see well enough to find all relevant publications on my own. Some examples for specific tasks I urgently need help with are: 1. Analyzing and comparing the three publically available microarray datasets that can be accessed at: A. http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE41860 B. http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE38635 C. http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE9217 2. Learning how to use the Affymetrics microarray analysis software for the Yeast 2 chip, which can be found at http://www.affymetrix.com/support/technical/libraryfilesmain.affx 3. For Cytoscape I need somebody, who can teach me how to execute the tutorials at the following links because due to my very limited vision field I cannot see tutorial and program interface simultaneously. A. http://opentutorials.cgl.ucsf.edu/index.php/Tutorial:Introduction_to_Cytoscape_3.1-part2#Importing_and_Exploring_Your_Data B. http://opentutorials.cgl.ucsf.edu/index.php/Tutorial:Filtering_and_Editing_in_Cytoscape_3 C. http://cytoscape.org/manual/Cytoscape2_8Manual.html#Import%20Fixed-Format%20Network%20Files D. 
http://wiki.cytoscape.org/Cytoscape_User_Manual/Network_Formats 4. Learning how to use the TopGo R package to perform statistical analysis on GO enrichments. Since I am legally blind the rehab agency is giving me money to pay tutors for this purpose. Could you please help me getting in touch regarding this with anybody, who could potentially be interested in teaching me one on one thus saving me time for acquiring new information and skills, which I need to finish my thesis on time, so that I can remain eligible for funding to continue in my bioinformatics PhD program despite being almost blind? The tutoring can be done remotely via TeamViewer 5 and Skype. Hence, it does not matter where my tutors are physically located. Currently I have tutors in Croatia and UK. But since they both work full time jobs while working on their PhD dissertation they only have very limited time to teach me online. Could you therefore please forward this request for help to anybody, who could potentially be interested or, who could connect me to somebody, who might be, because my graduation and career depend on it? Who else would you recommend me to contact regarding this? Where else could I post this because I am in urgent need for help? Could you please contact me directly via email at Thomas.F.Hahn2 at gmail.com and/or Skype at tfh002 because my text to speech software has problems to read out this website aloud to me? I thank you very much in advance for your thoughts, ideas, suggestions, recommendations, time, help, efforts and support. With very warm regards, *Thomas Hahn* 1) *Graduate student in the Joint Bioinformatics Program at the University of Arkansas at Little Rock (UALR) and the University of Arkansas Medical Sciences (UAMS) &* 2) *Research & Industry Advocate, Founder and Board Member of RADISH MEDICAL SOLUTIONS, INC. 
(**http://www.radishmedical.com/thomas-hahn/* *) * *Primary email: **Thomas.F.Hahn2 at gmail.com* *Cell phone: 318 243 3940* *Office phone: 501 682 1440* *Office location: EIT 535* *Skype ID: tfh002* *Virtual Google Voice phone to reach me while logged into my email (i.e. * *Thomas.F.Hahn2 at gmail.com* *), even when having no cell phone reception, e.g. in big massive buildings: *(501) 301-4890 <%28501%29%20301-4890> *Web links: * 1) https://ualr.academia.edu/ThomasHahn 2) https://www.linkedin.com/pub/thomas-hahn/42/b29/42 3) http://facebook.com/Thomas.F.Hahn 4) https://twitter.com/Thomas_F_Hahn -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Jan 7 11:02:45 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 7 Jan 2015 11:02:45 +0100 Subject: [pypy-dev] Thank You Matti Picus AND I may have some new questions —— bitpeach from china In-Reply-To: References: <548CA6B7.3010605@gmail.com> Message-ID: Hi, On 14 December 2014 at 08:52, 思 <958816492 at qq.com> wrote: > (1) I check the VirtualEnv version list and the 1.11.6 is the latest > by this URL . There > is no newer version than that. It means that 1.11.6 version is the newest > and no versions can support pypy-2.4.0. I see that the current virtualenv version is now 12.0.5 ( https://pypi.python.org/pypi/virtualenv/#downloads). I assume that this means that the fixes Matti talks about are now officially part of it. Can you try again with this version 12.0.5? A bientôt, Armin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From 958816492 at qq.com Wed Jan 7 13:47:34 2015 From: 958816492 at qq.com (思) Date: Wed, 7 Jan 2015 20:47:34 +0800 Subject: [pypy-dev] Thank You! [Question about VirtualEnv and PyPy is solved by your help!] Message-ID: Dear Directors or Dear Friends: I'm so glad to get your message. My honor to you, sir. Your email reminds me. 
I really appreciate it and write this email formally? as the reply . Firstly, I found the newest version 12.0.5 and I install this in my OS(Windows 8). Secondly, I use the command as below Thirdly, I use the cmd to go into the virtual environment of Pypy built by 2nd operation. Finally, I active the in cmd and I enter into the pypy environment. The picture below proved my process?. ? I'm grateful for your help and informing me in time. Although PyPy has a long way to go because the 3rd packages is still not enough, I believe PyPy is the future of Python as well as Cython. The attitude of your team shows great perseverance in the face of difficulty. I will join the donation towards the 3rd site-packages or your research to render what trifling service I can?. Best Wishes to your team and PyPy! :-) bitpeach?? ------------------ ???? ------------------ ???: "Armin Rigo";; ????: 2015?1?7?(???) ??6:02 ???: "?"<958816492 at qq.com>; ??: "Matti Picus"; "pypy-dev"; ??: Re: [pypy-dev] Thank You Matti Picus AND I may have some new questions ?? bitpeach from china Hi, On 14 December 2014 at 08:52, ? <958816492 at qq.com> wrote: (1) I check the VirtualEnv version list and the 1.11.6 is the latest by this URL. There is no newer version than that. It means that 1.11.6 version is the newest and no versions can support pypy-2.4.0. I see that the current virtualenv version is now 12.0.5 (https://pypi.python.org/pypi/virtualenv/#downloads). I assume that this means that the fixes Matti talks about are now officially part of it. Can you try again with this version 12.0.5? A bient?t, Armin. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: B1934022 at 62536A58.E62AAD54 Type: application/octet-stream Size: 48745 bytes Desc: not available URL: From stuaxo2 at yahoo.com Wed Jan 7 16:06:49 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Wed, 7 Jan 2015 15:06:49 +0000 (UTC) Subject: [pypy-dev] Segmentation Fault Message-ID: <1139641234.3467624.1420643209157.JavaMail.yahoo@jws10025.mail.ne1.yahoo.com> Hi,? ?I'm running pypy 2.4.0 on Ubuntu Utopic. Running the pypy works OK (though it outputs 'trusty' weirdly): ?$ ?pypyPython 2.7.8 (2.4.0+dfsg-1~ppa2+trusty, Sep 25 2014, 04:35:04)[PyPy 2.4.0 with GCC 4.8.2] on linux2Type "help", "copyright", "credits" or "license" for more information.>>>> Running my program in it gives a segfault: $ ?sbotSegmentation fault If I run in CPython then things are OK $ ?sbotusage: usage: sbot [options] inputfile.bot [args] [-h] [-o FILE] [-w] [-f]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-t TITLE] [-s] [-dv]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-p SERVERPORT] [-r REPEAT]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-g GRAMMAR] [-c] [-v VARS]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? script [script_args]usage: sbot [options] inputfile.bot [args]: error: too few arguments I got this out of gdb: gdb -ex r --args `which pypy` `which sbot`?GNU gdb (Ubuntu 7.8-1ubuntu4) 7.8.0.20141001-cvsCopyright (C) 2014 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law. 
?Type "show copying"and "show warranty" for details.This GDB was configured as "x86_64-linux-gnu".Type "show configuration" for configuration details.For bug reporting instructions, please see:.Find the GDB manual and other documentation resources online at:.For help, type "help".Type "apropos word" to search for commands related to "word"...Reading symbols from /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/pypy...(no debugging symbols found)...done.Starting program: /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/pypy /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/sbot[Thread debugging using libthread_db enabled]Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Program received signal SIGSEGV, Segmentation fault.0x00007fffecdb686e in std::_Rb_tree >, std::_Select1st > >, std::less, std::allocator > > >::_M_get_insert_unique_pos(std::string const&) ()? ?from /usr/lib/x86_64-linux-gnu/libprotobuf.so.8(gdb)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Jan 7 16:45:28 2015 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 7 Jan 2015 16:45:28 +0100 Subject: [pypy-dev] Segmentation Fault In-Reply-To: <1139641234.3467624.1420643209157.JavaMail.yahoo@jws10025.mail.ne1.yahoo.com> References: <1139641234.3467624.1420643209157.JavaMail.yahoo@jws10025.mail.ne1.yahoo.com> Message-ID: Hi, Please file a bug at https://bitbucket.org/pypy/pypy/issues Also, please run "bt" in gdb to show the full stack of the failure. 2015-01-07 16:06 GMT+01:00 Stuart Axon : > Hi, > I'm running pypy 2.4.0 on Ubuntu Utopic. > > Running the pypy works OK (though it outputs 'trusty' weirdly): > > $ ?pypy > Python 2.7.8 (2.4.0+dfsg-1~ppa2+trusty, Sep 25 2014, 04:35:04) > [PyPy 2.4.0 with GCC 4.8.2] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>>> > > Running my program in it gives a segfault: > > $ ?sbot > Segmentation fault > > > If I run in CPython then things are OK > > $ ?sbot > usage: usage: sbot [options] inputfile.bot [args] [-h] [-o FILE] [-w] [-f] > [-t TITLE] [-s] [-dv] > [-p SERVERPORT] [-r > REPEAT] > [-g GRAMMAR] [-c] [-v > VARS] > script [script_args] > usage: sbot [options] inputfile.bot [args]: error: too few arguments > > > > I got this out of gdb: > > gdb -ex r --args `which pypy` `which sbot` > GNU gdb (Ubuntu 7.8-1ubuntu4) 7.8.0.20141001-cvs > Copyright (C) 2014 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later < > http://gnu.org/licenses/gpl.html> > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-linux-gnu". > Type "show configuration" for configuration details. > For bug reporting instructions, please see: > . > Find the GDB manual and other documentation resources online at: > . > For help, type "help". > Type "apropos word" to search for commands related to "word"... > Reading symbols from > /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/pypy...(no debugging > symbols found)...done. > Starting program: > /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/pypy > /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/sbot > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". > > Program received signal SIGSEGV, Segmentation fault. 
> 0x00007fffecdb686e in std::_Rb_tree const, std::pair >, std::_Select1st const, std::pair > >, std::less, > std::allocator > > > >::_M_get_insert_unique_pos(std::string const&) () > from /usr/lib/x86_64-linux-gnu/libprotobuf.so.8 > (gdb) > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stuaxo2 at yahoo.com Wed Jan 7 16:52:09 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Wed, 7 Jan 2015 15:52:09 +0000 (UTC) Subject: [pypy-dev] Segmentation Fault In-Reply-To: References: Message-ID: <2069021849.2850377.1420645929988.JavaMail.yahoo@jws100117.mail.ne1.yahoo.com> Done https://bitbucket.org/pypy/pypy/issue/1955/segfault-on-running-program ?S++ On Wednesday, January 7, 2015 8:45 PM, Amaury Forgeot d'Arc wrote: Hi, Please file a bug at?https://bitbucket.org/pypy/pypy/issuesAlso, please run "bt" in gdb to show the full stack of the failure. 2015-01-07 16:06 GMT+01:00 Stuart Axon : Hi,? ?I'm running pypy 2.4.0 on Ubuntu Utopic. Running the pypy works OK (though it outputs 'trusty' weirdly): ?$ ?pypyPython 2.7.8 (2.4.0+dfsg-1~ppa2+trusty, Sep 25 2014, 04:35:04)[PyPy 2.4.0 with GCC 4.8.2] on linux2Type "help", "copyright", "credits" or "license" for more information.>>>> Running my program in it gives a segfault: $ ?sbotSegmentation fault If I run in CPython then things are OK $ ?sbotusage: usage: sbot [options] inputfile.bot [args] [-h] [-o FILE] [-w] [-f]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-t TITLE] [-s] [-dv]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-p SERVERPORT] [-r REPEAT]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-g GRAMMAR] [-c] [-v VARS]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 
script [script_args]usage: sbot [options] inputfile.bot [args]: error: too few arguments I got this out of gdb: gdb -ex r --args `which pypy` `which sbot`?GNU gdb (Ubuntu 7.8-1ubuntu4) 7.8.0.20141001-cvsCopyright (C) 2014 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law.? Type "show copying"and "show warranty" for details.This GDB was configured as "x86_64-linux-gnu".Type "show configuration" for configuration details.For bug reporting instructions, please see:.Find the GDB manual and other documentation resources online at:.For help, type "help".Type "apropos word" to search for commands related to "word"...Reading symbols from /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/pypy...(no debugging symbols found)...done.Starting program: /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/pypy /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/sbot[Thread debugging using libthread_db enabled]Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Program received signal SIGSEGV, Segmentation fault.0x00007fffecdb686e in std::_Rb_tree >, std::_Select1st > >, std::less, std::allocator > > >::_M_get_insert_unique_pos(std::string const&) ()? ?from /usr/lib/x86_64-linux-gnu/libprotobuf.so.8(gdb)? _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stuaxo2 at yahoo.com Wed Jan 7 17:07:39 2015 From: stuaxo2 at yahoo.com (Stuart Axon) Date: Wed, 7 Jan 2015 16:07:39 +0000 (UTC) Subject: [pypy-dev] Segmentation Fault In-Reply-To: <2069021849.2850377.1420645929988.JavaMail.yahoo@jws100117.mail.ne1.yahoo.com> References: <2069021849.2850377.1420645929988.JavaMail.yahoo@jws100117.mail.ne1.yahoo.com> Message-ID: <2043359252.3514954.1420646859987.JavaMail.yahoo@jws10052.mail.ne1.yahoo.com> Here is the line that causes the segfault: https://github.com/shoebot/shoebot/blob/shoebot-gtk3-pgi/shoebot/sbot.py#L76 from shoebot.grammar import DrawBot, NodeBot If I change it to only import one of these then it doesn't segfault.?S++ On Wednesday, January 7, 2015 8:57 PM, Stuart Axon wrote: Done https://bitbucket.org/pypy/pypy/issue/1955/segfault-on-running-program ?S++ On Wednesday, January 7, 2015 8:45 PM, Amaury Forgeot d'Arc wrote: Hi, Please file a bug at?https://bitbucket.org/pypy/pypy/issuesAlso, please run "bt" in gdb to show the full stack of the failure. 2015-01-07 16:06 GMT+01:00 Stuart Axon : Hi,? ?I'm running pypy 2.4.0 on Ubuntu Utopic. Running the pypy works OK (though it outputs 'trusty' weirdly): ?$ ?pypyPython 2.7.8 (2.4.0+dfsg-1~ppa2+trusty, Sep 25 2014, 04:35:04)[PyPy 2.4.0 with GCC 4.8.2] on linux2Type "help", "copyright", "credits" or "license" for more information.>>>> Running my program in it gives a segfault: $ ?sbotSegmentation fault If I run in CPython then things are OK $ ?sbotusage: usage: sbot [options] inputfile.bot [args] [-h] [-o FILE] [-w] [-f]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-t TITLE] [-s] [-dv]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-p SERVERPORT] [-r REPEAT]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? [-g GRAMMAR] [-c] [-v VARS]? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 
script [script_args]usage: sbot [options] inputfile.bot [args]: error: too few arguments I got this out of gdb: gdb -ex r --args `which pypy` `which sbot`?GNU gdb (Ubuntu 7.8-1ubuntu4) 7.8.0.20141001-cvsCopyright (C) 2014 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law.? Type "show copying"and "show warranty" for details.This GDB was configured as "x86_64-linux-gnu".Type "show configuration" for configuration details.For bug reporting instructions, please see:.Find the GDB manual and other documentation resources online at:.For help, type "help".Type "apropos word" to search for commands related to "word"...Reading symbols from /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/pypy...(no debugging symbols found)...done.Starting program: /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/pypy /mnt/data/home/stu/.virtualenvs/shoebot-pgi-pypy/bin/sbot[Thread debugging using libthread_db enabled]Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Program received signal SIGSEGV, Segmentation fault.0x00007fffecdb686e in std::_Rb_tree >, std::_Select1st > >, std::less, std::allocator > > >::_M_get_insert_unique_pos(std::string const&) ()? ?from /usr/lib/x86_64-linux-gnu/libprotobuf.so.8(gdb)? _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -- Amaury Forgeot d'Arc _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From kcednalino at gmail.com Sat Jan 10 07:57:47 2015 From: kcednalino at gmail.com (Kevin Ednalino) Date: Fri, 9 Jan 2015 22:57:47 -0800 Subject: [pypy-dev] JIT Spikes During Timings Message-ID: Hi. 
I'm using PyPy3 in the context of a game engine and I'm getting these spikes when timing it, such as: Engine: 3.0 ms Engine: 3.9 ms Engine: 3.0 ms Engine: 7.9 ms Engine: 25.1 ms Engine: 2.2 ms Engine: 2.2 ms Engine: 3.1 ms Engine: 3.0 ms Engine: 4.2 ms When I run the engine on CPython 3, the timings are more consistent: Engine: 1.7 ms Engine: 1.8 ms Engine: 1.2 ms Engine: 1.8 ms Engine: 1.7 ms Engine: 1.3 ms Engine: 1.6 ms Engine: 1.8 ms Engine: 1.9 ms Engine: 1.5 ms I've gotten spikes when timing much simpler things. I don't believe my code is doing anything exceptional to cause the spikes. My educated guess is it's due to the GC or more likely the JIT. When I run the engine with JIT off like so, "pypy3 --jit off main.py", the spikes disappear (albeit not as fast with it on). I've also been playing around with the minimark settings but no luck so far; possibly tweaking the JIT settings might help. Any suggestions are welcomed :). Specifications: pypy3 --version: Python 3.2.5 (b2091e973da69152b3f928bfaabd5d2347e6df46, Nov 18 2014, 20:15:54) [PyPy 2.4.0 with GCC 4.9.2] uname -a: Linux archlinux 3.17.6-1-ARCH #1 SMP PREEMPT Sun Dec 7 23:43:32 UTC 2014 x86_64 GNU/Linux Sincerely, Kevin Ednalino -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Jan 10 17:11:13 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 10 Jan 2015 18:11:13 +0200 Subject: [pypy-dev] JIT Spikes During Timings In-Reply-To: References: Message-ID: Hi Kevin. Those are most likely JIT spikes. We're working on reducing them Cheers, fijal On Sat, Jan 10, 2015 at 8:57 AM, Kevin Ednalino wrote: > Hi. 
I'm using PyPy3 in the context of a game engine and I'm getting these > spikes when timing it, such as: > > Engine: 3.0 ms > Engine: 3.9 ms > Engine: 3.0 ms > Engine: 7.9 ms > Engine: 25.1 ms > Engine: 2.2 ms > Engine: 2.2 ms > Engine: 3.1 ms > Engine: 3.0 ms > Engine: 4.2 ms > > When I run the engine on CPython 3, the timings are more consistent: > > Engine: 1.7 ms > Engine: 1.8 ms > Engine: 1.2 ms > Engine: 1.8 ms > Engine: 1.7 ms > Engine: 1.3 ms > Engine: 1.6 ms > Engine: 1.8 ms > Engine: 1.9 ms > Engine: 1.5 ms > > I've gotten spikes when timing much simpler things. I don't believe my code > is doing anything exceptional to cause the spikes. My educated guess is it's > due to the GC or more likely the JIT. When I run the engine with JIT off > like so, "pypy3 --jit off main.py", the spikes disappear (albeit not as fast > with it on). I've also been playing around with the minimark settings but no > luck so far; possibly tweaking the JIT settings might help. > > Any suggestions are welcomed :). > > Specifications: > > pypy3 --version: > Python 3.2.5 (b2091e973da69152b3f928bfaabd5d2347e6df46, Nov 18 2014, > 20:15:54) > [PyPy 2.4.0 with GCC 4.9.2] > > uname -a: > Linux archlinux 3.17.6-1-ARCH #1 SMP PREEMPT Sun Dec 7 23:43:32 UTC 2014 > x86_64 GNU/Linux > > Sincerely, > Kevin Ednalino > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From arigo at tunes.org Sun Jan 11 19:48:21 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 11 Jan 2015 19:48:21 +0100 Subject: [pypy-dev] Ordered dict in PyPy Message-ID: Hi all, The all-ordered-dicts branch is not quite ready for merging, but getting there. In one word, it makes all dicts ordered by default, by a subtle change of the internal model which also makes them *more* compact. An annoying detail is the OrderedDict subclass. We can simplify it a lot in PyPy, but not completely remove it. 
It adds a few methods like popitem(last=False) or __reversed__() which are not available on the base class, and it has a different notion of equality (two OrderedDicts are equal only if they store items in the same order). Moreover, it's still useful to have a class OrderedDict in PyPy, if only to say "I really want this dict to be ordered and I want a CPython-compatible way to express that". One annoyance is what to do with iteration over OrderedDicts. The CPython logic doesn't raise RuntimeError for concurrent modifications. In fact it doesn't care at all: it gives more or less what we could reasonably expect in simple cases (like not deleting items, only adding new ones). Of course in less simple cases it gives nonsense or crashes obscurely, like after you deleted the item most recently returned by an iterator. What should we do? We can (1) raise RuntimeError as soon as any change is detected, like dict, and propagate this to offending applications; (2) write some slightly different approximation of "what you'd expect"; or (3) come up with a sane and exact definition of what to expect and implement that; or (4) find some hack that happens to give the same result as CPython in the cases where CPython's result makes sense, but not necessarily in all cases. Right now the branch implements 2, but I think either 1 or 3 (but not 4) would be a better idea. Thoughts? A bientôt, Armin. From lac at openend.se Sun Jan 11 19:57:13 2015 From: lac at openend.se (Laura Creighton) Date: Sun, 11 Jan 2015 19:57:13 +0100 Subject: [pypy-dev] Ordered dict in PyPy In-Reply-To: Message from Armin Rigo of "Sun, 11 Jan 2015 19:48:21 +0100." References: Message-ID: <201501111857.t0BIvD3x025766@fido.openend.se> Can we talk the CPython developers into raising RuntimeError for concurrent modifications? 
Laura From arigo at tunes.org Sun Jan 11 20:19:32 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 11 Jan 2015 20:19:32 +0100 Subject: [pypy-dev] Ordered dict in PyPy In-Reply-To: <201501111857.t0BIvD3x025766@fido.openend.se> References: <201501111857.t0BIvD3x025766@fido.openend.se> Message-ID: Hi Laura, On 11 January 2015 at 19:57, Laura Creighton wrote: > Can we talk the CPython developers into raising RuntimeError for > concurrent modifications? No, we can't expect them to change that: http://bugs.python.org/issue19414 shows they have no plan to have well-defined behavior (either RuntimeError or well-defined results). So for us, the question is if it makes sense for us to break compatibility with CPython 2.7 in this undocumented aspect by arguing that CPython's sometimes bogus results show it was never meant to work at all (which is true), or if instead we should go the opposite way and offer some well-defined results that generalize the partially working results of CPython (which would make people happy but is harder to implement). Note that I would be fine if we can't find any existing program that relies on this. Then we can decide to implement the RuntimeError solution. If and when somebody files a PyPy bug report, we can argue again. A bientôt, Armin. From alex.gaynor at gmail.com Sun Jan 11 20:26:17 2015 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 11 Jan 2015 19:26:17 +0000 Subject: [pypy-dev] Ordered dict in PyPy References: <201501111857.t0BIvD3x025766@fido.openend.se> Message-ID: IMO, it's clear that CPython intends this to be "undefined behavior", and raising a RuntimeError is a perfectly acceptable undefined behavior -- better than corrupting the data. For __eq__ and __reversed__ and popitem(last=False) we can just have functions in __pypy__ and call them from class OrderedDict(dict): in collections I think? 
Alex On Sun Jan 11 2015 at 11:20:55 AM Armin Rigo wrote: > Hi Laura, > > On 11 January 2015 at 19:57, Laura Creighton wrote: > > Can we talk the CPython developers into raising RunTimeError for > > concurrent modifications? > > No, we can't expect them to change that: > http://bugs.python.org/issue19414 shows they have no plan to have > well-defined behavior (either RuntimeError or well-defined results). > > So for us, the question is if it makes sense for us to break > compatibility with CPython 2.7 in this undocumented aspect by arguing > that CPython's sometimes bogus results shows it was never meant to > work at all (which is true), or if instead we should go the opposite > way and offer some well-defined results that generalizes the partially > working results of CPython (which would make people happy but is > harder to implement). > > Note that I would be fine if we can't find any existing program that > relies on this. Then we can decide to implement the RuntimeError > solution. If and when somebody files a PyPy bug report, we can argue > again. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun Jan 11 20:37:40 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 11 Jan 2015 21:37:40 +0200 Subject: [pypy-dev] Ordered dict in PyPy In-Reply-To: References: <201501111857.t0BIvD3x025766@fido.openend.se> Message-ID: I'm with Alex on that - raising RuntimeError is a good behavior when "you're not supposed to do that" happens. I would go with 1) as opposed to 2) On Sun, Jan 11, 2015 at 9:26 PM, Alex Gaynor wrote: > IMO, it's clear that CPython intends this to be "undefined behavior", > raising a RuntimeError is a perfectly acceptable undefined behavior IMO -- > better than corrupting the data. 
> > For __eq__ and __reversed__ and popitem(last=False) we can just have > functions in __pypy__ and call them from class OrderedDict(dict): in > collections I think? > > Alex > > > On Sun Jan 11 2015 at 11:20:55 AM Armin Rigo wrote: >> >> Hi Laura, >> >> On 11 January 2015 at 19:57, Laura Creighton wrote: >> > Can we talk the CPython developers into raising RunTimeError for >> > concurrent modifications? >> >> No, we can't expect them to change that: >> http://bugs.python.org/issue19414 shows they have no plan to have >> well-defined behavior (either RuntimeError or well-defined results). >> >> So for us, the question is if it makes sense for us to break >> compatibility with CPython 2.7 in this undocumented aspect by arguing >> that CPython's sometimes bogus results shows it was never meant to >> work at all (which is true), or if instead we should go the opposite >> way and offer some well-defined results that generalizes the partially >> working results of CPython (which would make people happy but is >> harder to implement). >> >> Note that I would be fine if we can't find any existing program that >> relies on this. Then we can decide to implement the RuntimeError >> solution. If and when somebody files a PyPy bug report, we can argue >> again. >> >> >> A bient?t, >> >> Armin. >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From samo at meluria.com Sun Jan 11 20:39:12 2015 From: samo at meluria.com (Samuel Villamonte) Date: Sun, 11 Jan 2015 14:39:12 -0500 Subject: [pypy-dev] Ordered dict in PyPy In-Reply-To: References: <201501111857.t0BIvD3x025766@fido.openend.se> Message-ID: IMHO, there's a part of an old "saying": [...] Errors should never pass silently. Unless explicitly silenced. 
So I'd say better to have users deal with the RuntimeError, and
document it in the CPython differences page.

2015-01-11 14:26 GMT-05:00 Alex Gaynor :

> IMO, it's clear that CPython intends this to be "undefined behavior",
> raising a RuntimeError is a perfectly acceptable undefined behavior IMO --
> better than corrupting the data.
>
> For __eq__ and __reversed__ and popitem(last=False) we can just have
> functions in __pypy__ and call them from class OrderedDict(dict): in
> collections I think?
>
> Alex
>
> On Sun Jan 11 2015 at 11:20:55 AM Armin Rigo wrote:
>
>> Hi Laura,
>>
>> On 11 January 2015 at 19:57, Laura Creighton wrote:
>> > Can we talk the CPython developers into raising RunTimeError for
>> > concurrent modifications?
>>
>> No, we can't expect them to change that:
>> http://bugs.python.org/issue19414 shows they have no plan to have
>> well-defined behavior (either RuntimeError or well-defined results).
>>
>> So for us, the question is if it makes sense for us to break
>> compatibility with CPython 2.7 in this undocumented aspect by arguing
>> that CPython's sometimes bogus results shows it was never meant to
>> work at all (which is true), or if instead we should go the opposite
>> way and offer some well-defined results that generalizes the partially
>> working results of CPython (which would make people happy but is
>> harder to implement).
>>
>> Note that I would be fine if we can't find any existing program that
>> relies on this. Then we can decide to implement the RuntimeError
>> solution. If and when somebody files a PyPy bug report, we can argue
>> again.
>>
>>
>> A bientôt,
>>
>> Armin.
>> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun Jan 11 20:50:39 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 11 Jan 2015 21:50:39 +0200 Subject: [pypy-dev] Ordered dict in PyPy In-Reply-To: References: <201501111857.t0BIvD3x025766@fido.openend.se> Message-ID: btw, I'm also ok with reversed being implemented by simply making a copy On Sun, Jan 11, 2015 at 9:37 PM, Maciej Fijalkowski wrote: > I'm with Alex on that - raising RuntimeError is a good behavior when > "you're not supposed to do that" happens. I would go with 1) as > opposed to 2) > > On Sun, Jan 11, 2015 at 9:26 PM, Alex Gaynor wrote: >> IMO, it's clear that CPython intends this to be "undefined behavior", >> raising a RuntimeError is a perfectly acceptable undefined behavior IMO -- >> better than corrupting the data. >> >> For __eq__ and __reversed__ and popitem(last=False) we can just have >> functions in __pypy__ and call them from class OrderedDict(dict): in >> collections I think? >> >> Alex >> >> >> On Sun Jan 11 2015 at 11:20:55 AM Armin Rigo wrote: >>> >>> Hi Laura, >>> >>> On 11 January 2015 at 19:57, Laura Creighton wrote: >>> > Can we talk the CPython developers into raising RunTimeError for >>> > concurrent modifications? >>> >>> No, we can't expect them to change that: >>> http://bugs.python.org/issue19414 shows they have no plan to have >>> well-defined behavior (either RuntimeError or well-defined results). 
>>> >>> So for us, the question is if it makes sense for us to break >>> compatibility with CPython 2.7 in this undocumented aspect by arguing >>> that CPython's sometimes bogus results shows it was never meant to >>> work at all (which is true), or if instead we should go the opposite >>> way and offer some well-defined results that generalizes the partially >>> working results of CPython (which would make people happy but is >>> harder to implement). >>> >>> Note that I would be fine if we can't find any existing program that >>> relies on this. Then we can decide to implement the RuntimeError >>> solution. If and when somebody files a PyPy bug report, we can argue >>> again. >>> >>> >>> A bient?t, >>> >>> Armin. >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> From alex.gaynor at gmail.com Sun Jan 11 20:52:38 2015 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 11 Jan 2015 19:52:38 +0000 Subject: [pypy-dev] Ordered dict in PyPy References: <201501111857.t0BIvD3x025766@fido.openend.se> Message-ID: I think we should avoid doing extras copies, it creates weird scenarios where the performance is randomly worse on PyPy, which can be very hard to debug. Alex On Sun Jan 11 2015 at 11:51:00 AM Maciej Fijalkowski wrote: > btw, I'm also ok with reversed being implemented by simply making a copy > > On Sun, Jan 11, 2015 at 9:37 PM, Maciej Fijalkowski > wrote: > > I'm with Alex on that - raising RuntimeError is a good behavior when > > "you're not supposed to do that" happens. 
I would go with 1) as > > opposed to 2) > > > > On Sun, Jan 11, 2015 at 9:26 PM, Alex Gaynor > wrote: > >> IMO, it's clear that CPython intends this to be "undefined behavior", > >> raising a RuntimeError is a perfectly acceptable undefined behavior IMO > -- > >> better than corrupting the data. > >> > >> For __eq__ and __reversed__ and popitem(last=False) we can just have > >> functions in __pypy__ and call them from class OrderedDict(dict): in > >> collections I think? > >> > >> Alex > >> > >> > >> On Sun Jan 11 2015 at 11:20:55 AM Armin Rigo wrote: > >>> > >>> Hi Laura, > >>> > >>> On 11 January 2015 at 19:57, Laura Creighton wrote: > >>> > Can we talk the CPython developers into raising RunTimeError for > >>> > concurrent modifications? > >>> > >>> No, we can't expect them to change that: > >>> http://bugs.python.org/issue19414 shows they have no plan to have > >>> well-defined behavior (either RuntimeError or well-defined results). > >>> > >>> So for us, the question is if it makes sense for us to break > >>> compatibility with CPython 2.7 in this undocumented aspect by arguing > >>> that CPython's sometimes bogus results shows it was never meant to > >>> work at all (which is true), or if instead we should go the opposite > >>> way and offer some well-defined results that generalizes the partially > >>> working results of CPython (which would make people happy but is > >>> harder to implement). > >>> > >>> Note that I would be fine if we can't find any existing program that > >>> relies on this. Then we can decide to implement the RuntimeError > >>> solution. If and when somebody files a PyPy bug report, we can argue > >>> again. > >>> > >>> > >>> A bient?t, > >>> > >>> Armin. 
> >>> _______________________________________________ > >>> pypy-dev mailing list > >>> pypy-dev at python.org > >>> https://mail.python.org/mailman/listinfo/pypy-dev > >> > >> > >> _______________________________________________ > >> pypy-dev mailing list > >> pypy-dev at python.org > >> https://mail.python.org/mailman/listinfo/pypy-dev > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yorik.sar at gmail.com Sun Jan 11 23:38:36 2015 From: yorik.sar at gmail.com (Yuriy Taraday) Date: Mon, 12 Jan 2015 02:38:36 +0400 Subject: [pypy-dev] can_enter_jit - what is it and did something change? Message-ID: Hello. I'm poking around lang-js [0] looking if I can make it better (just for fun). First issue I've stumbled upon is mysterious "Bad can_enter_jit() placement" error. It looks like when we do a jump in user code [1], we should do "can_enter_jit" with new instruction pointer, not the old one. I've changed it (s/=pc/=new_pc/) and everything seems to work fine. It looks like in PyPy it works the same way [2] My questions are: - Did something change wrt can_enter_jit since that code had been written? I mean besides check that raises that error [3]. - Is it correct to not have any _user_ code run between can_enter_jit and jit_merge_point calls but have some _interpreter_ code there? Error message says that there should be _any_ code between them which seems hardly possible. - Why do we need to place this hint in the beginning of the next iteration of the loop but not at the end of current iteration? It would seem logical to say "you might want to compile from here (start of the loop, one of jit_merge_points) and here (end of the loop, can_enter_jit hing) if you like". - It looks like another piece of old RPython code - tutorial (BF implementation [4]) works fine and gets JIT speedup even without can_enter_jit calls. How does this work? 
- Would it be more beneficial to analyse user code and provide can_enter_jit hint only for real loops instead of all backward jumps? I tried to google around can_enter_jit but it leads mostly to PyPy commit history so I hope to find answers here. [0] https://bitbucket.org/pypy/lang-js [1] https://bitbucket.org/pypy/lang-js/src/e2275b2/js/jscode.py#cl-240 [2] https://bitbucket.org/pypy/pypy/src/d0f031c/pypy/module/pypyjit/interp_jit.py#cl-89 [3] https://bitbucket.org/pypy/pypy/commits/b538c2f [4] https://bitbucket.org/brownan/pypy-tutorial -- Kind regards, Yuriy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sun Jan 11 23:50:18 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 11 Jan 2015 23:50:18 +0100 Subject: [pypy-dev] can_enter_jit - what is it and did something change? In-Reply-To: References: Message-ID: Hi Yuriy, On 11 January 2015 at 23:38, Yuriy Taraday wrote: > - Did something change wrt can_enter_jit since that code had been written? I > mean besides check that raises that error [3]. No. There are probably cases where that check fails but the code was still correct; I bet it was the case in 'lang-js' (or, actually, I *hope* it was correct in the first place!). You're correct in killing 'can_enter_jit' completely. As you found out, it doesn't have any impact on the performance of JIT-generated code; it only impacts the performance of JITting itself. Nowadays it is considered an "advanced hint" only. A bient?t, Armin. From arigo at tunes.org Sun Jan 11 23:57:17 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 11 Jan 2015 23:57:17 +0100 Subject: [pypy-dev] Ordered dict in PyPy In-Reply-To: References: <201501111857.t0BIvD3x025766@fido.openend.se> Message-ID: Hi all, Thanks for the feedback. It looks like the general opinion is to raise RuntimeError when detecting changes. I'll do that then. 
About '__reversed__', I suppose it should be implemented the same way, with an RPython-provided iterator which raises RuntimeError too. A bient?t, Armin. From yorik.sar at gmail.com Mon Jan 12 00:06:13 2015 From: yorik.sar at gmail.com (Yuriy Taraday) Date: Mon, 12 Jan 2015 03:06:13 +0400 Subject: [pypy-dev] can_enter_jit - what is it and did something change? In-Reply-To: References: Message-ID: Thanks for quick reply! On Mon, Jan 12, 2015 at 1:50 AM, Armin Rigo wrote: > Hi Yuriy, > > On 11 January 2015 at 23:38, Yuriy Taraday wrote: > > - Did something change wrt can_enter_jit since that code had been > written? I > > mean besides check that raises that error [3]. > > No. There are probably cases where that check fails but the code was > still correct; I bet it was the case in 'lang-js' (or, actually, I > *hope* it was correct in the first place!). > > You're correct in killing 'can_enter_jit' completely. As you found > out, it doesn't have any impact on the performance of JIT-generated > code; it only impacts the performance of JITting itself. Nowadays it > is considered an "advanced hint" only. > Ok, I will. But I'm still curious about other questions though. Can you point me in the direction where can I dig for answers? -- Kind regards, Yuriy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Mon Jan 12 11:09:30 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 12 Jan 2015 11:09:30 +0100 Subject: [pypy-dev] can_enter_jit - what is it and did something change? In-Reply-To: References: Message-ID: Hi Yuriy, See the old explanation about our JIT here: rpython/doc/jit/pyjitpl5.rst. What changed from this old explanation is that if no can_enter_jit is found in the source code, one is automatically inserted just before the jit_merge_point. About your other questions: > - Is it correct to not have any _user_ code run between can_enter_jit and jit_merge_point calls but have some _interpreter_ code there? 
> Error message says that there should be _any_ code between them which
> seems hardly possible.

There should not be any code that doesn't fully constant-fold away.
You can have some function returns, and maybe close a loop that says
"while True". You cannot close a loop that says "while more
complicated condition". If this condition seems unclear to you, you
can also restrict it to: you should compute a flag,
"needs_can_enter_jit", during one bytecode, defaulting to False. Then
your interpreter looks like:

    needs_can_enter_jit = False
    while some_condition:
        if needs_can_enter_jit:
            jitdriver.can_enter_jit(...)
        jitdriver.jit_merge_point(...)
        needs_can_enter_jit = False
        ...   # sometimes set needs_can_enter_jit to True

where the two sets of arguments must be exactly identical (and not
include 'needs_can_enter_jit').

> - Why do we need to place this hint in the beginning of the next
> iteration of the loop but not at the end of current iteration? It
> would seem logical to say "you might want to compile from here (start
> of the loop, one of jit_merge_points) and here (end of the loop,
> can_enter_jit hing) if you like".

I'm not sure to understand the distinction. The end of one iteration
of the loop should be exactly the same as the start of the next
iteration, unless there is code between the two -- which is disallowed
by the previous rule.

> - Would it be more beneficial to analyse user code and provide
> can_enter_jit hint only for real loops instead of all backward jumps?

It's a bit pointless. As I said, can_enter_jit is only an
optimization: you make the JIT tracing a little bit faster by not
calling it repeatedly before every single jit_merge_point. You could
spend efforts and make it even less common using bytecode analysis,
but you're hitting diminishing returns very quickly imho.

A bientôt,

Armin.
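The interpreter sketch above can be fleshed out into a runnable, plain-Python simulation of the hint placement. The StubJitDriver below only records calls and stands in for the real driver in rpython.rlib.jit, and the three-opcode bytecode format is invented purely for illustration:

```python
# Plain-Python simulation of the hint placement described above.  The
# stub driver only records events; the real JitDriver lives in
# rpython.rlib.jit and is assumed, not imported.
class StubJitDriver:
    def __init__(self):
        self.events = []

    def can_enter_jit(self, pc):
        self.events.append(("can_enter_jit", pc))

    def jit_merge_point(self, pc):
        self.events.append(("jit_merge_point", pc))

INCR, JUMP_BACK, HALT = range(3)

def interpret(bytecode, driver):
    pc = 0
    counter = 0
    needs_can_enter_jit = False   # set only when a backward jump closed a loop
    while pc < len(bytecode):
        if needs_can_enter_jit:
            driver.can_enter_jit(pc=pc)
        driver.jit_merge_point(pc=pc)   # identical argument set, as required
        needs_can_enter_jit = False
        op = bytecode[pc]
        if op == INCR:
            counter += 1
            pc += 1
        elif op == JUMP_BACK and counter < 3:
            pc = 0                      # user-level backward jump
            needs_can_enter_jit = True
        else:
            pc += 1
    return counter, driver.events

counter, events = interpret([INCR, JUMP_BACK, HALT], StubJitDriver())
# can_enter_jit fires only at the loop header (pc=0), each time
# immediately followed by jit_merge_point with the same pc
```

Deleting the can_enter_jit call entirely, as suggested above, only removes the recorded hint events; the merge points, and hence the placement of JIT-compiled loops, stay the same.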
From yorik.sar at gmail.com Mon Jan 12 15:34:29 2015 From: yorik.sar at gmail.com (Yuriy Taraday) Date: Mon, 12 Jan 2015 18:34:29 +0400 Subject: [pypy-dev] can_enter_jit - what is it and did something change? In-Reply-To: References: Message-ID: On Mon, Jan 12, 2015 at 1:09 PM, Armin Rigo wrote: > See the old explanation about our JIT here: > rpython/doc/jit/pyjitpl5.rst. What changed from this old explanation > is that if no can_enter_jit is found in the source code, one is > automatically inserted just before the jit_merge_point. > Yes, I think I understand it now. I guess that doc needs some update to include this as well as the fact that there should be no code between these hints and both hints should have identical argument set. About your other questions: > > > - Is it correct to not have any _user_ code run between can_enter_jit > and jit_merge_point calls but have some _interpreter_ code there? Error > message says that there should be _any_ code between them which seems > hardly possible. > > There should not be any code that doesn't fully constant-fold away. > You can have some function returns, and maybe close a loop that says > "while True". You cannot close a loop that says "while more > complicated condition". > I see only simple assignment "pc = new_pc" between them in lang-js. I guess it's simple enough. Looking at PyPy I've noticed that there still can be some code between these hints if we're unlucky enough to catch e.g. KeyboardInterrupt after can_enter_jit is called but before we leave handle_bytecode. Or does exception handling in RPython differs from Python? There're just some return's there but in CPython KeyboardInterrupt can be raised from any bytecode. By the way, can't this condition ("no complex code between hints") be enforced at run-time (during tracing)? It seems that the only check verifies only arguments passed to hints but not code itself. I'm not sure to understand the distinction. > Yes, now I see that was nonsense. 
> - Would it be more beneficial to analyse user code and provide > can_enter_jit hint only for real loops instead of all backward jumps? > > It's a bit pointless. As I said, can_enter_jit is only an > optimization: you make the JIT tracing a little bit faster by not > calling it repeatedly before every single jit_merge_point. You could > spend efforts and make it even less common using bytecode analysis, > but you're hitting diminishing returns very quickly imho. > OK, I won't go there until tracing become bottleneck if that ever happens. I guess having explicit can_enter_jit hint should still be useful. At least I don't feel comfortable having JIT setting counts for every single bytecode without it. That should be a waste of memory and cycles. -- Kind regards, Yuriy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Thu Jan 15 16:03:58 2015 From: arigo at tunes.org (Armin Rigo) Date: Thu, 15 Jan 2015 16:03:58 +0100 Subject: [pypy-dev] Next sprint in Leysin, Switzerland (20-28 Feb 2015) Message-ID: Hi all, Here is the next sprint announcement! ============================================================ PyPy Leysin Winter Sprint (20-28th February 2015) ============================================================ The next PyPy sprint will be in Leysin, Switzerland, for the tenth time. This is a fully public sprint: newcomers and topics other than those proposed below are welcome. ------------------------------ Goals and topics of the sprint ------------------------------ The details depend on who is here and ready to work. 
We might touch topics such as:

* cleaning up the optimization step in the JIT, change the register
  allocation done by the JIT's backend, or improvements to the warm-up
  time

* STM (Software Transaction Memory), notably: try to come up with
  benchmarks, and measure them carefully in order to test and improve
  the conflict reporting tools, and more generally to figure out how
  practical it is in large projects to avoid conflicts

* vmprof - a statistical profiler for CPython and PyPy work, including
  making it more user friendly.

* Py3k (Python 3.x support), NumPyPy (the numpy module)

* And as usual, the main side goal is to have fun in winter sports :-)
  We can take a day off for ski.

-----------
Exact times
-----------

For a change, and as an attempt to simplify things, I specified the
dates as 20-28 February 2015, where 20 and 28 are travel days. We will
work full days between the 21 and the 27. You are of course allowed to
show up for a part of that time only, too.

------------------------
Location & Accommodation
------------------------

Leysin, Switzerland, "same place as before". Let me refresh your
memory: both the sprint venue and the lodging will be in a very
spacious pair of chalets built specifically for bed & breakfast:
http://www.ermina.ch/. The place has a good ADSL Internet connection
with wireless installed.

You can of course arrange your own lodging anywhere (as long as you
are in Leysin, you cannot be more than a 15 minutes walk away from the
sprint venue), but I definitely recommend lodging there too -- you
won't find a better view anywhere else (though you probably won't get
much worse ones easily, either :-)

Please *confirm* that you are coming so that we can adjust the
reservations as appropriate. In the past, the rates were around 60 CHF
a night all included in 2-person rooms, with breakfast. Now, the rooms
available are either single-person (or couple), or rooms for 3
persons. The latter choice is recommended and should be under 60 CHF
per person.
Please register by Mercurial::

  https://bitbucket.org/pypy/extradoc/
  https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/leysin-winter-2015

or on the pypy-dev mailing list if you do not yet have check-in rights:

  http://mail.python.org/mailman/listinfo/pypy-dev

You need a Swiss-to-(insert country here) power adapter. There will be
some Swiss-to-EU adapters around, and at least one EU-format power
strip.

--------------------------------
Armin Rigo

From dymantex at yahoo.com Mon Jan 19 03:26:31 2015
From: dymantex at yahoo.com (Be like AVG, Zone Alarm and Microsoft.)
Date: Mon, 19 Jan 2015 03:26:31 +0100
Subject: [pypy-dev] Send desktop notifications to everyone that
	downloads your software.
Message-ID:

Be like AVG, Zone Alarm and Microsoft and send Desktop Notifications
to everyone that downloads your software.

AVG, Zone Alarm and Microsoft and many other major software sellers
now send full colour desktop notifications to everyone that downloads
their software. These notifications appear directly onto their
desktops informing them about upgrades, new products and server
notifications and more. These companies can communicate with everyone
that downloads their software. You have probably seen them and
wondered how they did that. As a result they have found that their
sales have increased and they have a high customer satisfaction rate.
You can now do the same!

Using our Dymantex desktop messaging system that is even better than
the ones used by the major companies you can send full colour
interactive desktop notifications so that you too can offer upgrades,
tell them about special offers and give technical support to everyone
that downloads your software!

We offer you a Free, no obligation trial of our Dymantex Desktop
Notification system. Please Press Here for more information and we
send it to you.

Dymantex The Desktop Notification System

This is a B2B comminication.
If received in error please accept our apologise -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Mon Jan 19 21:37:40 2015 From: matti.picus at gmail.com (Matti Picus) Date: Mon, 19 Jan 2015 22:37:40 +0200 Subject: [pypy-dev] pypy2.5 with stdlib-2.7.9? Message-ID: <54BD6B14.5090101@gmail.com> I would like to start a release cycle of pypy 2.5.0 which seems to be quite a jump from 2.4 in terms of performance. The major blocker for me is stdlib-2.7.9, especially the improved ssl support. Could we get a show of hands for: - yes I will make an effort to help finish stdlib-2.7.9 - nah, just give up and release with stdlib-2.7.8 It would be nice to get the version out before the sprint Matti If there are other blocker issues and/or branches, now is the time to mention them From rymg19 at gmail.com Mon Jan 19 21:39:15 2015 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Mon, 19 Jan 2015 14:39:15 -0600 Subject: [pypy-dev] pypy2.5 with stdlib-2.7.9? In-Reply-To: <54BD6B14.5090101@gmail.com> References: <54BD6B14.5090101@gmail.com> Message-ID: I don't have a lot of time to help...but I do enjoy contributing to various OSS projects. Is there a file somewhere that says what the things that need to be finished for stdlib 2.7.9 are? Also, I'm assuming PR's are the preferred method of contributing. On Mon, Jan 19, 2015 at 2:37 PM, Matti Picus wrote: > I would like to start a release cycle of pypy 2.5.0 which seems to be > quite a jump from 2.4 in terms of performance. The major blocker for me is > stdlib-2.7.9, especially the improved ssl support. 
Could we get a show of > hands for: > - yes I will make an effort to help finish stdlib-2.7.9 > - nah, just give up and release with stdlib-2.7.8 > > It would be nice to get the version out before the sprint > > Matti > If there are other blocker issues and/or branches, now is the time to > mention them > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- Ryan If anybody ever asks me why I prefer C++ to C, my answer will be simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was nul-terminated." Personal reality distortion fields are immune to contradictory evidence. - srean Check out my website: http://kirbyfan64.github.io/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Mon Jan 19 21:53:16 2015 From: matti.picus at gmail.com (Matti Picus) Date: Mon, 19 Jan 2015 22:53:16 +0200 Subject: [pypy-dev] pypy2.5 with stdlib-2.7.9? In-Reply-To: References: <54BD6B14.5090101@gmail.com> Message-ID: <54BD6EBC.1030208@gmail.com> There are buildbot test failures http://buildbot.pypy.org/summary?branch=stdlib-2.7.9 Once the buildbots are green we are pretty confidant we completed the task. For more info join us on #pypy on IRC or ask questions here. Matti On 19/01/2015 10:39 PM, Ryan Gonzalez wrote: > I don't have a lot of time to help...but I do enjoy contributing to > various OSS projects. > > Is there a file somewhere that says what the things that need to be > finished for stdlib 2.7.9 are? > > Also, I'm assuming PR's are the preferred method of contributing. > > On Mon, Jan 19, 2015 at 2:37 PM, Matti Picus > wrote: > > I would like to start a release cycle of pypy 2.5.0 which seems to > be quite a jump from 2.4 in terms of performance. The major > blocker for me is stdlib-2.7.9, especially the improved ssl > support. 
Could we get a show of hands for: > - yes I will make an effort to help finish stdlib-2.7.9 > - nah, just give up and release with stdlib-2.7.8 > > It would be nice to get the version out before the sprint > > Matti > If there are other blocker issues and/or branches, now is the time > to mention them > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > > > > -- > Ryan > If anybody ever asks me why I prefer C++ to C, my answer will be > simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't > think that was nul-terminated." > Personal reality distortion fields are immune to contradictory > evidence. - srean > Check out my website: http://kirbyfan64.github.io/ From mike.kaplinskiy at gmail.com Tue Jan 20 05:26:17 2015 From: mike.kaplinskiy at gmail.com (Mike Kaplinskiy) Date: Mon, 19 Jan 2015 23:26:17 -0500 Subject: [pypy-dev] RFC: Copy-on-write list slices Message-ID: Hey folks, https://bitbucket.org/mikekap/pypy/commits/b774ae0be11b2012852a175f4bae44841343f067 has an implementation of list slicing that copies the data on write. (The third idea from http://doc.pypy.org/en/latest/project-ideas.html .) It's a first pass (and also my first time working on the pypy codebase), so I wanted to solicit some feedback. I'm curious if this was even the right direction and if I'm actually breaking/slowing something down without realizing it. Also would anyone happen to know some representative stress/performance tests I could run? I ran some simple tests myself (some things got slightly faster), but I doubt that's enough :) Thanks, Mike. (Aside: there is a pull request @ https://bitbucket.org/pypy/pypy/pull-request/282/add-a-copy-on-write-slice-list-strategy/diff for this commit, but I clearly messed something up with hg - the diff is from an earlier copy and bitbucket doesn't seem to want to pick it up.) 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Jan 20 08:30:30 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 20 Jan 2015 09:30:30 +0200 Subject: [pypy-dev] pypy2.5 with stdlib-2.7.9? In-Reply-To: <54BD6EBC.1030208@gmail.com> References: <54BD6B14.5090101@gmail.com> <54BD6EBC.1030208@gmail.com> Message-ID: I'm generally opposed to releases waiting on branches since we can always do another release. On Mon, Jan 19, 2015 at 10:53 PM, Matti Picus wrote: > There are buildbot test failures > http://buildbot.pypy.org/summary?branch=stdlib-2.7.9 > Once the buildbots are green we are pretty confidant we completed the task. > For more info join us on #pypy on IRC or ask questions here. > Matti > > On 19/01/2015 10:39 PM, Ryan Gonzalez wrote: >> >> I don't have a lot of time to help...but I do enjoy contributing to >> various OSS projects. >> >> Is there a file somewhere that says what the things that need to be >> finished for stdlib 2.7.9 are? >> >> Also, I'm assuming PR's are the preferred method of contributing. >> >> On Mon, Jan 19, 2015 at 2:37 PM, Matti Picus > > wrote: >> >> I would like to start a release cycle of pypy 2.5.0 which seems to >> be quite a jump from 2.4 in terms of performance. The major >> blocker for me is stdlib-2.7.9, especially the improved ssl >> support. Could we get a show of hands for: >> - yes I will make an effort to help finish stdlib-2.7.9 >> - nah, just give up and release with stdlib-2.7.8 >> >> It would be nice to get the version out before the sprint >> >> Matti >> If there are other blocker issues and/or branches, now is the time >> to mention them >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> >> >> >> >> -- >> Ryan >> If anybody ever asks me why I prefer C++ to C, my answer will be simple: >> "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. 
Wait, I don't think that was >> nul-terminated." >> Personal reality distortion fields are immune to contradictory evidence. - >> srean >> Check out my website: http://kirbyfan64.github.io/ > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From fijall at gmail.com Tue Jan 20 10:38:16 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 20 Jan 2015 11:38:16 +0200 Subject: [pypy-dev] RFC: Copy-on-write list slices In-Reply-To: References: Message-ID: Hi Mike, A good test suite is the PyPy benchmark suite (https://bitbucket.org/pypy/benchmarks) which is relatively comprehensive and we run it nightly. If you run into trouble running it, please pop in to #pypy on freenode and we can help :-) On Tue, Jan 20, 2015 at 6:26 AM, Mike Kaplinskiy wrote: > Hey folks, > > https://bitbucket.org/mikekap/pypy/commits/b774ae0be11b2012852a175f4bae44841343f067 > has an implementation of list slicing that copies the data on write. (The > third idea from http://doc.pypy.org/en/latest/project-ideas.html .) It's a > first pass (and also my first time working on the pypy codebase), so I > wanted to solicit some feedback. I'm curious if this was even the right > direction and if I'm actually breaking/slowing something down without > realizing it. > > Also would anyone happen to know some representative stress/performance > tests I could run? I ran some simple tests myself (some things got slightly > faster), but I doubt that's enough :) > > Thanks, > Mike. > > (Aside: there is a pull request @ > https://bitbucket.org/pypy/pypy/pull-request/282/add-a-copy-on-write-slice-list-strategy/diff > for this commit, but I clearly messed something up with hg - the diff is > from an earlier copy and bitbucket doesn't seem to want to pick it up.) 
> > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From omer.drow at gmail.com Tue Jan 20 14:14:50 2015 From: omer.drow at gmail.com (Omer Katz) Date: Tue, 20 Jan 2015 15:14:50 +0200 Subject: [pypy-dev] cppyy questions Message-ID: I'm trying to use protobuf with PyPy and I've been quite successful doing so with cppyy. I generated the protobuf in C++ and used reflex to generate the bindings. I've encountered some problems that I don't know how to deal with and the documentation doesn't describe what you can do to resolve them. I discovered that if you don't specify --deep when generating the reflex bindings and you try to pass a string to the C++ side you get a segfault. I'm guessing that's a bug. I can't catch exceptions that are being raised from C++ (it's also undocumented). I ensured that protobuf's FatalException has reflex bindings but the process crashes on an exception. e.SerializeAsString() [libprotobuf FATAL google/protobuf/message_lite.cc:273] CHECK failed: IsInitialized(): Can't serialize message of type "MyProtobufType" because it is missing required fields: header terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: IsInitialized(): Can't serialize message of type "MyProtobufType" because it is missing required fields: header Aborted (core dumped) How can I catch that exception? The documentation is unclear how you can pass a pointer to a Python variable e.g.: str = "" e.SerializeToString(str) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 e.SerializeToString(str) TypeError: none of the 5 overloaded methods succeeded. 
Full details: bool google::protobuf::MessageLite::SerializeToString(std::string*) => TypeError: cannot pass str as basic_string bool google::protobuf::MessageLite::SerializeToString(std::string*) => TypeError: cannot pass str as basic_string bool google::protobuf::MessageLite::SerializeToString(std::string*) => TypeError: cannot pass str as basic_string bool google::protobuf::MessageLite::SerializeToString(std::string*) => TypeError: cannot pass str as basic_string bool google::protobuf::MessageLite::SerializeToString(std::string*) => TypeError: cannot pass str as basic_string Best Regards, Omer Katz. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Tue Jan 20 14:40:45 2015 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 20 Jan 2015 14:40:45 +0100 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: Hi, 2015-01-20 14:14 GMT+01:00 Omer Katz : > The documentation is unclear how you can pass a pointer to a Python > variable e.g.: > str = "" > > e.SerializeToString(str) > Message::SerializeToString() updates its argument in-place, but Python strings are not mutable. You should allocate a std::string from Python code, and pass it to the function. Maybe something like: s = cppyy.gbl.std.string() e.SerializeToString(s) print s > --------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > in () > ----> 1 e.SerializeToString(str) > > TypeError: none of the 5 overloaded methods succeeded. 
Full details: > bool google::protobuf::MessageLite::SerializeToString(std::string*) => > TypeError: cannot pass str as basic_string > bool google::protobuf::MessageLite::SerializeToString(std::string*) => > TypeError: cannot pass str as basic_string > bool google::protobuf::MessageLite::SerializeToString(std::string*) => > TypeError: cannot pass str as basic_string > bool google::protobuf::MessageLite::SerializeToString(std::string*) => > TypeError: cannot pass str as basic_string > bool google::protobuf::MessageLite::SerializeToString(std::string*) => > TypeError: cannot pass str as basic_string > > Best Regards, > Omer Katz. > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbaldridge at gmail.com Tue Jan 20 15:59:37 2015 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Tue, 20 Jan 2015 07:59:37 -0700 Subject: [pypy-dev] Sudden failures during compile-c Message-ID: Recently my builds on linux with --opt=jit have started failing with the following error: [translation:info] Error: [translation:info] File "/home/travis/build/pixie-lang/externals/pypy/rpython/translator/goal/translate.py", line 316, in main [translation:info] drv.proceed(goals) [translation:info] File "/home/travis/build/pixie-lang/externals/pypy/rpython/translator/driver.py", line 539, in proceed [translation:info] return self._execute(goals, task_skip = self._maybe_skip()) [translation:info] File "/home/travis/build/pixie-lang/externals/pypy/rpython/translator/tool/taskengine.py", line 114, in _execute [translation:info] res = self._do(goal, taskcallable, *args, **kwds) [translation:info] File "/home/travis/build/pixie-lang/externals/pypy/rpython/translator/driver.py", line 276, in _do [translation:info] res = func() [translation:info] File 
"/home/travis/build/pixie-lang/externals/pypy/rpython/translator/driver.py", line 505, in task_compile_c [translation:info] cbuilder.compile(**kwds) [translation:info] File "/home/travis/build/pixie-lang/externals/pypy/rpython/translator/c/genc.py", line 375, in compile [translation:info] extra_opts) [translation:info] File "/home/travis/build/pixie-lang/externals/pypy/rpython/translator/platform/posix.py", line 198, in execute_makefile [translation:info] self._handle_error(returncode, stdout, stderr, path.join('make')) [translation:info] File "/home/travis/build/pixie-lang/externals/pypy/rpython/translator/platform/__init__.py", line 151, in _handle_error [translation:info] raise CompilationError(stdout, stderr) [translation:ERROR] CompilationError: CompilationError(err=""" [translation:ERROR] pixie_vm_threads.c: In function ‘pypy_g_do_yield_thread’: [translation:ERROR] pixie_vm_threads.c:475:21: error: ‘pypy_g_do_yield_thread_reload’ undeclared (first use in this function) [translation:ERROR] pixie_vm_threads.c:475:21: note: each undeclared identifier is reported only once for each function it appears in [translation:ERROR] make[1]: *** [pixie_vm_threads.gcmap] Error 1 [translation:ERROR] """) [translation] start debugger... > /home/travis/build/pixie-lang/externals/pypy/rpython/translator/platform/__init__.py(151)_handle_error() -> raise CompilationError(stdout, stderr) (Pdb+) I find this odd as it works just fine without the JIT, and compiles fine on OS X. The code in question is basically a copy-and-paste from PyPy's code: https://github.com/pixie-lang/pixie/blob/master/pixie/vm/threads.py#L90 Any ideas why this would suddenly have started failing recently? I normally build against a pretty recent version of PyPy master, so did something change in the pypy source? Thanks again for any help, Timothy -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Tue Jan 20 16:04:25 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 20 Jan 2015 16:04:25 +0100 Subject: [pypy-dev] Sudden failures during compile-c In-Reply-To: References: Message-ID: Hi Timothy, On 20 January 2015 at 15:59, Timothy Baldridge wrote: > I find this odd as it works just fine without the JIT, and compiles fine on > OS X. The code in question is basically a copy-and-paste from PyPy's code: > https://github.com/pixie-lang/pixie/blob/master/pixie/vm/threads.py#L90 Maybe related: there is a copy mistake in after_external_call(). It should not call the get/set_saved_errno() functions. A bientôt, Armin. From arigo at tunes.org Tue Jan 20 16:05:34 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 20 Jan 2015 16:05:34 +0100 Subject: [pypy-dev] Sudden failures during compile-c In-Reply-To: References: Message-ID: Re-hi, On 20 January 2015 at 16:04, Armin Rigo wrote: > Maybe related: there is a copy mistake in after_external_call(). It > should not call the get/set_saved_errno() functions. Also, did you mean "_cleanup_()" instead of "__cleanup__()"? Armin From omer.drow at gmail.com Tue Jan 20 16:07:12 2015 From: omer.drow at gmail.com (Omer Katz) Date: Tue, 20 Jan 2015 17:07:12 +0200 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: That's correct, but can't we handle those cases in cppyy? We should provide a native Python interface whenever it's possible. 2015-01-20 15:40 GMT+02:00 Amaury Forgeot d'Arc : > Hi, > > 2015-01-20 14:14 GMT+01:00 Omer Katz : > >> The documentation is unclear how you can pass a pointer to a Python >> variable e.g.: >> str = "" >> >> e.SerializeToString(str) >> > > Message::SerializeToString() updates its argument in-place, but Python > strings are not mutable. > You should allocate a std::string from Python code, and pass it to the > function.
> Maybe something like: > > s = cppyy.gbl.std.string() > e.SerializeToString(s) > print s > > > > >> >> --------------------------------------------------------------------------- >> TypeError Traceback (most recent call >> last) >> in () >> ----> 1 e.SerializeToString(str) >> >> TypeError: none of the 5 overloaded methods succeeded. Full details: >> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >> TypeError: cannot pass str as basic_string >> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >> TypeError: cannot pass str as basic_string >> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >> TypeError: cannot pass str as basic_string >> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >> TypeError: cannot pass str as basic_string >> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >> TypeError: cannot pass str as basic_string >> >> Best Regards, >> Omer Katz. >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> >> > > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Tue Jan 20 16:49:24 2015 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 20 Jan 2015 16:49:24 +0100 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: 2015-01-20 16:07 GMT+01:00 Omer Katz : > That's correct but can't we handle those cases in cppyy? > We should provide a native Python interface whenever it's possible. > It's not possible to take a Python string as a mutable reference. Here are some options that cppyy could implement: - Use bytearray, which is mutable. 
a = bytearray() e.SerializeToString(a) s = str(a) - Pass a list, and expect the function to append a (Python) string l = [] e.SerializeToString(l) s = l[0] - Change the signature of the function so that it *returns* the string (like swig's OUTPUT ) result, s = e.SerializeToString() I don't know which method is the most convenient with cppyy. > > 2015-01-20 15:40 GMT+02:00 Amaury Forgeot d'Arc : > >> Hi, >> >> 2015-01-20 14:14 GMT+01:00 Omer Katz : >> >>> The documentation is unclear how you can pass a pointer to a Python >>> variable e.g.: >>> str = "" >>> >>> e.SerializeToString(str) >>> >> >> Message::SerializeToString() updates its argument in-place, but Python >> strings are not mutable. >> You should allocate a std::string from Python code, and pass it to the >> function. >> Maybe something like: >> >> s = cppyy.gbl.std.string() >> e.SerializeToString(s) >> print s >> >> >> >> >>> >>> --------------------------------------------------------------------------- >>> TypeError Traceback (most recent call >>> last) >>> in () >>> ----> 1 e.SerializeToString(str) >>> >>> TypeError: none of the 5 overloaded methods succeeded. Full details: >>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>> TypeError: cannot pass str as basic_string >>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>> TypeError: cannot pass str as basic_string >>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>> TypeError: cannot pass str as basic_string >>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>> TypeError: cannot pass str as basic_string >>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>> TypeError: cannot pass str as basic_string >>> >>> Best Regards, >>> Omer Katz. 
>>> >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >>> >>> >> >> >> -- >> Amaury Forgeot d'Arc >> > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From omer.drow at gmail.com Tue Jan 20 17:00:30 2015 From: omer.drow at gmail.com (Omer Katz) Date: Tue, 20 Jan 2015 18:00:30 +0200 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: I tried to pass a bytearray and that's also not currently supported. Any clue about what should I do with the exception? It definitely shouldn't crash the process. I need it to raise a Python exception instead. On 20 Jan 2015 17:49, "Amaury Forgeot d'Arc" wrote: > 2015-01-20 16:07 GMT+01:00 Omer Katz : > >> That's correct but can't we handle those cases in cppyy? >> We should provide a native Python interface whenever it's possible. >> > > It's not possible to take a Python string as mutable reference. > > Here are some options that cppyy could implement: > > - Use bytearray, which is mutable. > a = bytearray() > e.SerializeToString(a) > s = str(a) > > - Pass a list, and expect the function to append a (python) string > l = [] > e.SerializeToString(s) > s = l[0] > > - Change the signature of the function so that it *returns* the string > (like swig's OUTPUT > ) > result, s = e.SerializeToString() > > I don't know which method is the most convenient with cppyy. > > > >> >> 2015-01-20 15:40 GMT+02:00 Amaury Forgeot d'Arc : >> >>> Hi, >>> >>> 2015-01-20 14:14 GMT+01:00 Omer Katz : >>> >>>> The documentation is unclear how you can pass a pointer to a Python >>>> variable e.g.: >>>> str = "" >>>> >>>> e.SerializeToString(str) >>>> >>> >>> Message::SerializeToString() updates its argument in-place, but Python >>> strings are not mutable. >>> You should allocate a std::string from Python code, and pass it to the >>> function. 
>>> Maybe something like: >>> >>> s = cppyy.gbl.std.string() >>> e.SerializeToString(s) >>> print s >>> >>> >>> >>> >>>> >>>> --------------------------------------------------------------------------- >>>> TypeError Traceback (most recent call >>>> last) >>>> in () >>>> ----> 1 e.SerializeToString(str) >>>> >>>> TypeError: none of the 5 overloaded methods succeeded. Full details: >>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>>> TypeError: cannot pass str as basic_string >>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>>> TypeError: cannot pass str as basic_string >>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>>> TypeError: cannot pass str as basic_string >>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>>> TypeError: cannot pass str as basic_string >>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) => >>>> TypeError: cannot pass str as basic_string >>>> >>>> Best Regards, >>>> Omer Katz. >>>> >>>> _______________________________________________ >>>> pypy-dev mailing list >>>> pypy-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pypy-dev >>>> >>>> >>> >>> >>> -- >>> Amaury Forgeot d'Arc >>> >> > > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Tue Jan 20 17:18:17 2015 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 20 Jan 2015 17:18:17 +0100 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: 2015-01-20 17:00 GMT+01:00 Omer Katz : > I tried to pass a bytearray and that's also not currently supported. > Any clue about what should I do with the exception? It definitely > shouldn't crash the process. I need it to raise a python exception instead. > The only way to prevent a crash is to add a "catch" block somehow in C++, and I don't see anything like this in cppyy. 
This said, it's probably a bad idea to continue something after what the library calls a "FatalError"... Better add a check like "if e.IsInitialized()" before calling SerializeToString. > On 20 Jan 2015 17:49, "Amaury Forgeot d'Arc" > wrote: > > 2015-01-20 16:07 GMT+01:00 Omer Katz : >> >>> That's correct but can't we handle those cases in cppyy? >>> We should provide a native Python interface whenever it's possible. >>> >> >> It's not possible to take a Python string as mutable reference. >> >> Here are some options that cppyy could implement: >> >> - Use bytearray, which is mutable. >> a = bytearray() >> e.SerializeToString(a) >> s = str(a) >> >> - Pass a list, and expect the function to append a (python) string >> l = [] >> e.SerializeToString(s) >> s = l[0] >> >> - Change the signature of the function so that it *returns* the string >> (like swig's OUTPUT >> ) >> result, s = e.SerializeToString() >> >> I don't know which method is the most convenient with cppyy. >> >> >> >>> >>> 2015-01-20 15:40 GMT+02:00 Amaury Forgeot d'Arc : >>> >>>> Hi, >>>> >>>> 2015-01-20 14:14 GMT+01:00 Omer Katz : >>>> >>>>> The documentation is unclear how you can pass a pointer to a Python >>>>> variable e.g.: >>>>> str = "" >>>>> >>>>> e.SerializeToString(str) >>>>> >>>> >>>> Message::SerializeToString() updates its argument in-place, but Python >>>> strings are not mutable. >>>> You should allocate a std::string from Python code, and pass it to the >>>> function. >>>> Maybe something like: >>>> >>>> s = cppyy.gbl.std.string() >>>> e.SerializeToString(s) >>>> print s >>>> >>>> >>>> >>>> >>>>> >>>>> --------------------------------------------------------------------------- >>>>> TypeError Traceback (most recent call >>>>> last) >>>>> in () >>>>> ----> 1 e.SerializeToString(str) >>>>> >>>>> TypeError: none of the 5 overloaded methods succeeded. 
Full details: >>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>> => >>>>> TypeError: cannot pass str as basic_string >>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>> => >>>>> TypeError: cannot pass str as basic_string >>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>> => >>>>> TypeError: cannot pass str as basic_string >>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>> => >>>>> TypeError: cannot pass str as basic_string >>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>> => >>>>> TypeError: cannot pass str as basic_string >>>>> >>>>> Best Regards, >>>>> Omer Katz. >>>>> >>>>> _______________________________________________ >>>>> pypy-dev mailing list >>>>> pypy-dev at python.org >>>>> https://mail.python.org/mailman/listinfo/pypy-dev >>>>> >>>>> >>>> >>>> >>>> -- >>>> Amaury Forgeot d'Arc >>>> >>> >> >> >> -- >> Amaury Forgeot d'Arc >> > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From omer.drow at gmail.com Tue Jan 20 17:50:56 2015 From: omer.drow at gmail.com (Omer Katz) Date: Tue, 20 Jan 2015 18:50:56 +0200 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: The fatal exception is not really that fatal. It just means that it can't serialize the protobuf object to a string. The normal protobuf Python bindings just raise Python exceptions. See https://github.com/google/protobuf/search?l=python&q=Exception&utf8=%E2%9C%93 The problem with IsInitialized() is that it doesn't report what's wrong exactly. I can get that information from e.InitializationErrorString() and raise a Python exception but it would be preferable that if reflex has a binding to the exception object it will catch it and reraise a Python version of that exception. 
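[Editorial note: a stopgap along the lines of Amaury's suggestion, while C++ exceptions remain uncaught: check the message first and raise a Python exception built from InitializationErrorString(). This is a sketch only; EncodeError and FakeMessage are invented names, and the bound methods are the ones named in this thread (IsInitialized, InitializationErrorString, SerializeToString).]

```python
# Guard on the Python side so an uninitialized message raises a Python
# exception instead of reaching the C++ CHECK that aborts the process.
# EncodeError is an invented name; the msg methods are the bound ones
# discussed above.

class EncodeError(Exception):
    """Raised instead of letting the C++ FatalException kill the process."""

def safe_serialize(msg, out):
    if not msg.IsInitialized():
        raise EncodeError("missing required fields: %s"
                          % msg.InitializationErrorString())
    return msg.SerializeToString(out)

# Tiny stand-in object, just to show the guard firing:
class FakeMessage:
    def IsInitialized(self):
        return False
    def InitializationErrorString(self):
        return "header"
    def SerializeToString(self, out):
        raise AssertionError("never reached when uninitialized")

try:
    safe_serialize(FakeMessage(), out=None)
except EncodeError as exc:
    print(exc)  # missing required fields: header
```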
2015-01-20 18:18 GMT+02:00 Amaury Forgeot d'Arc : > 2015-01-20 17:00 GMT+01:00 Omer Katz : > >> I tried to pass a bytearray and that's also not currently supported. >> Any clue about what should I do with the exception? It definitely >> shouldn't crash the process. I need it to raise a python exception instead. >> > The only way to prevent a crash is to add a "catch" block somehow in C++, > and I don't see anything like this in cppyy. > This said, it's probably a bad idea to continue something after what the > library calls a "FatalError"... > Better add a check like "if e.IsInitialized()" before calling > SerializeToString. > > > >> On 20 Jan 2015 17:49, "Amaury Forgeot d'Arc" >> wrote: >> >> 2015-01-20 16:07 GMT+01:00 Omer Katz : >>> >>>> That's correct but can't we handle those cases in cppyy? >>>> We should provide a native Python interface whenever it's possible. >>>> >>> >>> It's not possible to take a Python string as mutable reference. >>> >>> Here are some options that cppyy could implement: >>> >>> - Use bytearray, which is mutable. >>> a = bytearray() >>> e.SerializeToString(a) >>> s = str(a) >>> >>> - Pass a list, and expect the function to append a (python) string >>> l = [] >>> e.SerializeToString(s) >>> s = l[0] >>> >>> - Change the signature of the function so that it *returns* the string >>> (like swig's OUTPUT >>> ) >>> result, s = e.SerializeToString() >>> >>> I don't know which method is the most convenient with cppyy. >>> >>> >>> >>>> >>>> 2015-01-20 15:40 GMT+02:00 Amaury Forgeot d'Arc : >>>> >>>>> Hi, >>>>> >>>>> 2015-01-20 14:14 GMT+01:00 Omer Katz : >>>>> >>>>>> The documentation is unclear how you can pass a pointer to a Python >>>>>> variable e.g.: >>>>>> str = "" >>>>>> >>>>>> e.SerializeToString(str) >>>>>> >>>>> >>>>> Message::SerializeToString() updates its argument in-place, but Python >>>>> strings are not mutable. >>>>> You should allocate a std::string from Python code, and pass it to the >>>>> function. 
>>>>> Maybe something like: >>>>> >>>>> s = cppyy.gbl.std.string() >>>>> e.SerializeToString(s) >>>>> print s >>>>> >>>>> >>>>> >>>>> >>>>>> >>>>>> --------------------------------------------------------------------------- >>>>>> TypeError Traceback (most recent call >>>>>> last) >>>>>> in () >>>>>> ----> 1 e.SerializeToString(str) >>>>>> >>>>>> TypeError: none of the 5 overloaded methods succeeded. Full details: >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> >>>>>> Best Regards, >>>>>> Omer Katz. >>>>>> >>>>>> _______________________________________________ >>>>>> pypy-dev mailing list >>>>>> pypy-dev at python.org >>>>>> https://mail.python.org/mailman/listinfo/pypy-dev >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Amaury Forgeot d'Arc >>>>> >>>> >>> >>> >>> -- >>> Amaury Forgeot d'Arc >>> >> > > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omer.drow at gmail.com Tue Jan 20 17:59:34 2015 From: omer.drow at gmail.com (Omer Katz) Date: Tue, 20 Jan 2015 18:59:34 +0200 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: Also how do I catch exceptions that are caused when parsing an event? 
>> e.ParseFromString('') >> [libprotobuf ERROR google/protobuf/message_lite.cc:123] Can't parse message of type "MyProtobufType" because it is missing required fields: header >> False 2015-01-20 18:18 GMT+02:00 Amaury Forgeot d'Arc : > 2015-01-20 17:00 GMT+01:00 Omer Katz : > >> I tried to pass a bytearray and that's also not currently supported. >> Any clue about what should I do with the exception? It definitely >> shouldn't crash the process. I need it to raise a python exception instead. >> > The only way to prevent a crash is to add a "catch" block somehow in C++, > and I don't see anything like this in cppyy. > This said, it's probably a bad idea to continue something after what the > library calls a "FatalError"... > Better add a check like "if e.IsInitialized()" before calling > SerializeToString. > > > >> On 20 Jan 2015 17:49, "Amaury Forgeot d'Arc" >> wrote: >> >> 2015-01-20 16:07 GMT+01:00 Omer Katz : >>> >>>> That's correct but can't we handle those cases in cppyy? >>>> We should provide a native Python interface whenever it's possible. >>>> >>> >>> It's not possible to take a Python string as mutable reference. >>> >>> Here are some options that cppyy could implement: >>> >>> - Use bytearray, which is mutable. >>> a = bytearray() >>> e.SerializeToString(a) >>> s = str(a) >>> >>> - Pass a list, and expect the function to append a (python) string >>> l = [] >>> e.SerializeToString(s) >>> s = l[0] >>> >>> - Change the signature of the function so that it *returns* the string >>> (like swig's OUTPUT >>> ) >>> result, s = e.SerializeToString() >>> >>> I don't know which method is the most convenient with cppyy.
>>> >>> >>> >>>> >>>> 2015-01-20 15:40 GMT+02:00 Amaury Forgeot d'Arc : >>>> >>>>> Hi, >>>>> >>>>> 2015-01-20 14:14 GMT+01:00 Omer Katz : >>>>> >>>>>> The documentation is unclear how you can pass a pointer to a Python >>>>>> variable e.g.: >>>>>> str = "" >>>>>> >>>>>> e.SerializeToString(str) >>>>>> >>>>> >>>>> Message::SerializeToString() updates its argument in-place, but Python >>>>> strings are not mutable. >>>>> You should allocate a std::string from Python code, and pass it to the >>>>> function. >>>>> Maybe something like: >>>>> >>>>> s = cppyy.gbl.std.string() >>>>> e.SerializeToString(s) >>>>> print s >>>>> >>>>> >>>>> >>>>> >>>>>> >>>>>> --------------------------------------------------------------------------- >>>>>> TypeError Traceback (most recent call >>>>>> last) >>>>>> in () >>>>>> ----> 1 e.SerializeToString(str) >>>>>> >>>>>> TypeError: none of the 5 overloaded methods succeeded. Full details: >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> bool google::protobuf::MessageLite::SerializeToString(std::string*) >>>>>> => >>>>>> TypeError: cannot pass str as basic_string >>>>>> >>>>>> Best Regards, >>>>>> Omer Katz. 
>>>>>> >>>>>> _______________________________________________ >>>>>> pypy-dev mailing list >>>>>> pypy-dev at python.org >>>>>> https://mail.python.org/mailman/listinfo/pypy-dev >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Amaury Forgeot d'Arc >>>>> >>>> >>> >>> >>> -- >>> Amaury Forgeot d'Arc >>> >> > > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlavrijsen at lbl.gov Tue Jan 20 18:03:33 2015 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Tue, 20 Jan 2015 09:03:33 -0800 (PST) Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: Hi, > I tried to pass a bytearray and that's also not currently supported. no, it really expects an std::string object to be passed through an std::string*, as Amaury advised. Any other type would require the creation of a temporary. (A C++ string is not a byte array. Typically, it carries a length data member and a pointer to what is a byte array. Further, for short strings, those data members can be used as the actual payload.) > Any clue about what should I do with the exception? It definitely shouldn't > crash the process. I need it to raise a python exception instead. Right, which is done on the CPython side. Haven't gotten around to implement the same on the PyPy side. Isn't hard, but takes time. (There's a separate issue for the cling backend, where we have yet to move to MCJit, which is needed to support C++ exceptions through LLVM JIT-ted code.) Likewise, returning out-parameters in a tuple is a nice pythonization that is on the TODO-list (which is long). Lacking time, again. Of course, this can be fixed on the python side also, by replacing the bound function with one that creates a temporary s = std.string(), calls the original function, then returns an str(s). 
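[Editorial note: the Python-side fix Wim describes at the end of this message can be sketched as below. StdString here is a stand-in for cppyy.gbl.std.string (mutable and str()-convertible), so the shape is visible without a cppyy install; all other names are invented.]

```python
# Sketch of wrapping an out-parameter C++ API into a returning Python API,
# as Wim suggests: allocate a std::string temporary, call the original
# bound method, hand back a Python str. StdString stands in for
# cppyy.gbl.std.string; fake_serialize stands in for the bound C++ method.

class StdString:
    """Stand-in for cppyy.gbl.std.string: mutable and str()-convertible."""
    def __init__(self):
        self.value = ""
    def __str__(self):
        return self.value

def returning(bound_method):
    """Turn serialize(msg, std::string*) -> bool into msg -> (bool, str)."""
    def call(msg):
        s = StdString()            # with cppyy: s = cppyy.gbl.std.string()
        ok = bound_method(msg, s)  # C++ side fills the temporary in place
        return ok, str(s)          # copy out as an immutable Python str
    return call

# Stand-in for the bound C++ method, filling its out-parameter:
def fake_serialize(msg, out):
    out.value = "wire:" + msg
    return True

serialize = returning(fake_serialize)
print(serialize("hello"))  # (True, 'wire:hello')
```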
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From omer.drow at gmail.com Tue Jan 20 18:07:26 2015 From: omer.drow at gmail.com (Omer Katz) Date: Tue, 20 Jan 2015 19:07:26 +0200 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: So cppyy isn't production ready yet? If C++ exceptions can cause the process to crash, that's very dangerous in production systems. Can we warn about this in the documentation? I think that people should know about this before investing time with it. 2015-01-20 19:03 GMT+02:00 : > Hi, > > I tried to pass a bytearray and that's also not currently supported. >> > > no, it really expects an std::string object to be passed through an > std::string*, as Amaury advised. Any other type would require the creation > of a temporary. (A C++ string is not a byte array. Typically, it carries a > length data member and a pointer to what is a byte array. Further, for > short strings, those data members can be used as the actual payload.) > > Any clue about what should I do with the exception? It definitely >> shouldn't >> crash the process. I need it to raise a python exception instead. >> > > Right, which is done on the CPython side. Haven't gotten around to > implement > the same on the PyPy side. Isn't hard, but takes time. (There's a separate > issue for the cling backend, where we have yet to move to MCJit, which is > needed to support C++ exceptions through LLVM JIT-ted code.) > > Likewise, returning out-parameters in a tuple is a nice pythonization that > is on the TODO-list (which is long). Lacking time, again. > > Of course, this can be fixed on the python side also, by replacing the > bound > function with one that creates a temporary s = std.string(), calls the > original function, then returns an str(s). > > Best regards, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wlavrijsen at lbl.gov Tue Jan 20 18:16:41 2015 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Tue, 20 Jan 2015 09:16:41 -0800 (PST) Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: Omar, > So cppyy isn't production ready yet? that will always be in the eye of the beholder. :) > If C++ exceptions can cause the process to crash that's very dangerous in > production systems. Yes, C++ exceptions do that in C++ as well. :) Which is why we forbid them. > Can we warn about this in the documentation. I think that people should > know about this before investing time with it. Adding support is probably less work than updating the docs. :P But at the moment, I'm falling from one "world-ending" emergency to another. :P Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From stevenjackson121 at gmail.com Wed Jan 21 01:57:46 2015 From: stevenjackson121 at gmail.com (Steven Jackson) Date: Tue, 20 Jan 2015 19:57:46 -0500 Subject: [pypy-dev] Numpy Topics Message-ID: Hey I'd like to know if the proposed numpy projects list at https://bitbucket.org/pypy/extradoc/src/extradoc/planning/micronumpy.txt is still up to date, and if so what is meant by "a good sort function." If it's just a matter of implementing a known good algorithm, that seems like a good way to start contributing to pypy. The advice on http://doc.pypy.org/en/latest/project-ideas.html suggested posting this question to #pypy on IRC, which I attempted to do through http://webchat.freenode.net/ but I never got a response. It was my first time trying to communicate over IRC, so I'm not sure if I did something incorrectly while trying to join the channel (I saw buildbot messages but no one else speaking) or if the lack of activity was simply due to time difference (I'm on USA east coast time, while I'm aware that much of the pypy-dev community is located in Europe). 
Any help with either the original question or joining the IRC discussion would be greatly appreciated :) -- Steven Jackson -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbaldridge at gmail.com Wed Jan 21 02:07:17 2015 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Tue, 20 Jan 2015 18:07:17 -0700 Subject: [pypy-dev] Sudden failures during compile-c In-Reply-To: References: Message-ID: I fixed both of those now, but to no avail. The build still fails. But looking at the error it's interesting to note that "do_yield_thread" does exist, but "do_yield_thread_reload" does not. From what I can figure out, this "_reload" function is generated by asmgcroot. My OSX build is defaulting to shadowstack. I switched all builds to use shadowstack and the build error goes away. So I guess that was the issue. Timothy On Tue, Jan 20, 2015 at 8:05 AM, Armin Rigo wrote: > Re-hi, > > On 20 January 2015 at 16:04, Armin Rigo wrote: > > Maybe related: there is a copy mistake in after_external_call(). It > > should not call the get/set_saved_errno() functions. > > Also, did you mean "_cleanup_()" instead of "__cleanup__()"? > > > Armin > -- "One of the main causes of the fall of the Roman Empire was that--lacking zero--they had no way to indicate successful termination of their C programs." (Robert Firth) -------------- next part -------------- An HTML attachment was scrubbed... URL: From drsalists at gmail.com Wed Jan 21 03:19:12 2015 From: drsalists at gmail.com (Dan Stromberg) Date: Tue, 20 Jan 2015 18:19:12 -0800 Subject: [pypy-dev] Numpy Topics In-Reply-To: References: Message-ID: On Tue, Jan 20, 2015 at 4:57 PM, Steven Jackson wrote: > > Hey I'd like to know if the proposed numpy projects list at > https://bitbucket.org/pypy/extradoc/src/extradoc/planning/micronumpy.txt is > still up to date, and if so what is meant by "a good sort function." 
> If it's just a matter of implementing a known good algorithm, that seems > like a good way to start contributing to pypy. As far as sorting in python goes, you might find http://stromberg.dnsalias.org/svn/sorts/compare/trunk/ interesting. It includes a pure-python version of timsort, among others. I'm guessing either timsort or funnelsort would be best on pypy because of their locality of reference. From arigo at tunes.org Wed Jan 21 07:13:17 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 21 Jan 2015 07:13:17 +0100 Subject: [pypy-dev] Numpy Topics In-Reply-To: References: Message-ID: Hi Dan, On 21 January 2015 at 03:19, Dan Stromberg wrote: > It includes a pure-python version of timsort, among others. There is one in PyPy too. Kind of obvious, in fact: we need one for implementing `list.sort()`. I think the original question was instead very numpy-specific: it refers, I guess, to some sorting that you do on numpy arrays. I guess (but I have no real clue) that timsort is not considered good for that purpose. Otherwise, it would be written in the planning file: "plug our existing timsort into numpy". A bientôt, Armin. From arigo at tunes.org Wed Jan 21 07:07:12 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 21 Jan 2015 07:07:12 +0100 Subject: [pypy-dev] Sudden failures during compile-c In-Reply-To: References: Message-ID: Hi Timothy, On 21 January 2015 at 02:07, Timothy Baldridge wrote: > I switched all builds to use shadowstack and the > build error goes away. So I guess that was the issue. Can you tell us how to reproduce anyway? It's strange that you get this problem, because a "void pypy_g_do_yield_thread_reload(void)" is present in the C code generated for PyPy with asmgcc. A bientôt, Armin. 
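[Editor's note: since the "good sort function" thread keeps coming back to implementing a known algorithm, here is a minimal pure-Python mergesort sketch of the kind one could start from. It is illustrative only; an actual micronumpy sort would be written in RPython against the ndarray storage, and would also need to handle record arrays as Matti notes later in the thread.]

```python
def mergesort(items):
    """Stable mergesort; returns a new sorted list."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = mergesort(items[:mid])
    right = mergesort(items[mid:])
    # Merge the two sorted halves, preferring the left element on ties
    # so the sort stays stable (as numpy's 'mergesort' is documented to be).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(mergesort([5, 2, 4, 6, 1, 3]))  # -> [1, 2, 3, 4, 5, 6]
```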
From arigo at tunes.org Wed Jan 21 07:23:41 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 21 Jan 2015 07:23:41 +0100 Subject: [pypy-dev] OrderedDict.move_to_end() Message-ID: Hi all, About the new OrderedDict and how to support `move_to_end(last=False)` in the py3k branch: an implementation of the correct complexity is possible. It would piggy-back on the part of `lookup_function_no` that acts as a counter for how many entries at the start are known to be deleted. This number is present to allow for a good implementation of `popitem(last=False)`, so that it doesn't have to scan a larger and larger area of deleted items. So you can use the same number in reverse. As long as this number is greater than zero, you can insert the new item at position "this number minus one". When it is zero, you resize and reindex the dictionary by adding an extra argument to the relevant functions which would force it to artificially reserve n free entries at the start. If n is proportional to "num_live_items", maybe 1/8 or 1/16 of it, it should be enough to give amortized constant time to the operation. A bientôt, Armin. From omer.drow at gmail.com Wed Jan 21 11:10:21 2015 From: omer.drow at gmail.com (Omer Katz) Date: Wed, 21 Jan 2015 12:10:21 +0200 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: One last thing: the documentation doesn't specify where I should place the rootmap file. Is it where $REFLEX_HOME is set? 2015-01-20 19:16 GMT+02:00 : > Omar, > > So cppyy isn't production ready yet? >> > > that will always be in the eye of the beholder. :) > > If C++ exceptions can cause the process to crash that's very dangerous in >> production systems. >> > > Yes, C++ exceptions do that in C++ as well. :) Which is why we forbid them. > > Can we warn about this in the documentation. I think that people should >> know about this before investing time with it. >> > > Adding support is probably less work then updating the docs. 
:P But at the > moment, I'm falling from one "world-ending" emergency to another. :P > > > Best regards, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Wed Jan 21 11:18:59 2015 From: matti.picus at gmail.com (Matti Picus) Date: Wed, 21 Jan 2015 12:18:59 +0200 Subject: [pypy-dev] Numpy Topics In-Reply-To: References: Message-ID: <54BF7D13.70903@gmail.com> On 21/01/2015 2:57 AM, Steven Jackson wrote: > > Hey I'd like to know if the proposed numpy projects list at > https://bitbucket.org/pypy/extradoc/src/extradoc/planning/micronumpy.txt > is still up to date, and if so what is meant by "a good sort function." > If it's just a matter of implementing a known good algorithm, that > seems like a good way to start contributing to pypy. > > The advice on http://doc.pypy.org/en/latest/project-ideas.html > suggested posting this question to #pypy on IRC, which I attempted to > do through http://webchat.freenode.net/ but I never got a response. It > was my first time trying to communicate over IRC, so I'm not sure if I > did something incorrectly while trying to join the channel (I saw > buildbot messages but no one else speaking) or if the lack of activity > was simply due to time difference (I'm on USA east coast time, while > I'm aware that much of the pypy-dev community is located in Europe). > > Any help with either the original question or joining the IRC > discussion would be greatly appreciated :) > > -- > Steven Jackson > > > _______________________________________________ > The list is not really up to date, IRC is a good place to get more info but it is a low-activity channel with most devs online during European day-evening, and not every day. As for sort, we have implemented only timsort for ndarrays of numeric types, not record arrays. Numpy supports 'quicksort', 'mergesort', and 'heapsort'; timsort is not supported. 
So to be numpy-compatible we should maybe support the other kinds, and support sorting record arrays. It would be great to get more people involved in pypy-numpy. Matti From stevenjackson121 at gmail.com Wed Jan 21 13:00:27 2015 From: stevenjackson121 at gmail.com (Steven Jackson) Date: Wed, 21 Jan 2015 07:00:27 -0500 Subject: [pypy-dev] Numpy Topics In-Reply-To: <54BF7D13.70903@gmail.com> References: <54BF7D13.70903@gmail.com> Message-ID: Thank you for your responses; I'm pretty sure I can do quicksort, mergesort, and heapsort. :) Also, I did see responses starting in the minute after I decided to send an email, so I think I'm good on the IRC front. Am I right in assuming that I should check out "default," make my own branch, write failing test cases, make the test cases succeed (occasionally merging in default) and then... actually I don't know what then. Commit? Pull request? I've never worked on a large project under version control and "Getting Started Developing with Pypy", "How to contribute to Pypy", and "You want to help with Pypy, now what?" all seem to assume prior knowledge about version control in general. Can anyone point me to a resource that can help me understand the development cycle? On Wed, Jan 21, 2015 at 5:18 AM, Matti Picus wrote: > > On 21/01/2015 2:57 AM, Steven Jackson wrote: > >> >> Hey I'd like to know if the proposed numpy projects list at >> https://bitbucket.org/pypy/extradoc/src/extradoc/planning/micronumpy.txt >> is still up to date, and if so what is meant by "a good sort function." >> If it's just a matter of implementing a known good algorithm, that seems >> like a good way to start contributing to pypy. >> >> The advice on http://doc.pypy.org/en/latest/project-ideas.html suggested >> posting this question to #pypy on IRC, which I attempted to do through >> http://webchat.freenode.net/ but I never got a response. 
It was my first >> time trying to communicate over IRC, so I'm not sure if I did something >> incorrectly while trying to join the channel (I saw buildbot messages but >> no one else speaking) or if the lack of activity was simply due to time >> difference (I'm on USA east coast time, while I'm aware that much of the >> pypy-dev community is located in Europe). >> >> Any help with either the original question or joining the IRC discussion >> would be greatly appreciated :) >> >> -- >> Steven Jackson >> >> >> _______________________________________________ >> >> The list is not really up to date, IRC is a good place to get more info > but it is a low-activity channel with most devs online during European > day-evening, and not every day. > As for sort, we have implemented only timsort for ndarrays of numeric > types, not record arrays. Numpy supports ?quicksort?, ?mergesort?, and > ?heapsort?, timsort is not supported. So to be numpy-compatible we should > maybe support the other kinds, and support sorting record arrays. > It would be great to get more people involved in pypy-numpy. > Matti > -- Steven Jackson -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Jan 21 16:23:51 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 21 Jan 2015 16:23:51 +0100 Subject: [pypy-dev] Numpy Topics In-Reply-To: References: <54BF7D13.70903@gmail.com> Message-ID: Hi Steven, On 21 January 2015 at 13:00, Steven Jackson wrote: > Am I right in assuming that I should checkout "default," make my own branch, > write failing test cases, make the test cases succeed (occasionally merging > in default) and then... Yes, you're right up to here :-) The point is that you should commit often, in your own branch; by "often" I mean for example after you have added one or a few tests, after you fixed one of them, and so on. 
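[Editor's note: in command form, the commit-often cycle Armin describes (and details below) looks roughly like the following. `yourname` and the branch name are placeholders, and the final pull-request step happens in the bitbucket web interface, not on the command line.]

```shell
# One-time setup: fork pypy/pypy on bitbucket, then clone your fork.
hg clone https://bitbucket.org/yourname/pypy
cd pypy

# Work happens on a named branch.
hg branch numpy-sorting

# Edit, test, and commit in small steps...
hg commit -m "add failing tests for ndarray quicksort"

# ...publishing back to your fork regularly.
hg push --new-branch

# When the branch is ready (or at an interesting intermediate state),
# open a pull request from yourname/pypy to pypy/pypy on bitbucket.
```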
In more detail, what you should do is make an account on bitbucket, then go to https://bitbucket.org/pypy/pypy/ and click "fork" (left icons). You get a fork of the repository, e.g. in https://bitbucket.org/yourname/pypy. Then you clone that locally (it takes time) with "hg clone https://bitbucket.org/yourname/pypy". Make a branch with e.g. "hg branch numpy-sorting". Edit stuff, and "hg commit" regularly; a one-line checkin message is fine. Remember to do "hg push" to publish your commits back to https://bitbucket.org/yourname/pypy, which you should do regularly too, e.g. after every commit or group of commits. The final step is to open a pull request, so that we know that you'd like to merge that branch back to the original pypy/pypy repo (which can also be done several times if you have interesting intermediate states). And if at this point you feel safe working with "hg", we can give you access to pypy/pypy where you can directly push your work; if it is done in branches there is no risk to break stuff and we can still review the branches you want to merge. A bientôt, Armin. From wlavrijsen at lbl.gov Wed Jan 21 19:00:28 2015 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Wed, 21 Jan 2015 10:00:28 -0800 (PST) Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: Omar, > The documentation doesn't specify where should I place the rootmap file. yes it does. :) "By convention, the rootmap files should be located next to the reflection info libraries, so that they can be found through the normal shared library search path." i.e. they are found through the LD_LIBRARY_PATH envar. Standard problem when writing documentation: either tutorial style, as was chosen, which is nice for people new to it, but makes it hard to find info afterwards. Or index-style, which has the opposite result. 
:P Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From omer.drow at gmail.com Wed Jan 21 21:56:26 2015 From: omer.drow at gmail.com (Omer Katz) Date: Wed, 21 Jan 2015 22:56:26 +0200 Subject: [pypy-dev] cppyy questions In-Reply-To: References: Message-ID: My name is Omer which is the same as Omar (which is in Arabic) only in Hebrew. It's a common mistake. Don't worry. Is there a good guide for compiling C++ code when running setup.py? I think we should link to it in the documentation. What happens if I install a cppyy extension using setup.py? Will the rootmap be loaded from site-packages? On 21 Jan 2015 at 19:59, wrote: > Omar, > > The documentation doesn't specify where should I place the rootmap file. >> > > yes it does. :) > > "By convention, the rootmap files should be located next to the reflection > info libraries, so that they can be found through the normal shared > library > search path." > > i.e. they are found through the LD_LIBRARY_PATH envar. > > Standard problem when writing documentation: either tutorial style, as was > chosen, which is nice for people new to it, but makes it hard to find info > afterwards. Or index-style, which has the opposite as result. :P > > Best regards, > Wim > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net > -------------- next part -------------- An HTML attachment was scrubbed... 
I've not spend any time thinking about that, though. > What happens if I install a cppyy extension using setup.py? Will the > rootmap be loaded from site-packages? No, as that is not part of LD_LIBRARY_PATH. I'd have to try and see whether it can be added dynamically (i.e. whether the path envar is cached or not) through an os.environ update. I can try later (am in a workshop atm.). Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From arigo at tunes.org Fri Jan 23 10:28:50 2015 From: arigo at tunes.org (Armin Rigo) Date: Fri, 23 Jan 2015 10:28:50 +0100 Subject: [pypy-dev] errno and GetLastError in RPython Message-ID: Hi all, I recently merged the "errno-again" branch. This branch moves the reading/saving of errno (and on Windows Get/SetLastError) closer to the actual function call. It should avoid bugs about rarely getting the wrong value for errno, in case "something special" happened: for example, it was not impossible that a call to malloc would invoke the GC at precisely the wrong spot, which might need to ask the OS for more memory, which would overwrite the current value of errno. The bug actually showed up on the Windows buildbots for GetLastError(), which would in some cases incorrectly return 0 just if the code happened to be JIT-traced (not before and not after). This is now fixed. It means any RPython project needs to be updated when it upgrades to the trunk version of the RPython translation toolchain. The fix is rather mechanical. Replace rposix.get_errno() with rposix.get_saved_errno(). Importantly, review each place that you change. You need to make sure which external function call is done before (usually in the few lines before). Once you're sure which function's errno is being checked, go to the declaration of that function, which should be using rffi.llexternal(). Add the keyword argument "save_err=rffi.RFFI_SAVE_ERRNO". 
This forces errno to be saved immediately after the function call, into the so-called "saved errno". This "saved errno" is another thread-local variable, which rposix.get_saved_errno() returns. Similarly with rwin32.GetLastError() -> rwin32.GetLastError_saved() + rffi.RFFI_SAVE_LASTERROR. If there are cases with rposix.set_errno(0), they can be killed and the following function given the flag "RFFI_ZERO_ERRNO_BEFORE". See the new docstrings of the rposix.get/set_saved_errno() and rwin32.Get/SetLastError_saved() for more details. A bientôt, Armin. From arigo at tunes.org Fri Jan 23 15:29:33 2015 From: arigo at tunes.org (Armin Rigo) Date: Fri, 23 Jan 2015 15:29:33 +0100 Subject: [pypy-dev] RFC: Copy-on-write list slices In-Reply-To: References: Message-ID: Hi Mike, On 20 January 2015 at 05:26, Mike Kaplinskiy wrote: > https://bitbucket.org/mikekap/pypy/commits/b774ae0be11b2012852a175f4bae44841343f067 > has an implementation of list slicing that copies the data on write. (The > third idea from http://doc.pypy.org/en/latest/project-ideas.html .) One hard bit about implementing this kind of change is making sure you don't accidentally slow some other kinds of code down. This requires running all our benchmarks, at least, as fijal pointed out. Here are some additional issues to consider. We have to consider the overhead in terms of memory usage: it's probably fine if the overhead is only one pointer per list object (which often points to some fixed tuple like (0, None, None)). However, if you do "x = large_list[5:7]" you keep the whole "large_list" alive as long as "x" is alive. This might be an issue for some cases. Resolving it is possible but more involved. It would probably require GC support --- i.e. we can't really solve this nicely in regular RPython as it is now. The details need discussion, but I can think for example about a way to tell the GC "this is a pointer to the full list, but I'm only going to access this range of items, so ignore the rest". 
Another way would be to have some callback that copies just the items needed out of the large list, but that's full of open questions... A bientôt, Armin. From ho33e5 at gmail.com Sun Jan 25 00:05:36 2015 From: ho33e5 at gmail.com (Ho33e5) Date: Sun, 25 Jan 2015 00:05:36 +0100 Subject: [pypy-dev] rpython and pep 484 Message-ID: <289E006E-3008-40E9-8F4A-4B03F872932C@gmail.com> Hi everybody, firstly, this is just an email for personal interest and has nothing to do directly with development so this mailing list may not be quite the right place (I am going to hang around on #pypy...). I am a student and generally interested in pypy development, especially in the rpython language, and I have some general questions: What is your view on the new typing/mypy things that are happening on python-dev (pep 484)? What I mean is will this make the typing system of rpython evolve? Could RTyper be adapted to work on pep 484 annotations (would it actually be useful)? I read a bit of the paper about rpython listed on the docs and I had the feeling that your typing is a bit more low level. The quite different goals and constraints that the 2 type systems have may explain why they look different, but could there be an interaction (in one way or another)? Another question that is related: it's maybe early to think about that but could it be reasonable to expect that pypy will better optimize pep-484-annotated python programs? The trusting of these user annotations is indeed a problem, so a pypy option could specify that we want it to trust the type annotations. It may then be worth just writing programs in rpython directly. These questions are quite hypothetical so I don't expect concrete answers, just thoughts! If someone wants to react to this or point me to other (theoretical) resources about rpython... 
:) Bonsoir, Peio From arigo at tunes.org Sun Jan 25 09:28:15 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 25 Jan 2015 09:28:15 +0100 Subject: [pypy-dev] rpython and pep 484 In-Reply-To: <289E006E-3008-40E9-8F4A-4B03F872932C@gmail.com> References: <289E006E-3008-40E9-8F4A-4B03F872932C@gmail.com> Message-ID: Hi, On 25 January 2015 at 00:05, Ho33e5 wrote: > What is your view on the new typing/mypy things that are happening on python-dev > (pep 484)? What I mean is will this make the typing system of rpython evolve? Could > RTyper be adapted to work on pep 484 annotations (would it actually be useful)? You are confusing RPython for being "Python-with-type-annotations". It is not: RPython does not have explicit types annotations. I think that this alone invalidates the rest of your discussion about RPython. So let's instead talk about "APython", which would be Python-with-type-annotations. (If we're designing some new language, it can be like Python 3.x for x large enough to include support for the pep 484 syntax, as opposed to RPython which is a subset of Python 2.) > An other question that is related: it's maybe early to think about that but could it be > reasonable to expect that pypy will better optimize pep-484-annotated python programs? > The thrusting of these user annotations is indeed a problem, so a pypy option could > specify that we want it to thrust the type annotations. The type annotations have not been written with low-level performance in mind. For example, there is no standard type annotation that means "this is a machine-sized integer"; you have only "this is a Python 3 int object", which is a Python 2 "int-or-long". Similarly, there is no mention in PEP 484 about specifying the type of instance attributes, for example. So APython would need a subtly different set of types to work on. Let's ignore the problem that this breaks compatibility with any other tool written for PEP 484. It is very unclear how much speed PyPy could potentially gain. 
Basically we would trade removing an unknown but likely small fraction of the guards emitted by the JIT compiler against the very real result of PyPy segfaulting as soon as there is a mismatch somewhere. At least in C++ the compiler does some real type checking and reports to you. Supporting the APython approach would basically be a lot of hard work (for us) with the end result of giving users a sure way to shoot themselves in the foot. I would argue that there are plenty of old ways to shoot yourself in the foot which are at least more supported by a large number of tools. For example, C++ comes with two essential tools that would be missing: the first is the C++ compiler itself, which does a better job at reporting typing errors than any best-effort type checker can; the second is gdb. I would argue that you first need equivalents of these two tools. Exact type checking is stricter than best-effort. I doubt that it is possible to write such a tool that would accept large 3rd-party Python libraries which have not been structured for type-checking in the first place. If you think otherwise, then I would say it is your job to write it --- this would seem like a reasonable first step along this path :-) A bientôt, Armin. From n210241048576 at gmail.com Sun Jan 25 10:13:09 2015 From: n210241048576 at gmail.com (Robert Grosse) Date: Sun, 25 Jan 2015 01:13:09 -0800 Subject: [pypy-dev] rpython and pep 484 In-Reply-To: References: <289E006E-3008-40E9-8F4A-4B03F872932C@gmail.com> Message-ID: Wouldn't it be possible to get a performance improvement from type annotations without sacrificing correctness? From the perspective of static compilation, if you have an infinitely powerful type inference engine, there shouldn't be any difference whether you have type annotations or not. 
The reason that programmer supplied type annotations are useful is because in practice compilers are not infinitely smart, and for efficiency, you need to judiciously apply widening to the type inference process. e.g. telling the compiler "regardless of what the actual types this function is called by, it's written to generically operate on super type C so you can save work on the type inference and just assume type C here" I think the same effect could be useful in dynamic compilation. You have to place guards to ensure correct behavior, but there are a lot of different places or methods you can do to insert guards to have the same effect and you pretty much have to guess which one is best. Programmer supplied type annotations could be used as a hint to place guards more intelligently without sacrificing correctness. Of course, I haven't done any Pypy development, so I don't know how feasible this is in practice. On Sun, Jan 25, 2015 at 12:28 AM, Armin Rigo wrote: > Hi, > > On 25 January 2015 at 00:05, Ho33e5 wrote: > > What is your view on the new typing/mypy things that are happening on > python-dev > > (pep 484)? What I mean is will this make the typing system of rpython > evolve? Could > > RTyper be adapted to work on pep 484 annotations (would it actually be > useful)? > > You are confusing RPython for being "Python-with-type-annotations". > It is not: RPython does not have explicit types annotations. > > I think that this alone invalidates the rest of your discussion about > RPython. So let's instead talk about "APython", which would be > Python-with-type-annotations. (If we're designing some new language, > it can be like Python 3.x for x large enough to include support for > the pep 484 syntax, as opposed to RPython which is a subset of Python > 2.) > > > An other question that is related: it's maybe early to think about that > but could it be > > reasonable to expect that pypy will better optimize pep-484-annotated > python programs? 
> > The thrusting of these user annotations is indeed a problem, so a pypy > option could > > specify that we want it to thrust the type annotations. > > The type annotations have not been written with low-level performance > in mind. For example, there is no standard type annotation that means > "this is a machine-sized integer"; you have only "this is a Python 3 > int object", which is a Python 2 "int-or-long". Similarly, there is > no mention in PEP 484 about specifying the type of instance > attributes, for example. > > So APython would need a subtly different set of types to work on. > Let's ignore the problem that this breaks compatibility with any other > tool written for PEP 484. It is very unclear how much speed PyPy > could potentially gain. Basically we would trade removing an unknown > but likely small fraction of the guards emitted by the JIT compiler > against the very real result of PyPy segfaulting as soon as there is a > mismatch somewhere. At least in C++ the compiler does some real type > checking and reports to you. Supporting the APython approach would > basically be a lot of hard work (for us) with the end result of giving > users a sure way to shoot themselves in the foot. > > I would argue that there are plenty of old ways to shoot yourself in > the foot which are at least more supported by a large number of tools. > For example, C++ comes with two essential tools that would be missing: > the first is the C++ compiler itself, which does a better job at > reporting typing errors than any best-effort type checker can; the > second is gdb. I would argue that you first need equivalents of these > two tools. > > Exact type checking is stricter than best-effort. I doubt that it is > possible to write such a tool that would accept large 3rd-party Python > libraries which have not been structured for type-checking in the > first place. 
If you think otherwise, then I would say it is your job > to write it --- this would seem like a reasonable first step along > this path :-) > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun Jan 25 11:41:00 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 25 Jan 2015 12:41:00 +0200 Subject: [pypy-dev] rpython and pep 484 In-Reply-To: References: <289E006E-3008-40E9-8F4A-4B03F872932C@gmail.com> Message-ID: In theory it's possible. In practice I struggle to imagine an example where it would make real difference (unlike other hints, e.g. "this value tends to be smaller than 5" or "this list tends to store only/mostly integers" etc.) On Sun, Jan 25, 2015 at 11:13 AM, Robert Grosse wrote: > Wouldn't it be possible to get a performance improvement from type > annotations without sacrificing correctness? > > From the perspective of static compilation, if you have an infinitely > powerful type inference engine, there shouldn't be any difference to have > type annotations or not. The reason that programmer supplied type > annotations are useful is because in practice compilers are not infinitely > smart, and for efficiency, you need to judiciously apply widening to the > type inference process. e.g. telling the compiler "regardless of what the > actual types this function is called by, it's written to generically operate > on super type C so you can save work on the type inference and just assume > type C here" > > I think the same effect could be useful in dynamic compilation. You have to > place guards to ensure correct behavior, but there are a lot of different > places or methods you can do to insert guards to have the same effect and > you pretty much have to guess which one is best. 
Programmer supplied type > annotations could be used as a hint to place guards more intelligently > without sacrificing correctness. Of course, I haven't done any Pypy > development, so I don't know how feasible this is in practice. > > On Sun, Jan 25, 2015 at 12:28 AM, Armin Rigo wrote: >> >> Hi, >> >> On 25 January 2015 at 00:05, Ho33e5 wrote: >> > What is your view on the new typing/mypy things that are happening on >> > python-dev >> > (pep 484)? What I mean is will this make the typing system of rpython >> > evolve? Could >> > RTyper be adapted to work on pep 484 annotations (would it actually be >> > useful)? >> >> You are confusing RPython for being "Python-with-type-annotations". >> It is not: RPython does not have explicit type annotations. >> >> I think that this alone invalidates the rest of your discussion about >> RPython. So let's instead talk about "APython", which would be >> Python-with-type-annotations. (If we're designing some new language, >> it can be like Python 3.x for x large enough to include support for >> the pep 484 syntax, as opposed to RPython which is a subset of Python >> 2.) >> >> > Another question that is related: it's maybe early to think about that >> > but could it be >> > reasonable to expect that pypy will better optimize pep-484-annotated >> > python programs? >> > The trusting of these user annotations is indeed a problem, so a pypy >> > option could >> > specify that we want it to trust the type annotations. >> >> The type annotations have not been written with low-level performance >> in mind. For example, there is no standard type annotation that means >> "this is a machine-sized integer"; you have only "this is a Python 3 >> int object", which is a Python 2 "int-or-long". Similarly, there is >> no mention in PEP 484 about specifying the type of instance >> attributes, for example. >> >> So APython would need a subtly different set of types to work on. 
>> Let's ignore the problem that this breaks compatibility with any other >> tool written for PEP 484. It is very unclear how much speed PyPy >> could potentially gain. Basically we would trade removing an unknown >> but likely small fraction of the guards emitted by the JIT compiler >> against the very real result of PyPy segfaulting as soon as there is a >> mismatch somewhere. At least in C++ the compiler does some real type >> checking and reports to you. Supporting the APython approach would >> basically be a lot of hard work (for us) with the end result of giving >> users a sure way to shoot themselves in the foot. >> >> I would argue that there are plenty of old ways to shoot yourself in >> the foot which are at least more supported by a large number of tools. >> For example, C++ comes with two essential tools that would be missing: >> the first is the C++ compiler itself, which does a better job at >> reporting typing errors than any best-effort type checker can; the >> second is gdb. I would argue that you first need equivalents of these >> two tools. >> >> Exact type checking is stricter than best-effort. I doubt that it is >> possible to write such a tool that would accept large 3rd-party Python >> libraries which have not been structured for type-checking in the >> first place. If you think otherwise, then I would say it is your job >> to write it --- this would seem like a reasonable first step along >> this path :-) >> >> >> A bientôt, >> >> Armin. 
>> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From matti.picus at gmail.com Thu Jan 29 06:22:05 2015 From: matti.picus at gmail.com (Matti Picus) Date: Thu, 29 Jan 2015 07:22:05 +0200 Subject: [pypy-dev] starting 2.5 release cycle, help needed with macos Message-ID: <54C9C37D.2040503@gmail.com> Buildbots for linux are green (arm and x86), windows seems as good as it gets. I have looked at the open issues, none seem like blockers. My personal baby, the ufuncapi branch, seems to be functioning after I found the "last bug". So I guess it is time to start the 2.5 release cycle, unless I missed something. We have a persistent crash with macos nightly builds in the _continuation module, help is needed to track it down http://buildbot.pypy.org/summary?builder=pypy-c-jit-macosx-x86-64 Armin suggested maybe it was shadowstack, I think I ruled that out by translating on x86 linux: https://gist.github.com/mattip/8407b7fa7dbe1cc2f786 Any help/criticism/comments are welcome Matti From arigo at tunes.org Thu Jan 29 10:58:04 2015 From: arigo at tunes.org (Armin Rigo) Date: Thu, 29 Jan 2015 10:58:04 +0100 Subject: [pypy-dev] starting 2.5 release cycle, help needed with macos In-Reply-To: <54C9C37D.2040503@gmail.com> References: <54C9C37D.2040503@gmail.com> Message-ID: Hi all, On 29 January 2015 at 06:22, Matti Picus wrote: > We have a persistent crash with macos nightly builds in the _continuation > module, help is needed to track it down I am willing to look into this bug provided someone provides an OS/X machine where I can log in and run gdb. It just happens that the core team is on Linux and can access Windows (both VMs and real machines), but no OS/X. A bientôt, Armin. 
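The point made in the PEP 484 thread above — that an `int` annotation does not pin down a machine-word integer, because it admits arbitrary magnitudes and arbitrary `int` subclasses — can be made concrete with a small sketch. The names `Flag` and `total` below are invented for illustration; they come from no PyPy or CPython code:

```python
# Sketch: why a PEP 484 "int" annotation cannot be unboxed to a machine
# word without a guard.  Both calls below satisfy the annotation, but
# only the first could use machine-word arithmetic.

class Flag(int):
    """A made-up int subclass that overrides addition."""
    def __add__(self, other):
        return Flag(int(self) + int(other))

def total(x: int, y: int) -> int:
    return x + y

print(total(2, 3))                             # plain small ints: 5
print(type(total(Flag(2), Flag(3))).__name__)  # dispatches to Flag.__add__: Flag
print(total(2**100, 1) == 2**100 + 1)          # unbounded magnitude: True
```

The annotation is satisfied in every case, yet the JIT would still need a guard before assuming small-int arithmetic — which is the "extremely minor speed benefits" argument in a nutshell.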
From fijall at gmail.com Thu Jan 29 11:05:51 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 29 Jan 2015 12:05:51 +0200 Subject: [pypy-dev] starting 2.5 release cycle, help needed with macos In-Reply-To: References: <54C9C37D.2040503@gmail.com> Message-ID: On Thu, Jan 29, 2015 at 11:58 AM, Armin Rigo wrote: > Hi all, > > On 29 January 2015 at 06:22, Matti Picus wrote: >> We have a persistent crash with macos nightly builds in the _continuation >> module, help is needed to track it down > > I am willing to look into this bug provided someone provides an OS/X > machine where I can log in and run gdb. It just happens that the core > team is on Linux and can access Windows (both VMs and real machines), > but no OS/X. > > > A bientôt, > > Armin. I happen to have an OS X machine, so I can probably look and/or give you access. The problem is finding a decent enough internet ;-) Cheers, fijal From matti.picus at gmail.com Fri Jan 30 11:09:38 2015 From: matti.picus at gmail.com (Matti Picus) Date: Fri, 30 Jan 2015 12:09:38 +0200 Subject: [pypy-dev] pypy-c failing to find libpypy-c.so on freebsd Message-ID: <54CB5862.8070104@gmail.com> The freebsd builds are failing since we changed to --shared by default. While translation succeeds, the resulting pypy-c cannot find libpypy-c.so even though it seems to be copied properly. See for instance http://buildbot.pypy.org/builders/pypy-c-jit-freebsd-9-x86-64/builds/462/steps/shell_1/logs/stdio It seems the $ORIGIN flag is somehow not functioning properly. I found this link http://stackoverflow.com/questions/6324131/rpath-origin-not-having-desired-effect which would seem to suggest we need to add "-z origin" as well as -rpath=$ORIGIN to the linker flags. Could someone (Tobias is the admin of the buildbot, but anyone else is welcome to try) with a freebsd platform try to track this down? 
It should be enough to run python pytest.py rpython/translator/c/test/test_standalone.py -k shared --verbose -s on a pypy default repo after commit fa382e9b1c95, the test should fail. Then try to mess with the rpath_flags in rpython/translator/platform/posix.py till it passes Thanks Matti From rich at pasra.at Sat Jan 31 10:51:13 2015 From: rich at pasra.at (Richard Plangger) Date: Sat, 31 Jan 2015 10:51:13 +0100 Subject: [pypy-dev] PyPy improving generated machine code Message-ID: <54CCA591.6000806@pasra.at> Hi, I'm a student at the technical university of Vienna and currently looking for a topic to complete my master thesis. I stumbled over PEP 484 that is currently being discussed on the mailing list. It seems to me that this is going to become reality pretty soon. I had the idea that these additional type annotations could be beneficial for JIT compilation. Two weeks ago someone already mentioned pretty much the same idea (https://mail.python.org/pipermail/pypy-dev/2015-January/013037.html). In this thread it was mentioned that to improve the compiled code more detailed information (such as an integer stays in range [0-50], ...) would be necessary to remove guards of a trace. I read the document "Tracing the Meta-Level: PyPy's Tracing JIT Compiler" that was published in 2009 to understand the basics of how PyPy currently works. I assume that PyPy is still a tracing JIT compiler. By using the PEP 484 proposal I think this opens up new possibilities. Using trace compilation as it is done in PyPy or SpiderMonkey makes a lot of sense because most of the time type information is not present prior to the first execution. PEP 484 changes the game. After type inference has completed e.g. on a function it should not occur often that a variable's type is unknown. The document "Tracing the Meta-Level" already mentioned that when RPython is provided as input to PyPy it already infers the type. Is that true for non-RPython programs as well? 
I think there are two possibilities to improve the generated machine code for PyPy: * Find a sensible subset of optimizations that rely on the available type information and further improve the trace compilation * Evaluate other possibilities of interprocedural methods to compile good machine code or completely move to method-based JIT compilation. I could imagine evaluating and implementing this for my master thesis. What do you think? Would it benefit PyPy? Has anybody else started to implement something similar? Best, Richard -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From arigo at tunes.org Sat Jan 31 15:40:05 2015 From: arigo at tunes.org (Armin Rigo) Date: Sat, 31 Jan 2015 15:40:05 +0100 Subject: [pypy-dev] PyPy improving generated machine code In-Reply-To: <54CCA591.6000806@pasra.at> References: <54CCA591.6000806@pasra.at> Message-ID: Hi Richard, On 31 January 2015 at 10:51, Richard Plangger wrote: > By using the PEP 484 proposal I think this opens up new possibilities. The short answer is - no, it doesn't make sense. User-supplied type annotations wouldn't help at all if they must still be checked, like PEP 484 says. Or, assuming you're fine with obscure crashes when the type annotations are wrong, you would get at most extremely minor speed benefits. There are several reasons why. One of them is that annotations are at the wrong level (e.g. a PEP 484 "int" corresponds to Python 3's int type, which does not necessarily fit inside one machine word; even worse, an "int" annotation allows arbitrary int subclasses). Another is that a lot more information is needed to produce good code (e.g. "this `f()` called here really means this function there, and will never be monkey-patched" -- same with `len()` or `list()`, btw). 
The third reason is that some "guards" in PyPy's JIT traces don't really have an obvious corresponding type (e.g. "this dict is so far using keys which don't override `__hash__` so a more efficient implementation was used"). Many guards don't even have any correspondence with types at all ("this class attribute was not modified"; "the loop counter did not reach zero so we don't need to release the GIL"; and so on). In summary, as PyPy works right now, it is able to derive far more useful information than can ever be given by PEP 484, and it works automatically. As far as we know, this is true even if we would add other techniques to PyPy, like a fast first-pass method JIT. This should be obvious from the fact that many high-performance JavaScript VMs are method JITs too, and they work very well on source code with no explicit types either. In my opinion, the introductory sentence in that PEP is a lie: "This PEP aims to provide (...) opening up Python code to (...) performance optimizations utilizing type information." This doesn't mean the performance of PyPy is perfectly optimal today. There are certainly things to do and try. One of the major ones (in terms of work involved) would be to add a method-JIT-like approach with a quick-and-dirty initial JIT, able to give not-too-bad performance but without the large warm-up times of our current meta-tracing JIT. More about this or others in a later e-mail, if you're interested. A bientôt, Armin. From hrc706 at gmail.com Sat Jan 31 15:11:39 2015 From: hrc706 at gmail.com (=?utf-8?B?6buE6Iul5bCY?=) Date: Sat, 31 Jan 2015 23:11:39 +0900 Subject: [pypy-dev] A question about RPython's JIT in scheduler Message-ID: <01D41760-62D1-43DD-97B9-78197D8A84C6@gmail.com> Mr. Bolz, Hello, I'm a master student in Japan and this is the second time that I have sent a mail to you. :) Recently I have been implementing an Erlang interpreter in RPython, and I have just added a scheduler to my interpreter to simulate multi-process execution on a single core. 
I compared two versions of my interpreter, one with a scheduler and one without, and I was very surprised to find that there was only very little overhead for the scheduler mechanism; in my benchmark it was only 3%. In my implementation, the scheduler has a runnable queue, whose elements are tuples of an object which has a function for the dispatch loop, a program counter and a reference to the Erlang byte code. While scheduling, the scheduler just dequeues an element from the runnable queue and calls the function for the dispatch loop, which runs only for a limited time, and then the scheduler enqueues the element (the tuple of the object with the dispatch loop, the program counter and the reference to the Erlang byte code) to the runnable queue again. So in my opinion, this may be a problem for the JIT's work, because the dispatch loop does not run continuously: from the view of the scheduler, the dispatch loop runs only for a limited number of iterations, then is suspended, resumed, suspended again and so on. I think this may make it hard for the JIT to do profiling, and I also have no idea whether the compiled native code from the JIT can be reused when the dispatch loop is resumed. From the benchmark I ran I guess there may be some special care taken to handle this situation (actually I have also compared the JIT logs generated for the two versions of the interpreter, and they seemed quite similar in most cases), so I'm just curious about what the JIT actually does under a scheduler. How does the JIT overcome the trouble in this discontinuous environment? Best Regards, Ruochen Huang From yury at shurup.com Sat Jan 31 17:49:31 2015 From: yury at shurup.com (Yury V. 
Zaytsev) Date: Sat, 31 Jan 2015 17:49:31 +0100 Subject: [pypy-dev] PyPy improving generated machine code In-Reply-To: References: <54CCA591.6000806@pasra.at> Message-ID: <1422722971.2730.100.camel@newpride> On Sat, 2015-01-31 at 15:40 +0100, Armin Rigo wrote: > In my opinion, the introductory sentence in that PEP is a lie: "This > PEP aims to provide (...) opening up Python code to (...) performance > optimizations utilizing type information." I might be wrong, but my impression was that it's mainly driven by the desire to have a standardized way to add type hints for the benefit of static analysis, and "performance optimizations" just means stuff Cython could do if the code was explicitly typed. > More about this or others in a later e-mail, if you're interested. I am!!! -- Sincerely yours, Yury V. Zaytsev From ronan.lamy at gmail.com Sat Jan 31 18:32:17 2015 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Sat, 31 Jan 2015 17:32:17 +0000 Subject: [pypy-dev] pypy-c failing to find libpypy-c.so on freebsd In-Reply-To: <54CB5862.8070104@gmail.com> References: <54CB5862.8070104@gmail.com> Message-ID: <54CD11A1.8020309@gmail.com> Le 30/01/15 10:09, Matti Picus a écrit : > The freebsd builds are failing since we changed to --shared by default. > While translation succeeds, the resulting pypy-c cannot find > libpypy-c.so even though it seems to be copied properly. See for instance > > http://buildbot.pypy.org/builders/pypy-c-jit-freebsd-9-x86-64/builds/462/steps/shell_1/logs/stdio > > > It seems the $ORIGIN flag is somehow not functioning properly. I found > this link > > http://stackoverflow.com/questions/6324131/rpath-origin-not-having-desired-effect > > > which would seem to suggest we need to add "-z origin" as well as > -rpath=$ORIGIN to the linker flags. Could someone (Tobias is the admin > of the buildbot, but anyone else is welcome to try) with a freebsd > platform try to track this down? 
It should be enough to run > > python pytest.py rpython/translator/c/test/test_standalone.py -k shared > --verbose -s > > on a pypy default repo after commit fa382e9b1c95, the test should fail. > Then try to mess with the rpath_flags in > rpython/translator/platform/posix.py till it passes > Thanks for the pointers. Fixed on default in 9e7b2bbd471c767c876df266f7a32201662d9246 From rich at pasra.at Sat Jan 31 19:37:30 2015 From: rich at pasra.at (Richard Plangger) Date: Sat, 31 Jan 2015 19:37:30 +0100 Subject: [pypy-dev] PyPy improving generated machine code In-Reply-To: References: <54CCA591.6000806@pasra.at> Message-ID: <54CD20EA.9080705@pasra.at> Hi, Even if my idea (PEP 484) does not work out I might still be interested in contributing to PyPy. To decide and get my thesis going I need some more resources I can read (maybe some papers that do something similar to what you have in mind) plus some hints to isolate the topic. The method-JIT-like approach sounds interesting. It would be nice if you could provide more detail on the method-JIT-like approach and other things that can be done right now to make PyPy faster. After that I will discuss with my adviser if this is a suitable topic. Best, Richard On 01/31/2015 03:40 PM, Armin Rigo wrote: > Hi Richard, > > On 31 January 2015 at 10:51, Richard Plangger wrote: >> By using the PEP 484 proposal I think this opens up new possibilities. > > The short answer is - no, it doesn't make sense. User-supplied type > annotations wouldn't help at all if they must still be checked, like > PEP 484 says. Or, assuming you're fine with obscure crashes when the > type annotations are wrong, you would get at most extremely minor > speed benefits. > > There are several reasons why. One of them is that annotations > are at the wrong level (e.g. a PEP 484 "int" corresponds to Python 3's > int type, which does not necessarily fit inside one machine word; > even worse, an "int" annotation allows arbitrary int subclasses). 
> Another is that a lot more information is needed to produce good code > (e.g. "this `f()` called here really means this function there, and > will never be monkey-patched" -- same with `len()` or `list()`, btw). > The third reason is that some "guards" in PyPy's JIT traces don't > really have an obvious corresponding type (e.g. "this dict is so far > using keys which don't override `__hash__` so a more efficient > implementation was used"). Many guards don't even have any correspondence > with types at all ("this class attribute was not modified"; "the loop > counter did not reach zero so we don't need to release the GIL"; and > so on). > > In summary, as PyPy works right now, it is able to derive far more > useful information than can ever be given by PEP 484, and it works > automatically. As far as we know, this is true even if we would add > other techniques to PyPy, like a fast first-pass method JIT. This > should be obvious from the fact that many high-performance JavaScript > VMs are method JITs too, and they work very well on source code with > no explicit types either. In my opinion, the introductory sentence in > that PEP is a lie: "This PEP aims to provide (...) opening up Python > code to (...) performance optimizations utilizing type information." > > This doesn't mean the performance of PyPy is perfectly optimal today. > There are certainly things to do and try. One of the major ones (in > terms of work involved) would be to add a method-JIT-like approach > with a quick-and-dirty initial JIT, able to give not-too-bad > performance but without the large warm-up times of our current > meta-tracing JIT. More about this or others in a later e-mail, if > you're interested. > > > A bientôt, > > Armin. > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From dynamicgl at gmail.com Sat Jan 31 19:32:44 2015 From: dynamicgl at gmail.com (Gelin Yan) Date: Sun, 1 Feb 2015 02:32:44 +0800 Subject: [pypy-dev] pickle in pypy is slow Message-ID: Hi All I noticed pickle in pypy is slower than cPickle in python (faster than pickle in python). Is it a feature? Regards gelin yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From luciano at ramalho.org Sat Jan 31 20:43:09 2015 From: luciano at ramalho.org (Luciano Ramalho) Date: Sat, 31 Jan 2015 17:43:09 -0200 Subject: [pypy-dev] pickle in pypy is slow In-Reply-To: References: Message-ID: On Sat, Jan 31, 2015 at 4:32 PM, Gelin Yan wrote: > I noticed pickle in pypy is slower than cPickle in python (faster than > pickle in python). > > Is it a feature? the feature is that nobody had to write it in C, yet it's faster. Isn't that cool? > > Regards > > gelin yan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- Luciano Ramalho Twitter: @ramalhoorg Professor em: http://python.pro.br Twitter: @pythonprobr From arigo at tunes.org Sat Jan 31 21:18:28 2015 From: arigo at tunes.org (Armin Rigo) Date: Sat, 31 Jan 2015 21:18:28 +0100 Subject: [pypy-dev] pickle in pypy is slow In-Reply-To: References: Message-ID: Hi, On 31 January 2015 at 20:43, Luciano Ramalho wrote: > On Sat, Jan 31, 2015 at 4:32 PM, Gelin Yan wrote: >> I noticed pickle in pypy is slower than cPickle in python (faster than >> pickle in python). >> >> Is it a feature? > > the feature is that nobody had to write it in C, yet it's faster. > Isn't that cool? Our cPickle is written in pure Python: ``from pickle import *`` The speed of that is a bit slower than CPython's optimized C version, yes. 
If someone really cares about the performance of cPickle he could attempt to port it to RPython code. Likely, you need to start from CPython's cPickle module and get the same C-ish style, instead of starting from pickle.py. In this case the goal is only performance, so there wouldn't be much point if the result is only a little bit faster than pickle.py-with-the-JIT-applied-to-it. Finally, note that while we could hope to get up to the same speed as CPython, we can't hope to be much *faster* than C code that is not repeatedly interpreting anything; there is very little overhead to remove. A bientôt, Armin.
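For anyone wanting to reproduce the comparison discussed in this thread, a micro-benchmark along these lines can be run under both CPython and PyPy. This is an illustrative sketch only — the data shape and iteration count are arbitrary choices, not from the thread — and on Python 2 one would additionally time `cPickle` against it:

```python
# Illustrative pickle micro-benchmark: time repeated dumps of a small
# nested structure.  Absolute numbers depend on the machine and
# interpreter; the thread's point is that PyPy's pure-Python pickle,
# even JIT-compiled, tends to sit between CPython's pickle.py and its
# C-coded cPickle.
import pickle
import timeit

data = {"key%d" % i: list(range(20)) for i in range(200)}

elapsed = timeit.timeit(
    lambda: pickle.dumps(data, protocol=2),
    number=1000,
)
print("1000 dumps took %.3f s" % elapsed)

# Round-trip sanity check: the payload must decode back to the original.
assert pickle.loads(pickle.dumps(data, protocol=2)) == data
```

Running the same script under both interpreters (with identical `number` and data) gives a rough but directly comparable measurement.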