From dinov at microsoft.com Thu Sep 1 05:28:42 2011
From: dinov at microsoft.com (Dino Viehland)
Date: Thu, 1 Sep 2011 03:28:42 +0000
Subject: [pypy-dev] Here's a fun one...
Message-ID: <6C7ABA8B4E309440B857D74348836F2E28F432C6@TK5EX14MBXC292.redmond.corp.microsoft.com>

This came up in an internal discussion; I thought it was fun, especially given that we all behave differently:

Paste this into the REPL:

class PS1(object):
    def __init__(self):
        self.count = 0
    def __str__(self):
        self.count += 1
        return "%d >>>" % self.count

import sys
sys.ps1 = PS1()

CPython    - calls __str__
Jython     - calls __repr__
IronPython - ignores ps1
PyPy       - unsupported operand type for unary buffer: 'PS1'

(note I don't necessarily have the latest versions for everyone)

From fenn.bailey at gmail.com Thu Sep 1 07:29:09 2011
From: fenn.bailey at gmail.com (Fenn Bailey)
Date: Thu, 1 Sep 2011 15:29:09 +1000
Subject: [pypy-dev] djangobench performance
Message-ID: 

Hi all,

As an experiment, I thought I'd test JKM's djangobench (https://github.com/jacobian/djangobench) under pypy, as a way of getting a (hopefully) more useful benchmark than the template-only "django" benchmark that's standard on speed.pypy.org, and also to get an idea as to whether switching to pypy for production django apps could (currently) be a good idea.

djangobench is designed to fairly comprehensively compare the performance of different aspects of differing versions of django, in an effort to detect performance degradations/regressions/etc. It's based on perf.py from the unladen swallow project, so it was fairly easy to crudely hack it up to instead compare a single django version running under cpython 2.6 vs pypy 1.6.

---
$ python -V
Python 2.6.5
$ pypy -V
Python 2.7.1 (d8ac7d23d3ec, Aug 17 2011, 11:51:19)
[PyPy 1.6.0 with GCC 4.4.3]
---

The results were a little surprising (and not in a good way): http://pastie.org/2463906

Based on the highly degraded performance (>2 orders of magnitude in some cases) I'm guessing there's some sort of issue in the way I'm benchmarking things. The code can be found here: https://github.com/fennb/djangobench

The environment is ubuntu 10.04 64bit running in a VM on a macbook pro. cpython was the current ubuntu binary package, pypy was the 1.6 precompiled binary from pypy.org.

It's quite possible memory size issues may have impacted some of the benchmarks (but not all). Any ideas as to why the performance drop-off would be so significant?

Cheers,

Fenn.

From tleeuwenburg at gmail.com Thu Sep 1 08:45:00 2011
From: tleeuwenburg at gmail.com (Tennessee Leeuwenburg)
Date: Thu, 1 Sep 2011 16:45:00 +1000
Subject: [pypy-dev] [Speed] Co-ordinating benchmarking
Message-ID: 

Okay, so all of a sudden there seem to be a *lot* of people looking at this. This became a long thread quickly, and I only just got up to speed with it. There are a lot of new names, and I don't know what existing skills, interests and territories exist. Apologies for any faux pas.

I would like to organise the list of tasks a bit more clearly, if that is okay. I may be less familiar with parts of this process than others, so I just want to get it down clearly.

What I've done:
-- Cloned the Python repo into /home/speedracer/cpython/cpython (updated to 2.7)
-- Installed OS packages to support a reasonable build.
-- Built and installed python2.7 into /home/speedracer/cpython/27_bin

Presumably, people would like to be monitoring both PyPy and CPython as they progress over time. This means some kind of auto-runner which updates from the repo, re-runs the timings, and submits them to codespeed. I am unclear on whether there is a "clear winner" for this piece of technology.

List of required infrastructure on speed.python.org:
-- A home for the cpython repo (check)
-- An installed and running codespeed server (pending)
-- A buildbot / automation for keeping up to date with the PyPy and cpython repos (???)
-- Automation to execute a process which must (???)
   1) Run the benchmarks
   2) Submit results to codespeed

I would suggest that the codespeed server be installed into speedracer's home directory. This must all be installable and configurable from Chef (which looks to me like Fabric, i.e. an automation tool for deployment and management of systems). This is not yet accomplished.

We also clearly need some kind of wiki documentation on what is going on, so that contributors (or just newbies like me) can figure out where things are at and what is going on. The bitbucket project is great, but the task titles are currently rather brief if someone isn't already totally up to speed on what is going on.

There appear to me to be two unresolved questions:
  1) What piece of technology should we use for a buildbot / build automation?
  2) What piece of technology should we use for the benchmark runner?

I have no suggestions on (1), it's not a strong point for me. As regards (2), I am ignorant of what others might already be using, except to say the landscape seems unclear and fractured to me. My work, benchmarker.py, is likely to be adaptable to our needs and I am more than happy to support the package so it can be applied here. As I understand it, the GSOC project was about the actual benchmarking functions, not so much about automation and support for managing the results of benchmarking. If an already-working alternative to benchmarker.py exists and makes more sense to use, then that is fine by me. I would still like to help out to learn more about benchmarking.

The main issue with me as a contributor will be time. I have a full plate as a result of Real Life, so I will sometimes go dark for a week or so. However, I'm motivated and interested, and can put in a few hours a week most weeks.

Do I have this right? Is that a reasonable description of the work breakdown? Do we have clear names against tasks so that co-ordination can be done through those people (rather than via the whole list)?

Regards,
-Tennessee

On Thu, Sep 1, 2011 at 6:28 AM, Noah Kantrowitz wrote:
> It's all branches all the way down, so we can start work anywhere and push it to an "official" PSF bin later I think. I'm sure we will want to host a mirror of it on the python.org hg server too, just for discoverability.
>
> --Noah
>
> On Aug 31, 2011, at 1:12 PM, Miquel Torres wrote:
>
>> Oh, cool, so there will be an Opscode hosted account for the PSF,
>> right? Then the Chef repo should be for the PSF. Maybe in a current
>> account somewhere? What do you propose?
>>
>> Miquel
>>
>>
>> 2011/8/31 Noah Kantrowitz :
>>> Opscode has already agreed to donate a Hosted account as long as we keep it under ~20 clients :-) I can hand out the info for it to anyone that wants. As for setting up the Chef repo, just remember we are trying not to manage this system in isolation and that it will be part of a bigger PSF infrastructure management effort.
>>> >>> --Noah >>> >>> On Aug 31, 2011, at 11:34 AM, Miquel Torres wrote: >>> >>>> Hi all, >>>> >>>> though I took up on the task of installing a Codespeed instance >>>> myself, I didn't have time until now. This weekend I will definitely >>>> have ?a *lot* of time to work on this, so count on that task being >>>> done by then. >>>> >>>> The bitbucket issue tracker is a start (though a organization account >>>> would be better) and the splash page is great. So let's get started >>>> organizing things. >>>> >>>> Regarding the deployment strategy, it turns out I use Chef at work, so >>>> I am in full agreement with Noah here (yey!). Actually, I am the >>>> author of LittleChef (which we can use as a tool to execute Chef on >>>> the node). >>>> >>>> So, Configuration Management. I would propose that Noah starts the >>>> repo with the Chef cookbooks (preferably a complete LittleChef >>>> kitchen, but that is not a must :), and gets the main recipes (apache, >>>> django) going, while I create a cookbook for Codespeed. What do you >>>> think? >>>> >>>> The benchmark runner question is still open. We need to clarify that. >>>> Use the pypy runner? Tennessee's work? >>>> >>>> Regarding repositories and issues, we could maybe have a "speed" >>>> organization account (not sure on Bitbucket, you can do that in >>>> Github), where we have a wiki, issues, and runner + config management >>>> repo + other stuff. >>>> >>>> Cheers, >>>> Miquel >>>> >>>> 2011/8/31 Jesse Noller : >>>>> I've put up a splash page for the project this AM: >>>>> >>>>> http://speed.python.org/ >>>>> >>>>> jesse >>>>> _______________________________________________ >>>>> pypy-dev mailing list >>>>> pypy-dev at python.org >>>>> http://mail.python.org/mailman/listinfo/pypy-dev >>>>> >>>> _______________________________________________ >>>> Speed mailing list >>>> Speed at python.org >>>> http://mail.python.org/mailman/listinfo/speed >>> >>> > > > _______________________________________________ > Speed mailing list > Speed at python.org > http://mail.python.org/mailman/listinfo/speed > > -- -------------------------------------------------- Tennessee Leeuwenburg http://myownhat.blogspot.com/ "Don't believe everything you think" From william.leslie.ttg at gmail.com Thu Sep 1 09:23:52 2011 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Thu, 1 Sep 2011 17:23:52 +1000 Subject: [pypy-dev] djangobench performance In-Reply-To: References: Message-ID: On 1 September 2011 15:29, Fenn Bailey wrote: > The results were a little surprising (and not in a good way): > http://pastie.org/2463906 ... > Any ideas as to why the performance drop-off would be so significant? N = 200 means most of the benchmarks probably won't even JIT, so that might be a start. The threshold in the released pypy is N = 1000. But even without JIT, 20+ fold slowdowns are very interesting: 10n_render, query_all and query_raw. I wonder if anyone has benchmarked sqlite under pypy - that would have the most dramatic effect here. -- William Leslie From fenn.bailey at gmail.com Thu Sep 1 09:33:03 2011 From: fenn.bailey at gmail.com (Fenn Bailey) Date: Thu, 1 Sep 2011 17:33:03 +1000 Subject: [pypy-dev] djangobench performance In-Reply-To: References: Message-ID: Hi William, > N = 200 means most of the benchmarks probably won't even JIT, so that > might be a start. The threshold in the released pypy is N = 1000. > > Yeah, I suspected that might be the case, and did a few test individual benchmarks with a much higher N (ie: >20,000). 
It definitely improved things comparatively quite a lot, but ultimately still resulted in a 3-4x slowdown over CPython.

Fenn.

From anto.cuni at gmail.com Thu Sep 1 09:50:14 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Thu, 01 Sep 2011 09:50:14 +0200
Subject: [pypy-dev] [Speed] Moving the project forward
In-Reply-To: References: Message-ID: <4E5F3936.8050002@gmail.com>

On 31/08/11 22:11, Brett Cannon wrote:
> The PyPy folk could answer this as they have their repo on bitbucket
> already. Else I guess we can just create a standalone account that
> represents the official speed.python.org account.

for pypy we do exactly that. There is a bitbucket user named "pypy" whose credentials are shared among all the core devs.

ciao,
Anto

From anto.cuni at gmail.com Thu Sep 1 09:58:14 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Thu, 01 Sep 2011 09:58:14 +0200
Subject: [pypy-dev] Here's a fun one...
In-Reply-To: <6C7ABA8B4E309440B857D74348836F2E28F432C6@TK5EX14MBXC292.redmond.corp.microsoft.com>
References: <6C7ABA8B4E309440B857D74348836F2E28F432C6@TK5EX14MBXC292.redmond.corp.microsoft.com>
Message-ID: <4E5F3B16.3080605@gmail.com>

On 01/09/11 05:28, Dino Viehland wrote:
> This came up in an internal discussion; I thought it was fun, especially given
> that we all behave differently:
>
> Paste this into the REPL:
[cut]

it seems to work fine with pypy 1.6. Note that str() is called twice for each line, so we get 1, 3, 5, 7..., but this happens only on cpython.

>>>> class PS1(object):
....     def __init__(self):
....         self.count = 0
....     def __str__(self):
....         self.count += 1
....         return "%d >>>" % self.count
....
>>>> import sys
>>>> sys.ps1 = PS1()
1 >>>
3 >>>
5 >>>
7 >>>
9 >>>

ciao,
Anto

From ncoghlan at gmail.com Thu Sep 1 10:10:27 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 1 Sep 2011 18:10:27 +1000
Subject: [pypy-dev] [Speed] Moving the project forward
In-Reply-To: <4E5F3936.8050002@gmail.com>
References: <4E5F3936.8050002@gmail.com>
Message-ID: 

On Thu, Sep 1, 2011 at 5:50 PM, Antonio Cuni wrote:
> On 31/08/11 22:11, Brett Cannon wrote:
>>
>> The PyPy folk could answer this as they have their repo on bitbucket
>> already. Else I guess we can just create a standalone account that
>> represents the official speed.python.org account.
>
> for pypy we do exactly that. There is a bitbucket user named "pypy" whose
> credentials are shared among all the core devs.

The security auditing part of my brain has its fingers in its ears and is singing "La La La" rather loudly :)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From anto.cuni at gmail.com Thu Sep 1 10:37:26 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Thu, 01 Sep 2011 10:37:26 +0200
Subject: [pypy-dev] djangobench performance
In-Reply-To: References: Message-ID: <4E5F4446.6060801@gmail.com>

On 01/09/11 09:23, William ML Leslie wrote:
> I wonder if anyone has benchmarked sqlite under pypy - that would have
> the most dramatic effect here.

I'm doing it right now. It seems that for some reason the JIT does not remove the ctypes overhead of the sqlite calls, thus they are much slower than they should be.

ciao,
Anto

From arigo at tunes.org Thu Sep 1 10:57:13 2011
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 1 Sep 2011 10:57:13 +0200
Subject: [pypy-dev] Here's a fun one...
In-Reply-To: <4E5F3B16.3080605@gmail.com> References: <6C7ABA8B4E309440B857D74348836F2E28F432C6@TK5EX14MBXC292.redmond.corp.microsoft.com> <4E5F3B16.3080605@gmail.com> Message-ID: Hi, On Thu, Sep 1, 2011 at 9:58 AM, Antonio Cuni wrote: > it seems to work fine with pypy 1.6. ?Note that str() is called twice for > each line, so we get 1, 3, 5, 7..., but this happens only on cpython. ...but this happens only on PyPy, you mean. It works as expected on CPython 2.7. Is it a bug? :-) A bient?t, Armin. From anto.cuni at gmail.com Thu Sep 1 11:02:14 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 01 Sep 2011 11:02:14 +0200 Subject: [pypy-dev] Here's a fun one... In-Reply-To: References: <6C7ABA8B4E309440B857D74348836F2E28F432C6@TK5EX14MBXC292.redmond.corp.microsoft.com> <4E5F3B16.3080605@gmail.com> Message-ID: <4E5F4A16.5000504@gmail.com> On 01/09/11 10:57, Armin Rigo wrote: > Hi, > > On Thu, Sep 1, 2011 at 9:58 AM, Antonio Cuni wrote: >> it seems to work fine with pypy 1.6. Note that str() is called twice for >> each line, so we get 1, 3, 5, 7..., but this happens only on cpython. > > ...but this happens only on PyPy, you mean. It works as expected on > CPython 2.7. Is it a bug? :-) no, I wanted to write "it happens *also* on cpython". Note that I use pyrepl both on pypy and cpython, so it's probably pyrepl's "fault" (assuming it's a fault, I'm happy to ignore it :-)) From arigo at tunes.org Thu Sep 1 11:12:14 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 1 Sep 2011 11:12:14 +0200 Subject: [pypy-dev] Here's a fun one... In-Reply-To: References: <6C7ABA8B4E309440B857D74348836F2E28F432C6@TK5EX14MBXC292.redmond.corp.microsoft.com> <4E5F3B16.3080605@gmail.com> Message-ID: Hi, On Thu, Sep 1, 2011 at 10:57 AM, Armin Rigo wrote: > It works as expected on CPython 2.7. ?Is it a bug? :-) Fixed in 414bb2d98b0c. Armin From fijall at gmail.com Thu Sep 1 11:27:59 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 1 Sep 2011 11:27:59 +0200 Subject: [pypy-dev] Question about "Completing 'os' module" (issue 833) In-Reply-To: References: Message-ID: On Sat, Aug 27, 2011 at 9:00 AM, Mitchell Hashimoto wrote: > Sorry to ping the list again, but I've addressed the issues raised in the > issue to complete the "os.getlogin" feature. Is there any way I can get > another review to get this merged please? Seems to be merged, thanks! > Best, > Mitchell > On Mon, Aug 22, 2011 at 4:53 AM, Mitchell Hashimoto > wrote: >> >> Amaury, >> I've implemented one method (getlogin) and have created an issue + patch >> for it: >> https://bugs.pypy.org/issue841 >> Best, >> Mitchell >> >> On Sun, Aug 21, 2011 at 2:20 PM, Amaury Forgeot d'Arc >> wrote: >>> >>> Hello, >>> >>> 2011/8/21 Mitchell Hashimoto >>>> >>>> I noticed the 'os' module is incomplete, and I'd like to help complete >>>> this. >>> >>> You are very welcome! >>> >>>> >>>> CPython does this by simply having these methods available on "posix" >>>> "nt" "os2" etc. and the "os" module imports those. It appears that PyPy does >>>> the same thing. I was able to successfully add 'getlogin' as practice, but I >>>> wanted to post here before going further. Some questions below: >>>> 1.) Should I mimic CPython and add the functionality to the OS-specific >>>> modules? >>> >>> Yes we should mimic CPython: fortunately these modules have different >>> names but share the same source file. >>> With CPython it's Modules/posixmodule.c, with PyPy it's in >>> pypy/module/posix. >>> >>>> >>>> 2.) I don't have a Windows computer on hand. 
What is the standard >>>> practice for implementing some stdlib for one OS but not the other? Would >>>> PyPy accept this temporarily? >>> >>> Yes, no problem. In this case, I think it's best to let the test fail on >>> Windows so that someone may notice and fix it. >>> >>>> >>>> 3.) There are many missing methods, to simplify implementation time and >>>> the patch, would it be okay to submit a patch for each stdlib method, so >>>> that this was built up over time? >>> >>> Yes, smaller patches are easier to read and merge. I'd be happy to review >>> and commit them. >>> -- >>> Amaury Forgeot d'Arc >> > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > From pjodrr at gmail.com Thu Sep 1 13:52:26 2011 From: pjodrr at gmail.com (Peter Kruse) Date: Thu, 1 Sep 2011 13:52:26 +0200 Subject: [pypy-dev] Solaris Support? Message-ID: Hello, I'd like to compile PyPy under Solaris/Sparc. But it looks that this is not supported. Right now when I run "python2.7 translate.py -O2" as suggested on http://pypy.org/download.html I get an exception: [version:WARNING] Errors getting Mercurial information: Not running from a Mercurial repository! [platform:msg] Setting platform to 'host' cc=None [translation:info] Translating target as defined by targetpypystandalone Traceback (most recent call last): File "translate.py", line 324, in ... File "/net/sdevc01/export/data/soft/opensource/source/pypy/pypy-pypy-release-1.6/pypy/module/sys/version.py", line 17, in elif platform.cc.startswith('gcc'): AttributeError: 'NoneType' object has no attribute 'startswith' As I have python2.7, gcc 4.6.1 and libffi it seems that the requirements for PyPy should be fulfilled, so what do I have to do to compile it under Solaris? Thanks, Peter ps: I'm not subscribed to this list, it would be very kind if you cc me on replies. From gertjanvanzwieten at gmail.com Thu Sep 1 13:59:14 2011 From: gertjanvanzwieten at gmail.com (Gertjan van Zwieten) Date: Thu, 1 Sep 2011 13:59:14 +0200 Subject: [pypy-dev] your thoughts on low level optimizations In-Reply-To: References: Message-ID: Hi Wim, Thanks for the quick reply, this is very helpful information and in some ways surprising. Let me just try to confirm that I got all this correctly so that I am sure to draw the right conclusions. First of all, to clarify, I understand that the overhead of calling into C is not such a big deal if indeed the time spent in that call is orders of magnitude longer. For instance, components like iterative linear solvers would be of that kind, where the majority of work is done inside a single call. But if I would need to implement a more numpy-array-like data type then I suppose the overhead of the connected data manipulation calls is an issue of concern. I was not actually aware that ctypes is considered that efficient. Does this apply to CPython as well? I always assumed that going by the Python API would be the most direct, least overhead interface possible. If ctypes provides an equally efficient interface for both CPython and PyPy then that is certainly something I would consider using. By the way, you mention ctypes *or* libffi as if they are two distinct options, but I believe ctypes was built on top of libffi. Is it then possible, and is there reason, to use libffi directly? 
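For concreteness, the kind of ctypes binding I have in mind is along these lines (a minimal sketch; libfoo and foo_norm are made-up names for illustration, not an actual library):

import ctypes

# load a hypothetical shared library
lib = ctypes.CDLL("libfoo.so")

# declare the C signature: double foo_norm(const double *v, size_t n)
lib.foo_norm.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
lib.foo_norm.restype = ctypes.c_double

def norm(values):
    # copy the Python sequence into a C double array and call into C
    buf = (ctypes.c_double * len(values))(*values)
    return lib.foo_norm(buf, len(values))

Is that the sort of call that the JIT makes cheap?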
Perhaps too generic, but just to fire away all my questions for anyone to comment on: what would be the recommended way to raise exceptions going through ctypes; special return values, or is there maybe a function call that can be intercepted? It's one of those things where I see advantages in using the Python API (even though that is also based on simply returning NULL, but then with the additional option of setting an exception state; an intercepted function call would be *much* nicer, actually). #perfectworld Back on topic, it surprised me, too, that RPython components are not modular. Do I understand correctly that this means that, after making modifications to the component, the entire PyPy interpreter needs to be rebuilt? Considering the time involved that sounds like a big drawback, although of course during development the same module could be left untranslated. Are there plans to allow for independently translated modules? Or is this somehow fundamentally impossible. I must also admit that it is still not entirely clear to me what the precise differences are between translated and non-translated code, as in both situations the JIT compiler appears to be active. (Right? After all RPython is still dynamically typed). Is there a good text that explains these PyPy fundamentals a little bit more entry-level than the RPython Toolchain [1] reference? Lastly, you mention SWIG of equivalent (Boost?) as alternative options. But don't these tools generate Python API code, and thus (in PyPy) rely on cpyext? This 2008 sprint discussion [2] loosely suggests that there will be no direct PyPy-ish implementation of these tools, and instead argues for reflex, leading to this week's post. So I think if anything I should consider that. Again, if I demonstrate any misconceptions please do correct me. I am not necessarily bound to existing code so I could decide to make the switch from C to C++, but I would do so only if it offers clear advantages. If reflex offers a one-to-one translation of C++ classes to Python then that certainly sounds useful, but unless it is something that I could not equally achieve by manual ctypes annotations I think I would prefer to keep things under manual control, and keep the C library entirely independent. My feelings are that that approach is the most future-proof, which is my primary concern before efficiency. Overall, not many direct questions, but I hope to be corrected if any of my assertions are false, and of course I would still like to learn additional arguments for or against possible approaches for low level optimization. Thanks Gertjan [1] http://codespeak.net/pypy/dist/pypy/doc/translation.html [2] http://morepypy.blogspot.com/2008/10/sprint-discussions-c-library-bindings.html PS @Wim, that's interesting. People tend to be a bit confused when I tell them I went from earthquake research to printer ink. Now I can explain that printer ink is just one step away from high energy particle physics. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Thu Sep 1 14:44:05 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 1 Sep 2011 14:44:05 +0200 Subject: [pypy-dev] Stacklets In-Reply-To: References: <4E451B04.6050104@gmail.com> Message-ID: Hi, The "stacklet" branch has been merged now. The "_continuation" module is available on all PyPys with or without the JIT on x86 and x86-64 since a few days, and it will of course be part of release 1.6.1. There is an almost-complete wrapper "greenlet.py". 
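A minimal session with it looks like this (just a sketch, using only the basic greenlet API):

from greenlet import greenlet

def f(x):
    print "in the child:", x
    main.switch(x + 1)      # switch back to the main greenlet

main = greenlet.getcurrent()
g = greenlet(f)
# f() runs, prints "in the child: 41", then switches back here with 42
print "back in main:", g.switch(41)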
For documentation and current limitations see here: http://doc.pypy.org/en/latest/stackless.html . A bient?t, Armin. From fijall at gmail.com Thu Sep 1 15:01:45 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 1 Sep 2011 15:01:45 +0200 Subject: [pypy-dev] Suggestions for small projects for getting started hacking on pypy? In-Reply-To: <4E596FC3.7030706@pianocktail.org> References: <4E571744.70406@pianocktail.org> <4E596FC3.7030706@pianocktail.org> Message-ID: On Sun, Aug 28, 2011 at 12:29 AM, Christian Hudon wrote: > Le Sat Aug 27 09:10:12 2011, Samuel Ytterbrink a ?crit : >> >> What part? The Interpreter or the tool chain? or usage of the Interpreter? > > Hmm. A bit of the first two, I guess. I'm not clear how "usage of the > interpreter" would be any different from using CPython (except for things > executing faster). Actually, I liked the suggestion from Wim (off list?) to > start by improving bits of the cppyy C++ bridge. I'd like to take a look at > moving this along a bit further, unless other people chime in saying that > it's not something relevant to work on. > > The only information I found about cppyy is a blog post from last Summer > (CERN Sprint Report - Wrapping C++ Libraries). Is the information in that > blog post still relevant? Which branch to I have to get? (I didn't find > cppyy on the trunk). I assume if I have more specific questions about a > specific piece of code while I'm coding, the best way to proceed is to ask > on the IRC channel? > > Thanks, > > ?Christian > There is an incomplete list here: http://doc.pypy.org/en/latest/project-ideas.html cheers, fijal From fijall at gmail.com Thu Sep 1 15:04:36 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 1 Sep 2011 15:04:36 +0200 Subject: [pypy-dev] Great results for MyHDL simulations In-Reply-To: <4E5A8271.7030002@jandecaluwe.com> References: <4DEE3D94.1020408@jandecaluwe.com> <4DEE4267.3020304@gmail.com> <4DEE9AE7.6020005@jandecaluwe.com> <4E5A8271.7030002@jandecaluwe.com> Message-ID: On Sun, Aug 28, 2011 at 8:01 PM, Jan Decaluwe wrote: > On 06/07/2011 11:40 PM, Jan Decaluwe wrote: >> >> On 06/07/2011 05:23 PM, Antonio Cuni wrote: >>> >>> On 07/06/11 17:02, Jan Decaluwe wrote: >>>> >>>> I am seeing great improvements for MyHDL simulations >>>> by using PyPy, and I have written a page about it: >>>> >>>> http://www.myhdl.org/doku.php/performance >>> >>> Hello Jan, >>> >>> this is really nice to hear :-) >>> >>> Did you try to run the benchmarks with a more recent version of PyPy? >>> According to this chart, we are up to 30% faster than 1.5 on some >>> benchmarks, >>> so you might get even better results: >> >> Not yet, I like to leave some further excitement for later :-) >> >> ?From now on, I plan to track the evolution of my benchmarks >> with official PyPy releases. > > Getting better all the time :-) > > With PyPy 1.6, I see additional significant improvements (probably > also thanks to the generator-specific speedup). Speedup compared > to cPython is now 8-20x (was 6-12x). > > http://www.myhdl.org/doku.php/performance > > Thanks to all! Wow great! 
From fijall at gmail.com Thu Sep 1 15:28:34 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 1 Sep 2011 15:28:34 +0200
Subject: [pypy-dev] Failing App-Level-Test test_posix2.py
In-Reply-To: <189483600.102.1314777253718.JavaMail.fmail@mwmweb068>
References: <189483600.102.1314777253718.JavaMail.fmail@mwmweb068>
Message-ID: 

On Wed, Aug 31, 2011 at 9:54 AM, Juergen Boemmels wrote:
> Hi,
>
> For some weeks now, one app-level test has been failing consistently on the
> buildbot:
> module/posix/test/test_posix2.py
> with the failure:
>         if sys.platform.startswith('linux'):
> >           assert hasattr(st, 'st_rdev')
> E           assert hasattr(posix.stat_result(st_mode=33152, st_ino=1588275L, st_dev=64256L, st_nlink=1, s...integer_atime=1314759734, _integer_mtime=1314759734, _integer_ctime=1314759734), 'st_rdev')
> module/posix/test/test_posix2.py:135: AssertionError
> ================ 1 failed, 77 passed, 6 skipped in 4.80 seconds ================
>
> So it seems that st_rdev is not available.
>
> I tracked this problem down to a chicken-and-egg problem. The relevant
> portion is in pypy/rpython/module/ll_os_stat.py:
>
> # for now, check the host Python to know which st_xxx fields exist
> STAT_FIELDS = [(_name, _TYPE) for (_name, _TYPE) in ALL_STAT_FIELDS
>                               if hasattr(os.stat_result, _name)]
>
> As pypy is meanwhile built with pypy, relying on the host python
> is not a good idea. A pypy without st_rdev will again build only
> a new pypy without st_rdev, even if the platform supports st_rdev.
>
> As an experiment I bootstrapped pypy again with python and the
> error disappeared:
>
> # python translate.py -O2 targetpypystandalone.py
> # ./pypy-c ../../../pytest.py ../../module/posix/test/test_posix2.py -A
> ========================= test session starts ==========================
> platform linux2 -- Python 2.7.1[pypy-1.6.0] -- pytest-2.1.0.dev4
> pytest-2.1.0.dev4 from /home/boemmels/src/pypy/pytest.pyc
> collected 80 items
>
> ../../module/posix/test/test_posix2.py
> ...........................................................s............
> ........
>
> ================= 79 passed, 1 skipped in 5.07 seconds =================
>
> This bootstrapped pypy-c is also capable of rebuilding a pypy with
> st_rdev enabled. But this is not a clean solution.
>
> I think the way to go is to no longer rely on the host Python, but to
> use the appropriate configure magic. Something like
> #ifdef HAVE_STRUCT_STAT_ST_RDEV
> but in a pythonic way. Unfortunately I'm not familiar with pypy's
> configuration system, so I got stuck here.
>
> Can anybody tell me how to test for available struct members, like
> AC_CHECK_MEMBERS([struct stat.st_rdev]) in autoconf?

You should use rffi_platform for that. You can have a look at current usages. This is not only an st_rdev problem: all kinds of os module functions come from the host python. This should be fixed at some point....

Cheers,
fijal

From arigo at tunes.org Thu Sep 1 18:33:41 2011
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 1 Sep 2011 18:33:41 +0200
Subject: [pypy-dev] Solaris Support?
In-Reply-To: References: Message-ID: 

Hi Peter,

On Thu, Sep 1, 2011 at 1:52 PM, Peter Kruse wrote:
>     elif platform.cc.startswith('gcc'):
> AttributeError: 'NoneType' object has no attribute 'startswith'

Ah, not-explicitly-supported platforms end up as a platform where cc is None. The line above needs to be fixed to handle this case. Done in f1f9f3782931; can you pull and update and try again? Thanks!
Note that I'm not 100% sure that a not-explicitly-supported platform can work. If it still doesn't work, you may have to edit solaris support to pypy/translator/platform/. Armin From arigo at tunes.org Thu Sep 1 19:03:39 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 1 Sep 2011 19:03:39 +0200 Subject: [pypy-dev] your thoughts on low level optimizations In-Reply-To: References: Message-ID: Hi Gertjan, On Thu, Sep 1, 2011 at 1:59 PM, Gertjan van Zwieten wrote: > Thanks for the quick reply, this is very helpful information and in some > ways surprising. Let me just try to confirm that I got all this correctly so > that I am sure to draw the right conclusions. The meta-answer first: the problem is that it's still not completely clear to us which approach is the best. They all have benefits and drawbacks so far... > I was not actually aware that ctypes is considered that efficient. Does this > apply to CPython as well? No, that's the first messy part: ctypes code is very efficient on top of PyPy, at least after the JIT has kicked in. It is not fast on top of CPython. > I always assumed that going by the Python API > would be the most direct, least overhead interface possible. By this you probably mean the "CPython API"... The difference is important. The C-level API that you're talking about is really CPython's. PyPy can emulate it with the cpyext module, but this emulation is slow. Moreover, if you want to compare it with ctypes, the PyPy JIT gets ctypes *faster* than the CPython C API can ever be on top of CPython, because the latter needs to explicitly wrap and unwrap the Python objects. > Perhaps too generic, but just to fire away all my questions for anyone to > comment on: what would be the recommended way to raise exceptions going > through ctypes; special return values, or is there maybe a function call > that can be intercepted? The ctypes way to do things is to design the C library with a "normal" C API, usable from other C programs. From that point of view the correct thing is to return error codes, and to check them in pure Python, after the call to the ctypes function. > Back on topic, it surprised me, too, that RPython components are not > modular. Do I understand correctly that this means that, after making > modifications to the component, the entire PyPy interpreter needs to be > rebuilt? Yes. You should only build RPython modules if you have a specific reason to. One example is the numpy module: we want to build it in a special way so that the JIT can look inside and perform delayed computations "in bulk". > Considering the time involved that sounds like a big drawback This is certainly a drawback, but it's not that big as it first seem. The RPython module must simply be well-tested as normal Python code first. Once it is complete and tested, then we translate it. It usually takes a few attempts to fix the typing issues, but once it's done, it usually works as expected (provided the tests were good in the first place). > Are there plans to allow for independently translated modules? > Or is this somehow fundamentally impossible. This is a question that comes back regularly. We don't have any plan, but there have been attempts. They have been mostly unsuccessful, however. From our point of view we can survive with the drawback, as it is actually not that big, and as we don't generally recommend to write RPython modules for everything. 
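(To make the earlier point about error codes concrete, the wrapping typically looks like this --- a sketch with a made-up libfoo function that returns 0 on success:)

import ctypes

lib = ctypes.CDLL("libfoo.so")            # hypothetical library, for illustration
lib.foo_process.argtypes = [ctypes.c_int]
lib.foo_process.restype = ctypes.c_int    # C convention: 0 on success

class FooError(Exception):
    pass

def process(n):
    err = lib.foo_process(n)
    if err != 0:
        # check the error code in pure Python, after the ctypes call,
        # and turn it into a real Python exception
        raise FooError("foo_process() failed with error code %d" % err)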
> I must also admit that it is still not entirely clear to me what the precise > differences are between translated and non-translated code, as in both > situations the JIT compiler appears to be active. (Right? After all RPython > is still dynamically typed). No, precisely, RPython is not dynamically typed. It is also valid Python, and as such, it can be run non-translated; but at the same time, if it's valid RPython, then it can be translated together with the rest of the interpreter, and we get a statically-typed version of this RPython code turned into C code. This translation process works by assuming (and to a large extent, checking) that the RPython code is statically typed, or at least "statically typeable"... > Is there a good text that explains these PyPy fundamentals a little bit more > entry-level than the RPython Toolchain [1] reference? The architecture overview is oldish but still up-to-date: http://doc.pypy.org/en/latest/architecture.html > Lastly, you mention SWIG of equivalent (Boost?) as alternative options. These are not really supported so far. It may be that some SWIG modules turn into C code that can be loaded by cpyext, but that doesn't work for Cython, for example. The case of Cython is instructive: Romain Guillebert is working right now on a way to take a Cython module and emit, not C code for the CPython API, but Python code using ctypes. This would give a way to "compile" any Cython module to plain Python that works both of PyPy and CPython (but which is only fast on PyPy). We haven't thought so far very deeply about SWIG. Reflex is another solution that is likely to work very nicely if you can rewrite your C module as a C++ module and use the Reflex-provided Python API extracted from the C++ module. Again, it's unclear if it's the "best" path, but it's definitely one path. > My feelings are that that approach is the most future-proof, which is my > primary concern before efficiency. I would say that in this case, keeping your module in C with a C-friendly API is the most future-proof solution I can think of. That means so far --- with today's tools --- that you need to wrap it twice, as a CPython C extension module and as a pure Python ctypes, in order to get good performance on both CPython and PyPy. We hope to be able to provide better answers in the future, like "wrap it with Cython and generate the two interfaces for CPython and PyPy automatically". A bient?t, Armin. From dinov at microsoft.com Thu Sep 1 19:07:19 2011 From: dinov at microsoft.com (Dino Viehland) Date: Thu, 1 Sep 2011 17:07:19 +0000 Subject: [pypy-dev] Here's a fun one... In-Reply-To: <4E5F3B16.3080605@gmail.com> References: <6C7ABA8B4E309440B857D74348836F2E28F432C6@TK5EX14MBXC292.redmond.corp.microsoft.com> <4E5F3B16.3080605@gmail.com> Message-ID: <6C7ABA8B4E309440B857D74348836F2E28F43E98@TK5EX14MBXC292.redmond.corp.microsoft.com> Antonio wrote: > it seems to work fine with pypy 1.6. Note that str() is called twice for each > line, so we get 1, 3, 5, 7..., but this happens only on cpython. Ahh yeah, I think I had some weird 1.5 build on my laptop where I tried it. Guess it's time to upgrade. From arigo at tunes.org Thu Sep 1 19:21:10 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 1 Sep 2011 19:21:10 +0200 Subject: [pypy-dev] Here's a fun one... 
In-Reply-To: <6C7ABA8B4E309440B857D74348836F2E28F43E98@TK5EX14MBXC292.redmond.corp.microsoft.com> References: <6C7ABA8B4E309440B857D74348836F2E28F432C6@TK5EX14MBXC292.redmond.corp.microsoft.com> <4E5F3B16.3080605@gmail.com> <6C7ABA8B4E309440B857D74348836F2E28F43E98@TK5EX14MBXC292.redmond.corp.microsoft.com> Message-ID: Hi Dino, On Thu, Sep 1, 2011 at 7:07 PM, Dino Viehland wrote: > Ahh yeah, I think I had some weird 1.5 build on my laptop where I tried it. > Guess it's time to upgrade. Same in 1.6, but I fixed it in "default" today. Armin From fijall at gmail.com Thu Sep 1 19:26:44 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 1 Sep 2011 19:26:44 +0200 Subject: [pypy-dev] your thoughts on low level optimizations In-Reply-To: References: Message-ID: Hi Gert Jan. Let me clarify what I got from your question - does it make sense to write performance sensitive code in C, or would PyPy optimize loops well enough? If you want to use only PyPy, you can quite easily use numpy arrays to get a C-like performance. indeed, hakan ardo was able to run his video processing routines (using array.array instead of numpy.array, but that's not relevant) at almost C speed [1] and we'll get there at some point in not so distant future. Also numpy vector operations are already faster using PyPy than cpython (by stacking multiple operations in one go) and we're planning to implement SSE in some not-so-distant future. This is however, if you plan to use PyPy. Those kind of solutions don't work on CPython at all. [1] http://morepypy.blogspot.com/2011/07/realtime-image-processing-in-python.html I hope that answers your questions. Cheers, fijal From yselivanov.ml at gmail.com Thu Sep 1 20:26:35 2011 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 1 Sep 2011 14:26:35 -0400 Subject: [pypy-dev] your thoughts on low level optimizations In-Reply-To: References: Message-ID: <5A724910-0F83-40F4-96AD-A5305563792D@gmail.com> On 2011-09-01, at 1:03 PM, Armin Rigo wrote: >> Back on topic, it surprised me, too, that RPython components are not >> modular. Do I understand correctly that this means that, after making >> modifications to the component, the entire PyPy interpreter needs to be >> rebuilt? > > Yes. You should only build RPython modules if you have a specific > reason to. One example is the numpy module: we want to build it in a > special way so that the JIT can look inside and perform delayed > computations "in bulk". Will it be possible at some point to write modules for pypy in RPython without the need to rebuild the entire interpreter? This way, for instance, we could write an import hook to compile *.rpy files on demand to simplify distribution. -Yury From tobami at googlemail.com Thu Sep 1 20:44:27 2011 From: tobami at googlemail.com (Miquel Torres) Date: Thu, 1 Sep 2011 20:44:27 +0200 Subject: [pypy-dev] [Speed] Moving the project forward In-Reply-To: References: <4E5F3936.8050002@gmail.com> Message-ID: You can also do that in Github, which I prefer. However, since CPython and PyPy use mercurial, the general preference for Bitbucket is understandable. 2011/9/1 Brett Cannon : > On Thu, Sep 1, 2011 at 01:10, Nick Coghlan wrote: >> On Thu, Sep 1, 2011 at 5:50 PM, Antonio Cuni wrote: >>> On 31/08/11 22:11, Brett Cannon wrote: >>>> >>>> The PyPy folk could answer this as they have their repo on bitbucket >>>> already. Else I guess we can just create a standalone account that >>>> represents the official speed.python.org account. >>> >>> for pypy we do exactly that. 
There is a bitbucket user named "pypy" whose >>> credentials are shared among all the core devs. >> >> The security auditing part of my brain has its fingers in its ears and >> is singing "La La La" rather loudly :) > > What about Google Code? Projects there can have multiple owners and > they support hg, have a tracker, and a wiki. > > >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia >> > _______________________________________________ > Speed mailing list > Speed at python.org > http://mail.python.org/mailman/listinfo/speed > From wlavrijsen at lbl.gov Thu Sep 1 22:26:54 2011 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Thu, 1 Sep 2011 13:26:54 -0700 (PDT) Subject: [pypy-dev] your thoughts on low level optimizations In-Reply-To: References: Message-ID: Hi, On Thu, 1 Sep 2011, Armin Rigo wrote: > Reflex is another solution that is likely to work very nicely if you > can rewrite your C module as a C++ module and use the Reflex-provided > Python API extracted from the C++ module. for most practical purposes, the rewriting of C -> C++ for wrapping purposes with Reflex would be a simple matter of: $ cat mycppheader.h extern "C" { #include "mycheader.h" } But using Reflex for C is overkill, given that no reflection information is absolutely needed. What can also be done, is to generate the reflection info as part of the build process, and use it to generate annotations for ctypes. Then put those in a pickle file and ship that. On Thu, 1 Sep 2011, Gertjan van Zwieten wrote: > By the way, you mention ctypes *or* libffi as if they are two distinct > options, but I believe ctypes was built on top of libffi. Yes, but what I meant in the same sentence was the pair of Python+ctypes and the pair RPython+libffi. Both are efficient as Armin already explained because once the JIT is warmed up, no wrapping/unwrapping is needed anymore. > Lastly, you mention SWIG of equivalent (Boost?) as alternative options. I mentioned those on the CPython side as reasons why I've never chosen to make Reflex-based (or CINT-based, rather) bindings available as a standalone application. They take the same amount of work if reflection information is not generated yet (in our applications, the reflection info is already there for the I/O, so the end-user does not need to deal with that as they would if the choice had fallen on SWIG). I think a part of the discussion that is missing, is who the target is of the various tools and who ends up using the product: if I'm an end-user, installing binary Python extension modules from the package manager that comes with my OS, then cpyext is probably my best friend. But if I'm a developer of an extension module, like you are, I would not rely on it, and instead provide a solution that works best on both, and that could run on all Pythons from using ctypes to writing custom code. > This 2008 sprint discussion [2] loosely suggests that there will be no > direct PyPy-ish implementation of these tools, and instead argues for > reflex, leading to this week's post. There's a 2010 post in between, when work was started: http://morepypy.blogspot.com/2010/07/cern-sprint-report-wrapping-c-libraries.html Work is progressing as time allows and there are some nice results, but it's not production quality yet. Getting there, though, as the list of available features shows. However, everytime I throw it at a large class library (large meaning thousands of classes), there's always something new to tackle so far. 
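To give a flavour, usage on the PyPy side currently looks along these lines (class and library names are made up for this sketch, and the details may still change):

import cppyy

# load the Reflex reflection info generated from the C++ headers
cppyy.load_reflection_info("libMyClassDict.so")

# C++ classes then appear, one-to-one, under cppyy.gbl
obj = cppyy.gbl.MyClass(42)
print obj.GetValue()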
> If reflex offers a one-to-one translation of C++ classes to Python then that > certainly sounds useful, but unless it is something that I could not equally > achieve by manual ctypes annotations That depends on your C++ classes. E.g. for calculations of offsets between a derived class and virtual base classes, some form of reflection information is absolutely needed. Best regards, Wim > PS @Wim, that's interesting. People tend to be a bit confused when I tell > them I went from earthquake research to printer ink. Now I can explain that > printer ink is just one step away from high energy particle physics. Ah. :) Actually, Oce was a detour. A fun one where I learned a lot to be sure, but I did start out in HEP and astrophysics originally. -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From brett at python.org Thu Sep 1 20:10:21 2011 From: brett at python.org (Brett Cannon) Date: Thu, 1 Sep 2011 11:10:21 -0700 Subject: [pypy-dev] [Speed] Moving the project forward In-Reply-To: References: <4E5F3936.8050002@gmail.com> Message-ID: On Thu, Sep 1, 2011 at 01:10, Nick Coghlan wrote: > On Thu, Sep 1, 2011 at 5:50 PM, Antonio Cuni wrote: >> On 31/08/11 22:11, Brett Cannon wrote: >>> >>> The PyPy folk could answer this as they have their repo on bitbucket >>> already. Else I guess we can just create a standalone account that >>> represents the official speed.python.org account. >> >> for pypy we do exactly that. There is a bitbucket user named "pypy" whose >> credentials are shared among all the core devs. > > The security auditing part of my brain has its fingers in its ears and > is singing "La La La" rather loudly :) What about Google Code? Projects there can have multiple owners and they support hg, have a tracker, and a wiki. > > Cheers, > Nick. > > -- > Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia > From dalke at dalkescientific.com Fri Sep 2 00:24:12 2011 From: dalke at dalkescientific.com (Andrew Dalke) Date: Fri, 2 Sep 2011 00:24:12 +0200 Subject: [pypy-dev] Windows build possibility on Amazon Message-ID: I was talking with Laura and she said there's still no good way to get pypy builds for Windows. I mentioned that Amazon EC2 has Windows available for rent http://aws.amazon.com/windows/ There's prebuilt disk images for Windows with a Django install http://aws.amazon.com/amis/Microsoft-Windows/7235836237155671 so Python would come with the image. You all probably want Large Instance: 7.5 GB of memory (small is 1.7, extra large is 15) It's available for $0.48 per hour . Andrew dalke at dalkescientific.com From alex.gaynor at gmail.com Fri Sep 2 01:17:10 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Thu, 1 Sep 2011 19:17:10 -0400 Subject: [pypy-dev] Windows build possibility on Amazon In-Reply-To: References: Message-ID: I'm pretty sure if we get in contact with the right people that Amazon will give open source groups credit towards buildbots. Alex On Thu, Sep 1, 2011 at 6:24 PM, Andrew Dalke wrote: > I was talking with Laura and she said there's still no good > way to get pypy builds for Windows. > > I mentioned that Amazon EC2 has Windows available for rent > > http://aws.amazon.com/windows/ > > There's prebuilt disk images for Windows with a Django install > > http://aws.amazon.com/amis/Microsoft-Windows/7235836237155671 > > so Python would come with the image. > > You all probably want > > Large Instance: 7.5 GB of memory > > (small is 1.7, extra large is 15) > > It's available for $0.48 per hour . 
> > > > Andrew > dalke at dalkescientific.com > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From caleb.hattingh at gmail.com Fri Sep 2 08:01:02 2011 From: caleb.hattingh at gmail.com (Caleb Hattingh) Date: Fri, 2 Sep 2011 08:01:02 +0200 Subject: [pypy-dev] Windows build possibility on Amazon In-Reply-To: References: Message-ID: On 02 Sep 2011, at 12:24 AM, Andrew Dalke wrote: > I was talking with Laura and she said there's still no good > way to get pypy builds for Windows. I have also been struggling to build pypy on my own windows box, either with msvc2010 or mingw. I would preferably like to be able to use mingw, because it will then be easy to put the complete build environment on a usb drive. Amaury helped with several problems a short bit ago, but I hit another snag and I haven't had the time to analyze the latest error in more detail. I would be glad to help improve the building-on-windows documentation, and I'll submit patches as soon as I can get it working myself. So if there is some way for me to be able to see all the technical discussion around getting a windows buildbox set up on Amazon (or wherever), that would be great. Regards Caleb From arigo at tunes.org Fri Sep 2 08:48:04 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 2 Sep 2011 08:48:04 +0200 Subject: [pypy-dev] your thoughts on low level optimizations In-Reply-To: <5A724910-0F83-40F4-96AD-A5305563792D@gmail.com> References: <5A724910-0F83-40F4-96AD-A5305563792D@gmail.com> Message-ID: Hi Yury, On Thu, Sep 1, 2011 at 8:26 PM, Yury Selivanov wrote: > Will it be possible at some point to write modules for pypy in RPython without the need to rebuild the entire interpreter? I've added an answer to this Frequently Asked Question to https://bitbucket.org/pypy/pypy/raw/default/pypy/doc/faq.rst . Armin From arigo at tunes.org Fri Sep 2 08:57:59 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 2 Sep 2011 08:57:59 +0200 Subject: [pypy-dev] Windows build possibility on Amazon In-Reply-To: References: Message-ID: Hi Andrew, On Fri, Sep 2, 2011 at 12:24 AM, Andrew Dalke wrote: > I was talking with Laura and she said there's still no good > way to get pypy builds for Windows. Why not? I am actually happy with the Windows machine at OpenEnd, bigboard. Or as little unhappy as it gets. I doubt very very very much that any solution on Windows is going to magically resolve the number of messes that regularly show up, like "we don't want to have a dialog box when a process segfaults because there is no-one to go and click OK" or "how do I tell the translation toolchain where to look for zlib.h". What is missing is not really hardware, but people that care about Windows. (The same is true about OS/X, btw; thanks to you for solving the hardware part :-) A bient?t, Armin. From danchr at gmail.com Fri Sep 2 09:37:42 2011 From: danchr at gmail.com (Dan Villiom Podlaski Christiansen) Date: Fri, 2 Sep 2011 09:37:42 +0200 Subject: [pypy-dev] Windows build possibility on Amazon In-Reply-To: References: Message-ID: <8D70C798-9696-40E7-8763-63EBF492DD30@gmail.com> On 2 Sep 2011, at 08:57, Armin Rigo wrote: > The same is true about OS/X, btw? 
Just wondering; do you have anything specific in mind? -- Dan Villiom Podlaski Christiansen danchr at gmail.com From arigo at tunes.org Fri Sep 2 09:51:06 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 2 Sep 2011 09:51:06 +0200 Subject: [pypy-dev] Windows build possibility on Amazon In-Reply-To: <8D70C798-9696-40E7-8763-63EBF492DD30@gmail.com> References: <8D70C798-9696-40E7-8763-63EBF492DD30@gmail.com> Message-ID: Hi, On Fri, Sep 2, 2011 at 9:37 AM, Dan Villiom Podlaski Christiansen wrote: >> The same is true about OS/X, btw? > > Just wondering; do you have anything specific in mind? Not more or less than what I explained: it seems that all current "core" developers are on Linux, so OS/X and Windows are in the situation of not being very well attended to. A bient?t, Armin. From arigo at tunes.org Fri Sep 2 09:54:27 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 2 Sep 2011 09:54:27 +0200 Subject: [pypy-dev] Solaris Support? In-Reply-To: References: Message-ID: Hi Peter, On Fri, Sep 2, 2011 at 9:02 AM, Peter Kruse wrote: > I guess Solaris is an explicitly-not-supported platform then. May well be. But what I had in mind was not to hack at distutils --- because I don't know what this really does --- but instead to write a file called pypy/translator/platform/solaris.py, based on the same model as linux.px/freebsd.py/openbsd.py/etc, and to link it from the __init__.py. A bient?t, Armin. From pjodrr at gmail.com Fri Sep 2 09:02:56 2011 From: pjodrr at gmail.com (Peter Kruse) Date: Fri, 2 Sep 2011 09:02:56 +0200 Subject: [pypy-dev] Solaris Support? In-Reply-To: References: Message-ID: Hi Armin, On Thu, Sep 1, 2011 at 6:33 PM, Armin Rigo wrote: > Ah, not-explicitly-supported platforms end up as a platform where cc > is None. ?The line above needs to be fixed to handle this case. ?Done > in f1f9f3782931; can you pull and update and try again? ?Thanks! ?Note > that I'm not 100% sure that a not-explicitly-supported platform can > work. ?If it still doesn't work, you may have to edit solaris support > to pypy/translator/platform/. ah well, I tried but then it complains a bit later: [platform:execute] gcc -O3 -fomit-frame-pointer -pthreads -c platcheck_0.c -o platcheck_0.o Traceback (most recent call last): File "translate.py", line 324, in main() File "translate.py", line 210, in main targetspec_dic, translateconfig, config, args = parse_options_and_load_target() File "translate.py", line 178, in parse_options_and_load_target targetspec_dic['handle_config'](config, translateconfig) ... pypy.translator.platform.CompilationError: CompilationError(err=""" In file included from /usr/include/stdio.h:22:0, from platcheck_0.c:22: /apps/local/gcc/4.6.1/lib/gcc/sparc-sun-solaris2.10/4.6.1/include-fixed/sys/feature_tests.h:345:2: error: #error "Compiler or options invalid; UNIX 03 and POSIX.1-2001 applications require the use of c99" """) then I added --cflags="$CFLAGS -std=c99" to the call of translate.py but it looks that this option is ignored so I had to modify pypy/translator/platform/distutils_platform.py and add that option, but still no joy: [translation:ERROR] File ".../pypy-pypy-release-1.6/pypy/rlib/clibffi.py", line 267, in [translation:ERROR] assert libc_name is not None, "Cannot find C library, ctypes.util.find_library('c') returned None" [translation:ERROR] AssertionError: Cannot find C library, ctypes.util.find_library('c') returned None and this is because the function find_library() of python 2.7.2 does not work under Solaris when using /usr/ccs/bin/ld and not GNU ld ... 
but even if I hack pypy/rlib/clibffi.py and set libc_name = "/lib/libc.so" I still get an error: [translation:ERROR] File ".../pypy-pypy-release-1.6/pypy/rpython/lltypesystem/ll2ctypes.py", line 1060, in get_ctypes_callable [translation:ERROR] funcname, place)) [translation:ERROR] NotImplementedError: function 'RPyThreadGetIdent' not found in library '/tmp/usession-default-4/shared_cache/externmod' *sigh* I give up ... I guess Solaris is an explicitly-not-supported platform then. Peter From lac at openend.se Fri Sep 2 16:31:41 2011 From: lac at openend.se (Laura Creighton) Date: Fri, 2 Sep 2011 16:31:41 +0200 Subject: [pypy-dev] PyCON Finland has just announced its opening Message-ID: <201109021431.p82EVfeR025824@theraft.openend.se> http://fi.pycon.org/2011/#schedule It's not that far away for some of us, Finland is beautiful, and they still have space for talks and sprints. Laura From arigo at tunes.org Fri Sep 2 17:32:40 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 2 Sep 2011 17:32:40 +0200 Subject: [pypy-dev] Misc news Message-ID: Hi all, Misc news: * the windows buildbot should now upload its nightly results, similarly to the other buildbots. You can find it together with builds for other platforms at http://buildbot.pypy.org/nightly/trunk/ . There seem to be still Windows-specific bugs, though. * stackless support could do with some volunteer work now --- in particular, lib_pypy/stackless.py could be refactored to use directly continulets, in a similar way to lib_pypy/greenlet.py. I am ready to explain a bit more on irc how continulets work, but I don't have enough motivation to do the complete refactoring myself. It's also a nice entry-level task for someone who wants to. A bient?t, Armin. From yselivanov.ml at gmail.com Fri Sep 2 18:36:50 2011 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 2 Sep 2011 12:36:50 -0400 Subject: [pypy-dev] your thoughts on low level optimizations In-Reply-To: References: <5A724910-0F83-40F4-96AD-A5305563792D@gmail.com> Message-ID: <32EBC03E-242A-4B86-8F73-30EC38E5AB23@gmail.com> Thank you, Armin. On 2011-09-02, at 2:48 AM, Armin Rigo wrote: > Hi Yury, > > On Thu, Sep 1, 2011 at 8:26 PM, Yury Selivanov wrote: >> Will it be possible at some point to write modules for pypy in RPython without the need to rebuild the entire interpreter? > > I've added an answer to this Frequently Asked Question to > https://bitbucket.org/pypy/pypy/raw/default/pypy/doc/faq.rst . > > > Armin From andrewfr_ice at yahoo.com Fri Sep 2 18:35:35 2011 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Fri, 2 Sep 2011 09:35:35 -0700 (PDT) Subject: [pypy-dev] Misc news In-Reply-To: References: Message-ID: <1314981335.73967.YahooMailNeo@web120703.mail.ne1.yahoo.com> Hi Armin: I have been doing work with stackless.py lately. And I started to look at the continuelet documentation.? I would be happy to get my hands dirty with with continuelets. Cheers, Andrew ________________________________ From: Armin Rigo To: PyPy Developer Mailing List Sent: Friday, September 2, 2011 11:32 AM Subject: [pypy-dev] Misc news Hi all, Misc news: * the windows buildbot should now upload its nightly results, similarly to the other buildbots.? You can find it together with builds for other platforms at http://buildbot.pypy.org/nightly/trunk/ .? There seem to be still Windows-specific bugs, though. 
A bientôt,

Armin.


From yselivanov.ml at gmail.com  Fri Sep  2 18:36:50 2011
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Fri, 2 Sep 2011 12:36:50 -0400
Subject: [pypy-dev] your thoughts on low level optimizations
In-Reply-To: 
References: <5A724910-0F83-40F4-96AD-A5305563792D@gmail.com>
Message-ID: <32EBC03E-242A-4B86-8F73-30EC38E5AB23@gmail.com>

Thank you, Armin.

On 2011-09-02, at 2:48 AM, Armin Rigo wrote:
> Hi Yury,
>
> On Thu, Sep 1, 2011 at 8:26 PM, Yury Selivanov wrote:
>> Will it be possible at some point to write modules for pypy in RPython without the need to rebuild the entire interpreter?
>
> I've added an answer to this Frequently Asked Question to
> https://bitbucket.org/pypy/pypy/raw/default/pypy/doc/faq.rst .
>
>
> Armin


From andrewfr_ice at yahoo.com  Fri Sep  2 18:35:35 2011
From: andrewfr_ice at yahoo.com (Andrew Francis)
Date: Fri, 2 Sep 2011 09:35:35 -0700 (PDT)
Subject: [pypy-dev] Misc news
In-Reply-To: 
References: 
Message-ID: <1314981335.73967.YahooMailNeo@web120703.mail.ne1.yahoo.com>

Hi Armin:

I have been doing work with stackless.py lately. And I started to look
at the continulet documentation. I would be happy to get my hands dirty
with continulets.

Cheers,
Andrew

________________________________
From: Armin Rigo
To: PyPy Developer Mailing List
Sent: Friday, September 2, 2011 11:32 AM
Subject: [pypy-dev] Misc news

Hi all,

Misc news:

* the windows buildbot should now upload its nightly results, similarly
to the other buildbots.  You can find it together with builds for other
platforms at http://buildbot.pypy.org/nightly/trunk/ .  There seem to be
still Windows-specific bugs, though.

* stackless support could do with some volunteer work now --- in
particular, lib_pypy/stackless.py could be refactored to use directly
continulets, in a similar way to lib_pypy/greenlet.py.  I am ready to
explain a bit more on irc how continulets work, but I don't have enough
motivation to do the complete refactoring myself.  It's also a nice
entry-level task for someone who wants to.

A bientôt,

Armin.
_______________________________________________
pypy-dev mailing list
pypy-dev at python.org
http://mail.python.org/mailman/listinfo/pypy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zooko at zooko.com  Fri Sep  2 23:22:08 2011
From: zooko at zooko.com (Zooko O'Whielacronx)
Date: Fri, 2 Sep 2011 15:22:08 -0600
Subject: [pypy-dev] Status of ARM backend
In-Reply-To: 
References: 
Message-ID: 

I asked on the #linaro channel on IRC (related to the Linaro
organization [1]) and Chris Ball said he could arrange for one of the
OLPC project's [2] special large-RAM dev boards to serve as a
buildslave. What's the next step? Maybe someone should volunteer to
install buildbot on it and Chris should give that person ssh access?
Send your ssh public key to Chris "cjb" Ball, carbon-copied.

Also, I don't know if the OLPC project can afford to do this, but if
that dev board could be dedicated just to the PyPy project then maybe
it could run http://speed.pypy.org measurements so that we can get a
perspective on PyPy performance on ARM. What would that take?

Regards,

Zooko

[1] https://linaro.org
[2] http://laptop.org

From alex.gaynor at gmail.com  Sat Sep  3 03:17:45 2011
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Fri, 2 Sep 2011 21:17:45 -0400
Subject: [pypy-dev] speed and 1.6
Message-ID: 

Can someone with the appropriate permissions add a tag for 1.6 on
speed.pypy.org?

Thanks,
Alex

--
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tobami at googlemail.com  Sat Sep  3 08:51:23 2011
From: tobami at googlemail.com (Miquel Torres)
Date: Sat, 3 Sep 2011 08:51:23 +0200
Subject: [pypy-dev] speed and 1.6
In-Reply-To: 
References: 
Message-ID: 

Which revision is (or "simulates") 1.6?

2011/9/3 Alex Gaynor :
> Can someone with the appropriate permissions add a tag for 1.6 on
> speed.pypy.org?
> Thanks,
> Alex
>
> --
> "I disapprove of what you say, but I will defend to the death your right to
> say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> "The people's good is the highest law." -- Cicero
>
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>
>

From arigo at tunes.org  Sat Sep  3 13:53:26 2011
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 3 Sep 2011 13:53:26 +0200
Subject: [pypy-dev] PyCON Finland has just announced its opening
In-Reply-To: <201109021431.p82EVfeR025824@theraft.openend.se>
References: <201109021431.p82EVfeR025824@theraft.openend.se>
Message-ID: 

Hi Laura,

On Fri, Sep 2, 2011 at 4:31 PM, Laura Creighton wrote:
> It's not that far away for some of us, Finland is beautiful, and they
> still have space for talks and sprints.
I appreciate the "bait" factor in this message :-)  But as importantly,
I think that you should tell this list about planning to organize a
sprint in Göteborg around this time, so that we can check with
interested people if the dates seem to work.

A bientôt,

Armin.

From lac at openend.se  Sat Sep  3 14:31:10 2011
From: lac at openend.se (Laura Creighton)
Date: Sat, 03 Sep 2011 14:31:10 +0200
Subject: [pypy-dev] PyCON Finland has just announced its opening
In-Reply-To: Message from Armin Rigo of "Sat, 03 Sep 2011 13:53:26 +0200."
References: <201109021431.p82EVfeR025824@theraft.openend.se>
Message-ID: <201109031231.p83CVArK029288@theraft.openend.se>

In a message of Sat, 03 Sep 2011 13:53:26 +0200, Armin Rigo writes:
>Hi Laura,
>
>On Fri, Sep 2, 2011 at 4:31 PM, Laura Creighton wrote:
>> It's not that far away for some of us, Finland is beautiful, and they
>> still have space for talks and sprints.
>
>I appreciate the "bait" factor in this message :-)  But as
>importantly, I think that you should tell this list about planning to
>organize a sprint in Göteborg around this time, so that we can check
>with interested people if the dates seem to work.

>A bientôt,
>
>Armin.

Ok, let's have a sprint in Göteborg around this time.  Also of interest
is FSCons: http://fscons.org/  Nov 11-13, who have already been promised
a talk by somebody from PyPy.  So a reasonable thing to do is to go to
Finland for Oct 17-18, and have a sprint sometime in between.  A tricky
bit is that we need to be available for an interview with Vinnova in
Stockholm on the 24th or the 28th of October.  I don't know how much
notice they will give us -- so my tentative plan is to start a sprint
in Göteborg in the first week of November.  Is this good for other people?
Would others like a different start or end time, or ...

Laura

From gertjanvanzwieten at gmail.com  Sat Sep  3 18:09:55 2011
From: gertjanvanzwieten at gmail.com (Gertjan van Zwieten)
Date: Sat, 3 Sep 2011 18:09:55 +0200
Subject: [pypy-dev] your thoughts on low level optimizations
In-Reply-To: 
References: 
Message-ID: 

Hi Armin

> I would say that in this case, keeping your module in C with a
> C-friendly API is the most future-proof solution I can think of. That
> means so far --- with today's tools --- that you need to wrap it
> twice, as a CPython C extension module and as a pure Python ctypes, in
> order to get good performance on both CPython and PyPy.

Thanks, that's a very helpful conclusion and actually a perfectly
workable solution for now. I will keep a close eye on the blog for
future developments in this area, and I certainly hope that I will be
able to make the switch soon.

Let me just tie this up by thanking you all for your extensive and very
helpful replies. It's been a very enlightening discussion.

Much obliged,

Gertjan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From khamenya at gmail.com  Sat Sep  3 15:22:19 2011
From: khamenya at gmail.com (Valery Khamenya)
Date: Sat, 3 Sep 2011 15:22:19 +0200
Subject: [pypy-dev] C-based numpy's nonzero() has the same performance as its vanilla Python implementation running pypy
Message-ID: 

isn't that cool?

my 1-dimensional vanilla Python implementation was:

  lambda v: [i for i,e in enumerate(v) if e != 0]

best regards
--
Valery A.Khamenya
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fijall at gmail.com  Sun Sep  4 19:37:34 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sun, 4 Sep 2011 19:37:34 +0200
Subject: [pypy-dev] Status of ARM backend
In-Reply-To: 
References: 
Message-ID: 

On Fri, Sep 2, 2011 at 11:22 PM, Zooko O'Whielacronx wrote:
> I asked on the #linaro channel on IRC (related to the Linaro
> organization [1]) and Chris Ball said he could arrange for one of the
> OLPC project's [2] special large-RAM dev boards to serve as a
> buildslave. What's the next step? Maybe someone should volunteer to
> install buildbot on it and Chris should give that person ssh access?
> Send your ssh public key to Chris "cjb" Ball, carbon-copied.
>
> Also, I don't know if the OLPC project can afford to do this, but if
> that dev board could be dedicated just to the PyPy project then maybe
> it could run http://speed.pypy.org measurements so that we can get a
> perspective on PyPy performance on ARM. What would that take?
>

I think it would take, besides the good will and the machine, a
dedicated volunteer making this stuff run. It's probably not a lot of
work, but someone has to be responsive and care about this slave.

From fijall at gmail.com  Sun Sep  4 19:40:01 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sun, 4 Sep 2011 19:40:01 +0200
Subject: [pypy-dev] PyCON Finland has just announced its opening
In-Reply-To: <201109031231.p83CVArK029288@theraft.openend.se>
References: <201109021431.p82EVfeR025824@theraft.openend.se>
	<201109031231.p83CVArK029288@theraft.openend.se>
Message-ID: 

On Sat, Sep 3, 2011 at 2:31 PM, Laura Creighton wrote:
> In a message of Sat, 03 Sep 2011 13:53:26 +0200, Armin Rigo writes:
>>Hi Laura,
>>
>>On Fri, Sep 2, 2011 at 4:31 PM, Laura Creighton wrote:
>>> It's not that far away for some of us, Finland is beautiful, and they
>>> still have space for talks and sprints.
>>
>>I appreciate the "bait" factor in this message :-)  But as
>>importantly, I think that you should tell this list about planning to
>>organize a sprint in Göteborg around this time, so that we can check
>>with interested people if the dates seem to work.
>
>>A bientôt,
>>
>>Armin.
>
> Ok, let's have a sprint in Göteborg around this time.  Also of interest
> is FSCons: http://fscons.org/  Nov 11-13, who have already been promised
> a talk by somebody from PyPy.  So a reasonable thing to do is to go to
> Finland for Oct 17-18, and have a sprint sometime in between.  A tricky
> bit is that we need to be available for an interview with Vinnova in
> Stockholm on the 24th or the 28th of October.  I don't know how much
> notice they will give us -- so my tentative plan is to start a sprint
> in Göteborg in the first week of November.  Is this good for other people?
> Would others like a different start or end time, or ...
>
> Laura

It's a surprisingly good time for me. I stay in EU until 14th of Nov.

From lac at openend.se  Sun Sep  4 20:20:10 2011
From: lac at openend.se (Laura Creighton)
Date: Sun, 04 Sep 2011 20:20:10 +0200
Subject: [pypy-dev] PyCON Finland has just announced its opening
In-Reply-To: Message from Maciej Fijalkowski of "Sun, 04 Sep 2011 19:40:01 +0200."
References: <201109021431.p82EVfeR025824@theraft.openend.se>
	<201109031231.p83CVArK029288@theraft.openend.se>
Message-ID: <201109041820.p84IKAOt013662@theraft.openend.se>

In a message of Sun, 04 Sep 2011 19:40:01 +0200, Maciej Fijalkowski writes:
>It's a surprisingly good time for me. I stay in EU until 14th of Nov.

Cool!  Been too long since I saw you.

Laura
From fenrrir at gmail.com  Mon Sep  5 02:44:21 2011
From: fenrrir at gmail.com (Rodrigo Pinheiro Marques de Araújo)
Date: Sun, 4 Sep 2011 21:44:21 -0300
Subject: [pypy-dev] Misc news
In-Reply-To: 
References: 
Message-ID: 

2011/9/2 Armin Rigo
>
> * stackless support could do with some volunteer work now --- in
> particular, lib_pypy/stackless.py could be refactored to use directly
> continulets, in a similar way to lib_pypy/greenlet.py.  I am ready to
> explain a bit more on irc how continulets work, but I don't have
> enough motivation to do the complete refactoring myself.  It's also a
> nice entry-level task for someone who wants to.
>
>

Hi guys,

I made a first version of stackless with continulets. Attached are the
patch and the complete stackless.py file for review. Does someone have
time to look at the patch?

I have also attached a file with a helper and examples for users to
solve "RuntimeError: maximum recursion depth exceeded", based on the
pypy stackless online doc.

I'm not sure whether my version of stackless with continulets is much
faster than stackless with greenlets; with the attached factorial.py,
on my machine for input 2000, the new version runs in 20 seconds and
the old version in 30.
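(The attached factorial.py was scrubbed by the list; the shape of it is
roughly this --- a sketch from memory, not the actual file:)

    import time
    import stackless

    def factorial(n, ch):
        # each level runs in its own tasklet, so no single stack
        # ever gets deep
        if n <= 1:
            ch.send(1)
        else:
            sub = stackless.channel()
            stackless.tasklet(factorial)(n - 1, sub)
            ch.send(n * sub.receive())

    if __name__ == '__main__':
        start = time.time()
        ch = stackless.channel()
        stackless.tasklet(factorial)(2000, ch)
        result = ch.receive()
        print time.time() - start, "seconds"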
best regards,

Rodrigo Araújo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: stackless.patch
Type: text/x-patch
Size: 7610 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: stackless.py
Type: text/x-python
Size: 19138 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: recursion_helper.py
Type: text/x-python
Size: 1486 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: factorial.py
Type: text/x-python
Size: 1009 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: producerConsumerTextmode.py
Type: text/x-python
Size: 5457 bytes
Desc: not available
URL: 

From fijall at gmail.com  Mon Sep  5 08:54:52 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 5 Sep 2011 08:54:52 +0200
Subject: [pypy-dev] [pypy-commit] pypy default: fix bz2. tests didn't find this.
In-Reply-To: <20110831061835.87AA68204C@wyvern.cs.uni-duesseldorf.de>
References: <20110831061835.87AA68204C@wyvern.cs.uni-duesseldorf.de>
Message-ID: 

On Wed, Aug 31, 2011 at 8:18 AM, justinpeel wrote:
> Author: Justin Peel
> Branch:
> Changeset: r46937:b4d8eb5fdf6c
> Date: 2011-08-31 00:17 -0600
> http://bitbucket.org/pypy/pypy/changeset/b4d8eb5fdf6c/
>
> Log:    fix bz2. tests didn't find this.
>
> diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py
> --- a/pypy/module/bz2/interp_bz2.py
> +++ b/pypy/module/bz2/interp_bz2.py
> @@ -446,7 +446,9 @@
>             result = self.buffer[pos:pos + n]
>             self.pos += n
>         else:
> -            result = self.buffer
> +            pos = self.pos
> +            assert pos >= 0
> +            result = self.buffer[pos:]
>             self.pos = 0
>             self.buffer = ""
>         self.readlength += len(result)
> _______________________________________________
> pypy-commit mailing list
> pypy-commit at python.org
> http://mail.python.org/mailman/listinfo/pypy-commit
>

This should come with a test

From arigo at tunes.org  Mon Sep  5 14:22:40 2011
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 5 Sep 2011 14:22:40 +0200
Subject: [pypy-dev] Misc news
In-Reply-To: 
References: 
Message-ID: 

Hi Rodrigo,

On Mon, Sep 5, 2011 at 2:44 AM, Rodrigo Pinheiro Marques de Araújo wrote:
> I made a first version of stackless with continulets. Attached are the
> patch and the complete stackless.py file for review. Does someone have
> time to look at the patch?

Great!  Thanks a lot.  All of test_stackless.py passes, so I'm checking
in the patch.  If someone wants to give it a more thorough review he is
welcome :-)

Regarding performance: it's already good to get 1/3 performance
improvement.  I think it corresponds well to the removal of the extra
levels: indeed, our JIT should be good at "compressing" this overhead
(if not completely removing it), so getting an extra 33% by manual
rewriting sounds reasonable to me.

About recursion_helper.py: ah, good idea to turn it into a decorator.
Maybe we could include it e.g. in the stackless module.  But of course
the best thing to do would be to have the effect semi-automatically,
e.g. adding a way to ask the interpreter "when you have consumed more
than X% of the stack, automatically do the next call via a switch to
this continulet"...
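As a sketch of the decorator idea (made-up names, no error handling ---
this is not Rodrigo's recursion_helper.py):

    import stackless

    def unlimited_stack(func):
        """Run each call of `func` in its own tasklet, so deep
        recursion switches to fresh stacks instead of overflowing
        the current one."""
        def wrapper(*args, **kwds):
            ch = stackless.channel()
            def task():
                ch.send(func(*args, **kwds))
            stackless.tasklet(task)()
            return ch.receive()
        return wrapper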
A bientôt,

Armin.

From fenrrir at gmail.com  Mon Sep  5 14:48:08 2011
From: fenrrir at gmail.com (Rodrigo Pinheiro Marques de Araújo)
Date: Mon, 5 Sep 2011 09:48:08 -0300
Subject: [pypy-dev] Misc news
In-Reply-To: 
References: 
Message-ID: 

Hi Armin,

2011/9/5 Armin Rigo

> Great!  Thanks a lot.  All of test_stackless.py passes, so I'm
> checking in the patch.

test_stackless.py has only pickle tests, so I tested with the attached
examples.

> Regarding performance: it's already good to get 1/3 performance
> improvement.

I only tested with the attached factorial.py.

> About recursion_helper.py: ah, good idea to turn it into a decorator.
> Maybe we could include it e.g. in the stackless module.

I think that should be included in stackless too. I did not do this
because I'm not sure whether the implementation is good, or which is
the best function to decorate. In factorial.py, _channel_action is a
good place, but for others I'm not sure. Maybe expose the helper in
stackless and let the user choose.

best regards,

Rodrigo Araújo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org  Mon Sep  5 14:55:48 2011
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 5 Sep 2011 14:55:48 +0200
Subject: [pypy-dev] Misc news
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Mon, Sep 5, 2011 at 2:48 PM, Rodrigo Pinheiro Marques de Araújo wrote:
>> Great!  Thanks a lot.  All of test_stackless.py passes, so I'm
>> checking in the patch.
>
> test_stackless.py has only pickle tests, so I tested with the attached
> examples.

I mean lib_pypy/pypy_test/test_stackless.py:

    cd lib_pypy/pypy_test/
    pypy-c ../../pypy/test_all.py test_stackless.py

A bientôt,

Armin.

From peelpy at gmail.com  Mon Sep  5 18:30:45 2011
From: peelpy at gmail.com (Justin Peel)
Date: Mon, 5 Sep 2011 10:30:45 -0600
Subject: [pypy-dev] [pypy-commit] pypy default: fix bz2. tests didn't find this.
In-Reply-To: 
References: <20110831061835.87AA68204C@wyvern.cs.uni-duesseldorf.de>
Message-ID: 

On Mon, Sep 5, 2011 at 12:54 AM, Maciej Fijalkowski wrote:
> On Wed, Aug 31, 2011 at 8:18 AM, justinpeel wrote:
>> Author: Justin Peel
>> Branch:
>> Changeset: r46937:b4d8eb5fdf6c
>> Date: 2011-08-31 00:17 -0600
>> http://bitbucket.org/pypy/pypy/changeset/b4d8eb5fdf6c/
>>
>> Log:    fix bz2. tests didn't find this.
>>
>> diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py
>> --- a/pypy/module/bz2/interp_bz2.py
>> +++ b/pypy/module/bz2/interp_bz2.py
>> @@ -446,7 +446,9 @@
>>             result = self.buffer[pos:pos + n]
>>             self.pos += n
>>         else:
>> -            result = self.buffer
>> +            pos = self.pos
>> +            assert pos >= 0
>> +            result = self.buffer[pos:]
>>             self.pos = 0
>>             self.buffer = ""
>>         self.readlength += len(result)
>> _______________________________________________
>> pypy-commit mailing list
>> pypy-commit at python.org
>> http://mail.python.org/mailman/listinfo/pypy-commit
>>
>
> This should come with a test
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

I thought that I already wrote back to you about this, but here is more
info. I didn't put in a separate test for this, but I changed a test as
per the following to make it catch this bug. I tested it with the code
before the bug was fixed and after, and it does find this bug.

The reason that the test didn't find the bug before is that the test
data has a length of 770 and the test was reading chunks of 10
characters. Since 10 divides evenly into 770, this other path in the
code was never taken. However, if we read 9-character chunks, since 9
doesn't divide evenly into 770, this other path is taken and we end up
with the wrong result in the test.

The following is commit 13c94c0591c3 which I did later on in the day (my
time) after I put in the fix. It didn't occur to me right away that this
was the easy way to get coverage for this or I would have done it right
away. Perhaps it was because it was around 2 am my time when I
discovered the bug.
# HG changeset patch
# User Justin Peel
# Date 1314819164 21600
# Node ID 13c94c0591c34b5c0f10978871e33880cdbb5ce7
# Parent  0d75ab342438fc401e1a44fdf7d2822bedc5e392
change bz2 test so that it reads chunks which don't divide evenly into
test data's length

diff -r 0d75ab342438fc401e1a44fdf7d2822bedc5e392 -r 13c94c0591c34b5c0f10978871e33880cdbb5ce7 pypy/module/bz2/test/test_bz2_file.py
--- a/pypy/module/bz2/test/test_bz2_file.py	Wed Aug 31 20:01:39 2011 +0200
+++ b/pypy/module/bz2/test/test_bz2_file.py	Wed Aug 31 13:32:44 2011 -0600
@@ -274,14 +274,14 @@
             pass
         del bz2f   # delete from this frame, which is captured in the traceback

-    def test_read_chunk10(self):
+    def test_read_chunk9(self):
         from bz2 import BZ2File
         self.create_temp_file()

         bz2f = BZ2File(self.temppath)
         text_read = ""
         while True:
-            data = bz2f.read(10)
+            data = bz2f.read(9) # 9 doesn't divide evenly into data length
             if not data:
                 break
             text_read = "%s%s" % (text_read, data)

From cjb at laptop.org  Mon Sep  5 19:49:43 2011
From: cjb at laptop.org (Chris Ball)
Date: Mon, 05 Sep 2011 13:49:43 -0400
Subject: [pypy-dev] Status of ARM backend
In-Reply-To: (Zooko O'Whielacronx's message of "Fri, 2 Sep 2011 15:22:08 -0600")
References: 
Message-ID: 

Hi folks,

On Fri, Sep 02 2011, Zooko O'Whielacronx wrote:
> I asked on the #linaro channel on IRC (related to the Linaro
> organization [1]) and Chris Ball said he could arrange for one of the
> OLPC project's [2] special large-RAM dev boards to serve as a
> buildslave. What's the next step? Maybe someone should volunteer to
> install buildbot on it and Chris should give that person ssh access?

fijal on IRC helped me to get set up, but it looks like my board won't
be helpful -- it's armv5tel, and the ARM backend wants armv7l. Running
test_all gives:

[cjb at koji3 pypy]$ python ./pypy/test_all.py pypy/jit/backend/arm -x
========================== test session starts ===========================
platform linux2 -- Python 2.7.2 -- pytest-2.1.0.dev4
pytest-2.1.0.dev4 from /home/cjb/pypy/pytest.pyc
collected 1494 items

pypy/jit/backend/arm/test/test_arch.py ....
pypy/jit/backend/arm/test/test_assembler.py .Illegal instruction
[cjb at koji3 pypy]$

So, unless it's easy and worthwhile to add armv5tel support to the ARM
backend, I'll give up for now and let you know if I find any large-RAM
ARMv7 boards in the future.

Thanks,

- Chris.
--
Chris Ball
One Laptop Per Child
From gbowyer at fastmail.co.uk  Wed Sep  7 01:43:53 2011
From: gbowyer at fastmail.co.uk (Greg Bowyer)
Date: Tue, 06 Sep 2011 16:43:53 -0700
Subject: [pypy-dev] Errors running pypy with ctype library
Message-ID: <4E66B039.8040608@fastmail.co.uk>

Hi all, I have a rather interesting in-house networking tool that uses
pcap to sniff packets, take them into twisted and replay them against a
target.

Internally the tight loop for packet reassembly is currently run via
twisted and some custom parsing and packet reconstruction code. I have
been investigating whether I can make this code faster _without_
reimplementing the capture part in C; as such I think I have two options:

* Pypy (which I would prefer, as it means that I hopefully will gain
performance improvements over time, as well as JIT acceleration
throughout the code)
* Cython (which will let me change the main loop to be mostly C without
having to write a lot of C)

The tool currently uses an old-style CPython C extension to bind python
to pcap; since this will be slow in pypy, I found the first
semi-implemented ctypes pcap binding from google code here
(http://code.google.com/p/pcap/) (I didn't write it so it may be broken)

The following test code works fine on CPython 2.7

--------------- %< ---------------
from pycap import pycap

pp = pycap.open_live('eth0', 1596, True, 250)
bpf = pycap.compile(r'tcp dst port 80')
bpf = pycap.compile(pp, 'tcp', True, 0)

def process(user, pkthdr, packet):
    print 'callback'
    print 'pkthdr[0:7]', pkthdr.contents.len

cb = pycap.CALLBACK(process)
pycap.loop(pp, 100, cb, "greg")
--------------- >% ---------------

but fails with the following error on pypy trunk

--------------- %< ---------------
greg at localhost ~/projects/pcap-read-only/packet $ /home/greg/projects/pypy/pypy/translator/goal/pypy-c
Python 2.7.1 (ddff981df9d5, Sep 06 2011, 19:21:21)
[PyPy 1.6.0-dev1 with GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: ``this is a
self-referential channel topic''
>>>> import pycap
>>>> from pycap import pycap
>>>> pp = pycap.open_live('eth0', 1596, True, 250)
pycap/__buildin_funcs__/pcap_native_funcs.py:188: RuntimeWarning: C function without declared arguments called
  handle=pcap_c_funcs.pcap_open_live(source,snaplen,promisc,to_ms,error)
Segmentation fault
--------------- >% ---------------

The segmentation fault might be down to pcap being very twitchy about
its inputs, rather than a segfault in pypy itself.

Any ideas what's wrong in the ctypes binding here?

-- Greg

From amauryfa at gmail.com  Wed Sep  7 08:57:50 2011
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Wed, 7 Sep 2011 08:57:50 +0200
Subject: [pypy-dev] Errors running pypy with ctype library
In-Reply-To: <4E66B039.8040608@fastmail.co.uk>
References: <4E66B039.8040608@fastmail.co.uk>
Message-ID: 

2011/9/7 Greg Bowyer

> Hi all, I have a rather interesting in-house networking tool that uses pcap
> to sniff packets, take them into twisted and replay them against a target.
> Internally the tight loop for packet reassembly is currently run via
> twisted and some custom parsing and packet reconstruction code. I have
> been investigating whether I can make this code faster _without_
> reimplementing the capture part in C; as such I think I have two options:
>
> * Pypy (which I would prefer, as it means that I hopefully will gain
> performance improvements over time, as well as JIT acceleration throughout
> the code)
> * Cython (which will let me change the main loop to be mostly C without
> having to write a lot of C)
>
> The tool currently uses an old-style CPython C extension to bind python to
> pcap; since this will be slow in pypy, I found the first semi-implemented
> ctypes pcap binding from google code here (http://code.google.com/p/pcap/)
> (I didn't write it so it may be broken)
>
> The following test code works fine on CPython 2.7

The pcap module has an important issue; pcap_open_live() contains this
code:

    error=c_char_p()
    handle=pcap_c_funcs.pcap_open_live(source,snaplen,promisc,to_ms,error)

Which is wrong: according to the man page, the "error" parameter
"is assumed to be able to hold at least PCAP_ERRBUF_SIZE chars"
which is not the case here, NULL is passed instead and bad things will
happen at runtime.

pcap should be modified, probably with something like "error =
create_string_buffer(256)"
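i.e. roughly this (an untested sketch, keeping the module's own
pcap_c_funcs name; PCAP_ERRBUF_SIZE is 256 in pcap.h):

    from ctypes import create_string_buffer

    def open_live(source, snaplen, promisc, to_ms):
        # caller-owned buffer that pcap can write its message into,
        # instead of a NULL pointer
        error = create_string_buffer(256)
        handle = pcap_c_funcs.pcap_open_live(source, snaplen,
                                             promisc, to_ms, error)
        if not handle:
            raise OSError(error.value)
        return handle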
--
Amaury Forgeot d'Arc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jmdj at pml.ac.uk  Wed Sep  7 16:43:13 2011
From: jmdj at pml.ac.uk (Jorge de Jesus)
Date: Wed, 07 Sep 2011 15:43:13 +0100
Subject: [pypy-dev] deepcopy slower in PyPY ?!
Message-ID: <4E678301.9070204@pml.ac.uk>

Hi to all

I've benchmarked/profiled some code (PyWPS API) and PyPy-c is 2-3x
slower than CPython. This was done in a virtual machine using x86_64

The code being benchmarked spends most of the time making calls to
copy/deepcopy. I've found that this was an issue in PyPy 1.6
(https://bugs.pypy.org/issue767), but the issue has been closed. So I've
downloaded the latest dev version but PyPy-c continues to be slow
compared to CPython.

Does anyone have extra information on this issue?

Thank you

Jorge

From anto.cuni at gmail.com  Wed Sep  7 20:16:27 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Wed, 07 Sep 2011 20:16:27 +0200
Subject: [pypy-dev] deepcopy slower in PyPY ?!
In-Reply-To: <4E678301.9070204@pml.ac.uk>
References: <4E678301.9070204@pml.ac.uk>
Message-ID: <4E67B4FB.8060503@gmail.com>

Hi Jorge,

On 07/09/11 16:43, Jorge de Jesus wrote:
> Hi to all
>
> I've benchmarked/profiled some code (PyWPS API) and PyPy-c is 2-3x
> slower than CPython. This was done in a virtual machine using x86_64
>
> The code being benchmarked spends most of the time making calls to
> copy/deepcopy. I've found that this was an issue in PyPy 1.6
> (https://bugs.pypy.org/issue767), but the issue has been closed. So I've
> downloaded the latest dev version but PyPy-c continues to be slow
> compared to CPython.

could you please send us a benchmark which showcases the problem? The
smaller the better; ideally a benchmark which is contained in a single
file is easier to run and debug than one which involves downloading lots
of code from the internet.

Moreover, maybe you could also open a ticket in our bug tracker, so we
are sure not to forget it.

ciao and thanks,
Anto

From fijall at gmail.com  Wed Sep  7 22:38:10 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Wed, 7 Sep 2011 22:38:10 +0200
Subject: [pypy-dev] deepcopy slower in PyPY ?!
In-Reply-To: <4E67B4FB.8060503@gmail.com>
References: <4E678301.9070204@pml.ac.uk> <4E67B4FB.8060503@gmail.com>
Message-ID: 

On Wed, Sep 7, 2011 at 8:16 PM, Antonio Cuni wrote:
> Hi Jorge,
>
> On 07/09/11 16:43, Jorge de Jesus wrote:
>>
>> Hi to all
>>
>> I've benchmarked/profiled some code (PyWPS API) and PyPy-c is 2-3x
>> slower than CPython. This was done in a virtual machine using x86_64
>>
>> The code being benchmarked spends most of the time making calls to
>> copy/deepcopy. I've found that this was an issue in PyPy 1.6
>> (https://bugs.pypy.org/issue767), but the issue has been closed. So I've
>> downloaded the latest dev version but PyPy-c continues to be slow
>> compared to CPython.
>
> could you please send us a benchmark which showcases the problem? The
> smaller the better; ideally a benchmark which is contained in a single file
> is easier to run and debug than one which involves downloading lots of code
> from the internet.

the internet is not the problem here ;-)

>
> Moreover, maybe you could also open a ticket in our bug tracker, so we are
> sure not to forget it.
>
> ciao and thanks,
> Anto
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From wlavrijsen at lbl.gov  Thu Sep  8 01:11:14 2011
From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov)
Date: Wed, 7 Sep 2011 16:11:14 -0700 (PDT)
Subject: [pypy-dev] segfault in translation; C backend
Message-ID: 

Hi,

I have a crash in the translation chain when I enable the CINT back-end,
and I just don't seem to be able to figure it out:

[c] 132000 nodes  [ array: 15760 framework rtti: 724 func: 10234 group: 1 struct: 131169 ]
*
[c] 133000 nodes  [ array: 16008 framework rtti: 738 func: 10312 group: 1 struct: 132216 ]

 *** Break *** segmentation violation

this segfault occurs in pypy_g_wrap_value__get_elem().
What I've found so far, using python rather than pypy-c, is that it is
really a problem in passing an array with an erroneous address to ctypes:

#0  B_get (ptr=0x1, size=1) at /install/Python-2.6.7/Modules/_ctypes/cfield.c:549
#1  0xb7984b7d in CData_get (type=0x850adac, getfunc=0, src=0x5af43bb4, index=0, size=1, adr=0x1
) at /install/Python-2.6.7/Modules/_ctypes/_ctypes.c:2798
#2  0xb798650f in Array_item (_self=0x5af43bb4, item=0x8056d9c) at /install/Python-2.6.7/Modules/_ctypes/_ctypes.c:4248
#3  Array_subscript (_self=0x5af43bb4, item=0x8056d9c) at /install/Python-2.6.7/Modules/_ctypes/_ctypes.c:4310
#4  0xb7e910aa in PyObject_GetItem (o=0x5af43bb4, key=0x8056d9c) at Objects/abstract.c:141
#5  0xb7f32e12 in PyEval_EvalFrameEx (f=0x8b28b74, throwflag=0) at Python/ceval.c:1261

Note that at #2, the debug code claims that the 2nd argument is an item
object, yet it's a Py_ssize_t index value in reality. I don't know why
this is wrong.

Further, the array comes in when writing a node for one of my module
types; see this node and the obj it carries (note the address of the
dependency "value" that is produced):

(Pdb+) up
> /home/wlav/pypydev/pypy/pypy/translator/c/database.py(294)add_dependencies()
-> self.get(value)
(Pdb+) print value
* 
(Pdb+) print self

(Pdb+) print node.name
pypy_g_pypy_module_cppyy_interp_cppyy_W_CPPNamespace.wcppn_super
(Pdb+) print node.obj
struct pypy.module.cppyy.interp_cppyy.W_CPPScope { super=..., inst_space=None, inst_data_members=..., inst_handle=..., inst_methods=..., inst_name=... }
None
(Pdb+) print node.nodekind
struct
(Pdb+) print node.typename
struct pypy_pypy_module_cppyy_interp_cppyy_W_CPPScope0 @
(Pdb+) print node.obj.inst_data_members
* struct dicttable { num_items=0, num_pristine_entries=8, entries=... }
(Pdb+) print node.typename
struct pypy_pypy_module_cppyy_interp_cppyy_W_CPPScope0 @
(Pdb+) print node.obj._TYPE
GcStruct pypy.module.cppyy.interp_cppyy.W_CPPScope { super, inst_space, inst_data_members, inst_handle, inst_methods, inst_name }

So, I've been removing bits and pieces from that class, hoping to find
the real problem when it stops crashing, but no luck so far. Removing
bits is a bit harder than it sounds, b/c if the system as a whole is
not consistent anymore, it won't get past the rtyper.

Does anyone have a better way of nailing this bug?

Thanks!

Best regards,
           Wim
--
WLavrijsen at lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

From wlavrijsen at lbl.gov  Thu Sep  8 05:15:22 2011
From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov)
Date: Wed, 7 Sep 2011 20:15:22 -0700 (PDT)
Subject: [pypy-dev] segfault in translation; C backend
In-Reply-To: 
References: 
Message-ID: 

Hi,

[replying to myself] one of those things: I spent several days banging
my head on this, and then explaining it in an e-mail is enough to focus
my thoughts and solve it.

That said, I'm not 100% sure what's going on. What I know is that I have
an opaque (or so I thought!) handle type, namely void*. However, after
the chain, the C back-end sees what I think is a (overly) recycled type:

    pypy.rpython.lltypesystem.ll2ctypes.c_ubyte_Array_33554431

so a char* of length 33554431 (or 0x1ffffff). This is the actual content
of what is marked a . So the code thinks it can dereference void*
handles as it really sees a char*.

For the Reflex backend, which uses valid pointers, that works up to some
extent (apparently), but not so in the case of CINT, as it uses indices,
dereferencing of which gives an immediate segfault.

So, I turn things into rffi.LONGs across the board and all is fine.
Still, I think that any attempt to dereference a void* isn't very nice.
Best regards,
           Wim
--
WLavrijsen at lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

From romain.py at gmail.com  Thu Sep  8 06:18:24 2011
From: romain.py at gmail.com (Romain Guillebert)
Date: Thu, 8 Sep 2011 06:18:24 +0200
Subject: [pypy-dev] CTypes backend for Cython Status
Message-ID: <20110908041824.GA29645@ubuntu>

Hi

The Google Summer of Code has ended and I didn't give the current status
to anyone yet (I was very busy with a report I had to write for my
university).

There is still work to do on the project (there was more work than I
expected, especially because of semantic differences between Cython and
ctypes), so I'll talk about what needs to be done (even if it does not
sound as good as talking about what has been done), from the most
important to the least important in my opinion:

- Pointer vs Array: Cython mixes the two while ctypes does not; this can
  probably be fixed by using arrays everywhere (if we can convert
  pointers into arrays) --- see the small example after this list
- Take into account header files declared globally
- Macros: this is probably the biggest part, but it's doable; Cython has
  the types of the arguments and the return value, so it's possible to
  generate a C function corresponding to a macro
- Pointer to functions
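The kind of mismatch meant in the first item, in plain ctypes (a sketch):

    from ctypes import c_int, pointer

    IntArray4 = c_int * 4        # a distinct type that knows its length
    arr = IntArray4(1, 2, 3, 4)
    ptr = pointer(c_int(7))      # a POINTER(c_int) instance

    print arr[2], len(arr)       # arrays index and have a length
    print ptr[0]                 # pointers index too...
    # ...but len(ptr) raises TypeError: a pointer carries no size,
    # which is why converting everything to arrays is attractive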
Some of them are trivial, others just require good ideas, and macros
demand a big amount of work. I'm still working on it, and if someone
wants to give a hand, I'll be happy to explain what I've done.

Thanks

Romain

From davidf at sjsoft.com  Thu Sep  8 09:57:54 2011
From: davidf at sjsoft.com (David Fraser)
Date: Thu, 8 Sep 2011 02:57:54 -0500 (CDT)
Subject: [pypy-dev] deepcopy slower in PyPY ?!
In-Reply-To: 
Message-ID: <62768815-4ddc-43c4-9b3b-2d375dccf5a5@jackdaw.local>

On Wednesday, September 7, 2011 at 10:38:10 PM, Maciej Fijalkowski wrote:
> On Wed, Sep 7, 2011 at 8:16 PM, Antonio Cuni wrote:
> > Hi Jorge,
> >
> > On 07/09/11 16:43, Jorge de Jesus wrote:
> >>
> >> Hi to all
> >>
> >> I've benchmarked/profiled some code (PyWPS API) and PyPy-c is 2-3x
> >> slower than CPython. This was done in a virtual machine using x86_64
> >>
> >> The code being benchmarked spends most of the time making calls to
> >> copy/deepcopy. I've found that this was an issue in PyPy 1.6
> >> (https://bugs.pypy.org/issue767), but the issue has been closed. So I've
> >> downloaded the latest dev version but PyPy-c continues to be slow
> >> compared to CPython.
> >
> > could you please send us a benchmark which showcases the problem? The
> > smaller the better; ideally a benchmark which is contained in a single file
> > is easier to run and debug than one which involves downloading lots of code
> > from the internet.
>
> the internet is not the problem here ;-)

So here's my benchmark of doing a copy.deepcopy of the internet - or at
least, of the ipv4 address space... (unfortunately it needs to download
that, but caches if possible, and doesn't time that)

In this case it's only testing copying nested xml elementtree nodes, and
some basic dicts.

It actually shows a remarkable improvement in pypy; here are the average
speeds per copy for 100 and 1000 repeats (showing how the JIT kicks in
in pypy):

  executable  repeats  etree  dicts
  cpython2.6      100  37.17   3.98
  cpython2.6     1000  36.42   3.97
  cpython2.7      100  58.10   4.38
  cpython2.7     1000  57.29   4.06
  cpython3.2      100  57.41   3.61
  cpython3.2     1000  56.98   3.68
  pypy1.5.0       100  32.08   1.34
  pypy1.5.0      1000  25.54   1.11
  pypy1.6.0       100  25.89   1.17
  pypy1.6.0      1000  16.32   0.81

So, pypy can even speed up copying the internet :)

Cheers

David
-------------- next part --------------
A non-text attachment was scrubbed...
Name: benchmark_internet.py
Type: text/x-python
Size: 1749 bytes
Desc: not available
URL: 

From anto.cuni at gmail.com  Thu Sep  8 10:15:46 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Thu, 08 Sep 2011 10:15:46 +0200
Subject: [pypy-dev] speed and 1.6
In-Reply-To: 
References: 
Message-ID: <4E6879B2.5070804@gmail.com>

On 03/09/11 08:51, Miquel Torres wrote:
> Which revision is (or "simulates") 1.6?

I don't think there is the exact revision on codespeed, because the
release was made on a branch (release-1.6.x, I think), not on trunk.

What about starting the benchmarks manually on the branch release-1.6.x?
Miquel, would it be possible to tag those results as "PyPy 1.6" when we
have them?

ciao,
Anto

From fijall at gmail.com  Thu Sep  8 10:46:15 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 8 Sep 2011 10:46:15 +0200
Subject: [pypy-dev] speed and 1.6
In-Reply-To: <4E6879B2.5070804@gmail.com>
References: <4E6879B2.5070804@gmail.com>
Message-ID: 

On Thu, Sep 8, 2011 at 10:15 AM, Antonio Cuni wrote:
> On 03/09/11 08:51, Miquel Torres wrote:
>>
>> Which revision is (or "simulates") 1.6?
>
> I don't think there is the exact revision on codespeed, because the release
> was made on a branch (release-1.6.x, I think), not on trunk.
>
> What about starting the benchmarks manually on the branch release-1.6.x?
> Miquel, would it be possible to tag those results as "PyPy 1.6" when we have
> them?

We can mark trunk from the day we made the release branch (that's what
we did so far).

>
> ciao,
> Anto
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From anto.cuni at gmail.com  Thu Sep  8 11:50:48 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Thu, 08 Sep 2011 11:50:48 +0200
Subject: [pypy-dev] speed and 1.6
In-Reply-To: 
References: <4E6879B2.5070804@gmail.com>
Message-ID: <4E688FF8.6050009@gmail.com>

On 08/09/11 10:46, Maciej Fijalkowski wrote:
>> What about starting the benchmarks manually on the branch release-1.6.x?
>> Miquel, would it be possible to tag those results as "PyPy 1.6" when we have
>> them?
>
> We can mark trunk from the day we made the release branch (that's what
> we did so far).

which is fine as long as we did not transplant any performance related
revision into the branch (which doesn't seem to be the case).

So, the last day in which we merged default into the release branch is
3rd of August at 17:33.

ciao,
Anto

From tobami at googlemail.com  Thu Sep  8 12:15:00 2011
From: tobami at googlemail.com (Miquel Torres)
Date: Thu, 8 Sep 2011 12:15:00 +0200
Subject: [pypy-dev] speed and 1.6
In-Reply-To: <4E688FF8.6050009@gmail.com>
References: <4E6879B2.5070804@gmail.com> <4E688FF8.6050009@gmail.com>
Message-ID: 
The August 1st revision is probably not acceptable to portrait as being 1.6 ... 2011/9/8 Antonio Cuni : > On 08/09/11 10:46, Maciej Fijalkowski wrote: > >>> What about starting the benchmarks manually on the branch release-1.6.x? >>> Miquel, would it be possible to tag those results as "PyPy 1.6" when we >>> have >>> them? >> >> We can mark trunk from the day we made the release branch (that's what >> we did so far). > > which is fine as long as we did not transplant any performance related > revision into the branch (which doesn't seem to be the case). > > So, the last day in which we merged default into the release branch is 3rd > of August at 17:33. > > ciao, > Anto > From anto.cuni at gmail.com Thu Sep 8 12:34:12 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 08 Sep 2011 12:34:12 +0200 Subject: [pypy-dev] speed and 1.6 In-Reply-To: References: <4E6879B2.5070804@gmail.com> <4E688FF8.6050009@gmail.com> Message-ID: <4E689A24.6060107@gmail.com> On 08/09/11 12:15, Miquel Torres wrote: > which sadly doesn't have data on speed.pypy.org (not all data saved on > the removed environment was kept, sorry). The August 1st revision is > probably not acceptable to portrait as being 1.6 ... looking at the graphs, I don't see any big difference between august 1st and august 3rd, so I think we could just use that and be happy. ciao, anto From jmdj at pml.ac.uk Thu Sep 8 12:58:54 2011 From: jmdj at pml.ac.uk (Jorge de Jesus) Date: Thu, 08 Sep 2011 11:58:54 +0100 Subject: [pypy-dev] deepcopy slower in PyPy - testing script In-Reply-To: <62768815-4ddc-43c4-9b3b-2d375dccf5a5@jackdaw.local> References: <62768815-4ddc-43c4-9b3b-2d375dccf5a5@jackdaw.local> Message-ID: <4E689FEE.4050507@pml.ac.uk> Hi to all Thank you for all the answers concerning the topic. The deepcopy testing script in issue 767 [1], is working faster in PyPy than in CPython, but deepcopy is run on a list of numbers. BUT, PyWPS runs a lot of DOM functions and deepcopy calls that pass DOM Elements as argument. Just to add my 2cents, it seems that a deepcopy of a complexer object (compared to a number list) is slower in PyPy I've managed to replicate the problem in a small script [2] that is slower in PyPy: python 2.7.1+ : 0.3057 s pypy1.6 (jit): 1.42s pypy-c-1.6.svn: 1.23s The pypy-c-1.6-svn is a compiled version from the SVN and its compilation options can be found here [3] . The tests were done in 32bit machine Can someone give a look at the testing script and determine why is it slow ?! Thank you for the support, and I must say that PyPy is an amazing project !!!! All the Best Jorge [1] https://bugs.pypy.org/issue767 [2] http://pastebin.com/rehXtTyM [3] http://pastebin.com/FhLLNxMT -------------------------------------------------------------------------------- Plymouth Marine Laboratory Registered Office: Prospect Place The Hoe Plymouth PL1 3DH Website: www.pml.ac.uk Click here for PML Annual Review Registered Charity No. 1091222 PML is a company limited by guarantee registered in England & Wales company number 4178503 Please think before you print -------------------------------------------------------------------------------- This e-mail, its content and any file attachments are confidential. If you have received this e-mail in error please do not copy, disclose it to any third party or use the contents or attachments in any way. Please notify the sender by replying to this e-mail or e-mail forinfo at pml.ac.uk and then delete the email without making any copies or using it in any other way. 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dc_dom.py
Type: text/x-python
Size: 441 bytes
Desc: not available
URL: 

From jnoller at gmail.com  Thu Sep  8 15:15:54 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Thu, 8 Sep 2011 09:15:54 -0400
Subject: [pypy-dev] [Speed] Moving the project forward
In-Reply-To: 
References: <4E5F3936.8050002@gmail.com>
Message-ID: 

PING: Did we make progress?

On Thu, Sep 1, 2011 at 2:44 PM, Miquel Torres wrote:
> You can also do that in Github, which I prefer.
>
> However, since CPython and PyPy use mercurial, the general preference
> for Bitbucket is understandable.
>
>
> 2011/9/1 Brett Cannon :
>> On Thu, Sep 1, 2011 at 01:10, Nick Coghlan wrote:
>>> On Thu, Sep 1, 2011 at 5:50 PM, Antonio Cuni wrote:
>>>> On 31/08/11 22:11, Brett Cannon wrote:
>>>>>
>>>>> The PyPy folk could answer this as they have their repo on bitbucket
>>>>> already. Else I guess we can just create a standalone account that
>>>>> represents the official speed.python.org account.
>>>>
>>>> for pypy we do exactly that. There is a bitbucket user named "pypy" whose
>>>> credentials are shared among all the core devs.
>>>
>>> The security auditing part of my brain has its fingers in its ears and
>>> is singing "La La La" rather loudly :)
>>
>> What about Google Code? Projects there can have multiple owners and
>> they support hg, have a tracker, and a wiki.
>>
>>
>>>
>>> Cheers,
>>> Nick.
>>>
>>> --
>>> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
>>>
>> _______________________________________________
>> Speed mailing list
>> Speed at python.org
>> http://mail.python.org/mailman/listinfo/speed
>>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From davidf at sjsoft.com  Thu Sep  8 15:23:26 2011
From: davidf at sjsoft.com (David Fraser)
Date: Thu, 8 Sep 2011 08:23:26 -0500 (CDT)
Subject: [pypy-dev] deepcopy slower in PyPy - testing script
In-Reply-To: <4E689FEE.4050507@pml.ac.uk>
Message-ID: <015527d4-c1f0-4149-8702-41d27f6cd300@jackdaw.local>

On Thursday, September 8, 2011 at 12:58:54 PM, Jorge de Jesus wrote:
> Hi to all
>
> Thank you for all the answers concerning the topic.
>
> The deepcopy testing script in issue 767 [1] is working faster in PyPy
> than in CPython, but deepcopy is run on a list of numbers.
>
> BUT, PyWPS runs a lot of DOM functions and deepcopy calls that pass DOM
> Elements as argument. Just to add my 2 cents, it seems that a deepcopy
> of a more complex object (compared to a number list) is slower in PyPy
>
> I've managed to replicate the problem in a small script [2] that is
> slower in PyPy:
>
>   python 2.7.1+  : 0.3057 s
>   pypy1.6 (jit)  : 1.42 s
>   pypy-c-1.6.svn : 1.23 s
>
> The pypy-c-1.6-svn is a compiled version from the SVN and its
> compilation options can be found here [3]. The tests were done on a
> 32-bit machine
>
> Can someone give a look at the testing script and determine why it is
> slow?!
>
> Thank you for the support, and I must say that PyPy is an amazing
> project !!!!
I've attached a slightly modified version of your script that uses
timeit to measure the time, and takes an argument to specify the number
of repeats...

This shows that PyPy gets faster as the JIT kicks in:

  python2.7    1000  0.33
  python2.7   10000  0.33
  python3.2    1000  0.41
  python3.2   10000  0.40
  pypy         1000  1.29
  pypy        10000  0.22
  pypy       100000  0.08
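(The attachment was scrubbed again; the core of the change is roughly
this -- a sketch with a stand-in document, the real script differs in
details:)

    import sys, timeit
    from xml.dom.minidom import parseString

    # stand-in for the real test data from dc_dom.py
    doc = parseString("<a><b><c>deep</c></b></a>")
    repeats = int(sys.argv[1]) if len(sys.argv) > 1 else 1000

    t = timeit.timeit("copy.deepcopy(doc)",
                      setup="import copy; from __main__ import doc",
                      number=repeats)
    print("%.2f us per copy" % (t / repeats * 1e6))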
Hope that helps

David

From jmdj at pml.ac.uk  Thu Sep  8 15:45:51 2011
From: jmdj at pml.ac.uk (Jorge de Jesus)
Date: Thu, 08 Sep 2011 14:45:51 +0100
Subject: [pypy-dev] deepcopy slower in PyPy - testing script
In-Reply-To: <015527d4-c1f0-4149-8702-41d27f6cd300@jackdaw.local>
References: <015527d4-c1f0-4149-8702-41d27f6cd300@jackdaw.local>
Message-ID: <4E68C70F.6010402@pml.ac.uk>

Hi to all

That was an interesting result, so (from what I understood) there is
nothing wrong with PyPy; it's just that the code I'm trying to run
doesn't have "sufficient" loops for the JIT to kickstart and be useful?

Anyone have a copy of "PyPy for dummies" ^_^

All the best

Jorge

On 08/09/11 14:23, David Fraser wrote:
> On Thursday, September 8, 2011 at 12:58:54 PM, Jorge de Jesus wrote:
>> Hi to all
>>
>> Thank you for all the answers concerning the topic.
>>
>> The deepcopy testing script in issue 767 [1] is working faster in PyPy
>> than in CPython, but deepcopy is run on a list of numbers.
>>
>> BUT, PyWPS runs a lot of DOM functions and deepcopy calls that pass DOM
>> Elements as argument. Just to add my 2 cents, it seems that a deepcopy
>> of a more complex object (compared to a number list) is slower in PyPy
>>
>> I've managed to replicate the problem in a small script [2] that is
>> slower in PyPy:
>>
>>   python 2.7.1+  : 0.3057 s
>>   pypy1.6 (jit)  : 1.42 s
>>   pypy-c-1.6.svn : 1.23 s
>>
>> The pypy-c-1.6-svn is a compiled version from the SVN and its
>> compilation options can be found here [3]. The tests were done on a
>> 32-bit machine
>>
>> Can someone give a look at the testing script and determine why it is
>> slow?!
>>
>> Thank you for the support, and I must say that PyPy is an amazing
>> project !!!!
> I've attached a slightly modified version of your script that uses
> timeit to measure the time, and takes an argument to specify the number
> of repeats...
>
> This shows that PyPy gets faster as the JIT kicks in:
>   python2.7    1000  0.33
>   python2.7   10000  0.33
>   python3.2    1000  0.41
>   python3.2   10000  0.40
>   pypy         1000  1.29
>   pypy        10000  0.22
>   pypy       100000  0.08
>
> Hope that helps
>
> David

From gbowyer at fastmail.co.uk  Fri Sep  9 00:24:32 2011
From: gbowyer at fastmail.co.uk (Greg Bowyer)
Date: Thu, 08 Sep 2011 15:24:32 -0700
Subject: [pypy-dev] Errors running pypy with ctype library
In-Reply-To: 
References: <4E66B039.8040608@fastmail.co.uk>
Message-ID: <4E6940A0.5000400@fastmail.co.uk>

Humm interesting, I wonder why it works in CPython; when I get the
chance I will try making those changes and see if pypy works.

On 06/09/11 23:57, Amaury Forgeot d'Arc wrote:
> 2011/9/7 Greg Bowyer
>
>     Hi all, I have a rather interesting in-house networking tool that
>     uses pcap to sniff packets, take them into twisted and replay them
>     against a target.
>
>     Internally the tight loop for packet reassembly is currently run
>     via twisted and some custom parsing and packet reconstruction
>     code. I have been investigating whether I can make this code faster
>     _without_ reimplementing the capture part in C; as such I think I
>     have two options:
>
>     * Pypy (which I would prefer, as it means that I hopefully will
>     gain performance improvements over time, as well as JIT
>     acceleration throughout the code)
>     * Cython (which will let me change the main loop to be mostly C
>     without having to write a lot of C)
>
>     The tool currently uses an old-style CPython C extension to bind
>     python to pcap; since this will be slow in pypy, I found the first
>     semi-implemented ctypes pcap binding from google code here
>     (http://code.google.com/p/pcap/) (I didn't write it so it may be
>     broken)
>
>     The following test code works fine on CPython 2.7
>
> The pcap module has an important issue; pcap_open_live() contains this
> code:
>     error=c_char_p()
>     handle=pcap_c_funcs.pcap_open_live(source,snaplen,promisc,to_ms,error)
> Which is wrong: according to the man page, the "error" parameter
> "is assumed to be able to hold at least PCAP_ERRBUF_SIZE chars"
> which is not the case here, NULL is passed instead and bad things will
> happen at runtime.
>
> pcap should be modified, probably with something like "error =
> create_string_buffer(256)"
>
> --
> Amaury Forgeot d'Arc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lac at openend.se  Thu Sep  8 18:36:13 2011
From: lac at openend.se (Laura Creighton)
Date: Thu, 8 Sep 2011 18:36:13 +0200
Subject: [pypy-dev] PyCON UK Sept 24 + 25
Message-ID: <201109081636.p88GaDWP012036@theraft.openend.se>

John Pinner wants to know if any of us are coming and will there be a
PyPy sprint.

Laura

From arigo at tunes.org  Thu Sep  8 20:02:51 2011
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 8 Sep 2011 20:02:51 +0200
Subject: [pypy-dev] PyCON UK Sept 24 + 25
In-Reply-To: <201109081636.p88GaDWP012036@theraft.openend.se>
References: <201109081636.p88GaDWP012036@theraft.openend.se>
Message-ID: 

Hi Laura,

On Thu, Sep 8, 2011 at 6:36 PM, Laura Creighton wrote:
> John Pinner wants to know if any of us are coming and will there be a PyPy
> sprint.

I am not --- England is no longer on my yearly road nowadays...

Also, maybe it's worth recalling: the "classical PyPy sprint" format we
all love is organized by someone more or less local and is around one
week long.  If anyone in Europe has interest in PyPy and would like to
contribute something particular, organizing a week of sprint around his
or her place is an excellent way to get to know us :-)

A bientôt,

Armin.
From tobami at googlemail.com  Thu Sep  8 20:50:39 2011
From: tobami at googlemail.com (Miquel Torres)
Date: Thu, 8 Sep 2011 20:50:39 +0200
Subject: [pypy-dev] speed and 1.6
In-Reply-To: <4E689A24.6060107@gmail.com>
References: <4E6879B2.5070804@gmail.com> <4E688FF8.6050009@gmail.com>
	<4E689A24.6060107@gmail.com>
Message-ID: 

Done, I tagged revision 46161:eb30a0ef328e (1st of August) as PyPy 1.6.
Can be seen now on the start page.

Cheers,
Miquel

2011/9/8 Antonio Cuni :
> On 08/09/11 12:15, Miquel Torres wrote:
>>
>> which sadly doesn't have data on speed.pypy.org (not all data saved on
>> the removed environment was kept, sorry). The August 1st revision is
>> probably not acceptable to portray as being 1.6 ...
>
> looking at the graphs, I don't see any big difference between august 1st and
> august 3rd, so I think we could just use that and be happy.
>
> ciao,
> anto
>

From anto.cuni at gmail.com  Fri Sep  9 09:15:13 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Fri, 09 Sep 2011 09:15:13 +0200
Subject: [pypy-dev] speed and 1.6
In-Reply-To: 
References: <4E6879B2.5070804@gmail.com> <4E688FF8.6050009@gmail.com>
	<4E689A24.6060107@gmail.com>
Message-ID: <4E69BD01.7040301@gmail.com>

On 08/09/11 20:50, Miquel Torres wrote:
> Done, I tagged revision 46161:eb30a0ef328e (1st of August) as PyPy
> 1.6. Can be seen now on the start page.

thank you!

From arigo at tunes.org  Sun Sep 11 10:36:51 2011
From: arigo at tunes.org (Armin Rigo)
Date: Sun, 11 Sep 2011 10:36:51 +0200
Subject: [pypy-dev] CTypes backend for Cython Status
In-Reply-To: <20110908041824.GA29645@ubuntu>
References: <20110908041824.GA29645@ubuntu>
Message-ID: 

Hi Romain,

Can you give again the location of your work?  I have
https://github.com/hardshooter/CythonCTypesBackend but I would like to
be sure it is the most recent location.  If so, then I'm a bit confused
because I don't find more than three tests.  Where are the tests?
(Sorry, anyone with some git knowledge would know how to diff your
branch and the original Cython, but I don't...)

A bientôt,

Armin.

From sanxiyn at gmail.com  Sun Sep 11 11:25:44 2011
From: sanxiyn at gmail.com (Seo Sanghyeon)
Date: Sun, 11 Sep 2011 18:25:44 +0900
Subject: [pypy-dev] CTypes backend for Cython Status
In-Reply-To: 
References: <20110908041824.GA29645@ubuntu>
Message-ID: 

2011/9/11 Armin Rigo :
> Can you give again the location of your work?  I have
> https://github.com/hardshooter/CythonCTypesBackend but I would like to
> be sure it is the most recent location.  If so, then I'm a bit
> confused because I don't find more than three tests.  Where are the
> tests?  (Sorry, anyone with some git knowledge would know how to diff
> your branch and the original Cython, but I don't...)

I diffed the branch against original Cython, and there are just three
tests in Cython/CTypesBackend/Tests.

I think the idea is to reuse the Cython tests under the tests directory.
But doesn't the CTypes backend need its own unit tests, in addition to
the functional tests of .pyx files in the Cython test suite?
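(For reference, the diff was along these lines --- from memory, and
assuming the main Cython mirror on GitHub:)

    git clone https://github.com/hardshooter/CythonCTypesBackend
    cd CythonCTypesBackend
    git remote add upstream https://github.com/cython/cython
    git fetch upstream
    git diff upstream/master...HEAD --stat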
If so, then I'm a bit > confused because I don't find more than three tests. Where are the > tests? (Sorry, anyone with some git knowledge would know how to diff > your branch and the original Cython, but I don't...) > > > A bient?t, > > Armin. Hi Yes my most recent work is there, I wrote only 3 tests (and it's bad) but I'm also running the Cython test suite (all the tests are in the tests directory at the root of the repository) and even though it's not "unit" tests it should provide a very good coverage of the code (as you might guess I don't pass all of them). Romain From khamenya at gmail.com Mon Sep 12 00:29:06 2011 From: khamenya at gmail.com (Valery Khamenya) Date: Mon, 12 Sep 2011 00:29:06 +0200 Subject: [pypy-dev] CUDA/OpenCL under PyPy Message-ID: Hi all, (replying, please, Cc to me ) I was quite surprised to see that cooperhead could be compiled and installed OK under PyPy. Of course it didn't work, because micronumpy is still young: "AttributeError: 'module' object has no attribute 'float64' " Did anyone try to execute some numeric stuff using CUDA from PyPy ? CUDA + PyPy -- it would be just fantastic. best regards -- Valery A.Khamenya -------------- next part -------------- An HTML attachment was scrubbed... URL: From orangewarrior at gmail.com Mon Sep 12 02:26:28 2011 From: orangewarrior at gmail.com (=?ISO-8859-2?Q?=A3ukasz_Ligowski?=) Date: Mon, 12 Sep 2011 02:26:28 +0200 Subject: [pypy-dev] CUDA/OpenCL under PyPy In-Reply-To: References: Message-ID: Hello, 2011/9/12 Valery Khamenya : > > (replying,?please, Cc to me ) > I was quite surprised to see that cooperhead could be compiled and installed > OK under PyPy. > Of course it didn't work, because micronumpy is still young: > ? ? ?"AttributeError: 'module' object has no attribute 'float64' " > Did anyone try to execute some numeric stuff using CUDA from PyPy ? > CUDA + PyPy -- it would be just fantastic. CUDA through ctypes works. Best regards, L From kwatford at gmail.com Mon Sep 12 05:50:20 2011 From: kwatford at gmail.com (Ken Watford) Date: Sun, 11 Sep 2011 23:50:20 -0400 Subject: [pypy-dev] CUDA/OpenCL under PyPy In-Reply-To: References: Message-ID: I started a project called PyCL about two months ago. It's OpenCL through ctypes, and it works with PyPy. I don't believe it currently works with the new numpy stuff, but the standard Python array module should work. It's available through the cheeseshop, and there's a repository for it here: https://bitbucket.org/kw/pycl I can't say I've had any time to work on it much in last month. Image support wasn't quite ready yet, last I recall, but doing basic stuff with buffers and kernels should work. Though last time I checked it ran generally slower in PyPy than CPython. Haven't checked it since 1.6 came out, though. On Sun, Sep 11, 2011 at 6:29 PM, Valery Khamenya wrote: > Hi all, > (replying,?please, Cc to me ) > I was quite surprised to see that cooperhead could be compiled and installed > OK under PyPy. > Of course it didn't work, because micronumpy is still young: > ? ? ?"AttributeError: 'module' object has no attribute 'float64' " > Did anyone try to execute some numeric stuff using CUDA from PyPy ? > CUDA + PyPy -- it would be just fantastic. 
> best regards
> --
> Valery A.Khamenya
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From khamenya at gmail.com  Mon Sep 12 08:08:38 2011
From: khamenya at gmail.com (Valery Khamenya)
Date: Mon, 12 Sep 2011 08:08:38 +0200
Subject: [pypy-dev] CUDA/OpenCL under PyPy
In-Reply-To: 
References: 
Message-ID: 

Hi,

what you and Łukasz Ligowski are saying is just amazing. It means that
there are generally no problems with PyPy and CUDA.

best regards
--
Valery A.Khamenya

On Mon, Sep 12, 2011 at 5:50 AM, Ken Watford wrote:
> I started a project called PyCL about two months ago. It's OpenCL
> through ctypes, and it works with PyPy. I don't believe it currently
> works with the new numpy stuff, but the standard Python array module
> should work.
>
> It's available through the cheeseshop, and there's a repository for it
> here:
> https://bitbucket.org/kw/pycl
>
> I can't say I've had any time to work on it much in last month. Image
> support wasn't quite ready yet, last I recall, but doing basic stuff
> with buffers and kernels should work. Though last time I checked it
> ran generally slower in PyPy than CPython. Haven't checked it since
> 1.6 came out, though.
>
> On Sun, Sep 11, 2011 at 6:29 PM, Valery Khamenya
> wrote:
> > Hi all,
> > (replying, please, Cc to me )
> > I was quite surprised to see that cooperhead could be compiled and
> > installed OK under PyPy.
> > Of course it didn't work, because micronumpy is still young:
> >     "AttributeError: 'module' object has no attribute 'float64' "
> > Did anyone try to execute some numeric stuff using CUDA from PyPy ?
> > CUDA + PyPy -- it would be just fantastic.
> > best regards
> > --
> > Valery A.Khamenya
> >
> > _______________________________________________
> > pypy-dev mailing list
> > pypy-dev at python.org
> > http://mail.python.org/mailman/listinfo/pypy-dev
> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org  Mon Sep 12 09:43:21 2011
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 12 Sep 2011 09:43:21 +0200
Subject: [pypy-dev] "checkout benchmarks failed"
Message-ID: 

Hi all (particularly Fijal or Antonio),

Can you explain the current state of the benchmarks, and possibly fix
it? As far as I understand it runs fine but no longer updates the
benchmarks themselves:

    http://buildbot.pypy.org/summary?category=benchmark-run

A bientôt,

Armin.

From alex.gaynor at gmail.com  Mon Sep 12 09:45:56 2011
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Mon, 12 Sep 2011 03:45:56 -0400
Subject: [pypy-dev] "checkout benchmarks failed"
In-Reply-To: 
References: 
Message-ID: 

Is there a reason we don't switch to doing an hg checkout from bitbucket?
Those seem to be more stable than the SVN ones.

Alex

On Mon, Sep 12, 2011 at 3:43 AM, Armin Rigo wrote:
> Hi all (particularly Fijal or Antonio),
>
> Can you explain the current state of the benchmarks, and possibly fix
> it? As far as I understand it runs fine but no longer updates the
> benchmarks themselves:
>
>     http://buildbot.pypy.org/summary?category=benchmark-run
>
>
> A bientôt,
>
> Armin.
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev

--
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law."
-- Cicero
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fijall at gmail.com  Mon Sep 12 09:52:58 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 12 Sep 2011 09:52:58 +0200
Subject: [pypy-dev] "checkout benchmarks failed"
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 12, 2011 at 9:45 AM, Alex Gaynor wrote:
> Is there a reason we don't switch to doing an hg checkout from bitbucket?
> Those seem to be more stable than the SVN ones.
> Alex

Anto failed to have 2 mercurial checkouts (pypy & hg) in one buildbot
build. I'm not sure what the actual reason for that is (seems like you
can just invoke hg to me, but I did not try).

Cheers,
fijal

> On Mon, Sep 12, 2011 at 3:43 AM, Armin Rigo wrote:
>>
>> Hi all (particularly Fijal or Antonio),
>>
>> Can you explain the current state of the benchmarks, and possibly fix
>> it? As far as I understand it runs fine but no longer updates the
>> benchmarks themselves:
>>
>>     http://buildbot.pypy.org/summary?category=benchmark-run
>>
>>
>> A bientôt,
>>
>> Armin.
>> _______________________________________________
>> pypy-dev mailing list
>> pypy-dev at python.org
>> http://mail.python.org/mailman/listinfo/pypy-dev
>
> --
> "I disapprove of what you say, but I will defend to the death your right to
> say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> "The people's good is the highest law." -- Cicero
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From anto.cuni at gmail.com  Mon Sep 12 11:01:06 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Mon, 12 Sep 2011 11:01:06 +0200
Subject: [pypy-dev] "checkout benchmarks failed"
In-Reply-To: 
References: 
Message-ID: <4E6DCA52.4020607@gmail.com>

On 12/09/11 09:52, Maciej Fijalkowski wrote:
> On Mon, Sep 12, 2011 at 9:45 AM, Alex Gaynor wrote:
>> Is there a reason we don't switch to doing an hg checkout from bitbucket?
>> Those seem to be more stable than the SVN ones.
>> Alex
>
> Anto failed to have 2 mercurial checkouts (pypy & hg) in one buildbot
> build. I'm not sure what the actual reason for that is (seems like you
> can just invoke hg to me, but I did not try).

yes, I tried to have two checkouts in parallel but failed. I don't remember
the details, only that it looks easy (how hard can it be?), but then I
encountered a herd of yaks to shave.

After two days of fighting, I gave up and decided to use the bitbucket SVN
interface, but it turns out that it's not very stable :-(.

Anyone feel free to try again to use hg, but I'd prefer not to go there by
myself :-)

From arigo at tunes.org  Mon Sep 12 11:34:34 2011
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 12 Sep 2011 11:34:34 +0200
Subject: [pypy-dev] "checkout benchmarks failed"
In-Reply-To: <4E6DCA52.4020607@gmail.com>
References: <4E6DCA52.4020607@gmail.com>
Message-ID: 

Hi Anto,

On Mon, Sep 12, 2011 at 11:01 AM, Antonio Cuni wrote:
> yes, I tried to have two checkouts in parallel but failed. I don't remember
> the details, only that it looks easy (how hard can it be?), but then I
> encountered a herd of yaks to shave.

Of course, now that I tried, it worked after 10 minutes... I suppose
that it's either an example of getting stuck on some problem while not
seeing the obvious solution elsewhere, or else (possibly) I messed up
something else and didn't realize.

A bientôt,

Armin.
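For context, the two checkouts in question amount to two independent hg
clones living side by side in the build directory. A minimal sketch of
what such a build step needs to do (the bitbucket URLs are assumptions
based on the repositories discussed in this thread, not the actual
buildbot configuration):

    import os
    import subprocess

    # pypy itself plus the benchmark suite, cloned side by side
    REPOS = {
        "pypy": "https://bitbucket.org/pypy/pypy",
        "benchmarks": "https://bitbucket.org/pypy/benchmarks",
    }

    def update_checkout(name, url):
        if not os.path.isdir(name):
            # first run: clone the repository
            subprocess.check_call(["hg", "clone", url, name])
        else:
            # later runs: pull and update the existing clone
            subprocess.check_call(["hg", "-R", name, "pull", "-u"])

    for name, url in sorted(REPOS.items()):
        update_checkout(name, url)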
From anto.cuni at gmail.com Mon Sep 12 15:49:18 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 12 Sep 2011 15:49:18 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments Message-ID: <4E6E0DDE.9070901@gmail.com> Hello pypy-dev, in the past weeks, I and the other core developers have talked a lot about supporting Python 3 in PyPy. The task is huge and it's unlikely that it will be completed shortly based only on volunteer work, so we came up with the following proposal, which splits the work into several steps and sub-steps, togheter with an estimate of how much money is needed to complete each one. The plan is to publish the proposal on the blog, and publicly ask for donations, with the hope to collect enough to cover the whole work. However, before putting it on the blog we would like to ask for your comments/thoughts about it. Any feedback will be appreciated! Thank you, Antonio (on the behalf of the PyPy team) Python 3 on PyPy ================= The release of Python 3 has been a major undertaking for the Python community, both technically and socially. So far PyPy implements only the version 2 of the Python language, which creates a very dangerous potential for a community split. We believe that by supporting both versions of the language we will help to fill the gap. This project should help both the part of the Python community which is reluctant to use PyPy because it does not support Python 3, and the part which is reluctant to move to Python 3 because they are already PyPy users. However, porting PyPy to Python 3 requires a lot of work, and it will take a long time before we can complete it by relying only on volunteer work. Thus, we are asking the community to help with funding the necessary work, to make it happen faster. High level description ----------------------- The goal of this project is to write an interpreter that interprets version 3 of Python language. To be precise we would aim at having Python 3.2 interpreter together in the same codebase as python 2.7 one. At the end of the project, it will be possible to decide at translation time whether to build an interpreter which supports Python 2.7 or Python 3.2 and both versions will be nightly tested and available from nightly builds. The focus of this project is on compatibility, not performance. In particular, it might be possible that the resulting Python 3 interpreter will be slower than the Python 2 one. If needed, optimizing and making it more JIT friendly will be the scope of a separate project. Step 1: core language ---------------------- In this step, we implement all the changes to the core language, i.e. everything which is not in the extension modules. This includes, but it is not necessarily limited to the following items, which are split into two big areas: * **Sub-step 1.1**: string vs unicode and I/O: - adapt the existing testing infrastructure to support running Python 3 code - string vs bytes: the interpreter uses unicode strings everywhere. - the ``print`` function - ``open`` is now an alias for ``io.open``, removal of the old file type. 
- string formatting (for the part which is not already implemented in Python 2.7) - the _io module (for the part which is not already implemented in Python 2.7) - syntactic changes to make ``io.py`` importable (in particular: ``metaclass=...`` in class declarations) - **Estimate cost**: 35.000 $ * **Sub-step 1.2**: other syntactic changes, builtin types and functions, exceptions: - views and iterators instead of lists (e.g., ``dict.items()``, ``map``, ``range`` & co.) - new rules for ordering comparisons - removal of old-style classes - int/long unification - function annotations - smaller syntax changes, such as keyword-only arguments, ``nonlocal``, extended iterable unpacking, set literals, dict and set comprehension, etc. - changes to exceptions: ``__traceback__`` attribute, chained exceptions, ``del e`` at the end of the except block, etc. - changes to builtins: ``super``, ``input``, ``next()``, etc. - improved ``with`` statement - **Estimate cost**: 25.000 $ Note that the distinction between sub-steps 1.1 and 1.2 is blur, and it might be possible that during the development we will decide to move items between the two sub-steps, as needed. For more information, look at the various "What's new" documents: - http://docs.python.org/py3k/whatsnew/3.0.html - http://docs.python.org/py3k/whatsnew/3.1.html - http://docs.python.org/py3k/whatsnew/3.2.html **Total estimate cost**: 60.000 $ Step 2: extension modules -------------------------- In this step, we implement all the changes to the extension modules which are written in C in CPython. This includes, but it is not necessarily limited to: - ``collections``, ``gzip``, ``bz2``, ``decimal``, ``itertools``, ``re``, ``functools``, ``pickle``, ``_elementtree``, ``math``, etc. **Estimate cost**: this is hard to do at this point, we will be able to give a more precise estimate as soon as Step 1 is completed. As a reference, it should be possible to complete it with 35.000 $ Step 3: cpyext -------------- The ``cpyext`` module allows to load CPython C extensions in PyPy. Since the C API changed a lot between Python 2.7 and Python 3.2, ``cpyext`` will not work out of the box in the Python 3 PyPy interpreter. In this step, we will adapt it to work with Python 3 as well. Note that, even for Python 2, ``cpyext`` is still in a beta state. In particular, not all extension modules compile and load correctly. As a consequence, the same will be true for Python 3 as well. As a general rule, we expect that if a Python 2 module works with ``cpyext``, the corresponding Python 3 version will also work when this step is completed, although the details might vary depending on the exact C extension module. **Estimate cost**: 10.000 $ From yselivanov.ml at gmail.com Mon Sep 12 20:56:33 2011 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 12 Sep 2011 14:56:33 -0400 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: <4E6E0DDE.9070901@gmail.com> References: <4E6E0DDE.9070901@gmail.com> Message-ID: Hello Antonio, And what are the rough time-estimates? Thank you, -Yury On 2011-09-12, at 9:49 AM, Antonio Cuni wrote: > Hello pypy-dev, > > in the past weeks, I and the other core developers have talked a lot about > supporting Python 3 in PyPy. The task is huge and it's unlikely that it will > be completed shortly based only on volunteer work, so we came up with the > following proposal, which splits the work into several steps and sub-steps, > togheter with an estimate of how much money is needed to complete each one. 
> > The plan is to publish the proposal on the blog, and publicly ask for > donations, with the hope to collect enough to cover the whole work. However, > before putting it on the blog we would like to ask for your comments/thoughts > about it. Any feedback will be appreciated! > > Thank you, > Antonio (on the behalf of the PyPy team) > > > Python 3 on PyPy > ================= > > The release of Python 3 has been a major undertaking for the Python community, > both technically and socially. So far PyPy implements only the version 2 of > the Python language, which creates a very dangerous potential for a community > split. We believe that by supporting both versions of the language we will > help to fill the gap. > > This project should help both the part of the Python community which is > reluctant to use PyPy because it does not support Python 3, and the part which > is reluctant to move to Python 3 because they are already PyPy users. > > However, porting PyPy to Python 3 requires a lot of work, and it will take a > long time before we can complete it by relying only on volunteer work. Thus, > we are asking the community to help with funding the necessary work, to make > it happen faster. > > High level description > ----------------------- > > The goal of this project is to write an interpreter that interprets version > 3 of Python language. To be precise we would aim at having Python 3.2 > interpreter together in the same codebase as python 2.7 one. > > At the end of the project, it will be possible to decide at translation time > whether to build an interpreter which supports Python 2.7 or Python 3.2 and > both versions will be nightly tested and available from nightly builds. > > The focus of this project is on compatibility, not performance. In > particular, it might be possible that the resulting Python 3 interpreter will > be slower than the Python 2 one. If needed, optimizing and making it more JIT > friendly will be the scope of a separate project. > > Step 1: core language > ---------------------- > > In this step, we implement all the changes to the core language, > i.e. everything which is not in the extension modules. This includes, but it > is not necessarily limited to the following items, which are split into two > big areas: > > * **Sub-step 1.1**: string vs unicode and I/O: > > - adapt the existing testing infrastructure to support running Python 3 code > > - string vs bytes: the interpreter uses unicode strings everywhere. > > - the ``print`` function > > - ``open`` is now an alias for ``io.open``, removal of the old file type. > > - string formatting (for the part which is not already implemented in Python > 2.7) > > - the _io module (for the part which is not already implemented in Python > 2.7) > > - syntactic changes to make ``io.py`` importable (in particular: > ``metaclass=...`` in class declarations) > > - **Estimate cost**: 35.000 $ > > * **Sub-step 1.2**: other syntactic changes, builtin types and functions, > exceptions: > > - views and iterators instead of lists (e.g., ``dict.items()``, ``map``, > ``range`` & co.) > > - new rules for ordering comparisons > > - removal of old-style classes > > - int/long unification > > - function annotations > > - smaller syntax changes, such as keyword-only arguments, ``nonlocal``, > extended iterable unpacking, set literals, dict and set comprehension, etc. > > - changes to exceptions: ``__traceback__`` attribute, chained exceptions, > ``del e`` at the end of the except block, etc. 
> > - changes to builtins: ``super``, ``input``, ``next()``, etc. > > - improved ``with`` statement > > - **Estimate cost**: 25.000 $ > > > Note that the distinction between sub-steps 1.1 and 1.2 is blur, and it might be > possible that during the development we will decide to move items between the > two sub-steps, as needed. > > For more information, look at the various "What's new" documents: > > - http://docs.python.org/py3k/whatsnew/3.0.html > > - http://docs.python.org/py3k/whatsnew/3.1.html > > - http://docs.python.org/py3k/whatsnew/3.2.html > > **Total estimate cost**: 60.000 $ > > > Step 2: extension modules > -------------------------- > > In this step, we implement all the changes to the extension modules which are > written in C in CPython. This includes, but it is not necessarily limited to: > > - ``collections``, ``gzip``, ``bz2``, ``decimal``, ``itertools``, ``re``, > ``functools``, ``pickle``, ``_elementtree``, ``math``, etc. > > **Estimate cost**: this is hard to do at this point, we will be able to give a > more precise estimate as soon as Step 1 is completed. As a reference, it > should be possible to complete it with 35.000 $ > > Step 3: cpyext > -------------- > > The ``cpyext`` module allows to load CPython C extensions in PyPy. Since the > C API changed a lot between Python 2.7 and Python 3.2, ``cpyext`` will not > work out of the box in the Python 3 PyPy interpreter. In this step, we will > adapt it to work with Python 3 as well. > > Note that, even for Python 2, ``cpyext`` is still in a beta state. In > particular, not all extension modules compile and load correctly. As a > consequence, the same will be true for Python 3 as well. As a general rule, > we expect that if a Python 2 module works with ``cpyext``, the corresponding > Python 3 version will also work when this step is completed, although the > details might vary depending on the exact C extension module. > > **Estimate cost**: 10.000 $ > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From benjamin at python.org Mon Sep 12 22:11:27 2011 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 12 Sep 2011 16:11:27 -0400 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: References: <4E6E0DDE.9070901@gmail.com> Message-ID: 2011/9/12 Yury Selivanov : > Hello Antonio, > > And what are the rough time-estimates? This is partial a function of how fast the funding comes in. :) -- Regards, Benjamin From dirkjan at ochtman.nl Mon Sep 12 22:44:10 2011 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 12 Sep 2011 22:44:10 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: <4E6E0DDE.9070901@gmail.com> References: <4E6E0DDE.9070901@gmail.com> Message-ID: (stupidly sent this to Antonio only before, once more for the list...) On Mon, Sep 12, 2011 at 15:49, Antonio Cuni wrote: > be completed shortly based only on volunteer work, so we came up with the > following proposal, which splits the work into several steps and sub-steps, > togheter with an estimate of how much money is needed to complete each one. It might be a tad more friendly to present actual costs in a man-hour estimate, perhaps something like this? - - **Estimate cost**: 35.000 $ + - **Estimate cost**: 350 hours (+/- 35000 $) Or at the very least state elsewhere how you came up with the financial numbers... 
This may also be conducive for companies that would rather loan you a capable engineer for a few months, perhaps? One textual nit: "Note that the distinction between sub-steps 1.1 and 1.2 is blur, and it": blur -> blurry. In general, great idea to take a whack at putting numbers on this, helping the community understand how much work this project is. Cheers, Dirkjan From jfcgauss at gmail.com Mon Sep 12 23:03:58 2011 From: jfcgauss at gmail.com (Serhat Sevki Dincer) Date: Tue, 13 Sep 2011 00:03:58 +0300 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments Message-ID: > From:?Dirkjan Ochtman > To:?Antonio Cuni > Date:?Mon, 12 Sep 2011 22:44:10 +0200 > Subject:?Re: [pypy-dev] PyPy support for Python 3 -- request for comments > Or at the very least state elsewhere how you came up with the > financial numbers... This may also be conducive for companies that > would rather loan you a capable engineer for a few months, perhaps? > > One textual nit: "Note that the distinction between sub-steps 1.1 and > 1.2 is blur, and it": blur -> blurry. > > In general, great idea to take a whack at putting numbers on this, > helping the community understand how much work this project is. http://www.ohloh.net/p/pypy is a reference for current work's value, I guess. Must be useful for the upcoming estimate.. From dirkjan at ochtman.nl Mon Sep 12 23:06:56 2011 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Mon, 12 Sep 2011 23:06:56 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: References: Message-ID: On Mon, Sep 12, 2011 at 23:03, Serhat Sevki Dincer wrote: > http://www.ohloh.net/p/pypy is a reference for current work's value, I > guess. Must be useful for the upcoming estimate.. Meh, the numbers tools like that come up with are generally just based on counting lines, which I don't think is a particularly accurate indicator of the amount of work done to get there. Cheers, Dirkjan From zachkelling at gmail.com Tue Sep 13 03:53:08 2011 From: zachkelling at gmail.com (Zach Kelling) Date: Mon, 12 Sep 2011 20:53:08 -0500 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments Message-ID: I'd gladly donate if you throw your proposal for funding up on kickstarter.com, I'd not be surprised if there were quite a few other people willing to donate to get Python 3 supported. -- Zach Kelling http://twitter.com/zeekay -------------- next part -------------- An HTML attachment was scrubbed... URL: From dirkjan at ochtman.nl Tue Sep 13 09:25:32 2011 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 13 Sep 2011 09:25:32 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: References: Message-ID: On Tue, Sep 13, 2011 at 03:53, Zach Kelling wrote: > I'd gladly donate if you throw your proposal for funding up on > kickstarter.com, I'd not be surprised if there were quite a few other people > willing to donate to get Python 3 supported. I thought about kickstarter.com for this, too, but I think there aren't that many successful 100k projects on Kickstarter. Plus, this kind of project is probably more interesting to large companies than to individuals, and Kickstarter is not the best way for companies to donate. 
Cheers, Dirkjan From fijall at gmail.com Tue Sep 13 09:23:07 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Sep 2011 09:23:07 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: References: Message-ID: On Tue, Sep 13, 2011 at 3:53 AM, Zach Kelling wrote: > I'd gladly donate if you throw your proposal for funding up on > kickstarter.com, I'd not be surprised if there were quite a few other people > willing to donate to get Python 3 supported. We can't use kickstarter - we'll put a donation on our website. kickstarter requires you to be a US resident and most of us aren't. > -- > Zach Kelling > http://twitter.com/zeekay > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > From yoann at linkeos.com Tue Sep 13 12:42:39 2011 From: yoann at linkeos.com (Yoann) Date: Tue, 13 Sep 2011 12:42:39 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments Message-ID: <4E6F339F.309@linkeos.com> Hello, We just launched a website that can help you with the funding process : https://elveos.org Elveos is a website created specifically to fund open source software. Instead of using a simple donation box, it offers a fully functional fundraising tool, with capacity to track the progress, to comment, to check who made contributions ... Of course this come at a cost but it will also be a huge timesaver on your side as you will just have to create the fundraising (and advertise for it) and we'll take care of everything else, including the payment issues. We also provide you with tools helping you communicate on the campaign such as small buttons than can easily be inserted into a blog. As you can imagine, working with pypy would be an important project for us, and we'll be more than happy to help you using elveos, and can develop new functionalities if you see fit. We created an example on pre-production website of what your fundraising campaign could look like : https://test.elveos.org/features/635/description (you have to accept the certificate for the test server). We'll read answers on the list, but if you want to discuss directly with us, you can find on the #elveos irc channel on the freenode.net server. P.S: A few more technical details : * We are based in France and use euro as our operating currency. We accept payment from most currency though. * We accept payments by credit cards or wire transfers (wire transfers didn't make it on live yet, they're coming soon) Regards, Yoann for the elveos team From cfbolz at gmx.de Tue Sep 13 13:35:11 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 13 Sep 2011 13:35:11 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates Message-ID: <4E6F3FEF.5080600@gmx.de> Hi all, Laura asked me to post the following, since she has problems with her mail: ---------------------------------------------------------------------- Some of us need to be in Stockholm Oct 24 and 28. Anto needs to be with his family Nov 1. Fscons starts Friday Nov 11 in G?teborg, and we're giving a talk. Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd through Thursday Nov 10. fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be speaking yet. What do the rest of you think of this idea? 
Laura

From anto.cuni at gmail.com  Tue Sep 13 14:09:05 2011
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Tue, 13 Sep 2011 14:09:05 +0200
Subject: [pypy-dev] Gothenburg Sprint Dates
In-Reply-To: <4E6F3FEF.5080600@gmx.de>
References: <4E6F3FEF.5080600@gmx.de>
Message-ID: <4E6F47E1.7070302@gmail.com>

On 13/09/11 13:35, Carl Friedrich Bolz wrote:
> Some of us need to be in Stockholm Oct 24 and 28.
> Anto needs to be with his family Nov 1.
> Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk.
>
> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd through
> Thursday Nov 10.
>
> fscons: http://fscons.org/ Nov 11-13   Not sure what day we will be speaking
> yet.
>
> What do the rest of you think of this idea?

It should work for me, although I might arrive a bit later. If needed, I
can also stay for fscons and be a co-speaker. Who is going to speak and
about what?

ciao,
Anto

From fijall at gmail.com  Tue Sep 13 14:58:58 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 13 Sep 2011 14:58:58 +0200
Subject: [pypy-dev] Gothenburg Sprint Dates
In-Reply-To: <4E6F47E1.7070302@gmail.com>
References: <4E6F3FEF.5080600@gmx.de> <4E6F47E1.7070302@gmail.com>
Message-ID: 

On Tue, Sep 13, 2011 at 2:09 PM, Antonio Cuni wrote:
> On 13/09/11 13:35, Carl Friedrich Bolz wrote:
>> Some of us need to be in Stockholm Oct 24 and 28.
>> Anto needs to be with his family Nov 1.
>> Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk.
>>
>> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd through
>> Thursday Nov 10.
>>
>> fscons: http://fscons.org/ Nov 11-13   Not sure what day we will be speaking
>> yet.
>> >> What do the rest of you think of this idea? > > It should work for me, although I might arrive a bit later. ?If needed, I > can also stay for fscons and be a co-speaker. ?Who is going to speak and > about what? > Surprisingly enough works for me. I have to be Nov 14th on the warsaw airport and 24th of Oct on Prague airport. From fijall at gmail.com Tue Sep 13 15:01:26 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Sep 2011 15:01:26 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: <4E6F339F.309@linkeos.com> References: <4E6F339F.309@linkeos.com> Message-ID: On Tue, Sep 13, 2011 at 12:42 PM, Yoann wrote: > Hello, > > We just launched a website that can help you with the funding process : > https://elveos.org > > Elveos is a website created specifically to fund open source software. > Instead of using a simple donation box, it offers a fully functional > fundraising tool, with capacity to track the progress, to comment, to check > who made contributions ... > > Of course this come at a cost but it will also be a huge timesaver on your > side as you will just have to create the fundraising (and advertise for it) > and we'll take care of everything else, including the payment issues. We > also provide you with tools helping you communicate on the campaign such as > small buttons than can easily be inserted into a blog. > > As you can imagine, working with pypy would be an important project for us, > and we'll be more than happy to help you using elveos, and can develop new > functionalities if you see fit. > > We created an example on pre-production website of what your fundraising > campaign could look like : https://test.elveos.org/features/635/description > ?(you have to accept the certificate for the test server). > > We'll read answers on the list, but if you want to discuss directly with us, > you can find on the #elveos irc channel on the freenode.net server. > > P.S: > A few more technical details : > > ?* We are based in France and use euro as our operating currency. We accept > payment from most currency though. > ?* We accept payments by credit cards or wire transfers (wire transfers > didn't make it on live yet, they're coming soon) > > > Regards, > Yoann for the elveos team > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Hi Terms & conditions not translated to english is a bit of a blocker for me to even evaluate it right now. Cheers, fijal From yoann at linkeos.com Tue Sep 13 15:24:48 2011 From: yoann at linkeos.com (Yoann) Date: Tue, 13 Sep 2011 15:24:48 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: References: <4E6F339F.309@linkeos.com> Message-ID: <4E6F59A0.2050900@linkeos.com> Le 13/09/2011 15:01, Maciej Fijalkowski a ?crit : > On Tue, Sep 13, 2011 at 12:42 PM, Yoann wrote: >> Hello, >> >> We just launched a website that can help you with the funding process : >> https://elveos.org >> >> Elveos is a website created specifically to fund open source software. >> Instead of using a simple donation box, it offers a fully functional >> fundraising tool, with capacity to track the progress, to comment, to check >> who made contributions ... >> >> Of course this come at a cost but it will also be a huge timesaver on your >> side as you will just have to create the fundraising (and advertise for it) >> and we'll take care of everything else, including the payment issues. 
We >> also provide you with tools helping you communicate on the campaign such as >> small buttons than can easily be inserted into a blog. >> >> As you can imagine, working with pypy would be an important project for us, >> and we'll be more than happy to help you using elveos, and can develop new >> functionalities if you see fit. >> >> We created an example on pre-production website of what your fundraising >> campaign could look like : https://test.elveos.org/features/635/description >> (you have to accept the certificate for the test server). >> >> We'll read answers on the list, but if you want to discuss directly with us, >> you can find on the #elveos irc channel on the freenode.net server. >> >> P.S: >> A few more technical details : >> >> * We are based in France and use euro as our operating currency. We accept >> payment from most currency though. >> * We accept payments by credit cards or wire transfers (wire transfers >> didn't make it on live yet, they're coming soon) >> >> >> Regards, >> Yoann for the elveos team >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > Hi > > Terms& conditions not translated to english is a bit of a blocker for > me to even evaluate it right now. > > Cheers, > fijal Hello, To give you the outlines of the terms content I'll provide you an english draft of the terms later today. The definitive version is in the pipeline of a lawyer, however I can't give you an ETA on their release date yet. Yoann From lac at openend.se Tue Sep 13 16:10:36 2011 From: lac at openend.se (Laura Creighton) Date: Tue, 13 Sep 2011 16:10:36 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: Message from Antonio Cuni of "Tue, 13 Sep 2011 14:09:05 +0200." <4E6F47E1.7070302@gmail.com> References: <4E6F3FEF.5080600@gmx.de><4E6F47E1.7070302@gmail.com> Message-ID: <201109131410.p8DEAa7D013014@theraft.openend.se> Fscons has asked for somebody to speak on PyPy. So far I have said 'yes' and nothing specific about the content. Laura From yoann at linkeos.com Tue Sep 13 19:26:50 2011 From: yoann at linkeos.com (=?ISO-8859-1?Q?Yoann_Pl=E9net?=) Date: Tue, 13 Sep 2011 19:26:50 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: <4E6F59A0.2050900@linkeos.com> References: <4E6F339F.309@linkeos.com> <4E6F59A0.2050900@linkeos.com> Message-ID: Hello, We made a quick translation of the terms of use. You can find them on: https://elveos.org/en/documentation/cgu As I told you before they are not definitive but give the outlines of what will be in the final version. We are available to discuss them, or anything else, on the irc channel #elveos on the freenode.net server. By the way, feel free to comment the terms of use if you see anything strange or disturbing. Regards, Yoann 2011/9/13 Yoann > Le 13/09/2011 15:01, Maciej Fijalkowski a ?crit : > > On Tue, Sep 13, 2011 at 12:42 PM, Yoann wrote: >> >>> Hello, >>> >>> We just launched a website that can help you with the funding process : >>> https://elveos.org >>> >>> Elveos is a website created specifically to fund open source software. >>> Instead of using a simple donation box, it offers a fully functional >>> fundraising tool, with capacity to track the progress, to comment, to >>> check >>> who made contributions ... 
>>> >>> Of course this come at a cost but it will also be a huge timesaver on >>> your >>> side as you will just have to create the fundraising (and advertise for >>> it) >>> and we'll take care of everything else, including the payment issues. We >>> also provide you with tools helping you communicate on the campaign such >>> as >>> small buttons than can easily be inserted into a blog. >>> >>> As you can imagine, working with pypy would be an important project for >>> us, >>> and we'll be more than happy to help you using elveos, and can develop >>> new >>> functionalities if you see fit. >>> >>> We created an example on pre-production website of what your fundraising >>> campaign could look like : https://test.elveos.org/** >>> features/635/description >>> (you have to accept the certificate for the test server). >>> >>> We'll read answers on the list, but if you want to discuss directly with >>> us, >>> you can find on the #elveos irc channel on the freenode.net server. >>> >>> P.S: >>> A few more technical details : >>> >>> * We are based in France and use euro as our operating currency. We >>> accept >>> payment from most currency though. >>> * We accept payments by credit cards or wire transfers (wire transfers >>> didn't make it on live yet, they're coming soon) >>> >>> >>> Regards, >>> Yoann for the elveos team >>> ______________________________**_________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> http://mail.python.org/**mailman/listinfo/pypy-dev >>> >>> Hi >> >> Terms& conditions not translated to english is a bit of a blocker for >> me to even evaluate it right now. >> >> Cheers, >> fijal >> > Hello, > > To give you the outlines of the terms content I'll provide you an english > draft of the terms later today. > The definitive version is in the pipeline of a lawyer, however I can't give > you an ETA on their release date yet. > > Yoann > > -- Yoann Pl?net 06 74 84 08 96 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yeomanyaacov at gmail.com Wed Sep 14 05:17:21 2011 From: yeomanyaacov at gmail.com (Yaacov Finkelman) Date: Tue, 13 Sep 2011 23:17:21 -0400 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: References: Message-ID: Thank you all for your work on Pypy! I have learned so much reading about the work that has been done on the project, and have enjoyed lurking on this list. According to my newbieish understanding of it, Pypy is a large RPython program running in Python 2 that runs an interpreter for Python 2. This project would allow the RPython program running in Python 2 to run an interpreter for 3. Do I have this right? If so what are the prospects for porting Pypy it self to run on 3? I think if I were a large company interested in fronting this kind of money I would want detailed information on how the money would be spent. How is it that the Pypy team having access to this money will lead to an implementation of Python 3? Sorry for my newbieish questions, and thank you again. Jacob On Tue, Sep 13, 2011 at 3:23 AM, Maciej Fijalkowski wrote: > On Tue, Sep 13, 2011 at 3:53 AM, Zach Kelling > wrote: > > I'd gladly donate if you throw your proposal for funding up on > > kickstarter.com, I'd not be surprised if there were quite a few other > people > > willing to donate to get Python 3 supported. > > We can't use kickstarter - we'll put a donation on our website. > kickstarter requires you to be a US resident and most of us aren't. 
> > > -- > > Zach Kelling > > http://twitter.com/zeekay > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Wed Sep 14 16:02:40 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 14 Sep 2011 16:02:40 +0200 Subject: [pypy-dev] PyPy support for Python 3 -- request for comments In-Reply-To: References: Message-ID: <4E70B400.3080007@gmail.com> Hello Yaacov, On 14/09/11 05:17, Yaacov Finkelman wrote: > Thank you all for your work on Pypy! I have learned so much reading about the > work that has been done on the project, and have enjoyed lurking on this list. > > According to my newbieish understanding of it, Pypy is a large RPython program > running in Python 2 that runs an interpreter for Python 2. This project would > allow the RPython program running in Python 2 to run an interpreter for 3. Do > I have this right? yes, you are right > If so what are the prospects for porting Pypy it self to > run on 3? there is no plan at the moment. For sure, we want to be able to translate pypy on top of pypy, so having a working py3 interpreter is a prerequisite. > I think if I were a large company interested in fronting this kind of money I > would want detailed information on how the money would be spent. How is it > that the Pypy team having access to this money will lead to an implementation > of Python 3? The plan is to hire one (or more) of the core developers, so that they will be able to work full time on this. We will setup a separate page to answer these questions, including what happens if we don't raise enough money, etc. ciao, Anto From e2lahav at gmail.com Wed Sep 14 16:12:47 2011 From: e2lahav at gmail.com (Elad Lahav) Date: Wed, 14 Sep 2011 10:12:47 -0400 Subject: [pypy-dev] Separate building of the C source files Message-ID: Hello, I am trying to build Pypy for an embedded platform, with its own build system. For that purpose, I would like to have the translation process run on a Linux/x86 box, generating the C source files, and then have the platform's own build system do C compilation and linking. With the 1.6 sources, I used the following command: $ python translate.py --opt-jit --source targetpypystandalone.py The source files were created under the /tmp directory. The first thing I am missing, though, is a makefile. Looking at the translator sources, it appears as though a GNU make-compatible makefile should have been created, but I cannot find one. Is such a makefile written to disk as part of the source generation stage, or does the translator use a different mechanism? Instead of using a makefile, I also tried manual compilation of the files under /tmp/usession-unknown-NUMBER/testing_1. The linking stage (with -lffi -lm -ldl -lpthread) fails with unresolved symbols, which I do not recognize from any of the common libraries. Nor could I find any documentation of the post-translation build process. Can anyone describe the C compilation and linking commands, or point me to the right place in the code/documentation? Thanks, --Elad -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Wed Sep 14 21:57:11 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Sep 2011 21:57:11 +0200 Subject: [pypy-dev] Separate building of the C source files In-Reply-To: References: Message-ID: Hi Elad, On Wed, Sep 14, 2011 at 4:12 PM, Elad Lahav wrote: > The source files were created under the /tmp directory. The first thing I am > missing, though, is a makefile. The Makefile should be in /tmp/usession-xxx/testing_1/. A bient?t, Armin. From e2lahav at gmail.com Wed Sep 14 22:08:53 2011 From: e2lahav at gmail.com (Elad Lahav) Date: Wed, 14 Sep 2011 16:08:53 -0400 Subject: [pypy-dev] Separate building of the C source files In-Reply-To: References: Message-ID: Thanks, Armin, but that's the first place I looked. There is no makefile there. --Elad On Wed, Sep 14, 2011 at 3:57 PM, Armin Rigo wrote: > Hi Elad, > > On Wed, Sep 14, 2011 at 4:12 PM, Elad Lahav wrote: > > The source files were created under the /tmp directory. The first thing I > am > > missing, though, is a makefile. > > The Makefile should be in /tmp/usession-xxx/testing_1/. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lionel at gamr7.com Thu Sep 15 13:59:20 2011 From: lionel at gamr7.com (Lionel Barret De Nazaris) Date: Thu, 15 Sep 2011 13:59:20 +0200 Subject: [pypy-dev] bounties for pypy In-Reply-To: References: <201106281751.p5SHpixC014361@theraft.openend.se> <201106290753.p5T7rMKh002901@theraft.openend.se> Message-ID: <4E71E898.5020608@gamr7.com> Add EUR200 to that. Where do I pay ? regards, -- Best regards, Lionel Barret de Nazaris Gamr7 - CEO === Create bigger cities faster with Urban PAD . follow us : blog , twitter , facebook Disclaimer: This message contains confidential and legally privileged information. This information is intended only for the addressee of this message. In case you have received this message in error, you should not disseminate, distribute or copy it. Please notify the sender immediately by e-mail and delete this message from your IT system. On 08/15/2011 03:03 PM, Ian Ozsvald wrote: > I'll re-open this earlier thread...Didrik Pinte of Enthought is > offering to match my suggested ?600 donation towards a numpy-pypy > project. He's offering it in a personal capacity (i.e. not as an > Enthought activity) out of his own pocket (and my donation would come > out of my Consultancy). His offer came out of a discussion on pypy's > merits at the London Financial Python Usergroup meet a few weeks back. > > So, you've got ?1,200 via two individuals ready as a gift towards > numpy integration, as/when someone can use the money. I won't promise > that my gift will always be available as my personal situation may be > changing in the next 6 weeks (so please - if you can use it - ask for > it soon!). > > Regards, > Ian (UK) > > On 29 June 2011 14:16, Ian Ozsvald wrote: >> I'm glad this thread is up. Laura - I'm the chap from Armin's talk who >> offered a monthly retainer for a year towards numpy integration (in my >> mind I'm offering ?50/month for 12 months). I spoke to you later and >> you mentioned the flattr site but having to do it each month is a bit >> of a pain (I know it is simple but I don't want to think about it...). >> >> So, for the record, I have ?600 sitting here with someone's name on >> it, I'll account for it as a marketing expense or something out of my >> company. 
I'm a one man consultancy, PyPy doesn't directly help me as >> I'm an A.I./science researcher (so I need numpy, IPython, matplotlib >> etc) but I believe strongly that it will help all of Python (and me in >> part) over time, so it is worth pledging some of my earnings towards >> the goal of eventual numpy integration. >> >> If I can pledge it to someone or a project then that's cool, if I >> should just move the money to someone's account then that's cool too. >> I'm quite happy to have my name down as ContributorNumber1ForNumpy if >> it helps you spread the word. >> >> Ian. >> ps. I posted the v0.1 PDF of my High Performance Python tutorial this >> morning (it is based on my EuroPython training session). It has a >> section on PyPy and I'd happily accept input if that section should be >> expanded: http://ianozsvald.com/2011/06/29/high-performance-pyethon-tutorial-v0-1-from-my-4-hour-tutorial-at-europython-2011/ >> >> On 29 June 2011 08:53, Laura Creighton wrote: >>>> The idea was also to possibly attract new developers ... for example, if >>>> there would be "10 days in money" for adapting py2exe, I am sure many wou >>>> ld >>>> jump to solve this puzzle. >>> This is sort of a bad example. Because py2exe embeds CPython, and >>> we wouldn't want to do that. So what we would probably want to do is >>> to make some general tool that willmake a windows binary, or a >>> mac one, and get rid of the need for bzfreeze and friends. So now >>> you are looking at a general embedding solution, and that is more >>> than 10 days worth of work. >>> >>> But I get the idea. >>> >>> >>>>> my dream was of a trustee service: after somebody commits to do the wor >>>> k, >>>> the pledgers have to pay to a trustee. then the work is done. then the >>>> trustee pays the worker. >>> This is one of the things I want to talk with fundedbyme about. But >>> having an explicit trustee is a new idea. I think the pypy core >>> developers are already rather well trusted in this community, but >>> this may be important to new developers who aren't as well known. >>> And it handles the problem' of 'I got sick and cannot do this any >>> more' more gracefully than other solutions. >>> >>>> Hmmm.... a structure could be: >>>> >>>> - service provider does the technical stuff, as in: >>>> # website >>>> # collect pledges >>>> # handle project description >>>> # collect money >>>> # distribute money after feature completion >>> fundedbyme has sort of indicated an interst in doing this (except >>> they were talking about distribution before, and I was leaving >>> project description to the project, not outsiders). I will follow >>> up on this when I get back home to Sweden. >>> >>>> - PSF / pypy-foundation / whateverfoundation provides the trust >>>> >>>> Thanks for confirming the need for such a thing! >>>> >>>> Harald >>> Thanks once again for seeing a marketing solution that nerds like >>> us often miss. >>> >>> Laura >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> http://mail.python.org/mailman/listinfo/pypy-dev >>> >> >> >> -- >> Ian Ozsvald (A.I. researcher, screencaster) >> ian at IanOzsvald.com >> >> http://IanOzsvald.com >> http://SocialTiesApp.com/ >> http://MorConsulting.com/ >> http://blog.AICookbook.com/ >> http://TheScreencastingHandbook.com >> http://FivePoundApp.com/ >> http://twitter.com/IanOzsvald >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From p.j.a.cock at googlemail.com  Thu Sep 15 23:00:53 2011
From: p.j.a.cock at googlemail.com (Peter Cock)
Date: Thu, 15 Sep 2011 22:00:53 +0100
Subject: [pypy-dev] Detecting numpy vs micronumpy
Message-ID: 

Dear all,

I tried asking this on the NumPy mailing list, but realise here is more likely:
http://mail.scipy.org/pipermail/numpy-discussion/2011-September/058439.html

How should a python script (e.g. setup.py) distinguish between
real numpy and micronumpy? Or should I instead be looking
to distinguish PyPy versus another Python implementation?

Thanks,

Peter

From alex.gaynor at gmail.com  Thu Sep 15 23:02:42 2011
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Thu, 15 Sep 2011 17:02:42 -0400
Subject: [pypy-dev] Detecting numpy vs micronumpy
In-Reply-To: 
References: 
Message-ID: 

I think, for the time being, the appropriate solution is to just check the
Python version, the original NumPy doesn't run on PyPy so it should be fine.

Alex

On Thu, Sep 15, 2011 at 5:00 PM, Peter Cock wrote:
> Dear all,
>
> I tried asking this on the NumPy mailing list, but realise here is more
> likely:
> http://mail.scipy.org/pipermail/numpy-discussion/2011-September/058439.html
>
> How should a python script (e.g. setup.py) distinguish between
> real numpy and micronumpy? Or should I instead be looking
> to distinguish PyPy versus another Python implementation?
>
> Thanks,
>
> Peter
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev

--
"I disapprove of what you say, but I will defend to the death your right
to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.j.a.cock at googlemail.com  Thu Sep 15 23:55:06 2011
From: p.j.a.cock at googlemail.com (Peter Cock)
Date: Thu, 15 Sep 2011 22:55:06 +0100
Subject: [pypy-dev] Detecting numpy vs micronumpy
In-Reply-To: 
References: 
Message-ID: 

On Thu, Sep 15, 2011 at 10:02 PM, Alex Gaynor wrote:
> I think, for the time being, the appropriate solution is to just check the
> Python version, the original NumPy doesn't run on PyPy so it should be fine.
> Alex

How precisely?

The problem I am running into is that "import numpy" appears
to work under PyPy 1.6 (you get micronumpy) but later things like
numpy.get_include() don't work (AttributeError). Should I just treat
that exception itself as meaning it is micronumpy not real numpy?

Thanks,

Peter

From alex.gaynor at gmail.com  Fri Sep 16 00:04:37 2011
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Thu, 15 Sep 2011 18:04:37 -0400
Subject: [pypy-dev] Detecting numpy vs micronumpy
In-Reply-To: 
References: 
Message-ID: 

On Thu, Sep 15, 2011 at 5:55 PM, Peter Cock wrote:
> On Thu, Sep 15, 2011 at 10:02 PM, Alex Gaynor wrote:
> > I think, for the time being, the appropriate solution is to just check
> > the Python version, the original NumPy doesn't run on PyPy so it should
> > be fine.
> > Alex
>
> How precisely?
>
> The problem I am running into is that "import numpy" appears
> to work under PyPy 1.6 (you get micronumpy) but later things like
> numpy.get_include() don't work (AttributeError). Should I just treat
> that exception itself as meaning it is micronumpy not real numpy?
>
> Thanks,
>
> Peter

Well, until we implement it anyways :) That's why I think something like
"import platform; platform.python_implementation() == 'PyPy'" is a good
way to check.
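For example, wrapped up defensively (python_implementation only appeared
in Python 2.6, so older interpreters need a fallback; a sketch, not
tested everywhere):

    import sys

    def is_pypy():
        # platform.python_implementation() was only added in Python 2.6;
        # on older interpreters fall back to probing sys directly.
        try:
            import platform
            return platform.python_implementation() == "PyPy"
        except AttributeError:
            return "__pypy__" in sys.builtin_module_names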
Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Sep 16 09:51:44 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 16 Sep 2011 09:51:44 +0200 Subject: [pypy-dev] Separate building of the C source files In-Reply-To: References: Message-ID: Hi Elad, On Wed, Sep 14, 2011 at 10:08 PM, Elad Lahav wrote: > Thanks, Armin, but that's the first place I looked. There is no makefile > there. Doesn't make much sense to me. A "Makefile" (not a "makefile") should be created. If it wasn't, then maybe it crashed during writing the C sources and you missed this? Sorry to not answer your original question. The issue is that there are various libraries that may or may not be needed, depending on exactly which functions are put or not in the final C sources, not to mention your particular platform; that's why we always rely on the Makefile to say it for us. It's a hard job to figure out manually the list of libraries. You'd have to grep all over the "pypy/" directory for "libraries = [...]" and do the filtering yourself. There is no central place that lists all possible libraries. A bient?t, Armin. From p.j.a.cock at googlemail.com Fri Sep 16 11:48:00 2011 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Fri, 16 Sep 2011 10:48:00 +0100 Subject: [pypy-dev] Detecting numpy vs micronumpy In-Reply-To: References: Message-ID: On Thu, Sep 15, 2011 at 11:04 PM, Alex Gaynor wrote: > >> >> The problem I am running into is that "import numpy" appears >> to work under PyPy 1.6 (you get micronumpy) but later things like >> numpy.get_include() don't work (AttributeError). Should I just treat >> that exception itself as meaning it is micronumpy not real numpy? >> >> Thanks, >> >> Peter > > Well, until we implement it anyways :) ?That's why I think something like > "import platform; platform.python_implementation == 'PyPy'" is a godo way to > check. > Alex Thanks, I'll use that. Its a shame that wasn't in Python 2.5 though, my copy of Jython doesn't support it either. Peter From arigo at tunes.org Fri Sep 16 13:56:14 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 16 Sep 2011 13:56:14 +0200 Subject: [pypy-dev] Detecting numpy vs micronumpy In-Reply-To: References: Message-ID: Hi, On Fri, Sep 16, 2011 at 11:48 AM, Peter Cock wrote: > Thanks, I'll use that. Its a shame that wasn't in Python 2.5 though, > my copy of Jython doesn't support it either. The older and more robust way to check this is: "__pypy__" in sys.builtin_module_names Armin From fijall at gmail.com Fri Sep 16 18:07:27 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 16 Sep 2011 18:07:27 +0200 Subject: [pypy-dev] bounties for pypy In-Reply-To: <4E71E898.5020608@gamr7.com> References: <201106281751.p5SHpixC014361@theraft.openend.se> <201106290753.p5T7rMKh002901@theraft.openend.se> <4E71E898.5020608@gamr7.com> Message-ID: On Thu, Sep 15, 2011 at 1:59 PM, Lionel Barret De Nazaris wrote: > Add ?200 to that. > > Where do I pay ? > > regards, > -- > Best regards, > Lionel Barret de Nazaris > Gamr7 - CEO Hi We'll set up a way to donate to PyPy with certain proposal (like numpy) in mind really soon. 
Cheers,
fijal

From boris2010 at boristhebrave.com Sat Sep 17 17:14:39 2011
From: boris2010 at boristhebrave.com (Boris)
Date: Sat, 17 Sep 2011 16:14:39 +0100
Subject: [pypy-dev] Spurious dict lookups in my JIT loops
Message-ID: 

Hi,

I've been trying out writing my own interpreter using the PyPy framework
recently, as a bit of fun. I've been trying to get the JIT to optimize a
trivial loop down to the minimal amount of operations. With judicious use
of `_immutable_fields_` and `_virtualizable2_`, I've got pretty close.

But I'm still seeing lots of calls to
`ll_dict_lookup__dicttablePtr_Signed_Signed`, which don't correspond to
any code in my interpreter. I don't think I even have any dicts that take
integer keys. Could someone give me a hint where these are coming from and
for what purpose? Or perhaps how to inspect the dicts or get further info?

I append what I'm seeing in the logs (Note: I've augmented the logs to
give the raw pointer). In this case, it is only looking up the value 21,
but I've seen other values in addition when running other programs. The
setarrayitem_gc calls are expected - it is nulling out the stack that was
being used. Everything from i17 is unexpected. I tested on revisions
00711ff1e03d and 96a212b0688a.

Thanks,

Boris

#############################

[3fd12ea6569db] {jit-log-opt-loop
# Loop 0 : loop with 37 ops
[p0, p1, p2, i3, p4, p5]
debug_merge_point(0, '::Test$iinit:20')
+113: i7 = int_add(i3, 1)
debug_merge_point(0, '::Test$iinit:22')
debug_merge_point(0, '::Test$iinit:23')
debug_merge_point(0, '::Test$iinit:26')
+116: setarrayitem_gc(p5, 0, ConstPtr(ptr9,0x0), descr=)
+126: setarrayitem_gc(p5, 1, ConstPtr(ptr11,0x0), descr=)
+133: i13 = uint_lt(i7, 10000)
guard_true(i13, descr=) [p0, p1, p2, p4, i7]
+145: i17 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), ConstPtr(ptr15,0x84c2958), 21, 21, descr=)
+176: guard_no_exception(, descr=) [p0, i17, p1, p2, p4, i7]
+189: i19 = int_and(i17, -2147483648)
+195: i20 = int_is_true(i19)
guard_true(i20, descr=) [p0, p1, p2, p4, i7]
+204: i23 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), ConstPtr(ptr22,0x84c2978), 21, 21, descr=)
+235: guard_no_exception(, descr=) [p0, i23, p1, p2, p4, i7]
+248: i24 = int_and(i23, -2147483648)
+254: i25 = int_is_true(i24)
guard_true(i25, descr=) [p0, p1, p2, p4, i7]
+263: i28 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), ConstPtr(ptr27,0x84c2988), 21, 21, descr=)
+294: guard_no_exception(, descr=) [p0, i28, p1, p2, p4, i7]
+307: i29 = int_and(i28, -2147483648)
+313: i30 = int_is_true(i29)
guard_true(i30, descr=) [p0, p1, p2, p4, i7]
+322: i33 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), ConstPtr(ptr32,0x84c2998), 21, 21, descr=)
+353: guard_no_exception(, descr=) [p0, i33, p1, p2, p4, i7]
+366: i34 = int_and(i33, -2147483648)
+372: i35 = int_is_true(i34)
guard_false(i35, descr=) [p0, p1, p2, p4, i7]
+381: i38 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), ConstPtr(ptr37,0x84c2968), 21, 21, descr=)
+412: guard_no_exception(, descr=) [p0, i38, p1, p2, p4, i7]
+425: i39 = int_and(i38, -2147483648)
+431: i40 = int_is_true(i39)
guard_true(i40, descr=) [p0, p1, p2, p4, i7]
debug_merge_point(0, '::Test$iinit:20')
+440: i41 = arraylen_gc(p5, descr=)
+440: jump(p0, p1, p2, i7, p4, p5, descr=)
+448: --end of the loop--
[3fd12ea696ce9] jit-log-opt-loop}

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From benjamin at python.org Sat Sep 17 19:25:24 2011 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 17 Sep 2011 13:25:24 -0400 Subject: [pypy-dev] Spurious dict lookups in my JIT loops In-Reply-To: References: Message-ID: This would probably be easier if you showed us the code. 2011/9/17 Boris : > Hi, > > I've been trying out writing my own interpreter using the PyPy framework > recently, as a bit of fun. I've been trying to get the JIT to optimize a > trivial loop down to the minimal amount of operations. With judicious use of > `_immutable_fields_` and `_virtualizable2_`, I've got pretty close. > > But I'm still seeing lots of calls to > `ll_dict_lookup__dicttablePtr_Signed_Signed`, which don't correspond to any > code in my interpreter. I don't think I even have any dicts that take > integer keys. Could someone give me a hint where these are coming from and > for what purpose? Or perhaps how to inspect the dicts or get further info? > > I append what I'm seeing in the logs (Note: I've augmented the logs to give > the raw pointer). In this case, it is only looking up the value 21, but I've > seen other values in addition when running other programs. The > setarrayitem_gc calls are expected - it is nulling out the stack that was > being used. Everything from i17 is unexpected. I tested on revisions > 00711ff1e03d and 96a212b0688a. > > Thanks, > > Boris > > > ############################# > > [3fd12ea6569db] {jit-log-opt-loop > # Loop 0 : loop with 37 ops > [p0, p1, p2, i3, p4, p5] > debug_merge_point(0, '::Test$iinit:20') > +113: i7 = int_add(i3, 1) > debug_merge_point(0, '::Test$iinit:22') > debug_merge_point(0, '::Test$iinit:23') > debug_merge_point(0, '::Test$iinit:26') > +116: setarrayitem_gc(p5, 0, ConstPtr(ptr9,0x0), descr=) > +126: setarrayitem_gc(p5, 1, ConstPtr(ptr11,0x0), descr=) > +133: i13 = uint_lt(i7, 10000) > guard_true(i13, descr=) [p0, p1, p2, p4, i7] > +145: i17 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > ConstPtr(ptr15,0x84c2958), 21, 21, descr=) > +176: guard_no_exception(, descr=) [p0, i17, p1, p2, p4, i7] > +189: i19 = int_and(i17, -2147483648) > +195: i20 = int_is_true(i19) > guard_true(i20, descr=) [p0, p1, p2, p4, i7] > +204: i23 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > ConstPtr(ptr22,0x84c2978), 21, 21, descr=) > +235: guard_no_exception(, descr=) [p0, i23, p1, p2, p4, i7] > +248: i24 = int_and(i23, -2147483648) > +254: i25 = int_is_true(i24) > guard_true(i25, descr=) [p0, p1, p2, p4, i7] > +263: i28 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > ConstPtr(ptr27,0x84c2988), 21, 21, descr=) > +294: guard_no_exception(, descr=) [p0, i28, p1, p2, p4, i7] > +307: i29 = int_and(i28, -2147483648) > +313: i30 = int_is_true(i29) > guard_true(i30, descr=) [p0, p1, p2, p4, i7] > +322: i33 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > ConstPtr(ptr32,0x84c2998), 21, 21, descr=) > +353: guard_no_exception(, descr=) [p0, i33, p1, p2, p4, i7] > +366: i34 = int_and(i33, -2147483648) > +372: i35 = int_is_true(i34) > guard_false(i35, descr=) [p0, p1, p2, p4, i7] > +381: i38 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > ConstPtr(ptr37,0x84c2968), 21, 21, descr=) > +412: guard_no_exception(, descr=) [p0, i38, p1, p2, p4, i7] > +425: i39 = int_and(i38, -2147483648) > +431: i40 = int_is_true(i39) > guard_true(i40, descr=) [p0, p1, p2, p4, i7] > debug_merge_point(0, '::Test$iinit:20') > +440: i41 = arraylen_gc(p5, descr=) > +440: jump(p0, p1, p2, i7, p4, p5, descr=) > 
+448: --end of the loop--
> [3fd12ea696ce9] jit-log-opt-loop}
>
>
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>
>

--
Regards,
Benjamin

From boris2010 at boristhebrave.com Sat Sep 17 23:38:43 2011
From: boris2010 at boristhebrave.com (Boris)
Date: Sat, 17 Sep 2011 22:38:43 +0100
Subject: [pypy-dev] Spurious dict lookups in my JIT loops
In-Reply-To: 
References: 
Message-ID: 

It's a little long for that, I was hoping people would request what is
relevant. Here are some pertinent snippets (again, this is all an
experiment, so it is not pretty).

jitdriver = JitDriver(greens=['pc', 'method'], reds=['self'],
                      virtualizables=['self'],
                      get_printable_location=get_location)

class Frame:
    """ Represents one stack frame."""
    _immutable_fields_ = ['space', 'method', 'opStack', 'scopeStack', 'locals']
    _virtualizable2_ = ["locals[*]"]

    def __init__(self, space, method):
        self = jit.hint(self, access_directly=True, fresh_virtualizable=True)
        self.space = space
        self.method = method
        body = method.method_body
        self.opStack = [space.getUndefined()] * body.max_stack
        self.opCount = 0
        self.scopeStack = [space.getUndefined()] * body.max_scope_depth
        self.scopeCount = 0
        self.locals = [space.getUndefined()] * body.local_count

    def dispatch(self):
        self = jit.hint(self, access_directly=True)
        pc = 0
        while True:
            jitdriver.jit_merge_point(self=self, pc=pc, method=self.method)
            self.opCount = jit.hint(self.opCount, promote=True)
            self.scopeCount = jit.hint(self.scopeCount, promote=True)
            r, pc = self.handle_bytecode(pc)
            if r is not None:
                return r

    def handle_bytecode(self, pc):
        """Runs the interpreter for a single bytecode. Returns
        (retvalue, pc) where retvalue is non-None if the RETURNVALUE or
        RETURNVOID opcodes are run, and pc is the new program counter."""
        bytecode = ord(self.method.method_body.code[pc])
        pc += 1
        if bytecode == Frame.LABEL:
            pass
        elif bytecode == Frame.INCLOCAL_I:
            index, pc = readU30(self.method.method_body.code, pc)
            self.locals[force_non_neg(index)] = self.space.wrapInt(
                self.space.toInteger(self.locals[force_non_neg(index)]) + 1)
        elif bytecode == Frame.GETLOCAL2:
            self.push(self.locals[2])
        elif bytecode == Frame.PUSHSHORT:
            i, pc = readU30(self.method.method_body.code, pc)
            self.push(self.space.wrapInt(i))
        elif (bytecode == Frame.IFGE or bytecode == Frame.IFGT or
              bytecode == Frame.IFLE or bytecode == Frame.IFLT or
              bytecode == Frame.IFNGE or bytecode == Frame.IFNGT or
              bytecode == Frame.IFNLE or bytecode == Frame.IFNLT):
            offset, pc = readSI24(self.method.method_body.code, pc)
            b = self.pop()
            a = self.pop()
            c = self.compare(a, b)
            doBranch = False
            if c == -99:
                doBranch = False
            else:
                if bytecode in (Frame.IFGE, Frame.IFNGE):
                    doBranch = c >= 0
                elif bytecode in (Frame.IFGT, Frame.IFNGT):
                    doBranch = c > 0
                elif bytecode in (Frame.IFLE, Frame.IFNLE):
                    doBranch = c <= 0
                elif bytecode in (Frame.IFLT, Frame.IFNLT):
                    doBranch = c < 0
                if bytecode in (Frame.IFNGE, Frame.IFNGT, Frame.IFNLE, Frame.IFNLT):
                    doBranch = not doBranch
            if doBranch:
                pc += offset
                #jitdriver.can_enter_jit(self=self, pc=pc, method=self.method)

The bytecode in question:

L1: label
    inclocal_i 2
L0: getlocal2
    pushshort 10000
    iflt L1

(i.e. increment local variable 2 by 1, then compare it against 10000)

On Sat, Sep 17, 2011 at 6:25 PM, Benjamin Peterson wrote:

> This would probably be easier if you showed us the code.
>
> 2011/9/17 Boris :
> > Hi,
> >
> > I've been trying out writing my own interpreter using the PyPy framework
> > recently, as a bit of fun.
I've been trying to get the JIT to optimize a > > trivial loop down to the minimal amount of operations. With judicious use > of > > `_immutable_fields_` and `_virtualizable2_`, I've got pretty close. > > > > But I'm still seeing lots of calls to > > `ll_dict_lookup__dicttablePtr_Signed_Signed`, which don't correspond to > any > > code in my interpreter. I don't think I even have any dicts that take > > integer keys. Could someone give me a hint where these are coming from > and > > for what purpose? Or perhaps how to inspect the dicts or get further > info? > > > > I append what I'm seeing in the logs (Note: I've augmented the logs to > give > > the raw pointer). In this case, it is only looking up the value 21, but > I've > > seen other values in addition when running other programs. The > > setarrayitem_gc calls are expected - it is nulling out the stack that was > > being used. Everything from i17 is unexpected. I tested on revisions > > 00711ff1e03d and 96a212b0688a. > > > > Thanks, > > > > Boris > > > > > > ############################# > > > > [3fd12ea6569db] {jit-log-opt-loop > > # Loop 0 : loop with 37 ops > > [p0, p1, p2, i3, p4, p5] > > debug_merge_point(0, '::Test$iinit:20') > > +113: i7 = int_add(i3, 1) > > debug_merge_point(0, '::Test$iinit:22') > > debug_merge_point(0, '::Test$iinit:23') > > debug_merge_point(0, '::Test$iinit:26') > > +116: setarrayitem_gc(p5, 0, ConstPtr(ptr9,0x0), descr=) > > +126: setarrayitem_gc(p5, 1, ConstPtr(ptr11,0x0), > descr=) > > +133: i13 = uint_lt(i7, 10000) > > guard_true(i13, descr=) [p0, p1, p2, p4, i7] > > +145: i17 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > > ConstPtr(ptr15,0x84c2958), 21, 21, descr=) > > +176: guard_no_exception(, descr=) [p0, i17, p1, p2, p4, i7] > > +189: i19 = int_and(i17, -2147483648) > > +195: i20 = int_is_true(i19) > > guard_true(i20, descr=) [p0, p1, p2, p4, i7] > > +204: i23 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > > ConstPtr(ptr22,0x84c2978), 21, 21, descr=) > > +235: guard_no_exception(, descr=) [p0, i23, p1, p2, p4, i7] > > +248: i24 = int_and(i23, -2147483648) > > +254: i25 = int_is_true(i24) > > guard_true(i25, descr=) [p0, p1, p2, p4, i7] > > +263: i28 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > > ConstPtr(ptr27,0x84c2988), 21, 21, descr=) > > +294: guard_no_exception(, descr=) [p0, i28, p1, p2, p4, i7] > > +307: i29 = int_and(i28, -2147483648) > > +313: i30 = int_is_true(i29) > > guard_true(i30, descr=) [p0, p1, p2, p4, i7] > > +322: i33 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > > ConstPtr(ptr32,0x84c2998), 21, 21, descr=) > > +353: guard_no_exception(, descr=) [p0, i33, p1, p2, p4, i7] > > +366: i34 = int_and(i33, -2147483648) > > +372: i35 = int_is_true(i34) > > guard_false(i35, descr=) [p0, p1, p2, p4, i7] > > +381: i38 = call(ConstClass(ll_dict_lookup__dicttablePtr_Signed_Signed), > > ConstPtr(ptr37,0x84c2968), 21, 21, descr=) > > +412: guard_no_exception(, descr=) [p0, i38, p1, p2, p4, i7] > > +425: i39 = int_and(i38, -2147483648) > > +431: i40 = int_is_true(i39) > > guard_true(i40, descr=) [p0, p1, p2, p4, i7] > > debug_merge_point(0, '::Test$iinit:20') > > +440: i41 = arraylen_gc(p5, descr=) > > +440: jump(p0, p1, p2, i7, p4, p5, descr=) > > +448: --end of the loop-- > > [3fd12ea696ce9] jit-log-opt-loop} > > > > > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > > > > -- > Regards, > Benjamin > 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org Sun Sep 18 09:56:26 2011
From: arigo at tunes.org (Armin Rigo)
Date: Sun, 18 Sep 2011 09:56:26 +0200
Subject: [pypy-dev] Spurious dict lookups in my JIT loops
In-Reply-To: 
References: 
Message-ID: 

Hi Boris,

All machine code instructions produced by the JIT have a place that they
come from in your RPython code. In this case I suspect that it's from
self.compare(), but again, it's a bit hard to know without having access
to the complete source code.

Alternatively, there is a way to display where the operations come from,
but only during testing. You might rewrite your example as a test --- as
opposed to, I suppose, a targetxxx.py. See the infinite amount of tiny
examples in pypy/jit/metainterp/test/test_*.py, e.g. test_ajit.py. Write
a new file in the same directory. The point is then that you can run the
test with "python test_all.py test_yourfile.py --viewloops". It reports
the same traces, but organized as a call graph and with the origin shown
before every operation.

A bientôt,

Armin.

From cfbolz at gmx.de Sun Sep 18 10:17:52 2011
From: cfbolz at gmx.de (Carl Friedrich Bolz)
Date: Sun, 18 Sep 2011 10:17:52 +0200
Subject: [pypy-dev] Spurious dict lookups in my JIT loops
In-Reply-To: 
References: 
Message-ID: <4E75A930.7010200@gmx.de>

On 09/17/2011 11:38 PM, Boris wrote:
> It's a little long for that, I was hoping people would request what is
> relevant.

You could put the code on some public code hosting (e.g. bitbucket.org)
and point to the repo if you don't want to mail it around.

Carl Friedrich

From tismer at stackless.com Mon Sep 19 16:58:33 2011
From: tismer at stackless.com (Christian Tismer)
Date: Mon, 19 Sep 2011 16:58:33 +0200
Subject: [pypy-dev] Gothenburg Sprint Dates
In-Reply-To: <4E6F3FEF.5080600@gmx.de>
References: <4E6F3FEF.5080600@gmx.de>
Message-ID: <4E775899.8010204@stackless.com>

On 9/13/11 1:35 PM, Carl Friedrich Bolz wrote:
> Some of us need to be in Stockholm Oct 24 and 28.
> Anto needs to be with his family Nov 1.
> Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk.
>
> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd
> through Thursday Nov 10.
>
> fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be
> speaking yet.
>
> What do the rest of you think of this idea?
>

This should work for me as well. So if nothing suddenly happens,
count me in.

--
Christian Tismer             :^)   tismerysoft GmbH
Karl-Liebknecht-Str. 121     :     Have a break! Take a ride on Python's
14482 Potsdam                :     *Starship* http://starship.python.net/
work +49 173 24 18 776       :     PGP key -> http://pgp.uni-mainz.de
mobile +49 173 24 18 776  fax n.a.
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/

From arigo at tunes.org Mon Sep 19 17:36:55 2011
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 19 Sep 2011 17:36:55 +0200
Subject: [pypy-dev] Gothenburg Sprint Dates
In-Reply-To: <4E775899.8010204@stackless.com>
References: <4E6F3FEF.5080600@gmx.de>
	<4E775899.8010204@stackless.com>
Message-ID: 

Hi,

On Mon, Sep 19, 2011 at 4:58 PM, Christian Tismer wrote:
> This should work for me as well. So if nothing suddenly happens,
> count me in.

Nice to see you again! Yes, I confirm that I will also be there starting
from the 24th or 28th of October (hopefully we know soonish which one it
is), and for the sprint and FSCons.

A bientôt,

Armin.
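(Returning to Armin's --viewloops suggestion above: a minimal test in that
style might look roughly like the sketch below. The module paths and the
LLJitMixin/meta_interp helpers are quoted from memory for this era of the
tree and may differ per revision; check test_ajit.py for the current
idiom.)

from pypy.rlib.jit import JitDriver
from pypy.jit.metainterp.test.support import LLJitMixin

myjitdriver = JitDriver(greens=[], reds=['i', 'total'])

class TestMyLoop(LLJitMixin):
    def test_simple_sum(self):
        def f(n):
            i = 0
            total = 0
            while i < n:
                myjitdriver.can_enter_jit(i=i, total=total)
                myjitdriver.jit_merge_point(i=i, total=total)
                total += i
                i += 1
            return total
        res = self.meta_interp(f, [100])
        assert res == f(100)

# then run: python test_all.py test_myloop.py --viewloops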
From geofft at MIT.EDU Mon Sep 19 21:40:44 2011 From: geofft at MIT.EDU (Geoffrey Thomas) Date: Mon, 19 Sep 2011 15:40:44 -0400 (EDT) Subject: [pypy-dev] Sandbox examples Message-ID: Hi, I'm looking at building a real application using PyPy's sandbox mode, and am having a harder time than I'd expect finding any examples of people using the sandbox in the "real world". Specifically, I'm not easily finding examples of interaction scripts other than pypy_interact.py in the PyPy source tree, and would be curious to take a look at how people virtualize things that pypy_interact.py currently doesn't handle. Would any of you have pointers to other interaction scripts, or at least would like to talk about how you've used the sandbox, even if you can't provide a pointer to the code? (I'm not subscribed to the list -- please keep me in the Cc on replies, or let me know if I'm supposed to subscribe.) Thanks, -- Geoffrey Thomas geofft at mit.edu From matti.picus at gmail.com Tue Sep 20 10:26:11 2011 From: matti.picus at gmail.com (matti picus) Date: Tue, 20 Sep 2011 11:26:11 +0300 Subject: [pypy-dev] Before I start hacking on numpy Message-ID: I would really love to have 2 dimensional matrices in micronumpy, and am willing to donate some hours of coding. There seems to be a number of "heads" on the mercurial tree that use numpy in their keyword. Can anyone give me pointer as to what branch (maybe just tip?) would be the recommended one to continue with? Any coding guidelines? My goal is to get to an implementation of inverse on a smallish (5x5 matrix) so I can build some useful code. Matti -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Tue Sep 20 13:52:19 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 20 Sep 2011 07:52:19 -0400 Subject: [pypy-dev] Before I start hacking on numpy In-Reply-To: References: Message-ID: 2011/9/20 matti picus : > I would really love to have 2 dimensional matrices in micronumpy, and am > willing to donate some hours of coding. There seems to be a number of > "heads" on the mercurial tree that use numpy in their keyword. Can anyone > give me pointer as to what branch (maybe just tip?) would be the recommended > one to continue with? Any coding guidelines? Usually we take a branch for each "project", so I suggest you make your own branched, named something like "numpy-multi-dimen". -- Regards, Benjamin From peelpy at gmail.com Tue Sep 20 20:02:45 2011 From: peelpy at gmail.com (Justin Peel) Date: Tue, 20 Sep 2011 12:02:45 -0600 Subject: [pypy-dev] Before I start hacking on numpy In-Reply-To: References: Message-ID: There are a few things to be done before we start on a multi-dimensional array. First, we need to do some refactoring to get all of the 1D parts out of BaseArray and into the SingleDimArray class. Also, we need to decide if we want to keep single dimensional arrays as a specialized case or just use the same multi-dimensional array for everything. We can also consider adding a 2D class as well. Personally, I think that we should do the refactoring, implement a multi-dim array while keeping the 1D array for now, and compare the performance of using the multi-dim array for 1D arrays vs. using the SingleDimArray class. On Tue, Sep 20, 2011 at 2:26 AM, matti picus wrote: > I would really love to have 2 dimensional matrices in micronumpy, and am > willing to donate some hours of coding. There seems to be a number of > "heads" on the mercurial tree that use numpy in their keyword. 
Can anyone > give me pointer as to what branch (maybe just tip?) would be the recommended > one to continue with? Any coding guidelines? > My goal is to get to an implementation of inverse on a smallish (5x5 matrix) > so I can build some useful code. > Matti > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > From alex.gaynor at gmail.com Tue Sep 20 20:35:04 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Tue, 20 Sep 2011 14:35:04 -0400 Subject: [pypy-dev] Before I start hacking on numpy In-Reply-To: References: Message-ID: On Tue, Sep 20, 2011 at 2:02 PM, Justin Peel wrote: > There are a few things to be done before we start on a > multi-dimensional array. First, we need to do some refactoring to get > all of the 1D parts out of BaseArray and into the SingleDimArray > class. Also, we need to decide if we want to keep single dimensional > arrays as a specialized case or just use the same multi-dimensional > array for everything. We can also consider adding a 2D class as well. > Personally, I think that we should do the refactoring, implement a > multi-dim array while keeping the 1D array for now, and compare the > performance of using the multi-dim array for 1D arrays vs. using the > SingleDimArray class. > > On Tue, Sep 20, 2011 at 2:26 AM, matti picus > wrote: > > I would really love to have 2 dimensional matrices in micronumpy, and am > > willing to donate some hours of coding. There seems to be a number of > > "heads" on the mercurial tree that use numpy in their keyword. Can anyone > > give me pointer as to what branch (maybe just tip?) would be the > recommended > > one to continue with? Any coding guidelines? > > My goal is to get to an implementation of inverse on a smallish (5x5 > matrix) > > so I can build some useful code. > > Matti > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Personally I think just having a single ndimarray is going to be the same for performance, and since this work should happen on a branch it'd be preferable to do it that way, we can still compare cross-branch. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From boris2010 at boristhebrave.com Wed Sep 21 00:00:03 2011 From: boris2010 at boristhebrave.com (Boris) Date: Tue, 20 Sep 2011 23:00:03 +0100 Subject: [pypy-dev] Spurious dict lookups in my JIT loops In-Reply-To: References: Message-ID: > Alternatively, there is a way to display where the operations come from, but only during testing. I did this; which was good advice, as it generated several errors that the ordinary compiler doesn't flag. After fixing those up, I get the following (paraphrased): pop__AccessDirect_None:46 setarrayitem_gc(p30, Const(0), Const(* None), descr=) pop__AccessDirect_None:46 setarrayitem_gc(p30, Const(1), Const(* None), descr=) compare__AccessDirect_None:34 i138 = uint_lt(i137, Const(10000)) compare__AccessDirect_None:38 guard_true(i138, ... 
ll_contains__dicttablePtr_Signed:10 i139 = call(Const(), Const(*dicttable), Const(21), Const(21), descr=)
ll_contains__dicttablePtr_Signed:10 guard_no_exception(...
ll_contains__dicttablePtr_Signed:14 i140 = int_and(i139, Const(-2147483648))
ll_contains__dicttablePtr_Signed:17 i141 = int_is_true(i140)

followed by much more dicttable stuff.

So that doesn't really help me pinpoint the problem, unless *dicttable is
some magic constant? What do the offsets reference - they don't correspond
to line numbers.

Here's my compare function for what it's worth. Note that there is an
implicit downcast in order to get intValue. In the above loop, the klass
of both objects is Int, which the JIT is able to deduce.

class Frame:
    # ...
    def compare(self, a, b):
        """ Compares two values using the built-in comparison operator.
        Returns -1, 0, 1 or -99, where -99 means NaN comparison, otherwise
        such that (a op b) iff (result op 0)"""
        import math
        if a.klass == Int and b.klass == Int:
            if a.intValue < b.intValue:
                return -1
            elif a.intValue > b.intValue:
                return 1
            else:
                return 0
        #elif a.klass in (Int, Uint, Number) and b.klass in (Int, Uint, Number):
        #    aValue = self.space.toNumber(a)
        #    bValue = self.space.toNumber(b)
        #    if math.isnan(aValue):
        #        return -99
        #    if math.isnan(bValue):
        #        return -99
        #    if aValue < bValue:
        #        return -1
        #    elif aValue > bValue:
        #        return 1
        #    else:
        #        return 0
        raise StandardError("Not implemented")

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arigo at tunes.org Wed Sep 21 09:53:19 2011
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 21 Sep 2011 09:53:19 +0200
Subject: [pypy-dev] Spurious dict lookups in my JIT loops
In-Reply-To: 
References: 
Message-ID: 

Hi Boris,

Sorry, I can't help you more from just seeing the fragments of code. I
would need to look at the whole source.

Armin

From arigo at tunes.org Wed Sep 21 12:12:14 2011
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 21 Sep 2011 12:12:14 +0200
Subject: [pypy-dev] Sandbox examples
In-Reply-To: 
References: 
Message-ID: 

Hi Geoffrey,

On Mon, Sep 19, 2011 at 9:40 PM, Geoffrey Thomas wrote:
> I'm looking at building a real application using PyPy's sandbox mode, and am
> having a harder time than I'd expect finding any examples of people using
> the sandbox in the "real world".

This is because, as far as I know, nobody ever did anything "real" with
it. At most, a few attempts were discussed but went nowhere, again to my
knowledge.

The basics work and are believed to be extremely secure, but with no
serious review. At least reviewing the few hundreds of lines involved in
sandboxing would be a good idea. It is possible that an extension module
uses directly raw pointers in a buggy way which would not be caught
(workaround: disable most modules); it is also possible that there is a
bug in the JIT assembler generation part (workaround: disable the JIT).

Right now we are missing interest and use cases to develop it more
ourselves, and truthfully, it should rather be done by someone that has an
interest in serious security. If you want to work on completing it, we
will be happy to provide support :-)

A bientôt,

Armin.

From mkaniaris at gmail.com Wed Sep 21 19:34:15 2011
From: mkaniaris at gmail.com (Matthew Kaniaris)
Date: Wed, 21 Sep 2011 13:34:15 -0400
Subject: [pypy-dev] contributing to pypy
Message-ID: 

Hello,

I'd like to contribute to pypy. I've been following the project for a
while but don't know much about the internals apart from reading the
docs. Does anyone have an idea for a good place to start?
-kans

From peelpy at gmail.com Wed Sep 21 20:20:17 2011
From: peelpy at gmail.com (Justin Peel)
Date: Wed, 21 Sep 2011 12:20:17 -0600
Subject: [pypy-dev] contributing to pypy
In-Reply-To: 
References: 
Message-ID: 

You didn't say what sort of work you are looking for. Here are some ideas
that I've had on my TODO (at some point) list:

-contribute to micronumpy
    -astype
    -scalar types like numpy.int8
    -a ufunc that isn't implemented yet
-speed up json module by adapting simplejson's pypy-support branch's
 somewhat optimized encoder and putting it in as lib_pypy/_json.py
    -speed up the encoder further
-make getrandbits non-quadratic by adding a method to pypy/rlib/rbigint.py
 to generate a bigint from an array of bytes
-speed up bigint->str conversion. Currently it is quite a lot slower than
 CPython which has a separate function for converting to a base-10 string.
-speed up long->bytes conversion in pickle module. CPython has optimized C
 code for this while the Python version that pypy uses does some
 roundabout conversion to hex and then to bytes. Maybe we should consider
 making it possible to do this conversion in RPython.
-speed up bigint multiplication
-add a non-moving (from a garbage collection point of view) char buffer
 object to RPython. This is very important for I/O among other things.
-speed up the garbage collector, especially in regard to minor collections
 on dicts.

Anyway, that's just a few ideas with a wide range of difficulty. I believe
that there is still work to be done with ctypes and with regular
expressions. Of course, you can look through the bug tracker
(bugs.pypy.org) which is where some of these ideas came from.

On Wed, Sep 21, 2011 at 11:34 AM, Matthew Kaniaris wrote:
> Hello,
>
> I'd like to contribute to pypy. I've been following the project for a
> while but don't know much about the internals apart from reading the
> docs. Does anyone have an idea for a good place to start?
>
> -kans
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From van.lindberg at gmail.com Thu Sep 22 00:51:18 2011
From: van.lindberg at gmail.com (VanL)
Date: Wed, 21 Sep 2011 17:51:18 -0500
Subject: [pypy-dev] Sandbox examples
In-Reply-To: 
References: 
Message-ID: <4E7A6A66.4070202@gmail.com>

On 9/21/2011 5:12 AM, Armin Rigo wrote:
> Hi Geoffrey,
>
> On Mon, Sep 19, 2011 at 9:40 PM, Geoffrey Thomas wrote:
>> I'm looking at building a real application using PyPy's sandbox mode, and am
>> having a harder time than I'd expect finding any examples of people using
>> the sandbox in the "real world".
> This is because, as far as I know, nobody ever did anything "real"
> with it. At most, a few attempts were discussed but went nowhere,
> again to my knowledge.

It works; without getting into specifics (I am not sure I can), I know of
at least one "real world" deployment using the sandbox functionality.

Thanks,

Van

From zariko.taba at gmail.com Thu Sep 22 15:01:32 2011
From: zariko.taba at gmail.com (Zariko Taba)
Date: Thu, 22 Sep 2011 15:01:32 +0200
Subject: [pypy-dev] Number of constants in a jitted rpython interpreter
Message-ID: 

Hi pypy!

I'm still exploring rpython and I face a problem when adding a jit to an
interpreter. In the Assembler class (pypy.jit.codewriter.assembler), in
the emit_const method, it seems to be assumed that there are no more than
256 constants (constants seem to be accessed in an array with a 1-byte
index).
If I try to translate an interpreter with more than 256 constant objects
(like strings?), I get this error:

[translation:ERROR] Error:
[translation:ERROR]  Traceback (most recent call last):
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/translator/goal/translate.py", line 308, in main
[translation:ERROR]     drv.proceed(goals)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/translator/driver.py", line 810, in proceed
[translation:ERROR]     return self._execute(goals, task_skip = self._maybe_skip())
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/translator/tool/taskengine.py", line 116, in _execute
[translation:ERROR]     res = self._do(goal, taskcallable, *args, **kwds)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/translator/driver.py", line 286, in _do
[translation:ERROR]     res = func()
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/translator/driver.py", line 397, in task_pyjitpl_lltype
[translation:ERROR]     backend_name=self.config.translation.jit_backend, inline=True)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/jit/metainterp/warmspot.py", line 42, in apply_jit
[translation:ERROR]     **kwds)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/jit/metainterp/warmspot.py", line 199, in __init__
[translation:ERROR]     self.codewriter.make_jitcodes(verbose=verbose)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/jit/codewriter/codewriter.py", line 72, in make_jitcodes
[translation:ERROR]     self.transform_graph_to_jitcode(graph, jitcode, verbose)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/jit/codewriter/codewriter.py", line 61, in transform_graph_to_jitcode
[translation:ERROR]     self.assembler.assemble(ssarepr, jitcode)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/jit/codewriter/assembler.py", line 35, in assemble
[translation:ERROR]     self.write_insn(insn)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/jit/codewriter/assembler.py", line 135, in write_insn
[translation:ERROR]     is_short = self.emit_const(x, kind, allow_short=True)
[translation:ERROR]    File "/home/olivier/workspace/talstai_ext/pypy-1.6-src/pypy/jit/codewriter/assembler.py", line 108, in emit_const
[translation:ERROR]     self.code.append(chr(self.constants_dict[key]))
[translation:ERROR]  ValueError: character code not in range(256)

With this snippet of code:

    self.constants_dict[key] = 256 - len(constants)

If len(constants) is 257, then self.constants_dict[key] is -1 and chr(-1)
raises the ValueError.

I attached a (really) stupid example to reproduce. When I browse pypy
sources in rpython, I can't believe there are fewer than 256 constants of
type 'ref'.

What do you think? Did I miss something?

Thanks!

Zariko.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: many_constant.py Type: text/x-python Size: 24082 bytes Desc: not available URL: From amauryfa at gmail.com Thu Sep 22 15:09:14 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 22 Sep 2011 15:09:14 +0200 Subject: [pypy-dev] Number of constants in a jitted rpython interpreter In-Reply-To: References: Message-ID: 2011/9/22 Zariko Taba > [translation:ERROR] ValueError: character code not in range(256) > > With this snippet of code : > > self.constants_dict[key] = 256 - len(constants) > > If len(constants) is 257, > then self.constants_dict[key] is -1 > and chr(-1) raise the ValueError. > > I attached a (really) stupid example to reproduce. > When I browse pypy sources in rpython, I can't believe there is less than > 256 constants of type 'ref'. > > What do you think ? Did I miss something ? > There are many string constants in your function. Did you try something like raise Exception("Opcode not implemented : %d" % opcode) -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From cfbolz at gmx.de Thu Sep 22 15:11:56 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Thu, 22 Sep 2011 15:11:56 +0200 Subject: [pypy-dev] Number of constants in a jitted rpython interpreter In-Reply-To: References: Message-ID: <4E7B341C.6070609@gmx.de> On 09/22/2011 03:01 PM, Zariko Taba wrote: > Hi pypy ! > > I'm still exploring rpython and I face a problem when adding a jit to an > interpreter. > In Assembler class (pypy.jit.codewriter.assembler), in emit_const > method, it seems to be assumed that there is no more than 256 constants. > (constant seems to be accessed in a array with a 1 byte index). > > If I try to translate an interpreter with more than 256 constant objects > (like string ?), I get this error : There is a limit of 256 constants *per function*. If you need more, maybe your functions are too complex :-). Carl Friedrich From zariko.taba at gmail.com Thu Sep 22 15:37:40 2011 From: zariko.taba at gmail.com (Zariko Taba) Date: Thu, 22 Sep 2011 15:37:40 +0200 Subject: [pypy-dev] Number of constants in a jitted rpython interpreter In-Reply-To: <4E7B341C.6070609@gmx.de> References: <4E7B341C.6070609@gmx.de> Message-ID: >>> There is a limit of 256 constants *per function*. If you need more, maybe your functions are too complex :-). Great ! Thanks for the advice ! I was generating a function from meta data, so I didn't care about size of the generated code. When splitting the function, error disappears. :) Thanks for your help. Zariko. On Thu, Sep 22, 2011 at 3:11 PM, Carl Friedrich Bolz wrote: > On 09/22/2011 03:01 PM, Zariko Taba wrote: > >> Hi pypy ! >> >> I'm still exploring rpython and I face a problem when adding a jit to an >> interpreter. >> In Assembler class (pypy.jit.codewriter.**assembler), in emit_const >> method, it seems to be assumed that there is no more than 256 constants. >> (constant seems to be accessed in a array with a 1 byte index). >> >> If I try to translate an interpreter with more than 256 constant objects >> (like string ?), I get this error : >> > > There is a limit of 256 constants *per function*. If you need more, maybe > your functions are too complex :-). > > Carl Friedrich > ______________________________**_________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/**mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From steve at pearwood.info Fri Sep 23 06:04:02 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 23 Sep 2011 14:04:02 +1000
Subject: [pypy-dev] Question about byte-code hacking
Message-ID: <4E7C0532.1070503@pearwood.info>

Hi guys,

Over on the python-ideas mailing list, there is a long thread about the
default argument hack in functions, used for micro-optimizations,
early-binding, and monkey-patching. Various alternatives are being argued
about. One proposal put forward involves bytecode manipulations to change
global lookups to local so that one could have a decorator that "injects"
a value into a copy of the function.

What's the PyPy position on bytecode hacking? Good, bad, evil, don't mind
either way?

For those who care, the thread starts here:
http://mail.python.org/pipermail/python-ideas/2011-September/011691.html
(beware, it's long).

Thanks in advance,

--
Steven

From benjamin at python.org Fri Sep 23 06:12:54 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Fri, 23 Sep 2011 00:12:54 -0400
Subject: [pypy-dev] Question about byte-code hacking
In-Reply-To: <4E7C0532.1070503@pearwood.info>
References: <4E7C0532.1070503@pearwood.info>
Message-ID: 

2011/9/23 Steven D'Aprano :
> Hi guys,
>
> Over on the python-ideas mailing list, there is a long thread about the
> default argument hack in functions, used for micro-optimizations,
> early-binding, and monkey-patching. Various alternatives are being argued
> about. One proposal put forward involves bytecode manipulations to change
> global lookups to local so that one could have a decorator that "injects" a
> value into a copy of the function.
>
>
> What's the PyPy position on bytecode hacking? Good, bad, evil, don't mind
> either way?

First of all, it's going to be implementation defined. So, you can't
expect *any* bytecode you create on one VM to work on another.
Secondly, it's useless for speed when you have a JIT.

--
Regards,
Benjamin

From arigo at tunes.org Fri Sep 23 10:05:12 2011
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 23 Sep 2011 10:05:12 +0200
Subject: [pypy-dev] Question about byte-code hacking
In-Reply-To: 
References: <4E7C0532.1070503@pearwood.info>
Message-ID: 

Hi,

On Fri, Sep 23, 2011 at 6:12 AM, Benjamin Peterson wrote:
>> What's the PyPy position on bytecode hacking? Good, bad, evil, don't mind
>> either way?
>
> (...)
> Secondly, it's useless for speed when you have a JIT.

Indeed, although it is not 100% true, because we also have an interpreter.
But it's still 95% true. All in all micro-optimizations that gain at most
some small number of percents in CPython's run-time and that don't give
anything anyway with the JIT are particularly pointless in PyPy.

Well, let's just say I'm amazed at the energy people can put in endless
threads of discussion. Feel free to do it, because it's usually easy to
adapt our interpreter, and our JIT follows automatically. I would very
naively suggest the following: How about putting the same amount of effort
into bringing forward http://hotpy.blogspot.com/ instead?

A bientôt,

Armin.
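(For readers who have not followed the python-ideas thread, the "default
argument hack" being discussed looks roughly like this -- a generic
illustration, not code from that thread:)

# Micro-optimization use: bind the global `len` to a fast local at
# function-definition time (pointless under PyPy's JIT, as noted above).
def total_length(items, len=len):
    n = 0
    for item in items:
        n += len(item)
    return n

# Early-binding use: capture the current loop value, not the variable.
callbacks = [lambda i=i: i for i in range(3)]
assert [f() for f in callbacks] == [0, 1, 2]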
From steve at pearwood.info Fri Sep 23 11:27:44 2011
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 23 Sep 2011 19:27:44 +1000
Subject: [pypy-dev] Question about byte-code hacking
In-Reply-To: 
References: <4E7C0532.1070503@pearwood.info>
Message-ID: <4E7C5110.7040502@pearwood.info>

Benjamin Peterson wrote:
> 2011/9/23 Steven D'Aprano :
>> Hi guys,
>>
>> Over on the python-ideas mailing list, there is a long thread about the
>> default argument hack in functions, used for micro-optimizations,
>> early-binding, and monkey-patching. Various alternatives are being argued
>> about. One proposal put forward involves bytecode manipulations to change
>> global lookups to local so that one could have a decorator that "injects" a
>> value into a copy of the function.
>>
>>
>> What's the PyPy position on bytecode hacking? Good, bad, evil, don't mind
>> either way?
>
> First of all, it's going to be implementation defined. So, you can't
> expect *any* bytecode you create on one VM to work on another.
> Secondly, it's useless for speed when you have a JIT.

I don't expect that the same bytecode would work on multiple
implementations. Obviously each implementation would either need its own
bytecode manipulation, or simply refuse to support it.

Regardless of whether it is useless for speed or not, it is legal syntax.
Default arguments are also used for early binding, which has nothing to do
with speed.

So the question is: would it be a burden for PyPy to make any guarantees
about the stability of bytecode?

--
Steven

From benjamin at python.org Fri Sep 23 14:06:26 2011
From: benjamin at python.org (Benjamin Peterson)
Date: Fri, 23 Sep 2011 08:06:26 -0400
Subject: [pypy-dev] Question about byte-code hacking
In-Reply-To: <4E7C5110.7040502@pearwood.info>
References: <4E7C0532.1070503@pearwood.info>
	<4E7C5110.7040502@pearwood.info>
Message-ID: 

2011/9/23 Steven D'Aprano :
>
> So the question is: would it be a burden for PyPy to make any guarantees
> about the stability of bytecode?

I would say not without great benefit. If you're doing something that
requires changing bytecode, the obvious answer is to add some syntax
instead.

--
Regards,
Benjamin

From fijall at gmail.com Fri Sep 23 14:26:24 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 23 Sep 2011 14:26:24 +0200
Subject: [pypy-dev] Question about byte-code hacking
In-Reply-To: 
References: <4E7C0532.1070503@pearwood.info>
	<4E7C5110.7040502@pearwood.info>
Message-ID: 

On Fri, Sep 23, 2011 at 2:06 PM, Benjamin Peterson wrote:
> 2011/9/23 Steven D'Aprano :
>>
>> So the question is: would it be a burden for PyPy to make any guarantees
>> about the stability of bytecode?
>
> I would say not without great benefit. If you're doing something that
> requires changing bytecode, the obvious answer is to add some syntax
> instead.
>

Wait a second. Why do you need such guarantees in the first place? It's
not like pypy's pycs and CPython pycs are interchangeable. Whether pypy
uses bytecode or not is even an internal detail.

But as Armin said, why bother?

Cheers,
fijal

From zariko.taba at gmail.com Fri Sep 23 14:37:12 2011
From: zariko.taba at gmail.com (Zariko Taba)
Date: Fri, 23 Sep 2011 14:37:12 +0200
Subject: [pypy-dev] rlist and ll_delitem_nonneg index
Message-ID: 

Hi pypy!
I hit an assert in pypy/annotation/annrpython.py, in addpendingblock
line 231:

    assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg

I think I found an explanation by digging in the code:

The treated block is in the "ll_delitem_nonneg" function; s_newarg is a
"SomeInteger(nonneg=True)" and s_oldarg is a "SomeInteger(nonneg=False)",
and in fact they are the "index" argument of this function.

oldarg comes from a previous annotation using the "rtype_method_remove" of
rlist.py. "rtype_method_remove" uses "ll_listremove", which uses:

    ll_delitem_nonneg(..., ll_listindex(...))

Looking at ll_listindex, it seems clear that the index can be proved as
always positive. That's probably why s_oldarg is a
"SomeInteger(nonneg=True)".

The new annotation comes from the method "rtype_delitem" (class __extend__
in rlist.py). The index from hop is always positive, so the code:

    if hop.args_s[1].nonneg:
        llfn = ll_delitem_nonneg

selects "ll_delitem_nonneg" as the deletion function. But the index given
to the annotator is taken from:

    v_lst, v_index = hop.inputargs(r_lst, Signed)

v_index is a "Signed". The information concerning the "non negative"
property of the index integer is lost here. index is now
"SomeInteger(nonneg=False)" and triggers the assert.

What do you think?

Zariko.

From arigo at tunes.org Fri Sep 23 15:27:44 2011
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 23 Sep 2011 15:27:44 +0200
Subject: [pypy-dev] Question about byte-code hacking
In-Reply-To: <4E7C5110.7040502@pearwood.info>
References: <4E7C0532.1070503@pearwood.info>
	<4E7C5110.7040502@pearwood.info>
Message-ID: 

Hi,

On Fri, Sep 23, 2011 at 11:27 AM, Steven D'Aprano wrote:
> So the question is: would it be a burden for PyPy to make any guarantees
> about the stability of bytecode?

The answer is: Feel free to do anything or nothing with CPython's
bytecode. As Fijal says it has little to do with PyPy. It's even more
true about the CPython 3 bytecode.

A bientôt,

Armin.

From arigo at tunes.org Fri Sep 23 15:34:50 2011
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 23 Sep 2011 15:34:50 +0200
Subject: [pypy-dev] rlist and ll_delitem_nonneg index
In-Reply-To: 
References: 
Message-ID: 

Hi Zariko,

On Fri, Sep 23, 2011 at 2:37 PM, Zariko Taba wrote:
> I hit an assert in pypy/annotation/annrpython.py in addpendingblock line 231
> : assert annmodel.unionof(s_oldarg, s_newarg) == s_oldarg

This is an assert that we keep hitting from time to time. Your
explanation is wrong, though, it's not about some variable having a type
"Signed", because that's a low-level type, not something seen by the
annotator.

I imagine that the solution is easy --- it usually is, but we never found
a correct solution covering all possible cases. So instead we just have
to fix the particular issue that you're getting. But it would require us
to have the full context (in this case, the complete RPython program that
you're trying to translate).

A bientôt,

Armin.

From elec.lomy.ru at gmail.com Fri Sep 23 20:35:05 2011
From: elec.lomy.ru at gmail.com (Александр Седов)
Date: Fri, 23 Sep 2011 22:35:05 +0400
Subject: [pypy-dev] Stacklets
In-Reply-To: 
References: <4E451B04.6050104@gmail.com>
Message-ID: 

2011/9/1 Armin Rigo :
> Hi,
>
> The "stacklet" branch has been merged now. The "_continuation" module
> is available on all PyPys with or without the JIT on x86 and x86-64
> since a few days, and it will of course be part of release 1.6.1.
> There is an almost-complete wrapper "greenlet.py". For documentation
> and current limitations see here:
>
>     http://doc.pypy.org/en/latest/stackless.html
>
>
> A bientôt,
>
> Armin.
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

Hello Armin,

I'm interested in porting _stackless to stacklets (and also probably
making it inter-thread). Where can I find reference API documentation for
channels and tasklets? I think it would probably be simpler to rewrite
some parts of the code completely.

--
Best regards,
Alexander Sedov

From arigo at tunes.org Fri Sep 23 22:38:43 2011
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 23 Sep 2011 22:38:43 +0200
Subject: [pypy-dev] Stacklets
In-Reply-To: 
References: <4E451B04.6050104@gmail.com>
Message-ID: 

Hi,

2011/9/23 Александр Седов:
> I'm interested in porting _stackless to stacklets (and also probably
> making it inter-thread).

Thanks! Work in this direction is already well advanced. More precisely,
the directory pypy/module/_stackless is obsolete and gone, and the pure
Python module lib_pypy/stackless.py has been ported to use _continuation.
(I wonder somehow why we had all this code in pypy/module/_stackless that
seems not needed any more.) But it is not multi-thread-safe so far, which
is probably an easy fix, using a thread-local instead of all these global
variables initialized in _init() in stackless.py.

Note also that there is a branch "continulet-pickle" that could do with
help from someone with more motivation than me to finish this. So far you
can pickle continulets, greenlets, and coroutines, but not tasklets. It
looks messy because of early-optimization issues from stackless.py ---
e.g. it would be much more natural for it to switch to the main tasklet
every time it needs to do the scheduling and choose the next tasklet to
switch to, instead of being clever and switching directly to the target
tasklet; this "unwanted cleverness" prevents pickling from working at all,
because it sees too much unrelated stuff in a suspended tasklet.

All in all what I would be most happy with, at this point, is if someone
would step up and finish porting and maintaining stackless.py. Ideally it
would be someone that needs this code for his own projects, too.

> Where can I find reference API documentation for channels and tasklets

At the Stackless Python original web site.

A bientôt,

Armin.

From nick at njwilson.net Sat Sep 24 02:44:21 2011
From: nick at njwilson.net (Nick Wilson)
Date: Fri, 23 Sep 2011 17:44:21 -0700
Subject: [pypy-dev] Student project ideas
Message-ID: <4E7D27E5.6030708@njwilson.net>

I'm interested in volunteering my time to mentor a small group of senior
Computer Science students at Oregon State University on a project relevant
to the Python community. PyPy definitely qualifies, and I'm looking for
project ideas.

The project would be for their senior capstone class. Groups of 2-4
students vote on the list of available projects and then work from roughly
mid-November to mid-May (along with all their other coursework) to
complete it. The scope of a project is similar to what you'd assign a
full-time summer intern.

I'm relatively new to the Python community and haven't poked around PyPy
much yet. I see the potential PyPy project list [1] in the developer
documentation. That's very helpful, but is anyone able to recommend some
projects from that list that are about the right difficulty and size?
I have a decent amount of time to work with the students and am looking
for a project I could make significant contributions to as well. So I
should be able to work closely with the students and take whatever they
produce and work it into something usable if they are unable to complete
the entire project.

Any suggestions?

Thanks,
Nick Wilson

[1] http://doc.pypy.org/en/latest/project-ideas.html

From fijall at gmail.com Sat Sep 24 14:47:33 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sat, 24 Sep 2011 14:47:33 +0200
Subject: [pypy-dev] Performance, json and standard library
Message-ID: 

Hello.

I would like to raise the topic of modifying the standard library for
performance reasons in *some* places. I know the policy so far is to avoid
modifications as much as possible and in general I agree. For example the
changes justinpeel made to bz2 (or tarfile? please remind me about the
details) were not good, since it seems equivalent changes can be achieved
by tweaking the JIT.

However, json comes to mind. The situation is as follows:

* json in stdlib is some old version of simplejson
* simplejson has been played with to improve the performance on top of pypy
* there are reports that relatively simple changes will improve
performance: https://bugs.pypy.org/issue868
* json in stdlib will *not* be updated for the 2.7 series and even if it
gets updated for 3.x it'll be incompatible

So while I agree that ideally the JIT could handle whatever it has, maybe
json is an example good enough to warrant changes. There are people out
there who would base a migration to pypy on json performance, for example.

Any opinions?

Cheers,
fijal

From lac at openend.se Sat Sep 24 15:07:09 2011
From: lac at openend.se (Laura Creighton)
Date: Sat, 24 Sep 2011 15:07:09 +0200
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: Message from Maciej Fijalkowski of "Sat, 24 Sep 2011 14:47:33 +0200."
References: 
Message-ID: <201109241307.p8OD79Hf019467@theraft.openend.se>

In a message of Sat, 24 Sep 2011 14:47:33 +0200, Maciej Fijalkowski writes:
>Hello.
>
>I would like to raise the topic of modifying the standard library for
>performance reasons in *some* places. I know the policy so far is to
>avoid modifications as much as possible and in general I agree. For
>example the changes justinpeel made to bz2 (or tarfile? please remind
>me about the details) were not good, since it seems equivalent changes
>can be achieved by tweaking the JIT.
>
>However, json comes to mind. The situation is as follows:
>
>* json in stdlib is some old version of simplejson
>* simplejson has been played with to improve the performance on top of
>pypy
>* there are reports that relatively simple changes will improve
>performance: https://bugs.pypy.org/issue868
>* json in stdlib will *not* be updated for the 2.7 series and even if it
>gets updated for 3.x it'll be incompatible
>
>So while I agree that ideally the JIT could handle whatever it has, maybe
>json is an example good enough to warrant changes. There are people out
>there who would base a migration to pypy on json performance, for
>example.
>
>Any opinions?
>
>Cheers,
>fijal

I am at PyCON UK right now. I have already had 4 conversations with
people who badly want better json performance. (And one person who says
zip performance is bad - is it? I told him to write a bug report.) And
it's only lunch time on day 1 of the con.

Faster json thus appears to be an itch that needs a lot of scratching
around here.

Laura
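(For concreteness, the kind of quick-and-dirty comparison people run looks
something like the sketch below -- the document shape and iteration count
are hypothetical; run the same script under python and under pypy and
compare the timings:)

import json
import time

DOC = {"users": [{"id": i, "name": "user%d" % i, "active": i % 2 == 0}
                 for i in range(100)]}

def bench(label, func, n=2000):
    start = time.time()
    for _ in range(n):
        func()
    print "%s: %.3fs" % (label, time.time() - start)

encoded = json.dumps(DOC)
bench("dumps", lambda: json.dumps(DOC))
bench("loads", lambda: json.loads(encoded))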
Laura

From arigo at tunes.org  Sat Sep 24 15:26:10 2011
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 24 Sep 2011 15:26:10 +0200
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

Hi Maciej,

On Sat, Sep 24, 2011 at 2:47 PM, Maciej Fijalkowski wrote:
> So while I agree that ideally the JIT could handle whatever it has,
> maybe json is an example good enough to warrant changes.

Yes, I agree in theory.  (Didn't look in detail at the proposed
patches.)  Alternatively, couldn't the patches be formulated as a new
file lib_pypy/_json.py?  The stdlib's json module imports _json if
available, and of course it doesn't have to be written in RPython, it
can just be a tweaked version of the Python library.


A bientôt,

Armin.

From fijall at gmail.com  Sat Sep 24 15:32:51 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Sat, 24 Sep 2011 15:32:51 +0200
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

On Sat, Sep 24, 2011 at 3:26 PM, Armin Rigo wrote:
> Hi Maciej,
>
> On Sat, Sep 24, 2011 at 2:47 PM, Maciej Fijalkowski wrote:
>> So while I agree that ideally the JIT could handle whatever it has,
>> maybe json is an example good enough to warrant changes.
>
> Yes, I agree in theory.  (Didn't look in detail at the proposed
> patches.)  Alternatively, couldn't the patches be formulated as a new
> file lib_pypy/_json.py?  The stdlib's json module imports _json if
> available, and of course it doesn't have to be written in RPython, it
> can just be a tweaked version of the Python library.
>
>
> A bientôt,
>
> Armin.
>

I agree that's a neat escape from the "we don't modify the standard
library" policy :)

Cheers,
fijal
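The import dance Armin refers to already exists in the stdlib:
json/decoder.py tries an accelerator module and falls back to pure
Python, roughly like this (simplified; py_scanstring stands in for the
pure-Python implementation defined in the same file):

    try:
        from _json import scanstring as c_scanstring
    except ImportError:
        c_scanstring = None

    def py_scanstring(s, end, strict=True):
        # stand-in for the pure-Python fallback in json/decoder.py
        raise NotImplementedError

    scanstring = c_scanstring or py_scanstring

So shipping a tweaked pure-Python lib_pypy/_json.py would be picked up
automatically, without touching the stdlib sources at all.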
From andrewfr_ice at yahoo.com  Sat Sep 24 22:29:43 2011
From: andrewfr_ice at yahoo.com (Andrew Francis)
Date: Sat, 24 Sep 2011 13:29:43 -0700 (PDT)
Subject: [pypy-dev] Stacklets
References: <4E451B04.6050104@gmail.com>
Message-ID: <1316896183.20733.YahooMailNeo@web120701.mail.ne1.yahoo.com>

Hi Armin and Folks:

________________________________
From: Armin Rigo
To: Alexander Sedov
Cc: pypy-dev at python.org
Sent: Friday, September 23, 2011 4:38 PM
Subject: Re: [pypy-dev] Stacklets

>Thanks!  Work in this direction is already well advanced.  More
>precisely, the directory pypy/module/_stackless is obsolete and gone,
>and the pure Python module lib_pypy/stackless.py has been ported to
>use _continuation.  (I wonder somehow why we had all this code in
>pypy/module/_stackless that seems not needed any more.)

I downloaded the latest build, looked at stackless.py and wrote a simple
test. When I have time, I will try to incorporate and test my new (and
very unofficial) stackless features. I also should have enough code to
run weird examples that ought to stress the system. That said, I'm
really impressed that continulets were ported so fast. Kudos to Rodrigo!

A suggestion. Perhaps it would be good to keep the test for whether
CPython is the interpreter and greenlets ought to be used? In this
fashion, someone that doesn't want to use pypy-c can still play with
stackless. And one authoritative copy of stackless.py can be kept (as
opposed to hacking a version for greenlets).

> But it is not multi-thread-safe so far, which is probably an easy fix,
> using a thread-local instead of all these global variables initialized
> in _init() in stackless.py.

In the stackless mailing list, there is a conversation about some
gotchas concerning threads and tasklets that one may want to read.
Although the conversation revolves around C-based Stackless Python
internals, the 50,000 ft view is about threads dying with tasklets
bound to them.

>So far you can pickle continulets, greenlets, and coroutines, but not
>tasklets.  It looks messy because of early-optimization issues from
>stackless.py.

All I know about pickling is that one cannot pickle a tasklet with
cstate. Or a blocked tasklet. I don't know how that translates into the
pypy world.

>--- e.g. it would be much more natural for it to switch
>to the main tasklet every time it needs to do the scheduling and
>choose the next tasklet to switch to, instead of being clever and
>switching directly to the target tasklet; this "unwanted cleverness"
>prevents pickling from working at all, because it sees too much
>unrelated stuff in a suspended tasklet

This makes the scheduler sound like a generator trampoline. It also
adds an additional context switch, if I understand things correctly.

>All in all what I would be most happy with, at this point, is if
>someone would step up and finish porting and maintaining stackless.py.
>Ideally it would be someone that needs this code for his own
>projects, too.

Seems that Rodrigo did a pretty good job. What is left to be done?

Cheers,
Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From elec.lomy.ru at gmail.com  Sun Sep 25 09:50:07 2011
From: elec.lomy.ru at gmail.com (Alexander Sedov)
Date: Sun, 25 Sep 2011 11:50:07 +0400
Subject: [pypy-dev] Problems when trying to translate.
Message-ID: 

Translating PyPy on Linux with -O1 --continuation
--withmod-_continuation, I got the error:
ValueError: unknown value for translation.gc: 'ref'
Googling gave me no results. Is there a way to fix this?

PS: By the way, the translation procedure reports an unreachable block
in pypy.module.interp_mmap, in W_MMap.descr_getitem.

From zooko at zooko.com  Sun Sep 25 19:30:22 2011
From: zooko at zooko.com (Zooko O'Whielacronx)
Date: Sun, 25 Sep 2011 11:30:22 -0600
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

But don't people who need better json performance use simplejson
explicitly instead of using the standard library's json?

Regards,

Zooko

From bob at redivi.com  Sun Sep 25 19:49:14 2011
From: bob at redivi.com (Bob Ippolito)
Date: Sun, 25 Sep 2011 10:49:14 -0700
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

simplejson would be a good target for changes that would not be easy
to implement on top of the stdlib json. I'd be happy to accept any
contributions. I failed to make big differences in performance when I
tried at PyCon (at least, not without regressing performance for some
people). The other things I'm missing are a good suite of documents to
benchmark with, and a good tool to run the benchmarks so it's easy to
see if incremental changes are better or worse.

However, if RPython is required to make it faster, maybe implementing
_json for the stdlib would actually be best.

On Sun, Sep 25, 2011 at 10:30 AM, Zooko O'Whielacronx wrote:
> But don't people who need better json performance use simplejson
> explicitly instead of using the standard library's json?
>
> Regards,
>
> Zooko
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From alex.gaynor at gmail.com  Sun Sep 25 19:53:57 2011
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Sun, 25 Sep 2011 13:53:57 -0400
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

On Sun, Sep 25, 2011 at 1:49 PM, Bob Ippolito wrote:
> simplejson would be a good target for changes that would not be easy
> to implement on top of the stdlib json. I'd be happy to accept any
> contributions.
> [snip]

For what it's worth, I think we can get there, without needing to write
any RPython, through a combination of careful Python, and more JIT
optimizations.  For example, I'd like to get the code
input[i:i+4] == "NULL" to eventually generate:

    read str length
    check length >= 4
    read 4 bytes out of input (single MOVL)
    integer compare to ('N' << 0) | ('U' << 8) | ('L' << 16) | ('L' << 24)

in total about 7 x86 instructions.  I think this is definitely possible!

Alex

-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
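Alex's constant is easy to sanity-check from plain Python: a
little-endian 4-byte load of "NULL" really is that integer (this is just
arithmetic to verify the idea, not PyPy code):

    import struct

    expected = (ord('N') | (ord('U') << 8) |
                (ord('L') << 16) | (ord('L') << 24))
    word, = struct.unpack("<I", "NULL")   # one little-endian 32-bit load
    assert word == expected == 0x4c4c554e

So the whole four-character comparison collapses to a single integer
compare once the JIT has proven the string is at least 4 bytes long.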
From arigo at tunes.org  Mon Sep 26 08:41:04 2011
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 26 Sep 2011 08:41:04 +0200
Subject: [pypy-dev] Problems when trying to translate.
In-Reply-To: 
References: 
Message-ID: 

Hi,

2011/9/25 Alexander Sedov:
> Translating PyPy on Linux with -O1 --continuation
> --withmod-_continuation, I got the error:
> ValueError: unknown value for translation.gc: 'ref'
> Googling gave me no results. Is there a way to fix this?

I suppose you didn't install Boehm, which is needed in -O1
compilations nowadays.  See our "prerequisites" list.


A bientôt,

Armin.

From elec.lomy.ru at gmail.com  Mon Sep 26 13:08:00 2011
From: elec.lomy.ru at gmail.com (Alexander Sedov)
Date: Mon, 26 Sep 2011 15:08:00 +0400
Subject: [pypy-dev] Problems when trying to translate.
In-Reply-To: 
References: 
Message-ID: 

On 26 September 2011 10:41, Armin Rigo wrote:
> Hi,
>
> 2011/9/25 Alexander Sedov:
>> Translating PyPy on Linux with -O1 --continuation
>> --withmod-_continuation, I got the error:
>> ValueError: unknown value for translation.gc: 'ref'
>> Googling gave me no results. Is there a way to fix this?
>
> I suppose you didn't install Boehm, which is needed in -O1
> compilations nowadays.  See our "prerequisites" list.

Of course I've seen it and just copied the apt-get command to my
terminal. Just checked -- libgc and libgc-dev are installed.

From fijall at gmail.com  Mon Sep 26 13:48:44 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 26 Sep 2011 08:48:44 -0300
Subject: [pypy-dev] Problems when trying to translate.
In-Reply-To: 
References: 
Message-ID: 

2011/9/26 Alexander Sedov:
> Of course I've seen it and just copied the apt-get command to my
> terminal. Just checked -- libgc and libgc-dev are installed.

Er no, it seems something sets gc to "ref", which is reference
counting. No clue why. Anyway, your command line is not working. Did
you mean --continuelets and --withmod-somethingelse? Also, --withmod
comes after the target. Please paste it precisely.

From fijall at gmail.com  Mon Sep 26 13:53:44 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 26 Sep 2011 08:53:44 -0300
Subject: [pypy-dev] [pypy-commit] pypy default: graphviewer - split the
	dot2plain function into one for local and one for the codespeak cgi
In-Reply-To: <20110926104036.8CA7C820CE@wyvern.cs.uni-duesseldorf.de>
References: <20110926104036.8CA7C820CE@wyvern.cs.uni-duesseldorf.de>
Message-ID: 

Can't we just kill codespeak's CGI?

On Mon, Sep 26, 2011 at 7:40 AM, RonnyPfannschmidt wrote:
> Author: Ronny Pfannschmidt
> Branch:
> Changeset: r47606:7acf2b8fcafd
> Date: 2011-09-26 12:40 +0200
> http://bitbucket.org/pypy/pypy/changeset/7acf2b8fcafd/
>
> Log:    graphviewer - split the dot2plain function into one for local and
>         one for the codespeak cgi
>
> diff --git a/dotviewer/graphparse.py b/dotviewer/graphparse.py
> --- a/dotviewer/graphparse.py
> +++ b/dotviewer/graphparse.py
> @@ -36,48 +36,45 @@
>      print >> sys.stderr, "Warning: could not guess file type, using 'dot'"
>      return 'unknown'
>
> -def dot2plain(content, contenttype, use_codespeak=False):
> -    if contenttype == 'plain':
> -        # already a .plain file
> -        return content
> +def dot2plain_graphviz(content, contenttype, use_codespeak=False):
> +    if contenttype != 'neato':
> +        cmdline = 'dot -Tplain'
> +    else:
> +        cmdline = 'neato -Tplain'
> +    #print >> sys.stderr, '* running:', cmdline
> +    close_fds = sys.platform != 'win32'
> +    p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds,
> +                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
> +    (child_in, child_out) = (p.stdin, p.stdout)
> +    try:
> +        import thread
> +    except ImportError:
> +        bkgndwrite(child_in, content)
> +    else:
> +        thread.start_new_thread(bkgndwrite, (child_in, content))
> +    plaincontent = child_out.read()
> +    child_out.close()
> +    if not plaincontent:    # 'dot' is likely not installed
> +        raise PlainParseError("no result from running 'dot'")
> +    return plaincontent
>
> -    if not use_codespeak:
> -        if contenttype != 'neato':
> -            cmdline = 'dot -Tplain'
> -        else:
> -            cmdline = 'neato -Tplain'
> -        #print >> sys.stderr, '* running:', cmdline
> -        close_fds = sys.platform != 'win32'
> -        p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds,
> -                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
> -        (child_in, child_out) = (p.stdin, p.stdout)
> -        try:
> -            import thread
> -        except ImportError:
> -            bkgndwrite(child_in, content)
> -        else:
> -            thread.start_new_thread(bkgndwrite, (child_in, content))
> -        plaincontent = child_out.read()
> -        child_out.close()
> -        if not plaincontent:    # 'dot' is likely not installed
> -            raise PlainParseError("no result from running 'dot'")
> -    else:
> -        import urllib
> -        request = urllib.urlencode({'dot': content})
> -        url = 'http://codespeak.net/pypy/convertdot.cgi'
> -        print >> sys.stderr, '* posting:', url
> -        g = urllib.urlopen(url, data=request)
> -        result = []
> -        while True:
> -            data = g.read(16384)
> -            if not data:
> -                break
> -            result.append(data)
> -        g.close()
> -        plaincontent = ''.join(result)
> -        # very simple-minded way to give a somewhat better error message
> -        if plaincontent.startswith('
> -            raise Exception("the dot on codespeak has very likely crashed")
> +def dot2plain_codespeak(content, contenttype):
> +    import urllib
> +    request = urllib.urlencode({'dot': content})
> +    url = 'http://codespeak.net/pypy/convertdot.cgi'
> +    print >> sys.stderr, '* posting:', url
> +    g = urllib.urlopen(url, data=request)
> +    result = []
> +    while True:
> +        data = g.read(16384)
> +        if not data:
> +            break
> +        result.append(data)
> +    g.close()
> +    plaincontent = ''.join(result)
> +    # very simple-minded way to give a somewhat better error message
> +    if plaincontent.startswith('
> +        raise Exception("the dot on codespeak has very likely crashed")
>      return plaincontent
>
>  def bkgndwrite(f, data):
> @@ -148,10 +145,13 @@
>
>  def parse_dot(graph_id, content, links={}, fixedfont=False):
>      contenttype = guess_type(content)
> -    try:
> -        plaincontent = dot2plain(content, contenttype, use_codespeak=False)
> -        return list(parse_plain(graph_id, plaincontent, links, fixedfont))
> -    except PlainParseError:
> -        # failed, retry via codespeak
> -        plaincontent = dot2plain(content, contenttype, use_codespeak=True)
> -        return list(parse_plain(graph_id, plaincontent, links, fixedfont))
> +    if contenttype == 'plain':
> +        plaincontent = content
> +    else:
> +        try:
> +            plaincontent = dot2plain_graphviz(content, contenttype)
> +        except PlainParseError, e:
> +            print e
> +            # failed, retry via codespeak
> +            plaincontent = dot2plain_codespeak(content, contenttype)
> +    return list(parse_plain(graph_id, plaincontent, links, fixedfont))
> _______________________________________________
> pypy-commit mailing list
> pypy-commit at python.org
> http://mail.python.org/mailman/listinfo/pypy-commit
>

From arigo at tunes.org  Mon Sep 26 15:05:14 2011
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 26 Sep 2011 15:05:14 +0200
Subject: [pypy-dev] Stacklets
In-Reply-To: 
References: <4E451B04.6050104@gmail.com>
	<1316896183.20733.YahooMailNeo@web120701.mail.ne1.yahoo.com>
Message-ID: 

Hi,

2011/9/24 Andrew Francis:
> A suggestion. Perhaps it would be good to keep the test for whether
> CPython is the interpreter and greenlets ought to be used?

Feel free to propose concrete improvements.  As I said already, I
implemented the code so far but I don't really have deep interest
myself in this feature.  If someone wants seriously to start working on
it, I can give a few hints.  Otherwise, I'm sorry but I'm not going to
take an active part in design discussions about how stackless features
could be improved.


A bientôt,

Armin.

From andrewfr_ice at yahoo.com  Mon Sep 26 19:21:51 2011
From: andrewfr_ice at yahoo.com (Andrew Francis)
Date: Mon, 26 Sep 2011 10:21:51 -0700 (PDT)
Subject: [pypy-dev] Stacklets
References: <4E451B04.6050104@gmail.com>
	<1316896183.20733.YahooMailNeo@web120701.mail.ne1.yahoo.com>
Message-ID: <1317057711.4024.YahooMailNeo@web120712.mail.ne1.yahoo.com>

Hi Armin:

________________________________
From: Armin Rigo
To: Andrew Francis
Cc: Alexander Sedov; "pypy-dev at python.org"
Sent: Monday, September 26, 2011 9:05 AM
Subject: Re: [pypy-dev] Stacklets

>Feel free to propose concrete improvements.

Well, the easiest thing to do is to see if importing _continuation
fails, and if it does, try to import greenlets. Also keep the old
greenlet code. This is very much the way the previous stackless.py
worked.

>As I said already, I implemented the code so far but I don't really
>have deep interest myself in this feature.  If someone wants seriously
>to start working on it, I can give a few hints.

I would be happy to work with other folks that are interested in the
stackless.py module. I have a patchy knowledge of PyPy but a decent
knowledge of stackless.py and Stackless. Let me study continuations and
the existing stackless.py module. In this fashion, I can make every
hint I ask for count.

>  Otherwise, I'm sorry but I'm not going to take an active part in
> design discussions about how stackless features could be improved.

As discussed on IRC, I think an approach that would work is to fork
stackless.py in two. One branch would be conventional. That is, it
would track C-based Stackless but incorporate stuff like continulets,
bug fixes and more conservative features.

The other branch would be experimental. Wilder stuff would be done
there.

Cheers,
Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
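The fallback Andrew proposes above is only a few lines. A sketch — the
module names _continuation and greenlet are real, but the surrounding
structure is invented for illustration:

    # Prefer PyPy's native continulets; fall back to greenlets on CPython.
    try:
        from _continuation import continulet
        _impl = "continulet"
    except ImportError:
        from greenlet import greenlet   # ImportError here means neither works
        _impl = "greenlet"

The rest of the module would then branch on _impl to pick its switching
primitive, which is essentially how the pre-continulet stackless.py was
structured.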
From mkaniaris at gmail.com  Mon Sep 26 23:39:10 2011
From: mkaniaris at gmail.com (Matthew Kaniaris)
Date: Mon, 26 Sep 2011 17:39:10 -0400
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

I did some testing to see where we stand on JSON.  The pypy is from
trunk and the simplejson used with pypy is the _pypy_speedups branch.
The speedups make pypy about 2x faster on dumps than with the stdlib
JSON module, slightly slower with loads, but up to ten times slower
than cpython with simplejson with the 32kb file.  I'll try profiling
the speedups branch to see if there is any easy fruit left, but I
doubt we will get another 50% improvement out of it.

-kans

results:

python using json:

/home/test/3.4kb.json
loads: 5 loops, best of 1000: 953 usec per loop
dumps: 5 loops, best of 1000: 706 usec per loop

/home/test/32kb.json
loads: 5 loops, best of 1000: 10.9 msec per loop
dumps: 5 loops, best of 1000: 9.13 msec per loop

-------------------------
python using simplejson:

/home/test/3.4kb.json
loads: 5 loops, best of 1000: 41.2 usec per loop
dumps: 5 loops, best of 1000: 56 usec per loop

/home/test/32kb.json
loads: 5 loops, best of 1000: 604 usec per loop
dumps: 5 loops, best of 1000: 391 usec per loop

-------------------------
pypy using json:

/home/test/3.4kb.json
loads: 5 loops, best of 1000: 146 usec per loop
dumps: 5 loops, best of 1000: 429 usec per loop

/home/test/32kb.json
loads: 5 loops, best of 1000: 2.93 msec per loop
dumps: 5 loops, best of 1000: 7.16 msec per loop

-------------------------
pypy using simplejson:

/home/test/3.4kb.json
loads: 5 loops, best of 1000: 197 usec per loop
dumps: 5 loops, best of 1000: 148 usec per loop

/home/test/32kb.json
loads: 5 loops, best of 1000: 3.47 msec per loop
dumps: 5 loops, best of 1000: 3.2 msec per loop

On Sun, Sep 25, 2011 at 1:53 PM, Alex Gaynor wrote:
> [snip]
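Numbers in this "N loops, best of M" form come from a timeit-style
harness; a rough reconstruction of such a script (the file name matches
Matthew's output, but the loop counts and the script itself are guesses,
not necessarily his actual tool):

    import timeit

    SETUP = "import json; data = open('/home/test/32kb.json').read()"

    def best_of(stmt, repeat=5, number=1000):
        # Best-of-N timing, like the `python -m timeit` command line.
        best = min(timeit.repeat(stmt, SETUP, repeat=repeat, number=number))
        print "%s: %.1f usec per loop" % (stmt, best / number * 1e6)

    best_of("json.loads(data)")
    best_of("json.dumps(json.loads(data))")

Under a JIT the best-of-N convention matters doubly, since the first
runs include tracing and compilation time.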
From bob at redivi.com  Tue Sep 27 00:13:25 2011
From: bob at redivi.com (Bob Ippolito)
Date: Mon, 26 Sep 2011 15:13:25 -0700
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

You should also try the master branch of simplejson, the
_pypy_speedups branch is not necessarily better (which is why it is
not master).

On Mon, Sep 26, 2011 at 2:39 PM, Matthew Kaniaris wrote:
> I did some testing to see where we stand on JSON.  The pypy is from
> trunk and the simplejson used with pypy is the _pypy_speedups branch.
> [snip]
From richard.m.tew at gmail.com  Tue Sep 27 01:50:51 2011
From: richard.m.tew at gmail.com (Richard Tew)
Date: Tue, 27 Sep 2011 07:50:51 +0800
Subject: [pypy-dev] Stacklets
In-Reply-To: <1317057711.4024.YahooMailNeo@web120712.mail.ne1.yahoo.com>
References: <4E451B04.6050104@gmail.com>
	<1316896183.20733.YahooMailNeo@web120701.mail.ne1.yahoo.com>
	<1317057711.4024.YahooMailNeo@web120712.mail.ne1.yahoo.com>
Message-ID: 

On Tue, Sep 27, 2011 at 1:21 AM, Andrew Francis wrote:
> Well, the easiest thing to do is to see if importing _continuation
> fails, and if it does, try to import greenlets. Also keep the old
> greenlet code. This is very much the way the previous stackless.py
> worked.

Wouldn't that complicate the code unnecessarily?  Perhaps a better way
would be to put the burden on the greenlet users and if they wish to
share the implementation, they should write an emulation layer for
continuations.

> As discussed on IRC, I think an approach that would work is to fork
> stackless.py in two. One branch would be conventional. That is, it
> would track C-based Stackless but incorporate stuff like continulets,
> bug fixes and more conservative features.
>
> The other branch would be experimental. Wilder stuff would be done
> there.

Sounds like a good idea to me.  As long as any new or altered features
do not make it into what is labelled as an implementation of the
Stackless API without also being accepted into Stackless itself.

Cheers,
Richard.

From fijall at gmail.com  Tue Sep 27 03:18:02 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 26 Sep 2011 22:18:02 -0300
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 26, 2011 at 7:13 PM, Bob Ippolito wrote:
> You should also try the master branch of simplejson, the
> _pypy_speedups branch is not necessarily better (which is why it is
> not master).

You should also look at https://bugs.pypy.org/issue866 for various patches.

> [snip]
From fijall at gmail.com  Tue Sep 27 03:18:34 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 26 Sep 2011 22:18:34 -0300
Subject: [pypy-dev] Performance, json and standard library
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 26, 2011 at 10:18 PM, Maciej Fijalkowski wrote:
> On Mon, Sep 26, 2011 at 7:13 PM, Bob Ippolito wrote:
>> You should also try the master branch of simplejson, the
>> _pypy_speedups branch is not necessarily better (which is why it is
>> not master).
>
> You should also look at https://bugs.pypy.org/issue866 for various patches.

Wrong link: https://bugs.pypy.org/issue868

> [snip]
If users have to write their own emulation layer, I see major two things happening: 1) folks walk away. 2) One gets a proliferation of emulation layers - wasted manpower. As it stands the PyPy developers made the right choice. As an example, look at the number of spinoffs from Stackless and stackless.py due to a lack of networking. AF> The other branch would be experimental. Wilder stuff would be done there. >Sounds like a good idea to me.? As long as any new or altered features >do not make it into what is labelled as an implementation of the >Stackless API without also being accepted into Stackless itself. Richard, in the long run, people will use whatever solves their problems and creates opportunities. I don't know about you but I'm interested in using PyPy and stackless.py to prototype new concurrency constructs that I want to use.... and in the process, throwing the prototypes out there to see what sticks. Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From cfbolz at gmx.de Tue Sep 27 17:15:43 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 27 Sep 2011 17:15:43 +0200 Subject: [pypy-dev] Stacklets In-Reply-To: <1317135803.74304.YahooMailNeo@web120718.mail.ne1.yahoo.com> References: <4E451B04.6050104@gmail.com> <1316896183.20733.YahooMailNeo@web120701.mail.ne1.yahoo.com> <1317057711.4024.YahooMailNeo@web120712.mail.ne1.yahoo.com> <1317135803.74304.YahooMailNeo@web120718.mail.ne1.yahoo.com> Message-ID: <4E81E89F.4030805@gmx.de> On 09/27/2011 05:03 PM, Andrew Francis wrote: > >Sounds like a good idea to me. As long as any new or altered features > >do not make it into what is labelled as an implementation of the > >Stackless API without also being accepted into Stackless itself. > > Richard, in the long run, people will use whatever solves their problems > and creates opportunities. I don't know about you but I'm interested in > using PyPy and stackless.py to prototype new concurrency constructs that > I want to use.... and in the process, throwing the prototypes out there > to see what sticks. Throwing a prototype out is not the same as giving the prototype a semi-official blessing by packaging it with PyPy in the stackless module. I agree with Richard. Carl Friedrich From andrewfr_ice at yahoo.com Tue Sep 27 20:16:54 2011 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Tue, 27 Sep 2011 11:16:54 -0700 (PDT) Subject: [pypy-dev] Stacklets In-Reply-To: <4E81E89F.4030805@gmx.de> References: <4E451B04.6050104@gmail.com> <1316896183.20733.YahooMailNeo@web120701.mail.ne1.yahoo.com> <1317057711.4024.YahooMailNeo@web120712.mail.ne1.yahoo.com> <1317135803.74304.YahooMailNeo@web120718.mail.ne1.yahoo.com> <4E81E89F.4030805@gmx.de> Message-ID: <1317147414.2547.YahooMailNeo@web120713.mail.ne1.yahoo.com> Hi Carl: ________________________________ From: Carl Friedrich Bolz To: pypy-dev at python.org Sent: Tuesday, September 27, 2011 11:15 AM Subject: Re: [pypy-dev] Stacklets >Throwing a prototype out is not the same as giving the prototype a semi-official blessing by packaging it >with PyPy in the stackless module. I agree with Richard. The only blessing I read out a potential packaging is that the powers-that-be are saying: "it is cool to experiment." However the solution to that is simple: don't package experimental with PyPy but make folks aware it exists. The reason I suggested an experimental branch is two fold. 1) Keeping unendorsed features out of a version of stackless.py that ought to track Stackless Python. 
2) Have a central place to experiment with new features and get feedback. At the risk of this sounding like a rant or being off-topic, it seems to me the big picture that is getting lost is that stackless.py and PyPy makes it easier for individuals to prototype new ideas for Stackless Pythons and probably Python in general. Take join patterns. To date, I have read about join patterns being implemented in Java, Erlang, Scala, ML, Polyphonic C#, and Lua. What gives? Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From naylor.b.david at gmail.com Tue Sep 27 22:43:20 2011 From: naylor.b.david at gmail.com (David Naylor) Date: Tue, 27 Sep 2011 22:43:20 +0200 Subject: [pypy-dev] Pypy jit and (meta) genetic algorithms Message-ID: <201109272243.26092.naylor.b.david@gmail.com> Hi All It occurred to me that with the many options available for jit (such as inlining, function_threshold) there may be some merit to optimising those values. I would expect that the optimised values would be workload specific however if a workload takes days to run then it would be worth optimising. I recall an article that used genetic algorithms to select the best parameters (for gcc) that produces the fastest execution. Is there an equivalent program for pypy? Or if it is easy enough could someone put together such a (shell script) program? I, unfortunitely, have no experience with genetic algorithms nor know how to optimise the jit parameters. Regards, -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: This is a digitally signed message part. URL: From fijall at gmail.com Tue Sep 27 22:55:20 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 27 Sep 2011 17:55:20 -0300 Subject: [pypy-dev] Student project ideas In-Reply-To: <4E7D27E5.6030708@njwilson.net> References: <4E7D27E5.6030708@njwilson.net> Message-ID: Hi Nick. Sorry for the late reply. On Fri, Sep 23, 2011 at 9:44 PM, Nick Wilson wrote: > I'm interested in volunteering my time to mentor a small group of senior > Computer Science students at Oregon State University on a project relevant > to the Python community. PyPy definitely qualifies, and I'm looking for > project ideas. Great :) > > The project would be for their senior capstone class. Groups of 2-4 > students vote on the list of available projects and then work from roughly > mid-November to mid-May (along with all their other coursework) to > complete it. The scope of a projects are similar to what you'd assign a > full-time summer intern. > > I'm relatively new to the Python community and haven't poked around PyPy > much yet. I see the potential PyPy project list [1] in the developer > documentation. That's very helpful, but is anyone able to recommend some > projects from that list that are about the right difficulty and size? It would be great to schedule some sort of IRC discussions and especially ask what people are interested in working on. It also depends vastly on people's knowledge of Python, compilers etc, so it's hard to tell which projects are what size for what people upfront. What's your timezone? When is a good time to have such discussion? > > I have a decent amount of time to work with the students and am looking > for a project I could make significant contributions to as well. 
So I > should be able to work closely with the students and take whatever they > produce and work it into something usable if they are unable to complete > the entire project. That's great. > > Any suggestions? > > Thanks, > > Nick Wilson > > [1] http://doc.pypy.org/en/latest/project-ideas.html > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Cheers, fijal From Ronny.Pfannschmidt at gmx.de Tue Sep 27 23:20:31 2011 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Tue, 27 Sep 2011 23:20:31 +0200 Subject: [pypy-dev] Pypy jit and (meta) genetic algorithms In-Reply-To: <201109272243.26092.naylor.b.david@gmail.com> References: <201109272243.26092.naylor.b.david@gmail.com> Message-ID: <4E823E1F.6050308@gmx.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 09/27/2011 10:43 PM, David Naylor wrote: > Hi All > > It occurred to me that with the many options available for jit (such as > inlining, function_threshold) there may be some merit to optimising those > values. I would expect that the optimised values would be workload specific > however if a workload takes days to run then it would be worth optimising. > > I recall an article that used genetic algorithms to select the best parameters > (for gcc) that produces the fastest execution. Is there an equivalent program > for pypy? Or if it is easy enough could someone put together such a (shell > script) program? > > I, unfortunitely, have no experience with genetic algorithms nor know how to > optimise the jit parameters. pretty simple genomes could already do i suppose (pyevolve should have everything you need the main tricky parts will be choosing what sizes of population and how to test i am very sure that this will be very computation-intensive - -- ronny -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBAgAGBQJOgj4fAAoJEE8uAqxPKbjk1l8QAIc3VWFEYbVhGIWfcx4SggGt tT1G8A2zZBJDZ1JOOAkrkomx4i1PGOY05/cXQweHLoBK95aP2l+uVooRWGwpiavV aC6KkJBQ6WFBRoIDvPGx/HUNcxPN4YQzqGFI9qZ6+IbbDpm+j9BrPAz10sNL/lem EpxP8yHggwhY3mTVm4rLaKZzCY7i/78V7nQPfDRd1lADR2EM2sMmgq+MJAeKEn29 pscGgm4Pm3XTAgS3pwcO2zR7CKjHlhRSNcKqSJ4yFVt1xzSSpACq6RRQVxfQLDz8 ifIF+9gLFerKWhQnsspNCLlq6zxn+4baA6/IH1RFo8QBB9fhWZa8LpXuTxaeA6yW oEDsd6Hc9i8IeKuOxV5EDSKSgFqhxUJsG097FVytS0N7qjYLdTb13J88MOjH367T KFdKM4ekqedHytHS2Y/dTZHGx8axGaNB53RV6yb6EiDrjLNBCtDMOaCcDsa/fN/d 8uf8deCyhULmkxW3MvzyYIcoUDwPEnb0T9pqiXEAgq35IMZ+830O+BR6Lhpm0aya 1la6mfGC1BB1vHAr8b0UCPkqAqFxPEd/dJoQxvQdzsMjmxgtO881N2ERfFyOACwD 4J2i3rP+7ZvQ8VMmJ0F5pqBiLZR8Yz02t8aaoQGvO8OVbV1fQHvmOwm5ESvBuM4m f84oSEQxc1U0GOE+ISk7 =BWpS -----END PGP SIGNATURE----- From Ronny.Pfannschmidt at gmx.de Tue Sep 27 23:27:46 2011 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Tue, 27 Sep 2011 23:27:46 +0200 Subject: [pypy-dev] thoughts on built-in support for coroutine suspension on lowlevel io Message-ID: <4E823FD2.8090802@gmx.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, I?d like to collect thoughts on having built-in primitives for co-routine suspension it would greatly simplify the work for tool-kits like eventlet/gevent, since no longer they would need to monkey-patch all parts of the std-lib instead they could use a simple built-in primitive to integrate the std-lib with co-routine based async io - -- Ronny -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ 
iQIcBAEBAgAGBQJOgj/SAAoJEE8uAqxPKbjkDoIP/1Wy6X0g194fHpqK18PClw/K iD0fcMe3by758TEzqLe3moUwGJilkXqjSNntVZAmBp8QNGdiSWfuq5bxx/pqJlpQ X87kTEOpd5JyNIpQY9/adUGZFzjaqinbUinhmZB8Vm6Kfb7JV95TR0TheborrrDW GEttnPc/CcSv97/AqsvzIJWUDzPvlZ5F66zgKumijBlFv6ja/aRCbsRMozVlEeFx Mshxazuu9F4y6JYZee/UzESzHyByFd3sClWt8CwUgHftZwrdgoyVGM2zvlMVzzdr u7ct44p2PqDw1S634LAgJrghBEUiK3nLiQs8l/3zazti8FKesN8yXGm5Ivaf1lgj DrnTKmMGIvePwTXXLjmwWBWIdY8v2B3Mgztne8iISSjhtY0L5xzji63CGJotr/uK hLa1avbZMhuPZcr/qNEFABYBDEhPO+WdiKUu0nqFecQtGRIw7P6CmxRp28AmBTgd 9eB8k4mczbYLWKP2TYK7Im6mhHDn0ZkuZUl6uCefODIenrJ6u5SISXXVKss2R3kz fOm5eLQ59LVaoClE7+d8K9ckMiO3dAwD5qsfk7+7laKCO3RfSQI2OcxnA4555BUz LxjdOz3r2DJc6yub9eMl07Ox+Kr22XL47I/w2tL2MM+2tNg7pAaiSrzN8bAeDJf3 IODEj++L8hNttf1XSGnE =1hkn -----END PGP SIGNATURE----- From Ronny.Pfannschmidt at gmx.de Tue Sep 27 23:31:25 2011 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Tue, 27 Sep 2011 23:31:25 +0200 Subject: [pypy-dev] [pypy-commit] pypy default: graphviewer - split the dot2plain function into one for local and one for the codespeak cgi In-Reply-To: References: <20110926104036.8CA7C820CE@wyvern.cs.uni-duesseldorf.de> Message-ID: <4E8240AD.5090303@gmx.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 09/26/2011 01:53 PM, Maciej Fijalkowski wrote: > Can't we just kill codespeak's CGI? its a nice fallback tho, we should probably move it to wyvern - -- Ronny -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBAgAGBQJOgkCtAAoJEE8uAqxPKbjkSD8P/2oBYioUh+F3H5BPIB8W+EKA SKD9UZJoZWhpAqynMsuvQj1YB2D7EsBN/4wND9bDPCTake97QXBbROYpg7yaIsz1 GaJGlo51ayu/Tf+bwqB9Z6VsD2bokp4XBMS3iGqyW9xRSSFjZnJZKYssjVKisK4D TSSkF/qRBy8ZHunUL1TTR/El3jtXfMDQSq5odREdKYHGa/TodNk5glxvVPQamGfD jHJ/bQDbpIHPt+Qo7lRnXlxVlUdimvsTZF/5G+8O+5YaR1ZRQ1nQRhsDlCxqJtEh JaaSGLCIZwfYvYYyNncop7Er/Xm9/JvU5uikEBWPInjqwqjqw5kTopV+uj0FpNHy bVRTBKNsQm9A1yMXV8f0hbXvlRV31cSczxxNbdFNsmXw66GQxhZvJITGoLjL8wks yleI85yuw2tTWpHN4n+eINZ+FEugUP5iiNjaRU+eL5RvNJEGJ8zb8Fdz2BztnKTM VcJIU4WHQK4VC3FDFNuhmZIusykG87dtMt9h0ZUu4T1GaHGch575NOCBpC2lDTG+ 5cVICRffiiB4p3igC2C55HhMd9K7jNe6DOU+CC3nEtYyE2bs6B3bEdV9sQ3h8owF 828XlaABooJo3wnh0n7JPLBzdejSip1iawWFR2msVpWyDHLxpQRSXovBjyhRyqsl NnSwi6lQGd/cBezW7Otr =eZa2 -----END PGP SIGNATURE----- From nick at njwilson.net Tue Sep 27 23:47:55 2011 From: nick at njwilson.net (Nick Wilson) Date: Tue, 27 Sep 2011 14:47:55 -0700 Subject: [pypy-dev] Student project ideas In-Reply-To: References: <4E7D27E5.6030708@njwilson.net> Message-ID: <4E82448B.9070907@njwilson.net> On 9/27/11 1:55 PM, Maciej Fijalkowski wrote: > Hi Nick. > > Sorry for the late reply. > > On Fri, Sep 23, 2011 at 9:44 PM, Nick Wilson wrote: >> I'm interested in volunteering my time to mentor a small group of senior >> Computer Science students at Oregon State University on a project relevant >> to the Python community. PyPy definitely qualifies, and I'm looking for >> project ideas. > > Great :) > >> >> The project would be for their senior capstone class. Groups of 2-4 >> students vote on the list of available projects and then work from roughly >> mid-November to mid-May (along with all their other coursework) to >> complete it. The scope of a projects are similar to what you'd assign a >> full-time summer intern. >> >> I'm relatively new to the Python community and haven't poked around PyPy >> much yet. I see the potential PyPy project list [1] in the developer >> documentation. 
That's very helpful, but is anyone able to recommend some >> projects from that list that are about the right difficulty and size? > > It would be great to schedule some sort of IRC discussions and > especially ask what people are interested in working on. It also > depends vastly on people's knowledge of Python, compilers etc, so it's > hard to tell which projects are what size for what people upfront. > What's your timezone? When is a good time to have such discussion? Thanks for the response :) I'm on the west coast of the US (UTC-7), but it looks like I may have missed the opportunity to submit a PyPy-related proposal for the class. I got another Python proposal in there but the deadline for submitting more (yesterday) was closer than I realized. I'll have to talk to the professor to see if there might be more room, but let's hold off on a discussion for now. Nick >> I have a decent amount of time to work with the students and am looking >> for a project I could make significant contributions to as well. So I >> should be able to work closely with the students and take whatever they >> produce and work it into something usable if they are unable to complete >> the entire project. > > That's great. > >> >> Any suggestions? >> >> Thanks, >> >> Nick Wilson >> > > Cheers, > fijal From orangewarrior at gmail.com Wed Sep 28 00:17:37 2011 From: orangewarrior at gmail.com (=?ISO-8859-2?Q?=A3ukasz_Ligowski?=) Date: Wed, 28 Sep 2011 00:17:37 +0200 Subject: [pypy-dev] SpaceOperation Message-ID: Hello, I'd like to know what is the purpose of offset field on pypy.objspace.flow.model.SpaceOperation object. It is labeled as "offset in code string" but I have problem to find the right string. I tried with string that is returned by using string returned by FunctionGraph.getsource() but it didn't bring expected results. What I try to accomplish is to learn whether it is possible to map particular SpaceOperation with right line of original source file. L From amauryfa at gmail.com Wed Sep 28 00:25:12 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 28 Sep 2011 00:25:12 +0200 Subject: [pypy-dev] SpaceOperation In-Reply-To: References: Message-ID: Hi, 2011/9/28 ?ukasz Ligowski > Hello, > > I'd like to know what is the purpose of offset field on > pypy.objspace.flow.model.SpaceOperation object. > It is labeled as "offset in code string" but I have problem to find > the right string. > I tried with string that is returned by using string returned by > FunctionGraph.getsource() but it didn't bring expected results. > > What I try to accomplish is to learn whether it is possible to map > particular SpaceOperation with right line of original source file. > The "code string" is very likely the bytecode of the Python function, i.e f.__code__.co_code. It's not very easy to get back to the original source line. f.__code__.co_firstlineno gives the first line number, and other lines must be computed with the help of f.__code__.co_lnotab. Good luck with this one. You may want to look at the "dis" module to see how it is used. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
From richard.m.tew at gmail.com  Wed Sep 28 01:57:38 2011
From: richard.m.tew at gmail.com (Richard Tew)
Date: Wed, 28 Sep 2011 07:57:38 +0800
Subject: [pypy-dev] Stacklets
In-Reply-To: <1317147414.2547.YahooMailNeo@web120713.mail.ne1.yahoo.com>
References: <4E451B04.6050104@gmail.com>
	<1316896183.20733.YahooMailNeo@web120701.mail.ne1.yahoo.com>
	<1317057711.4024.YahooMailNeo@web120712.mail.ne1.yahoo.com>
	<1317135803.74304.YahooMailNeo@web120718.mail.ne1.yahoo.com>
	<4E81E89F.4030805@gmx.de>
	<1317147414.2547.YahooMailNeo@web120713.mail.ne1.yahoo.com>
Message-ID:

On Wed, Sep 28, 2011 at 2:16 AM, Andrew Francis wrote:
> At the risk of this sounding like a rant or being off-topic, it seems to me
> the big picture that is getting lost is that stackless.py and PyPy make it
> easier for individuals to prototype new ideas for Stackless Pythons and
> probably Python in general. Take join patterns. To date, I have read about
> join patterns being implemented in Java, Erlang, Scala, ML, Polyphonic C#,
> and Lua. What gives?

Can't you do that in another file that doesn't represent itself as an
implementation of Stackless, with no loss to your freedoms? This way,
anyone who would use stackless.py would get the stable set of features
and API that Stackless has had for over five years now, and likely the
ability to switch between the two implementations.

Or am I misunderstanding?

Cheers,
Richard.

From arigo at tunes.org  Wed Sep 28 10:07:47 2011
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 28 Sep 2011 10:07:47 +0200
Subject: [pypy-dev] Pypy jit and (meta) genetic algorithms
In-Reply-To: <201109272243.26092.naylor.b.david@gmail.com>
References: <201109272243.26092.naylor.b.david@gmail.com>
Message-ID:

Hi David,

On Tue, Sep 27, 2011 at 22:43, David Naylor wrote:
> It occurred to me that with the many options available for jit (such as
> inlining, function_threshold) there may be some merit to optimising those
> values.

You are correct that it makes sense to try to optimize more, notably
the "trace_limit" parameter; we do try to do that from time to time,
manually. There is also the "retrace_limit", about which I am not sure
--- in obscure cases I'm sure that increasing it makes sense. On
long-running processes, all other parameters should have a much smaller
impact (threshold, function_threshold, trace_eagerness), are about
reusing memory on large programs with different phases (loop_longevity),
or are for debugging (enable_opts, inlining).

All in all we don't really have more than this one parameter, the
trace_limit, to optimize heuristically (and one is even too much; we're
trying to think of ways to avoid it).

A bientôt,

Armin.

From arigo at tunes.org  Wed Sep 28 10:13:19 2011
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 28 Sep 2011 10:13:19 +0200
Subject: [pypy-dev] thoughts on built-in support for coroutine suspension
	on lowlevel io
In-Reply-To: <4E823FD2.8090802@gmx.de>
References: <4E823FD2.8090802@gmx.de>
Message-ID:

Hi Ronny,

On Tue, Sep 27, 2011 at 23:27, Ronny Pfannschmidt wrote:
> I'd like to collect thoughts on having built-in primitives for
> co-routine suspension
>
> it would greatly simplify the work for tool-kits like eventlet/gevent,
> since no longer they would need to monkey-patch all parts of the std-lib

While it's a worthwhile goal, I don't know if the final benefit is
positive.
I can see two ways to do it inside PyPy: either we tweak all built-in
functions to call OS-provided non-blocking versions internally (which
wouldn't be any less annoying than just patching all stdlib functions
from app-level), or we spawn threads and run the blocking calls there
(but then you have a big scalability issue). Ideally a mix of the two
would be best, but I don't quite see how to work around *all*
scalability issues.

A bientôt,

Armin.

From arigo at tunes.org  Wed Sep 28 10:19:26 2011
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 28 Sep 2011 10:19:26 +0200
Subject: [pypy-dev] SpaceOperation
In-Reply-To:
References:
Message-ID:

Hi,

2011/9/28 Amaury Forgeot d'Arc:
>> What I am trying to accomplish is to learn whether it is possible to map
>> a particular SpaceOperation to the right line of the original source file.
>
> Good luck with this one. You may want to look at the "dis" module
> to see how it is used.

Note also that it only works for annotated flow graphs, before RTyping.
Nowadays we generally look at graphs that have been RTyped, so 'offset'
is lost. If you work hard at carrying 'offset' around, then it's probably
possible to keep it around more often. Basically the 'offset' field is an
old feature that is not used anywhere, and probably not tested either, so
you're on your own there.

A bientôt,

Armin.

From alex.pyattaev at gmail.com  Wed Sep 28 11:06:02 2011
From: alex.pyattaev at gmail.com (Alex Pyattaev)
Date: Wed, 28 Sep 2011 12:06:02 +0300
Subject: [pypy-dev] swig + pypy - object reference counting
Message-ID: <3582156.ExqqykTldp@hunter-laptop.tontut.fi>

Hi!

I have a quite sophisticated program that can be summarized as follows:
1. Save a PyPy object pointer inside a C program. Here I call Py_XINCREF
   so that it does not get deleted.
2. Do some logic, move this reference around the C code.
3. Return a Python tuple via a typemap; here I am probably supposed to
   return a borrowed reference. And in Python 2 it works just fine. BUT.
   In PyPy, for some reason, it causes a segfault with the following message:

"""
Fatal error in cpyext, CPython compatibility layer, calling PyTuple_SetItem
Either report a bug or consider not using this particular extension

RPython traceback:
  File "module_cpyext_api_1.c", line 28965, in PyTuple_SetItem
  File "module_cpyext_pyobject.c", line 1018, in CpyTypedescr_realize
Segmentation fault
"""

If I call Py_XINCREF before returning the object, the crash does not happen
and the memory does not seem to be leaking (at least not noticeably massive
amounts of it). So it seems that PyPy is somewhat incompatible with Python 2
in that matter.

If you want I could send the code example that triggers the bug (it IS a
quite large app, which might have many more bugs apart from this, but still).

Thank you,
Alex.

From amauryfa at gmail.com  Wed Sep 28 11:11:07 2011
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Wed, 28 Sep 2011 11:11:07 +0200
Subject: [pypy-dev] swig + pypy - object reference counting
In-Reply-To: <3582156.ExqqykTldp@hunter-laptop.tontut.fi>
References: <3582156.ExqqykTldp@hunter-laptop.tontut.fi>
Message-ID:

2011/9/28 Alex Pyattaev
> Hi!
> I have a quite sophisticated program that can be summarized as follows:
> 1. Save a PyPy object pointer inside a C program. Here I call Py_XINCREF
>    so that it does not get deleted.
> 2. Do some logic, move this reference around the C code.
> 3. Return a Python tuple via a typemap; here I am probably supposed to
>    return a borrowed reference. And in Python 2 it works just fine. BUT.
> In PyPy, for some reason, it causes a segfault with the following message:
> """
> Fatal error in cpyext, CPython compatibility layer, calling PyTuple_SetItem
> Either report a bug or consider not using this particular extension
>
> RPython traceback:
>   File "module_cpyext_api_1.c", line 28965, in PyTuple_SetItem
>   File "module_cpyext_pyobject.c", line 1018, in CpyTypedescr_realize
> Segmentation fault
> """
> If I call Py_XINCREF before returning the object, the crash does not happen
> and the memory does not seem to be leaking (at least not noticeably massive
> amounts of it). So it seems that PyPy is somewhat incompatible with Python 2
> in that matter.
> If you want I could send the code example that triggers the bug (it IS a
> quite large app, which might have many more bugs apart from this, but still).

Isn't PyTuple_SetItem supposed to "steal" the reference?
In this case you'd better INCREF the object if it is globally shared.

--
Amaury Forgeot d'Arc

From alex.pyattaev at gmail.com  Wed Sep 28 11:41:05 2011
From: alex.pyattaev at gmail.com (Alex Pyattaev)
Date: Wed, 28 Sep 2011 12:41:05 +0300
Subject: [pypy-dev] swig + pypy - object reference counting
In-Reply-To:
References: <3582156.ExqqykTldp@hunter-laptop.tontut.fi>
Message-ID: <6523613.pt0AZzDt1n@hunter-laptop.tontut.fi>

Well, the point is that first I make an owned copy of the object:

%typemap(in) void* {
    Py_XINCREF($input);
    $1 = $input;
}

Here is the storage struct:

struct event {
    int code;
    void* node_tx;
    void* node_rx;
    void* packet;
    double signal_power;
    double noise_power;
    double BER;
    struct event* next;
};

For the C code, the Python objects are just void*, so they are perfectly
safe. Now, when I fetch the objects with my own get function, I have the
following typemap:

%typemap (out) event_t* {
    if ($1 == NULL) {
        $result = Py_None;
        Py_XINCREF($result);
    } else {
        $result = PyTuple_New(7);
        PyTuple_SetItem($result, 0, PyInt_FromLong($1->code));
        PyTuple_SetItem($result, 1, $1->node_tx);
        PyTuple_SetItem($result, 2, $1->node_rx);
        PyTuple_SetItem($result, 3, $1->packet);
#ifdef PYTHON_PYPY
        Py_XINCREF($1->node_tx);
        Py_XINCREF($1->node_rx);
        Py_XINCREF($1->packet);
#endif
        PyTuple_SetItem($result, 4, PyFloat_FromDouble($1->signal_power));
        PyTuple_SetItem($result, 5, PyFloat_FromDouble($1->noise_power));
        PyTuple_SetItem($result, 6, PyFloat_FromDouble($1->BER));
        free($1);
        Py_XINCREF($result);
    }
}

As you can see, in Python I do not need to INCREF the object references,
but in PyPy I do, otherwise it crashes.
In the wrapper function it looks like this:

SWIGINTERN PyObject *_wrap_fetch_event(PyObject *SWIGUNUSEDPARM(self), PyObject *args) {
    PyObject *resultobj = 0;
    event_t *result = 0;

    if (!SWIG_Python_UnpackTuple(args, "fetch_event", 0, 0, 0)) SWIG_fail;
    result = (event_t *)fetch_event();
    {
        if (result == NULL) {
            resultobj = Py_None;
            Py_XINCREF(resultobj);
        } else {
            resultobj = PyTuple_New(7);
#ifdef PYTHON_PYPY
            Py_XINCREF($1->node_tx);
            Py_XINCREF($1->node_rx);
            Py_XINCREF($1->packet);
#endif
            PyTuple_SetItem(resultobj, 0, PyInt_FromLong(result->code));
            PyTuple_SetItem(resultobj, 1, result->node_tx);
            PyTuple_SetItem(resultobj, 2, result->node_rx);
            PyTuple_SetItem(resultobj, 3, result->packet);
            PyTuple_SetItem(resultobj, 4, PyFloat_FromDouble(result->signal_power));
            PyTuple_SetItem(resultobj, 5, PyFloat_FromDouble(result->noise_power));
            PyTuple_SetItem(resultobj, 6, PyFloat_FromDouble(result->BER));
            free(result);
            Py_XINCREF(resultobj);
        }
    }
    return resultobj;
fail:
    return NULL;
}

So essentially the same code works in different ways for Python and PyPy.
IMHO there is a bug somewhere, but I have no time ATM to find it. And yes,
it leaks memory like hell due to the extra ref =(

On Wednesday 28 September 2011 11:11:07 Amaury Forgeot d'Arc wrote:
> 2011/9/28 Alex Pyattaev
> > Hi!
> > I have a quite sophisticated program that can be summarized as follows:
> > 1. Save a PyPy object pointer inside a C program. Here I call Py_XINCREF
> >    so that it does not get deleted.
> > 2. Do some logic, move this reference around the C code.
> > 3. Return a Python tuple via a typemap; here I am probably supposed to
> >    return a borrowed reference. And in Python 2 it works just fine. BUT.
> >    In PyPy, for some reason, it causes a segfault with the following
> >    message:
> > """
> > Fatal error in cpyext, CPython compatibility layer, calling
> > PyTuple_SetItem. Either report a bug or consider not using this
> > particular extension
> >
> > RPython traceback:
> >   File "module_cpyext_api_1.c", line 28965, in PyTuple_SetItem
> >   File "module_cpyext_pyobject.c", line 1018, in CpyTypedescr_realize
> > Segmentation fault
> > """
> > If I call Py_XINCREF before returning the object, the crash does not
> > happen and the memory does not seem to be leaking (at least not
> > noticeably massive amounts of it). So it seems that PyPy is somewhat
> > incompatible with Python 2 in that matter.
> > If you want I could send the code example that triggers the bug (it IS
> > a quite large app, which might have many more bugs apart from this,
> > but still).
>
> Isn't PyTuple_SetItem supposed to "steal" the reference?
> In this case you'd better INCREF the object if it is globally shared.

From amauryfa at gmail.com  Wed Sep 28 11:58:06 2011
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Wed, 28 Sep 2011 11:58:06 +0200
Subject: [pypy-dev] swig + pypy - object reference counting
In-Reply-To: <6523613.pt0AZzDt1n@hunter-laptop.tontut.fi>
References: <3582156.ExqqykTldp@hunter-laptop.tontut.fi>
	<6523613.pt0AZzDt1n@hunter-laptop.tontut.fi>
Message-ID:

2011/9/28 Alex Pyattaev
> Py_XINCREF(resultobj);

What is this call doing? It should not be necessary, since you called
PyTuple_New. This may explain why it does not crash with CPython: the
tuple object always leaks, but happens to keep the necessary reference
to the global object. (PyPy does not use refcounting in tuples, so the
behaviour differs.)

If you fix the tuple leak, you will certainly see that CPython needs the
additional Py_XINCREF($1->node_tx) as well...
--
Amaury Forgeot d'Arc

From fijall at gmail.com  Wed Sep 28 13:34:05 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Wed, 28 Sep 2011 08:34:05 -0300
Subject: [pypy-dev] thoughts on built-in support for coroutine suspension
	on lowlevel io
In-Reply-To:
References: <4E823FD2.8090802@gmx.de>
Message-ID:

On Wed, Sep 28, 2011 at 5:13 AM, Armin Rigo wrote:
> Hi Ronny,
>
> On Tue, Sep 27, 2011 at 23:27, Ronny Pfannschmidt wrote:
>> I'd like to collect thoughts on having built-in primitives for
>> co-routine suspension
>>
>> it would greatly simplify the work for tool-kits like eventlet/gevent,
>> since no longer they would need to monkey-patch all parts of the std-lib
>
> While it's a worthwhile goal, I don't know if the final benefit is
> positive. I can see two ways to do it inside PyPy: either we tweak
> all built-in functions to call OS-provided non-blocking versions
> internally (which wouldn't be any less annoying than just patching all
> stdlib functions from app-level), or we spawn threads and run the
> blocking calls there (but then you have a big scalability issue).
> Ideally a mix of the two would be best, but I don't quite see how to
> work around *all* scalability issues.

And Ctrl-C stops working, etc., etc. Those are not *really* answers.

From andrewfr_ice at yahoo.com  Wed Sep 28 15:52:25 2011
From: andrewfr_ice at yahoo.com (Andrew Francis)
Date: Wed, 28 Sep 2011 06:52:25 -0700 (PDT)
Subject: [pypy-dev] Stacklets
In-Reply-To:
References: <4E451B04.6050104@gmail.com>
	<1316896183.20733.YahooMailNeo@web120701.mail.ne1.yahoo.com>
	<1317057711.4024.YahooMailNeo@web120712.mail.ne1.yahoo.com>
	<1317135803.74304.YahooMailNeo@web120718.mail.ne1.yahoo.com>
	<4E81E89F.4030805@gmx.de>
	<1317147414.2547.YahooMailNeo@web120713.mail.ne1.yahoo.com>
Message-ID: <1317217945.26764.YahooMailNeo@web120719.mail.ne1.yahoo.com>

Hello Richard:

________________________________
From: Richard Tew
To: Andrew Francis
Cc: Carl Friedrich Bolz; "pypy-dev at python.org"
Sent: Tuesday, September 27, 2011 7:57 PM
Subject: Re: [pypy-dev] Stacklets

>Can't you do that in another file that doesn't represent itself as an
>implementation of Stackless with no loss to your freedoms?

Yes Richard, I can give a file another name. If I called the experimental
module
something-that-I-read-in-a-paper-and-decided-to-implement-in-StacklessPy-because-I-do-not-like-hacking-in-C.py,
would that satisfy you? Regardless of name, this other file sitting in an
experimental branch claiming to be a representation of stackless.py would
implement the entire Stackless API and about 60% of the current
stackless.py's code base. More importantly, quirks and bugs in the
overlapping 60% of the code base I would be inclined to fix in the
legitimate stackless.py as well.

"What's in a name? that which we call a rose by any other name would
smell as sweet"

>This way, anyone who would use stackless.py would get the stable set of
>features and API that Stackless has had for over five years now and
>likely the ability to switch between the two implementations.

And what is stopping folks from using a stackless.py that moves in
lockstep with Stackless Python while a stackless_v3.py lies in an
experimental branch? Isn't this sort of like Python 2.x existing while
Python 3.x was being worked on and put out as alphas?

>Or am I misunderstanding?

Yes Richard, you are misunderstanding.
What I am working on (or have in mind) is not Concurrence or a gEvent-like
package, but potential new Stackless Python features. And you know this.
To me the real issue is NIH (not invented here). One of the things that
will complicate Stackless Python's world is that advances courtesy of PyPy
make experimenting with Stackless Python, while bypassing C-based
Stackless Python, increasingly the most attractive evolutionary path.
Rather than quibbling, figure out how to take best advantage of this.

Cheers,
Andrew

From arigo at tunes.org  Wed Sep 28 20:41:49 2011
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 28 Sep 2011 20:41:49 +0200
Subject: [pypy-dev] thoughts on built-in support for coroutine suspension
	on lowlevel io
In-Reply-To:
References: <4E823FD2.8090802@gmx.de>
Message-ID:

Hi Maciek,

On Wed, Sep 28, 2011 at 13:34, Maciej Fijalkowski wrote:
> And Ctrl-C stops working, etc., etc. Those are not *really* answers.

Good point.

Armin

From sontek at gmail.com  Wed Sep 28 23:02:49 2011
From: sontek at gmail.com (John Anderson)
Date: Wed, 28 Sep 2011 17:02:49 -0400
Subject: [pypy-dev] pypy with virtualenv?
Message-ID:

I read that this should just work with the latest versions; here is what
I'm getting:

sontek at beast$ virtualenv --python=pypy ~/code/pypyenv
Running virtualenv with interpreter /usr/bin/pypy
New pypy executable in /home/sontek/code/pypyenv/bin/pypy
ERROR: The executable /home/sontek/code/pypyenv/bin/pypy is not functioning
ERROR: It thinks sys.prefix is u'/usr/lib64/pypy-1.5' (should be
'/home/sontek/code/pypyenv')
ERROR: virtualenv is not compatible with this system or executable

sontek at beast$ pypy --version
Python 2.7.1 (?, May 02 2011, 19:05:35)
[PyPy 1.5.0-alpha0 with GCC 4.6.0]
~
sontek at beast$ virtualenv --version
1.6.4

From fijall at gmail.com  Wed Sep 28 23:41:05 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Wed, 28 Sep 2011 18:41:05 -0300
Subject: [pypy-dev] pypy with virtualenv?
In-Reply-To:
References:
Message-ID:

On Wed, Sep 28, 2011 at 6:02 PM, John Anderson wrote:
> I read that this should just work with the latest versions; here is what
> I'm getting:
> sontek at beast$ virtualenv --python=pypy ~/code/pypyenv
> Running virtualenv with interpreter /usr/bin/pypy
> New pypy executable in /home/sontek/code/pypyenv/bin/pypy
> ERROR: The executable /home/sontek/code/pypyenv/bin/pypy is not functioning
> ERROR: It thinks sys.prefix is u'/usr/lib64/pypy-1.5' (should be
> '/home/sontek/code/pypyenv')
> ERROR: virtualenv is not compatible with this system or executable
> sontek at beast$ pypy --version
> Python 2.7.1 (?, May 02 2011, 19:05:35)
> [PyPy 1.5.0-alpha0 with GCC 4.6.0]
> ~
> sontek at beast$ virtualenv --version
> 1.6.4

1.5 is an old release of PyPy. Try 1.6?

From sontek at gmail.com  Thu Sep 29 00:09:55 2011
From: sontek at gmail.com (John Anderson)
Date: Wed, 28 Sep 2011 18:09:55 -0400
Subject: [pypy-dev] pypy with virtualenv?
In-Reply-To:
References:
Message-ID:

Fedora 15 doesn't have 1.6 out yet. I tried to use the binary release,
but it seems to be compiled against different libssl/libcrypto's than
what I have on my system...
I symlinked them over, but it fails to create the virtualenv still:

sontek at beast$ virtualenv -p /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy /home/sontek/code/pypyenv2
Running virtualenv with interpreter /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy
/home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy: /usr/lib64/libssl.so.0.9.8: no version information available (required by /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy)
/home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy: /usr/lib64/libcrypto.so.0.9.8: no version information available (required by /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy)

and then the directory isn't created.

On Wed, Sep 28, 2011 at 5:41 PM, Maciej Fijalkowski wrote:
> On Wed, Sep 28, 2011 at 6:02 PM, John Anderson wrote:
> > I read that this should just work with the latest versions; here is what
> > I'm getting:
> > sontek at beast$ virtualenv --python=pypy ~/code/pypyenv
> > Running virtualenv with interpreter /usr/bin/pypy
> > New pypy executable in /home/sontek/code/pypyenv/bin/pypy
> > ERROR: The executable /home/sontek/code/pypyenv/bin/pypy is not functioning
> > ERROR: It thinks sys.prefix is u'/usr/lib64/pypy-1.5' (should be
> > '/home/sontek/code/pypyenv')
> > ERROR: virtualenv is not compatible with this system or executable
> > sontek at beast$ pypy --version
> > Python 2.7.1 (?, May 02 2011, 19:05:35)
> > [PyPy 1.5.0-alpha0 with GCC 4.6.0]
> > ~
> > sontek at beast$ virtualenv --version
> > 1.6.4
>
> 1.5 is an old release of PyPy. Try 1.6?

From sontek at gmail.com  Thu Sep 29 03:16:36 2011
From: sontek at gmail.com (John Anderson)
Date: Wed, 28 Sep 2011 21:16:36 -0400
Subject: [pypy-dev] pypy with virtualenv?
In-Reply-To:
References:
Message-ID:

sontek at beast$ virtualenv -p /usr/bin/pypy ~/code/eqpypy
Running virtualenv with interpreter /usr/bin/pypy
New pypy executable in /home/sontek/code/eqpypy/bin/pypy
ERROR: The executable /home/sontek/code/eqpypy/bin/pypy is not functioning
ERROR: It thinks sys.prefix is u'/usr/lib64/pypy-1.6' (should be
'/home/sontek/code/eqpypy')
ERROR: virtualenv is not compatible with this system or executable

sontek at beast$ pypy --version
Python 2.7.1 (?, Sep 12 2011, 23:40:42)
[PyPy 1.6.0 with GCC 4.6.0]

sontek at beast$ virtualenv --version
1.6.4

pypy 1.6 is from http://koji.fedoraproject.org/koji/buildinfo?buildID=263267

On Wed, Sep 28, 2011 at 6:09 PM, John Anderson wrote:
> Fedora 15 doesn't have 1.6 out yet. I tried to use the binary release,
> but it seems to be compiled against different libssl/libcrypto's than
> what I have on my system... I symlinked them over, but it fails to
> create the virtualenv still:
>
> sontek at beast$ virtualenv -p /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy /home/sontek/code/pypyenv2
> Running virtualenv with interpreter /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy
> /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy: /usr/lib64/libssl.so.0.9.8: no version information available (required by /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy)
> /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy: /usr/lib64/libcrypto.so.0.9.8: no version information available (required by /home/sontek/Downloads/pypy16/pypy-1.6/bin/pypy)
>
> and then the directory isn't created.
> On Wed, Sep 28, 2011 at 5:41 PM, Maciej Fijalkowski wrote:
>> On Wed, Sep 28, 2011 at 6:02 PM, John Anderson wrote:
>> > I read that this should just work with the latest versions; here is what
>> > I'm getting:
>> > sontek at beast$ virtualenv --python=pypy ~/code/pypyenv
>> > Running virtualenv with interpreter /usr/bin/pypy
>> > New pypy executable in /home/sontek/code/pypyenv/bin/pypy
>> > ERROR: The executable /home/sontek/code/pypyenv/bin/pypy is not functioning
>> > ERROR: It thinks sys.prefix is u'/usr/lib64/pypy-1.5' (should be
>> > '/home/sontek/code/pypyenv')
>> > ERROR: virtualenv is not compatible with this system or executable
>> > sontek at beast$ pypy --version
>> > Python 2.7.1 (?, May 02 2011, 19:05:35)
>> > [PyPy 1.5.0-alpha0 with GCC 4.6.0]
>> > ~
>> > sontek at beast$ virtualenv --version
>> > 1.6.4
>>
>> 1.5 is an old release of PyPy. Try 1.6?

From cfbolz at gmx.de  Thu Sep 29 10:33:58 2011
From: cfbolz at gmx.de (Carl Friedrich Bolz)
Date: Thu, 29 Sep 2011 10:33:58 +0200
Subject: [pypy-dev] [pypy-commit] pypy default: Merge a branch that makes
	space.isinstance(w_obj, ) do a fastpath
In-Reply-To: <20110929024041.0A859820CE@wyvern.cs.uni-duesseldorf.de>
References: <20110929024041.0A859820CE@wyvern.cs.uni-duesseldorf.de>
Message-ID: <4E842D76.7000301@gmx.de>

Hi Maciek,

The objspace part of this merge really needs tests! You should write
tests that the .interplevel_cls attribute is set, and that calling
isinstance_w actually goes through the fast path.

Cheers,

Carl Friedrich

On 09/29/2011 04:40 AM, fijal wrote:
> Author: Maciej Fijalkowski
> Branch:
> Changeset: r47667:ffbf1bcf89d6
> Date: 2011-09-28 23:39 -0300
> http://bitbucket.org/pypy/pypy/changeset/ffbf1bcf89d6/
>
> Log: Merge a branch that makes space.isinstance(w_obj, <constant>) do a
>      fastpath with isinstance(w_obj, <constant>)
>
> diff --git a/pypy/annotation/policy.py b/pypy/annotation/policy.py
> --- a/pypy/annotation/policy.py
> +++ b/pypy/annotation/policy.py
> @@ -1,6 +1,6 @@
>  # base annotation policy for specialization
>  from pypy.annotation.specialize import default_specialize as default
> -from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype
> +from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype, specialize_arg_or_var
>  from pypy.annotation.specialize import memo, specialize_call_location
>  # for some reason, model must be imported first,
>  # or we create a cycle.
> @@ -73,6 +73,7 @@
>      default_specialize = staticmethod(default)
>      specialize__memo = staticmethod(memo)
>      specialize__arg = staticmethod(specialize_argvalue) # specialize:arg(N)
> +    specialize__arg_or_var = staticmethod(specialize_arg_or_var)
>      specialize__argtype = staticmethod(specialize_argtype) # specialize:argtype(N)
>      specialize__arglistitemtype = staticmethod(specialize_arglistitemtype)
>      specialize__call_location = staticmethod(specialize_call_location)
> diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py
> --- a/pypy/annotation/specialize.py
> +++ b/pypy/annotation/specialize.py
> @@ -353,6 +353,16 @@
>      key = tuple(key)
>      return maybe_star_args(funcdesc, key, args_s)
>
> +def specialize_arg_or_var(funcdesc, args_s, *argindices):
> +    for argno in argindices:
> +        if not args_s[argno].is_constant():
> +            break
> +    else:
> +        # all constant
> +        return specialize_argvalue(funcdesc, args_s, *argindices)
> +    # some not constant
> +    return maybe_star_args(funcdesc, None, args_s)
> +
>  def specialize_argtype(funcdesc, args_s, *argindices):
>      key = tuple([args_s[i].knowntype for i in argindices])
>      for cls in key:
> diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py
> --- a/pypy/annotation/test/test_annrpython.py
> +++ b/pypy/annotation/test/test_annrpython.py
> @@ -1194,6 +1194,20 @@
>          assert len(executedesc._cache[(0, 'star', 2)].startblock.inputargs) == 4
>          assert len(executedesc._cache[(1, 'star', 3)].startblock.inputargs) == 5
>
> +    def test_specialize_arg_or_var(self):
> +        def f(a):
> +            return 1
> +        f._annspecialcase_ = 'specialize:arg_or_var(0)'
> +
> +        def fn(a):
> +            return f(3) + f(a)
> +
> +        a = self.RPythonAnnotator()
> +        a.build_types(fn, [int])
> +        executedesc = a.bookkeeper.getdesc(f)
> +        assert sorted(executedesc._cache.keys()) == [None, (3,)]
> +        # we got two different special
> +
>      def test_specialize_call_location(self):
>          def g(a):
>              return a
> diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py
> --- a/pypy/objspace/descroperation.py
> +++ b/pypy/objspace/descroperation.py
> @@ -6,6 +6,7 @@
>  from pypy.interpreter.typedef import default_identity_hash
>  from pypy.tool.sourcetools import compile2, func_with_new_name
>  from pypy.module.__builtin__.interp_classobj import W_InstanceObject
> +from pypy.rlib.objectmodel import specialize
>
>  def object_getattribute(space):
>      "Utility that returns the app-level descriptor object.__getattribute__."
> @@ -507,6 +508,7 @@
>      def issubtype(space, w_sub, w_type):
>          return space._type_issubtype(w_sub, w_type)
>
> +    @specialize.arg_or_var(2)
>      def isinstance(space, w_inst, w_type):
>          return space.wrap(space._type_isinstance(w_inst, w_type))
>
> diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py
> --- a/pypy/objspace/std/objspace.py
> +++ b/pypy/objspace/std/objspace.py
> @@ -7,7 +7,7 @@
>  from pypy.objspace.std import (builtinshortcut, stdtypedef, frame, model,
>      transparent, callmethod, proxyobject)
>  from pypy.objspace.descroperation import DescrOperation, raiseattrerror
> -from pypy.rlib.objectmodel import instantiate, r_dict, specialize
> +from pypy.rlib.objectmodel import instantiate, r_dict, specialize, is_constant
>  from pypy.rlib.debug import make_sure_not_resized
>  from pypy.rlib.rarithmetic import base_int, widen
>  from pypy.rlib.objectmodel import we_are_translated
> @@ -83,6 +83,12 @@
>          if self.config.objspace.std.withtproxy:
>              transparent.setup(self)
>
> +        for type, classes in self.model.typeorder.iteritems():
> +            if len(classes) == 3:
> +                # W_Root, AnyXxx and actual object
> +                self.gettypefor(type).interplevel_cls = classes[0][0]
> +
> +
>      def get_builtin_types(self):
>          return self.builtin_types
>
> @@ -567,10 +573,19 @@
>              return self.wrap(w_sub.issubtype(w_type))
>          raise OperationError(self.w_TypeError, self.wrap("need type objects"))
>
> +    @specialize.arg_or_var(2)
>      def _type_isinstance(self, w_inst, w_type):
> -        if isinstance(w_type, W_TypeObject):
> -            return self.type(w_inst).issubtype(w_type)
> -        raise OperationError(self.w_TypeError, self.wrap("need type object"))
> +        if not isinstance(w_type, W_TypeObject):
> +            raise OperationError(self.w_TypeError,
> +                                 self.wrap("need type object"))
> +        if is_constant(w_type):
> +            cls = w_type.interplevel_cls
> +            if cls is not None:
> +                assert w_inst is not None
> +                if isinstance(w_inst, cls):
> +                    return True
> +        return self.type(w_inst).issubtype(w_type)
>
> +    @specialize.arg_or_var(2)
>      def isinstance_w(space, w_inst, w_type):
>          return space._type_isinstance(w_inst, w_type)
> diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py
> --- a/pypy/objspace/std/typeobject.py
> +++ b/pypy/objspace/std/typeobject.py
> @@ -115,6 +115,9 @@
>      # of the __new__ is an instance of the type
>      w_bltin_new = None
>
> +    interplevel_cls = None # not None for prebuilt instances of
> +                           # interpreter-level types
> +
>      @dont_look_inside
>      def __init__(w_self, space, name, bases_w, dict_w,
>                   overridetypedef=None):
> diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py
> --- a/pypy/rlib/objectmodel.py
> +++ b/pypy/rlib/objectmodel.py
> @@ -46,6 +46,17 @@
>
>          return decorated_func
>
> +    def arg_or_var(self, *args):
> +        """ Same as arg, but additionally allow for a 'variable' annotation,
> +        that would simply be a situation where designated arg is not
> +        a constant
> +        """
> +        def decorated_func(func):
> +            func._annspecialcase_ = 'specialize:arg_or_var' + self._wrap(args)
> +            return func
> +
> +        return decorated_func
> +
>      def argtype(self, *args):
>          """ Specialize function based on types of arguments on given positions.
> @@ -165,6 +176,22 @@
>  def keepalive_until_here(*values):
>      pass
>
> +def is_constant(thing):
> +    return True
> +
> +class Entry(ExtRegistryEntry):
> +    _about_ = is_constant
> +
> +    def compute_result_annotation(self, s_arg):
> +        from pypy.annotation import model
> +        r = model.SomeBool()
> +        r.const = s_arg.is_constant()
> +        return r
> +
> +    def specialize_call(self, hop):
> +        from pypy.rpython.lltypesystem import lltype
> +        return hop.inputconst(lltype.Bool, hop.s_result.const)
> +
>  # ____________________________________________________________
>
>  class FREED_OBJECT(object):
> diff --git a/pypy/rlib/test/test_objectmodel.py b/pypy/rlib/test/test_objectmodel.py
> --- a/pypy/rlib/test/test_objectmodel.py
> +++ b/pypy/rlib/test/test_objectmodel.py
> @@ -339,6 +339,19 @@
>          res = self.interpret(f, [42])
>          assert res == 84
>
> +    def test_isconstant(self):
> +        from pypy.rlib.objectmodel import is_constant, specialize
> +
> +        @specialize.arg_or_var(0)
> +        def f(arg):
> +            if is_constant(arg):
> +                return 1
> +            return 10
> +
> +        def fn(arg):
> +            return f(arg) + f(3)
> +
> +        assert self.interpret(fn, [15]) == 11
>
>  class TestLLtype(BaseTestObjectModel, LLRtypeMixin):
>
> @@ -451,5 +464,4 @@
>          if llop.opname == 'malloc_varsize':
>              break
>      assert llop.args[2] is graph.startblock.inputargs[0]
> -

From arigo at tunes.org  Thu Sep 29 10:51:12 2011
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 29 Sep 2011 10:51:12 +0200
Subject: [pypy-dev] [pypy-commit] pypy default: Merge a branch that makes
	space.isinstance(w_obj, ) do a fastpath
In-Reply-To: <4E842D76.7000301@gmx.de>
References: <20110929024041.0A859820CE@wyvern.cs.uni-duesseldorf.de>
	<4E842D76.7000301@gmx.de>
Message-ID:

Hi Maciek,

On Thu, Sep 29, 2011 at 10:33, Carl Friedrich Bolz wrote:
>> +class Entry(ExtRegistryEntry):
>> +    _about_ = is_constant
>> +
>> +    def compute_result_annotation(self, s_arg):
>> +        from pypy.annotation import model
>> +        r = model.SomeBool()
>> +        r.const = s_arg.is_constant()
>> +        return r

This is wrong. We tried at some point to have is_constant() but failed.
The issue is that when calling is_constant(x), even if 'x' turns out not
to be constant in the end, it's possible that its initial annotation says
that it is a constant. In this case, crash, because you return
"SomeBool(const=True)" and later "SomeBool(const=False)", which is not a
superset of the previous value.

Try with a test like this:

def f(n):
    is_constant(n)
def g(n):
    f(5)
    f(n)

A bientôt,

Armin.

From fijall at gmail.com  Thu Sep 29 12:14:02 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 29 Sep 2011 07:14:02 -0300
Subject: [pypy-dev] [pypy-commit] pypy default: Merge a branch that makes
	space.isinstance(w_obj, ) do a fastpath
In-Reply-To:
References: <20110929024041.0A859820CE@wyvern.cs.uni-duesseldorf.de>
	<4E842D76.7000301@gmx.de>
Message-ID:

On Thu, Sep 29, 2011 at 5:51 AM, Armin Rigo wrote:
> Hi Maciek,
>
> On Thu, Sep 29, 2011 at 10:33, Carl Friedrich Bolz wrote:
>>> +class Entry(ExtRegistryEntry):
>>> +    _about_ = is_constant
>>> +
>>> +    def compute_result_annotation(self, s_arg):
>>> +        from pypy.annotation import model
>>> +        r = model.SomeBool()
>>> +        r.const = s_arg.is_constant()
>>> +        return r
>
> This is wrong. We tried at some point to have is_constant() but
> failed.
> The issue is that when calling is_constant(x), even if 'x'
> turns out not to be constant in the end, it's possible that its
> initial annotation says that it is a constant. In this case, crash,
> because you return "SomeBool(const=True)" and later
> "SomeBool(const=False)", which is not a superset of the previous
> value.
>
> Try with a test like this:
>
> def f(n):
>     is_constant(n)
> def g(n):
>     f(5)
>     f(n)
>
> A bientôt,
>
> Armin.

It works (maybe it should be documented so) because you have a
specialization on the constantness of the function arg.

From lac at openend.se  Thu Sep 29 19:17:45 2011
From: lac at openend.se (Laura Creighton)
Date: Thu, 29 Sep 2011 19:17:45 +0200
Subject: [pypy-dev] I was talking with Russel Winder at PyCON UK.
Message-ID: <201109291717.p8THHjdn024997@theraft.openend.se>

He says, currently, PyPy's threading does not scale properly. More below.
Maybe we want to use his benchmark?

Laura

------- Forwarded Message

Return-Path: russel at russel.org.uk
Delivery-Date: Thu Sep 29 13:53:50 2011
Subject: PyPy and multiprocessing
From: Russel Winder
To: Laura Creighton

Laura,

I have a collection of various versions (using various features of
various languages) of the embarrassingly parallel problem of calculating
Pi using quadrature. It is a micro-benchmark and so suffers from all the
issues they suffer from (especially on the JVM). The code is a Bazaar
branch: http://www.russel.org.uk/Bazaar/Pi_Quadrature.

I am writing as there appears to be an interesting feature using PyPy
and the multiprocessing package in pool mode.

This is a twin-Xeon machine so has 8 cores; a 32-thread run should only
go as fast as an 8-thread run. Scaling should be linear in the number of
cores.
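A pool-based quadrature benchmark of this kind has roughly the following
shape (an illustrative sketch, not the contents of the Bazaar branch; all
names here are made up):

from multiprocessing import Pool
from time import time

def processSlice(args):
    # One slice of the midpoint-rule quadrature for
    # pi = integral of 4/(1+x^2) over [0,1].
    sliceId, sliceSize, delta = args
    total = 0.0
    for i in xrange(1 + sliceId * sliceSize, (sliceId + 1) * sliceSize + 1):
        x = (i - 0.5) * delta
        total += 1.0 / (1.0 + x * x)
    return total

def execute(processCount, n=10000000):
    delta = 1.0 / n
    sliceSize = n // processCount
    startTime = time()
    pool = Pool(processes=processCount)   # process creation is part of the timing
    results = pool.map(processSlice,
                       [(i, sliceSize, delta) for i in xrange(processCount)])
    pool.close()
    pool.join()
    pi = 4.0 * delta * sum(results)
    print "pi =", pi, "elapse =", time() - startTime, "processes =", processCount

if __name__ == '__main__':
    for processCount in (1, 2, 8, 32):
        execute(processCount)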
Using CPython 2.7, I get:

|> python2.7 pi_python2_multiprocessing_pool.py
==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 10000000
==== Python Multiprocessing Pool elapse = 3.5378549099
==== Python Multiprocessing Pool process count = 1
==== Python Multiprocessing Pool processor count = 8

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 10000000
==== Python Multiprocessing Pool elapse = 1.97133994102
==== Python Multiprocessing Pool process count = 2
==== Python Multiprocessing Pool processor count = 8

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 10000000
==== Python Multiprocessing Pool elapse = 0.515691041946
==== Python Multiprocessing Pool process count = 8
==== Python Multiprocessing Pool processor count = 8

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 10000000
==== Python Multiprocessing Pool elapse = 0.521239995956
==== Python Multiprocessing Pool process count = 32
==== Python Multiprocessing Pool processor count = 8

Using PyPy 1.6 I get:

|> pypy pi_python2_multiprocessing_pool.py
==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 10000000
==== Python Multiprocessing Pool elapse = 0.249331951141
==== Python Multiprocessing Pool process count = 1
==== Python Multiprocessing Pool processor count = 8

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 10000000
==== Python Multiprocessing Pool elapse = 0.104065895081
==== Python Multiprocessing Pool process count = 2
==== Python Multiprocessing Pool processor count = 8

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 10000000
==== Python Multiprocessing Pool elapse = 0.0764398574829
==== Python Multiprocessing Pool process count = 8
==== Python Multiprocessing Pool processor count = 8

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 10000000
==== Python Multiprocessing Pool elapse = 0.124751091003
==== Python Multiprocessing Pool process count = 32
==== Python Multiprocessing Pool processor count = 8

There is no statistical significance to these one-off numbers, but I am
fairly confident that there are no large variations should a proper
collection of data be taken.

The point here is that whereas CPython shows the expected scaling, PyPy
does not give the expected scaling for larger numbers of cores. Indeed,
having more threads than cores is detrimental to PyPy but not to CPython.

Hopefully we will soon be seeing PyPy be Python 3.2 compliant!

--
Russel.
============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder at ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel at russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

------- End of Forwarded Message

From romain.py at gmail.com  Thu Sep 29 19:28:13 2011
From: romain.py at gmail.com (Romain Guillebert)
Date: Thu, 29 Sep 2011 19:28:13 +0200
Subject: [pypy-dev] I was talking with Russel Winder at PyCON UK.
In-Reply-To: <201109291717.p8THHjdn024997@theraft.openend.se>
References: <201109291717.p8THHjdn024997@theraft.openend.se>
Message-ID: <20110929172813.GA27965@hardshooter>

Hi

David Beazley noticed that PyPy's GIL isn't very good compared to
CPython's:

https://twitter.com/#!/dabeaz/status/118889721358327808
https://twitter.com/#!/dabeaz/status/118888789136523264
https://twitter.com/#!/dabeaz/status/118864260175634433

IMO it's the same issue.

Cheers
Romain

On Thu, Sep 29, 2011 at 07:17:45PM +0200, Laura Creighton wrote:
>
> He says, currently, PyPy's threading does not scale properly. More below.
> Maybe we want to use his benchmark?
>
> Laura
>
> ------- Forwarded Message
>
> Return-Path: russel at russel.org.uk
> Delivery-Date: Thu Sep 29 13:53:50 2011
> Subject: PyPy and multiprocessing
> From: Russel Winder
> To: Laura Creighton
>
> Laura,
>
> I have a collection of various versions (using various features of
> various languages) of the embarrassingly parallel problem of calculating
> Pi using quadrature. It is a micro-benchmark and so suffers from all the
> issues they suffer from (especially on the JVM). The code is a Bazaar
> branch: http://www.russel.org.uk/Bazaar/Pi_Quadrature.
>
> I am writing as there appears to be an interesting feature using PyPy
> and the multiprocessing package in pool mode.
>
> This is a twin-Xeon machine so has 8 cores; a 32-thread run should only
> go as fast as an 8-thread run. Scaling should be linear in the number of
> cores.
>
> Using CPython 2.7, I get:
>
> |> python2.7 pi_python2_multiprocessing_pool.py
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 10000000
> ==== Python Multiprocessing Pool elapse = 3.5378549099
> ==== Python Multiprocessing Pool process count = 1
> ==== Python Multiprocessing Pool processor count = 8
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 10000000
> ==== Python Multiprocessing Pool elapse = 1.97133994102
> ==== Python Multiprocessing Pool process count = 2
> ==== Python Multiprocessing Pool processor count = 8
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 10000000
> ==== Python Multiprocessing Pool elapse = 0.515691041946
> ==== Python Multiprocessing Pool process count = 8
> ==== Python Multiprocessing Pool processor count = 8
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 10000000
> ==== Python Multiprocessing Pool elapse = 0.521239995956
> ==== Python Multiprocessing Pool process count = 32
> ==== Python Multiprocessing Pool processor count = 8
>
> Using PyPy 1.6 I get:
>
> |> pypy pi_python2_multiprocessing_pool.py
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 10000000
> ==== Python Multiprocessing Pool elapse = 0.249331951141
> ==== Python Multiprocessing Pool process count = 1
> ==== Python Multiprocessing Pool processor count = 8
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 10000000
> ==== Python Multiprocessing Pool elapse = 0.104065895081
> ==== Python Multiprocessing Pool process count = 2
> ==== Python Multiprocessing Pool processor count = 8
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 10000000
> ==== Python Multiprocessing Pool elapse = 0.0764398574829
> ==== Python Multiprocessing Pool process count = 8
> ==== Python Multiprocessing Pool processor count = 8
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 10000000
> ==== Python Multiprocessing Pool elapse = 0.124751091003
> ==== Python Multiprocessing Pool process count = 32
> ==== Python Multiprocessing Pool processor count = 8
>
> There is no statistical significance to these one-off numbers, but I am
> fairly confident that there are no large variations should a proper
> collection of data be taken.
>
> The point here is that whereas CPython shows the expected scaling, PyPy
> does not give the expected scaling for larger numbers of cores. Indeed,
> having more threads than cores is detrimental to PyPy but not to CPython.
>
> Hopefully we will soon be seeing PyPy be Python 3.2 compliant!
>
> --
> Russel.
> ============================================================================
> Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder at ekiga.net
> 41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel at russel.org.uk
> London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
>
> ------- End of Forwarded Message

From josh.ayers at gmail.com  Fri Sep 30 05:24:17 2011
From: josh.ayers at gmail.com (Josh Ayers)
Date: Thu, 29 Sep 2011 20:24:17 -0700
Subject: [pypy-dev] I was talking with Russel Winder at PyCON UK.
In-Reply-To: <20110929172813.GA27965@hardshooter>
References: <201109291717.p8THHjdn024997@theraft.openend.se>
	<20110929172813.GA27965@hardshooter>
Message-ID:

I think the slowdown you're seeing is due to the time it takes to create
new processes. This seems to be quite a bit slower in PyPy than in
CPython. However, once the process pool is created and has been used
once, the execution time vs. process count behaves as expected.

I attached a modified version of your code to demonstrate the behavior.
It calculates Pi once without using multiprocessing, as a baseline for
comparison. Then a multiprocessing.Pool object is created with 8
processes, and the same pool is used multiple times. On my machine,
creating the 8 new processes takes 0.60 seconds in PyPy and only 0.20
seconds in CPython.

The pool is first used two times in a row with only a single process
active. For some reason, the second run is a factor of 2 faster than the
first. Is this just warmup of the JIT, or some other behavior?

Next, it repeats using 2, 4, and 8 processes. This was run on a 4-core
machine, and as expected there was an improvement in run time with 2 and
4 processes. Using 8 processes gives approximately the same run time as 4.

The output is pasted below. I also pasted the modified code here in case
the attached file doesn't come through: http://pastie.org/2614751. For
reference, I'm running PyPy 1.6 on Windows 7.
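Should that pastie link ever rot: the essence of the modification is to
create one Pool up front and reuse it, timing each map separately. This is
a sketch of that pattern, not the exact attached script; processSlice and
timedRun are illustrative names:

from multiprocessing import Pool
from time import time

def processSlice(args):
    # Same quadrature slice as in Russel's benchmark description.
    sliceId, sliceSize, delta = args
    total = 0.0
    for i in xrange(1 + sliceId * sliceSize, (sliceId + 1) * sliceSize + 1):
        x = (i - 0.5) * delta
        total += 1.0 / (1.0 + x * x)
    return total

def timedRun(pool, processCount, n=100000000):
    # The pool already exists, so process creation is excluded from the timing.
    delta = 1.0 / n
    sliceSize = n // processCount
    start = time()
    results = pool.map(processSlice,
                       [(i, sliceSize, delta) for i in xrange(processCount)])
    print "pi =", 4.0 * delta * sum(results),
    print "elapse =", time() - start, "processes =", processCount

if __name__ == '__main__':
    start = time()
    pool = Pool(processes=8)      # pay the process-creation cost once
    print "pool creation time:", time() - start
    for processCount in (1, 1, 2, 4, 8):   # the first run also warms up the JIT
        timedRun(pool, processCount)
    pool.close()
    pool.join()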
Sincerely,
Josh

C:\Users\jayers\Documents\SVN\randomStuff\pypy_comparisons>pypy-c pi_python2_multiprocessing_pool.py

3.14159265359
non parallel execution time: 1.52899980545
pool creation time: 0.559000015259
==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 100000000
==== Python Multiprocessing Pool elapse = 3.1930000782
==== Python Multiprocessing Pool process count = 1
==== Python Multiprocessing Pool processor count = 4

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 100000000
==== Python Multiprocessing Pool elapse = 1.53900003433
==== Python Multiprocessing Pool process count = 1
==== Python Multiprocessing Pool processor count = 4

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 100000000
==== Python Multiprocessing Pool elapse = 0.802000045776
==== Python Multiprocessing Pool process count = 2
==== Python Multiprocessing Pool processor count = 4

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 100000000
==== Python Multiprocessing Pool elapse = 0.441999912262
==== Python Multiprocessing Pool process count = 4
==== Python Multiprocessing Pool processor count = 4

==== Python Multiprocessing Pool pi = 3.14159265359
==== Python Multiprocessing Pool iteration count = 100000000
==== Python Multiprocessing Pool elapse = 0.457000017166
==== Python Multiprocessing Pool process count = 8
==== Python Multiprocessing Pool processor count = 4
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pi_python2_multiprocessing_pool.py
Type: application/octet-stream
Size: 1604 bytes
Desc: not available
URL:
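Since the attachment itself was scrubbed from the archive, here is a sketch
of a harness along the lines Josh describes: one baseline run without
multiprocessing, a single Pool of 8 workers created once, and the same pool
then reused while the work is split into 1, 1, 2, 4, and 8 chunks. The
function names and constants below are reconstructed assumptions, not his
actual file.

    import time
    import multiprocessing

    def processSlice(args):
        # One slice of the midpoint-rule sum for 4 * integral(0..1) dx/(1+x^2).
        start, count, delta = args
        total = 0.0
        for i in xrange(start, start + count):
            x = (i - 0.5) * delta
            total += 1.0 / (1.0 + x * x)
        return total

    def runPool(pool, n, chunks):
        # Reuse an already-created pool; only the chunk count varies.
        delta = 1.0 / n
        sliceSize = n // chunks  # assumes chunks divides n evenly
        start = time.time()
        slices = [(1 + i * sliceSize, sliceSize, delta) for i in xrange(chunks)]
        pi = 4.0 * delta * sum(pool.map(processSlice, slices))
        print('pi = %r  chunks = %d  elapse = %r' % (pi, chunks, time.time() - start))

    if __name__ == '__main__':
        n = 10 ** 7
        t = time.time()
        pi = 4.0 * (1.0 / n) * processSlice((1, n, 1.0 / n))
        print('non parallel: pi = %r  time = %r' % (pi, time.time() - t))
        t = time.time()
        pool = multiprocessing.Pool(processes=8)  # paid once, up front
        print('pool creation time: %r' % (time.time() - t,))
        for chunks in (1, 1, 2, 4, 8):  # the repeated 1 exposes warm-up
            runPool(pool, n, chunks)
        pool.close()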
From sontek at gmail.com  Fri Sep 30 05:39:21 2011
From: sontek at gmail.com (John Anderson)
Date: Thu, 29 Sep 2011 23:39:21 -0400
Subject: [pypy-dev] Realtime communication and webserver to use with pypy?
Message-ID:

In CPython I deploy using gevent or gunicorn for high performance and low
memory usage, with the ability to be non-blocking for realtime
communication using socket.io.

If I want to move to using PyPy... what are my options for this type of
setup?  Is there a non-blocking webserver in Python that works well with
PyPy?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josh.ayers at gmail.com  Fri Sep 30 06:10:14 2011
From: josh.ayers at gmail.com (Josh Ayers)
Date: Thu, 29 Sep 2011 21:10:14 -0700
Subject: [pypy-dev] I was talking with Russel Winder at PyCON UK.
In-Reply-To:
References: <201109291717.p8THHjdn024997@theraft.openend.se>
	<20110929172813.GA27965@hardshooter>
Message-ID:

Here's a further modified version. In this case, when using the pool for
the first time, it uses an n of 10 instead of 100 million. Even with such
a low precision, the first execution takes 1.3 seconds. It seems some
significant warm-up time is needed the first time a multiprocessing.Pool
object is used.

See the attachment or this link for the code: http://pastie.org/2614925

On Thu, Sep 29, 2011 at 8:24 PM, Josh Ayers wrote:

> I think the slowdown you're seeing is due to the time it takes to create
> new processes. This seems to be quite a bit slower in PyPy than in
> CPython. However, once the process pool is created and has been used
> once, the execution time vs. process count behaves as expected.
>
> I attached a modified version of your code to demonstrate the behavior.
> It calculates Pi once without using multiprocessing as a baseline for
> comparison. Then a multiprocessing.Pool object is created with 8
> processes, and the same pool is used multiple times. On my machine,
> creating the 8 new processes takes 0.60 seconds in PyPy and only 0.20
> seconds in CPython.
>
> The pool is first used two times in a row with only a single process
> active. For some reason, the second run is a factor of 2 faster than the
> first. Is this just warm-up of the JIT, or some other behavior?
>
> Next, it repeats using 2, 4, and 8 processes. This was run on a 4-core
> machine, and as expected there was an improvement in run time with 2 and
> 4 processes. Using 8 processes gives approximately the same run time as 4.
>
> The output is pasted below. I also pasted the modified code here in case
> the attached file doesn't come through: http://pastie.org/2614751. For
> reference, I'm running PyPy 1.6 on Windows 7.
>
> Sincerely,
> Josh
>
> C:\Users\jayers\Documents\SVN\randomStuff\pypy_comparisons>pypy-c
> pi_python2_multiprocessing_pool.py
>
> 3.14159265359
> non parallel execution time: 1.52899980545
> pool creation time: 0.559000015259
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 100000000
> ==== Python Multiprocessing Pool elapse = 3.1930000782
> ==== Python Multiprocessing Pool process count = 1
> ==== Python Multiprocessing Pool processor count = 4
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 100000000
> ==== Python Multiprocessing Pool elapse = 1.53900003433
> ==== Python Multiprocessing Pool process count = 1
> ==== Python Multiprocessing Pool processor count = 4
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 100000000
> ==== Python Multiprocessing Pool elapse = 0.802000045776
> ==== Python Multiprocessing Pool process count = 2
> ==== Python Multiprocessing Pool processor count = 4
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 100000000
> ==== Python Multiprocessing Pool elapse = 0.441999912262
> ==== Python Multiprocessing Pool process count = 4
> ==== Python Multiprocessing Pool processor count = 4
>
> ==== Python Multiprocessing Pool pi = 3.14159265359
> ==== Python Multiprocessing Pool iteration count = 100000000
> ==== Python Multiprocessing Pool elapse = 0.457000017166
> ==== Python Multiprocessing Pool process count = 8
> ==== Python Multiprocessing Pool processor count = 4
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pi_python2_multiprocessing_pool_v2.py
Type: application/octet-stream
Size: 1577 bytes
Desc: not available
URL:

From william.leslie.ttg at gmail.com  Fri Sep 30 07:42:29 2011
From: william.leslie.ttg at gmail.com (William ML Leslie)
Date: Fri, 30 Sep 2011 15:29:09 +1000
Subject: [pypy-dev] Realtime communication and webserver to use with pypy?
In-Reply-To:
References:
Message-ID:

On 30 September 2011 13:39, John Anderson wrote:
> In CPython I deploy using gevent or gunicorn for high performance and low
> memory usage, with the ability to be non-blocking for realtime
> communication using socket.io.
> If I want to move to using PyPy... what are my options for this type of
> setup?  Is there a non-blocking webserver in Python that works well with
> PyPy?
Twisted has worked well for some time.  Gevent is written in Cython,
which is currently not supported.  Not sure about Gunicorn; it seems
to be able to sit on top of several different workers.

--
William Leslie

From dirkjan at ochtman.nl  Fri Sep 30 09:28:04 2011
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Fri, 30 Sep 2011 09:28:04 +0200
Subject: [pypy-dev] Realtime communication and webserver to use with pypy?
In-Reply-To:
References:
Message-ID:

On Fri, Sep 30, 2011 at 07:42, William ML Leslie wrote:
> Twisted has worked well for some time.  Gevent is written in Cython,
> which is currently not supported.  Not sure about Gunicorn; it seems
> to be able to sit on top of several different workers.

Looks like gunicorn will work:

https://bitbucket.org/pypy/compatibility/wiki/gunicorn

(I remember reading about someone who had actually done this and was
quite satisfied with the setup, but I don't remember where.)

Cheers,

Dirkjan

From fijall at gmail.com  Fri Sep 30 13:09:54 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 30 Sep 2011 08:09:54 -0300
Subject: [pypy-dev] Realtime communication and webserver to use with pypy?
In-Reply-To:
References:
Message-ID:

On Fri, Sep 30, 2011 at 4:28 AM, Dirkjan Ochtman wrote:
> On Fri, Sep 30, 2011 at 07:42, William ML Leslie
> wrote:
>> Twisted has worked well for some time.  Gevent is written in Cython,
>> which is currently not supported.  Not sure about Gunicorn; it seems
>> to be able to sit on top of several different workers.
>
> Looks like gunicorn will work:
>
> https://bitbucket.org/pypy/compatibility/wiki/gunicorn
>
> (I remember reading about someone who had actually done this and was
> quite satisfied with the setup, but I don't remember where.)
>
> Cheers,
>
> Dirkjan
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

Are you talking about this by chance?

http://www.reddit.com/r/Python/comments/kt8bx/ask_rpython_whats_your_experience_with_pypy_and/c2n5pog

From dirkjan at ochtman.nl  Fri Sep 30 13:56:45 2011
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Fri, 30 Sep 2011 13:56:45 +0200
Subject: [pypy-dev] Realtime communication and webserver to use with pypy?
In-Reply-To:
References:
Message-ID:

On Fri, Sep 30, 2011 at 13:09, Maciej Fijalkowski wrote:
>> Looks like gunicorn will work:
>>
>> https://bitbucket.org/pypy/compatibility/wiki/gunicorn
>>
>> (I remember reading about someone who had actually done this and was
>> quite satisfied with the setup, but I don't remember where.)
>
> Are you talking about this by chance?
>
> http://www.reddit.com/r/Python/comments/kt8bx/ask_rpython_whats_your_experience_with_pypy_and/c2n5pog

Right on the money!

Cheers,

Dirkjan
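For readers who want a concrete starting point for the Twisted option
discussed above, a minimal sketch of a non-blocking twisted.web server
follows. The resource class, reply text, and port 8080 are illustrative
assumptions, not details from the thread; Resource, Site, and
reactor.listenTCP are the standard Twisted API of this era.

    from twisted.internet import reactor
    from twisted.web.resource import Resource
    from twisted.web.server import Site

    class Hello(Resource):
        isLeaf = True  # answer every path from this single resource

        def render_GET(self, request):
            # Runs inside the reactor's event loop, so it must not block.
            return "hello from twisted.web\n"

    if __name__ == '__main__':
        reactor.listenTCP(8080, Site(Hello()))  # non-blocking listen
        reactor.run()  # single-threaded event loop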
From arigo at tunes.org  Fri Sep 30 15:20:52 2011
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 30 Sep 2011 15:20:52 +0200
Subject: [pypy-dev] I was talking with Russel Winder at PyCON UK.
In-Reply-To:
References: <201109291717.p8THHjdn024997@theraft.openend.se>
	<20110929172813.GA27965@hardshooter>
Message-ID:

Hi,

Is the conclusion just the fact that, again, the JIT's warm-up time is
important, which we know very well?  Or is there some other effect
that cannot be explained just by that?  (BTW, Laura, it's unrelated to
multithreading if it's based on the multiprocessing module.)


A bientôt,

Armin.

From fijall at gmail.com  Fri Sep 30 15:25:32 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 30 Sep 2011 10:25:32 -0300
Subject: [pypy-dev] I was talking with Russel Winder at PyCON UK.
In-Reply-To:
References: <201109291717.p8THHjdn024997@theraft.openend.se>
	<20110929172813.GA27965@hardshooter>
Message-ID:

On Fri, Sep 30, 2011 at 10:20 AM, Armin Rigo wrote:
> Hi,
>
> Is the conclusion just the fact that, again, the JIT's warm-up time is
> important, which we know very well?  Or is there some other effect
> that cannot be explained just by that?  (BTW, Laura, it's unrelated to
> multithreading if it's based on the multiprocessing module.)
>

I guess what people didn't realize is that if you spawn a new process,
you have to warm up the JIT *again* for each of the workers (at least
in the worst case scenario).

>
> A bientôt,
>
> Armin.
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From galfyo.pundee at googlemail.com  Fri Sep 30 16:09:50 2011
From: galfyo.pundee at googlemail.com (Galfy Pundee)
Date: Fri, 30 Sep 2011 16:09:50 +0200
Subject: [pypy-dev] Compile to executable program running in sandboxed
	environment?
Message-ID:

Hi Pypy gurus,

Is it possible to create an executable package, using PyPy, that runs
the Python code in a sandboxed environment?

Also, when I run in a sandboxed environment, is it possible to code
the logic of the external process that handles the policy in Python?

Thanks in advance for your answers.

Regards,
  Gal

From josh.ayers at gmail.com  Fri Sep 30 17:54:06 2011
From: josh.ayers at gmail.com (Josh Ayers)
Date: Fri, 30 Sep 2011 08:54:06 -0700
Subject: [pypy-dev] I was talking with Russel Winder at PyCON UK.
In-Reply-To:
References: <201109291717.p8THHjdn024997@theraft.openend.se>
	<20110929172813.GA27965@hardshooter>
Message-ID:

I don't think it's due to the warm-up of the JIT.  Here's a simpler
example.
import time
import multiprocessing

def do_nothing():
    pass

if __name__ == '__main__':
    time1 = time.time()
    do_nothing()
    time2 = time.time()
    pool = multiprocessing.Pool(processes=1)
    time3 = time.time()
    result = pool.apply_async(do_nothing)
    result.get()
    time4 = time.time()
    result = pool.apply_async(do_nothing)
    result.get()
    time5 = time.time()
    pool.close()
    print('not multiprocessing: ' + str(time2 - time1))
    print('create pool: ' + str(time3 - time2))
    print('run first time: ' + str(time4 - time3))
    print('run second time: ' + str(time5 - time4))

Here are the results in PyPy.  The first call to do_nothing() using
multiprocessing.Pool takes 0.57 seconds.

not multiprocessing: 0.0
create pool: 0.30999994278
run first time: 0.575999975204
run second time: 0.00100016593933

Here are the results in CPython.  It also appears to have some overhead
the first time the pool is used, but it's less severe than PyPy.

not multiprocessing: 0.0
create pool: 0.00500011444092
run first time: 0.134000062943
run second time: 0.0

On Fri, Sep 30, 2011 at 6:25 AM, Maciej Fijalkowski wrote:

> On Fri, Sep 30, 2011 at 10:20 AM, Armin Rigo wrote:
> > Hi,
> >
> > Is the conclusion just the fact that, again, the JIT's warm-up time is
> > important, which we know very well?  Or is there some other effect
> > that cannot be explained just by that?  (BTW, Laura, it's unrelated to
> > multithreading if it's based on the multiprocessing module.)
> >
>
> I guess what people didn't realize is that if you spawn a new process,
> you have to warm up the JIT *again* for each of the workers (at least
> in the worst case scenario).
>
> >
> > A bientôt,
> >
> > Armin.
> > _______________________________________________
> > pypy-dev mailing list
> > pypy-dev at python.org
> > http://mail.python.org/mailman/listinfo/pypy-dev
> >
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arigo at tunes.org  Fri Sep 30 18:52:35 2011
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 30 Sep 2011 18:52:35 +0200
Subject: [pypy-dev] I was talking with Russel Winder at PyCON UK.
In-Reply-To:
References: <201109291717.p8THHjdn024997@theraft.openend.se>
	<20110929172813.GA27965@hardshooter>
Message-ID:

Hi,

On Fri, Sep 30, 2011 at 17:54, Josh Ayers wrote:
> I don't think it's due to the warm-up of the JIT.  Here's a simpler
> example.

I think that your example is perfectly compatible with the JIT warm-up
time theory.  This is kind of obvious by comparing the CPython and the
PyPy timings:

- something that takes less than 1ms on CPython is going to be just as
fast on PyPy (or at least, less than 2ms) because there is no JITting
at all involved;

- something that runs several seconds *in the same process* in CPython
is likely to be faster on PyPy;

- everything shorter is at risk: I'd say that 0.1 to 0.5 seconds in
CPython looks like the worst case for PyPy, because it needs to run the
JIT but the process terminates before it's really useful.  That's just
what your example shows.

On non-Windows I would recommend to prime the JIT by calling the
function a few times, so that a fork() can inherit already-JITted code.
Of course it doesn't work on Windows.  You're left with the usual
remark: PyPy's JIT does have a long warm-up time for every process that
is started anew, so make sure to use the multiprocessing module
carefully (e.g. don't stop and restart processes all the time).


A bientôt,

Armin.
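A sketch of the priming trick Armin describes, assuming a CPU-bound helper
function; the function body, the warm-up counts, and the pool size are
invented for illustration. The point is only the ordering: warm up the JIT
in the parent first, then create the Pool, so that each fork()ed worker
(on non-Windows) starts with the machine code already compiled.

    import multiprocessing

    def work(n):
        # CPU-bound loop: the kind of code the JIT turns into machine code.
        total = 0.0
        for i in xrange(1, n + 1):
            total += 1.0 / (i * i)
        return total

    if __name__ == '__main__':
        # 1. Prime the JIT in the parent: enough calls to trigger compilation.
        for _ in xrange(2000):
            work(1000)
        # 2. Only then fork the workers: on Unix each child inherits the
        #    parent's address space, including the freshly JITted code.
        pool = multiprocessing.Pool(processes=4)
        print(sum(pool.map(work, [10 ** 6] * 4)))
        pool.close()
        pool.join()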
From arigo at tunes.org  Fri Sep 30 18:58:26 2011
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 30 Sep 2011 18:58:26 +0200
Subject: [pypy-dev] Compile to executable program running in sandboxed
	environment?
In-Reply-To:
References:
Message-ID:

Hi Galfy,

On Fri, Sep 30, 2011 at 16:09, Galfy Pundee wrote:
> Is it possible to create an executable package, using PyPy, that runs
> the Python code in a sandboxed environment?

Unclear what you really mean, but I can answer "yes" to both
interpretations of your question: if you want a "pypy-sandbox" binary
that runs a given .py source file, then yes, it's the way it's supposed
to work; or, if you want to include the .py source file inside the
executable itself, then "yes" as well, although it's more complicated
(and useless in my opinion, even in the context of sandboxing).

> Also, when I run in a sandboxed environment, is it possible to code
> the logic of the external process that handles the policy in Python?

Yes, that's how the external process demo is coded: in Python, to run
with the normal (i.e. non-sandboxed) CPython or PyPy interpreter.


A bientôt,

Armin.

From arigo at tunes.org  Fri Sep 30 19:02:32 2011
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 30 Sep 2011 19:02:32 +0200
Subject: [pypy-dev] pypy with virtualenv?
In-Reply-To:
References:
Message-ID:

Hi,

On Thu, Sep 29, 2011 at 03:16, John Anderson wrote:
> sontek at beast$ pypy --version
> Python 2.7.1 (?, Sep 12 2011, 23:40:42)
> [PyPy 1.6.0 with GCC 4.6.0]

Try running just "pypy" and see if it prints the following warning lines:

debug: WARNING: Library path not found, using compiled-in sys.path.
debug: WARNING: 'sys.prefix' will not be set.
debug: WARNING: Make sure the pypy binary is kept inside its tree of files.
debug: WARNING: It is ok to create a symlink to it from somewhere else.

If it does, well, follow the recommendation.  If it doesn't, then
likely, something is again broken in the interaction of virtualenv and
pypy.  Antonio, do you know if virtualenv 1.6.4 is supposed to work
with pypy?


A bientôt,

Armin.

From sontek at gmail.com  Fri Sep 30 19:26:04 2011
From: sontek at gmail.com (John Anderson)
Date: Fri, 30 Sep 2011 13:26:04 -0400
Subject: [pypy-dev] pypy with virtualenv?
In-Reply-To:
References:
Message-ID:

> Try running just "pypy" and see if it prints the following warning lines:
>
> debug: WARNING: Library path not found, using compiled-in sys.path.
> debug: WARNING: 'sys.prefix' will not be set.
> debug: WARNING: Make sure the pypy binary is kept inside its tree of files.
> debug: WARNING: It is ok to create a symlink to it from somewhere else.
>
> If it does, well, follow the recommendation.  If it doesn't, then
> likely, something is again broken in the interaction of virtualenv and
> pypy.  Antonio, do you know if virtualenv 1.6.4 is supposed to work
> with pypy?
>
>
> A bientôt,
>
> Armin.

It seems to be a bad package from Fedora, because running from hg fixed
all my problems.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fijall at gmail.com  Fri Sep 30 22:37:38 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 30 Sep 2011 17:37:38 -0300
Subject: [pypy-dev] PyPy packaging help needed
Message-ID:

Hi.

Does anyone feel like helping with PyPy's PPA? The packages are
super-outdated and I think it's one of the most requested PyPy
features.
Cheers,
fijal

From alex.gaynor at gmail.com  Fri Sep 30 22:39:42 2011
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Fri, 30 Sep 2011 16:39:42 -0400
Subject: [pypy-dev] PyPy packaging help needed
In-Reply-To:
References:
Message-ID:

I'm CCing Andrew Godwin on this, because I know he created a .deb for PyPy.

Alex

On Fri, Sep 30, 2011 at 4:37 PM, Maciej Fijalkowski wrote:

> Hi.
>
> Does anyone feel like helping with PyPy's PPA? The packages are
> super-outdated and I think it's one of the most requested PyPy
> features.
>
> Cheers,
> fijal
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

--
"I disapprove of what you say, but I will defend to the death your right
to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From randall.leeds at gmail.com  Fri Sep 30 23:00:41 2011
From: randall.leeds at gmail.com (Randall Leeds)
Date: Fri, 30 Sep 2011 14:00:41 -0700
Subject: [pypy-dev] PyPy packaging help needed
In-Reply-To:
References:
Message-ID:

I've done a little bit of deb packaging before and would love a reason
to be more involved in PyPy. I'd be happy to get stuck into this.

On Fri, Sep 30, 2011 at 13:39, Alex Gaynor wrote:

> I'm CCing Andrew Godwin on this, because I know he created a .deb for
> PyPy.
>
> Alex
>
> On Fri, Sep 30, 2011 at 4:37 PM, Maciej Fijalkowski wrote:
>
>> Hi.
>>
>> Does anyone feel like helping with PyPy's PPA? The packages are
>> super-outdated and I think it's one of the most requested PyPy
>> features.
>>
>> Cheers,
>> fijal
>> _______________________________________________
>> pypy-dev mailing list
>> pypy-dev at python.org
>> http://mail.python.org/mailman/listinfo/pypy-dev
>
> --
> "I disapprove of what you say, but I will defend to the death your right
> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> "The people's good is the highest law." -- Cicero
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dmalcolm at redhat.com  Fri Sep 30 23:00:26 2011
From: dmalcolm at redhat.com (David Malcolm)
Date: Fri, 30 Sep 2011 17:00:26 -0400
Subject: [pypy-dev] pypy with virtualenv?
In-Reply-To:
References:
Message-ID: <1317416430.23847.51.camel@surprise>

On Wed, 2011-09-28 at 18:09 -0400, John Anderson wrote:
> Fedora 15 doesn't have 1.6 out yet.  I tried to use the binary release

FWIW, pypy 1.6 for Fedora 15 can now be found in the
fedora-updates-testing repository:
https://admin.fedoraproject.org/updates/FEDORA-2011-13521

(though IIRC you said on #pypy that you ran into issues with that build
also)

From fijall at gmail.com  Fri Sep 30 23:03:44 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 30 Sep 2011 18:03:44 -0300
Subject: [pypy-dev] PyPy packaging help needed
In-Reply-To:
References:
Message-ID:

On Fri, Sep 30, 2011 at 6:00 PM, Randall Leeds wrote:
> I've done a little bit of deb packaging before and would love a reason
> to be more involved in PyPy.
> I'd be happy to get stuck into this.

I guess what we have now is in

http://codespeak.net/svn/pypy/build/ubuntu/trunk/debian/

It's grossly outdated though.
Drop in on #pypy on IRC if you need more info.

> On Fri, Sep 30, 2011 at 13:39, Alex Gaynor wrote:
>> I'm CCing Andrew Godwin on this, because I know he created a .deb for
>> PyPy.
>> Alex
>>
>> On Fri, Sep 30, 2011 at 4:37 PM, Maciej Fijalkowski
>> wrote:
>>>
>>> Hi.
>>>
>>> Does anyone feel like helping with PyPy's PPA? The packages are
>>> super-outdated and I think it's one of the most requested PyPy
>>> features.
>>>
>>> Cheers,
>>> fijal
>>> _______________________________________________
>>> pypy-dev mailing list
>>> pypy-dev at python.org
>>> http://mail.python.org/mailman/listinfo/pypy-dev
>>
>> --
>> "I disapprove of what you say, but I will defend to the death your right
>> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
>> "The people's good is the highest law." -- Cicero
>>
>> _______________________________________________
>> pypy-dev mailing list
>> pypy-dev at python.org
>> http://mail.python.org/mailman/listinfo/pypy-dev
>>
>

From sontek at gmail.com  Fri Sep 30 23:14:11 2011
From: sontek at gmail.com (John Anderson)
Date: Fri, 30 Sep 2011 17:14:11 -0400
Subject: [pypy-dev] pypy with virtualenv?
In-Reply-To: <1317416430.23847.51.camel@surprise>
References: <1317416430.23847.51.camel@surprise>
Message-ID:

Yeah, that build doesn't work with virtualenv.

On Fri, Sep 30, 2011 at 5:00 PM, David Malcolm wrote:

> On Wed, 2011-09-28 at 18:09 -0400, John Anderson wrote:
> > Fedora 15 doesn't have 1.6 out yet.  I tried to use the binary release
>
> FWIW, pypy 1.6 for Fedora 15 can now be found in the
> fedora-updates-testing repository:
> https://admin.fedoraproject.org/updates/FEDORA-2011-13521
>
> (though IIRC you said on #pypy that you ran into issues with that build
> also)
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hakan at debian.org  Fri Sep 30 23:15:52 2011
From: hakan at debian.org (Hakan Ardo)
Date: Fri, 30 Sep 2011 23:15:52 +0200
Subject: [pypy-dev] [pypy-commit] pypy default: Hack to ensure that
	ll_arraycopy gets a proper effectinfo.write_descrs_arrays
Message-ID:

Hi,
is there a better way to fix this? The same kind of issue might arise
elsewhere?

> Author: Hakan Ardo
> Branch:
> Changeset: r47722:bf3f65e2b1c2
> Date: 2011-09-30 19:48 +0200
> http://bitbucket.org/pypy/pypy/changeset/bf3f65e2b1c2/
>
> Log:  Hack to ensure that ll_arraycopy gets a proper
>       effectinfo.write_descrs_arrays
>
> diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py
> --- a/pypy/rlib/rgc.py
> +++ b/pypy/rlib/rgc.py
> @@ -143,6 +143,10 @@
>      from pypy.rpython.lltypesystem.lloperation import llop
>      from pypy.rlib.objectmodel import keepalive_until_here
>
> +    # XXX: Hack to ensure that we get a proper effectinfo.write_descrs_arrays
> +    if length > 0:
> +        dest[dest_start] = source[source_start]
> +
>      # supports non-overlapping copies only
>      if not we_are_translated():
>          if source == dest:

--
Håkan Ardö

From fijall at gmail.com  Fri Sep 30 23:17:43 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 30 Sep 2011 18:17:43 -0300
Subject: [pypy-dev] [pypy-commit] pypy default: Hack to ensure that
	ll_arraycopy gets a proper effectinfo.write_descrs_arrays
In-Reply-To:
References:
Message-ID:

On Fri, Sep 30, 2011 at 6:15 PM, Hakan Ardo wrote:
> Hi,
> is there a better way to fix this? The same kind of issue might arise
> elsewhere?

Make sure that raw_memcopy has the correct effect on the analyzer?
>
>> Author: Hakan Ardo
>> Branch:
>> Changeset: r47722:bf3f65e2b1c2
>> Date: 2011-09-30 19:48 +0200
>> http://bitbucket.org/pypy/pypy/changeset/bf3f65e2b1c2/
>>
>> Log:  Hack to ensure that ll_arraycopy gets a proper
>>       effectinfo.write_descrs_arrays
>>
>> diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py
>> --- a/pypy/rlib/rgc.py
>> +++ b/pypy/rlib/rgc.py
>> @@ -143,6 +143,10 @@
>>      from pypy.rpython.lltypesystem.lloperation import llop
>>      from pypy.rlib.objectmodel import keepalive_until_here
>>
>> +    # XXX: Hack to ensure that we get a proper effectinfo.write_descrs_arrays
>> +    if length > 0:
>> +        dest[dest_start] = source[source_start]
>> +
>>      # supports non-overlapping copies only
>>      if not we_are_translated():
>>          if source == dest:
>
> --
> Håkan Ardö
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
>

From dmalcolm at redhat.com  Fri Sep 30 23:20:39 2011
From: dmalcolm at redhat.com (David Malcolm)
Date: Fri, 30 Sep 2011 17:20:39 -0400
Subject: [pypy-dev] pypy with virtualenv?
In-Reply-To:
References: <1317416430.23847.51.camel@surprise>
Message-ID: <1317417639.23847.53.camel@surprise>

On Fri, 2011-09-30 at 17:14 -0400, John Anderson wrote:
> Yeah, that build doesn't work with virtualenv.

Thanks - I've filed a bug about this in Fedora's downstream bug tracker
here:
https://bugzilla.redhat.com/show_bug.cgi?id=742641

[snip]