From chengc0425 at gmail.com Thu Apr 2 23:54:39 2015
From: chengc0425 at gmail.com (Cheng Chen)
Date: Thu, 2 Apr 2015 14:54:39 -0700
Subject: [pypy-dev] pypy performance ramp up
Message-ID:

Hey pypy-devs,

Thanks a lot for your amazing work on pypy!

I have a naive question regarding pypy's performance. I was doing some
performance testing with pypy. At first I was using a sample size of
9000; our script was able to process this batch in 2.8 sec, which gives
us a rate of 3200/s, which we thought was somewhat slow. Then we changed
to a sample size of 1 million; the speed was just as slow initially but
soon ramped up and stabilized at around 40k/s, which is more than enough
for us.

I am just curious what causes the performance ramp up. Thanks a lot!

Cheng

From matti.picus at gmail.com Fri Apr 3 09:43:32 2015
From: matti.picus at gmail.com (Matti Picus)
Date: Fri, 03 Apr 2015 10:43:32 +0300
Subject: [pypy-dev] pypy performance ramp up
In-Reply-To:
References:
Message-ID: <551E44A4.5010402@gmail.com>

On 03/04/15 00:54, Cheng Chen wrote:
> Hey pypy-devs,
>
> Thanks a lot for your amazing work on pypy!
>
> I have a naive question regarding pypy's performance. I was doing some
> performance testing with pypy. At first I was using a sample size of
> 9000; our script was able to process this batch in 2.8 sec, which
> gives us a rate of 3200/s, which we thought was somewhat slow. Then we
> changed to a sample size of 1 million; the speed was just as slow
> initially but soon ramped up and stabilized at around 40k/s, which is
> more than enough for us.
>
> I am just curious what causes the performance ramp up. Thanks a lot!
>
> Cheng
>
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

Hi. Thanks for trying it.
Our 'tracing JIT' traces your function and only after a while decides that the location is "hot" enough to optimize. We call this the JIT warmup; see http://pypy.readthedocs.org/en/latest/faq.html#how-fast-is-pypy. See also http://morepypy.blogspot.co.il/2009/03/applying-tracing-jit-to-interpreter.html for a backgrounder from 2009.

Matti

From arigo at tunes.org Fri Apr 3 17:31:44 2015
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 3 Apr 2015 17:31:44 +0200
Subject: [pypy-dev] Allegro64 buildslave disappeared
Message-ID:

Hi all,

The allegro64 buildslave seems to have definitely disappeared last week, with no warning to most of us. It used to run the nightly 64-bit Linux translations and tests. Do we have another Linux machine available?

Armin

From cherian.rosh at gmail.com Mon Apr 6 05:44:53 2015
From: cherian.rosh at gmail.com (Roshan Cherian)
Date: Sun, 5 Apr 2015 20:44:53 -0700
Subject: [pypy-dev] gcc recommendation for pypy 2.5.0
Message-ID:

Hi Team,

I am building pypy 2.5.0 from source on Linux. I built it on OEL 6.3, which has gcc 4.4.7, and it built fine; however, on OEL 5.6 with gcc 4.1.2 it fails at the creation of the libpypy-c.so shared library with the following error:

/usr/bin/ld: implement.o: relocation R_X86_64_PC32 against `pypy_asm_stackwalk' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status

I rechecked that gcc asserts -fPIC for position-independent code, yet it still fails, and I don't have this problem with gcc 4.4.7. Could I know the recommended gcc version for building pypy 2.5.0?

I know the recommended approach is to use an older version of pypy to build the newer version; however, we have a unique situation in our build environment where pypy is built using Python 2.7.5. I know this may not be relevant in this context but thought I should say so.
Thanks in advance,
-Roshan

From arigo at tunes.org Mon Apr 6 09:38:10 2015
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 6 Apr 2015 09:38:10 +0200
Subject: [pypy-dev] gcc recommendation for pypy 2.5.0
In-Reply-To:
References:
Message-ID:

Hi Roshan,

On 6 April 2015 at 05:44, Roshan Cherian wrote:
> Could I know the
> recommended gcc version for building pypy 2.5.0?

Any version that is not seriously outdated would do. I guess 4.1 is too old. We don't specifically try out every single old version of gcc, so I can't be more precise than that. If for some reason you absolutely have to use gcc 4.1, you can probably translate with the more portable "--gcrootfinder=shadowstack" option at the cost of a few percent of final performance.

> I am sorry the recommended approach is to use an older version of pypy to
> build the newer version

That's an unrelated question. You can use either PyPy or CPython to translate PyPy; it should only change the speed at which it is done.

A bientôt,

Armin.

From fijall at gmail.com Mon Apr 6 14:48:12 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 6 Apr 2015 14:48:12 +0200
Subject: [pypy-dev] FAQ entry
Message-ID:

Maybe we should add something along those lines to the FAQ:

http://emptysqua.re/blog/pypy-garbage-collection-and-a-deadlock/

From yorik.sar at gmail.com Mon Apr 6 15:28:45 2015
From: yorik.sar at gmail.com (Yuriy Taraday)
Date: Mon, 06 Apr 2015 13:28:45 +0000
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

On Mon, Apr 6, 2015 at 3:48 PM Maciej Fijalkowski wrote:
> Maybe we should add something along those lines to the FAQ:
>
> http://emptysqua.re/blog/pypy-garbage-collection-and-a-deadlock/

Can't it be fixed? I see 2 possible solutions here:

1.
We can detect deadlock (any lock contention during GC round will become deadlock) and bail out of GC round early to try again later or skip freeing this object (and mark all objects referenced by it). 2. We can defer all calls to __del__ until after GC round and run them in a separate Python thread which would allow them to yield processing to let other threads free some lock. I know that either of these solutions would change semantics a bit, but it shouldn't affect user's code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Apr 6 15:44:16 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 6 Apr 2015 15:44:16 +0200 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: On Mon, Apr 6, 2015 at 3:28 PM, Yuriy Taraday wrote: > On Mon, Apr 6, 2015 at 3:48 PM Maciej Fijalkowski wrote: >> >> maybe we should add something along those lines to FAQ >> >> http://emptysqua.re/blog/pypy-garbage-collection-and-a-deadlock/ > > > Can't it be fixed? I see 2 possible solutions here: > 1. We can detect deadlock (any lock contention during GC round will become > deadlock) and bail out of GC round early to try again later or skip freeing > this object (and mark all objects referenced by it). > 2. We can defer all calls to __del__ until after GC round and run them in a > separate Python thread which would allow them to yield processing to let > other threads free some lock. > > I know that either of these solutions would change semantics a bit, but it > shouldn't affect user's code. as you can see from the blog post *any* change does affect user code. Note that finalizers are not called during GC, but at some later "safer" stage, where all the internals are in a sane state. Not sure how you would do 1, really (a global flag on locks?), 2 is something Java does. General FAQ entry should say "avoid __del__ doing any substantial job at any cost" I would think, whacking at locks is like lipstick on a pig. 
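The failure mode behind the linked blog post can be sketched in plain Python. This is an illustration only, with made-up names, not code from the post: it uses a non-blocking acquire so the contention is observed instead of actually deadlocking, and it relies on CPython's reference counting to make the finalization point deterministic (on PyPy the finalizer would fire at some later, unpredictable GC point).

```python
import threading

log = []
shared_lock = threading.Lock()

class Resource:
    """Hypothetical object whose finalizer touches a shared lock."""
    def __del__(self):
        # A blocking shared_lock.acquire() here is the deadlock from the
        # blog post: the finalizer can fire while a frame in the same
        # thread already holds the lock.  Probe non-blockingly instead.
        if shared_lock.acquire(False):
            shared_lock.release()
            log.append("lock was free")
        else:
            log.append("lock was held: a blocking acquire would deadlock")

r = Resource()
with shared_lock:
    del r  # last reference dies while the lock is held

print(log[0])
```

A blocking `acquire()` in the same spot is exactly the scenario under discussion: the finalizer waits forever on a lock that its own thread is holding.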
From yorik.sar at gmail.com Mon Apr 6 16:02:16 2015 From: yorik.sar at gmail.com (Yuriy Taraday) Date: Mon, 06 Apr 2015 14:02:16 +0000 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: On Mon, Apr 6, 2015 at 4:44 PM Maciej Fijalkowski wrote: > On Mon, Apr 6, 2015 at 3:28 PM, Yuriy Taraday wrote: > > On Mon, Apr 6, 2015 at 3:48 PM Maciej Fijalkowski > wrote: > >> > >> maybe we should add something along those lines to FAQ > >> > >> http://emptysqua.re/blog/pypy-garbage-collection-and-a-deadlock/ > > > > > > Can't it be fixed? I see 2 possible solutions here: > > 1. We can detect deadlock (any lock contention during GC round will > become > > deadlock) and bail out of GC round early to try again later or skip > freeing > > this object (and mark all objects referenced by it). > > 2. We can defer all calls to __del__ until after GC round and run them > in a > > separate Python thread which would allow them to yield processing to let > > other threads free some lock. > > > > I know that either of these solutions would change semantics a bit, but > it > > shouldn't affect user's code. > > as you can see from the blog post *any* change does affect user code. Well, I mean if we let user use locks as expected, it won't affect it in any bad way. Note that finalizers are not called during GC, but at some later "safer" stage, where all the internals are in a sane state. It's still a part of GC pause (or how you call it) when GIL is taken and won't ever be released back. Not sure how you would do 1, really (a global flag on locks?), Yes, smth like a global flag that would be set during that stage and Lock.acquire would check it to see if there's a deadlock. We can at least throw some exception or warning to make it clearer to user what happened. 2 is something Java does. > It seems to be the most sane way to do it. It avoids this special state in which __del__ code is run. 
General FAQ entry should say "avoid __del__ doing any substantial job > at any cost" I would think, whacking at locks is like lipstick on a > pig. > Yes, #1 would just cover one possible problem with them. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Mon Apr 6 21:05:12 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 6 Apr 2015 21:05:12 +0200 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: Hi Yuriy, (2) cannot be done in Python without major changes in semantics. User code that makes no use of threads, for example, certainly doesn't expect to be careful about multithreading in the __del__ methods. (1) is hard too. What is hard is to decide when acquiring a lock in a __del__ is safe or not. For example, it would not be safe if the lock is some global lock. But it would be safe if the lock belongs to the object being finalized, in which case (we can hope that) nobody else can see it any more. We can't even be sure that an actual deadlock situation encountered in a __del__ is really a deadlock; maybe a different thread will come along and release that lock soon... I think this is a problem that is just as hard as the general deadlock problem (i.e. unsolvable, but the user can use some tools to help him figure out deadlocks when they really happen). A bient?t, Armin. From fijall at gmail.com Mon Apr 6 21:08:14 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 6 Apr 2015 21:08:14 +0200 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: My question stands - should we add this short explanation (maybe with a link to the blog post) to FAQ as to why you should not use locks in dels? Or maybe why you should not have advanced logic in dels to start with. On Mon, Apr 6, 2015 at 9:05 PM, Armin Rigo wrote: > Hi Yuriy, > > (2) cannot be done in Python without major changes in semantics. 
User > code that makes no use of threads, for example, certainly doesn't > expect to be careful about multithreading in the __del__ methods. > > (1) is hard too. What is hard is to decide when acquiring a lock in a > __del__ is safe or not. For example, it would not be safe if the lock > is some global lock. But it would be safe if the lock belongs to the > object being finalized, in which case (we can hope that) nobody else > can see it any more. > > We can't even be sure that an actual deadlock situation encountered in > a __del__ is really a deadlock; maybe a different thread will come > along and release that lock soon... I think this is a problem that is > just as hard as the general deadlock problem (i.e. unsolvable, but the > user can use some tools to help him figure out deadlocks when they > really happen). > > > A bient?t, > > Armin. From cherian.rosh at gmail.com Tue Apr 7 08:02:56 2015 From: cherian.rosh at gmail.com (Roshan Mathew Cherian) Date: Mon, 6 Apr 2015 23:02:56 -0700 Subject: [pypy-dev] gcc recommendation for pypy 2.5.0 In-Reply-To: References: Message-ID: <2B96EA13-9421-4408-AC45-F657C83DF68E@gmail.com> Hi Armin Thanks that worked well with gcc 4.1.2. We are working to get our build system to a newer version of oel and gcc Thanks Roshan > On Apr 6, 2015, at 12:38 AM, Armin Rigo wrote: > > Hi Roshan, > >> On 6 April 2015 at 05:44, Roshan Cherian wrote: >> Could I know the >> recommended gcc version for building pypy 2.5.0. > > Any version that is not seriously outdated would do. I guess 4.1 is > too old. We don't specifically try out every single old version of > gcc so I can't be more precise than that. If for some reason you > absolutely have to use gcc 4.1, you can probably translate with the > more portable "--gcrootfinder=shadowstack" option at the cost of a > some percents of final performance. > >> I am sorry the recommended approach is to use an older version of pypy to >> build the newer version > > That's an unrelated question. 
> You can use either PyPy or CPython to
> translate PyPy; it should only change the speed at which it is done.
>
> A bientôt,
>
> Armin.

From arigo at tunes.org Tue Apr 7 10:07:53 2015
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 7 Apr 2015 10:07:53 +0200
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

Hi Maciej,

On 6 April 2015 at 21:08, Maciej Fijalkowski wrote:
> My question stands - should we add this short explanation (maybe with
> a link to the blog post) to FAQ as to why you should not use locks in
> dels?

My problem with the blog post is that, after it correctly diagnoses the problem, it doesn't really solve it at all. It just moves allocations around to avoid having them while the lock is held. This is a workaround that can fail; any Python code can allocate. Even if it's not obvious why, when the JIT happens to be tracing that code, you have many more allocations than usual, for example. So the blog post's solution is not a proper fix but merely reduces the likelihood of a deadlock.

> Or maybe why you should not have advanced logic in dels to start
> with.

That would be a much better solution. I suppose we should also look at the blog post's original code and try to figure out, in this case, how to do it cleanly. If we can, we should mention how in the FAQ entry. But if it turns out to be close to impossible, we should mention in the FAQ entry that APIs can be too badly designed from that point of view, and still hint at some reasonable workarounds...

A bientôt,

Armin.

From yorik.sar at gmail.com Tue Apr 7 16:00:23 2015
From: yorik.sar at gmail.com (Yuriy Taraday)
Date: Tue, 07 Apr 2015 14:00:23 +0000
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

Sorry for hijacking the thread. I hope I won't hinder it too much.

On Mon, Apr 6, 2015 at 10:05 PM Armin Rigo wrote:
> (2) cannot be done in Python without major changes in semantics.
User > code that makes no use of threads, for example, certainly doesn't > expect to be careful about multithreading in the __del__ methods. > GC introduces concurrency to user code anyway: call to some __del__ method can happen in any time in user code, so it might be called in a separate thread just as well (it'll be serialized by GIL anyway). On the other hand single-threaded program should not suffer any deadlocks since there's no locks in it, so we can use this approach only in multithreaded programs. __del__ methods themselves should already be pretty isolated in the sense that they can't assume anything about state of objects (except self which belongs only to them when they are executed). (1) is hard too. What is hard is to decide when acquiring a lock in a > __del__ is safe or not. For example, it would not be safe if the lock > is some global lock. But it would be safe if the lock belongs to the > object being finalized, in which case (we can hope that) nobody else > can see it any more. > We can't even be sure that an actual deadlock situation encountered in > a __del__ is really a deadlock; maybe a different thread will come > along and release that lock soon... I think this is a problem that is > just as hard as the general deadlock problem (i.e. unsolvable, but the > user can use some tools to help him figure out deadlocks when they > really happen). > It will 100% deadlock if the lock in question is held by another thread since we hold GIL that's needed to release it. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Tue Apr 7 19:18:00 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 7 Apr 2015 19:18:00 +0200 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: Hi Yuriy, On 7 April 2015 at 16:00, Yuriy Taraday wrote: >> We can't even be sure that an actual deadlock situation encountered in >> a __del__ is really a deadlock; maybe a different thread will come >> along and release that lock soon... I think this is a problem that is >> just as hard as the general deadlock problem (i.e. unsolvable, but the >> user can use some tools to help him figure out deadlocks when they >> really happen). > > It will 100% deadlock if the lock in question is held by another thread > since we hold GIL that's needed to release it. No, that's wrong. You can't use the GIL as argument for the behavior of a long-running piece of Python code. The GIL is released periodically, also inside the __del__ method. If that __del__ method tries to acquire a lock that is already acquired, it suspends the thread, but as usual it does so by first releasing the GIL and letting other threads run. You're correct in that we don't know which thread the __del__ method runs in, and so we don't know exactly which thread's execution is suspended until the end of the __del__ method. This is in contrast with *some* cases in CPython, notably cases where we know an object 'x' is only ever created, manipulated, and freed in some thread; then (and only in this case) on CPython we know that the __del__ method will also be run in that same thread. That's not the case on PyPy (as long as you have more than one active thread, at least). Still, it's unclear what we can change about it. A bient?t, Armin. 
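Armin's point about not knowing which thread runs __del__ can be made concrete with a small sketch (illustrative names only). Under CPython's reference counting, the finalizer runs deterministically in whichever thread drops the last reference; that is precisely the guarantee the discussion above says PyPy does not give.

```python
import threading

ran_in = []

class Tracker:
    def __del__(self):
        # Record which thread the finalizer actually executes in.
        ran_in.append(threading.current_thread().name)

def worker():
    t = Tracker()
    del t  # on CPython, __del__ runs right here, in this thread

th = threading.Thread(target=worker, name="worker-1")
th.start()
th.join()

# CPython: ['worker-1'].  On PyPy the list may still be empty at this
# point, and the finalizer may later run in a different thread entirely.
print(ran_in)
```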
From fijall at gmail.com Tue Apr 7 20:14:57 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 7 Apr 2015 20:14:57 +0200 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: On Tue, Apr 7, 2015 at 7:18 PM, Armin Rigo wrote: > Hi Yuriy, > > On 7 April 2015 at 16:00, Yuriy Taraday wrote: >>> We can't even be sure that an actual deadlock situation encountered in >>> a __del__ is really a deadlock; maybe a different thread will come >>> along and release that lock soon... I think this is a problem that is >>> just as hard as the general deadlock problem (i.e. unsolvable, but the >>> user can use some tools to help him figure out deadlocks when they >>> really happen). >> >> It will 100% deadlock if the lock in question is held by another thread >> since we hold GIL that's needed to release it. > > No, that's wrong. You can't use the GIL as argument for the behavior > of a long-running piece of Python code. The GIL is released > periodically, also inside the __del__ method. If that __del__ method > tries to acquire a lock that is already acquired, it suspends the > thread, but as usual it does so by first releasing the GIL and letting > other threads run. > > You're correct in that we don't know which thread the __del__ method > runs in, and so we don't know exactly which thread's execution is > suspended until the end of the __del__ method. > > This is in contrast with *some* cases in CPython, notably cases where > we know an object 'x' is only ever created, manipulated, and freed in > some thread; then (and only in this case) on CPython we know that the > __del__ method will also be run in that same thread. That's not the > case on PyPy (as long as you have more than one active thread, at > least). Still, it's unclear what we can change about it. > Are you sure this is true for the case where object is found inside a cycle? 
(these days, they're run, not sure if in 2.7 or 3.x) From yorik.sar at gmail.com Tue Apr 7 20:47:39 2015 From: yorik.sar at gmail.com (Yuriy Taraday) Date: Tue, 07 Apr 2015 18:47:39 +0000 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: On Tue, Apr 7, 2015 at 8:18 PM Armin Rigo wrote: > Hi Yuriy, > > On 7 April 2015 at 16:00, Yuriy Taraday wrote: > >> We can't even be sure that an actual deadlock situation encountered in > >> a __del__ is really a deadlock; maybe a different thread will come > >> along and release that lock soon... I think this is a problem that is > >> just as hard as the general deadlock problem (i.e. unsolvable, but the > >> user can use some tools to help him figure out deadlocks when they > >> really happen). > > > > It will 100% deadlock if the lock in question is held by another thread > > since we hold GIL that's needed to release it. > > No, that's wrong. You can't use the GIL as argument for the behavior > of a long-running piece of Python code. The GIL is released > periodically, also inside the __del__ method. If that __del__ method > tries to acquire a lock that is already acquired, it suspends the > thread, but as usual it does so by first releasing the GIL and letting > other threads run. > Sorry, I was under impression that GIL is being held by GC while finalizers are being called. So this line from the blogpost must be wrong then: > If any thread is holding either lock at this moment, the process deadlocks. I've checked it: https://gist.github.com/YorikSar/51b0b15fad41ef338e7f So, deadlock is guaranteed only if we're trying to acquire it in the same thread. We can handle at least this case. Although it seems pretty thin, as #1 is just working around the problem. You're correct in that we don't know which thread the __del__ method > runs in, and so we don't know exactly which thread's execution is > suspended until the end of the __del__ method. > So it shouldn't matter if we run them in a separate thread. 
This is in contrast with *some* cases in CPython, notably cases where > we know an object 'x' is only ever created, manipulated, and freed in > some thread; then (and only in this case) on CPython we know that the > __del__ method will also be run in that same thread. That's not the > case on PyPy (as long as you have more than one active thread, at > least). Still, it's unclear what we can change about it. > That's another reason for programmer to not rely on which thread __del__ runs in. So I think running them all in a separate thread would only make things clearer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Apr 8 09:10:03 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 8 Apr 2015 09:10:03 +0200 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: Hi Fijal, On 7 April 2015 at 20:14, Maciej Fijalkowski wrote: > Are you sure this is true for the case where object is found inside a > cycle? (these days, they're run, not sure if in 2.7 or 3.x) Ah, you're right. There is always the case where objects are reachable from a cycle (even if they are not *inside* a cycle themselves). Even on CPython, one thread can make purely thread-local objects in this situation. Then the cycle can be broken in an unrelated thread and the __del__ called from there. A bient?t, Armin. From arigo at tunes.org Wed Apr 8 09:15:38 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 8 Apr 2015 09:15:38 +0200 Subject: [pypy-dev] FAQ entry In-Reply-To: References: Message-ID: Hi Yuriy, It seems that now you've understood the problems, so please re-read my previous answers :-) In particular, I try to give a situation where even what looks like a deadlock might not be one (because any thread can release any lock), and why we can't always run finalizers in a separate thread (because simple non-multithreaded programs don't expect concurrency when running dels). A bient?t, Armin. 
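Yuriy's solution #2, deferring finalizers to a dedicated thread, can be approximated today at the application level without interpreter changes. The sketch below is hypothetical (all names invented) and written for Python 3: __del__ merely enqueues the real cleanup, and a daemon thread runs it outside any GC-critical moment, where blocking on a lock is safe. Note that the queued closure briefly resurrects self; Python 3 runs a finalizer at most once (PEP 442), so that is harmless there, but it was a hazard on the Python 2 of this thread's era.

```python
import queue
import threading

_pending = queue.Queue()

def _finalizer_loop():
    while True:
        job = _pending.get()
        if job is None:          # sentinel used to stop the demo
            break
        job()                    # safe to block on locks here

_finalizer_thread = threading.Thread(target=_finalizer_loop, daemon=True)
_finalizer_thread.start()

def deferred(method):
    """Decorator: __del__ only enqueues the real cleanup."""
    def wrapper(self):
        _pending.put(lambda: method(self))  # briefly resurrects self
    return wrapper

released = []
app_lock = threading.Lock()

class Resource:
    @deferred
    def __del__(self):
        with app_lock:           # fine: we are in an ordinary thread
            released.append("cleaned up")

r = Resource()
del r                            # the enqueue happens here on CPython
_pending.put(None)               # drain the queue, for the demo only
_finalizer_thread.join()
print(released)
```

The trade-off is the one debated above: cleanup becomes explicitly asynchronous relative to the rest of the program.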
From yorik.sar at gmail.com Wed Apr 8 10:47:52 2015
From: yorik.sar at gmail.com (Yuriy Taraday)
Date: Wed, 08 Apr 2015 08:47:52 +0000
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

On Wed, Apr 8, 2015 at 10:16 AM Armin Rigo wrote:
> Hi Yuriy,
>
> It seems that now you've understood the problems, so please re-read my
> previous answers :-)
>
> In particular, I try to give a situation where even what looks like a
> deadlock might not be one (because any thread can release any lock),
> and why we can't always run finalizers in a separate thread (because
> simple non-multithreaded programs don't expect concurrency when
> running dels).

I see a problem with solution #1 since deadlock is hard to detect. It actually is a workaround, so it won't work all the time, and it's not a preferred solution.

I still don't see a problem with solution #2. I've been sent a link [0] to a post that shows that signals and finalizers are "unexpected concurrency" anyway. Furthermore, even in a multithreaded app we can't predict which thread will end up executing them. So my argument is: why not make it clear that __del__ will run in a separate thread instead of trying to pretend that it's something more predictable than that?

[0] https://wingolog.org/archives/2012/02/16/unexpected-concurrency

From arigo at tunes.org Wed Apr 8 11:02:09 2015
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 8 Apr 2015 11:02:09 +0200
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

Hi Yuriy,

On 8 April 2015 at 10:47, Yuriy Taraday wrote:
> will end up executing them. So my argument is: why not make it clear that
> __del__ will run in a separate thread instead of trying to pretend that it's
> something more predictable than that?

For example, because it would break this class (it's left as an exercise to the reader to understand why):

    class Foo(object):
        num_instances = 0

        def __init__(self):
            Foo.num_instances += 1

        def __del__(self):
            Foo.num_instances -= 1

Armin

From yorik.sar at gmail.com Wed Apr 8 11:43:16 2015
From: yorik.sar at gmail.com (Yuriy Taraday)
Date: Wed, 08 Apr 2015 09:43:16 +0000
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

On Wed, Apr 8, 2015 at 12:02 PM Armin Rigo wrote:
> Hi Yuriy,
>
> On 8 April 2015 at 10:47, Yuriy Taraday wrote:
> > will end up executing them. So my argument is: why not make it clear that
> > __del__ will run in a separate thread instead of trying to pretend that it's
> > something more predictable than that?
>
> For example, because it would break this class (it's left as an
> exercise to the reader to understand why):
>
>     class Foo(object):
>         num_instances = 0
>
>         def __init__(self):
>             Foo.num_instances += 1
>
>         def __del__(self):
>             Foo.num_instances -= 1

It's already broken if it's used in a multithreaded app. For single-threaded apps we can make an exception and keep things running as they are now, i.e. keep it single-threaded. This will also prevent unnecessary multithreading initialization.

From arigo at tunes.org Wed Apr 8 12:56:36 2015
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 8 Apr 2015 12:56:36 +0200
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

Hi,

On 8 April 2015 at 11:43, Yuriy Taraday wrote:
> It's already broken if it's used in a multithreaded app. For single-threaded
> apps we can make an exception and keep things running as they are now, i.e.
> keep it single-threaded. This will also prevent unnecessary multithreading
> initialization.

You'd end up with cases where you can have a deadlock in single-threaded programs that magically goes away if you just add anywhere the line "thread.start_new_thread(lambda: None, ())"... But maybe creating lock objects should be enough to change where destructors run? You can't easily have deadlocks without user lock objects.

Or, if you care about deadlocks, maybe your Python program should explicitly start its own finalizer thread. Your potentially-deadlocking __del__ methods should use a decorator that, when called, simply puts the actual method into a Queue.Queue which is consumed by this finalizer thread. It would be the same, but not transparent.

I think that the idea of doing it transparently may be interesting, but it needs some more careful design before we can think about changing PyPy like that. (Not that there is *any* idea about the GC that doesn't require careful design :-)

A bientôt,

Armin.

From yorik.sar at gmail.com Wed Apr 8 13:35:00 2015
From: yorik.sar at gmail.com (Yuriy Taraday)
Date: Wed, 08 Apr 2015 11:35:00 +0000
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

On Wed, Apr 8, 2015 at 1:57 PM Armin Rigo wrote:
> On 8 April 2015 at 11:43, Yuriy Taraday wrote:
> > It's already broken if it's used in a multithreaded app. For single-threaded
> > apps we can make an exception and keep things running as they are now, i.e.
> > keep it single-threaded. This will also prevent unnecessary multithreading
> > initialization.
>
> You'd end up with cases where you can have a deadlock in
> single-threaded programs that magically goes away if you just add
> anywhere the line "thread.start_new_thread(lambda: None, ())"... But
> maybe creating lock objects should be enough to change where
> destructors run? You can't easily have deadlocks without user lock
> objects.

Having locks in a single-threaded app is rather strange, but they can come from library code, so the programmer may not be aware of them and may rely on that "serialized __del__" behavior.

> Or, if you care about deadlocks, maybe your Python program should
> explicitly start its own finalizer thread. Your
> potentially-deadlocking __del__ methods should use a decorator that,
> when called, simply puts the actual method into a Queue.Queue which is
> consumed by this finalizer thread. It would be the same, but not
> transparent.

That would be creating new references to self in __del__, which is bad (at least the docs say so). And after that these references will be gone once again and __del__ will be called again.

> I think that the idea of doing it transparently may be interesting,
> but it needs some more careful design before we can think about
> changing PyPy like that. (Not that there is *any* idea about the GC
> that doesn't require careful design :-)

Oh, sure. I just think that we should at least consider this approach.

From arigo at tunes.org Wed Apr 8 16:09:48 2015
From: arigo at tunes.org (Armin Rigo)
Date: Wed, 8 Apr 2015 16:09:48 +0200
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

Hi,

On 8 April 2015 at 13:35, Yuriy Taraday wrote:
> That would be creating new references to self in __del__ which is bad (at
> least docs say so). And after that these references will be gone once again
> and __del__ will be called again.

No, in PyPy it is fine, but indeed it is a problem in CPython...

A bientôt,

Armin.

From william.leslie.ttg at gmail.com Fri Apr 10 10:44:43 2015
From: william.leslie.ttg at gmail.com (William ML Leslie)
Date: Fri, 10 Apr 2015 18:44:43 +1000
Subject: [pypy-dev] FAQ entry
In-Reply-To:
References:
Message-ID:

On 8 April 2015 at 04:54, Yuriy Taraday wrote:
> Did you miss the mailing list intentionally?

Ach no! I always seem to do this on pypy-dev. Thanks for pointing that out.

> On Tue, Apr 7, 2015 at 5:59 PM William ML Leslie
> <william.leslie.ttg at gmail.com> wrote:
>
>> On 8 April 2015 at 00:00, Yuriy Taraday wrote:
>>
>>> GC introduces concurrency to user code anyway: call to some __del__
>>> method can happen at any time in user code, so it might be called in a
>>> separate thread just as well
>>
>> Obligatory: http://wingolog.org/archives/2012/02/16/unexpected-concurrency
>
> Yes, that's how I see it: one can't bet on where and when finalizers are
> run, so they appear to the rest of the program as if they're run in some
> special thread that wakes up at some scary moments. So a separate thread is
> just as good for them.

Except that, up until now, you can expect that __del__ is run in /one/ of the threads you've started. If you only have one thread, you know exactly which thread your __del__ will be run in. So you could make assumptions about thread-local state when you write such a method.

Not that I have an opinion here. __del__ is problematic, and entirely to be avoided in new code, afaiac.

--
William Leslie

Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement.

From arigo at tunes.org Fri Apr 10 23:51:00 2015
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 10 Apr 2015 23:51:00 +0200
Subject: [pypy-dev] EuroPython?
Message-ID:

Hi all,

I'm preparing a EuroPython submission about STM and/or about CFFI, and wondering if someone else also planned to submit a talk. If not, I'll include a general "status of PyPy" part in my submission.
Armin From romain.py at gmail.com Sat Apr 11 00:10:05 2015 From: romain.py at gmail.com (Romain Guillebert) Date: Fri, 10 Apr 2015 18:10:05 -0400 Subject: [pypy-dev] EuroPython? In-Reply-To: References: Message-ID: Hi Armin I submitted the talk I gave at fosdem. Romain On Fri, Apr 10, 2015 at 5:51 PM, Armin Rigo wrote: > Hi all, > > I'm preparing a EuroPython submission about STM and/or about CFFI, and > wondering if someone else also planned to submit a talk. If not, I'll > include a general "status of PyPy" part in my submission. > > Armin > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From anto.cuni at gmail.com Sat Apr 11 11:29:49 2015 From: anto.cuni at gmail.com (Antonio Cuni) Date: Sat, 11 Apr 2015 11:29:49 +0200 Subject: [pypy-dev] EuroPython? In-Reply-To: References: Message-ID: Hi, my plan was to submit a talk about profiling/optimizing, possibly together with fijal if he comes (but I didn't do yet :)). Probably the talk which suits best for talking about the general status is Romain's one? On Sat, Apr 11, 2015 at 12:10 AM, Romain Guillebert wrote: > Hi Armin > > I submitted the talk I gave at fosdem. > > Romain > > On Fri, Apr 10, 2015 at 5:51 PM, Armin Rigo wrote: > > Hi all, > > > > I'm preparing a EuroPython submission about STM and/or about CFFI, and > > wondering if someone else also planned to submit a talk. If not, I'll > > include a general "status of PyPy" part in my submission. > > > > Armin > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Sat Apr 11 11:47:53 2015 From: arigo at tunes.org (Armin Rigo) Date: Sat, 11 Apr 2015 11:47:53 +0200 Subject: [pypy-dev] EuroPython? In-Reply-To: References: Message-ID: Hi, On 11 April 2015 at 11:29, Antonio Cuni wrote: > my plan was to submit a talk about profiling/optimizing, possibly together > with fijal if he comes (but I didn't do yet :)). > Probably the talk which suits best for talking about the general status is > Romain's one? I just submitted http://bitbucket.org/pypy/extradoc/raw/extradoc/talk/ep2015/stm-abstract.rst . I didn't expect there would be three talks, although I guess the vmprof talk is not really PyPy-only. At one point, maybe, we could do a talk about CFFI, which is not PyPy-only either... But there is no way I'm going to submit a 4th proposal :-) A bient?t, Armin. From Ajit.Dingankar at ieee.org Sat Apr 11 20:00:02 2015 From: Ajit.Dingankar at ieee.org (Ajit Dingankar) Date: Sat, 11 Apr 2015 18:00:02 +0000 (UTC) Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) Message-ID: I tried the translator for BF example on Xeon Phi: http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.html It failed due to a "safety check" related to asserts at the end of the translate module. I use CPython v2.7.2 since that's the latest I could find for the Phi accelerator. I thought I'd try it before going to the step of cross-compiling a more recent version. (Just for reference the example works with CPython v2.7.5 on the Xeon host.) I tried to search for previous experience with Phi (or MIC) but could only find this old post on the mailing list: http://permalink.gmane.org/gmane.comp.python.pypy/11981 which is mainly about STM but mentions MIC at the very end: "Still trying to see whether I can get PyPy to run on the MIC. :)" I'd appreciate any pointers to making PyPy translate work on Phi, with CPython or PyPy binary itself (if needed, since it may be hard to get it working on Phi). 
At least whether Xeon Phi is or is not a supported platform, and what options there are to support it, in the latter case). Thanks, Ajit ==== PS: For full disclosure, I work for Intel but my day job is related to hardware, hence posting from my personal account. From wlavrijsen at lbl.gov Sat Apr 11 21:16:08 2015 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Sat, 11 Apr 2015 12:16:08 -0700 (PDT) Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: References: Message-ID: Ajit, > I tried to search for previous experience with Phi (or MIC) but > could only find this old post on the mailing list: > http://permalink.gmane.org/gmane.comp.python.pypy/11981 > which is mainly about STM but mentions MIC at the very end: > "Still trying to see whether I can get PyPy to run on the MIC. :)" That was me. :) Still interested, but rather looking forward to KNL atm. Single-threaded performance isn't worth it on KNC, and scaling out with STM wasn't enough by far. Haven't tried since with more recent PyPy or PyPy-STM. Will try again once we get our hands on KNL. Now, from your description it sounds to me like you are trying to translate on the card, rather than translate on the host and then cross-compile the C output? I never tried the former. Rather, translate with --gcrootfinder=shadowstack, find the C output under $TMP, together with a make file to rebuild. Edit that to use icc and -mmic and see where that gets you. (Things may have changed plenty since that posting, of course. :P ) Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From arigo at tunes.org Sun Apr 12 09:32:01 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 12 Apr 2015 09:32:01 +0200 Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: References: Message-ID: Hi Ajit, On 11 April 2015 at 20:00, Ajit Dingankar wrote: > It failed due to a "safety check" related to asserts at the end of > the translate module. 
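For the archives, Wim's host-side recipe from earlier in this thread can be written out as a shell sketch. Every path, target name and flag below is a guess reconstructed from the thread (the BF-tutorial target `example2.py` and a usession directory like the one mentioned later), not something verified on a Phi or against any Intel documentation:

```shell
# Hypothetical reconstruction of the host-side cross-build -- nothing
# here is verified; adjust paths, target and flags to your setup.

# 1. Translate on the host with the shadowstack GC root finder, so no
#    hand-written asm stack walking needs porting:
python rpython/bin/rpython --gcrootfinder=shadowstack example2.py

# 2. The generated C sources plus a Makefile are kept under $TMP, e.g.:
cd /tmp/usession-$USER/testing_1

# 3. Rebuild with icc targeting the KNC card, then copy the binary
#    (and any icc runtime libraries it needs) to the card and run it:
make clean
make CC=icc LD=icc CFLAGS="-O2 -mmic" LDFLAGS="-mmic"
```

As the rest of the thread shows, forgetting to override LD as well as CC leaves the binary linked for the host and it will not start on the card.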
We can't help from this vague description. Please give at least the complete error message. Officially, supported platforms are Linux, Windows and OS/X running on x86, x86-64, or ARMv6-v7. I don't know where Xeon Phi fits there. It seems to be an x86-64 from Wikipedia, but I'm not sure about what is special about it. Wim's reply is not helpful at all for me, as it is mostly given as a series of three-letter acronyms I've never heard about :-) A bient?t, Armin From yury at shurup.com Sun Apr 12 11:43:36 2015 From: yury at shurup.com (Yury V. Zaytsev) Date: Sun, 12 Apr 2015 11:43:36 +0200 Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: References: Message-ID: <1428831816.2688.86.camel@newpride> On Sun, 2015-04-12 at 09:32 +0200, Armin Rigo wrote: > Wim's reply is not helpful at all for me, as it is mostly given as a > series of three-letter acronyms I've never heard about :-) He's simply referring to the different generations of MICs (MIC = Many Integrated Core architecture, KNC = Knights Corner [older models], KNL = Knights Landing [newer models]). > I don't know where Xeon Phi fits there. It seems to be an x86-64 from > Wikipedia, but I'm not sure about what is special about it. I've shortly played with KNC, and put very simply in its current shape it's basically a plug-in computer extension card, which can function in several modes, e.g. as an accelerator which receives tasks from the host system and executes them, or even as a more or less stand-alone box inside the box running (for instance) a stripped down Linux system. In the latter mode, software just requires cross-compilation and then can run on the board as if it was a stand-alone computer, in the former you have to make use of special APIs to run your tasks on the MICs. 
It looks like Wim has taken the first approach, which makes total sense to get it working with minimal effort :-) So yes, in this approximation, assume it's x86-64 which requires a special cross-compiler and has a bit different subset of supported insns. -- Sincerely yours, Yury V. Zaytsev From tritium-list at sdamon.com Sun Apr 12 13:57:35 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Sun, 12 Apr 2015 07:57:35 -0400 Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: References: Message-ID: <552A5DAF.7070603@sdamon.com> Xeon Phi is, technically, X86... On 4/12/2015 03:32, Armin Rigo wrote: > Hi Ajit, > > On 11 April 2015 at 20:00, Ajit Dingankar wrote: >> It failed due to a "safety check" related to asserts at the end of >> the translate module. > We can't help from this vague description. Please give at least the > complete error message. > > Officially, supported platforms are Linux, Windows and OS/X running on > x86, x86-64, or ARMv6-v7. I don't know where Xeon Phi fits there. It > seems to be an x86-64 from Wikipedia, but I'm not sure about what is > special about it. Wim's reply is not helpful at all for me, as it is > mostly given as a series of three-letter acronyms I've never heard > about :-) > > > A bient?t, > > Armin > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From lac at openend.se Sun Apr 12 14:46:15 2015 From: lac at openend.se (Laura Creighton) Date: Sun, 12 Apr 2015 14:46:15 +0200 Subject: [pypy-dev] http://speed.pypy.org/timeline/ -- show the code? Message-ID: <201504121246.t3CCkF5q031140@fido.openend.se> I thought there was a way to show the code that was actually run to get the results. Maybe I am confusing things with the pshootout site. Have I just forgotten how to do this? 
I wanted to show an astronomer what sort of code we run blazingly fast vs what sort we are less speedy at, so he can decide if he needs to write his algorithm in C++ or not. If speed.pypy.org doesn't have such a feature, it would seem to be a good one to add. Laura From wlavrijsen at lbl.gov Sun Apr 12 17:11:03 2015 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Sun, 12 Apr 2015 08:11:03 -0700 (PDT) Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: References: Message-ID: Armin, > Wim's reply is not helpful at all for me, as it is mostly given as a > series of three-letter acronyms I've never heard about :-) well, what Yury said. :) Sorry, but I was replying to an Intel guy ... To first order, think of it as a server that you ssh to, and then you find yourself on a perfectly ordinary Linux machine, albeit one that presents you with 240 "cpus." For the current generation, you have to deal with cross-compilation, b/c its vector instruction set is unique. Beyond that, it's just x86. (The next generation uses AVX-512, so no more cross-compilation necessary.) Since PyPy does not generate vector instructions (nor deals with any offloading), the easiest I found was to generate C, and cross-compile that, all on the host; not try to run the translator on the Phi. Even so, I didn't find anything worthwhile: loosing vectorization is okay (for high energy physics codes, as they're too branchy anyway), but you really need to be able to scale out. Btw., we're in big for its successor: https://www.nersc.gov/users/computational-systems/cori/ Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From arigo at tunes.org Sun Apr 12 17:39:17 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 12 Apr 2015 17:39:17 +0200 Subject: [pypy-dev] http://speed.pypy.org/timeline/ -- show the code? 
In-Reply-To: <201504121246.t3CCkF5q031140@fido.openend.se> References: <201504121246.t3CCkF5q031140@fido.openend.se> Message-ID: Hi Laura, On 12 April 2015 at 14:46, Laura Creighton wrote: > I thought there was a way to show the code that was actually run to get > the results. Maybe I am confusing things with the pshootout site. Have > I just forgotten how to do this? Yes, there is a way: from a page that shows one graph, like for example http://speed.pypy.org/timeline/?exe=3%2C6%2C1%2C5&base=2%2B472&ben=ai&env=1&revs=200&equid=off , you can see a link called "Code" below the graph itself, next to the short description of what the benchmark does. A bient?t, Armin. From lac at openend.se Mon Apr 13 08:58:00 2015 From: lac at openend.se (Laura Creighton) Date: Mon, 13 Apr 2015 08:58:00 +0200 Subject: [pypy-dev] http://speed.pypy.org/timeline/ -- show the code? In-Reply-To: Message from Armin Rigo of "Sun, 12 Apr 2015 17:39:17 +0200." References: <201504121246.t3CCkF5q031140@fido.openend.se> Message-ID: <201504130658.t3D6w0CV030990@fido.openend.se> In a message of Sun, 12 Apr 2015 17:39:17 +0200, Armin Rigo writes: >Hi Laura, > >On 12 April 2015 at 14:46, Laura Creighton wrote: >> I thought there was a way to show the code that was actually run to get >> the results. Maybe I am confusing things with the pshootout site. Have >> I just forgotten how to do this? > >Yes, there is a way: from a page that shows one graph, like for example >http://speed.pypy.org/timeline/?exe=3%2C6%2C1%2C5&base=2%2B472&ben=ai&env=1&revs=200&equid=off >, you can see a link called "Code" below the graph itself, next to the >short description of what the benchmark does. > > >A bient?t, > >Armin. Fantastic, sorry I missed that. Thank you. 
Laura From ajit.dingankar at ieee.org Mon Apr 13 06:46:45 2015 From: ajit.dingankar at ieee.org (Ajit Dingankar) Date: Mon, 13 Apr 2015 04:46:45 +0000 (UTC) Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: <1428831816.2688.86.camel@newpride> References: <1428831816.2688.86.camel@newpride> Message-ID: <1999412943.1525748.1428900405433.JavaMail.yahoo@mail.yahoo.com> @Wim: Yes, I am trying to translate on the card since I couldn't find much info re cross-compiling and someone had suggested direct translation on the target platform (though not specifically for Xeon Phi). Anyway, thanks a lot for the tips! I'll try them at work tomorrow. BTW, I'm more interested in providing new functionality, not targeting high performance initially, so multi-threading is not important for me now; I just need to get it working... @Armin: Sorry I didn't have access to the actual error message when I posted the question as a general query. Will do so if I can't make progress trying Wim's suggestions from work. @Yury: Thanks for the clarification re Xeon Phi generations and usage models. Thanks, Ajit ==== On Sunday, April 12, 2015 2:46 AM, Yury V. Zaytsev wrote: On Sun, 2015-04-12 at 09:32 +0200, Armin Rigo wrote: > Wim's reply is not helpful at all for me, as it is mostly given as a > series of three-letter acronyms I've never heard about :-) He's simply referring to the different generations of MICs (MIC = Many Integrated Core architecture, KNC = Knights Corner [older models], KNL = Knights Landing [newer models]). > I don't know where Xeon Phi fits there. It seems to be an x86-64 from > Wikipedia, but I'm not sure about what is special about it. I've briefly played with KNC, and put very simply, in its current shape it's basically a plug-in computer extension card, which can function in several modes, e.g.
as an accelerator which receives tasks from the host system and executes them, or even as a more or less stand-alone box inside the box running (for instance) a stripped down Linux system. In the latter mode, software just requires cross-compilation and then can run on the board as if it was a stand-alone computer, in the former you have to make use of special APIs to run your tasks on the MICs. It looks like Wim has taken the first approach, which makes total sense to get it working with minimal effort :-) So yes, in this approximation, assume it's x86-64 which requires a special cross-compiler and has a bit different subset of supported insns. -- Sincerely yours, Yury V. Zaytsev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmueller at python-academy.de Mon Apr 13 17:29:52 2015 From: mmueller at python-academy.de (=?UTF-8?B?TWlrZSBNw7xsbGVy?=) Date: Mon, 13 Apr 2015 17:29:52 +0200 Subject: [pypy-dev] Which pypy with >=3.3 Python compatibility Message-ID: <552BE0F0.8000907@python-academy.de> I need pypy that is Python 3.3 or, even better, Python 3.4 compatible. I can found the nightly builds at http://buildbot.pypy.org//nightly Which one should I use py3.3 or py3k? There are many more version. Should I use one of them? Thanks, Mike From amauryfa at gmail.com Mon Apr 13 17:42:37 2015 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 13 Apr 2015 17:42:37 +0200 Subject: [pypy-dev] Which pypy with >=3.3 Python compatibility In-Reply-To: <552BE0F0.8000907@python-academy.de> References: <552BE0F0.8000907@python-academy.de> Message-ID: Hi, 2015-04-13 17:29 GMT+02:00 Mike M?ller : > I need pypy that is Python 3.3 or, even better, Python 3.4 compatible. > I can found the nightly builds at http://buildbot.pypy.org//nightly There is a branch for the Python 3.3 port, named "py3.3". It's not complete, but probably enough for most usages. 
The builds are in the corresponding subdirectory: http://buildbot.pypy.org/nightly/py3.3/ Note that this branch is not built nightly, but only on-demand. I just started a new one ("pypy-c-jit-linux-x86-64"), let's see if it completes :-) As you have guessed, it's still Work In Progress, and should be considered as highly experimental. > Which one should I use py3.3 or py3k? There are many more version. > Should I use one of them? > > Thanks, > Mike > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmludo at gmail.com Mon Apr 13 17:50:07 2015 From: gmludo at gmail.com (Ludovic Gasc) Date: Mon, 13 Apr 2015 11:50:07 -0400 Subject: [pypy-dev] Which pypy with >=3.3 Python compatibility In-Reply-To: <552BE0F0.8000907@python-academy.de> References: <552BE0F0.8000907@python-academy.de> Message-ID: >From my experience, the main issue I have is that setuptools/pip bootstrapping doesn't work, at least on my setup. You need to install dependencies manually and play with PYTHONPATH environment variable. For Python applications themselves, for now, it works pretty well, and when I found a bug, Amaury has fixed that quickly. -- Ludovic Gasc (GMLudo) http://www.gmludo.eu/ 2015-04-13 11:29 GMT-04:00 Mike M?ller : > I need pypy that is Python 3.3 or, even better, Python 3.4 compatible. > I can found the nightly builds at http://buildbot.pypy.org//nightly > > Which one should I use py3.3 or py3k? There are many more version. > Should I use one of them? > > Thanks, > Mike > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mmueller at python-academy.de Mon Apr 13 17:55:33 2015 From: mmueller at python-academy.de (=?UTF-8?B?TWlrZSBNw7xsbGVy?=) Date: Mon, 13 Apr 2015 17:55:33 +0200 Subject: [pypy-dev] Which pypy with >=3.3 Python compatibility In-Reply-To: References: <552BE0F0.8000907@python-academy.de> Message-ID: <552BE6F5.1090603@python-academy.de> Am 13.04.15 um 17:42 schrieb Amaury Forgeot d'Arc: > Hi, > > 2015-04-13 17:29 GMT+02:00 Mike M?ller >: > > I need pypy that is Python 3.3 or, even better, Python 3.4 compatible. > I can found the nightly builds at http://buildbot.pypy.org//nightly > > > There is a branch for the Python 3.3 port, named "py3.3". > It's not complete, but probably enough for most usages. > > The builds are in the corresponding > subdirectory: http://buildbot.pypy.org/nightly/py3.3/ > Note that this branch is not built nightly, but only on-demand. > > I just started a new one ("pypy-c-jit-linux-x86-64"), > let's see if it completes :-) Thanks. > As you have guessed, it's still Work In Progress, > and should be considered as highly experimental. Ok. Consider me warned. ;) Mike > > > > Which one should I use py3.3 or py3k? There are many more version. > Should I use one of them? > > Thanks, > Mike > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > > > > -- > Amaury Forgeot d'Arc From mmueller at python-academy.de Mon Apr 13 17:57:30 2015 From: mmueller at python-academy.de (=?UTF-8?B?TWlrZSBNw7xsbGVy?=) Date: Mon, 13 Apr 2015 17:57:30 +0200 Subject: [pypy-dev] Which pypy with >=3.3 Python compatibility In-Reply-To: References: <552BE0F0.8000907@python-academy.de> Message-ID: <552BE76A.8010402@python-academy.de> Am 13.04.15 um 17:50 schrieb Ludovic Gasc: > From my experience, the main issue I have is that setuptools/pip bootstrapping > doesn't work, at least on my setup. 
> You need to install dependencies manually and play with PYTHONPATH environment > variable. > > For Python applications themselves, for now, it works pretty well, and when I > found a bug, Amaury has fixed that quickly. Good to know. Just like in the good old times. ;) Mike > > -- > Ludovic Gasc (GMLudo) > http://www.gmludo.eu/ > > 2015-04-13 11:29 GMT-04:00 Mike M?ller >: > > I need pypy that is Python 3.3 or, even better, Python 3.4 compatible. > I can found the nightly builds at http://buildbot.pypy.org//nightly > > Which one should I use py3.3 or py3k? There are many more version. > Should I use one of them? > > Thanks, > Mike > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > From amauryfa at gmail.com Mon Apr 13 18:10:00 2015 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 13 Apr 2015 18:10:00 +0200 Subject: [pypy-dev] Which pypy with >=3.3 Python compatibility In-Reply-To: References: <552BE0F0.8000907@python-academy.de> Message-ID: 2015-04-13 17:50 GMT+02:00 Ludovic Gasc : > From my experience, the main issue I have is that setuptools/pip > bootstrapping doesn't work, at least on my setup. > You need to install dependencies manually and play with PYTHONPATH > environment variable. > ensurepip is a 3.4 feature, right? I guess we need to wait a bit more. Or get involved and help us finish the 3.3 port first... -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmludo at gmail.com Mon Apr 13 19:03:34 2015 From: gmludo at gmail.com (Ludovic Gasc) Date: Mon, 13 Apr 2015 13:03:34 -0400 Subject: [pypy-dev] Which pypy with >=3.3 Python compatibility In-Reply-To: References: <552BE0F0.8000907@python-academy.de> Message-ID: 2015-04-13 12:10 GMT-04:00 Amaury Forgeot d'Arc : > > 2015-04-13 17:50 GMT+02:00 Ludovic Gasc : > >> From my experience, the main issue I have is that setuptools/pip >> bootstrapping doesn't work, at least on my setup. >> You need to install dependencies manually and play with PYTHONPATH >> environment variable. >> > > ensurepip is a 3.4 feature, right? > I guess we need to wait a bit more. > I'm not an expert, but I don't think we speak of the same thing; I'm talking about this: https://pypi.python.org/pypi/setuptools#unix-wget https://pip.pypa.io/en/latest/installing.html#install-pip Is this technique the same as ensurepip? > Or get involved and help us finish the 3.3 port first... > Of course, I agree with you about finishing the 3.3 port first :-) And I'll try to help you as much as I can with my limited skills. Remember, I'm a simple dev guy, not a PyPy expert. FYI, I'm trying to implement a monotonic timer in PyPy3.3 during the PyCON code sprint; Benoît Chesneau found me an example: https://gist.github.com/vext01/8c06136ca3522738234a No idea if I'll succeed, you'll see at the end of this week ;-) If you want to help me/give me tips/implement it yourself, be my guest. Nevertheless, without pip, sorry, but at least for me, it's a little bit complicated to easily test libraries, especially with cffi. Or maybe easy_install is included? I admit I didn't verify.
URL: From wlavrijsen at lbl.gov Mon Apr 13 23:23:10 2015 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Mon, 13 Apr 2015 14:23:10 -0700 (PDT) Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: <350035525.2119818.1428958585738.JavaMail.yahoo@mail.yahoo.com> References: <1428831816.2688.86.camel@newpride> <1999412943.1525748.1428900405433.JavaMail.yahoo@mail.yahoo.com> <350035525.2119818.1428958585738.JavaMail.yahoo@mail.yahoo.com> Message-ID: Ajit, > Hi Wim! I tried your suggestions (CC=icc and including -mmic in CFLAGS) did you edit the make file (under $TMP), or only added the above as envars? If the former, you can check the full linker command. I'd expect problems with the latter, as the translation (of full pypy-c anyway) runs some tests (see platform.py). > $ ./example2-c bottles.b > /lib64/ld-linux-x86-64.so.2: No such file or directory That to me looks like the linker used was not icc (may have to set LD=icc). What does: $ objdump -f ./example2-c tell you? Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From ajit.dingankar at ieee.org Mon Apr 13 22:56:25 2015 From: ajit.dingankar at ieee.org (Ajit Dingankar) Date: Mon, 13 Apr 2015 20:56:25 +0000 (UTC) Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: <1999412943.1525748.1428900405433.JavaMail.yahoo@mail.yahoo.com> References: <1428831816.2688.86.camel@newpride> <1999412943.1525748.1428900405433.JavaMail.yahoo@mail.yahoo.com> Message-ID: <350035525.2119818.1428958585738.JavaMail.yahoo@mail.yahoo.com> Hi Wim! I tried your suggestions (CC=icc and including -mmic in CFLAGS) and copied the executable to the Phi co-processor card, but it fails due to a library dependency. ---- $ ./example2-c bottles.b /lib64/ld-linux-x86-64.so.2: No such file or directory $ ls -l /lib64/ld-linux* lrwxrwxrwx 1 root root 13 Jan 1
1970 /lib64/ld-linux-k1om.so.2 -> ld-2.14.90.so ---- When I tried to run it from the Xeon host with micnativeloadex I also get an error: ---- $ export SINK_LD_LIBRARY_PATH=/opt/intel/composer_xe_2015.1.133/lib/mic $ micnativeloadex ./example2-c Supplied binary does not match the Intel(R) Xeon Phi(TM) coprocessor that is installed. ---- I couldn't find any unresolved library dependencies: ---- $ micnativeloadex ./example2-c -l Dependency information for ./example2-c Full path was resolved as /tmp/usession-release-2.5.1-6/testing_1/./example2-c Binary was built for X86_64 architecture SINK_LD_LIBRARY_PATH = /opt/intel/composer_xe_2015.1.133/lib/mic Dependencies Found: (none found) Dependencies Not Found Locally (but may exist already on the coprocessor): librt.so.1 libm.so.6 libgcc_s.so.1 libpthread.so.0 libc.so.6 libdl.so.2 ---- I noticed in the output above that "Binary was built for X86_64 architecture" but couldn't find any info re it. Plain ldd does show the dependency on ld-linux-x86-64.so.2 (here shown with extra libraries libm, libgcc and libdl included with the "-mmic" flag): ---- $ ldd example2-c linux-vdso.so.1 => (0x00007fffcabc1000) librt.so.1 => /lib64/librt.so.1 (0x00000037d8c00000) libm.so.6 => /lib64/libm.so.6 (0x00000037d9400000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00000037de800000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00000037d8400000) libc.so.6 => /lib64/libc.so.6 (0x00000037d8000000) libdl.so.2 => /lib64/libdl.so.2 (0x00000037d8800000) /lib64/ld-linux-x86-64.so.2 (0x00000037d7c00000) ---- I'd appreciate any help/pointers to the library dependency! Thanks, Ajit ==== On Sunday, April 12, 2015 9:46 PM, Ajit Dingankar wrote: @Wim: Yes, I am trying to translate on the card since I couldn't find much info re cross-compiling and someone had suggested direct translation on the target platform (though not specifically for Xeon Phi). Anyway, thanks a lot for the tips! I'll try them at work tomorrow.
BTW, I'm more interested in providing new functionality, not targeting high performance initially, so multi-threading is not important for me now; I just need to get it working... @Armin: Sorry I didn't have access to the actual error message when I posted the question as a general query. Will do so if I can't make progress trying Wim's suggestions from work.? @Yury:?Thanks for the clarification re Xeon Phi generations and usage models. Thanks, Ajit==== ? On Sunday, April 12, 2015 2:46 AM, Yury V. Zaytsev wrote: On Sun, 2015-04-12 at 09:32 +0200, Armin Rigo wrote: > Wim's reply is not helpful at all for me, as it is mostly given as a > series of three-letter acronyms I've never heard about :-) He's simply referring to the different generations of MICs (MIC = Many Integrated Core architecture, KNC = Knights Corner [older models], KNL = Knights Landing [newer models]). > I don't know where Xeon Phi fits there. It seems to be an x86-64 from > Wikipedia, but I'm not sure about what is special about it. I've shortly played with KNC, and put very simply in its current shape it's basically a plug-in computer extension card, which can function in several modes, e.g. as an accelerator which receives tasks from the host system and executes them, or even as a more or less stand-alone box inside the box running (for instance) a stripped down Linux system. In the latter mode, software just requires cross-compilation and then can run on the board as if it was a stand-alone computer, in the former you have to make use of special APIs to run your tasks on the MICs. It looks like Wim has taken the first approach, which makes total sense to get it working with minimal effort :-) So yes, in this approximation, assume it's x86-64 which requires a special cross-compiler and has a bit different subset of supported insns. -- Sincerely yours, Yury V. Zaytsev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ajit.dingankar at ieee.org Mon Apr 13 23:56:41 2015 From: ajit.dingankar at ieee.org (Ajit Dingankar) Date: Mon, 13 Apr 2015 21:56:41 +0000 (UTC) Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: References: Message-ID: <1673336847.2208436.1428962201806.JavaMail.yahoo@mail.yahoo.com> Thanks a lot, Wim! I had edited the Makefile but hadn't added LD=icc there (had only set it in the shell). With that added, "-mmic" added to the LDFLAGS, and the object files in the module_cache directory removed, the problem was fixed! Now I can run on Phi! Will post some results on performance; the translated interpreter seems much faster on Phi but I don't quite understand why! (Not that I'm complaining! ;-) Thanks, Ajit ==== On Monday, April 13, 2015 2:23 PM, "wlavrijsen at lbl.gov" wrote: Ajit, > Hi Wim! I tried your suggestions (CC=icc and including -mmic in CFLAGS) did you edit the make file (under $TMP), or only added the above as envars? If the former, you can check the full linker command. I'd expect problems with the latter, as the translation (of full pypy-c anyway) runs some tests (see platform.py). > $ ./example2-c bottles.b > /lib64/ld-linux-x86-64.so.2: No such file or directory That to me looks like the linker used was not icc (may have to set LD=icc). What does: $ objdump -f ./example2-c tell you? Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arigo at tunes.org Tue Apr 14 09:25:24 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 14 Apr 2015 09:25:24 +0200 Subject: [pypy-dev] Which pypy with >=3.3 Python compatibility In-Reply-To: References: <552BE0F0.8000907@python-academy.de> Message-ID: Hi Ludovic, On 13 April 2015 at 19:03, Ludovic Gasc wrote: > FYI, I'm trying to implement monotonic timer in PyPy3.3 during PyCON sprint > code, Beno?t Chesneau finds me an example: Fwiw, clock_gettime() and similar functions are already present in PyPy2 in the module ``__pypy__.time``. I didn't check where that code is in py3.3. I would guess it is similarly present in the ``__pypy__.time`` module, but simply needs to be made accessible from the standard place in Python 3 (the ``time`` module?). A bient?t, Armin. From rich at pasra.at Tue Apr 14 13:16:50 2015 From: rich at pasra.at (Richard Plangger) Date: Tue, 14 Apr 2015 13:16:50 +0200 Subject: [pypy-dev] Vectorizing pypy traces no3 In-Reply-To: References: Message-ID: <552CF722.60703@pasra.at> Hi, I have recently managed to correctly transform a trace to a vectorized trace that includes a guard. I'm hoping that this might be merged into the code base of pypy (when it is finished), thus it would be nice to get feedback and iron out some problems I currently have. Of course this needs explanation (hope that does not lead to tl;dr): Consider the following trace: short version (pseudo syntax): ``` label(...,i,...) store(c,i) = load(a,i) + load(b,i) j = i+1 guard(j -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From fijall at gmail.com Tue Apr 14 13:39:49 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 14 Apr 2015 13:39:49 +0200 Subject: [pypy-dev] Vectorizing pypy traces no3 In-Reply-To: <552CF722.60703@pasra.at> References: <552CF722.60703@pasra.at> Message-ID: Hi Richard. I read it. 
but I don't quite understand, want to discuss on IRC? On Tue, Apr 14, 2015 at 1:16 PM, Richard Plangger wrote: > Hi, > > I have recently managed to correctly transform a trace to a vectorized > trace that includes a guard. I'm hoping that this might be merged into > the code base of pypy (when it is finished), thus it would be nice to > get feedback and iron out some problems I currently have. Of course this > needs explanation (hope that does not lead to tl;dr): > > Consider the following trace: > short version (pseudo syntax): > > ``` > label(...,i,...) > store(c,i) = load(a,i) + load(b,i) > j = i+1 > guard(j < ...) > jump(...,j,...) > ``` > long version: http://pastebin.com/e24s1vZg > > By unrolling this short trace, it is _NOT_ possible to vectorize it. The > guard prohibits the store operation to be executed after the guard. I > solved this problem by introducing a new guard (called 'early-exit'). It > saves the live variables at the beginning of the trace. By finding the > index calculations + guards and moving them above the early exit the > following is possible: > > short version (pseudo syntax): > > ``` > label(...,i,...) > j = i + 1 > guard(j < ...) > k = j + 1 > guard(k < ...) > guard_early_exit() # will not be emitted > va = vec_load(a,i,2) > vb = vec_load(b,i,2) > vc = vec_add(va,vb) > vec_store(c, i, 2) = vc > jump(...,k,...) > ``` > long version http://pastebin.com/vc3HaZCn > > My assumptions: Any guard that fails before the early exit must guide > blackhole to the original loop at instruction 0. Only pure operations > and the guards protecting the index are allowed to move before early-exit. > > The previous and the use of the live variables of the early exit (at the > guard instructions) preserve correctness. > > I'm not quite sure how to handle the following problems: > > 1) I had the problem that uneven iterations moved to the blackhole > interpreter and executed the loop from the beginning. 
I fixed it by > resetting the blackhole interpreter position to the jitcode index 0. > Is this the right way to start from the beginning? > > 2) Is there a better way to tell the blackhole interpreter to resume > from the beginning of the trace, or even do not blackhole and just jump > into the normal interpreter? > > 3) Are there any objections to do it this way (guard early-exit)? > > Best, > Richard > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From yury at shurup.com Tue Apr 14 18:24:33 2015 From: yury at shurup.com (Yury V. Zaytsev) Date: Tue, 14 Apr 2015 18:24:33 +0200 Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: <1673336847.2208436.1428962201806.JavaMail.yahoo@mail.yahoo.com> References: <1673336847.2208436.1428962201806.JavaMail.yahoo@mail.yahoo.com> Message-ID: <1429028673.2688.235.camel@newpride> On Mon, 2015-04-13 at 21:56 +0000, Ajit Dingankar wrote: > Will post some results on performance; the translated interpreter > seems much faster on Phi but I don't quite understand why! (Not that > I'm complaining! ;-) Translated as compared to what ;-) ? If you compiled on the host with gcc (especially an older one) and now comparing this to the cross-compiled version with icc, this will be very unfair :-) Also, make sure you translate the same revision on both... I would be curious about recent results, and please be sure to mention which Phi are you using for this test. I haven't had access to any recent ones lately... Thanks, -- Sincerely yours, Yury V. 
Zaytsev From ajit.dingankar at ieee.org Tue Apr 14 20:54:41 2015 From: ajit.dingankar at ieee.org (Ajit Dingankar) Date: Tue, 14 Apr 2015 18:54:41 +0000 (UTC) Subject: [pypy-dev] PyPy translation on Xeon Phi (pka MIC) In-Reply-To: <1429028673.2688.235.camel@newpride> References: <1429028673.2688.235.camel@newpride> Message-ID: <1094015910.2145031.1429037681206.JavaMail.yahoo@mail.yahoo.com> Sorry about the false alarm re performance! After carefully looking at the numbers from the examples and versions etc, it turns out that the performance on Phi is lower than the host, which is understandable given the power and perf characteristics of single cores. As Wim had mentioned, the value of Phi will be visible if and when we can scale out. Yury, I meant the translated interpreter running on Phi v/s on the Xeon host, not translated v/s something else. The device id is 0x225d so it looks like it's a Knights Corner 3120A. Thanks, Ajit ==== On Tuesday, April 14, 2015 9:24 AM, Yury V. Zaytsev wrote: On Mon, 2015-04-13 at 21:56 +0000, Ajit Dingankar wrote: > Will post some results on performance; the translated interpreter > seems much faster on Phi but I don't quite understand why! (Not that > I'm complaining! ;-) Translated as compared to what ;-) ? If you compiled on the host with gcc (especially an older one) and now comparing this to the cross-compiled version with icc, this will be very unfair :-) Also, make sure you translate the same revision on both... I would be curious about recent results, and please be sure to mention which Phi are you using for this test. I haven't had access to any recent ones lately... Thanks, -- Sincerely yours, Yury V. Zaytsev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mikeckennedy at gmail.com Wed Apr 15 20:07:18 2015 From: mikeckennedy at gmail.com (Michael Kennedy) Date: Wed, 15 Apr 2015 18:07:18 +0000 Subject: [pypy-dev] Be on my podcast Message-ID: I'd love to have you guys on my podcast, Talk Python To Me. You can learn more here: http://www.talkpythontome.com/ Interested in being a guest? Or a couple of you even? Thanks! Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From van.lindberg at gmail.com Wed Apr 15 21:34:06 2015 From: van.lindberg at gmail.com (VanL) Date: Wed, 15 Apr 2015 14:34:06 -0500 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 Message-ID: Hi everyone, For the last little bit I have been working on porting the rpython toolchain to Python 3. My initial goal is to get either pypy2 or pypy3 to build with either pypy2 or pypy3. I had gotten the impression from some previous statements that these efforts would not be welcome, so I was doing my work in a private fork. After a few conversations at PyCon, though, I was encouraged to package some of these changes up and send them as a series of pull requests. A couple questions/thoughts: 1. I am happy to send the pull requests up using bitbucket. Rather than do a big dump, I will send up chunks that each address a particular issue across the entire codebase. Even if a PR touches a number of files, each PR will implement the same change so that correctness is easy to check. If these PRs are not wanted, let me know, and I will stop sending them up. 2. I am initially doing this work in a way that maintains 2/3 compatibility - my check before each major commit is whether I can still build pypy using pypy2. Would the pypy devs be willing to make building pypy be 2.7+ only? That way I could use __future__ imports to ease some of the porting. 3. I will likely vendor or require six before I am done. Let me know if this would likely be a problem. 4. 
At some point in the future, I plan on reworking the rpython toolchain in various ways - use python 3 function and type annotations so as to make the flow of types be easier to see, fully split out the rpython and non-rpython bits, etc. Again, I am happy to do this on my own, but will gladly contribute upstream if wanted. Thanks, Van -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan.lamy at gmail.com Thu Apr 16 06:51:30 2015 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Thu, 16 Apr 2015 05:51:30 +0100 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: Message-ID: <552F3FD2.9010601@gmail.com> Le 15/04/15 20:34, VanL a écrit : > Hi everyone, > > For the last little bit I have been working on porting the rpython > toolchain to Python 3. My initial goal is to get either pypy2 or pypy3 > to build with either pypy2 or pypy3. Porting rpython and porting pypy are different problems. I'm not sure it'll ever make sense for pypy to translate on more than one major version at any given time. Getting rpython tests to pass on any Python3 interpreter already seems like a daunting task to me. > I had gotten the impression from some previous statements that these > efforts would not be welcome, so I was doing my work in a private fork. > After a few conversations at PyCon, though, I was encouraged to package > some of these changes up and send them as a series of pull requests. Personally, I think it's something we'll have to do sooner or later, so I'm glad to hear that you're motivated to put some effort into it. > > A couple questions/thoughts: > > 1. I am happy to send the pull requests up using bitbucket. Rather than > do a big dump, I will send up chunks that each address a particular > issue across the entire codebase. Even if a PR touches a number of > files, each PR will implement the same change so that correctness is > easy to check. 
If these PRs are not wanted, let me know, and I will stop > sending them up. Sounds good. Cleaning up the code base by getting rid of outdated idioms and deprecated syntax would be a good thing in itself. I suggest you start with PRs that do just that. > 2. I am initially doing this work in a way that maintains 2/3 > compatibility - my check before each major commit is whether I can still > build pypy using pypy2. Would the pypy devs be willing to make building > pypy be 2.7+ only? That way I could use __future__ imports to ease some > of the porting. pypy is already 2.7 only. It's only rpython that still supports 2.6, probably (we have no CI for 2.6, so it's not even clear that it really works). I'm +1 for dropping it. > 3. I will likely vendor or require six before I am done. Let me know if > this would likely be a problem. We already vendor py.test and pylib, so adding six is not an issue. > 4. At some point in the future, I plan on reworking the rpython > toolchain in various ways - use python 3 function and type annotations > so as to make the flow of types be easier to see, fully split out the > rpython and non-rpython bits, etc. Again, I am happy to do this on my > own, but will gladly contribute upstream if wanted. Could you expand a bit? I'm not sure whether you want to improve the usability of RPython-the-language or the maintainability of rpython-the-toolchain. Anyway, both are useful goals, and contributions will be welcomed. > Thanks, > Van > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From fijall at gmail.com Thu Apr 16 09:08:18 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 16 Apr 2015 09:08:18 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <552F3FD2.9010601@gmail.com> References: <552F3FD2.9010601@gmail.com> Message-ID: > > > pypy is already 2.7 only. 
It's only rpython that still supports 2.6, > probably (we have no CI for 2.6, so it's not even clear that it really > works). I'm +1 for dropping it. RPython is also 2.7 only, we dropped the 2.6 support a while ago From fijall at gmail.com Thu Apr 16 10:48:28 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 16 Apr 2015 10:48:28 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: Message-ID: Hi Van. First of all I'm really sorry if we ever gave an impression that working on porting RPython to Python 3 would not be welcomed and I would like to strongly disagree with that. What we did say (or wanted to say) is that we're unlikely to put a significant effort into doing the porting ourselves, however reviewing the pull requests is definitely within the scope of the work someone will do (if they languish, feel free to poke me personally). As far as the points go, I'll respond inline. On Wed, Apr 15, 2015 at 9:34 PM, VanL wrote: > Hi everyone, > > For the last little bit I have been working on porting the rpython toolchain > to Python 3. My initial goal is to get either pypy2 or pypy3 to build with > either pypy2 or pypy3. > > I had gotten the impression from some previous statements that these efforts > would not be welcome, so I was doing my work in a private fork. After a few > conversations at PyCon, though, I was encouraged to package some of these > changes up and send them as a series of pull requests. > > A couple questions/thoughts: > > 1. I am happy to send the pull requests up using bitbucket. Rather than do a > big dump, I will send up chunks that each address a particular issue across > the entire codebase. Even if a PR touches a number of files, each PR will > implement the same change so that correctness is easy to check. If these PRs > are not wanted, let me know, and I will stop sending them up. That sounds very reasonable. > > 2. 
I am initially doing this work in a way that maintains 2/3 compatibility > - my check before each major commit is whether I can still build pypy using > pypy2. Would the pypy devs be willing to make building pypy be 2.7+ only? > That way I could use __future__ imports to ease some of the porting. Generally speaking the small changes are mostly a no-brainer for us. RPython is already 2.7 only. However, we generally want to avoid being Python 3 compatible as a major barrier, so things that complicate stuff need to be discussed first. One thing that we need to discuss is how to support unicode in RPython. Unicode-everywhere is definitely a model we would not like to pursue, you *have to* be able to use bytes efficiently and all over the place in RPython. Right now unicode support is a bit rudimentary and I would welcome a way to structure it better. I'm happy to discuss this (note that automatic conversion between unicode and bytes in rpython is illegal anyway) > > 3. I will likely vendor or require six before I am done. Let me know if this > would likely be a problem. As far as simple stuff goes from six (e.g. constants) this is a no-brainer, we can easily vendor it. > > 4. At some point in the future, I plan on reworking the rpython toolchain in > various ways - use python 3 function and type annotations so as to make the > flow of types be easier to see, fully split out the rpython and non-rpython > bits, etc. Again, I am happy to do this on my own, but will gladly > contribute upstream if wanted. We'll be happy to review what you are trying to achieve and happy to discuss ideas. Note that PyPy is by far not the only project using RPython and we would need to consider all important backwards-incompatible changes. 
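The rule that automatic bytes/unicode conversion is illegal in RPython means every crossing between the two types has to be spelled out. A minimal sketch of that discipline (illustrative only; this is ordinary Python that merely follows the rule, not RPython-checked code):

```python
def greet(name_bytes):
    # RPython-style discipline: bytes stay bytes, unicode stays unicode,
    # and every crossing is an explicit decode/encode call -- there is
    # no implicit coercion like CPython 2 performs.
    name = name_bytes.decode('utf-8')   # bytes -> unicode, explicit
    message = u'hello ' + name          # unicode-only concatenation
    return message.encode('utf-8')      # unicode -> bytes, explicit

print(greet(b'pypy'))
```

The same source runs unchanged on 2.7 and 3.x precisely because no line relies on implicit coercion.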
Cheers, fijal From amauryfa at gmail.com Thu Apr 16 12:40:27 2015 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 16 Apr 2015 12:40:27 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: Message-ID: 2015-04-16 10:48 GMT+02:00 Maciej Fijalkowski : > > > > 2. I am initially doing this work in a way that maintains 2/3 > compatibility > > - my check before each major commit is whether I can still build pypy > using > > pypy2. Would the pypy devs be willing to make building pypy be 2.7+ only? > > That way I could use __future__ imports to ease some of the porting. > > Generally speaking the small changes are mostly a no-brainer for us. > RPython is already 2.7 only. However, we generally want to avoid being > Python 3 compatible as a major barrier, so things that complicate > stuff need to be discussed first. One thing that we need to discuss is > how to support unicode in RPython. Unicode-everywhere is definitely a > model we would not like to pursue, you *have to* be able to use bytes > efficiently and all over the place in RPython. Right now unicode > support is a bit rudimentary and I would welcome a way to structure it > better. I'm happy to discuss this (note that automatic conversion > between unicode and bytes in rpython is illegal anyway) > I think *some* conversion should be allowed, for example when the unicode is a constant. (maybe with a SomeAsciiString annotation) Otherwise, do we need to rewrite all calls like `space.call_method(w_x, "split")`? Another issue will be the int/long distinction. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fijall at gmail.com Thu Apr 16 14:02:14 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 16 Apr 2015 14:02:14 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: Message-ID: On Thu, Apr 16, 2015 at 12:40 PM, Amaury Forgeot d'Arc wrote: > > 2015-04-16 10:48 GMT+02:00 Maciej Fijalkowski : >> >> > >> > 2. I am initially doing this work in a way that maintains 2/3 >> > compatibility >> > - my check before each major commit is whether I can still build pypy >> > using >> > pypy2. Would the pypy devs be willing to make building pypy be 2.7+ >> > only? >> > That way I could use __future__ imports to ease some of the porting. >> >> Generally speaking the small changes are mostly a no-brainer for us. >> RPython is already 2.7 only. However, we generally want to avoid being >> Python 3 compatible as a major barrier, so things that complicate >> stuff need to be discussed first. One thing that we need to discuss is >> how to support unicode in RPython. Unicode-everywhere is definitely a >> model we would not like to pursue, you *have to* be able to use bytes >> efficiently and all over the place in RPython. Right now unicode >> support is a bit rudimentary and I would welcome a way to structure it >> better. I'm happy to discuss this (note that automatic conversion >> between unicode and bytes in rpython is illegal anyway) > > > I think *some* conversion should be allowed, for example when the unicode is > a constant. > (maybe with a SomeAsciiString annotation) > Otherwise, do we need to rewrite all calls like `space.call_method(w_x, > "split")`? > > Another issue will be the int/long distinction. Wait, what? You're messing two things. 
If you want to convert, you can always call encode/decode From van.lindberg at gmail.com Thu Apr 16 15:00:05 2015 From: van.lindberg at gmail.com (VanL) Date: Thu, 16 Apr 2015 08:00:05 -0500 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <552F3FD2.9010601@gmail.com> References: <552F3FD2.9010601@gmail.com> Message-ID: On Wed, Apr 15, 2015 at 11:51 PM, Ronan Lamy wrote: > 4. At some point in the future, I plan on reworking the rpython >> toolchain in various ways - use python 3 function and type annotations >> so as to make the flow of types be easier to see, fully split out the >> rpython and non-rpython bits, etc. Again, I am happy to do this on my >> own, but will gladly contribute upstream if wanted. >> > > Could you expand a bit? I'm not sure whether you want to improve the > usability of RPython-the-language or the maintainability of > rpython-the-toolchain. Anyway, both are useful goals, and contributions > will be welcomed. > Well, my long term interest is in an evolution of RPython-the-language, but to get there I want to improve the maintainability of Rpython-the-toolchain. So both. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Thu Apr 16 15:24:58 2015 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 16 Apr 2015 15:24:58 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: Message-ID: 2015-04-16 14:02 GMT+02:00 Maciej Fijalkowski : > > I think *some* conversion should be allowed, for example when the > unicode is > > a constant. > > (maybe with a SomeAsciiString annotation) > > Otherwise, do we need to rewrite all calls like `space.call_method(w_x, > > "split")`? > > > > Another issue will be the int/long distinction. > > Wait, what? You're messing two things. If you want to convert, you can > always call encode/decode Should space.call_method() take a bytes string or a unicode string? 
And can we have a unique spelling that covers all cases? python{2|3} translating a pypy{2|3} interpreter -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From van.lindberg at gmail.com Thu Apr 16 15:55:35 2015 From: van.lindberg at gmail.com (VanL) Date: Thu, 16 Apr 2015 08:55:35 -0500 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: Message-ID: Hi Maciej, On Thu, Apr 16, 2015 at 3:48 AM, Maciej Fijalkowski wrote: > Hi Van. > > First of all I'm really sorry if we ever gave an impression that > working on porting RPython to Python 3 would not be welcomed and I > would like to strongly disagree with that. > > What we did say (or wanted to say) is that we're unlikely to put a > significant effort into doing the porting ourselves, however reviewing > the pull requests is definitely within the scope of the work someone > will do (if they languish, feel free to poke me personally). > That is good to hear, and consistent with the conversations I had at PyCon. I wasn't discouraged from doing the work (as you can see from this thread), but I just figured I needed to do it in a private fork that wouldn't be accepted upstream. With this encouragement, you will start to see PRs from me. So it might be useful to explain what I am about, as that will give context for what I am doing. I am creating a language which I call spy, for "sub-python," that is a strict semantic and syntactic subset of Python 3 that is AOT-compilable. You can think of this as an evolved version of rpython, but in Python 3, fully specified, and with a slightly different compilation model. Why? I have lots of reasons, but the initial seed of the idea came out of following a number of the "compile Python" projects over the years. 
What I determined was that, regardless of the approach used, there was a ceiling that each project reached *at almost exactly the same place.* For example, the subset of Python that can be handled in rpython, shedskin, and starkiller is almost feature-for-feature identical, despite the fact that each of these projects used a different approach. In conversations with the Numba folks, they are approaching this same boundary. Thus, there is a natural AOT subset of Python... that isn't too far from full Python. What's more, you all - the PyPy project - implemented full Python in this subset. A few Q&As: Why this project: For fun. But I also have professional interest in a couple things that I think this would fix. First, deployment. There are some applications that it would *really* help me to be able to drop a binary on a server and go. AOT compilation gives me that, if I am willing to restrict myself to the spy subset. Second, libraries loadable in either cPython, PyPy, or even Jython (all via cffi). Third, I have an interest in mobile, where I believe that this approach would work better (it is similar to what Unity does, for example). Why Py3?: I like Py3 better. I want to use function annotations to provide information to the inferencing process. The annotations would provide new roots for the type inferencing process - and importantly, would allow me to stop that process at module boundaries efficiently. Type inferencing? The types in typing.py don't work for us: Yes, but we don't need to be restricted to those types only. There is no reason not to declare the types that we need - for example to have UInt32 as a possible type in a function annotation. This allows us to get rid of a fair amount of "noise" in the rpython implementation (or at least to sequester it better). What do you mean fully specified? Well, I want to have a spec as to what is within this AOT subset. 
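An annotated, exportable function in the sense described above might look roughly like this. A sketch under stated assumptions: width-specific types such as UInt32 and the module-boundary checker do not exist yet, so plain int stands in here, and the snippet runs on any Python 3:

```python
__all__ = ['checksum']

def checksum(data: bytes, seed: int) -> int:
    # The annotations give the inferencer fixed "roots" for the exported
    # function, so type inference could stop at the module boundary
    # instead of flowing through the whole program.
    acc = seed
    for byte in data:  # iterating over bytes yields ints in Python 3
        acc = (acc * 31 + byte) % (1 << 32)
    return acc

print(checksum(b'pypy', 0))
```

Only `checksum` appears in `__all__`, so under the proposed model it would be the one symbol other modules may see, and the one whose signature must be fully annotated.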
I think I can even detect unsupported language constructs on import and throw a SyntaxError (or whichever error is appropriate) if a non-AOT feature is used. If someone figures out how to AOT compile a new feature, then it is added to the subset. I want to allow for different implementations. What changes in the compilation model? One big one: Be able to effectively do type inferencing on a smaller piece of the program than "the whole program." I would like to stop/start at either function boundaries or at module boundaries. Declaring appropriate type information would let me do that. As for modules, I would just require that anything in __all__ be annotated. Only functions in __all__ would be exported, and it would be an error to access anything else. From a pypy/rpython perspective, this would allow builtin modules (like sre) to be separately compiled and not have to recompile the world when a change was made somewhere else. Why bother with rpython at all? It seemed the fastest way to get to where I wanted to go: 1. Port rpython to Py3. 2. Reuse the PyPy3 target, but strip out everything that can't be AOT compiled. 3. Evolve the rpython syntax to py3/spy, make it nice, doc it, start work on new compilation model, etc. Thanks, Van -------------- next part -------------- An HTML attachment was scrubbed... URL: From van.lindberg at gmail.com Thu Apr 16 16:27:59 2015 From: van.lindberg at gmail.com (VanL) Date: Thu, 16 Apr 2015 09:27:59 -0500 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: Message-ID: Just bundled up a few of the more mechanical changes into a PR and sent it upstream. On Thu, Apr 16, 2015 at 8:55 AM, VanL wrote: > Hi Maciej, > > On Thu, Apr 16, 2015 at 3:48 AM, Maciej Fijalkowski > wrote: > >> Hi Van. >> >> First of all I'm really sorry if we ever gave an impression that >> working on porting RPython to Python 3 would not be welcomed and I >> would like to strongly disagree with that. 
>> >> What we did say (or wanted to say) is that we're unlikely to put a >> significant effort into doing the porting ourselves, however reviewing >> the pull requests is definitely within the scope of the work someone >> will do (if they languish, feel free to poke me personally). >> > > That is good to hear, and consistent with the conversations I had at > PyCon. I wasn't discouraged from doing the work (as you can see from this > thread), but I just figured I needed to do it in a private fork that > wouldn't be accepted upstream. With this encouragement, you will start to > see PRs from me. > > So it might be useful to explain what I am about, as that will give > context for what I am doing. > > I am creating a language which I call spy, for "sub-python," that is a > strict semantic and syntactic subset of Python 3 that is AOT-compilable. > You can think of this as an evolved version of rpython, but in Python 3, > fully specified, and with a slightly different compilation model. > > Why? I have lots of reasons, but the initial seed of the idea came out of > following a number of the "compile Python" projects over the years. What I > determined was that, regardless of the approach used, there was a ceiling > that each project reached *at almost exactly the same place.* For example, > the subset of Python that can be handled in rpython, shedskin, and > starkiller is almost feature-for-feature identical, despite the fact that > each of these projects used a different approach. In conversations with the > Numba folks, they are approaching this same boundary. Thus, there is a > natural AOT subset of Python... that isn't too far from full Python. What's > more, you all - the PyPy project - implemented full Python in this subset. > > A few Q&As: > > Why this project: For fun. But I also have professional interest in a > couple things that I think this would fix. > > First, deployment. 
There are some applications that it would *really* help > me to be able to drop a binary on a server and go. AOT compilation gives me > that, if I am willing to restrict myself to the spy subset. > > Second, libraries loadable in either cPython, PyPy, or even Jython (all > via cffi). > > Third, I have an interest in mobile, where I believe that this approach > would work better (it is similar to what Unity does, for example). > > Why Py3?: I like Py3 better. I want to use function annotations to > provide information to the inferencing process. The annotations would > provide new roots for a the type inferencing process - and importantly, > would allow me to stop that process at module boundaries efficiently. > > Type inferencing? The types in typing.py don't work for us: Yes, but we > don't need to be restricted to those types only. There is no reason not to > declare the types that we need - for example to have UInt32 as a possible > type in a function annotation. This allows us to get rid of a fair amount > of "noise" is the rpython implementation (or at least to sequester it > better). > > What do you mean fully specified? Well, I want to have a spec as to what > is within this AOT subset. I think I can even detect unsupported language > constructs on import and throw a SyntaxError (or whichever error is > appropriate) if a non-AOT feature is used. If someone figures out how to > AOT compile a new feature, then it is added to the subset. I want to allow > for different implementations. > > What changes in the compilation model? One big one: Be able to > effectively do type inferencing on a smaller piece of the program than "the > whole program." I would like to stop/start at either function boundaries or > at module boundaries. Declaring appropriate type information would let me > do that. > > As for modules, I would just require that anything in __all__ be > annotated. 
Only functions in __all__ would be > exported, and it would be an > error to access anything else. From a pypy/rpython perspective, this would > allow builtin modules (like sre) to be separately compiled and not have to > recompile the world when a change was made somewhere else. > > Why bother with rpython at all? It seemed the fastest way to get to > where I wanted to go: > > 1. Port rpython to Py3. > 2. Reuse the PyPy3 target, but strip out everything that can't be AOT > compiled. > 3. Evolve the rpython syntax to py3/spy, make it nice, doc it, start work > on new compilation model, etc. > > Thanks, > Van > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan.lamy at gmail.com Thu Apr 16 18:39:37 2015 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Thu, 16 Apr 2015 17:39:37 +0100 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: Message-ID: <552FE5C9.8070001@gmail.com> Le 16/04/15 14:55, VanL a écrit : > Hi Maciej, > > On Thu, Apr 16, 2015 at 3:48 AM, Maciej Fijalkowski > wrote: > > Hi Van. > > First of all I'm really sorry if we ever gave an impression that > working on porting RPython to Python 3 would not be welcomed and I > would like to strongly disagree with that. > > What we did say (or wanted to say) is that we're unlikely to put a > significant effort into doing the porting ourselves, however reviewing > the pull requests is definitely within the scope of the work someone > will do (if they languish, feel free to poke me personally). > > > That is good to hear, and consistent with the conversations I had at > PyCon. I wasn't discouraged from doing the work (as you can see from > this thread), but I just figured I needed to do it in a private fork > that wouldn't be accepted upstream. With this encouragement, you will > start to see PRs from me. > > So it might be useful to explain what I am about, as that will give > context for what I am doing. 
> > I am creating a language which I call spy, for "sub-python," that is a > strict semantic and syntactic subset of Python 3 that is AOT-compilable. > You can think of this as an evolved version of rpython, but in Python 3, > fully specified, and with a slightly different compilation model. > > Why? I have lots of reasons, but the initial seed of the idea came out > of following a number of the "compile Python" projects over the years. > What I determined was that, regardless of the approach used, there was a > ceiling that each project reached *at almost exactly the same place.* > For example, the subset of Python that can be handled in rpython, > shedskin, and starkiller is almost feature-for-feature identical, > despite the fact that each of these projects used a different approach. > In conversations with the Numba folks, they are approaching this same > boundary. Thus, there is a natural AOT subset of Python... that isn't > too far from full Python. What's more, you all - the PyPy project - > implemented full Python in this subset. That's an interesting goal. Using RPython to create a standardised AOT subset of Python is something I've had in the back of my mind for quite a while. > A few Q&As: > > Why this project: For fun. But I also have professional interest in a > couple things that I think this would fix. > > First, deployment. There are some applications that it would *really* > help me to be able to drop a binary on a server and go. AOT compilation > gives me that, if I am willing to restrict myself to the spy subset. You can basically do that today with RPython - provided you don't care too much about the size of the binary. > > Second, libraries loadable in either cPython, PyPy, or even Jython (all > via cffi). RPython can almost do that already. All that's needed is a nice way of specifying the interface and a bit of tooling to generate cffi bindings from that. 
> Third, I have an interest in mobile, where I believe that this approach > would work better (it is similar to what Unity does, for example). > > Why Py3?: I like Py3 better. I want to use function annotations to > provide information to the inferencing process. The annotations would > provide new roots for the type inferencing process - and importantly, > would allow me to stop that process at module boundaries efficiently. Syntax aside, it's already in RPython, cf. rpython.rlib.objectmodel.enforceargs and rpython.rlib.signature (which sadly are incompatible with each other). > Type inferencing? The types in typing.py don't work for us: Yes, but we > don't need to be restricted to those types only. There is no reason not > to declare the types that we need - for example to have UInt32 as a > possible type in a function annotation. This allows us to get rid of a > fair amount of "noise" in the rpython implementation (or at least to > sequester it better). Unless you want to get rid of type inferencing altogether (or strictly restrict it to function locals, perhaps), that noise isn't going away. > What do you mean fully specified? Well, I want to have a spec as to > what is within this AOT subset. I think I can even detect unsupported > language constructs on import and throw a SyntaxError (or whichever > error is appropriate) if a non-AOT feature is used. If someone figures > out how to AOT compile a new feature, then it is added to the subset. I > want to allow for different implementations. Deciding whether a program is valid RPython requires full typing information. It's always going to be rather more complicated than validating syntax, but I guess it goes with the AOT territory. Anyway, if you wanted to create a spec for RPython as it exists now, you could take a note of which bytecodes are unsupported and grep for AnnotatorError. > > What changes in the compilation model? 
One big one: Be able to > effectively do type inferencing on a smaller piece of the program than > "the whole program." I would like to stop/start at either function > boundaries or at module boundaries. Declaring appropriate type > information would let me do that. Removing whole-program type inferencing would change the character of the language a lot. In addition to function signatures, you'd also need to declare types for global constants, class and instance attributes, ... > As for modules, I would just require that anything in __all__ be > annotated. Only functions in __all__ would be exported, and it would be > an error to access anything else. From a pypy/rpython perspective, this > would allow builtin modules (like sre) to be separately compiled and not > have to recompile the world when a change was made somewhere else. Well, separate compilation has long been a wanted feature for RPython. Whole-program inferencing is a major roadblock, so you might have more luck implementing this, but it's not the only obstacle. > Why bother with rpython at all? It seemed the fastest way to get to > where I wanted to go: > > 1. Port rpython to Py3. > 2. Reuse the PyPy3 target, but strip out everything that can't be AOT > compiled. I don't understand how this fits with the rest of your plans. By definition, PyPy3 will be able to run spy, so why do you need your own interpreter? > 3. Evolve the rpython syntax to py3/spy, make it nice, doc it, start > work on new compilation model, etc. 
> > Thanks, > Van > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From van.lindberg at gmail.com Thu Apr 16 20:15:08 2015 From: van.lindberg at gmail.com (VanL) Date: Thu, 16 Apr 2015 13:15:08 -0500 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <552FE5C9.8070001@gmail.com> References: <552FE5C9.8070001@gmail.com> Message-ID: A quick overall response: I know that a lot of what I am talking about *is possible* using RPython. That is one reason why I am starting where I am. That doesn't necessarily make it easy (or as easy as it could be). On Thu, Apr 16, 2015 at 11:39 AM, Ronan Lamy wrote: > Why Py3?: I like Py3 better. I want to use function annotations to >> provide information to the inferencing process. The annotations would >> provide new roots for a the type inferencing process - and importantly, >> would allow me to stop that process at module boundaries efficiently. >> > > > Syntax aside, it's already in RPython, cf. > rpython.rlib.objectmodel.enforceargs and rpython.rlib.signature (which > sadly are incompatible with each other). > > Type inferencing? The types in typing.py don't work for us: Yes, but we >> don't need to be restricted to those types only. There is no reason not >> to declare the types that we need - for example to have UInt32 as a >> possible type in a function annotation. This allows us to get rid of a >> fair amount of "noise" is the rpython implementation (or at least to >> sequester it better). >> > > Unless you want to get rid of type inferencing altogether (or strictly > restrict it to function locals, perhaps), that noise isn't going away. > [snip] What changes in the compilation model? One big one: Be able to >> effectively do type inferencing on a smaller piece of the program than >> "the whole program." I would like to stop/start at either function >> boundaries or at module boundaries. 
Declaring appropriate type >> information would let me do that. >> > > Removing whole-program type inferencing would change the character of the > language a lot. In addition to function signatures, you'd also need to > declare types for global constants, class and instance attributes, ... > I don't want to get rid of whole-program type inferencing. I just want to be able to define a subset and declare that "this is the whole program" for purposes of an inferencing pass. I do know that means that sometimes types will become non-inferenceable. That is where explicit function/type annotation would allow me to do so. For a silly example, def add(x, y): return x + y is not generally type inferenceable in Python. But def add(x:UInt32, y:UInt32): return x + y is. (Putting aside overflow for a second). > > I don't understand how this fits with the rest of your plans. By > definition, PyPy3 will be able to run spy, so why do you need your own > interpreter? > > I want my spy interpreter to run only spy. Python 3 (whether CPython3 or PyPy3) would also be able to run .spy files, but having a nice repl (without incurring the double interpretation cost) would be good. -------------- next part -------------- An HTML attachment was scrubbed... URL: From van.lindberg at gmail.com Fri Apr 17 17:58:24 2015 From: van.lindberg at gmail.com (VanL) Date: Fri, 17 Apr 2015 10:58:24 -0500 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> Message-ID: A question came up in the discussion of a pull request: What is the allowable scope? I propose pypy/ and rpython/ as those are fairly intertwined. Comments? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alex.gaynor at gmail.com Fri Apr 17 18:20:43 2015 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Fri, 17 Apr 2015 12:20:43 -0400 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> Message-ID: Where possible (e.g. syntax changes), I'd love to constrain the scope as much as possible. It's MUCH easier to review 100 20-line pull requests than it is to review a 2000-line PR. Alex On Fri, Apr 17, 2015 at 11:58 AM, VanL wrote: > A question came up in the discussion of a pull request: What is the > allowable scope? I propose pypy/ and rpython/ as those are fairly > intertwined. > > Comments? > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Apr 17 18:45:10 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 17 Apr 2015 18:45:10 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> Message-ID: rpython/ and pypy/ should not be intertwined. In fact we're putting effort into making them two separate projects. On Fri, Apr 17, 2015 at 5:58 PM, VanL wrote: > A question came up in the discussion of a pull request: What is the > allowable scope? I propose pypy/ and rpython/ as those are fairly > intertwined. > > Comments? 
> > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From van.lindberg at gmail.com Fri Apr 17 18:58:36 2015 From: van.lindberg at gmail.com (VanL) Date: Fri, 17 Apr 2015 11:58:36 -0500 Subject: [pypy-dev] Can someone explain __extend__? Message-ID: I am having some trouble wrapping my head around it. Reading through rpython/tools/pairtype.py, it looks like it could be one or more of a number of things: - An implementation of javascript-style prototypes. (The similarity: you don't subclass an object in js - you use the base object as a prototype and extend it with new functionality) - A way to do specialization and automatic dispatching on types so that a+b works (both "a" and "b" know what they are, and whether they are compatible with each other in an __add__/__radd__ sense, and what type should be returned as a result of that call) - Sort of a first draft of ABCs, allowing composition and type buildup without explicit inheritance (roughly, __extend__ is similar to ABC.register) - Other? Thanks, Van -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Apr 17 19:00:27 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 17 Apr 2015 19:00:27 +0200 Subject: [pypy-dev] Can someone explain __extend__? In-Reply-To: References: Message-ID: I suggest IRC for such questions Generally __extend__ extends an existing class (so just adds methods). __extend__(pairtype(...)) is an implementation of double-dispatch multimethods On Fri, Apr 17, 2015 at 6:58 PM, VanL wrote: > I am having some trouble wrapping my head around it. Reading through > rpython/tools/pairtype.py, it looks like it could be one or more of a number > of things: > > - An implementation of javascript-style prototypes. 
(The similarity: you > don't subclass an object in js - you use the base object as a prototype and > extend it with new functionality) > > - A way to do specialization and automatic dispatching on types so that a+b > works (both "a" and "b" know what they are, and whether they are compatible > with each other in an __add__/__radd__ sense, and what type should be > returned as a result of that call) > > - Sort of a first draft of ABCs, allowing composition and type buildup > without explicit inheritance (roughly, __extend__ is similar to > ABC.register) > > - Other? > > Thanks, > Van > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From ronan.lamy at gmail.com Fri Apr 17 19:15:27 2015 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Fri, 17 Apr 2015 18:15:27 +0100 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> Message-ID: <55313FAF.4090903@gmail.com> Le 17/04/15 16:58, VanL a écrit : > A question came up in the discussion of a pull request: What is the > allowable scope? I propose pypy/ and rpython/ as those are fairly > intertwined. > > Comments? You've stated that your goal is to allow the building of pypy[2|3] with pypy[2|3], but that requires several different steps: 1. Make it possible for Python 3 to run the RPython toolchain. 2. Make the RPython toolchain work on 2+3 mixed-mode code bases. 3. Port the interpreter to 2+3-compatible code. Trying to work on task 3. before resolving its prerequisites is very likely to be inefficient and generate unnecessary friction. For now, I think you should only work on 1. (which is rather big already). This basically means modifying only rpython/. 
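[Editorial note: to make Maciej's earlier answer about __extend__ concrete — the core trick in pairtype.py is a metaclass whose class statement injects methods into an existing class instead of creating a new one. Below is a simplified, Python-3-flavored sketch of that idea only; it is not the actual RPython code, which is Python 2 and also implements the pairtype() double-dispatch machinery.]

```python
# Simplified sketch of the __extend__ idea: when a class statement is
# named '__extend__', its body's methods are copied onto the listed base
# class(es) and no new class object is created.

class ExtendableType(type):
    def __new__(meta, name, bases, attrs):
        if name == '__extend__':
            for base in bases:
                for key, value in attrs.items():
                    # skip bookkeeping entries the class statement adds
                    if not (key.startswith('__') and key.endswith('__')):
                        setattr(base, key, value)
            return None  # no new class is created
        return super().__new__(meta, name, bases, attrs)

class Widget(metaclass=ExtendableType):
    pass

# Later -- possibly in a different module -- Widget grows a new method:
class __extend__(Widget):
    def greet(self):
        return "hello from an extension"

print(Widget().greet())  # -> hello from an extension
```

In the real pairtype.py, passing `pairtype(A, B)` as the base makes the injected methods dispatch on a *pair* of types, which is the double-dispatch part Maciej mentions.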
From arigo at tunes.org Fri Apr 17 20:15:16 2015 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Apr 2015 20:15:16 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <55313FAF.4090903@gmail.com> References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> Message-ID: Hi, I have kept quiet on this issue, but I'd like to mention that I'm not looking forward at all --but would accept it anyway if others deemed it a good idea-- to have to write all my code in all of "rpython/" in the restricted style of 2+3 mixed-mode code bases. This might create a source of friction if me and other core devs are not sold to the idea: we'll keep writing new code in the 2.7-only style, or even accidentally refactor some pieces of code to a more canonical 2.7 style. Then if we get ready to set up a buildbot to run all the tests with Python 3.x, and (more importantly) if we have people that are dedicated to fixing failures shown only by that buildbot, then this will introduce many conflicts with freshly-written and actively-edited code. General unhappiness will follow. The only reasonable way I can see for this would really be for all devs to write 2+3 mixed code in the first place, hence my position: please convince me that it's worth it. :-) My position about the Python 2.x/3.x issue is that I'm extremely happy to deal with a *frozen* language when writing interpreters. It avoids a lot of maintenance cost to keep track of the latest version of Python all the time, and this job of keeping track is imho pointless in this specific context. I'm not saying "Python 3 is bad" in general! But I'm saying "Python 3.x has no benefit for us, and it has several issues." These issues include the fact that it's not frozen. Another one would be the fact that 3.x is more unicode-oriented: it plays against us for writing interpreters for languages that have different ways to support unicode than what (R)Python has, e.g. utf8-everywhere, or 2- versus 4-bytes chars, etc. 
A bient?t, Armin. From van.lindberg at gmail.com Fri Apr 17 23:50:09 2015 From: van.lindberg at gmail.com (VanL) Date: Fri, 17 Apr 2015 16:50:09 -0500 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> Message-ID: Hi Armin, I am not trying to force you (or anyone) to use Py3. I have been working on this in a private branch for a little bit, and I am happy to continue to do so. As I said earlier in the thread, I had gotten the impression that these changes would not make you or the other PyPy devs happy, so I wasn't going to submit them upstream. As I said in another place, just let me know what it is that you want; among my goals is to *not bother you all.* As for the "restricted style" - well, I don't want that either. My goal would be to move 100% over to Py3 syntax. The restricted style is just a stepping stone so that stuff wouldn't stop working during the switch. I'll step out of this conversation, as I am not here to convince you and the other PyPy devs to do Py3 or not do Py3. I'll just watch and go along with whatever you all decide. Thanks, Van -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Apr 18 11:02:05 2015 From: arigo at tunes.org (Armin Rigo) Date: Sat, 18 Apr 2015 11:02:05 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> Message-ID: Hi VanL, On 17 April 2015 at 23:50, VanL wrote: > I am not trying to force you (or anyone) to use Py3. I have been working on > this in a private branch for a little bit, and I am happy to continue to do > so. As I said earlier in the thread, I had gotten the impression that these > changes would not make you or the other PyPy devs happy, so I wasn't going > to submit them upstream. 
As I said in another place, just let me know what > it is that you want; among my goals is to *not bother you all.* > > As for the "restricted style" - well, I don't want that either. My goal > would be to move 100% over to Py3 syntax. The restricted style is just a > stepping stone so that stuff wouldn't stop working during the switch. I would imagine that a better way would be to not care about restricted style at all. If we really decide to move to Python 3, then maybe we should drop 2.7 altogether and all do one sprint whose goal is to fully switch to Python 3.N (both "default" and the major branches open at the time). It would be a documented move that occurs at some date --- I imagine this to be in the "far future", say when Python 3 is becoming dominant over Python 2. As I said I'm not strictly opposed to such a move: even though I think it brings us little, it might be unavoidable in the long run. At some point it would even be likely that 3rd-party users of RPython would complain seriously. What I'm not too sure about is the real point of starting to port some things to mixed 2/3 style now, with core devs continuing to work in 2-only style. You're making a huge diff from "default", but then continuing changes from us will constantly conflict, which makes maintaining the branch (or fork) a horrible job. You're likely to give up well before we finally decide to switch, and then it will be easier to restart from scratch anyway... Finally, all these general remarks don't really apply to some style clean-ups you can propose pull requests for. For example, the "remove all argument tuple unpacking" pull request is fine: even if it wouldn't fix all *future* tuple unpackings we're likely to re-add, it will still reduce a lot the number of them left at the time of the hypothetical big switch. At least that's how I view things :-) A bientôt, Armin. 
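[Editorial note: for readers unfamiliar with "argument tuple unpacking" — it is a Python 2-only syntax that PEP 3113 removed from Python 3, which is why rewriting it is a safe, 2+3-compatible cleanup. The function below is an illustrative example, not taken from the PyPy source:]

```python
# Python 2 allowed unpacking a tuple directly in a function signature;
# this is a SyntaxError on Python 3 (removed by PEP 3113):
#
#   def distance2((x1, y1), (x2, y2)):
#       return (x2 - x1) ** 2 + (y2 - y1) ** 2

# Equivalent form, valid on both Python 2 and Python 3 -- the unpacking
# moves from the signature into the function body:
def distance2(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return (x2 - x1) ** 2 + (y2 - y1) ** 2

print(distance2((0, 0), (3, 4)))  # -> 25
```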
From ronan.lamy at gmail.com Sun Apr 19 19:49:14 2015 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Sun, 19 Apr 2015 18:49:14 +0100 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> Message-ID: <5533EA9A.9010608@gmail.com> Le 18/04/15 10:02, Armin Rigo a ?crit : > Hi VanL, > > On 17 April 2015 at 23:50, VanL wrote: >> I am not trying to force you (or anyone) to use Py3. I have been working on >> this in a private branch for a little bit, and I am happy to continue to do >> so. As I said earlier in the thread, I had gotten the impression that these >> changes would not make you or the other PyPy devs happy, so I wasn't going >> to submit them upstream. As I said in another place, just let me know what >> it is that you want; among my goals is to *not bother you all.* >> >> As for the "restricted style" - well, I don't want that either. My goal >> would be to move 100% over to Py3 syntax. The restricted style is just a >> stepping stone so that stuff wouldn't stop working during the switch. > > I would imagine that a better way would be to not care about > restricted style at all. If we really decide to move to Python 3, > then maybe we should drop 2.7 altogether and all do one sprint whose > goal is to fully switch to Python 3.N (both "default" and the major > branches open at the time). It would be a documented move that occurs > at some date --- I imagine this to be in the "far future", say when > Python 3 is becoming dominant over Python 2. The "big bang model" is fine for pypy, but I don't think it works for rpython. We should not ask our users to upgrade all at the same time. Besides, it would be a good idea to let smaller and more experimental interpreters iron out the bugs with the transition before doing it to pypy. So there has to be a transition period where rpython works on 2 and 3. 
> As I said I'm not strictly opposed to such a move: even > though I think it brings us little, it might be unavoidable in the > long run. At some point it would even be likely that 3rd-party users > of RPython would to complain seriously. > > What I'm not too sure about is the real point of starting to port some > things to mixed 2/3 style now, with core devs continuing to work in 2-only > style. You're making a huge diff from "default", but then continuing > changes from us will constantly conflict, which makes maintaining the > branch (or fork) a horrible job. You're likely to give up well before > we finally decide to switch, and then it will be easier to restart > from scratch anyway... Well, I think that the only sane way to port something as big as RPython is to do it incrementally - by getting tests to pass on 3 one subpackage at a time. The parts that are ported will have to be written in mixed 2/3 style, but having tests will prevent regressions in Python 3 compatibility: I don't see why it would be harder than maintaining compatibility with "obscure" platforms such as Windows. Another advantage of working incrementally is that it avoids huge diffs that bitrot very quickly. I'd rather see changes that are justified by some concrete goal (e.g. "get rpython.foo.bar to import") and touch only one or two subdirectories than attempts to blindly fix things everywhere. > Finally, all these general remarks don't really apply to some style > clean-ups you can propose pull requests for. For example, the "remove > all argument tuple unpacking" pull request is fine: even if it > wouldn't fix all *future* tuple unpackings we're likely to re-add, it > will still reduce a lot the number of them left at the time of the > hypothetical big switch. +1. As an exception to what I said above, such changes are fine, provided that they're safe and that they could be justified on code quality grounds alone. > At least that's how I view things :-) > > > A bient?t, > > Armin. 
> From fijall at gmail.com Mon Apr 20 10:05:58 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 20 Apr 2015 10:05:58 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <5533EA9A.9010608@gmail.com> References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> Message-ID: Changing topic a bit. For what is worth, my pet peeve right now is to make pypy.tool.gdb_pypy py2/3 compatible, it would be terrific if that can happen as a first step. This is one place where we NEED TO make this happen and despite 2 or 3 attempts I completely failed at that. New GDB ships with python 3 and it breaks the extension (it imports and then it fails to work). This file is not RPython. This is one place where we would accept the port without any complaints (and the file has to stay 2/3 compatible for the forseable future) Cheers, fijal On Sun, Apr 19, 2015 at 7:49 PM, Ronan Lamy wrote: > Le 18/04/15 10:02, Armin Rigo a ?crit : >> >> Hi VanL, >> >> On 17 April 2015 at 23:50, VanL wrote: >>> >>> I am not trying to force you (or anyone) to use Py3. I have been working >>> on >>> this in a private branch for a little bit, and I am happy to continue to >>> do >>> so. As I said earlier in the thread, I had gotten the impression that >>> these >>> changes would not make you or the other PyPy devs happy, so I wasn't >>> going >>> to submit them upstream. As I said in another place, just let me know >>> what >>> it is that you want; among my goals is to *not bother you all.* >>> >>> As for the "restricted style" - well, I don't want that either. My goal >>> would be to move 100% over to Py3 syntax. The restricted style is just a >>> stepping stone so that stuff wouldn't stop working during the switch. >> >> >> I would imagine that a better way would be to not care about >> restricted style at all. 
If we really decide to move to Python 3, >> then maybe we should drop 2.7 altogether and all do one sprint whose >> goal is to fully switch to Python 3.N (both "default" and the major >> branches open at the time). It would be a documented move that occurs >> at some date --- I imagine this to be in the "far future", say when >> Python 3 is becoming dominant over Python 2. > > > The "big bang model" is fine for pypy, but I don't think it works for > rpython. We should not ask our users to upgrade all at the same time. > Besides, it would be a good idea to let smaller and more experimental > interpreters iron out the bugs with the transition before doing it to pypy. > So there has to be a transition period where rpython works on 2 and 3. > >> As I said I'm not strictly opposed to such a move: even >> though I think it brings us little, it might be unavoidable in the >> long run. At some point it would even be likely that 3rd-party users >> of RPython would to complain seriously. >> >> What I'm not too sure about is the real point of starting to port some >> things to mixed 2/3 style now, with core devs continuing to work in 2-only >> style. You're making a huge diff from "default", but then continuing >> changes from us will constantly conflict, which makes maintaining the >> branch (or fork) a horrible job. You're likely to give up well before >> we finally decide to switch, and then it will be easier to restart >> from scratch anyway... > > > Well, I think that the only sane way to port something as big as RPython is > to do it incrementally - by getting tests to pass on 3 one subpackage at a > time. The parts that are ported will have to be written in mixed 2/3 style, > but having tests will prevent regressions in Python 3 compatibility: I don't > see why it would be harder than maintaining compatibility with "obscure" > platforms such as Windows. > > Another advantage of working incrementally is that it avoids huge diffs that > bitrot very quickly. 
I'd rather see changes that are justified by some > concrete goal (e.g. "get rpython.foo.bar to import") and touch only one or > two subdirectories than attempts to blindly fix things everywhere. > >> Finally, all these general remarks don't really apply to some style >> clean-ups you can propose pull requests for. For example, the "remove >> all argument tuple unpacking" pull request is fine: even if it >> wouldn't fix all *future* tuple unpackings we're likely to re-add, it >> will still reduce a lot the number of them left at the time of the >> hypothetical big switch. > > > +1. As an exception to what I said above, such changes are fine, provided > that they're safe and that they could be justified on code quality grounds > alone. > > >> At least that's how I view things :-) >> >> >> A bient?t, >> >> Armin. >> > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Mon Apr 20 10:15:52 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 20 Apr 2015 10:15:52 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <5533EA9A.9010608@gmail.com> References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> Message-ID: Hi Ronan, On 19 April 2015 at 19:49, Ronan Lamy wrote: > Well, I think that the only sane way to port something as big as RPython is > to do it incrementally - by getting tests to pass on 3 one subpackage at a > time. The parts that are ported will have to be written in mixed 2/3 style, > but having tests will prevent regressions in Python 3 compatibility: I don't > see why it would be harder than maintaining compatibility with "obscure" > platforms such as Windows. The model we use for Windows would not work. Imagine we have a buildbot running nightly the tests on Python 3 (or some subset of them that were already ported). 
If we make small changes anywhere in the rpython directory, we're likely to ignore Python 3 most of the time, just like we can ignore Windows most of the time. The problem is that the latter is fine --- most changes to the rpython directory don't affect Windows specifically --- but the former is not --- these changes will likely add some SyntaxError or something for Python 3. So we need a team whose only job is to look at this Py3 buildbot and fix things the next day. This has two problems. The first is that someone doing so for a long time is implausible (certainly not the kind of job I'd like myself). The second problem is that it is going to annoy a lot the original author of the patch, as the next day he's likely to continue working on the same parts and does not expect the source to have been thoroughly "fixed" under his feet, creating a lot of conflicts. So, definitely -1 on that variant of the idea. I think I still prefer the "upgrade everything at once and forget about Python 2" approach. Third-party users of RPython on Python 2 don't have to upgrade at the same time as long as they use the "2.7" branch in our repo. They might have to upgrade to Python 3 later if they want to benefit from new features or bug fixes we add from that point. We can even backport a few selected bug fixes for a while, based on requests. A bientôt, Armin From anto.cuni at gmail.com Mon Apr 20 10:45:57 2015 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 20 Apr 2015 10:45:57 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> Message-ID: Hi, sorry for responding so late, I was at a conference. On Sat, Apr 18, 2015 at 11:02 AM, Armin Rigo wrote: > I would imagine that a better way would be to not care about > restricted style at all. 
If we really decide to move to Python 3, > then maybe we should drop 2.7 altogether and all do one sprint whose > goal is to fully switch to Python 3.N (both "default" and the major > branches open at the time). It would be a documented move that occurs > at some date --- I imagine this to be in the "far future", say when > Python 3 is becoming dominant over Python 2. > The question is also WHETHER Python 3 will become dominant over Python 2. This is a broad topic and I'm not sure pypy-dev and this particular thread is the right place to discuss it, but in my experience, I see a lot of large 2.7 codebases which will likely never be ported to python3. The problem of such codebases is what happens when python2.7 will no longer be supported, but for PyPy this is not a problem since we are self-hosting: we DO decide when to stop supporting pypy-2.7, and for all I know it might be perfectly reasonable to support pypy-2.7 + rpython-on-python-2.7 for a long time. My final point of view is similar to Armin's: +0 as long as the compatibility does not affect the readability-maintainability of the code base, -1 as soon as it does. ciao, Anto -------------- next part -------------- An HTML attachment was scrubbed... URL: From lac at openend.se Mon Apr 20 11:53:53 2015 From: lac at openend.se (Laura Creighton) Date: Mon, 20 Apr 2015 11:53:53 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: Message from Armin Rigo of "Mon, 20 Apr 2015 10:15:52 +0200." References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> Message-ID: <201504200953.t3K9rrgx011015@fido.openend.se> In a message of Mon, 20 Apr 2015 10:15:52 +0200, Armin Rigo writes: >I think I still prefer the "upgrade everything at once and forget >about Python 2" approach. I worry that this will be slow. 
Laura From arigo at tunes.org Mon Apr 20 12:04:10 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 20 Apr 2015 12:04:10 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <201504200953.t3K9rrgx011015@fido.openend.se> References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> <201504200953.t3K9rrgx011015@fido.openend.se> Message-ID: Hi Laura, On 20 April 2015 at 11:53, Laura Creighton wrote: > I worry that this will be slow. Slow at which level? The final speed of some translated PyPy should not be influenced, but maybe translation itself can become slower. But then it would be good motivation to do performance improvements in PyPy3, which would at that point gain one major user --- our own translations. A bient?t, Armin. From lac at openend.se Mon Apr 20 12:18:22 2015 From: lac at openend.se (Laura Creighton) Date: Mon, 20 Apr 2015 12:18:22 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: Message from Armin Rigo of "Mon, 20 Apr 2015 12:04:10 +0200." References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> <201504200953.t3K9rrgx011015@fido.openend.se> Message-ID: <201504201018.t3KAIMmA011803@fido.openend.se> In a message of Mon, 20 Apr 2015 12:04:10 +0200, Armin Rigo writes: >Hi Laura, > >On 20 April 2015 at 11:53, Laura Creighton wrote: >> I worry that this will be slow. > >Slow at which level? The final speed of some translated PyPy should >not be influenced, but maybe translation itself can become slower. >But then it would be good motivation to do performance improvements in >PyPy3, which would at that point gain one major user --- our own >translations. > > >A bient?t, > >Armin. I was worried about translation speed. 
Laura From arigo at tunes.org Mon Apr 20 12:28:19 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 20 Apr 2015 12:28:19 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <201504201018.t3KAIMmA011803@fido.openend.se> References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> <201504200953.t3K9rrgx011015@fido.openend.se> <201504201018.t3KAIMmA011803@fido.openend.se> Message-ID: Hi Laura, On 20 April 2015 at 12:18, Laura Creighton wrote: > I was worried about translation speed. Ok. Then yes, I think there should be little intrinsic reason for it to be slower (apart from some bytes/unicodes changes, which should not be too important in this case), and it would be a good excuse to focus on the performance of PyPy3. A bientôt, Armin. From lac at openend.se Mon Apr 20 12:45:26 2015 From: lac at openend.se (Laura Creighton) Date: Mon, 20 Apr 2015 12:45:26 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: Message from Armin Rigo of "Mon, 20 Apr 2015 12:28:19 +0200." References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> <201504200953.t3K9rrgx011015@fido.openend.se> <201504201018.t3KAIMmA011803@fido.openend.se> Message-ID: <201504201045.t3KAjQAR012595@fido.openend.se> In a message of Mon, 20 Apr 2015 12:28:19 +0200, Armin Rigo writes: >Ok. Then yes, I think there should be little intrinsic reason for it >to be slower (apart from some bytes/unicodes changes, which should not >be too important in this case), and it would be a good excuse to focus >on the performance of PyPy3. > > >A bientôt, > >Armin. Sounds to me as if you are talking yourself into it.
;) Laura From arigo at tunes.org Mon Apr 20 14:54:59 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 20 Apr 2015 14:54:59 +0200 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: <201504201045.t3KAjQAR012595@fido.openend.se> References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> <201504200953.t3K9rrgx011015@fido.openend.se> <201504201018.t3KAIMmA011803@fido.openend.se> <201504201045.t3KAjQAR012595@fido.openend.se> Message-ID: Hi Laura, On 20 April 2015 at 12:45, Laura Creighton wrote: > Sounds to me as if you are talking youself into it. ;) I'm not. I'm talking myself into thinking it would be the most approachable route (which can of course be wrong). But I'm not looking forward to what would come next: once we have jumped to the head of Python again, we're stuck with fixing various things for every single new version of Python. Stopping at 2.7, which turns out to be a "very long term" version, and a good enough one imho, is extremely pleasant from that point of view. A bient?t, Armin. From ronan.lamy at gmail.com Mon Apr 20 19:53:24 2015 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Mon, 20 Apr 2015 18:53:24 +0100 Subject: [pypy-dev] Porting PyPy/rpython to Python 3 In-Reply-To: References: <552FE5C9.8070001@gmail.com> <55313FAF.4090903@gmail.com> <5533EA9A.9010608@gmail.com> Message-ID: <55353D14.1040804@gmail.com> Le 20/04/15 09:15, Armin Rigo a ?crit : > Hi Ronan, > > On 19 April 2015 at 19:49, Ronan Lamy wrote: >> Well, I think that the only sane way to port something as big as RPython is >> to do it incrementally - by getting tests to pass on 3 one subpackage at a >> time. The parts that are ported will have to be written in mixed 2/3 style, >> but having tests will prevent regressions in Python 3 compatibility: I don't >> see why it would be harder than maintaining compatibility with "obscure" >> platforms such as Windows. > > The model we use for Windows would not work. 
Imagine we have a > buildbot running nightly the tests on Python 3 (or some subset of them > that were already ported). If we make small changes anywhere in the > rpython directory, we're likely to ignore Python 3 most of the time, > just like we can ignore Windows most of the time. The problem is that > the latter is fine --- most changes to the rpython directory don't > affect Windows specifically --- but the former is not --- these > changes will likely add some SyntaxError or something for Python 3. > So we need a team whose only job is to look at this Py3 buildbot and > fix things the next day. Once code has been ported, keeping compatibility is easy. It's mostly just a matter of mechanically writing X instead of Y. So a team of 1 is likely more than enough, and I'm volunteering to be it. > This has two problems. The first is that someone doing so for a long > time is implausible (certainly not the kind of job I'd like myself). Fixing other people's code and telling them how to write it? I can certainly do it for as long as you want, and probably for longer than that ;-) > The second problem is that it is going to annoy a lot the original > author of the patch, as the next day he's likely to continue working > on the same parts and does not expect the source to have been > thouroughly "fixed" under his feet, creating a lot of conflicts. Compatibility fixes are unlikely to be extensive, if you start from code that was already compatible. You can also avoid conflicts just by remembering to 'hg pull' before making further changes or by running some Python 3 tests before merging. > > So, definitely -1 on that variant of the idea. > > I think I still prefer the "upgrade everything at once and forget > about Python 2" approach. Third-party users of RPython on Python 2 > don't have to upgrade at the same time as long as they use the "2.7" > branch in our repo. 
They might have to upgrade to Python 3 later if > they want to benefit from new features or bug fixes we add from that > point. We can even backport a few selected bug fixes for a while, > based on requests. I don't think that upgrading RPython and PyPy simultaneously is realistic, even if we sprint on that exclusively for a week, all together. Porting RPython involves difficult design decisions, which shouldn't be rushed, and a raft of issues we won't know anything about until we try. I fear that committing to the big-bang approach means that we'll just never port, because it's too risky. > > > A bientôt, > > Armin > From lac at openend.se Tue Apr 21 21:47:05 2015 From: lac at openend.se (Laura Creighton) Date: Tue, 21 Apr 2015 21:47:05 +0200 Subject: [pypy-dev] Allegro64 buildslave disappeared Message-ID: <201504211947.t3LJl5pS008554@fido.openend.se> Has this problem been resolved? People are getting buildslaves on python.org for other things, so I suspect we could have one if we just asked. Laura (who doesn't want to ask if the problem has been already solved.) From fijall at gmail.com Wed Apr 22 08:59:05 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 22 Apr 2015 08:59:05 +0200 Subject: [pypy-dev] [pypy-commit] pypy default: Detect objects with h_tid==-42 In-Reply-To: <20150421172407.7CDED1C127C@cobra.cs.uni-duesseldorf.de> References: <20150421172407.7CDED1C127C@cobra.cs.uni-duesseldorf.de> Message-ID: Are you sure this is unsigned? IMO I've seen '0xffffffd5' or something like that.
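A quick two's-complement check of the value in question can be done in plain Python (illustrative only, not code from the exchange itself):

```python
# h_tid is declared "Signed" in the generated C sources, so gdb may
# hand back the value -42 directly.
tid = -42

# Reinterpreted as an unsigned 32-bit value:
print(hex(tid & 0xFFFFFFFF))            # 0xffffffd6

# The gdb_pypy.py hunk keeps only the low half-word on 32-bit hosts:
print(hex(tid & 0xFFFF))                # 0xffd6

# Reinterpreted as an unsigned 64-bit value:
print(hex(tid & 0xFFFFFFFFFFFFFFFF))    # 0xffffffffffffffd6
```

So a tid read back as unsigned would show up with a ...d6 bit pattern, and comparing against -42 only works if gdb reports the field as signed, which the "Signed h_tid;" declaration suggests.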
On Tue, Apr 21, 2015 at 7:24 PM, arigo wrote: > Author: Armin Rigo > Branch: > Changeset: r76860:e30bd43e438c > Date: 2015-04-21 19:24 +0200 > http://bitbucket.org/pypy/pypy/changeset/e30bd43e438c/ > > Log: Detect objects with h_tid==-42 > > diff --git a/pypy/tool/gdb_pypy.py b/pypy/tool/gdb_pypy.py > --- a/pypy/tool/gdb_pypy.py > +++ b/pypy/tool/gdb_pypy.py > @@ -99,6 +99,8 @@ > obj = obj.dereference() > hdr = lookup(obj, '_gcheader') > tid = hdr['h_tid'] > + if tid == -42: # forwarded? > + return 'Forwarded' > if sys.maxsize < 2**32: > offset = tid & 0xFFFF # 32bit > else: > _______________________________________________ > pypy-commit mailing list > pypy-commit at python.org > https://mail.python.org/mailman/listinfo/pypy-commit From matti.picus at gmail.com Wed Apr 22 11:37:50 2015 From: matti.picus at gmail.com (Matti Picus) Date: Wed, 22 Apr 2015 12:37:50 +0300 Subject: [pypy-dev] Allegro64 buildslave disappeared In-Reply-To: <201504211947.t3LJl5pS008554@fido.openend.se> References: <201504211947.t3LJl5pS008554@fido.openend.se> Message-ID: <55376BEE.7050700@gmail.com> While the specific linux64 problem has been resolved, we have lost our windows buildbot (it ran as a virtual machine on allegro64) as well as our only fully functional macosx buildbot. Anyone who can pull some strings to obtain buildslaves is more than welcome to do so. I am willing to be the POC for the windows buildslave once a suitable machine is located that can run it or can host a virtual machine that can run it. We need ~4GB RAM and at least 20GB disk space, the builds run about 7 hours on a single CPU core. Matti On 21/04/15 22:47, Laura Creighton wrote: > Has this problem been resoved? > People are getting buildslaves on python.org for other things, so I > suspect we could have one if we just asked. > > Laura (who doesn't want to ask if the problem has been already > solved.) 
> _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From lac at openend.se Wed Apr 22 12:06:23 2015 From: lac at openend.se (Laura Creighton) Date: Wed, 22 Apr 2015 12:06:23 +0200 Subject: [pypy-dev] Allegro64 buildslave disappeared In-Reply-To: Message from Matti Picus of "Wed, 22 Apr 2015 12:37:50 +0300." <55376BEE.7050700@gmail.com> References: <201504211947.t3LJl5pS008554@fido.openend.se><55376BEE.7050700@gmail.com> Message-ID: <201504221006.t3MA6NnK002182@fido.openend.se> Okay, I've started the asking process. It may collide with somebody else's desire to have a way to securely handle automatic patch validation for patches sent to CPython, which apparantly is on the list of very hard things to do securely. Laura From fijall at gmail.com Wed Apr 22 18:33:16 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 22 Apr 2015 18:33:16 +0200 Subject: [pypy-dev] Be on my podcast In-Reply-To: References: Message-ID: Hi Michael I'm sorry I did not reply, somehow missed it. Sure, I'm happy to be on your podcast, beware of my thick eastern european accent though :-) Cheers, fijal On Wed, Apr 15, 2015 at 8:07 PM, Michael Kennedy wrote: > I'd love to have you guys on my podcast, Talk Python To Me. You can learn > more here: > > http://www.talkpythontome.com/ > > Interested in being a guest? Or a couple of you even? > > Thanks! 
> Michael > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From arigo at tunes.org Wed Apr 22 18:12:17 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 22 Apr 2015 18:12:17 +0200 Subject: [pypy-dev] [pypy-commit] pypy default: Detect objects with h_tid==-42 In-Reply-To: References: <20150421172407.7CDED1C127C@cobra.cs.uni-duesseldorf.de> Message-ID: Hi Maciej, On 22 April 2015 at 08:59, Maciej Fijalkowski wrote: > Are you sure this is unsigned? IMO I've seen '0xffffffd5' or something > like that. As far as I can tell, the C code contains the declaration "Signed h_tid;". So I would guess that hdr['h_tid'] returns a signed integer. The next line masks it to a number between 0 and 2**32-1, so then -42 would become 0xffffffd5. But I didn't actually check; please fix if I'm wrong. A bient?t, Armin. From foogod at gmail.com Wed Apr 22 19:31:55 2015 From: foogod at gmail.com (Alex Stewart) Date: Wed, 22 Apr 2015 10:31:55 -0700 Subject: [pypy-dev] [pypy-commit] pypy default: Detect objects with h_tid==-42 In-Reply-To: References: <20150421172407.7CDED1C127C@cobra.cs.uni-duesseldorf.de> Message-ID: Sorry, I couldn't help noticing this: > if sys.maxsize < 2**32: > offset = tid & 0xFFFF # 32bit 0xFFFF is not 32 bit, it's 16 bit.. Should that be 0xFFFFFFFF instead? -alex On Apr 22, 2015 9:43 AM, "Armin Rigo" wrote: > Hi Maciej, > > On 22 April 2015 at 08:59, Maciej Fijalkowski wrote: > > Are you sure this is unsigned? IMO I've seen '0xffffffd5' or something > > like that. > > As far as I can tell, the C code contains the declaration "Signed > h_tid;". So I would guess that hdr['h_tid'] returns a signed integer. > The next line masks it to a number between 0 and 2**32-1, so then -42 > would become 0xffffffd5. But I didn't actually check; please fix if > I'm wrong. > > > A bient?t, > > Armin. 
> _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yury at shurup.com Wed Apr 22 22:05:21 2015 From: yury at shurup.com (Yury V. Zaytsev) Date: Wed, 22 Apr 2015 22:05:21 +0200 Subject: [pypy-dev] Allegro64 buildslave disappeared In-Reply-To: <55376BEE.7050700@gmail.com> References: <201504211947.t3LJl5pS008554@fido.openend.se> <55376BEE.7050700@gmail.com> Message-ID: <1429733121.15191.75.camel@newpride> On Wed, 2015-04-22 at 12:37 +0300, Matti Picus wrote: > > I am willing to be the POC for the windows buildslave once a suitable > machine is located that can run it or can host a virtual machine that > can run it. Hi Matti, What are the requirements for the Windows build slave? I have a machine with >4G RAM and >20G disk running Windows 7 x86_64 that I use as a builder for my hobby projects. I can't provide direct root ssh access to it though, as it's in a firewalled environment, is that a problem? Another catch is that I'm not sure about the longterm future of this box, but I would expect at least ~4-6 months lifetime, and if it's going to be decommissioned, I'll try to find a replacement. -- Sincerely yours, Yury V. Zaytsev From yury at shurup.com Wed Apr 22 21:58:00 2015 From: yury at shurup.com (Yury V. Zaytsev) Date: Wed, 22 Apr 2015 21:58:00 +0200 Subject: [pypy-dev] Be on my podcast In-Reply-To: References: Message-ID: <1429732680.15191.68.camel@newpride> On Wed, 2015-04-22 at 18:33 +0200, Maciej Fijalkowski wrote: > Sure, I'm happy to be on your podcast, beware of my thick eastern > european accent though :-) If all else fails, try festival ;-) -- Sincerely yours, Yury V. 
Zaytsev From matti.picus at gmail.com Wed Apr 22 22:51:31 2015 From: matti.picus at gmail.com (Matti Picus) Date: Wed, 22 Apr 2015 23:51:31 +0300 Subject: [pypy-dev] Allegro64 buildslave disappeared In-Reply-To: <1429733121.15191.75.camel@newpride> References: <201504211947.t3LJl5pS008554@fido.openend.se> <55376BEE.7050700@gmail.com> <1429733121.15191.75.camel@newpride> Message-ID: <553809D3.2030908@gmail.com> On 22/04/15 23:05, Yury V. Zaytsev wrote: > On Wed, 2015-04-22 at 12:37 +0300, Matti Picus wrote: >> I am willing to be the POC for the windows buildslave once a suitable >> machine is located that can run it or can host a virtual machine that >> can run it. > Hi Matti, > > What are the requirements for the Windows build slave? I have a machine > with >4G RAM and >20G disk running Windows 7 x86_64 that I use as a > builder for my hobby projects. I can't provide direct root ssh access to > it though, as it's in a firewalled environment, is that a problem? > > Another catch is that I'm not sure about the longterm future of this > box, but I would expect at least ~4-6 months lifetime, and if it's going > to be decommissioned, I'll try to find a replacement. > Thanks for the offer. You can set up the buildslave yourself, no need to give anyone else access to the box. The buildslave would run as a process that would talk to the buildmaster over a TCP socket, so being behind a firewall should not be a problem as long as the port from the slave to the master is open. We run nightly tests that can be scheduled to your convenience, they requre about 7-8 hours of CPU time total. Some of the tests can be run in parallel, two cores will run the tests in about 5 hours of wall-clock time, YMMV. 
You need to be able to build pypy, directions are here http://doc.pypy.org/en/latest/windows.html To set up a buildslave, read this https://bitbucket.org/pypy/buildbot/src/default/README_BUILDSLAVE and then come join us on IRC at #pypy so we can update the buildmaster with your details Matti From arigo at tunes.org Thu Apr 23 10:42:40 2015 From: arigo at tunes.org (Armin Rigo) Date: Thu, 23 Apr 2015 10:42:40 +0200 Subject: [pypy-dev] [pypy-commit] pypy default: Detect objects with h_tid==-42 In-Reply-To: References: <20150421172407.7CDED1C127C@cobra.cs.uni-duesseldorf.de> Message-ID: Hi Alex, On 22 April 2015 at 19:31, Alex Stewart wrote: > Sorry, I couldn't help noticing this: > >> if sys.maxsize < 2**32: >> offset = tid & 0xFFFF # 32bit > > 0xFFFF is not 32 bit, it's 16 bit.. Should that be 0xFFFFFFFF instead? No, this "32bit" comment means "we're running on a 32-bit machine". We take half of the word here. Armin From tom at twhanson.com Fri Apr 24 20:30:21 2015 From: tom at twhanson.com (tom at twhanson.com) Date: Fri, 24 Apr 2015 12:30:21 -0600 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? Message-ID: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> I'm evaluating PyPy for use in an application where it will be running in an RTOS (Greenhills Integrity) which is configured without a file system at runtime. The rest of the application is C/C++. Is there a way to build PyPy for this environment? The issue I see is that pypy_setup_home() requires a file system path to an executable / .so library. Is it possible to statically link PyPy into the application and then give an equivalent of pypy_setup_home() a pointer to the linked code? Some other approach? Thanks Tom -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fijall at gmail.com Fri Apr 24 21:30:18 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 24 Apr 2015 21:30:18 +0200 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> Message-ID: pypy generally needs to find a bunch of files for it's standard library. I would suggest trying something a-la the sandboxed version where all the external calls go via a special proxy that you can write in C++. it's a bit of effort though. What are you trying to achieve if you have no filesystem? (e.g. the whole module system can't potentially work) On Fri, Apr 24, 2015 at 8:30 PM, wrote: > I'm evaluating PyPy for use in an application where it will be running in an > RTOS (Greenhills Integrity) which is congifured without a file system at > runtime. The rest of the application is C/C++. > > > > Is there a way to build PyPy for this environment? The issue I see is that > pypy_setup_home()requires a file system path to an executable / .so library. > > > > Is it possible to statically link PyPy into the application and the give an > equivalent to pypy_setup_home() a pointer to the linked code? Some other > approach? > > > > Thanks > > Tom > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From yury at shurup.com Sat Apr 25 00:38:47 2015 From: yury at shurup.com (Yury V. 
Zaytsev) Date: Sat, 25 Apr 2015 00:38:47 +0200 Subject: [pypy-dev] Allegro64 buildslave disappeared In-Reply-To: <55376BEE.7050700@gmail.com> References: <201504211947.t3LJl5pS008554@fido.openend.se> <55376BEE.7050700@gmail.com> Message-ID: <1429915127.23835.52.camel@newpride> On Wed, 2015-04-22 at 12:37 +0300, Matti Picus wrote: > While the specific linux64 problem has been resolved, we have lost our > windows buildbot (it ran as a virtual machine on allegro64) as well as > our only fully functional macosx buildbot. Okay, sorry guys, but it seems that in my attempts to fix the compiler detection that wasn't working, I somehow got the master to get stuck by rebooting the slave at an unfortunate moment (this build doesn't react to "Stop build" anymore): http://buildbot.pypy.org/builders/own-win-x86-32/builds/504 I would appreciate if you could restart the master or otherwise recover from this, but it seems that there is noone on IRC who can do this now. The compiler detection should now work by the way, but I can't check, because of this stuck build. -- Sincerely yours, Yury V. Zaytsev From fijall at gmail.com Sat Apr 25 01:32:02 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 25 Apr 2015 01:32:02 +0200 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> Message-ID: On Sat, Apr 25, 2015 at 1:13 AM, wrote: > Maciej, > > > > Thanks for the idea. I played with the sandboxed version and it looks like > it has potential. > > > > I searched the web for a C/C++ version of the controller but with no luck. > I saw questions about it and interest expressed but couldn't find anyone who > had actually built one. Do you (or does anyone) know of an example? 
Ideal > would probably be one implementing SimpleIOSandboxedProc since that would > allow streaming of Python source to stdin. I don't think anyone wrote a C/C++ controller so far. From fijall at gmail.com Sat Apr 25 01:32:37 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 25 Apr 2015 01:32:37 +0200 Subject: [pypy-dev] Allegro64 buildslave disappeared In-Reply-To: <1429915127.23835.52.camel@newpride> References: <201504211947.t3LJl5pS008554@fido.openend.se> <55376BEE.7050700@gmail.com> <1429915127.23835.52.camel@newpride> Message-ID: This can be resolved on the slave level, not on master level On Sat, Apr 25, 2015 at 12:38 AM, Yury V. Zaytsev wrote: > On Wed, 2015-04-22 at 12:37 +0300, Matti Picus wrote: >> While the specific linux64 problem has been resolved, we have lost our >> windows buildbot (it ran as a virtual machine on allegro64) as well as >> our only fully functional macosx buildbot. > > Okay, sorry guys, but it seems that in my attempts to fix the compiler > detection that wasn't working, I somehow got the master to get stuck by > rebooting the slave at an unfortunate moment (this build doesn't react > to "Stop build" anymore): > > http://buildbot.pypy.org/builders/own-win-x86-32/builds/504 > > I would appreciate if you could restart the master or otherwise recover > from this, but it seems that there is noone on IRC who can do this now. > > The compiler detection should now work by the way, but I can't check, > because of this stuck build. > > -- > Sincerely yours, > Yury V. 
Zaytsev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Sat Apr 25 08:41:50 2015 From: arigo at tunes.org (Armin Rigo) Date: Sat, 25 Apr 2015 08:41:50 +0200 Subject: [pypy-dev] Allegro64 buildslave disappeared In-Reply-To: References: <201504211947.t3LJl5pS008554@fido.openend.se> <55376BEE.7050700@gmail.com> <1429915127.23835.52.camel@newpride> Message-ID: Hi, On 25 April 2015 at 01:32, Maciej Fijalkowski wrote: > This can be resolved on the slave level, not on master level Yes, you can stop (disconnect) the slave, and then restart it. I never really understood the "Stop" buttons on the buildbot web pages, because some of them seem to have no effect, or only sometimes. A bientôt, Armin. From tom at twhanson.com Sat Apr 25 01:13:14 2015 From: tom at twhanson.com (tom at twhanson.com) Date: Fri, 24 Apr 2015 17:13:14 -0600 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> Message-ID: <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> Maciej, Thanks for the idea. I played with the sandboxed version and it looks like it has potential. I searched the web for a C/C++ version of the controller but with no luck. I saw questions about it and interest expressed but couldn't find anyone who had actually built one. Do you (or does anyone) know of an example? Ideal would probably be one implementing SimpleIOSandboxedProc since that would allow streaming of Python source to stdin. I can start from the Python controller if necessary but I'm a C/C++ programmer by trade with very little Python experience. A C example would make it much faster to spin up. Thanks, -Tom On Fri, 24 Apr 2015 21:30:18 +0200, Maciej Fijalkowski wrote: pypy generally needs to find a bunch of files for its standard library.
I would suggest trying something a-la the sandboxed version where all the external calls go via a special proxy that you can write in C++. it's a bit of effort though. What are you trying to achieve if you have no filesystem? (e.g. the whole module system can't potentially work) On Fri, Apr 24, 2015 at 8:30 PM, wrote: > I'm evaluating PyPy for use in an application where it will be running in an > RTOS (Greenhills Integrity) which is congifured without a file system at > runtime. The rest of the application is C/C++. > > > > Is there a way to build PyPy for this environment? The issue I see is that > pypy_setup_home()requires a file system path to an executable / .so library. > > > > Is it possible to statically link PyPy into the application and the give an > equivalent to pypy_setup_home() a pointer to the linked code? Some other > approach? > > > > Thanks > > Tom > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Apr 25 09:33:58 2015 From: arigo at tunes.org (Armin Rigo) Date: Sat, 25 Apr 2015 09:33:58 +0200 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> Message-ID: Hi Tom, On 25 April 2015 at 01:32, Maciej Fijalkowski wrote: > On Sat, Apr 25, 2015 at 1:13 AM, wrote: >> Thanks for the idea. I played with the sandboxed version and it looks like >> it has potential. It's not necessarily the only option. A sandboxed process comes with a lot of other constrains apart from "no filesystem access". 
There are alternatives: you could play in ways similar to how you would solve this with CPython, namely trying to embed the parts of the standard library and main program that you need. Just like sandboxing, we don't have much experience and tools to do that ourselves, so you still need to come up with all the details (and we can help, of course). Maybe something like: we can tweak pypy_setup_home() to accept NULL as a path. Then it would not try to automatically set up "sys.path" or import "site". You're left with what is a broken PyPy, in the sense that you cannot import anything, but then you can do calls like pypy_execute_source() to run 5-line scripts --- or even, as a hack, to declare and install complete modules whose source code you have previously copied into static strings in your binary. A bient?t, Armin. From yury at shurup.com Sat Apr 25 10:47:19 2015 From: yury at shurup.com (Yury V. Zaytsev) Date: Sat, 25 Apr 2015 10:47:19 +0200 Subject: [pypy-dev] Allegro64 buildslave disappeared In-Reply-To: References: <201504211947.t3LJl5pS008554@fido.openend.se> <55376BEE.7050700@gmail.com> <1429915127.23835.52.camel@newpride> Message-ID: <1429951639.23835.58.camel@newpride> On Sat, 2015-04-25 at 08:41 +0200, Armin Rigo wrote: > > Yes, you can stop (disconnect) the slave, and then restart it. I > never really understood the "Stop" buttons on the buildbot web pages, > because some of them seem to have no effect, or only sometimes. So how do I do that? I have already tried stopping buildbot service and then starting it again, and this didn't change anything. I'm confused about your terminology: I can't see a button saying "Disconnect" anywhere. When I stopped the buildbot service, the slave appeared disconnected on the buildbot pages. Is that what you meant? Just to make it clear: the build is not actually running on the slave anymore and the slave reports that it's idle. It's just that the master for some reason disagrees. -- Sincerely yours, Yury V. 
Zaytsev From tom at twhanson.com Mon Apr 27 18:10:22 2015 From: tom at twhanson.com (tom at twhanson.com) Date: Mon, 27 Apr 2015 10:10:22 -0600 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> Message-ID: <20150427101022.ycn1fxcbb44wws4c@hostingmail.earthlink.net> Armin, A good thought. Sandboxing may actually be an advantage from a security standpoint. We'll be developing all of the scripts to be run, but there's always the chance of hacking. We can't hard-code the scripts into the binary because their purpose is to adapt behavior to new configurations. Because of this the scripts will be read from an external source and then executed. This is what makes the stdin/stdout streaming version attractive. Thanks, Tom On Sat, 25 Apr 2015 09:33:58 +0200, Armin Rigo wrote: Hi Tom, On 25 April 2015 at 01:32, Maciej Fijalkowski wrote: > On Sat, Apr 25, 2015 at 1:13 AM, wrote: >> Thanks for the idea. I played with the sandboxed version and it looks like >> it has potential. It's not necessarily the only option. A sandboxed process comes with a lot of other constraints apart from "no filesystem access". There are alternatives: you could play in ways similar to how you would solve this with CPython, namely trying to embed the parts of the standard library and main program that you need. Just like sandboxing, we don't have much experience and tools to do that ourselves, so you still need to come up with all the details (and we can help, of course). Maybe something like: we can tweak pypy_setup_home() to accept NULL as a path. Then it would not try to automatically set up "sys.path" or import "site".
You're left with what is a broken PyPy, in the sense that you cannot import anything, but then you can do calls like pypy_execute_source() to run 5-line scripts --- or even, as a hack, to declare and install complete modules whose source code you have previously copied into static strings in your binary. A bient?t, Armin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Mon Apr 27 18:43:19 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 27 Apr 2015 18:43:19 +0200 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: <20150427101022.ycn1fxcbb44wws4c@hostingmail.earthlink.net> References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> <20150427101022.ycn1fxcbb44wws4c@hostingmail.earthlink.net> Message-ID: Hi Tom, On 27 April 2015 at 18:10, wrote: > We can't hard-code the scripts into the binary becuase their purpose is to > adapt behavior to new configurations. Because of this the scripts will be > read from an external source and then executed. This is what makes the the > stdin/stdout streaming version attractive. I just said "statically into the binary" as an example. Of course you can get the string from anywhere, like from reading an external source. Once you got it into a "char *", you can pass it to pypy_execute_source(). A bient?t, Armin. From ronan.lamy at gmail.com Mon Apr 27 21:31:17 2015 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Mon, 27 Apr 2015 20:31:17 +0100 Subject: [pypy-dev] EuroPython? In-Reply-To: References: Message-ID: <553E8E85.6090407@gmail.com> Le 11/04/15 10:47, Armin Rigo a ?crit : > Hi, > > On 11 April 2015 at 11:29, Antonio Cuni wrote: >> my plan was to submit a talk about profiling/optimizing, possibly together >> with fijal if he comes (but I didn't do yet :)). >> Probably the talk which suits best for talking about the general status is >> Romain's one? 
> > I just submitted > http://bitbucket.org/pypy/extradoc/raw/extradoc/talk/ep2015/stm-abstract.rst > . I didn't expect there would be three talks, although I guess the > vmprof talk is not really PyPy-only. > > At one point, maybe, we could do a talk about CFFI, which is not > PyPy-only either... But there is no way I'm going to submit a 4th > proposal :-) Since the CfP is still open until tomorrow, I've been thinking that we should do some sort of "PyPy for dummies" talk, to try to dispel the aura of high-flying magic that surrounds PyPy. Words that shouldn't be uttered include "annotator", "nursery", "quasi-immutable", "reds", "greens", ... I'll try to whip up an abstract before the deadline - though I'd gladly let someone else attempt the challenge. From rich at pasra.at Tue Apr 28 12:12:45 2015 From: rich at pasra.at (Richard Plangger) Date: Tue, 28 Apr 2015 12:12:45 +0200 Subject: [pypy-dev] EuroPython? Message-ID: <553F5D1D.1020707@pasra.at> Hi, I am also planning to attend EuroPython, and my advisor (of the thesis) had the idea to present the vectorization optimization. Here is my draft: http://docdro.id/yonu (or in attachment). I hope you like it. I'm happy to get any feedback! Of course I'm not a core PyPy developer (and I hope my abstract does not give the impression that I am), but it might still be cool to draw some attention to something 'new' in the RPython toolchain? Best, Richard PS.: Up to now I only get the digest of the mailing list, so I don't know how to respond to the original message. -------------- next part -------------- A non-text attachment was scrubbed... Name: richard-plangger-europython-talk-abstract.pdf Type: application/pdf Size: 80521 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From ronan.lamy at gmail.com Tue Apr 28 16:48:45 2015 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Tue, 28 Apr 2015 15:48:45 +0100 Subject: [pypy-dev] EuroPython? In-Reply-To: <553F5D1D.1020707@pasra.at> References: <553F5D1D.1020707@pasra.at> Message-ID: <553F9DCD.1080301@gmail.com> Le 28/04/15 11:12, Richard Plangger a écrit : > Hi, > > I am also planning to attend EuroPython, and my advisor (of the thesis) had > the idea to present the vectorization optimization. > Here is my draft: http://docdro.id/yonu (or in attachment). > > I hope you like it. I'm happy to get any feedback! Of course I'm > not a core PyPy developer (and I hope my abstract does not give the > impression that I am), but it might still be cool to draw some attention > to something 'new' in the RPython toolchain? Good idea! This looks good to me. I think it's clearly an advanced talk, rather than an intermediate one, though. Also, the abstract feels a bit packed for a 20-minute talk; maybe you could focus more on concrete benefits for PyPy users (i.e. some loops will magically become faster) and less on general compiler theory considerations. Cheers, Ronan From tom at twhanson.com Tue Apr 28 17:33:01 2015 From: tom at twhanson.com (tom at twhanson.com) Date: Tue, 28 Apr 2015 09:33:01 -0600 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> <20150427101022.ycn1fxcbb44wws4c@hostingmail.earthlink.net> Message-ID: <20150428093301.f3kz5yj34gs8440o@hostingmail.earthlink.net> I'm confused about the relationship between SimpleIOSandboxedProc and VirtualizedSandboxedProc. Looking at pypy_interact.py I see that it inherits from both SimpleIOSandboxedProc and VirtualizedSandboxedProc.
I expected that I'd be able to drop VirtualizedSandboxedProc and tweak the code in pypy_interact to get a controller that just did stdin/stdout. But when I try that I get "out of memory" errors. It appears that SimpleIOSandboxedProc is not an independent, stand-alone class but is actually non-functional without the child class VirtualizedSandboxedProc. Is that the intent? Am I missing something? -Tom On Mon, 27 Apr 2015 18:43:19 +0200, Armin Rigo wrote: Hi Tom, On 27 April 2015 at 18:10, wrote: > We can't hard-code the scripts into the binary because their purpose is to > adapt behavior to new configurations. Because of this the scripts will be > read from an external source and then executed. This is what makes the > stdin/stdout streaming version attractive. I just said "statically into the binary" as an example. Of course you can get the string from anywhere, like from reading an external source. Once you have it in a "char *", you can pass it to pypy_execute_source(). A bientôt, Armin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at twhanson.com Tue Apr 28 19:56:00 2015 From: tom at twhanson.com (tom at twhanson.com) Date: Tue, 28 Apr 2015 11:56:00 -0600 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: <20150428093301.f3kz5yj34gs8440o@hostingmail.earthlink.net> References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> <20150427101022.ycn1fxcbb44wws4c@hostingmail.earthlink.net> <20150428093301.f3kz5yj34gs8440o@hostingmail.earthlink.net> Message-ID: <20150428115600.77aghl10gk4wsg8s@hostingmail.earthlink.net> Correction: "non-functional without the *peer* class VirtualizedSandboxedProc" On Tue, 28 Apr 2015 09:33:01 -0600, tom at twhanson.com wrote: I'm confused about the relationship between SimpleIOSandboxedProc and VirtualizedSandboxedProc.
Looking at pypy_interact.py I see that it inherits from both SimpleIOSandboxedProc and VirtualizedSandboxedProc. I expected that I'd be able to drop VirtualizedSandboxedProc and tweak the code in pypy_interact to get a controller that just did stdin/stdout. But when I try that I get "out of memory" errors. It appears that SimpleIOSandboxedProc is not an independent, stand-alone class but is actually non-functional without the child class VirtualizedSandboxedProc. Is that the intent? Am I missing something? -Tom On Mon, 27 Apr 2015 18:43:19 +0200, Armin Rigo wrote: Hi Tom, On 27 April 2015 at 18:10, wrote: > We can't hard-code the scripts into the binary because their purpose is to > adapt behavior to new configurations. Because of this the scripts will be > read from an external source and then executed. This is what makes the > stdin/stdout streaming version attractive. I just said "statically into the binary" as an example. Of course you can get the string from anywhere, like from reading an external source. Once you have it in a "char *", you can pass it to pypy_execute_source(). A bientôt, Armin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Tue Apr 28 22:50:57 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 28 Apr 2015 22:50:57 +0200 Subject: [pypy-dev] How to embed PyPy when there's no filesystem?
In-Reply-To: <20150428115600.77aghl10gk4wsg8s@hostingmail.earthlink.net> References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> <20150427101022.ycn1fxcbb44wws4c@hostingmail.earthlink.net> <20150428093301.f3kz5yj34gs8440o@hostingmail.earthlink.net> <20150428115600.77aghl10gk4wsg8s@hostingmail.earthlink.net> Message-ID: Hi Tom, On 28 April 2015 at 19:56, wrote: > Correction: "non-functional without the *peer* class > VirtualizedSandboxedProc" Modern PyPy versions try to get some environment variables, at least as documented in rpython/doc/logging.rst. That makes the do_ll_os__ll_os_getenv() method necessary (undefined methods cause the subprocess to be aborted). Moreover, I'm sure that a PyPy in the default configuration will afterward try to access the file system for all its stdlib, which means it will call at least some of the other methods too, starting from do_ll_os__ll_os_stat(). All these methods happen to be in the VirtualizedSandboxedProc class. A bientôt, Armin. From tom at twhanson.com Thu Apr 30 00:21:17 2015 From: tom at twhanson.com (tom at twhanson.com) Date: Wed, 29 Apr 2015 16:21:17 -0600 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> <20150427101022.ycn1fxcbb44wws4c@hostingmail.earthlink.net> <20150428093301.f3kz5yj34gs8440o@hostingmail.earthlink.net> <20150428115600.77aghl10gk4wsg8s@hostingmail.earthlink.net> Message-ID: <20150429162117.lo2jflrijo8wss8o@hostingmail.earthlink.net> When I kick off the interactive sandboxed version of PyPy I'm seeing it open a significant number of .py files. 1) Am I correct in assuming that these are imports? 2) Can these be eliminated? These opens are problematic in the absence of a file system.
Thanks, Tom On Tue, 28 Apr 2015 22:50:57 +0200, Armin Rigo wrote: Hi Tom, On 28 April 2015 at 19:56, wrote: > Correction: "non-functional without the *peer* class > VirtualizedSandboxedProc" Modern PyPy versions try to get some environment variables, at least as documented in rpython/doc/logging.rst. That makes the do_ll_os__ll_os_getenv() method necessary (undefined methods cause the subprocess to be aborted). Moreover, I'm sure that a PyPy in the default configuration will afterward try to access the file system for all its stdlib, which means it will call at least some of the other methods too, starting from do_ll_os__ll_os_stat(). All these methods happen to be in the VirtualizedSandboxedProc class. A bientôt, Armin. _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Thu Apr 30 09:48:39 2015 From: arigo at tunes.org (Armin Rigo) Date: Thu, 30 Apr 2015 09:48:39 +0200 Subject: [pypy-dev] How to embed PyPy when there's no filesystem? In-Reply-To: <20150429162117.lo2jflrijo8wss8o@hostingmail.earthlink.net> References: <20150424123021.u27xw1lj44cg4008@hostingmail.earthlink.net> <20150424171314.0sxpdzpao4kc80kc@hostingmail.earthlink.net> <20150427101022.ycn1fxcbb44wws4c@hostingmail.earthlink.net> <20150428093301.f3kz5yj34gs8440o@hostingmail.earthlink.net> <20150428115600.77aghl10gk4wsg8s@hostingmail.earthlink.net> <20150429162117.lo2jflrijo8wss8o@hostingmail.earthlink.net> Message-ID: Hi Tom, On 30 April 2015 at 00:21, wrote: > 1) Am I correct in assuming that these are imports? Yes. > 2) Can these be eliminated? These opens are problematic in the absence of a > file system. Try running pypy with the -s option. Likely, it doesn't remove them all; you have to provide the remaining modules manually, e.g. by embedding either the .py or the .pyc in the controller process.
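[Editor's note] The "embed the remaining modules in the controller process" idea from the message above can be sketched in plain Python. This is an illustrative mock-up only, under stated assumptions: the EmbeddedModuleStore class and its method names are invented for this example and are not the real RPython sandbox API; a real controller would answer the sandboxed process's os.open()/os.read() requests through the do_ll_os__* handlers discussed earlier in the thread.

```python
# Hypothetical sketch (invented names, NOT the real RPython sandbox API):
# a controller-side store that serves "open this stdlib file" requests
# from in-memory strings instead of a real filesystem.

class EmbeddedModuleStore:
    def __init__(self):
        # Module sources compiled into the controller instead of read from disk.
        self.sources = {}

    def embed(self, virtual_path, source):
        self.sources[virtual_path] = source

    def open(self, virtual_path):
        # A real controller would turn this into the reply sent back to the
        # sandboxed interpreter; here we just return the source text, or
        # signal "no such file" for anything that was not embedded.
        try:
            return self.sources[virtual_path]
        except KeyError:
            raise IOError("ENOENT: no embedded module at %r" % virtual_path)


store = EmbeddedModuleStore()
store.embed("/virtual/lib/helper.py", "def double(x):\n    return 2 * x\n")

# The controller can also hand a source string straight to the interpreter,
# much like passing a char* to pypy_execute_source():
namespace = {}
exec(store.open("/virtual/lib/helper.py"), namespace)
print(namespace["double"](21))  # -> 42
```

The point of the sketch is that the set of embedded paths is the whole "filesystem" the interpreter ever sees, which is why running with fewer startup imports shrinks the list of modules the controller must carry.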
A bientôt, Armin. From fijall at gmail.com Thu Apr 30 18:27:44 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 30 Apr 2015 18:27:44 +0200 Subject: [pypy-dev] Be on my podcast In-Reply-To: References: Message-ID: After the 25th of May On Thu, Apr 30, 2015 at 6:12 PM, Michael Kennedy wrote: > Hi Maciej, > > Accents are fine. :) Do you have some time later in May (last few weeks)? > > Thanks, > Michael > > > On Wed, Apr 22, 2015 at 9:33 AM Maciej Fijalkowski wrote: >> >> Hi Michael >> >> I'm sorry I did not reply, somehow missed it. Sure, I'm happy to be on >> your podcast, beware of my thick Eastern European accent though :-) >> >> Cheers, >> fijal >> >> On Wed, Apr 15, 2015 at 8:07 PM, Michael Kennedy >> wrote: >> > I'd love to have you guys on my podcast, Talk Python To Me. You can >> > learn >> > more here: >> > >> > http://www.talkpythontome.com/ >> > >> > Interested in being a guest? Or a couple of you even? >> > >> > Thanks! >> > Michael >> > >> > _______________________________________________ >> > pypy-dev mailing list >> > pypy-dev at python.org >> > https://mail.python.org/mailman/listinfo/pypy-dev >> > From mikeckennedy at gmail.com Thu Apr 30 18:12:00 2015 From: mikeckennedy at gmail.com (Michael Kennedy) Date: Thu, 30 Apr 2015 16:12:00 +0000 Subject: [pypy-dev] Be on my podcast In-Reply-To: References: Message-ID: Hi Maciej, Accents are fine. :) Do you have some time later in May (last few weeks)? Thanks, Michael On Wed, Apr 22, 2015 at 9:33 AM Maciej Fijalkowski wrote: > Hi Michael > > I'm sorry I did not reply, somehow missed it. Sure, I'm happy to be on > your podcast, beware of my thick Eastern European accent though :-) > > Cheers, > fijal > > On Wed, Apr 15, 2015 at 8:07 PM, Michael Kennedy > wrote: > > I'd love to have you guys on my podcast, Talk Python To Me. You can learn > > more here: > > > > http://www.talkpythontome.com/ > > > > Interested in being a guest? Or a couple of you even? > > > > Thanks!
> > Michael > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: