From frankw at mit.edu Mon Feb 6 14:17:17 2017 From: frankw at mit.edu (Frank Wang) Date: Mon, 6 Feb 2017 14:17:17 -0500 Subject: [pypy-dev] Modifying Interpreter-level Object from Application In-Reply-To: References: Message-ID: Hi, Is there a way to distinguish in a mixed module function, e.g. len, whether the function is being called from the application or interpreter? Right now, I'm modifying len to print out information about an interpreter level object, but I only want it to print out when it's being called by an application. Thanks, Frank On Fri, Jan 27, 2017 at 12:40 PM, Frank Wang wrote: > Hi Carl, > > Thanks for the information! I just have to do it for a specific attribute. > It is just a bit tedious as you said, making sure the semantics of the > parameter type and result conversion work properly is a bit tricky. > > > > Frank > > On Fri, Jan 27, 2017 at 11:35 AM, Carl Friedrich Bolz > wrote: > >> Hi Frank, >> >> no, unfortunately there's not really a shortcut to exposing the methods, >> functions and attributes via a mixed module, because you need to think >> about the semantics of parameter type and result conversion for every >> such function anyway. >> >> Do you have trouble to get it to work at all? Or is it just tedious? >> >> If the former, there are some hints about mixed modules here: >> >> http://doc.pypy.org/en/latest/coding-guide.html#implementing >> -a-mixed-interpreter-application-level-module >> >> For the latter, if you need to do this for absolutely *every* >> interpreter level attribute, there may be a way to achieve this effect >> with some magic meta-programming, though I'd have to think a bit how. >> >> Cheers, >> >> Carl Friedrich >> >> On 27/01/17 16:45, Frank Wang wrote: >> > Hi, >> > >> > At the application level, I want to modify some interpreter-level >> > attributes of an object. Right now, I have the interpreter level >> > functions that allow me to modify the interpreter object. Is the easiest >> > way to have an application access interpreter level attributes to use a >> > Mixed Module with interpreter level definitions? >> > >> > I'm having a bit of trouble getting it to work, so I was wondering if >> > there was a much easier way for an application to modify interpreter >> > objects or call interpreter level functions that modify an object. >> > >> > Thanks, >> > Frank >> > >> > >> > _______________________________________________ >> > pypy-dev mailing list >> > pypy-dev at python.org >> > https://mail.python.org/mailman/listinfo/pypy-dev >> > >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.rigo at gmail.com Tue Feb 7 11:24:01 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Tue, 7 Feb 2017 17:24:01 +0100 Subject: [pypy-dev] Modifying Interpreter-level Object from Application In-Reply-To: References: Message-ID: Hi Frank, On 6 February 2017 at 20:17, Frank Wang wrote: > Is there a way to distinguish in a mixed module function, e.g. len, whether > the function is being called from the application or interpreter? Right now, > I'm modifying len to print out information about an interpreter level > object, but I only want it to print out when it's being called by an > application. The app-visible function is pypy.module.__builtin__.operation.len(). At interp-level we never call that but directly space.len(). A bient?t, Armin. 
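A minimal sketch of the kind of setup being discussed, assuming a hypothetical mixed module called mymod (the real app-visible builtin lives in pypy.module.__builtin__.operation, as Armin points out above; the module and function names here are illustrative only):

    # pypy/module/mymod/interp_mymod.py (hypothetical)
    def len_traced(space, w_obj):
        # Only application-level code reaches this gateway-wrapped entry
        # point; interpreter-level code keeps calling space.len() directly
        # and therefore never triggers the print below.
        print "mymod.len called from app-level"
        return space.len(w_obj)

    # pypy/module/mymod/__init__.py (hypothetical)
    from pypy.interpreter.mixedmodule import MixedModule

    class Module(MixedModule):
        appleveldefs = {}
        interpleveldefs = {'len': 'interp_mymod.len_traced'}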
From kunshan.wang at anu.edu.au Wed Feb 8 05:07:36 2017 From: kunshan.wang at anu.edu.au (Kunshan Wang) Date: Wed, 8 Feb 2017 18:07:36 +0800 Subject: [pypy-dev] vtable: non-GC allocation unit used only within RPython programs Message-ID: <4da2eadb-6da7-9a46-584b-9440913f9163@anu.edu.au> Hi all, I am working on porting RPython to the Mu micro virtual machine (http://microvm.org/). I have a question regarding the translation of the vtables of RPython objects. Background: Mu is a micro virtual machine. Its type system is at roughly the same level as the LL type system. Mu has a garbage collector and a stack unwinder built in, so LL-typed RPython programs can be straightforwardly translated to the Mu intermediate representation (IR) without injecting exception handling and GC. Mu has a garbage-collected heap and a permanent static space, both of which are traced by the GC (i.e. the GC can identify all reference fields in them). Unlike RPython, where only GcStruct, GcArray and their derivatives are GC-ed, any type can be allocated in the GC heap (Yes, you can allocate a single int64 in the heap if it makes sense to your language.). The GC maintains its own metadata (invisible to the Mu client) to perform GC for any type. But Mu is also minimal: it allocates exactly the fields the client tells it to allocate, and nothing more. In particular, Mu does not provide any RTTI to its client (think about the C language with reference types and GC). Therefore, OOP language implementers have to implement their own vtables and add vtable reference fields to object headers. Unlike RPython, where the traced-ness of Ptr<T> is determined by whether T is GC-ed or not, Mu has distinct untraced pointers (uptr<T>), which are just raw addresses; object references (ref<T>), which refer to heap objects; and internal references (iref<T>), a more powerful (but harder to implement) reference type that can refer to a field of either a heap object or a static variable. Mu can build 'boot images'. A boot image is an executable image which contains a preserved heap in addition to executable code. It is similar to the executable program the RPython C backend generates, but preserved heap objects can still be garbage-collected as usual --- they are not C-level static variables. Problem: When translating the LL type Ptr<T> to its Mu counterpart, our current approach is: 1. if T is a GcStruct or a GcArray, translate it to ref<T>; 2. otherwise, translate it to uptr<T>. Ideally, object references are used inside Mu (all RPython programs are translated to Mu IR), and uptr is only used to interact with external C programs --- uptr values are just addresses.
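To make that rule concrete, here is a rough sketch in terms of RPython's lltype classes (illustrative only: the function name and the string results are placeholders, and a real Mu backend would build actual Mu types rather than strings):

    from rpython.rtyper.lltypesystem import lltype

    def mu_type_for(PTR_TYPE):
        # Current rule: pointers to GC containers become traced Mu refs,
        # everything else becomes a raw, untraced address.
        assert isinstance(PTR_TYPE, lltype.Ptr)
        TO = PTR_TYPE.TO
        if TO._gckind == 'gc':            # GcStruct, GcArray, ...
            return 'ref<%s>' % (TO,)
        else:
            return 'uptr<%s>' % (TO,)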
Most RPython programs seem to follow this pattern: GcStructs are used within the RPython program, and non-GC Structs can be passed to external C programs by Ptr. But the vtable appears to be a special case (see rpython/rtyper/rclass.py:160): 1. It is a Struct, but not a GcStruct, so it is not allocated in the GC heap. 2. But it is only used internally by RPython; it is never exposed to native C programs. It is a valid approach to implement vtables as non-GC objects, because the number of RPython types is determined at compile time, and there are only as many vtables as GC-ed types. But I think our current translation strategy --- translating the RPython Ptr to a Mu uptr --- is problematic: the link from GC objects to their respective vtables only exists within Mu programs, so it shouldn't be translated to uptr, which should only be used for the native interface. Furthermore, Mu restricts the T of uptr<T> to be an untraced type. vtables contain many function references, which are a kind of traced reference in Mu. So referring to a vtable by uptr will not work unless we relax the restriction to allow accessing traced reference fields using uptr, which is undesirable because uptr is specifically designed to access native C data, not Mu objects. Currently, we allocate all vtables as static variables (still traced by the GC). So, an alternative to uptr is iref, which is a traced reference type and can still be used to refer to static variables. Now, we see there are three kinds of RPython-level Ptr values: 1. Those pointing to GC objects. 2. Those pointing to non-GC objects which can be accessed by native programs (e.g. the byte buffers to be read by the `write` system call when printing) 3. Those pointing to non-GC internal objects within RPython (e.g. vtable) The problem is: there seems to be not enough static type information to distinguish between case 2 and case 3. If we could distinguish between 2 and 3, we could translate case 2 to uptr and case 3 to iref. But RPython seems to be using Ptr for these two purposes interchangeably. Therefore, the Mu backend will have to tell whether an RPython-level Ptr field is ever exposed to native programs or not, which can only be decided at run time. Understandably, using the C backend, both vtables and C-accessible objects are represented as C-level global variables of struct types. Therefore it was not necessary to make such a distinction, because both are accessed by C-level pointers after translation. So I would like to know: Is the vtable the only non-GC Struct type that is used internally within an RPython program (instead of being exposed to native C programs)? If it is, we can make a special case and translate Ptr to iref. But the ideal way to implement vtables is to translate them to GC objects. In this way, vtables can be managed just like any other GC objects. Since Mu does not automatically add RTTI to heap objects, Mu can still allocate vtables on the GC heap even though the vtable is not an RttiStruct. Regards, Kunshan Wang Australian National University -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: OpenPGP digital signature URL: From armin.rigo at gmail.com Thu Feb 9 02:17:56 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Thu, 9 Feb 2017 08:17:56 +0100 Subject: [pypy-dev] vtable: non-GC allocation unit used only within RPython programs In-Reply-To: <4da2eadb-6da7-9a46-584b-9440913f9163@anu.edu.au> References: <4da2eadb-6da7-9a46-584b-9440913f9163@anu.edu.au> Message-ID: Hi, On 8 February 2017 at 11:07, Kunshan Wang wrote: > 1. Those pointing to GC objects. > 2. Those pointing to non-GC objects which can be accessed by native > programs (e.g. the byte buffers to be read by the `write` system call > when printing) > 3. Those pointing to non-GC internal objects within RPython (e.g. vtable) > > The problem is: there seems to be not enough static type information to > distinguish between case 2 and case 3. To summarize, the native model of PyPy breaks because you can't reference GC objects from non-GC data, whereas this occurs commonly inside PyPy. But I'm more concerned about function pointers: you say this is GC data too in Mu? How can you then pass a function pointer, pointing to a callback, to external C code? To distinguish between case 2 and case 3, you could simply look inside the Struct at whether it contains references to GC objects or not. This is not perfect but it should work well enough (and can be tweaked manually if necessary in one corner case or two). A bientôt, Armin.
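A rough sketch of that check in terms of lltype (illustrative only; a real Mu backend would also have to decide separately what to do with function pointers, which Armin flags above as the harder question):

    from rpython.rtyper.lltypesystem import lltype

    def contains_gc_pointers(TYPE):
        # True if a raw Struct/Array holds, directly or in an inlined
        # sub-structure, a pointer to a GC-managed container.  Per Armin's
        # summary above, PyPy's internal non-GC data (case 3, e.g. vtables)
        # commonly does, while data handed to native code (case 2) does not,
        # so such a type would be translated to iref rather than uptr.
        if isinstance(TYPE, lltype.Ptr):
            return TYPE.TO._gckind == 'gc'
        if isinstance(TYPE, lltype.Struct):
            return any(contains_gc_pointers(TYPE._flds[name])
                       for name in TYPE._names)
        if isinstance(TYPE, lltype.Array):
            return contains_gc_pointers(TYPE.OF)
        return False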
From planrichi at gmail.com Wed Feb 15 05:22:13 2017 From: planrichi at gmail.com (Richard Plangger) Date: Wed, 15 Feb 2017 11:22:13 +0100 Subject: [pypy-dev] VMProf 0.4.0 Message-ID: Hi there, I'm currently finishing up vmprof native profiling for CPython & PyPy. Here are some highlights (soon to be released in 0.4.0): * Native profiling (C stack) is included in the profile (using libunwind) for Linux & Mac * Windows 64bit support (no native profiling) * The platform that reads the profile can be different from the platform that generated it * vmprof.com updates to the flamegraph and general style changes * Documentation updates I'm happy to receive any feedback. PyPy support is nearly finished, but there is one issue: On my branch (called vmprof-native), libunwind is required to be available on the system. Which means that we need to install those on the buildbots (which is fine) and libunwind suddenly becomes a dependency when building PyPy on Linux and Mac. Is the latter ok? Or should libunwind be an optional dependency? Cheers, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.rigo at gmail.com Wed Feb 15 05:32:14 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Wed, 15 Feb 2017 11:32:14 +0100 Subject: [pypy-dev] VMProf 0.4.0 In-Reply-To: References: Message-ID: Hi Richard, On 15 February 2017 at 11:22, Richard Plangger wrote: > On my branch (called vmprof-native), libunwind is required to be available > on the system. Which means that we need to install those on the buildbots > (which is fine) and libunwind suddenly becomes a dependency when building > PyPy on Linux and Mac. Is the latter ok? Or should libunwind be an optional > dependency? Avoiding to make it a hard dependency would be a good idea. Also, libunwind is a hack that showed problems when vmprof previously supported C frames, and it was removed for that reason. Maybe you should give a word about why re-enabling C frames with libunwind looks ok now? A bientôt, Armin. From planrichi at gmail.com Wed Feb 15 06:10:19 2017 From: planrichi at gmail.com (Richard Plangger) Date: Wed, 15 Feb 2017 12:10:19 +0100 Subject: [pypy-dev] VMProf 0.4.0 In-Reply-To: References: Message-ID: <9eb04732-499b-eebb-c4f9-3680e3c61156@gmail.com> Hi, > Avoiding to make it a hard dependency would be a good idea. Also, > libunwind is a hack that showed problems when vmprof previously > supported C frames, and it was removed for that reason. Maybe you > should give a word about why re-enabling C frames with libunwind looks > ok now? Can you elaborate on that? My understanding is that Maciej at some point complained that there are some issues and removed that feature (which Antonio was not particularly happy about). I never learned the issues. One complaint I remember (should be on IRC logs some place): It was along the lines: "... some times libunwind returns garbage ..." Speaking from my experience during the development: I have not seen a C stack trace that is totally implausible (even though I thought so, but it turned out to be some other error). It was very hard to get the trampoline right and it was also not easy to get the stack walking right (considering it should work for all platform combinations). 
There are several cases where libunwind could say: 'hey, unsure what to do, cannot rebuild the stack' and those cases now lead to skipping the sample in the signal. I'm happy to accept the fact that 'libunwind is garbage/hack/...' if somebody proves that. So if anyone has some insight on that, please speak up. There are alternatives like: use libbacktrace from gcc (which is already included in vmprof now). Cheers, Richard From fijall at gmail.com Wed Feb 15 08:53:23 2017 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 15 Feb 2017 14:53:23 +0100 Subject: [pypy-dev] VMProf 0.4.0 In-Reply-To: <9eb04732-499b-eebb-c4f9-3680e3c61156@gmail.com> References: <9eb04732-499b-eebb-c4f9-3680e3c61156@gmail.com> Message-ID: There was definitely a massive problem with libunwind & JIT frames, which made it unsuitable for windows and os x. Another issue was that libunwind made traces ten times bigger, for no immediate benefit other than "it might be useful some day" and added complexity. On linux I was getting ~7% of stack that was not correctly rebuilt. This is not an issue if you assume that the 7% is statistically distributed evenly, but I heavily doubt that is the case (and there is no way to check) which made us build a more robust approach. I think I would like the following properties: * everything works without libunwind, native=True raises an exception * with libunwind, we don't lose frames in python just because libunwind is unable to reconstruct the stack * we don't pay 10x storage just because there is an option to want native frames Can we have that? PS. How does the new approach work? If it always uses libunwind and ditches the original approach I'm very much -1 Cheers, fijal On Wed, Feb 15, 2017 at 12:10 PM, Richard Plangger wrote: > Hi, > >> Avoiding to make it a hard dependency would be a good idea. Also, >> libunwind is a hack that showed problems when vmprof previously >> supported C frames, and it was removed for that reason. Maybe you >> should give a word about why re-enabling C frames with libunwind looks >> ok now? > > Can you elaborate on that? My understanding is that Maciej at some point > complained that there are some issues and removed that feature (which > Antonio was not particularly happy about). I never learned the issues. > One complaint I remember (should be on IRC logs some place): > > It was along the lines: "... some times libunwind returns garbage ..." > > Speaking from my experience during the development: > > I have not seen a C stack trace that is totally implausible (even though > I thought so, but it turned out to be some other error). It was very hard > to get the trampoline right and it was also not easy to get the stack > walking right (considering it should work for all platform combinations). > > There are several cases where libunwind could say: 'hey, unsure what to > do, cannot rebuild the stack' and those cases now lead to skipping the > sample in the signal. > > I'm happy to accept the fact that 'libunwind is garbage/hack/...' if > somebody proves that. So if anyone has some insight on that, please > speak up. There are alternatives like: use libbacktrace from gcc (which > is already included in vmprof now). 
> > Cheers, > Richard > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From planrichi at gmail.com Wed Feb 15 09:42:31 2017 From: planrichi at gmail.com (Richard Plangger) Date: Wed, 15 Feb 2017 15:42:31 +0100 Subject: [pypy-dev] VMProf 0.4.0 In-Reply-To: References: <9eb04732-499b-eebb-c4f9-3680e3c61156@gmail.com> Message-ID: Hi, sorry about the lengthy email, but you asked for details :) windows is certainly out of scope (not even sure if somebody succeeded compiling libunwind on windows, probably it needs lots of porting). Some technical details about how it works now (I did not finish it completely yet): In the PyPy world: * _U_dyn_register exposed by libunwind is used to save every piece of assembler generated by the JIT (_U_dyn_cancel is called if a loop token is finally collected), otherwise one cannot reliably do the matching between JIT trace <-> native libunwind symbol * The stack PyPy maintains for its frames is still maintained as before. Did you do that back then for PyPy as well? Stack walking: Depending on vmprof.enable(native=True/False) either: 1) native=True, the C stack is walked matching a special symbol __vmprof_eval_vmprof to the entries of 'kind' VMPROF_CODE_TAG. All other symbols exposed by pypy-c or libpypy-c.so are *ignored*. Most of them are internal functions during the interpreter loop. For PyPy this is not entirely true, because we include large parts of the standard library within libpypy-c.so, whereas cpython separates them into other shared objects. This simply means you cannot log stack frames within those shared objects/executable. Using the facility in libunwind for dynamic code (_U_dyn_register), one can match the JIT frames with 'kind' VMPROF_JITTED_TAG. All other symbols are considered native and are logged as kind VMPROF_NATIVE_TAG. 2) native=False, which means the stack is walked as it was before (iterating the list of pypy stack frames). This means the current setup allows you to use whichever method you prefer. Some notes about the properties: > * everything works without libunwind, native=True raises an exception > * with libunwind, we don't lose frames in python just because > libunwind is unable to reconstruct the stack Yes I agree, though I would like to have that in the same pypy-c executable, meaning that native=True does not raise, but simply fulfills the second property. > * we don't pay 10x storage just because there is an option to want native frames As described above, the filtering of libpypy-c.so/pypy-c internal symbols greatly reduces the size of the resulting traces. If you use gdb from time to time, 500 stack entries is nothing for cpython :), and the filtering did a good job of making that much smaller (in some tests deep down in a pytest stack frame, 300 stack frames became 37 frames, all but ~5 of them python-level stack frames). > On linux I was getting ~7% of stack that was not correctly rebuilt. > This is not an issue if you assume that the 7% is statistically > distributed evenly, but I heavily doubt that is the case (and there is > no way to check) which made us build a more robust approach. We could check that by logging 'trace was canceled' and saving a timestamp/counter for each signal. If a certain pattern occurred you could tell the user: 'this profile has skipped lots of signals in a very short period, please run with native=False'. Can you elaborate on the issue on PyPy + Mac OS X? I remember that you said that there are issues with the JIT code map. Which ones? Cheers, Richard
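For reference, the user-facing side of what is described above is just the extra keyword argument on vmprof.enable(); a minimal usage sketch (the file name and run_workload() are placeholders, and the native flag is the new 0.4.0 option discussed in this thread):

    import vmprof

    with open('pypy-native.prof', 'w+b') as fp:
        vmprof.enable(fp.fileno(), native=True)  # native=False keeps the old
        try:                                     # Python-frames-only walking
            run_workload()                       # placeholder for the code to profile
        finally:
            vmprof.disable()

The resulting file can then be read on a different platform and inspected with the vmprof tools or uploaded to vmprof.com, as mentioned at the start of the thread.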
From John.Zhang at anu.edu.au Wed Feb 15 19:50:48 2017 From: John.Zhang at anu.edu.au (John Zhang) Date: Thu, 16 Feb 2017 00:50:48 +0000 Subject: [pypy-dev] '@rpython/$(TARGET)' problem Message-ID: Hi all (Armin?), I have been troubled by the line rpython/translator/platform/darwin.py:35, where it adds '@rpath/$(TARGET)' to the linker arguments for shared libraries. This assumes that it will be used from a Makefile, where $(TARGET) is defined. However, in the ExternalCompilationInfo.compile_shared_library function, which I use to compile a few support C files into a shared library dependency, there is no Makefile involved, yet the flag is still involved in the final linking. This seemed to pose a problem for other shared libraries that depend on the compiled library, and dlopen was complaining that it cannot find the library '@rpath/$(TARGET)'. I'm wondering if there is a good solution for this. A possible solution would be to add a default parameter for _args_for_shared, which could be overridden when not compiling via a Makefile. What do you think? The attached is a suggested solution. Regards, John Zhang ------------------------------------------------------ John Zhang Research Assistant Programming Languages, Design & Implementation Division Computer Systems Group ANU College of Engineering & Computer Science 108 North Rd The Australian National University Acton ACT 2601 john.zhang at anu.edu.au -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: suggestion.patch Type: application/octet-stream Size: 1747 bytes Desc: suggestion.patch URL: From robert.zaremba at scale-it.pl Thu Feb 16 12:48:26 2017 From: robert.zaremba at scale-it.pl (Robert Zaremba) Date: Thu, 16 Feb 2017 18:48:26 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2017 Message-ID: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> Hi, I would like to join the Leysin sprint. Any additional steps I have to do? When do I need to confirm dates? I will be available 25/02 - 1/03, thought if there will be nothing on 25 then I prefer to come on 26/02 morning. About me: I'm a software specialist, programming in Python for 11 years, thought during my last 3 years I was doing mostly Go, and Python for some visualization. I would like to work on Python 3.5 support. Robert Zaremba From me at manueljacob.de Thu Feb 16 13:00:39 2017 From: me at manueljacob.de (Manuel Jacob) Date: Thu, 16 Feb 2017 19:00:39 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2017 In-Reply-To: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> References: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> Message-ID: <4227efd5c0b6fa872f6b06710e685232@manueljacob.de> Hi Robert, Welcome! To register, you can send a pull request adding yourself to this file: https://bitbucket.org/pypy/extradoc/src/extradoc/sprintinfo/leysin-winter-2017/people.txt?at=extradoc&fileviewer=file-view-default Alternatively, someone can do it for you if you want. We will start working around noon on the 26th. If you can get there in the morning, you should come on the 26th. If you have to travel all day and can only arrive at the evening, I'd recommend coming on the 25th. On 2017-02-16 18:48, Robert Zaremba wrote: > Hi, > > I would like to join the Leysin sprint. > Any additional steps I have to do? When do I need to confirm dates? 
> I will be available 25/02 - 1/03, thought if there will be nothing on > 25 then I prefer to come on 26/02 morning. > > About me: > I'm a software specialist, programming in Python for 11 years, thought > during my last 3 years I was doing mostly Go, and Python for some > visualization. > I would like to work on Python 3.5 support. > > Robert Zaremba > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From planrichi at gmail.com Thu Feb 16 13:18:02 2017 From: planrichi at gmail.com (Richard Plangger) Date: Thu, 16 Feb 2017 19:18:02 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2017 In-Reply-To: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> References: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> Message-ID: <053ab265-1261-ef5f-f43b-20eb0023d9d7@gmail.com> Hi Robert, I have added you (25.01-01.03). Looking forward meeting you there! Cheers, Richard On 02/16/2017 06:48 PM, Robert Zaremba wrote: > Hi, > > I would like to join the Leysin sprint. > Any additional steps I have to do? When do I need to confirm dates? > I will be available 25/02 - 1/03, thought if there will be nothing on 25 > then I prefer to come on 26/02 morning. > > About me: > I'm a software specialist, programming in Python for 11 years, thought > during my last 3 years I was doing mostly Go, and Python for some > visualization. > I would like to work on Python 3.5 support. > > Robert Zaremba > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From robert.zaremba at scale-it.pl Thu Feb 16 14:17:15 2017 From: robert.zaremba at scale-it.pl (Robert Zaremba) Date: Thu, 16 Feb 2017 20:17:15 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2017 In-Reply-To: <4227efd5c0b6fa872f6b06710e685232@manueljacob.de> References: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> <4227efd5c0b6fa872f6b06710e685232@manueljacob.de> Message-ID: @Manuel - I'm 2h away from Leysin, so it's just a matter of organization. If there is no plan for 25th, or everybody will be late then I will come on 26/02 @Richard Thanks for adding me in. Should I contact Ermina to confirm accommodation? See you! On 02/16/2017 07:00 PM, Manuel Jacob wrote: > Hi Robert, > > Welcome! > > To register, you can send a pull request adding yourself to this file: > https://bitbucket.org/pypy/extradoc/src/extradoc/sprintinfo/leysin-winter-2017/people.txt?at=extradoc&fileviewer=file-view-default > > > Alternatively, someone can do it for you if you want. > > We will start working around noon on the 26th. If you can get there in > the morning, you should come on the 26th. If you have to travel all day > and can only arrive at the evening, I'd recommend coming on the 25th. > > On 2017-02-16 18:48, Robert Zaremba wrote: >> Hi, >> >> I would like to join the Leysin sprint. >> Any additional steps I have to do? When do I need to confirm dates? >> I will be available 25/02 - 1/03, thought if there will be nothing on >> 25 then I prefer to come on 26/02 morning. >> >> About me: >> I'm a software specialist, programming in Python for 11 years, thought >> during my last 3 years I was doing mostly Go, and Python for some >> visualization. >> I would like to work on Python 3.5 support. 
>> >> Robert Zaremba >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev From armin.rigo at gmail.com Thu Feb 16 14:28:06 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Thu, 16 Feb 2017 20:28:06 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2017 In-Reply-To: References: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> <4227efd5c0b6fa872f6b06710e685232@manueljacob.de> Message-ID: Hi Robert, and welcome :-) On 16 February 2017 at 20:17, Robert Zaremba wrote: > @Manuel - I'm 2h away from Leysin, so it's just a matter of organization. If > there is no plan for 25th, or everybody will be late then I will come on > 26/02 I guess you should come the 26th then. So far, arrivals are scattered between the 25th (evening I guess) and the 27th (question mark included). > @Richard Thanks for adding me in. Should I contact Ermina to confirm > accommodation? I have a group arrangement. Please tell me if you'd like a separate room (and in this case, you can either contact Ermina or leave it to me); otherwise, by default, everybody should be in one or two larger rooms (3-5 people each). A bient?t, Armin. From robert.zaremba at scale-it.pl Fri Feb 17 01:35:50 2017 From: robert.zaremba at scale-it.pl (Robert Zaremba) Date: Fri, 17 Feb 2017 07:35:50 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2017 In-Reply-To: References: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> <4227efd5c0b6fa872f6b06710e685232@manueljacob.de> Message-ID: Great. Group room is fine. Could you give me your mobile number, or to somebody responsible there, please? Greetings. On 02/16/2017 08:28 PM, Armin Rigo wrote: > I have a group arrangement. Please tell me if you'd like a separate > room (and in this case, you can either contact Ermina or leave it to > me); otherwise, by default, everybody should be in one or two larger > rooms (3-5 people each). From armin.rigo at gmail.com Mon Feb 20 10:29:52 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Mon, 20 Feb 2017 16:29:52 +0100 Subject: [pypy-dev] '@rpython/$(TARGET)' problem In-Reply-To: References: Message-ID: Hi John, On 16 February 2017 at 01:50, John Zhang wrote: > Hi all (Armin?), > I have been troubled by the line rpython/translator/platform/drawin.py:35, > where it adds ?@rpath/$(TARGET)? to the linker arguments for shared > libraries. This assumes that the usage case will be from a Makefile, where > $(TARGET) is defined. However, in > ExternalCompilationInfo.compile_shared_library function, which I use to > compile a few support C files into a shared library dependency, there is no > Makefile involved, yet the flag is still involved in the final linking. This > seemed to pose a problem for other shared library that depends on the > compiled library, and dlopen was complaining that it cannot find library > ?@rpath/$(TARGET)?. > I?m wondering if there is a good solution for this. A possible solution > would be to add a default parameter for _args_for_shared, which could be > altered if not compiled using Makefile. > What do you think? > The attached is a suggested solution. I've checked in a similar solution (your patch is not acceptable because it breaks on non-OS/X platforms). Can you check if it works for you? A bient?t, Armin. 
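For readers following the thread, the change being discussed is roughly of the following shape (a simplified sketch of John's suggested workaround only, with an abbreviated flag list; it is not the actual suggestion.patch nor the commit Armin mentions):

    # rpython/translator/platform/darwin.py (sketch)
    from rpython.translator.platform import posix

    class Darwin(posix.BasePosix):
        # ... the rest of the platform class is unchanged ...

        def _args_for_shared(self, args, target='$(TARGET)'):
            # 'target' defaults to the Makefile variable; callers that link a
            # shared library without a Makefile (e.g. through
            # ExternalCompilationInfo.compile_shared_library) can pass the real
            # output name instead, so the recorded install_name stays
            # resolvable by dlopen.
            return ['-dynamiclib', '-undefined', 'dynamic_lookup',
                    '-install_name', '@rpath/%s' % (target,)] + args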
From robert.zaremba at scale-it.pl Fri Feb 24 14:01:32 2017 From: robert.zaremba at scale-it.pl (Robert Zaremba) Date: Fri, 24 Feb 2017 20:01:32 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2017 In-Reply-To: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> References: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> Message-ID: <15a717fea2b.c1a5420b42294.3355259016841101003@scale-it.pl> Hi, Will be on 26, morning. I can offer a car lift from Geneva airport if anyone would need. Do you plan to code from the morning or have some skiing? -- Robert Zaremba ---- On Thu, 16 Feb 2017 18:48:26 +0100 Robert Zaremba wrote ---- > Hi, > > I would like to join the Leysin sprint. > Any additional steps I have to do? When do I need to confirm dates? > I will be available 25/02 - 1/03, thought if there will be nothing on 25 > then I prefer to come on 26/02 morning. > > About me: > I'm a software specialist, programming in Python for 11 years, thought > during my last 3 years I was doing mostly Go, and Python for some > visualization. > I would like to work on Python 3.5 support. > > Robert Zaremba > From armin.rigo at gmail.com Fri Feb 24 17:18:53 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Fri, 24 Feb 2017 23:18:53 +0100 Subject: [pypy-dev] Leysin Winter Sprint 2017 In-Reply-To: <15a717fea2b.c1a5420b42294.3355259016841101003@scale-it.pl> References: <6e1a8fb8-66b6-ab85-28a7-6307c07c5353@scale-it.pl> <15a717fea2b.c1a5420b42294.3355259016841101003@scale-it.pl> Message-ID: Hi Robert, On 24 February 2017 at 20:01, Robert Zaremba wrote: > Do you plan to code from the morning or have some skiing? The plan is to take a break day in the middle of the week for skiing. The exact date depends on the meteo and the people. On Sunday 26th I think we'll start at around 10:30 or 11am. The following days we start at 10am, and we code until it's time for dinner, with a picnic lunch in the middle. That's the general rule, but of course people are welcome to take more (or less) breaks too. A bient?t, Armin. From sebastiankaster at googlemail.com Sat Feb 25 20:02:24 2017 From: sebastiankaster at googlemail.com (Sebastian K) Date: Sun, 26 Feb 2017 02:02:24 +0100 Subject: [pypy-dev] GIL Message-ID: Hello everbody, can someone explain me why pypy needs the gil? In CPython the main reason is the reference counting but pypy doesnt use reference counting. Thanks in advance. Regards Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: From phyo.arkarlwin at gmail.com Sun Feb 26 05:04:50 2017 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 26 Feb 2017 10:04:50 +0000 Subject: [pypy-dev] GIL In-Reply-To: References: Message-ID: Please check PyPy-STM http://doc.pypy.org/en/latest/stm.html On Sun, Feb 26, 2017 at 3:43 PM Sebastian K via pypy-dev < pypy-dev at python.org> wrote: > Hello everbody, > > can someone explain me why pypy needs the gil? In CPython the main reason > is the reference counting but pypy doesnt use reference counting. > Thanks in advance. > > Regards > > Sebastian > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yury at shurup.com Sun Feb 26 05:12:50 2017 From: yury at shurup.com (Yury V. 
Zaytsev) Date: Sun, 26 Feb 2017 11:12:50 +0100 (CET) Subject: [pypy-dev] GIL In-Reply-To: References: Message-ID: On Sun, 26 Feb 2017, Sebastian K via pypy-dev wrote: > can someone explain me why pypy needs the gil? In CPython the main > reason is the reference counting but pypy doesnt use reference counting. The main reason is not the reference counting itself, but rather that the garbage collector (which happens to use reference counting) is not thread-safe. It is true that PyPy's garbage collectors do not use reference counting, but in itself, this doesn't make them any thread-safer. On top of that, there are more subtle issues to take care of, so instead of working towards removing the GIL, the decision was made to investigate a completely different approach to parallelization (STM): http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why -- Sincerely yours, Yury V. Zaytsev From phyo.arkarlwin at gmail.com Sun Feb 26 06:11:02 2017 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 26 Feb 2017 11:11:02 +0000 Subject: [pypy-dev] GIL In-Reply-To: References: Message-ID: Currenly focus is on PyPy3.5 right? will there be continuation of development after PyPy3.5? On Sun, Feb 26, 2017 at 4:59 PM Yury V. Zaytsev wrote: > On Sun, 26 Feb 2017, Sebastian K via pypy-dev wrote: > > > can someone explain me why pypy needs the gil? In CPython the main > > reason is the reference counting but pypy doesnt use reference counting. > > The main reason is not the reference counting itself, but rather that the > garbage collector (which happens to use reference counting) is not > thread-safe. It is true that PyPy's garbage collectors do not use > reference counting, but in itself, this doesn't make them any > thread-safer. On top of that, there are more subtle issues to take care > of, so instead of working towards removing the GIL, the decision was made > to investigate a completely different approach to parallelization (STM): > > http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why > > -- > Sincerely yours, > Yury V. Zaytsev > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phyo.arkarlwin at gmail.com Sun Feb 26 06:11:44 2017 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sun, 26 Feb 2017 11:11:44 +0000 Subject: [pypy-dev] GIL In-Reply-To: References: Message-ID: Continuation of PyPy-STM dev. On Sun, Feb 26, 2017 at 5:41 PM Phyo Arkar wrote: > Currenly focus is on PyPy3.5 right? will there be continuation of > development after PyPy3.5? > > On Sun, Feb 26, 2017 at 4:59 PM Yury V. Zaytsev wrote: > > On Sun, 26 Feb 2017, Sebastian K via pypy-dev wrote: > > > can someone explain me why pypy needs the gil? In CPython the main > > reason is the reference counting but pypy doesnt use reference counting. > > The main reason is not the reference counting itself, but rather that the > garbage collector (which happens to use reference counting) is not > thread-safe. It is true that PyPy's garbage collectors do not use > reference counting, but in itself, this doesn't make them any > thread-safer. 
On top of that, there are more subtle issues to take care > of, so instead of working towards removing the GIL, the decision was made > to investigate a completely different approach to parallelization (STM): > > http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why > > -- > Sincerely yours, > Yury V. Zaytsev > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.rigo at gmail.com Sun Feb 26 06:46:23 2017 From: armin.rigo at gmail.com (Armin Rigo) Date: Sun, 26 Feb 2017 12:46:23 +0100 Subject: [pypy-dev] GIL In-Reply-To: References: Message-ID: Hi, On 26 February 2017 at 11:12, Yury V. Zaytsev wrote: > The main reason is not the reference counting itself, but rather that the > garbage collector (which happens to use reference counting) is not > thread-safe. It is true that PyPy's garbage collectors do not use reference > counting, but in itself, this doesn't make them any thread-safer. On top of > that, there are more subtle issues to take care of (...) To complete that answer, I updated the faq: http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why A bient?t, Armin. From yashwardhan.singh at intel.com Tue Feb 28 14:06:35 2017 From: yashwardhan.singh at intel.com (Singh, Yashwardhan) Date: Tue, 28 Feb 2017 19:06:35 +0000 Subject: [pypy-dev] Pandas on PyPy Message-ID: <0151F66FF725AC42A760DA612754C5F819911FE5@ORSMSX104.amr.corp.intel.com> Hi Everyone, I was hoping to use Pandas with PyPy to speed up applications, but ran into compatibility issues with PyPy. Has anyone tried to make PyPy works with Pandas, by any experiment/hacking method? If yes, would you share your findings with me? If not, could someone points me to where the obstacles are and any potential approach, or plan by this community? Any help is greatly appreciated. Yash -------------- next part -------------- An HTML attachment was scrubbed... URL: From phyo.arkarlwin at gmail.com Tue Feb 28 14:28:25 2017 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Tue, 28 Feb 2017 19:28:25 +0000 Subject: [pypy-dev] Pandas on PyPy In-Reply-To: <0151F66FF725AC42A760DA612754C5F819911FE5@ORSMSX104.amr.corp.intel.com> References: <0151F66FF725AC42A760DA612754C5F819911FE5@ORSMSX104.amr.corp.intel.com> Message-ID: Hi See this : https://morepypy.blogspot.com/2016/11/pypy27-v56-released-stdlib-2712-support.html On Wed, Mar 1, 2017 at 1:38 AM Singh, Yashwardhan < yashwardhan.singh at intel.com> wrote: > Hi Everyone, > > I was hoping to use Pandas with PyPy to speed up applications, but ran > into compatibility issues with PyPy. > Has anyone tried to make PyPy works with Pandas, by any experiment/hacking > method? > If yes, would you share your findings with me? If not, could someone > points me to where the obstacles are and any potential approach, or plan by > this community? > > Any help is greatly appreciated. > > Yash > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fijall at gmail.com Tue Feb 28 16:48:09 2017 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 28 Feb 2017 22:48:09 +0100 Subject: [pypy-dev] Pandas on PyPy In-Reply-To: <0151F66FF725AC42A760DA612754C5F819911FE5@ORSMSX104.amr.corp.intel.com> References: <0151F66FF725AC42A760DA612754C5F819911FE5@ORSMSX104.amr.corp.intel.com> Message-ID: Hi Singh. We're working hard on getting pypy to run pandas at this very moment :-) At present, you would be unlikely to find significant speedups - our C API emulation layer is slow (unless you do a lot of work outside of pandas without pandas objects). We will work on it, wait for the next release, hopefully! Best regards, Maciej Fijalkowski On Tue, Feb 28, 2017 at 8:06 PM, Singh, Yashwardhan wrote: > Hi Everyone, > > I was hoping to use Pandas with PyPy to speed up applications, but ran into > compatibility issues with PyPy. > Has anyone tried to make PyPy works with Pandas, by any experiment/hacking > method? > If yes, would you share your findings with me? If not, could someone points > me to where the obstacles are and any potential approach, or plan by this > community? > > Any help is greatly appreciated. > > Yash > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev >