From arigo at tunes.org Sun May 1 06:14:27 2016
From: arigo at tunes.org (Armin Rigo)
Date: Sun, 1 May 2016 12:14:27 +0200
Subject: [pypy-dev] cpyext-auto-gil?
Message-ID:

Hi all, hi Matti,

If a CPython C extension module is written in a way that calls some PyXxx() function without the GIL (hoping that the code in this particular function is kept simple enough that it still works), then in the same situation it would loudly complain and crash on PyPy. This is a problem for numpy, which had to be worked around.

Now I did a minimal change in the branch cpyext-auto-gil: instead of complaining, we acquire/release the GIL. Does it sound like a good idea? Could you check if it solves the numpy issue?

A bientôt,

Armin.

From matti.picus at gmail.com Sun May 1 06:55:25 2016
From: matti.picus at gmail.com (Matti Picus)
Date: Sun, 1 May 2016 13:55:25 +0300
Subject: [pypy-dev] cpyext-auto-gil?
In-Reply-To:
References:
Message-ID: <5725E09D.4030106@gmail.com>

On 01/05/16 13:14, Armin Rigo wrote:
> Hi all, hi Matti,
>
> If a CPython C extension module is written in a way that calls some
> PyXxx() function without the GIL (hoping that the code in this
> particular function is kept simple enough that it still works), then
> in the same situation it would loudly complain and crash on PyPy.
> This is a problem for numpy, which had to be worked around.
>
> Now I did a minimal change in the branch cpyext-auto-gil: instead of
> complaining, we acquire/release the GIL. Does it sound like a good
> idea? Could you check if it solves the numpy issue?
>
>
> A bientôt,
>
> Armin.
> _______________________________________________
> pypy-dev mailing list
> pypy-dev at python.org
> https://mail.python.org/mailman/listinfo/pypy-dev

Should be easy enough to check - it should relax the requirement to use NPY_NOSMP=1 when building github/pypy/numpy.
I can try it
Matti

From matti.picus at gmail.com Tue May 3 13:02:09 2016
From: matti.picus at gmail.com (Matti Picus)
Date: Tue, 3 May 2016 20:02:09 +0300
Subject: [pypy-dev] PyPy 5.1.1 released
Message-ID: <5728D991.4060208@gmail.com>

I have released a bug-fix for 5.1, the blog post has details
http://morepypy.blogspot.com/2016/05/pypy-511-bugfix-released.html

From yury at shurup.com Tue May 3 13:34:10 2016
From: yury at shurup.com (Yury V. Zaytsev)
Date: Tue, 03 May 2016 19:34:10 +0200
Subject: [pypy-dev] PyPy & Django: recommended mySQL module?
Message-ID: <1462296850.2708.5.camel@newpride>

Hi,

I'm thinking of trying PyPy on a Python 2.7 application based on Django, and I'm wondering what mySQL module would be the recommended one these days to use for the ORM backend?

I understand that the latest version MySQL-Python should just work, although will be going through CPyExt, so I'm not sure how stable it's gonna be / what's the performance going to look like.

It seems that there is also a pure Python connector called PyMySQL, which should work out of the box and not rely on CPyExt.

Has anyone done any recent benchmarks and/or can give me a hint which of the two is currently the way to go?

Many thanks!

--
Sincerely yours,
Yury V. Zaytsev

From hrc706 at gmail.com Thu May 5 02:17:52 2016
From: hrc706 at gmail.com (=?utf-8?B?6buE6Iul5bCY?=)
Date: Thu, 5 May 2016 15:17:52 +0900
Subject: [pypy-dev] How can I retrieve target-level language's bytecode in functions of JitHookInterface?
Message-ID:

Hi all,

My name is Ruochen Huang, and I'm developing an Erlang bytecode virtual machine called Pyrlang.
Recently I'm trying to apply a new tracing JIT policy to Pyrlang, and I found there are some traces that were too long and therefore aborted by the JIT according to rpython's JIT log. In order to find those "too long" traces in my benchmark programs, I'm trying to implement the JitHookInterface in rpython, and rewrite the on_abort function there. However, until now I can only print out the rpython level bytecode instructions from the "operations" arguments, but what I want are the target language (e.g., Erlang in my project) bytecode instructions. Could somebody tell me whether it is possible to retrieve it by calling some rpython API?

Best Regards,
Ruochen Huang
Tokyo Institute of Technology

From arigo at tunes.org Thu May 5 02:32:55 2016
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 5 May 2016 08:32:55 +0200
Subject: [pypy-dev] How can I retrieve target-level language's bytecode in functions of JitHookInterface?
In-Reply-To:
References:
Message-ID:

Hi Ruochen,

On 5 May 2016 at 08:17, Ruochen Huang wrote:
> Recently I'm trying to apply a new tracing JIT policy to Pyrlang, I found there are some traces that were too long and therefore aborted by the JIT according to rpython's JIT log.

You can find them without using JitHookInterface:

jitdriver = JitDriver(..., get_printable_location=get_printable_location)

Then get_printable_location() should translate the greenkey into a human-readable string that describes the location. Once this is done, you can usually figure out why some specific traces are too long by looking at the PYPYLOG output:

PYPYLOG=jit:log pyrlang-c args...

This produces a file called "log" where the output of `get_printable_location()` is found in the `debug_merge_point` instructions, so that you should be able to follow it (even if it is a bit large).

A bientôt,

Armin.

From omer.drow at gmail.com Fri May 6 03:02:14 2016
From: omer.drow at gmail.com (Omer Katz)
Date: Fri, 06 May 2016 07:02:14 +0000
Subject: [pypy-dev] BList strategy
Message-ID:

https://github.com/DanielStutzbach/blist/ is an implementation of lists that is asymptotically faster. Quoting from them:

"The blist uses a flexible, hybrid array/tree structure and only needs to move a small portion of items in memory, specifically using O(log n) operations. For small lists, the blist and the built-in list have virtually identical performance."

I looked into PyPy's current list strategies and it seems we're not implementing the same algorithm as blist. I had a chance to use the package before and the implementation is sound.

I'm wondering if PyPy could by a chance benefit from implementing the blist algorithm as a strategy. Possibly even switching it to be the default.

What do you guys think?

Omer Katz
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arigo at tunes.org Fri May 6 04:56:22 2016
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 6 May 2016 10:56:22 +0200
Subject: [pypy-dev] BList strategy
In-Reply-To:
References:
Message-ID:

Hi Omer,

On 6 May 2016 at 09:02, Omer Katz wrote:
> I'm wondering if PyPy could by a chance benefit from implementing the blist
> algorithm as a strategy. Possibly even switching it to be the default.
>
> What do you guys think?

It would certainly be interesting to try.

The problem with using *only* this approach is the following. It is marketed as having "similar performance on small lists", but that's a "similar" on top of CPython.
That means that if we need (say) to read three memory locations instead of one, with a test in the middle, then it is "similar" on CPython; indeed it would hardly be a noticable overhead with the rest of the CPython interpreter around. However, it does have more impact inside JITted code. I fear we're going to replace a direct memory read with a call to a helper, for every read. It is possibly still reasonable; but possibly not. To be tried, basically. In the past we tried something similar with "ropes" (for strings). In the future we might try it with utf-8-encoded unicode strings. They all have the same drawback, and it needs to be tried and measured on a case-by-case basis. As a workaround for that problem, we could implement it as a list strategy that we only switch to if: * the number of items is large enough; * and we try to do one of the operations that would take much less time in the blist world. In that way, we would keep most lists unmodified, and only switch to blists when there is a clear benefit. A bient?t, Armin. From omer.drow at gmail.com Fri May 6 05:04:43 2016 From: omer.drow at gmail.com (Omer Katz) Date: Fri, 06 May 2016 09:04:43 +0000 Subject: [pypy-dev] BList strategy In-Reply-To: References: Message-ID: I agree that we can certainly benefit from using blists in large enough lists. How much is large enough? We'll probably need to benchmark right? Are there any JIT paths for blist operations? Will blists run faster because of the JIT or just because the rest of the runtime is more efficient? On Fri, May 6, 2016, 11:57 Armin Rigo wrote: > Hi Omer, > > On 6 May 2016 at 09:02, Omer Katz wrote: > > I'm wondering if PyPy could by a chance benefit from implementing the > blist > > algorithm as a strategy. Possibly even switching it to be the default. > > > > What do you guys think? > > It would certainly be interesting to try. > > The problem with using *only* this approach is the following. It is > marketed as having "similar performance on small lists", but that's a > "similar" on top of CPython. That means that if we need (say) to read > three memory locations instead of one, with a test in the middle, then > it is "similar" on CPython; indeed it would hardly be a noticable > overhead with the rest of the CPython interpreter around. However, it > does have more impact inside JITted code. I fear we're going to > replace a direct memory read with a call to a helper, for every read. > It is possibly still reasonable; but possibly not. To be tried, > basically. > > In the past we tried something similar with "ropes" (for strings). In > the future we might try it with utf-8-encoded unicode strings. They > all have the same drawback, and it needs to be tried and measured on a > case-by-case basis. > > As a workaround for that problem, we could implement it as a list > strategy that we only switch to if: > > * the number of items is large enough; > * and we try to do one of the operations that would take much less > time in the blist world. > > In that way, we would keep most lists unmodified, and only switch to > blists when there is a clear benefit. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Fri May 6 12:36:14 2016 From: arigo at tunes.org (Armin Rigo) Date: Fri, 6 May 2016 18:36:14 +0200 Subject: [pypy-dev] BList strategy In-Reply-To: References: Message-ID: Hi Omer, On 6 May 2016 at 11:04, Omer Katz wrote: > I agree that we can certainly benefit from using blists in large enough > lists. How much is large enough? We'll probably need to benchmark right? > > Are there any JIT paths for blist operations? Do you mean "would we need to add specific JIT support for blists"? No, unlikely. > Will blists run faster because of the JIT or just because the rest of the > runtime is more efficient? I think you're talking about my sentence "it would hardly be a noticable overhead with the rest of the CPython interpreter around.", is that correct? If that's correct, then I'm saying that our JIT usually turns a Python line like "x = somelist[index]" into a single load-from-memory CPU instruction, plus the check that "index" is not out of bounds. I'm saying that blists will most likely need more than a single CPU instruction here. So, at least in microbenchmarks, blists could easily be several times slower than regular lists, after the JIT has removed everything around it. A bient?t, Armin. From flebber.crue at gmail.com Sat May 7 08:45:05 2016 From: flebber.crue at gmail.com (Sayth Renshaw) Date: Sat, 07 May 2016 12:45:05 +0000 Subject: [pypy-dev] Create own windows distribution - Borrowing from github repo and conda Message-ID: Hi Wanting to know from your experience how hard it would be to create a python distribution for windows based on pypy ? For example if it contained the "essential" and sometimes hardest packages plus a few of the biggest names would this be feasible? For example as packages numpy, django, scikit, matplotlib or bokeh and spyder and ipython. For example in the anaconda distribution there is a github repo https://github.com/rvianello/conda-pypy which has created the option to replace the conda config and direct all packages to a different repo and use pypy as the default distribution. using something like that as a base and borrowing from the conda base would this be feasible, got to be good from an adoption stand point. Thoughts? Sayth -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun May 8 04:05:58 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 8 May 2016 10:05:58 +0200 Subject: [pypy-dev] PyPy & Django: recommended mySQL module? In-Reply-To: <1462296850.2708.5.camel@newpride> References: <1462296850.2708.5.camel@newpride> Message-ID: Hi Yury Sorry for the late response, on holiday. I would personally use something cffi-based, like this: https://github.com/andrewsmedina/mysql-cffi Generally speaking all cpyext-based solutions will be slower (although these days I know MySQL-Python should indeed just work and not crash) than non-cpyext based solutions but depending on the application you might or might not care. Cheers, fijal On Tue, May 3, 2016 at 7:34 PM, Yury V. Zaytsev wrote: > Hi, > > I'm thinking of trying PyPy on a Python 2.7 application based on Django, > and I'm wondering what mySQL module would be the recommended one these > days to use for the ORM backend? > > I understand that the lastest version MySQL-Python should just work, > although will be going through CPyExt, so I'm not sure how stable it's > gonna be / what's the performance going to look like. 
> > It seems that there is also a pure Python connector called PyMySQL, > which should work out of the box and not rely on CPyExt. > > Has anyone done any recent benchmarks and/or can give me a hint which of > the two is currently the way to go? > > Many thanks! > > -- > Sincerely yours, > Yury V. Zaytsev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From yury at shurup.com Sun May 8 05:07:28 2016 From: yury at shurup.com (Yury V. Zaytsev) Date: Sun, 8 May 2016 11:07:28 +0200 (CEST) Subject: [pypy-dev] PyPy & Django: recommended mySQL module? In-Reply-To: References: <1462296850.2708.5.camel@newpride> Message-ID: Hi Maciej, Thanks for the feedback! On Sun, 8 May 2016, Maciej Fijalkowski wrote: > > I would personally use something cffi-based, like this: > https://github.com/andrewsmedina/mysql-cffi Well, neither this seems to be supported by Django, nor it looks like it's actively maintained... :-/ > Generally speaking all cpyext-based solutions will be slower (although > these days I know MySQL-Python should indeed just work and not crash) > than non-cpyext based solutions but depending on the application you > might or might not care. Okay, so I started out with CPython and I'll keep an eye on the load; the plan is to bring up a second upstream to experiment with PyPy and see if this is getting me anywhere. I will then either try the pure Python fork of MySQL-Python, or the original and see if that's really the bottleneck. On a related note, has anyone tried binary wheels with PyPy, are they known to work? Among other dependencies I have is for example Pillow; so far I've been building binary wheels on a dedicated development server and deploying them to the application server which doesn't have any complier infrastructure installed. Will this simply work for PyPy? -- Sincerely yours, Yury V. Zaytsev From fijall at gmail.com Sun May 8 05:22:57 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 8 May 2016 11:22:57 +0200 Subject: [pypy-dev] PyPy & Django: recommended mySQL module? In-Reply-To: References: <1462296850.2708.5.camel@newpride> Message-ID: On Sun, May 8, 2016 at 11:07 AM, Yury V. Zaytsev wrote: > Hi Maciej, > > Thanks for the feedback! > > On Sun, 8 May 2016, Maciej Fijalkowski wrote: >> >> >> I would personally use something cffi-based, like this: >> https://github.com/andrewsmedina/mysql-cffi > > > Well, neither this seems to be supported by Django, nor it looks like it's > actively maintained... :-/ > >> Generally speaking all cpyext-based solutions will be slower (although >> these days I know MySQL-Python should indeed just work and not crash) than >> non-cpyext based solutions but depending on the application you might or >> might not care. > > > Okay, so I started out with CPython and I'll keep an eye on the load; the > plan is to bring up a second upstream to experiment with PyPy and see if > this is getting me anywhere. I will then either try the pure Python fork of > MySQL-Python, or the original and see if that's really the bottleneck. For the record - if you are using django ORM, then the mysql binding is unlikely to be your bottleneck for accessing the DB. > > On a related note, has anyone tried binary wheels with PyPy, are they known > to work? 
Among other dependencies I have is for example Pillow; so far I've
> been building binary wheels on a dedicated development server and deploying
> them to the application server which doesn't have any compiler
> infrastructure installed. Will this simply work for PyPy?

It should.

>
>
> --
> Sincerely yours,
> Yury V. Zaytsev

From yury at shurup.com Sun May 8 07:31:09 2016
From: yury at shurup.com (Yury V. Zaytsev)
Date: Sun, 8 May 2016 13:31:09 +0200 (CEST)
Subject: [pypy-dev] PyPy & Django: recommended mySQL module?
In-Reply-To:
References: <1462296850.2708.5.camel@newpride>
Message-ID:

On Sun, 8 May 2016, Maciej Fijalkowski wrote:

> For the record - if you are using django ORM, then the mysql binding
> is unlikely to be your bottleneck for accessing the DB.

Yep, that's what I'm using it for... good news.

>> On a related note, has anyone tried binary wheels with PyPy, are they known
>> to work? Among other dependencies I have is for example Pillow; so far I've
>> been building binary wheels on a dedicated development server and deploying
>> them to the application server which doesn't have any compiler
>> infrastructure installed. Will this simply work for PyPy?
>
> It should.

Excellent, many thanks!

--
Sincerely yours,
Yury V. Zaytsev

From arigo at tunes.org Sun May 8 09:33:24 2016
From: arigo at tunes.org (Armin Rigo)
Date: Sun, 8 May 2016 15:33:24 +0200
Subject: [pypy-dev] Fwd: Bounce action notification
In-Reply-To:
References:
Message-ID:

Hi all,

I wanted to communicate this to whoever added the pypy-dev list to "all-mail-archive.com": the subscription has been disabled---excessive or fatal bounces.

Armin

From arigo at tunes.org Sun May 8 09:56:19 2016
From: arigo at tunes.org (Armin Rigo)
Date: Sun, 8 May 2016 15:56:19 +0200
Subject: [pypy-dev] Create own windows distribution - Borrowing from github repo and conda
In-Reply-To:
References:
Message-ID:

Hi Sayth,

We're fully open to people doing this kind of packaging. From our point of view it is a good idea, as long as the packaging is kept up-to-date with reasonably recent releases of PyPy. If you want to start such a project, you are welcome to. Such a project for Windows would be particularly welcome: none of the core PyPy developers use Windows at all, so the platform tends to be left behind...

(I'll sneak in this e-mail a notice that repeats our position about 64-bit versions of PyPy for Windows. There are two ways for this to ever occur: either a third party contributes major amounts of code and support; or someone pays us for the work and for the future support (*business* rates, though not necessarily on the expensive side of that scale---contact pypy-z at python.org).)

A bientôt,

Armin.

From omer.drow at gmail.com Sun May 8 16:23:12 2016
From: omer.drow at gmail.com (Omer Katz)
Date: Sun, 08 May 2016 20:23:12 +0000
Subject: [pypy-dev] BList strategy
In-Reply-To:
References:
Message-ID:

I took a stab at it and it turns out the implementation is pretty complex. If anyone wants to give it a shot, I'd love to get some help.

On Fri, 6 May 2016 at 19:36, Armin Rigo wrote:

> Hi Omer,
>
> On 6 May 2016 at 11:04, Omer Katz wrote:
> > I agree that we can certainly benefit from using blists in large enough
> > lists. How much is large enough? We'll probably need to benchmark right?
> >
> > Are there any JIT paths for blist operations?
>
> Do you mean "would we need to add specific JIT support for blists"?
> No, unlikely.
> > > Will blists run faster because of the JIT or just because the rest of the > > runtime is more efficient? > > I think you're talking about my sentence "it would hardly be a noticable > overhead with the rest of the CPython interpreter around.", is that > correct? If that's correct, then I'm saying that our JIT usually > turns a Python line like "x = somelist[index]" into a single > load-from-memory CPU instruction, plus the check that "index" is not > out of bounds. I'm saying that blists will most likely need more than > a single CPU instruction here. So, at least in microbenchmarks, > blists could easily be several times slower than regular lists, after > the JIT has removed everything around it. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omer.drow at gmail.com Mon May 9 01:56:57 2016 From: omer.drow at gmail.com (Omer Katz) Date: Mon, 09 May 2016 05:56:57 +0000 Subject: [pypy-dev] Copying a slice of a list in RPython Message-ID: Hi guys, While implementing the BList strategy I found a need to be able to efficiently copy part of a list to another list (and sometimes to the same list). I'd rather avoid implementing something like https://github.com/DanielStutzbach/blist/blob/master/blist/_blist.c#L189 if it already exists in RPython. Someone on IRC mentioned that there is a function in the rgc module called ll_arraycopy but I'm unsure how to use it. I could implement it myself but I don't think it should live under objspace/listobject.py. Is there a utility module for objspace I should put those functions in? Thanks, Omer Katz. -------------- next part -------------- An HTML attachment was scrubbed... URL: From omer.drow at gmail.com Mon May 9 04:18:24 2016 From: omer.drow at gmail.com (Omer Katz) Date: Mon, 09 May 2016 08:18:24 +0000 Subject: [pypy-dev] PyPy & Django: recommended mySQL module? In-Reply-To: References: <1462296850.2708.5.camel@newpride> Message-ID: Postgres has better support for PyPy through https://github.com/chtd/psycopg2cffi and you can integrate it with Django easily. If you don't have to use MySQL maybe it's a good idea to use Postgres instead. Depending on what you need, you'll get much better performance and versatility with Postgres IMO. If this is a pet project it probably doesn't matter but for production use cases Postgres has superior schema alteration mechanism that mostly doesn't require downtime and starting from 9.6 it will be able to execute queries in parallel so you'll see better performance there. ??????? ??? ??, 8 ???? 2016 ?-14:32 ??? ?Yury V. Zaytsev?? :? > On Sun, 8 May 2016, Maciej Fijalkowski wrote: > > > For the record - if you are using django ORM, then the mysql binding > > is unlikely to be your bottleneck for accessing the DB. > > Yep, that's what I'm using it for... good news. > > >> On a related note, has anyone tried binary wheels with PyPy, are they > known > >> to work? Among other dependencies I have is for example Pillow; so far > I've > >> been building binary wheels on a dedicated development server and > deploying > >> them to the application server which doesn't have any complier > >> infrastructure installed. Will this simply work for PyPy? > > > > It should. > > Excellent, many thanks! > > -- > Sincerely yours, > Yury V. Zaytsev > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fijall at gmail.com Mon May 9 05:52:44 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 9 May 2016 11:52:44 +0200 Subject: [pypy-dev] PyPy & Django: recommended mySQL module? In-Reply-To: References: <1462296850.2708.5.camel@newpride> Message-ID: On Sun, May 8, 2016 at 1:31 PM, Yury V. Zaytsev wrote: > On Sun, 8 May 2016, Maciej Fijalkowski wrote: > >> For the record - if you are using django ORM, then the mysql binding >> is unlikely to be your bottleneck for accessing the DB. > > > Yep, that's what I'm using it for... good news. > No, that's bad news, Django ORM is terrible, noone should be using it for anything that requires performance ;-) From phyo.arkarlwin at gmail.com Mon May 9 06:04:44 2016 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Mon, 09 May 2016 10:04:44 +0000 Subject: [pypy-dev] PyPy & Django: recommended mySQL module? In-Reply-To: References: <1462296850.2708.5.camel@newpride> Message-ID: try pymysql ?purepython mysql client https://github.com/PyMySQL/PyMySQL it works much faster than mysqlpython back when i used mysql. On Mon, May 9, 2016 at 4:23 PM Maciej Fijalkowski wrote: > On Sun, May 8, 2016 at 1:31 PM, Yury V. Zaytsev wrote: > > On Sun, 8 May 2016, Maciej Fijalkowski wrote: > > > >> For the record - if you are using django ORM, then the mysql binding > >> is unlikely to be your bottleneck for accessing the DB. > > > > > > Yep, that's what I'm using it for... good news. > > > > No, that's bad news, Django ORM is terrible, noone should be using it > for anything that requires performance ;-) > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrewsmedina at gmail.com Mon May 9 10:45:49 2016 From: andrewsmedina at gmail.com (Andrews Medina) Date: Mon, 9 May 2016 11:45:49 -0300 Subject: [pypy-dev] PyPy & Django: recommended mySQL module? In-Reply-To: References: <1462296850.2708.5.camel@newpride> Message-ID: I started this project (mysql-cffi) but unfortunately not had enough time to continue it. :( I believe that the better way is to use the pure python client. On Mon, May 9, 2016 at 7:04 AM, Phyo Arkar wrote: > try pymysql ?purepython mysql client https://github.com/PyMySQL/PyMySQL > it works much faster than mysqlpython back when i used mysql. > > On Mon, May 9, 2016 at 4:23 PM Maciej Fijalkowski wrote: >> >> On Sun, May 8, 2016 at 1:31 PM, Yury V. Zaytsev wrote: >> > On Sun, 8 May 2016, Maciej Fijalkowski wrote: >> > >> >> For the record - if you are using django ORM, then the mysql binding >> >> is unlikely to be your bottleneck for accessing the DB. >> > >> > >> > Yep, that's what I'm using it for... good news. 
>> > >> >> No, that's bad news, Django ORM is terrible, noone should be using it >> for anything that requires performance ;-) >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- Andrews Medina www.andrewsmedina.com From vincent.legoll at gmail.com Tue May 10 09:59:37 2016 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Tue, 10 May 2016 15:59:37 +0200 Subject: [pypy-dev] Copying a slice of a list in RPython In-Reply-To: References: Message-ID: Hello, I suggested having a look at rpython's list implementation rpython/rtyper/rlist.py:ll_arraycopy() which uses: rpython/rlib/rgc/rgc.py:ll_arraycopy() I hope that was a useful hint... -- Vincent Legoll From cfbolz at gmx.de Thu May 12 06:25:45 2016 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Thu, 12 May 2016 12:25:45 +0200 Subject: [pypy-dev] call for papers: Dynamic Language Symposium 2016, Nov 1, Amsterdam Message-ID: <57345A29.9060607@gmx.de> Call for papers ## 12th Dynamic Languages Symposium (DLS 2016) ### Co-located with SPLASH 2016 ### In association with ACM SIGPLAN ### November 1, 2016, Amsterdam The 12th Dynamic Languages Symposium (DLS) at SPLASH 2016 invites high quality papers reporting original research and experience related to the design, implementation, and applications of dynamic languages. Areas of interest include but are not limited to: * Innovative language features * Innovative implementation techniques * Innovative applications * Development environments and tools * Experience reports and case studies * Domain-oriented programming * Very late binding, dynamic composition, and run-time adaptation * Reflection and meta-programming * Software evolution * Language symbiosis and multi-paradigm languages * Dynamic optimization * JIT compilation * Soft/optional/gradual typing * Hardware support * Educational approaches and perspectives * Semantics of dynamic languages ### Submissions and proceedings Submissions must not have been published previously nor being under review at other events. Research papers should describe work that advances the current state of the art. Experience papers should be of broad interest and should describe insights gained from substantive practical applications. The program committee will evaluate each contributed paper based on its relevance, significance, clarity, and originality. Papers are to be submitted electronically at in PDF format. Submissions must be in the ACM format with 10-point fonts and should not exceed 12 pages. Please see full details in the following link: DLS 2016 will run a two-phase reviewing process to help authors make their final papers the best that they can be. Accepted papers will be published in the ACM Digital Library and will be freely available for one month, starting two weeks before the event. ### Important dates * Submissions: Jun 10, 2016 (UTC, firm deadline) * First phase notification: Jul 22, 2016 * Revisions due: July 29, 2016 * Final notification: Aug 14, 2016 * Camera ready: Aug 26, 2016 * DLS: Nov 1, 2016 **AUTHORS TAKE NOTE:** The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of your conference. 
The official publication date affects the deadline for any patent filings related to published work.

### Program chair

Roberto Ierusalimschy, PUC-Rio, Brazil
dls16 at inf.puc-rio.br

### Program committee

* Carl Friedrich Bolz, King's College London, UK
* Gilad Bracha, Google, USA
* Marcus Denker, INRIA, France
* Zachary DeVito, Stanford, USA
* Jonathan Edwards, CDG Labs, USA
* Matthew Flatt, University of Utah, USA
* Elisa Gonzalez Boix, Vrije Universiteit Brussel, Belgium
* Robert Hirschfeld, Hasso Plattner Institute Potsdam, Germany
* Roberto Ierusalimschy, PUC-Rio, Brazil (chair)
* Shriram Krishnamurthi, Brown University, USA
* Benjamin Livshits, Microsoft Research, USA
* Priya Nagpurkar, IBM Research, USA
* Joe Gibbs Politz, Swarthmore College, USA
* Chris Seaton, Oracle Labs, UK
* Manuel Serrano, INRIA, France
* Sam Tobin-Hochstadt, Indiana University, USA
* Laurence Tratt, King's College London, UK
* Jan Vitek, Northeastern University, USA
* Haichuan Wang, Huawei America Research Center, USA

From wickedgrey at gmail.com Fri May 13 19:19:44 2016
From: wickedgrey at gmail.com (Eli Stevens (Gmail))
Date: Fri, 13 May 2016 16:19:44 -0700
Subject: [pypy-dev] Game state search dominated by copy.deepcopy
Message-ID:

Hello,

I'm in the process of working on a hobby project to have an AI searching through a game state space. I recently ran what I have so far on pypy (I had been doing initial work on cpython), and got two results that were unexpected:

- The total execution time was basically identical between cpython and pypy
- The runtime on both pythons was about 50% copy.deepcopy (called on the main game state object)

The runtime of the script that I've been using is in the 30s to 2m range, depending on config details, and the implementation fits the following pretty well:

"""
The JIT is generally good at speeding up straight-forward Python code that spends a lot of time in the bytecode dispatch loop, i.e., running actual Python code -- as opposed to running things that only are invoked by Python code. Good examples include numeric calculations or any kind of heavily object-oriented program.
"""

I have already pulled out constant game information (static info like unit stats, etc.) into an object that isn't copied, and a lot of the numerical data that is copied is stored in a numpy array so that I don't have hundreds of dicts, ints, etc.

First, is there a good way to speed up object copying? I've tried pickling to a cache, and unpickling from there (so as to only pickle once), but that didn't make a significant difference.

http://vmprof.com/#/905dfb71d28626bff6341a5848deae73 (deepcopy)
http://vmprof.com/#/545f1243b345eb9e41d73a9043a85efd (pickle)

Second, what's the best way to start figuring out why pypy isn't able to outperform cpython on my program?

Thanks for any pointers,
Eli

From fijall at gmail.com Mon May 16 12:37:26 2016
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 16 May 2016 18:37:26 +0200
Subject: [pypy-dev] Game state search dominated by copy.deepcopy
In-Reply-To:
References:
Message-ID:

I don't think pypy is expected to speed up program where the majority of the time is spent in copy.deepcopy. The answer to this question is a bit boring one: don't write algorithms that copy around so much data.

As a general rule - if the vast majority of your work is done by the runtime there is no way for pypy to speed this up (mostly) and in fact the exact same algo in C would be as slow or slower.
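
To make that a bit more concrete, here is one common way to avoid copying the whole state for every explored move: share the unchanged data and record only per-move diffs. This is just an illustrative sketch in plain Python; the class and field names are invented, not taken from your code:

class SearchState(object):
    # Each search node keeps a reference to its parent and a small dict of
    # the values changed by one move, instead of deep-copying the whole
    # game state for every node that is explored.
    def __init__(self, parent=None, changes=None):
        self.parent = parent
        self.changes = changes if changes is not None else {}

    def lookup(self, key):
        # Walk up the chain of parents until some node overrides `key`.
        node = self
        while node is not None:
            if key in node.changes:
                return node.changes[key]
            node = node.parent
        raise KeyError(key)

    def apply_move(self, **changes):
        # O(number of changed values) per move, instead of
        # O(size of the whole state) with copy.deepcopy().
        return SearchState(parent=self, changes=changes)

Whether something like this pays off depends on how deep the search goes and how much of the state each move touches, so treat it as a starting point to measure, not a recipe.
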
I'm sorry we can't give you a very good answer - if you really *NEED* to do that much copying (maybe you can make the copy more lazy?), then maybe shrinking the data structures would help? I can't answer that without having access to the code though, which I would be happy to look at. Cheers, fijal On Sat, May 14, 2016 at 1:19 AM, Eli Stevens (Gmail) wrote: > Hello, > > I'm in the process of working on a hobby project to have an AI > searching through a game state space. I recently ran what I have so > far on pypy (I had been doing initial work on cpython), and got two > results that were unexpected: > > - The total execution time was basically identical between cpython and pypy > - The runtime on both pythons was about 50% copy.deepcopy (called on > the main game state object) > > The runtime of the script that I've been using is in the 30s to 2m > range, depending on config details, and the implementation fits the > following pretty well: > > """ > The JIT is generally good at speeding up straight-forward Python code > that spends a lot of time in the bytecode dispatch loop, i.e., running > actual Python code ? as opposed to running things that only are > invoked by Python code. Good examples include numeric calculations or > any kind of heavily object-oriented program. > """ > > I have already pulled out constant game information (static info like > unit stats, etc.) into an object that isn't copied, and a lot of the > numerical data that is copied is stored in a numpy array so that I > don't have hundreds of dicts, ints, etc. > > First, is there a good way to speed up object copying? I've tried > pickling to a cache, and unpickling from there (so as to only pickle > once), but that didn't make a significant difference. > > http://vmprof.com/#/905dfb71d28626bff6341a5848deae73 (deepcopy) > http://vmprof.com/#/545f1243b345eb9e41d73a9043a85efd (pickle) > > Second, what's the best way to start figuring out why pypy isn't able > to outperform cpython on my program? > > Thanks for any pointers, > Eli > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From wickedgrey at gmail.com Mon May 16 12:53:21 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Mon, 16 May 2016 09:53:21 -0700 Subject: [pypy-dev] Game state search dominated by copy.deepcopy In-Reply-To: References: Message-ID: On Mon, May 16, 2016 at 9:37 AM, Maciej Fijalkowski wrote: > I don't think pypy is expected to speed up program where the majority > of the time is spent in copy.deepcopy. The answer to this question is > a bit boring one: don't write algorithms that copy around so much > data. I hadn't expected the copy time to dominate, but I suspect that's because the state object was more complex than I was giving it credit for (large sets of tuples all nicely hidden away under an abstraction layer, that kind of thing). I was still surprised about the other 50% of the runtime not decreasing much (the actual state transformation part), but I'm not particularly concerned with that right now, as I'm in the process of restructuring the code to be copy on write (which is turning out to be less work than I was originally concerned it would be). > I'm sorry we can't give you a very good answer - if you really *NEED* > to do that much copying (maybe you can make the copy more lazy?), then > maybe shrinking the data structures would help? 
I can't answer that > without having access to the code though, which I would be happy to > look at. Thank you for the offer. If I still am not seeing results after the refactor, I might take you up on that. Since I don't have IP rights to the game in question, would it be acceptable to add you to a private repo on github (should we get to that point)? Thanks, Eli From fijall at gmail.com Mon May 16 12:56:11 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 16 May 2016 18:56:11 +0200 Subject: [pypy-dev] Game state search dominated by copy.deepcopy In-Reply-To: References: Message-ID: On Mon, May 16, 2016 at 6:53 PM, Eli Stevens (Gmail) wrote: > On Mon, May 16, 2016 at 9:37 AM, Maciej Fijalkowski wrote: >> I don't think pypy is expected to speed up program where the majority >> of the time is spent in copy.deepcopy. The answer to this question is >> a bit boring one: don't write algorithms that copy around so much >> data. > > I hadn't expected the copy time to dominate, but I suspect that's > because the state object was more complex than I was giving it credit > for (large sets of tuples all nicely hidden away under an abstraction > layer, that kind of thing). > > I was still surprised about the other 50% of the runtime not > decreasing much (the actual state transformation part), but I'm not > particularly concerned with that right now, as I'm in the process of > restructuring the code to be copy on write (which is turning out to be > less work than I was originally concerned it would be). > > >> I'm sorry we can't give you a very good answer - if you really *NEED* >> to do that much copying (maybe you can make the copy more lazy?), then >> maybe shrinking the data structures would help? I can't answer that >> without having access to the code though, which I would be happy to >> look at. > > Thank you for the offer. If I still am not seeing results after the > refactor, I might take you up on that. > > Since I don't have IP rights to the game in question, would it be > acceptable to add you to a private repo on github (should we get to > that point)? > > Thanks, > Eli Sure, I don't mind. From cfbolz at gmx.de Tue May 17 06:50:58 2016 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 17 May 2016 12:50:58 +0200 Subject: [pypy-dev] RUMPLE'16 Call for Papers Message-ID: <573AF792.2020604@gmx.de> ============================================================================ Call for Papers: RUMPLE?16 1st Workshop on ReUsable and Modular Programming Language Ecosystems Co-located with SPLASH Oct/Nov, 2016, Amsterdam, Netherlands http://2016.splashcon.org/track/rumple2016 ============================================================================ The RUMPLE workshop is a venue for discussing modular approaches to programming language implementations, extensible virtual machine architectures, as well as reusable runtime components such as dynamic compilers, interpreters, or garbage collectors. The main goal of the workshop is to bring together researchers and practitioners, and facilitate the sharing of experiences and ideas. Relevant topics include, but are definitely not limited to, the following: - Extensible VM design (compiler- or interpreter-based VMs) - Reusable implementation of runtime components (e.g. 
interpreters, garbage collectors, intermediate representations) - Static and dynamic compiler techniques for different languages - Multi-language runtimes and mechanisms for cross-language interoperability between different languages - Tooling support for different languages (e.g. debugging, profiling, etc.) - Modular language implementations that use existing frameworks and systems - Case studies of existing language implementations, virtual machines, and runtime components (e.g. design choices, tradeoffs, etc.) - New research ideas on how we want to build languages in the future. Workshop Format and Submissions This workshop welcomes the presentation and discussion of new ideas and emerging problems that give a chance for interaction and exchange. We accept presentation proposals in the form of extended abstracts (1-4 pages). Accepted abstracts will be published on the workshop's website before the workshop date. Submissions should use the ACM SIGPLAN Conference Format, 10 point font, using the font family Times New Roman and numeric citation style. All submissions should be in PDF format. Please submit abstracts through http://ssw.jku.at/rumple/ Important Dates - Exended abstract submission: 1 Aug 2016 - Author notification: 5 Sep 2016 All deadlines are Anywhere on Earth (AoE), i.e. GMT/UTC?12:00 hour - Workshop: 31 Oct 2016 Program Committee Walter Binder, University of Lugano Carl Friedrich Bolz, King's College London Richard Jones, University of Kent Stephen Kell, University of Cambridge Jan Vitek, Northeastern University Christian Wimmer, Oracle Labs Workshop Organizers Matthias Grimmer, Johannes Kepler University Linz, Austria Laurence Tratt, King's College London, United Kingdom Adam Welc, Oracle Labs, United States For questions or concerns, please mail to matthias.grimmer at jku.at From wickedgrey at gmail.com Wed May 18 01:43:22 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Tue, 17 May 2016 22:43:22 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable Message-ID: The following works on cpython, but fails on pypy: import numpy as np def test_writeable(): a = np.zeros((2,2), np.int8) > a.flags.writeable = False E TypeError: readonly attribute So I went and installed pypy's fork of numpy from source, and tried to start poking around looking for tests, etc. to see what was going on. It looks like numpy/core/tests/test_multiarray.py TestFlags.test_writeable is doing something similar, but when I attempt to run the tests like so: pypy -c "import numpy; numpy.test()" (Is that the right way to do it?) 
It looks like the only thing that gets referenced from the test_multiarray.py file is: ====================================================================== ERROR: Failure: AttributeError ('module' object has no attribute 'datetime64') ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/elis/venv/droidblue-pypy/site-packages/nose/loader.py", line 418, in loadTestsFromName addr.filename, addr.module) File "/Users/elis/venv/droidblue-pypy/site-packages/nose/importer.py", line 47, in importFromPath return self.importFromDir(dir_path, fqname) File "/Users/elis/venv/droidblue-pypy/site-packages/nose/importer.py", line 94, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/Users/elis/venv/droidblue-pypy/site-packages/numpy/core/tests/test_multiarray.py", line 2712, in class TestArgmax(TestCase): File "/Users/elis/venv/droidblue-pypy/site-packages/numpy/core/tests/test_multiarray.py", line 2733, in TestArgmax ([np.datetime64('1923-04-14T12:43:12'), AttributeError: 'module' object has no attribute 'datetime64' (And this is after I have to comment out a bunch of imports that can't be found, etc.) My reading of this is that none of the tests in the test_multiarray.py file are even being attempted due to a missing datetime64. Is that correct? Since we're not supposed to touch the numpy source if at all possible, what's the suggested approach for getting the TestFlags tests to a state where I can run them (besides just hacking away at the source file, as it seems unlikely that would be an acceptable patch)? I'm not certain if I'm going to be able to invest the time to fix the writeable flag, but I'd like to give it a go, at least. Anything else in particular I should be aware of, when it comes to flags, etc.? Thanks, Eli From matti.picus at gmail.com Wed May 18 11:42:56 2016 From: matti.picus at gmail.com (Matti Picus) Date: Wed, 18 May 2016 18:42:56 +0300 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: Message-ID: <573C8D80.8080503@gmail.com> On 18/05/16 08:43, Eli Stevens (Gmail) wrote: > The following works on cpython, but fails on pypy: > > import numpy as np > def test_writeable(): > a = np.zeros((2,2), np.int8) >> a.flags.writeable = False > E TypeError: readonly attribute > > So I went and installed pypy's fork of numpy from source, and tried to > start poking around looking for tests, etc. to see what was going on. > > It looks like numpy/core/tests/test_multiarray.py > TestFlags.test_writeable is doing something similar, but when I > attempt to run the tests like so: > > pypy -c "import numpy; numpy.test()" > > (Is that the right way to do it?) > > ... Thanks for looking into this. It seems there are two seperate issues, the failure to set flag attributes and the failure to run tests. Could you give a few more details about how to reproduce the second problem: which pypy and how did you obtain it, and how you built/installed numpy into it? What imports did you have to change? You shouldn't have to change imports, but it is entirely possible that we have overlooked something. 
Matti From wickedgrey at gmail.com Wed May 18 12:35:05 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Wed, 18 May 2016 09:35:05 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: <573C8D80.8080503@gmail.com> References: <573C8D80.8080503@gmail.com> Message-ID: I downloaded the prebuilt binary pypy-5.1.1-osx64, dropped it into my homedir, and built a virtualenv using it. The files look like they were built Apr. 30th, I installed May 11th. I don't think that I've installed numpy-specific C libs for the cpython numpy I also have installed, but this system is old enough that I might have. If that becomes relevant, I can dig in more. First I installed numpy by following the instructions here: http://pypy.org/download.html#installing-numpy IIRC, I just used the virtualenv pip to do it. Then when I started to notice the difference in behavior between cpython and pypy, I ended up messing around a bit (wondering if the slight version difference was important, etc.) and ended up going into the pypy venv site-packages and doing rm -rf numpy* to get a clean slate, then doing a git clone of the pypy/numpy repo and doing pypy setup.py install, as directed here: https://bitbucket.org/pypy/numpy Then I run this: pypy -c "import numpy; numpy.test('doesntexist')" And get (among a lot of other noise): ====================================================================== ERROR: Failure: ImportError (No module named numpy.core.multiarray_tests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/elis/venv/droidblue-pypy/site-packages/nose/loader.py", line 418, in loadTestsFromName addr.filename, addr.module) File "/Users/elis/venv/droidblue-pypy/site-packages/nose/importer.py", line 47, in importFromPath return self.importFromDir(dir_path, fqname) File "/Users/elis/venv/droidblue-pypy/site-packages/nose/importer.py", line 94, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/Users/elis/venv/droidblue-pypy/site-packages/numpy/core/tests/test_multiarray.py", line 22, in from numpy.core.multiarray_tests import ( ImportError: No module named numpy.core.multiarray_tests Commenting out those imports results in the datetime64 issue I gave earlier. That's when I decided it was getting too hacky, and it made sense to reach out and make sure that I was running the tests the right way, etc. Of course, if you'd like more detail on anything, please let me know. :) Thanks! Eli On Wed, May 18, 2016 at 8:42 AM, Matti Picus wrote: > On 18/05/16 08:43, Eli Stevens (Gmail) wrote: >> >> The following works on cpython, but fails on pypy: >> >> import numpy as np >> def test_writeable(): >> a = np.zeros((2,2), np.int8) >>> >>> a.flags.writeable = False >> >> E TypeError: readonly attribute >> >> So I went and installed pypy's fork of numpy from source, and tried to >> start poking around looking for tests, etc. to see what was going on. >> >> It looks like numpy/core/tests/test_multiarray.py >> TestFlags.test_writeable is doing something similar, but when I >> attempt to run the tests like so: >> >> pypy -c "import numpy; numpy.test()" >> >> (Is that the right way to do it?) >> >> ... > > Thanks for looking into this. It seems there are two seperate issues, the > failure to set flag attributes and the failure to run tests. > Could you give a few more details about how to reproduce the second problem: > which pypy and how did you obtain it, and how you built/installed numpy into > it? 
What imports did you have to change? > You shouldn't have to change imports, but it is entirely possible that we > have overlooked something. > Matti From matti.picus at gmail.com Wed May 18 12:59:35 2016 From: matti.picus at gmail.com (Matti Picus) Date: Wed, 18 May 2016 19:59:35 +0300 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> Message-ID: <573C9F77.1050009@gmail.com> An HTML attachment was scrubbed... URL: From wickedgrey at gmail.com Wed May 18 13:19:16 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Wed, 18 May 2016 10:19:16 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: <573C9F77.1050009@gmail.com> References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> Message-ID: Great, thanks for the pointers. I'll hopefully be able to dig in after work tonight. Is there a from-scratch guide to getting to the point where I can run those micronumpy tests? NBD if not, I'm sure I can figure it out. Do you think it would make sense to start off by copying test_multiarray.py:TestFlags there? Cheers, Eli On Wed, May 18, 2016 at 9:59 AM, Matti Picus wrote: > On 18/05/16 19:35, Eli Stevens (Gmail) wrote: > > I downloaded the prebuilt binary pypy-5.1.1-osx64, dropped it into my > homedir, and built a virtualenv using it. The files look like they > were built Apr. 30th, I installed May 11th. I don't think that I've > installed numpy-specific C libs for the cpython numpy I also have > installed, but this system is old enough that I might have. If that > becomes relevant, I can dig in more. > > First I installed numpy by following the instructions here: > http://pypy.org/download.html#installing-numpy IIRC, I just used the > virtualenv pip to do it. > > Then when I started to notice the difference in behavior between > cpython and pypy, I ended up messing around a bit (wondering if the > slight version difference was important, etc.) and ended up going into > the pypy venv site-packages and doing rm -rf numpy* to get a clean > slate, then doing a git clone of the pypy/numpy repo and doing pypy > setup.py install, as directed here: https://bitbucket.org/pypy/numpy > > Then I run this: > > pypy -c "import numpy; numpy.test('doesntexist')" > > And get (among a lot of other noise): > > ====================================================================== > ERROR: Failure: ImportError (No module named numpy.core.multiarray_tests) > ---------------------------------------------------------------------- > ... > > Commenting out those imports results in the datetime64 issue I gave > earlier. That's when I decided it was getting too hacky, and it made > sense to reach out and make sure that I was running the tests the > right way, etc. > > Of course, if you'd like more detail on anything, please let me know. :) > > Thanks! > Eli > > It seems you are doing everything correctly. > multiarray_tests comes from numpy/core/src/multiarray/multiarray.c.src which > is compiled to a C-API module. > We skip building it as we have quite a way to go before we can support that > level of C-API compatibility. > The issue with readonly flag attributes actually lies with micronumpy, in > the pypy interpreter itself. > If you wish to work on this, you should add a test (in the pypy repo) to > pypy/module/micronumpy/test/test_flagsobj.py and continue from there. 
> Matti From matti.picus at gmail.com Wed May 18 13:55:18 2016 From: matti.picus at gmail.com (Matti Picus) Date: Wed, 18 May 2016 20:55:18 +0300 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> Message-ID: <573CAC86.6040207@gmail.com> An HTML attachment was scrubbed... URL: From wickedgrey at gmail.com Thu May 19 02:58:50 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Wed, 18 May 2016 23:58:50 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: <573CAC86.6040207@gmail.com> References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> Message-ID: I've got a pypy clone and checkout, and have added TestFlags. When I run it, I see: > a.flags.writeable = False E TypeError: readonly attribute But nothing that looks like it should raise a TypeError in either of: pypy/pypy/module/micronumpy/flagsobj.py pypy/pypy/module/micronumpy/ndarray.py Still trying to get oriented with the code. Any suggestions? Thanks, Eli On Wed, May 18, 2016 at 10:55 AM, Matti Picus wrote: > On 18/05/16 20:19, Eli Stevens (Gmail) wrote: > > Great, thanks for the pointers. I'll hopefully be able to dig in after > work tonight. Is there a from-scratch guide to getting to the point > where I can run those micronumpy tests? NBD if not, I'm sure I can > figure it out. > > Do you think it would make sense to start off by copying > test_multiarray.py:TestFlags there? > > Cheers, > Eli > > On Wed, May 18, 2016 at 9:59 AM, Matti Picus wrote: > > It seems you are doing everything correctly. > multiarray_tests comes from numpy/core/src/multiarray/multiarray.c.src which > is compiled to a C-API module. > We skip building it as we have quite a way to go before we can support that > level of C-API compatibility. > The issue with readonly flag attributes actually lies with micronumpy, in > the pypy interpreter itself. > If you wish to work on this, you should add a test (in the pypy repo) to > pypy/module/micronumpy/test/test_flagsobj.py and continue from there. > Matti > > There is an explanation of running tests here > http://doc.pypy.org/en/latest/getting-started-dev.html#running-pypy-s-unit-tests > The tests in TestFlags require refactoring for our simpler 'assert' style - > no fancy assert_equal() or assert_() functions > Matti From arigo at tunes.org Thu May 19 11:11:06 2016 From: arigo at tunes.org (Armin Rigo) Date: Thu, 19 May 2016 17:11:06 +0200 Subject: [pypy-dev] Forwarding... Message-ID: On 19 May 2016 at 14:58, wrote: > ---------- Forwarded message ---------- > From: Daniel Hnyk > To: pypy-dev at python.org > Cc: > Date: Thu, 19 May 2016 12:58:36 +0000 > Subject: Question about funding, again > Hello, > > my question is simple. It strikes me why you don't have more financial support, since PyPy might save quite a lot of resources compared to CPython. When we witness that e.g. microsoft is able to donate $100k to Jupyter (https://ipython.org/microsoft-donation-2013.html), why PyPy, being even more generic then Jupyter, has problem to raise few tenths of thousands. > > I can find few mentions about this on the internet, but no serious article or summary is out there. > > Have you tried any of the following? > > 1. Trying to get some funding from big companies and organizations such as Google, Microsoft, RedHat or some other like Free Software Foundation? If not, why not? > 2. 
Crowd founding websites such as Kickstarter or Indiegogo get quite a big attention nowadays even for similar projects. There were successful campaigns for projects with even smaller target group, such as designers (https://krita.org/) or video editors (openshot 2). Why haven't you created a campaign there? Micropython, again, with much smaller target group of users had got funded as well. > > Is someone working on this subject? Or is there a general lack of man power in PyPy's team? Couldn't be someone hired from money already collected? > > Thanks for an answer, > Daniel From arigo at tunes.org Thu May 19 11:23:15 2016 From: arigo at tunes.org (Armin Rigo) Date: Thu, 19 May 2016 17:23:15 +0200 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> Message-ID: Hi Eli, On 19 May 2016 at 08:58, Eli Stevens (Gmail) wrote: > I've got a pypy clone and checkout, and have added TestFlags. When I > run it, I see: > >> a.flags.writeable = False > E TypeError: readonly attribute > > But nothing that looks like it should raise a TypeError in either of: Grep for 'writable'. You'll see that it is defined as a GetSetProperty() with a getter but no setter so far. A bient?t, Armin. From fijall at gmail.com Thu May 19 12:12:10 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 19 May 2016 18:12:10 +0200 Subject: [pypy-dev] Forwarding... In-Reply-To: References: Message-ID: Hi Daniel. We've done all of the proposed scenarios. We had some success talking to companies, but there is a lot of resistance for various reasons (and the successful proposals I can't talk about), including the inability to pay open source from the engineering budget and instead doing it via the marketing budget (which is orders of magnitude slower). In short - you need to offer them something in exchange, which usually means you need to do a good job, but not good enough (so you can fix it for money). This is a very perverse incentive, btu this is how it goes. As for kickstarter - that targets primarily end-user experience and not infrastructure. As such, it's hard to find money from users for infrastructure, because it has relatively few direct users - mostly large companies. As for who is working on this subject - I am. Feel free to get in touch with me via other channels (private mail, gchat, IRC) if you have deeper insights Best regards, Maciej Fijalkowski On Thu, May 19, 2016 at 5:11 PM, Armin Rigo wrote: > On 19 May 2016 at 14:58, wrote: >> ---------- Forwarded message ---------- >> From: Daniel Hnyk >> To: pypy-dev at python.org >> Cc: >> Date: Thu, 19 May 2016 12:58:36 +0000 >> Subject: Question about funding, again >> Hello, >> >> my question is simple. It strikes me why you don't have more financial support, since PyPy might save quite a lot of resources compared to CPython. When we witness that e.g. microsoft is able to donate $100k to Jupyter (https://ipython.org/microsoft-donation-2013.html), why PyPy, being even more generic then Jupyter, has problem to raise few tenths of thousands. >> >> I can find few mentions about this on the internet, but no serious article or summary is out there. >> >> Have you tried any of the following? >> >> 1. Trying to get some funding from big companies and organizations such as Google, Microsoft, RedHat or some other like Free Software Foundation? If not, why not? >> 2. 
Crowd founding websites such as Kickstarter or Indiegogo get quite a big attention nowadays even for similar projects. There were successful campaigns for projects with even smaller target group, such as designers (https://krita.org/) or video editors (openshot 2). Why haven't you created a campaign there? Micropython, again, with much smaller target group of users had got funded as well. >> >> Is someone working on this subject? Or is there a general lack of man power in PyPy's team? Couldn't be someone hired from money already collected? >> >> Thanks for an answer, >> Daniel > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From kotrfa at gmail.com Thu May 19 15:29:36 2016 From: kotrfa at gmail.com (Kotrfa) Date: Thu, 19 May 2016 19:29:36 +0000 Subject: [pypy-dev] Forwarding... In-Reply-To: References: Message-ID: Thanks for answer Maciej! I'm glad that this is in progress. It isn't possible to make some image about the situation from what I have found on the web. You response clarifies that a bit. I understand how difficult it can be. But I disagree with you regarding kickstarter. Pypy is connected to user experience. E.g. I am working as datascientists and pypy is running about 3 times faster on the code I am able to use it on (which is, unfortunately, minority - most of it is of course in those 4 libraries which shines red on the library support wall - numpy, scipy, pandas, scikit-learn). Similar with (py)Spark. I would say there are more data scientists using Python than those who likes to use "MicroPython on the ESP8266". The gain this field can get from Pypy is quite substantial, even with that conservative estimate about 3 times as fast compared to cPython. And that is just one example. Of course, I cannot ensure that you might get reasonably funded on kickstarter-like sites. But, what can you lose by making a campaign? It would be definitely much more visible than on your website, which, to be honest, could be a bit modernized as well. And even if it wouldn't be a success, you still get PR basically for free. I, unfortunately, don't have any insights or recommendation, it just scratched my mind. Thanks for your awesome work, Daniel ?t 19. 5. 2016 v 18:12 odes?latel Maciej Fijalkowski napsal: > Hi Daniel. > > We've done all of the proposed scenarios. We had some success talking > to companies, but there is a lot of resistance for various reasons > (and the successful proposals I can't talk about), including the > inability to pay open source from the engineering budget and instead > doing it via the marketing budget (which is orders of magnitude > slower). In short - you need to offer them something in exchange, > which usually means you need to do a good job, but not good enough (so > you can fix it for money). This is a very perverse incentive, btu this > is how it goes. > > As for kickstarter - that targets primarily end-user experience and > not infrastructure. As such, it's hard to find money from users for > infrastructure, because it has relatively few direct users - mostly > large companies. > > As for who is working on this subject - I am. 
Feel free to get in > touch with me via other channels (private mail, gchat, IRC) if you > have deeper insights > > Best regards, > Maciej Fijalkowski > > On Thu, May 19, 2016 at 5:11 PM, Armin Rigo wrote: > > On 19 May 2016 at 14:58, wrote: > >> ---------- Forwarded message ---------- > >> From: Daniel Hnyk > >> To: pypy-dev at python.org > >> Cc: > >> Date: Thu, 19 May 2016 12:58:36 +0000 > >> Subject: Question about funding, again > >> Hello, > >> > >> my question is simple. It strikes me why you don't have more financial > support, since PyPy might save quite a lot of resources compared to > CPython. When we witness that e.g. microsoft is able to donate $100k to > Jupyter (https://ipython.org/microsoft-donation-2013.html), why PyPy, > being even more generic then Jupyter, has problem to raise few tenths of > thousands. > >> > >> I can find few mentions about this on the internet, but no serious > article or summary is out there. > >> > >> Have you tried any of the following? > >> > >> 1. Trying to get some funding from big companies and organizations such > as Google, Microsoft, RedHat or some other like Free Software Foundation? > If not, why not? > >> 2. Crowd founding websites such as Kickstarter or Indiegogo get quite a > big attention nowadays even for similar projects. There were successful > campaigns for projects with even smaller target group, such as designers ( > https://krita.org/) or video editors (openshot 2). Why haven't you > created a campaign there? Micropython, again, with much smaller target > group of users had got funded as well. > >> > >> Is someone working on this subject? Or is there a general lack of man > power in PyPy's team? Couldn't be someone hired from money already > collected? > >> > >> Thanks for an answer, > >> Daniel > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wickedgrey at gmail.com Thu May 19 16:36:52 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Thu, 19 May 2016 13:36:52 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> Message-ID: Looks like I need to do something along the lines of: def descr_set_writeable(self, space, w_value): if space.is_true(w_value) != bool(self.flags & NPY.ARRAY_WRITEABLE): self.flags ^= NPY.ARRAY_WRITEABLE (Though I probably need more robust checking to see if the flag *can* be turned off) def descr_setitem(self, space, w_item, w_value): # This function already exists, but just contains the last line with the raise key = space.str_w(w_item) value = space.bool_w(w_value) if key == "W" or key == "WRITEABLE": return self.descr_set_writeable(space, value) raise oefmt(space.w_KeyError, "Unknown flag") ... writeable = GetSetProperty(W_FlagsObject.descr_get_writeable, W_FlagsObject.descr_set_writeable), However I'm not entirely confident about things like space.bool_w, etc. I've read http://doc.pypy.org/en/latest/objspace.html but am still working on internalizing it. Setting the GetSetProperty still results in the TypeError, which makes me wonder how to tell if I'm getting the right flagsobj.py. I don't think that I am. The results of the tests should be the same no matter what python interpreter I'm using, correct? 
Would running the tests with a virtualenv that has a stock pypy/numpy installed cause issues? What if the virtualenv is cpython? When I run py.test, I see: pytest-2.5.2 from /Users/elis/edit/play/pypy/pytest.pyc Which looks correct (.../play/pypy is my source checkout). But I get the same thing when using cpython to run test_all.py, and there the test passes, so I don't think it's indicative. When I print out np.__file__ inside the test, I get /Users/elis/venv/droidblue-pypy/site-packages/numpy/__init__.pyc Which is the pypy venv I am using to run the tests in the first place, but I'm not sure what the on-disk relationship between numpy and micronumpy actually is. Is there a way from the test_flagobjs.py file to determine what the on-disk location of micronumpy is? I strongly suspect I've got something basic wrong. I also think that the information at http://doc.pypy.org/en/latest/getting-started-dev.html#running-pypy-s-unit-tests and http://doc.pypy.org/en/latest/coding-guide.html#command-line-tool-test-all conflict somewhat, or are at least unclear as to which approach is the right way in what situation. I'll attempt to clarify whatever it is that's tripping me up once I've got it sorted out. Some other questions I have, looking at micornumpy/concrete.py line 37: class BaseConcreteArray(object): _immutable_fields_ = ['dtype?', 'storage', 'start', 'size', 'shape[*]', 'strides[*]', 'backstrides[*]', 'order', 'gcstruct', 'flags'] start = 0 parent = None flags = 0 Does that immutable status cascade down into the objects, or is that saying only that myInstance.flags cannot be reassigned (but myInstance.flags.foo = 3 is fine)? interpreter/typedef.py 221: @specialize.arg(0) def make_objclass_getter(tag, func, cls): if func and hasattr(func, 'im_func'): assert not cls or cls is func.im_class cls = func.im_class return _make_objclass_getter(cls) What's the purpose of the tag argument? It doesn't seem to be used here or in _make_descr_typecheck_wrapper, both of which are called from GetSetProperty init. Based on docstrings on _Specialize, it seems like they might be JIT hints. Is that correct? Matti: If it's okay, I'd like to keep the discussion on the list, as I've actively searched through discussions here to avoid asking questions a second time. Hopefully this thread can help the next person. Sorry for the mega-post; thanks for reading. Eli On Thu, May 19, 2016 at 8:23 AM, Armin Rigo wrote: > Hi Eli, > > On 19 May 2016 at 08:58, Eli Stevens (Gmail) wrote: >> I've got a pypy clone and checkout, and have added TestFlags. When I >> run it, I see: >> >>> a.flags.writeable = False >> E TypeError: readonly attribute >> >> But nothing that looks like it should raise a TypeError in either of: > > Grep for 'writable'. You'll see that it is defined as a > GetSetProperty() with a getter but no setter so far. > > > A bient?t, > > Armin. From fijall at gmail.com Thu May 19 17:15:09 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 19 May 2016 23:15:09 +0200 Subject: [pypy-dev] Forwarding... In-Reply-To: References: Message-ID: Hi Daniel. As for kickstarter - it requires you to be american to start with :-P As for numpy etc. - I can assure you we're working on the support for those libraries as fast as possible, at the same time looking for funding through commercial sources. 
As for the website modernization - yes, this has to be done at some point soon (and I started doing steps in that direction), but *that* sort of things are really difficult to fund :-) Cheers, fijal On Thu, May 19, 2016 at 9:29 PM, Kotrfa wrote: > Thanks for answer Maciej! > > I'm glad that this is in progress. It isn't possible to make some image > about the situation from what I have found on the web. You response > clarifies that a bit. I understand how difficult it can be. > > But I disagree with you regarding kickstarter. Pypy is connected to user > experience. E.g. I am working as datascientists and pypy is running about 3 > times faster on the code I am able to use it on (which is, unfortunately, > minority - most of it is of course in those 4 libraries which shines red on > the library support wall - numpy, scipy, pandas, scikit-learn). Similar with > (py)Spark. I would say there are more data scientists using Python than > those who likes to use "MicroPython on the ESP8266". The gain this field can > get from Pypy is quite substantial, even with that conservative estimate > about 3 times as fast compared to cPython. And that is just one example. > > Of course, I cannot ensure that you might get reasonably funded on > kickstarter-like sites. But, what can you lose by making a campaign? It > would be definitely much more visible than on your website, which, to be > honest, could be a bit modernized as well. And even if it wouldn't be a > success, you still get PR basically for free. > > I, unfortunately, don't have any insights or recommendation, it just > scratched my mind. > > Thanks for your awesome work, > Daniel > > ?t 19. 5. 2016 v 18:12 odes?latel Maciej Fijalkowski > napsal: >> >> Hi Daniel. >> >> We've done all of the proposed scenarios. We had some success talking >> to companies, but there is a lot of resistance for various reasons >> (and the successful proposals I can't talk about), including the >> inability to pay open source from the engineering budget and instead >> doing it via the marketing budget (which is orders of magnitude >> slower). In short - you need to offer them something in exchange, >> which usually means you need to do a good job, but not good enough (so >> you can fix it for money). This is a very perverse incentive, btu this >> is how it goes. >> >> As for kickstarter - that targets primarily end-user experience and >> not infrastructure. As such, it's hard to find money from users for >> infrastructure, because it has relatively few direct users - mostly >> large companies. >> >> As for who is working on this subject - I am. Feel free to get in >> touch with me via other channels (private mail, gchat, IRC) if you >> have deeper insights >> >> Best regards, >> Maciej Fijalkowski >> >> On Thu, May 19, 2016 at 5:11 PM, Armin Rigo wrote: >> > On 19 May 2016 at 14:58, wrote: >> >> ---------- Forwarded message ---------- >> >> From: Daniel Hnyk >> >> To: pypy-dev at python.org >> >> Cc: >> >> Date: Thu, 19 May 2016 12:58:36 +0000 >> >> Subject: Question about funding, again >> >> Hello, >> >> >> >> my question is simple. It strikes me why you don't have more financial >> >> support, since PyPy might save quite a lot of resources compared to CPython. >> >> When we witness that e.g. microsoft is able to donate $100k to Jupyter >> >> (https://ipython.org/microsoft-donation-2013.html), why PyPy, being even >> >> more generic then Jupyter, has problem to raise few tenths of thousands. 
>> >> >> >> I can find few mentions about this on the internet, but no serious >> >> article or summary is out there. >> >> >> >> Have you tried any of the following? >> >> >> >> 1. Trying to get some funding from big companies and organizations such >> >> as Google, Microsoft, RedHat or some other like Free Software Foundation? If >> >> not, why not? >> >> 2. Crowd founding websites such as Kickstarter or Indiegogo get quite a >> >> big attention nowadays even for similar projects. There were successful >> >> campaigns for projects with even smaller target group, such as designers >> >> (https://krita.org/) or video editors (openshot 2). Why haven't you created >> >> a campaign there? Micropython, again, with much smaller target group of >> >> users had got funded as well. >> >> >> >> Is someone working on this subject? Or is there a general lack of man >> >> power in PyPy's team? Couldn't be someone hired from money already >> >> collected? >> >> >> >> Thanks for an answer, >> >> Daniel >> > _______________________________________________ >> > pypy-dev mailing list >> > pypy-dev at python.org >> > https://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Thu May 19 17:52:33 2016 From: arigo at tunes.org (Armin Rigo) Date: Thu, 19 May 2016 23:52:33 +0200 Subject: [pypy-dev] Forwarding... In-Reply-To: References: Message-ID: Hi, On 19 May 2016 at 21:29, Kotrfa wrote: > Of course, I cannot ensure that you might get reasonably funded on > kickstarter-like sites. But, what can you lose by making a campaign? It > would be definitely much more visible than on your website I should add that we did that already: we have three fundraisers linked from the website---or maybe *had,* given that we launced them long ago already. It was reasonably successful. We're now thinking about what we'll do next. A bient?t, Armin. From hnykda at gmail.com Thu May 19 08:58:36 2016 From: hnykda at gmail.com (Daniel Hnyk) Date: Thu, 19 May 2016 12:58:36 +0000 Subject: [pypy-dev] Question about funding, again Message-ID: Hello, my question is simple. It strikes me why you don't have more financial support, since PyPy might save quite a lot of resources compared to CPython. When we witness that e.g. microsoft is able to donate $100k to Jupyter (https://ipython.org/microsoft-donation-2013.html), why PyPy, being even more generic then Jupyter, has problem to raise few tenths of thousands. I can find few mentions about this on the internet, but no serious article or summary is out there. Have you tried any of the following? 1. Trying to get some funding from big companies and organizations such as Google, Microsoft, RedHat or some other like Free Software Foundation? If not, why not? 2. Crowd founding websites such as Kickstarter or Indiegogo get quite a big attention nowadays even for similar projects. There were successful campaigns for projects with even smaller target group, such as designers ( https://krita.org/) or video editors (openshot 2). Why haven't you created a campaign there? Micropython, again, with much smaller target group of users had got funded as well. Is someone working on this subject? Or is there a general lack of man power in PyPy's team? Couldn't be someone hired from money already collected? Thanks for an answer, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From kotrfa at gmail.com Fri May 20 12:36:59 2016 From: kotrfa at gmail.com (Kotrfa) Date: Fri, 20 May 2016 16:36:59 +0000 Subject: [pypy-dev] Forwarding... 
In-Reply-To: References: Message-ID: Hey, thanks for your answers. I know you are working on numpy and similar libraries, as well about the fundraisers on your site. I am glad there is something happening around this. Thank you for your work and information, Daniel ?t 19. 5. 2016 v 23:53 odes?latel Armin Rigo napsal: > Hi, > > On 19 May 2016 at 21:29, Kotrfa wrote: > > Of course, I cannot ensure that you might get reasonably funded on > > kickstarter-like sites. But, what can you lose by making a campaign? It > > would be definitely much more visible than on your website > > I should add that we did that already: we have three fundraisers > linked from the website---or maybe *had,* given that we launced them > long ago already. It was reasonably successful. We're now thinking > about what we'll do next. > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wickedgrey at gmail.com Fri May 20 13:44:38 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Fri, 20 May 2016 10:44:38 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> Message-ID: More questions! :) When I run pypy> /usr/bin/python bin/pyinteractive.py I get to a (presumably interpreted, given the startup time) pypy prompt, but I cannot import numpy. Is the intent that I somehow install numpy into the source checkout's site-packages directory (the one listed in sys.path from that interpreted pypy prompt)? Also, it's pretty clear that when running the tests that "import numpy" just gets the numpy from the base interpreter, not from the micronumpy included in the pypy source. Is it possible to run the numpy tests without doing a full translation? Thanks, Eli On Thu, May 19, 2016 at 1:36 PM, Eli Stevens (Gmail) wrote: > Looks like I need to do something along the lines of: > > def descr_set_writeable(self, space, w_value): > if space.is_true(w_value) != bool(self.flags & NPY.ARRAY_WRITEABLE): > self.flags ^= NPY.ARRAY_WRITEABLE > > (Though I probably need more robust checking to see if the flag *can* > be turned off) > > def descr_setitem(self, space, w_item, w_value): > # This function already exists, but just contains the last > line with the raise > key = space.str_w(w_item) > value = space.bool_w(w_value) > if key == "W" or key == "WRITEABLE": > return self.descr_set_writeable(space, value) > raise oefmt(space.w_KeyError, "Unknown flag") > > ... > writeable = GetSetProperty(W_FlagsObject.descr_get_writeable, > W_FlagsObject.descr_set_writeable), > > However I'm not entirely confident about things like space.bool_w, > etc. I've read http://doc.pypy.org/en/latest/objspace.html but am > still working on internalizing it. > > Setting the GetSetProperty still results in the TypeError, which makes > me wonder how to tell if I'm getting the right flagsobj.py. I don't > think that I am. The results of the tests should be the same no matter > what python interpreter I'm using, correct? Would running the tests > with a virtualenv that has a stock pypy/numpy installed cause issues? > What if the virtualenv is cpython? > > When I run py.test, I see: > > pytest-2.5.2 from /Users/elis/edit/play/pypy/pytest.pyc > > Which looks correct (.../play/pypy is my source checkout). But I get > the same thing when using cpython to run test_all.py, and there the > test passes, so I don't think it's indicative. 
When I print out > np.__file__ inside the test, I get > > /Users/elis/venv/droidblue-pypy/site-packages/numpy/__init__.pyc > > Which is the pypy venv I am using to run the tests in the first place, > but I'm not sure what the on-disk relationship between numpy and > micronumpy actually is. Is there a way from the test_flagobjs.py file > to determine what the on-disk location of micronumpy is? > > I strongly suspect I've got something basic wrong. I also think that > the information at > http://doc.pypy.org/en/latest/getting-started-dev.html#running-pypy-s-unit-tests > and http://doc.pypy.org/en/latest/coding-guide.html#command-line-tool-test-all > conflict somewhat, or are at least unclear as to which approach is the > right way in what situation. I'll attempt to clarify whatever it is > that's tripping me up once I've got it sorted out. > > Some other questions I have, looking at micornumpy/concrete.py line 37: > > class BaseConcreteArray(object): > _immutable_fields_ = ['dtype?', 'storage', 'start', 'size', 'shape[*]', > 'strides[*]', 'backstrides[*]', 'order', 'gcstruct', > 'flags'] > start = 0 > parent = None > flags = 0 > > Does that immutable status cascade down into the objects, or is that > saying only that myInstance.flags cannot be reassigned (but > myInstance.flags.foo = 3 is fine)? > > interpreter/typedef.py 221: > > @specialize.arg(0) > def make_objclass_getter(tag, func, cls): > if func and hasattr(func, 'im_func'): > assert not cls or cls is func.im_class > cls = func.im_class > return _make_objclass_getter(cls) > > What's the purpose of the tag argument? It doesn't seem to be used > here or in _make_descr_typecheck_wrapper, both of which are called > from GetSetProperty init. Based on docstrings on _Specialize, it seems > like they might be JIT hints. Is that correct? > > Matti: If it's okay, I'd like to keep the discussion on the list, as > I've actively searched through discussions here to avoid asking > questions a second time. Hopefully this thread can help the next > person. > > Sorry for the mega-post; thanks for reading. > Eli > > On Thu, May 19, 2016 at 8:23 AM, Armin Rigo wrote: >> Hi Eli, >> >> On 19 May 2016 at 08:58, Eli Stevens (Gmail) wrote: >>> I've got a pypy clone and checkout, and have added TestFlags. When I >>> run it, I see: >>> >>>> a.flags.writeable = False >>> E TypeError: readonly attribute >>> >>> But nothing that looks like it should raise a TypeError in either of: >> >> Grep for 'writable'. You'll see that it is defined as a >> GetSetProperty() with a getter but no setter so far. >> >> >> A bient?t, >> >> Armin. From fijall at gmail.com Fri May 20 14:43:13 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 20 May 2016 20:43:13 +0200 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> Message-ID: the option is --withmod-micronumpy or --allworkingmodules but the tests are in the test directory and *that's* how you should run tests (not by playing with interactive) On Fri, May 20, 2016 at 7:44 PM, Eli Stevens (Gmail) wrote: > More questions! :) > > When I run > > pypy> /usr/bin/python bin/pyinteractive.py > > I get to a (presumably interpreted, given the startup time) pypy > prompt, but I cannot import numpy. Is the intent that I somehow > install numpy into the source checkout's site-packages directory (the > one listed in sys.path from that interpreted pypy prompt)? 
> > Also, it's pretty clear that when running the tests that "import > numpy" just gets the numpy from the base interpreter, not from the > micronumpy included in the pypy source. Is it possible to run the > numpy tests without doing a full translation? > > Thanks, > Eli > > On Thu, May 19, 2016 at 1:36 PM, Eli Stevens (Gmail) > wrote: >> Looks like I need to do something along the lines of: >> >> def descr_set_writeable(self, space, w_value): >> if space.is_true(w_value) != bool(self.flags & NPY.ARRAY_WRITEABLE): >> self.flags ^= NPY.ARRAY_WRITEABLE >> >> (Though I probably need more robust checking to see if the flag *can* >> be turned off) >> >> def descr_setitem(self, space, w_item, w_value): >> # This function already exists, but just contains the last >> line with the raise >> key = space.str_w(w_item) >> value = space.bool_w(w_value) >> if key == "W" or key == "WRITEABLE": >> return self.descr_set_writeable(space, value) >> raise oefmt(space.w_KeyError, "Unknown flag") >> >> ... >> writeable = GetSetProperty(W_FlagsObject.descr_get_writeable, >> W_FlagsObject.descr_set_writeable), >> >> However I'm not entirely confident about things like space.bool_w, >> etc. I've read http://doc.pypy.org/en/latest/objspace.html but am >> still working on internalizing it. >> >> Setting the GetSetProperty still results in the TypeError, which makes >> me wonder how to tell if I'm getting the right flagsobj.py. I don't >> think that I am. The results of the tests should be the same no matter >> what python interpreter I'm using, correct? Would running the tests >> with a virtualenv that has a stock pypy/numpy installed cause issues? >> What if the virtualenv is cpython? >> >> When I run py.test, I see: >> >> pytest-2.5.2 from /Users/elis/edit/play/pypy/pytest.pyc >> >> Which looks correct (.../play/pypy is my source checkout). But I get >> the same thing when using cpython to run test_all.py, and there the >> test passes, so I don't think it's indicative. When I print out >> np.__file__ inside the test, I get >> >> /Users/elis/venv/droidblue-pypy/site-packages/numpy/__init__.pyc >> >> Which is the pypy venv I am using to run the tests in the first place, >> but I'm not sure what the on-disk relationship between numpy and >> micronumpy actually is. Is there a way from the test_flagobjs.py file >> to determine what the on-disk location of micronumpy is? >> >> I strongly suspect I've got something basic wrong. I also think that >> the information at >> http://doc.pypy.org/en/latest/getting-started-dev.html#running-pypy-s-unit-tests >> and http://doc.pypy.org/en/latest/coding-guide.html#command-line-tool-test-all >> conflict somewhat, or are at least unclear as to which approach is the >> right way in what situation. I'll attempt to clarify whatever it is >> that's tripping me up once I've got it sorted out. >> >> Some other questions I have, looking at micornumpy/concrete.py line 37: >> >> class BaseConcreteArray(object): >> _immutable_fields_ = ['dtype?', 'storage', 'start', 'size', 'shape[*]', >> 'strides[*]', 'backstrides[*]', 'order', 'gcstruct', >> 'flags'] >> start = 0 >> parent = None >> flags = 0 >> >> Does that immutable status cascade down into the objects, or is that >> saying only that myInstance.flags cannot be reassigned (but >> myInstance.flags.foo = 3 is fine)? 
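(A general RPython note as an aside, not an answer taken from this thread: _immutable_fields_ is usually read as a translation/JIT hint about the listed attribute *bindings*, not a deep freeze of the objects they point to. A tiny sketch of that reading, with made-up names:)

class Example(object):
    # hint: these attributes are never rebound after __init__;
    # 'shape[*]' additionally promises the list's items never change,
    # and a trailing '?' (as in 'dtype?') marks a quasi-immutable field
    _immutable_fields_ = ['flags', 'shape[*]']

    def __init__(self, flags, shape):
        self.flags = flags   # rebinding self.flags later would violate the hint
        self.shape = shape

# Mutating an object *reached through* such a field is still allowed:
# the hint says nothing about, e.g., the attributes of the flags object itself.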
>> >> interpreter/typedef.py 221: >> >> @specialize.arg(0) >> def make_objclass_getter(tag, func, cls): >> if func and hasattr(func, 'im_func'): >> assert not cls or cls is func.im_class >> cls = func.im_class >> return _make_objclass_getter(cls) >> >> What's the purpose of the tag argument? It doesn't seem to be used >> here or in _make_descr_typecheck_wrapper, both of which are called >> from GetSetProperty init. Based on docstrings on _Specialize, it seems >> like they might be JIT hints. Is that correct? >> >> Matti: If it's okay, I'd like to keep the discussion on the list, as >> I've actively searched through discussions here to avoid asking >> questions a second time. Hopefully this thread can help the next >> person. >> >> Sorry for the mega-post; thanks for reading. >> Eli >> >> On Thu, May 19, 2016 at 8:23 AM, Armin Rigo wrote: >>> Hi Eli, >>> >>> On 19 May 2016 at 08:58, Eli Stevens (Gmail) wrote: >>>> I've got a pypy clone and checkout, and have added TestFlags. When I >>>> run it, I see: >>>> >>>>> a.flags.writeable = False >>>> E TypeError: readonly attribute >>>> >>>> But nothing that looks like it should raise a TypeError in either of: >>> >>> Grep for 'writable'. You'll see that it is defined as a >>> GetSetProperty() with a getter but no setter so far. >>> >>> >>> A bient?t, >>> >>> Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From wickedgrey at gmail.com Fri May 20 15:18:53 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Fri, 20 May 2016 12:18:53 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> Message-ID: I understand that the tests are in the test directory, but the issue I'm currently trying to figure out is that when I invoke either of: /usr/bin/python test_all.py test_all.py module/micronumpy/test/test_flagsobj.py /usr/bin/python pytest.py pypy/module/micronumpy/test/test_flagsobj.py (a directory level up) With any of system cpython, venv cpython, venv pypy, etc. then the changes that I've made locally to micronumpy aren't used, since inside of the tests "import numpy" grabs the numpy from whatever interpreter the tests were invoked with. I'm sure there's something simple that I'm missing about the environment that's needed to make this work, but I haven't figured it out yet. Do I need to be doing something with the PYTHONPATH prior to running the tests? Cheers, Eli On Fri, May 20, 2016 at 11:43 AM, Maciej Fijalkowski wrote: > the option is --withmod-micronumpy or --allworkingmodules > > but the tests are in the test directory and *that's* how you should > run tests (not by playing with interactive) > > On Fri, May 20, 2016 at 7:44 PM, Eli Stevens (Gmail) > wrote: >> More questions! :) >> >> When I run >> >> pypy> /usr/bin/python bin/pyinteractive.py >> >> I get to a (presumably interpreted, given the startup time) pypy >> prompt, but I cannot import numpy. Is the intent that I somehow >> install numpy into the source checkout's site-packages directory (the >> one listed in sys.path from that interpreted pypy prompt)? >> >> Also, it's pretty clear that when running the tests that "import >> numpy" just gets the numpy from the base interpreter, not from the >> micronumpy included in the pypy source. Is it possible to run the >> numpy tests without doing a full translation? 
>> >> Thanks, >> Eli >> >> On Thu, May 19, 2016 at 1:36 PM, Eli Stevens (Gmail) >> wrote: >>> Looks like I need to do something along the lines of: >>> >>> def descr_set_writeable(self, space, w_value): >>> if space.is_true(w_value) != bool(self.flags & NPY.ARRAY_WRITEABLE): >>> self.flags ^= NPY.ARRAY_WRITEABLE >>> >>> (Though I probably need more robust checking to see if the flag *can* >>> be turned off) >>> >>> def descr_setitem(self, space, w_item, w_value): >>> # This function already exists, but just contains the last >>> line with the raise >>> key = space.str_w(w_item) >>> value = space.bool_w(w_value) >>> if key == "W" or key == "WRITEABLE": >>> return self.descr_set_writeable(space, value) >>> raise oefmt(space.w_KeyError, "Unknown flag") >>> >>> ... >>> writeable = GetSetProperty(W_FlagsObject.descr_get_writeable, >>> W_FlagsObject.descr_set_writeable), >>> >>> However I'm not entirely confident about things like space.bool_w, >>> etc. I've read http://doc.pypy.org/en/latest/objspace.html but am >>> still working on internalizing it. >>> >>> Setting the GetSetProperty still results in the TypeError, which makes >>> me wonder how to tell if I'm getting the right flagsobj.py. I don't >>> think that I am. The results of the tests should be the same no matter >>> what python interpreter I'm using, correct? Would running the tests >>> with a virtualenv that has a stock pypy/numpy installed cause issues? >>> What if the virtualenv is cpython? >>> >>> When I run py.test, I see: >>> >>> pytest-2.5.2 from /Users/elis/edit/play/pypy/pytest.pyc >>> >>> Which looks correct (.../play/pypy is my source checkout). But I get >>> the same thing when using cpython to run test_all.py, and there the >>> test passes, so I don't think it's indicative. When I print out >>> np.__file__ inside the test, I get >>> >>> /Users/elis/venv/droidblue-pypy/site-packages/numpy/__init__.pyc >>> >>> Which is the pypy venv I am using to run the tests in the first place, >>> but I'm not sure what the on-disk relationship between numpy and >>> micronumpy actually is. Is there a way from the test_flagobjs.py file >>> to determine what the on-disk location of micronumpy is? >>> >>> I strongly suspect I've got something basic wrong. I also think that >>> the information at >>> http://doc.pypy.org/en/latest/getting-started-dev.html#running-pypy-s-unit-tests >>> and http://doc.pypy.org/en/latest/coding-guide.html#command-line-tool-test-all >>> conflict somewhat, or are at least unclear as to which approach is the >>> right way in what situation. I'll attempt to clarify whatever it is >>> that's tripping me up once I've got it sorted out. >>> >>> Some other questions I have, looking at micornumpy/concrete.py line 37: >>> >>> class BaseConcreteArray(object): >>> _immutable_fields_ = ['dtype?', 'storage', 'start', 'size', 'shape[*]', >>> 'strides[*]', 'backstrides[*]', 'order', 'gcstruct', >>> 'flags'] >>> start = 0 >>> parent = None >>> flags = 0 >>> >>> Does that immutable status cascade down into the objects, or is that >>> saying only that myInstance.flags cannot be reassigned (but >>> myInstance.flags.foo = 3 is fine)? >>> >>> interpreter/typedef.py 221: >>> >>> @specialize.arg(0) >>> def make_objclass_getter(tag, func, cls): >>> if func and hasattr(func, 'im_func'): >>> assert not cls or cls is func.im_class >>> cls = func.im_class >>> return _make_objclass_getter(cls) >>> >>> What's the purpose of the tag argument? 
It doesn't seem to be used >>> here or in _make_descr_typecheck_wrapper, both of which are called >>> from GetSetProperty init. Based on docstrings on _Specialize, it seems >>> like they might be JIT hints. Is that correct? >>> >>> Matti: If it's okay, I'd like to keep the discussion on the list, as >>> I've actively searched through discussions here to avoid asking >>> questions a second time. Hopefully this thread can help the next >>> person. >>> >>> Sorry for the mega-post; thanks for reading. >>> Eli >>> >>> On Thu, May 19, 2016 at 8:23 AM, Armin Rigo wrote: >>>> Hi Eli, >>>> >>>> On 19 May 2016 at 08:58, Eli Stevens (Gmail) wrote: >>>>> I've got a pypy clone and checkout, and have added TestFlags. When I >>>>> run it, I see: >>>>> >>>>>> a.flags.writeable = False >>>>> E TypeError: readonly attribute >>>>> >>>>> But nothing that looks like it should raise a TypeError in either of: >>>> >>>> Grep for 'writable'. You'll see that it is defined as a >>>> GetSetProperty() with a getter but no setter so far. >>>> >>>> >>>> A bient?t, >>>> >>>> Armin. >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev From matti.picus at gmail.com Fri May 20 15:55:22 2016 From: matti.picus at gmail.com (matti picus) Date: Fri, 20 May 2016 22:55:22 +0300 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> Message-ID: <286D29BE-5F2C-4501-B56C-B2172E34BF56@gmail.com> You should commit your changes to a branch and push to a bitbucket repo so we can see your changes. Our test runner compiles part of PyPy and calls the tests using that partial interpreter (unless run with -A), when you call import numpy inside a test you are using micronumpy. You should not try to stop in a debugger in the test code itself (app level), rather print or set_trace inside micronumpy code (interpreter level). Matti > On 20 May 2016, at 10:18 PM, Eli Stevens (Gmail) wrote: > > I understand that the tests are in the test directory, but the issue > I'm currently trying to figure out is that when I invoke either of: > > /usr/bin/python test_all.py test_all.py > module/micronumpy/test/test_flagsobj.py > > /usr/bin/python pytest.py > pypy/module/micronumpy/test/test_flagsobj.py (a directory level up) > > With any of system cpython, venv cpython, venv pypy, etc. then the > changes that I've made locally to micronumpy aren't used, since inside > of the tests "import numpy" grabs the numpy from whatever interpreter > the tests were invoked with. > > I'm sure there's something simple that I'm missing about the > environment that's needed to make this work, but I haven't figured it > out yet. Do I need to be doing something with the PYTHONPATH prior to > running the tests? > > Cheers, > Eli > > >> From wickedgrey at gmail.com Fri May 20 16:21:58 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Fri, 20 May 2016 13:21:58 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: <286D29BE-5F2C-4501-B56C-B2172E34BF56@gmail.com> References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> <286D29BE-5F2C-4501-B56C-B2172E34BF56@gmail.com> Message-ID: Here you go: https://bitbucket.org/elistevens/pypy/commits/branch/numpy_flags_writeable In particular, this produces different output based on the invoking interpreter. 
https://bitbucket.org/elistevens/pypy/commits/922e80048e9c8ef71b3ea90171a1f8f06b04f00a?at=numpy_flags_writeable#Lpypy/module/micronumpy/test/test_flagsobj.pyT158 Things like: /Users/elis/venv/droidblue-pypy/site-packages/numpy/__init__.pyc or /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/__init__.pyc etc. Cheers, Eli On Fri, May 20, 2016 at 12:55 PM, matti picus wrote: > You should commit your changes to a branch and push to a bitbucket repo so we can see your changes. Our test runner compiles part of PyPy and calls the tests using that partial interpreter (unless run with -A), when you call import numpy inside a test you are using micronumpy. You should not try to stop in a debugger in the test code itself (app level), rather print or set_trace inside micronumpy code (interpreter level). > Matti > > >> On 20 May 2016, at 10:18 PM, Eli Stevens (Gmail) wrote: >> >> I understand that the tests are in the test directory, but the issue >> I'm currently trying to figure out is that when I invoke either of: >> >> /usr/bin/python test_all.py test_all.py >> module/micronumpy/test/test_flagsobj.py >> >> /usr/bin/python pytest.py >> pypy/module/micronumpy/test/test_flagsobj.py (a directory level up) >> >> With any of system cpython, venv cpython, venv pypy, etc. then the >> changes that I've made locally to micronumpy aren't used, since inside >> of the tests "import numpy" grabs the numpy from whatever interpreter >> the tests were invoked with. >> >> I'm sure there's something simple that I'm missing about the >> environment that's needed to make this work, but I haven't figured it >> out yet. Do I need to be doing something with the PYTHONPATH prior to >> running the tests? >> >> Cheers, >> Eli >> >> >>> From ronan.lamy at gmail.com Fri May 20 16:52:23 2016 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Fri, 20 May 2016 21:52:23 +0100 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> <286D29BE-5F2C-4501-B56C-B2172E34BF56@gmail.com> Message-ID: <573F7907.9040802@gmail.com> Le 20/05/16 21:21, Eli Stevens (Gmail) a ?crit : > Here you go: https://bitbucket.org/elistevens/pypy/commits/branch/numpy_flags_writeable The name of your class needs to start with 'AppTest', so that our test runner knows it needs to use the black magic that enables application-level tests (i.e. tests that run on top of our PyPy interpreter itself running on top of your regular Python). From wickedgrey at gmail.com Sun May 22 01:51:24 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Sat, 21 May 2016 22:51:24 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: <573F7907.9040802@gmail.com> References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> <286D29BE-5F2C-4501-B56C-B2172E34BF56@gmail.com> <573F7907.9040802@gmail.com> Message-ID: Ah-ha! Renaming the class seemed to do the trick. Is that documented anywhere? Similarly, is there a full list of the things that are available to AppTestXXX classes? It took quite a bit of trial and error to figure out that "with py.test.raises" wasn't available, but "raises" was (I spent a lot of time trying to figure out how to import py.test, though now I know what to look for, I see that there are already examples of raises use in test_flagsobj.py. Sigh.). In any case, I'm able to run the tests and see my code executing now, so that's good. 
What does the method name prefix 'descr_' imply? I'm now stuck trying to understand the relationship between the application level myarray.flags and the flag instances on BaseConcreteArray subclasses. I now have this situation: In the AppTest my array has id 0x110d648d0. The writeable flag is False, according to printing a.flags (I set it in the test, having added that code to W_FlagsObject.descr_set_writeable, which also seems to be behaving). In BaseConcreteArray.descr_setitem the orig_arr id is 0x110d648d0, orig_arr.w_flags is None, and self.flags is 0x507, with writeable being 0x400. Any ideas? I've pushed up what I have so far. Thanks, Eli On Fri, May 20, 2016 at 1:52 PM, Ronan Lamy wrote: > Le 20/05/16 21:21, Eli Stevens (Gmail) a ?crit : >> >> Here you go: >> https://bitbucket.org/elistevens/pypy/commits/branch/numpy_flags_writeable > > > The name of your class needs to start with 'AppTest', so that our test > runner knows it needs to use the black magic that enables application-level > tests (i.e. tests that run on top of our PyPy interpreter itself running on > top of your regular Python). > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From dimaqq at gmail.com Mon May 23 08:06:49 2016 From: dimaqq at gmail.com (Dima Tisnek) Date: Mon, 23 May 2016 14:06:49 +0200 Subject: [pypy-dev] Forwarding... In-Reply-To: References: Message-ID: Indeed kickstarter (or similar) cannot be about general research of infrastructure. It has to be about something visible, a goal that's shared in a crowd. IMO a campaign to achieve cpython-3.latest parity in pypy will get backers. Not only from pypy users, but also from cpython world, because: * it places Python 3 (language) into mainstream pypy, thus closing py2/py3 divide, and * it establishes Python as a larger standard (two implementations) Just my 2c. d. On 19 May 2016 at 18:12, Maciej Fijalkowski wrote: > Hi Daniel. > > We've done all of the proposed scenarios. We had some success talking > to companies, but there is a lot of resistance for various reasons > (and the successful proposals I can't talk about), including the > inability to pay open source from the engineering budget and instead > doing it via the marketing budget (which is orders of magnitude > slower). In short - you need to offer them something in exchange, > which usually means you need to do a good job, but not good enough (so > you can fix it for money). This is a very perverse incentive, btu this > is how it goes. > > As for kickstarter - that targets primarily end-user experience and > not infrastructure. As such, it's hard to find money from users for > infrastructure, because it has relatively few direct users - mostly > large companies. > > As for who is working on this subject - I am. Feel free to get in > touch with me via other channels (private mail, gchat, IRC) if you > have deeper insights > > Best regards, > Maciej Fijalkowski > > On Thu, May 19, 2016 at 5:11 PM, Armin Rigo wrote: >> On 19 May 2016 at 14:58, wrote: >>> ---------- Forwarded message ---------- >>> From: Daniel Hnyk >>> To: pypy-dev at python.org >>> Cc: >>> Date: Thu, 19 May 2016 12:58:36 +0000 >>> Subject: Question about funding, again >>> Hello, >>> >>> my question is simple. It strikes me why you don't have more financial support, since PyPy might save quite a lot of resources compared to CPython. When we witness that e.g. 
microsoft is able to donate $100k to Jupyter (https://ipython.org/microsoft-donation-2013.html), why PyPy, being even more generic then Jupyter, has problem to raise few tenths of thousands. >>> >>> I can find few mentions about this on the internet, but no serious article or summary is out there. >>> >>> Have you tried any of the following? >>> >>> 1. Trying to get some funding from big companies and organizations such as Google, Microsoft, RedHat or some other like Free Software Foundation? If not, why not? >>> 2. Crowd founding websites such as Kickstarter or Indiegogo get quite a big attention nowadays even for similar projects. There were successful campaigns for projects with even smaller target group, such as designers (https://krita.org/) or video editors (openshot 2). Why haven't you created a campaign there? Micropython, again, with much smaller target group of users had got funded as well. >>> >>> Is someone working on this subject? Or is there a general lack of man power in PyPy's team? Couldn't be someone hired from money already collected? >>> >>> Thanks for an answer, >>> Daniel >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From fijall at gmail.com Fri May 27 03:24:56 2016 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 27 May 2016 09:24:56 +0200 Subject: [pypy-dev] Found on the internet Message-ID: Another game boy emulator using RPython https://github.com/Baekalfen/PyBoy From arigo at tunes.org Fri May 27 04:50:26 2016 From: arigo at tunes.org (Armin Rigo) Date: Fri, 27 May 2016 10:50:26 +0200 Subject: [pypy-dev] Found on the internet In-Reply-To: References: Message-ID: Hi, On 27 May 2016 at 09:24, Maciej Fijalkowski wrote: > Another game boy emulator using RPython https://github.com/Baekalfen/PyBoy It's not actually using RPython. It's a standard Python program, and the PDF has graphs about CPython versus PyPy performance. Armin From jdoran at lexmachina.com Fri May 27 17:45:57 2016 From: jdoran at lexmachina.com (Jeff Doran) Date: Fri, 27 May 2016 14:45:57 -0700 Subject: [pypy-dev] Question about cookielib in PyPy Message-ID: I'm working on an ongoing task to test our existing code on PyPy. My latest exploration was is using PyPy 5.1 along with lxml 3.6.0 and of partuclar relevance to my question, requests 2.9.1 and betamax 0.7.0 Our test suite completes without incident on Python 2.7.9, but on PyPy 5.1.0 we encounter errors trying to match an http request to a recorded session in betamax. This process fails when trying to match a request against the correct recorded session due to a failure in the header comparison when we include a cookie string. This is a comparison of 2 dicts and the contents are the same, except for the Cookie string. We created the Cookie using cookielib.Cookie and add the various items via the Cookie.__init__() method. The resulting strings comes from Cookie.__str__ I believe and the comparison in requests is a simple dict1 == dict2. Any thoughts on where this difference might lie (dictionary comparison issues, string comparison issues, or ...)? I'd like to avoid having to have a different set of tests for PyPy vs CPython as that defeats the idea of verifying compatibility. Thanks for any input. - Jeff Doran This comes from cookielib.Cookie under PyPY. 
request.headers: %s { 'User-Agent': 'python-requests/2.9.1', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Cookie': 'NextGenCSO=8MWACYK5f24yllbpMwVpocfsyGYIBUXl4aWQvo7nUcWz2OwU7p4Dy40bxiyjGS5es8hBv5LxPT8PqBnWNzaBNo0k0PGffpQTDI4xBGc9WwQevnzyUCmq7WaXMTOTSpKM; PacerClientCode=dev; PacerSession=8MWACYK5f24yllbpMwVpocfsyGYIBUXl4aWQvo7nUcWz2OwU7p4Dy40bxiyjGS5es8hBv5LxPT8PqBnWNzaBNo0k0PGffpQTDI4xBGc9WwQevnzyUCmq7WaXMTOTSpKM; domain=.uscourts.gov; path=/; PacerPref="receipt=Y"' } This was recorded previously using Python 2.7.9 recorded headers: %s { u'Connection': u'keep-alive', u'Cookie': u'PacerClientCode=dev; path=/; NextGenCSO=8MWACYK5f24yllbpMwVpocfsyGYIBUXl4aWQvo7nUcWz2OwU7p4Dy40bxiyjGS5es8hBv5LxPT8PqBnWNzaBNo0k0PGffpQTDI4xBGc9WwQevnzyUCmq7WaXMTOTSpKM; domain=.uscourts.gov; PacerSession=8MWACYK5f24yllbpMwVpocfsyGYIBUXl4aWQvo7nUcWz2OwU7p4Dy40bxiyjGS5es8hBv5LxPT8PqBnWNzaBNo0k0PGffpQTDI4xBGc9WwQevnzyUCmq7WaXMTOTSpKM; PacerPref="receipt=Y"', u'Accept-Encoding': u'gzip, deflate', u'Accept': u'*/*', u'User-Agent': u'python-requests/2.9.1' } -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat May 28 03:13:43 2016 From: arigo at tunes.org (Armin Rigo) Date: Sat, 28 May 2016 09:13:43 +0200 Subject: [pypy-dev] Question about cookielib in PyPy In-Reply-To: References: Message-ID: Hi Jeff, On 27 May 2016 at 23:45, Jeff Doran wrote: > The resulting strings comes from Cookie.__str__ I believe and the > comparison in requests is a simple dict1 == dict2. If what you're printing are two dicts, then they don't compare equal, because of this value: > 'NextGenCSO=8MWACYK5f24yllbpMwVpocfsyGYIBUXl4aWQvo7nUcWz2OwU7p4Dy40bxiyjGS5es8hBv5LxPT8PqBnWNzaBNo0k0PGffpQTDI4xBGc9WwQevnzyUCmq7WaXMTOTSpKM; PacerClientCode=dev;PacerSession=8MWACYK5f24yllbpMwVpocfsyGYIBUXl4aWQvo7nUcWz2OwU7p4Dy40bxiyjGS5es8hBv5LxPT8PqBnWNzaBNo0k0PGffpQTDI4xBGc9WwQevnzyUCmq7WaXMTOTSpKM; domain=.uscourts.gov; path=/; PacerPref="receipt=Y"' which is just a long string, which is different from this other long unicode string: > u'PacerClientCode=dev; path=/;NextGenCSO=8MWACYK5f24yllbpMwVpocfsyGYIBUXl4aWQvo7nUcWz2OwU7p4Dy40bxiyjGS5es8hBv5LxPT8PqBnWNzaBNo0k0PGffpQTDI4xBGc9WwQevnzyUCmq7WaXMTOTSpKM;domain=.uscourts.gov;PacerSession=8MWACYK5f24yllbpMwVpocfsyGYIBUXl4aWQvo7nUcWz2OwU7p4Dy40bxiyjGS5es8hBv5LxPT8PqBnWNzaBNo0k0PGffpQTDI4xBGc9WwQevnzyUCmq7WaXMTOTSpKM;PacerPref="receipt=Y"' This is likely because the string was built following some dictionary order, which is (in CPython) nondeterministic. It is often roughly stable, in that the same dictionaries often end up in the same order, which is why such errors in the tests are not immediately apparent. PyPy has a different order (which actually guarantees that dicts work like OrderedDicts). You should check that theory by verifying if your tests pass or break with "python -R". My guess is that they do not. Then you can fix the test---but maybe a better idea would be to fix the code to sort the keys (I see that Cookie.py does sort the keys, so I'm guessing there is more code involved here). A bient?t, Armin. From matti.picus at gmail.com Sat May 28 14:22:50 2016 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 28 May 2016 21:22:50 +0300 Subject: [pypy-dev] Update to the win32 buildbot Message-ID: <5749E1FA.6050801@gmail.com> Hi! I am cc-ing the pypy-dev list so there is some permanent record of this, the mail is for the win32 buildbot maintainer. 
Could you add the lzma development libraries to the build environment of the win32 buildbot? They are needed for python 3, as you can see from any of the failing build runs like this one http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/2398/steps/shell_1/logs/stdio Python 3 uses the xz-5.0.5 libraries, as can bee seen in the cpython/PCbuild/readme.txt, following their links leads to http://tukaani.org/xz/ and to a download http://tukaani.org/xz/xz-5.0.5-windows.zip (library) http://tukaani.org/xz/xz-5.0.5-windows.zip.sig (signature) I will try to update the local.zip available on our downloads https://bitbucket.org/pypy/pypy/downloads/local_2.4.zip and the documentation http://doc.pypy.org/en/latest/windows.html but in the meantime it would be great if you could get this going Thanks! Matti From wickedgrey at gmail.com Sun May 29 13:16:14 2016 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Sun, 29 May 2016 10:16:14 -0700 Subject: [pypy-dev] Looking into numpy ndarray.flags.writeable In-Reply-To: References: <573C8D80.8080503@gmail.com> <573C9F77.1050009@gmail.com> <573CAC86.6040207@gmail.com> <286D29BE-5F2C-4501-B56C-B2172E34BF56@gmail.com> <573F7907.9040802@gmail.com> Message-ID: Anyone have any pointers for understanding the relationship between the application-level myarray.flags, and self.flags and/or orig_arr.w_flags in BaseConcreteArray.descr_setitem? I'm even sure where to start doing code dives from. Thanks, Eli On Sat, May 21, 2016 at 10:51 PM, Eli Stevens (Gmail) wrote: > Ah-ha! Renaming the class seemed to do the trick. Is that documented > anywhere? Similarly, is there a full list of the things that are > available to AppTestXXX classes? It took quite a bit of trial and > error to figure out that "with py.test.raises" wasn't available, but > "raises" was (I spent a lot of time trying to figure out how to import > py.test, though now I know what to look for, I see that there are > already examples of raises use in test_flagsobj.py. Sigh.). > > In any case, I'm able to run the tests and see my code executing now, > so that's good. > > What does the method name prefix 'descr_' imply? > > I'm now stuck trying to understand the relationship between the > application level myarray.flags and the flag instances on > BaseConcreteArray subclasses. > > I now have this situation: > > In the AppTest my array has id 0x110d648d0. The writeable flag is > False, according to printing a.flags (I set it in the test, having > added that code to W_FlagsObject.descr_set_writeable, which also seems > to be behaving). > > In BaseConcreteArray.descr_setitem the orig_arr id is 0x110d648d0, > orig_arr.w_flags is None, and self.flags is 0x507, with writeable > being 0x400. Any ideas? I've pushed up what I have so far. > > Thanks, > Eli > > On Fri, May 20, 2016 at 1:52 PM, Ronan Lamy wrote: >> Le 20/05/16 21:21, Eli Stevens (Gmail) a ?crit : >>> >>> Here you go: >>> https://bitbucket.org/elistevens/pypy/commits/branch/numpy_flags_writeable >> >> >> The name of your class needs to start with 'AppTest', so that our test >> runner knows it needs to use the black magic that enables application-level >> tests (i.e. tests that run on top of our PyPy interpreter itself running on >> top of your regular Python). 
>> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Mon May 30 03:18:19 2016 From: arigo at tunes.org (Armin Rigo) Date: Mon, 30 May 2016 09:18:19 +0200 Subject: [pypy-dev] asmgcc versus shadowstack Message-ID: Hi all, Recently, we've got a few more of the common bug report "cannot find gc roots!". This is caused by asmgcc somehow failing to parse the ".s" files produced by gcc on Linux. I'm investigating what can be done to improve the situation of asmgcc in a more definitive way. There are basically two solutions: 1) we improve shadowstack. This is the alternative to asmgcc, which is used on any non-Linux platform already. So far it is around 10% slower than asmgcc. 2) we improve asmgcc by finding some better way than parsing assembler files. I worked during the past month in the branch "shadowstack-perf-2". This gives a major improvement on the placement of pushing and popping GC roots on the shadow stack. I think it's worth merging that branch in any case. On x86, it gives roughly 3-4% speed improvements; I'd guess on arm it is slightly more. (I'm comparing the performance outside JITted machine code; the JITted machine code we produce is more similar.) The problem is that asmgcc used to be ~10% better. IMHO, 3-4% is not quite enough to be happy and kill asmgcc. Improving beyond these 3-4% seems to require some new ideas. So I'm also thinking about ways to fix asmgcc more generally, this time focusing on Linux only; asmgcc contains old code that tries to parse MSVC output, and I bet we tried with clang at some point, but these attempts both failed. So let's focus on Linux and gcc only. Asmgcc does two things with the parsed assembler: it computes the stack size at every point, and it tracks some marked variables backward until the previous "call" instruction. I think we can assume that the version of gcc is not older than, say, the one on tannit32 (Ubuntu 12.04), which is gcc 4.6. At least from that version, both on x86-32 and x86-64, gcc will emit "CFI directives" (https://sourceware.org/binutils/docs/as/CFI-directives.html). These are a saner way to get the information about the current stack size. About the backward tracking, we need to have a complete understanding of all instructions, even if e.g. for any xmm instruction we just say "can't handle GC pointers". The backward tracking itself is often foiled because the assembler is lacking a way to know clearly "this call never returns" (e.g. calls to abort(), or to some RPython helper that prints stuff and aborts). In other words, the control flow is sometimes hard to get correctly, because a "call" generally returns, but not always. Such mistakes can produce bogus results (including "cannot find gc roots!"). What can we do about that? Maybe we can compile with "-s -fdump-final-insns". This dumps a gcc-specific summary of the RTL, which is the final intermediate representation, which looks like it is in one-to-one correspondance with the actual assembly. It would be a better input for the backward-tracker, because we don't have to handle tons of instructions with unknown effects, and because it contains explicit points at which control flow cannot pass. On the other hand, we'd need to parse both the .s and this dump in parallel, matching them as we go along. But I still think it would be better than now. Of course the best would be to get rid of asmgcc completely... 
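(An illustrative aside on the CFI idea above, not asmgcc's actual code: a rough sketch of how the .cfi_* directives in a gcc-generated .s file could be used to track the stack depth at each call site. The function name and the x86-64-specific starting offset are assumptions, and it deliberately ignores .cfi_startproc resets, .cfi_remember_state/.cfi_restore_state and .cfi_def_cfa register changes.)

import re

CFI_DEF = re.compile(r'\s*\.cfi_def_cfa_offset\s+(\d+)')
CFI_ADJ = re.compile(r'\s*\.cfi_adjust_cfa_offset\s+(-?\d+)')
CALL = re.compile(r'\s*call[ql]?\s+(\S+)')

def stack_depth_at_calls(asm_lines):
    # x86-64 assumption: only the return address is on the stack at entry
    depth = 8
    found = []
    for line in asm_lines:
        m = CFI_DEF.match(line)
        if m:
            depth = int(m.group(1))      # absolute offset of the CFA from %rsp
            continue
        m = CFI_ADJ.match(line)
        if m:
            depth += int(m.group(1))     # relative adjustment
            continue
        m = CALL.match(line)
        if m:
            found.append((m.group(1), depth))
    return found

The point is only that the directives carry the stack-size information explicitly, instead of it having to be re-derived from every push/sub that touches %rsp.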
From fijall at gmail.com  Mon May 30 03:25:50 2016
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Mon, 30 May 2016 09:25:50 +0200
Subject: [pypy-dev] asmgcc versus shadowstack
In-Reply-To: 
References: 
Message-ID: 

Hi Armin,

I don't have very deep opinions, but I'm worried about one particular thing: GCC tends to change its IR with every release, so wouldn't parsing this dump be a maintenance nightmare that has to be updated for each new release of gcc?
From matti.picus at gmail.com  Mon May 30 17:18:09 2016
From: matti.picus at gmail.com (Matti Picus)
Date: Tue, 31 May 2016 00:18:09 +0300
Subject: [pypy-dev] pypy3.3-v5.2-alpha1 release
Message-ID: <574CAE11.7080504@gmail.com>

We are almost ready to release pypy3.3-v5.2-alpha1, but need some help to make sure all is OK.

Please try the download packages here:
https://bitbucket.org/pypy/pypy/downloads
Also, if someone could check that the links related to pypy3.3 all work on this page, that would be great:
http://pypy.org/download.html
If the page is still showing PyPy3 2.4.0, wait 30 minutes or so from now and then hit refresh on your browser.

Thanks, and kudos to the pypy3 team.
Matti

From andrewsmedina at gmail.com  Mon May 30 20:54:40 2016
From: andrewsmedina at gmail.com (Andrews Medina)
Date: Mon, 30 May 2016 21:54:40 -0300
Subject: [pypy-dev] pypy3.3-v5.2-alpha1 release
In-Reply-To: <574CAE11.7080504@gmail.com>
References: <574CAE11.7080504@gmail.com>
Message-ID: 

On Mon, May 30, 2016 at 6:18 PM, Matti Picus wrote:
> We are almost ready to release pypy3.3-v5.2-alpha1, but need some help to
> make sure all is OK.
> Please try the download packages here
> https://bitbucket.org/pypy/pypy/downloads
> also, if someone could check that the links related to pypy3.3 all work on
> this page that would be great
> http://pypy.org/download.html

All the pypy3.3 links are OK for me!

Best,
--
Andrews Medina
www.andrewsmedina.com

From kostia.lopuhin at gmail.com  Tue May 31 03:08:33 2016
From: kostia.lopuhin at gmail.com (Konstantin Lopuhin)
Date: Tue, 31 May 2016 10:08:33 +0300
Subject: [pypy-dev] pypy3.3-v5.2-alpha1 release
In-Reply-To: <574CAE11.7080504@gmail.com>
References: <574CAE11.7080504@gmail.com>
Message-ID: 

Thanks for doing the release, this is awesome!

I tried the OS X version
(https://bitbucket.org/pypy/pypy/downloads/pypy3.3-v5.2.0-alpha1-osx64.tar.bz2),
and it runs, but it does not seem to have pip (or perhaps I did something wrong):
https://bpaste.net/show/f5e10b16bc22
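For the link check Matti asks for above, something along these lines can be used; it is only a rough sketch (Python 3), check_pypy33_links is a made-up helper name, the href pattern is a guess at how the links appear in the page's HTML, relative links are not resolved, and some servers may reject HEAD requests.

    import re
    from urllib.request import urlopen, Request

    def check_pypy33_links(page_url='http://pypy.org/download.html'):
        # Fetch the download page, collect every href that mentions pypy3.3
        # and print the HTTP status returned by a HEAD request for each.
        html = urlopen(page_url).read().decode('utf-8', 'replace')
        links = set(re.findall(r'href="([^"]*pypy3\.3[^"]*)"', html))
        for url in sorted(links):
            try:
                status = urlopen(Request(url, method='HEAD')).getcode()
            except Exception as exc:
                status = exc
            print(url, status)

    check_pypy33_links()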
From kostia.lopuhin at gmail.com  Tue May 31 03:39:33 2016
From: kostia.lopuhin at gmail.com (Konstantin Lopuhin)
Date: Tue, 31 May 2016 10:39:33 +0300
Subject: [pypy-dev] pypy3.3-v5.2-alpha1 release
In-Reply-To: 
References: <574CAE11.7080504@gmail.com>
Message-ID: 

Ah, sorry, my bad: I did not know about the ensurepip module. I checked it after reading the blog post, and it works: it installs pip, and "python -m pip" works as expected (although a bare "pip" still points to the system one).
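The workflow being described boils down to something like the following console sketch; the unpacked directory and binary names are assumptions based on the tarball name, so adjust them to whatever the archive actually contains. The important point is to invoke pip via "-m pip" on the pypy3 binary rather than relying on whichever "pip" happens to be first on $PATH.

    ./pypy3.3-v5.2.0-alpha1-osx64/bin/pypy3 -m ensurepip
    ./pypy3.3-v5.2.0-alpha1-osx64/bin/pypy3 -m pip --version
    ./pypy3.3-v5.2.0-alpha1-osx64/bin/pypy3 -m pip install some-package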