From amauryfa at gmail.com Wed Dec 3 20:16:14 2014 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 3 Dec 2014 20:16:14 +0100 Subject: [pypy-dev] GSoC 2015: cpyext project? In-Reply-To: <87mw7bnnog.fsf@tsmithe.net> References: <87mw7bnnog.fsf@tsmithe.net> Message-ID: Hello Toby, Overall it's a nice goal, but I don't think that improving cpyext is easy. Its goal is to reproduce the CPython API, in all its details and caveats. I will list some of them to explain why I think it's a difficult task:

- First, PyPy objects have no fixed layout exposed to C code. For example, PyPy has multiple implementations of lists and dicts, which are chosen at runtime and can even change when the object is mutated, so all the concrete functions of the CPython API need to use the abstract object interface (e.g. PyList_GET_ITEM is not a C macro, but a Python call to type(x).__getitem__, fetched from the class dictionary).

- Then, PyPy uses a moving garbage collector, which moves allocated objects when they survive the first collection. This is not what users of PyObject* pointers expect: the address has to stay the same for the life of the object. So cpyext allocates a PyObject struct at a fixed address, and uses a mapping (expensive!) each time the object crosses the boundary between the interpreter and the C extension. There is even an ob_refcount field, which keeps track of the number of references held in C code; and borrowed references were a nightmare to implement correctly. And I'm sure we don't correctly handle circular references between PyObjects...

- Finally, there is a lot of code that directly accesses C struct members (very common: obj->ob_type->tp_name). So each time an object goes from Python to the C extension, cpyext needs to allocate a struct which contains all these fields, recursively, only to delete them when the call returns, even when the C code does not actually use these fields.
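The fixed-address bookkeeping described above can be sketched in plain Python. Everything here (the PyObjectStub and CpyextBridge names, the fake addresses) is invented for illustration; the real cpyext code is far more involved, but the shape of the expensive mapping is the same:

```python
class PyObjectStub:
    """Invented stand-in for the PyObject struct cpyext allocates at a fixed address."""
    def __init__(self, address, obj):
        self.address = address   # fixed "address"; survives GC moves of obj
        self.obj = obj           # the interpreter-level object
        self.ob_refcount = 1     # references held by C code


class CpyextBridge:
    """Toy model of the object <-> PyObject* mapping; all names are made up."""
    def __init__(self):
        self._next_address = 0x1000
        # keyed on id() for the sketch; in real PyPy even id() needs
        # similar bookkeeping, since objects move
        self._by_obj = {}        # id(obj) -> stub  (the expensive lookup)
        self._by_address = {}    # fake address -> stub

    def to_c(self, obj):
        # Called every time obj crosses from the interpreter into C code.
        stub = self._by_obj.get(id(obj))
        if stub is None:
            stub = PyObjectStub(self._next_address, obj)
            self._next_address += 16
            self._by_obj[id(obj)] = stub
            self._by_address[stub.address] = stub
        else:
            stub.ob_refcount += 1
        return stub.address

    def from_c(self, address):
        # Called when a PyObject* comes back from the C extension.
        return self._by_address[address].obj


bridge = CpyextBridge()
lst = [1, 2, 3]
addr1 = bridge.to_c(lst)
addr2 = bridge.to_c(lst)
assert addr1 == addr2               # the "address" is stable across crossings
assert bridge.from_c(addr1) is lst
```

Every crossing pays at least one dictionary lookup, which is one reason the boundary is so much more expensive than on CPython, where a PyObject* simply is the object.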
Even if cpyext can be made a bit faster, the issues above won't disappear if we want to support all the semantics implied by the CPython API. And believe me, all the features we implemented are needed by one extension or another. I'd say that cpyext is quite mature, because it provides all the infrastructure to support almost all extension modules, and went much farther than we initially expected. But I think it went as far as possible given the differences between CPython and PyPy.

There is a solution though, which is also a nice project: since "cffi" is the preferred way to access C code from PyPy, you could instead write a version of boost::python (maybe renamed to boost::python_cffi) that uses cffi primitives to implement all the boost functions: class_(), def(), and so on. I started this idea some time ago already, and I was able to support the "hello world" example of boost::python. This one: http://www.boost.org/doc/libs/1_57_0/libs/python/doc/tutorial/doc/html/index.html#quickstart.hello_world

I need to find the code I wrote so I can share it (around 250 lines); basically it's a rewrite of boost::python, but using a slightly different C API (to use Python features from C++), and a completely different way to manage memory (similar to JNI: there are Local and Global References, and ffi.new_handle() to create references from objects). This method is much more friendly to PyPy and its JIT (mostly because references don't need to be memory addresses!)

Or maybe you'll find that boost::python is quite complex to reimplement correctly (because it's boost), and you will decide to use the C API defined above directly. I remember there are functions like Object_SetAttrString and PyString_FromString, and it's easy to add new ones. Of course this requires rewriting all your bindings from scratch, but since all the code will be in Python (with snippets of C++) you will find that there are better ways than C++ templates to generate code from regular patterns.
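The Python-side registry such a boost::python_cffi could maintain is easy to sketch. The Module class and def_ method below are hypothetical stand-ins: a real implementation would register cffi callbacks so that the C++ side can invoke them, rather than plain Python callables:

```python
class Module:
    """Hypothetical stand-in for boost::python's module scope."""
    def __init__(self, name):
        self.name = name
        self._functions = {}

    def def_(self, name, fn, doc=""):
        # boost::python spells this def(); the trailing underscore dodges
        # the Python keyword. A real boost::python_cffi would wrap fn in a
        # cffi callback here so the C++ side could call it.
        self._functions[name] = (fn, doc)

    def call(self, name, *args):
        fn, _doc = self._functions[name]
        return fn(*args)


# The boost::python "hello world" tutorial exposes a greet() function; here
# a Python lambda stands in for the cffi-bound C++ function pointer.
hello = Module("hello_ext")
hello.def_("greet", lambda: "hello, world", doc="say hello")
assert hello.call("greet") == "hello, world"
```

The point of the exercise is that all the registration logic lives in Python, where the JIT can see it, instead of in template-expanded C++.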
I haven't seen yet any serious module that uses cffi to interface C++, so any progress in this direction would be awesome. 2014-11-28 20:13 GMT+01:00 Toby St Clere Smithe : > Hi all, > > I've posted a couple of times on here before: I maintain a Python > extension for GPGPU linear algebra[1], but it uses boost.python. I do > most of my scientific computing in Python, but often am forced to use > CPython where I would prefer to use PyPy, largely because of the > availability of extensions. > > I'm looking for an interesting Google Summer of Code project for next > year, and would like to continue working on things that help make > high-performance computing in Python straight-forward. In particular, > I've had my eye on the 'optimising cpyext'[2] project for a while: might > work in that area be available? > > I notice that it is described with difficulty 'hard', and so I'm keen to > enquire early so that I can get up to speed before making a potential > application in the spring. I would love to work on getting cpyext into a > good enough shape that both Cython and Boost.Python extensions are > functional with minimal effort on behalf of the user. Does anyone have > any advice? Are there particular things I should familiarise myself > with? I know there is the module/cpyext tree, but it is quite formidable > for someone uninitiated! > > Of course, I recognise that cpyext is a much trickier proposition in > comparison with things like cffi and cppyy. In particular, I'm very > excited by cppyy and PyCling, but they seem quite bound up in CERN's > ROOT infrastructure, which is a shame. But it's also clear that very > many useful extensions currently use the CPython API, and so -- as I > have often found -- the apparent relative immaturity of cpyext keeps > people away from PyPy, which is also a shame! 
> > [1] https://pypi.python.org/pypi/pyviennacl > [2] https://bitbucket.org/pypy/pypy/wiki/GSOC%202014 > > Best, > > Toby > > > -- > Toby St Clere Smithe > http://tsmithe.net > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Wed Dec 3 20:39:10 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 3 Dec 2014 21:39:10 +0200 Subject: [pypy-dev] GSoC 2015: cpyext project? In-Reply-To: <87mw7bnnog.fsf@tsmithe.net> References: <87mw7bnnog.fsf@tsmithe.net> Message-ID: On Fri, Nov 28, 2014 at 9:13 PM, Toby St Clere Smithe wrote: > Hi all, > > I've posted a couple of times on here before: I maintain a Python > extension for GPGPU linear algebra[1], but it uses boost.python. I do > most of my scientific computing in Python, but often am forced to use > CPython where I would prefer to use PyPy, largely because of the > availability of extensions. > > I'm looking for an interesting Google Summer of Code project for next > year, and would like to continue working on things that help make > high-performance computing in Python straight-forward. In particular, > I've had my eye on the 'optimising cpyext'[2] project for a while: might > work in that area be available? > > I notice that it is described with difficulty 'hard', and so I'm keen to > enquire early so that I can get up to speed before making a potential > application in the spring. I would love to work on getting cpyext into a > good enough shape that both Cython and Boost.Python extensions are > functional with minimal effort on behalf of the user. Does anyone have > any advice? Are there particular things I should familiarise myself > with? I know there is the module/cpyext tree, but it is quite formidable > for someone uninitiated! 
> > Of course, I recognise that cpyext is a much trickier proposition in > comparison with things like cffi and cppyy. In particular, I'm very > excited by cppyy and PyCling, but they seem quite bound up in CERN's > ROOT infrastructure, which is a shame. But it's also clear that very > many useful extensions currently use the CPython API, and so -- as I > have often found -- the apparent relative immaturity of cpyext keeps > people away from PyPy, which is also a shame! > > [1] https://pypi.python.org/pypi/pyviennacl > [2] https://bitbucket.org/pypy/pypy/wiki/GSOC%202014 > > Best, > > Toby > I'm not commenting on speeding up cpyext (I have a few ideas how to do that) Unbounding cppyy from the CERN ROOT infrastructure sounds like a very worthy goal. Does that sound exciting to you? From mail at tsmithe.net Wed Dec 3 20:51:21 2014 From: mail at tsmithe.net (Toby St Clere Smithe) Date: Wed, 03 Dec 2014 20:51:21 +0100 Subject: [pypy-dev] GSoC 2015: cpyext project? References: <87mw7bnnog.fsf@tsmithe.net> Message-ID: <87wq68ikau.fsf@tsmithe.net> Maciej Fijalkowski writes: > Unbounding cppyy from the CERN ROOT infrastructure sounds like a very > worthy goal. Does that sound exciting to you? That does sound worthwhile -- and probably more viable than Amaury's project (sorry, Amaury!). I've actually put out enquiries to the CERN people about a very similar idea, just relating to PyCling -- which is a more general cousin of cppyy, from what I can tell -- and so perhaps I could combine your expertise with theirs. I believe Wim is on this list; I've sent him and the CERN guys an e-mail this evening, but would like to hear back from them before expending much effort thinking about cppyy (since it is all so inter-related). Obviously, my preference would be to work on a project that would help both the CPython and PyPy worlds. 
Best, Toby -- Toby St Clere Smithe http://tsmithe.net From fijall at gmail.com Wed Dec 3 21:03:44 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 3 Dec 2014 22:03:44 +0200 Subject: [pypy-dev] GSoC 2015: cpyext project? In-Reply-To: <87wq68ikau.fsf@tsmithe.net> References: <87mw7bnnog.fsf@tsmithe.net> <87wq68ikau.fsf@tsmithe.net> Message-ID: feel free to come to IRC and discuss it btw On Wed, Dec 3, 2014 at 9:51 PM, Toby St Clere Smithe wrote: > Maciej Fijalkowski writes: >> Unbounding cppyy from the CERN ROOT infrastructure sounds like a very >> worthy goal. Does that sound exciting to you? > > That does sound worthwhile -- and probably more viable than Amaury's > project (sorry, Amaury!). > > I've actually put out enquiries to the CERN people about a very similar > idea, just relating to PyCling -- which is a more general cousin of > cppyy, from what I can tell -- and so perhaps I could combine your > expertise with theirs. I believe Wim is on this list; I've sent him and > the CERN guys an e-mail this evening, but would like to hear back from > them before expending much effort thinking about cppyy (since it is all > so inter-related). Obviously, my preference would be to work on a > project that would help both the CPython and PyPy worlds. > > Best, > > Toby > > -- > Toby St Clere Smithe > http://tsmithe.net > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From mail at tsmithe.net Wed Dec 3 21:18:02 2014 From: mail at tsmithe.net (Toby St Clere Smithe) Date: Wed, 03 Dec 2014 21:18:02 +0100 Subject: [pypy-dev] GSoC 2015: cpyext project? References: <87mw7bnnog.fsf@tsmithe.net> <87wq68ikau.fsf@tsmithe.net> Message-ID: <87ppc0ij2d.fsf@tsmithe.net> Maciej Fijalkowski writes: > feel free to come to IRC and discuss it btw Great -- I will pop in when I know more! 
Toby > On Wed, Dec 3, 2014 at 9:51 PM, Toby St Clere Smithe wrote: >> Maciej Fijalkowski writes: >>> Unbounding cppyy from the CERN ROOT infrastructure sounds like a very >>> worthy goal. Does that sound exciting to you? >> >> That does sound worthwhile -- and probably more viable than Amaury's >> project (sorry, Amaury!). >> >> I've actually put out enquiries to the CERN people about a very similar >> idea, just relating to PyCling -- which is a more general cousin of >> cppyy, from what I can tell -- and so perhaps I could combine your >> expertise with theirs. I believe Wim is on this list; I've sent him and >> the CERN guys an e-mail this evening, but would like to hear back from >> them before expending much effort thinking about cppyy (since it is all >> so inter-related). Obviously, my preference would be to work on a >> project that would help both the CPython and PyPy worlds. >> >> Best, >> >> Toby >> >> -- >> Toby St Clere Smithe >> http://tsmithe.net >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev -- Toby St Clere Smithe http://tsmithe.net From wlavrijsen at lbl.gov Wed Dec 3 22:05:02 2014 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Wed, 3 Dec 2014 13:05:02 -0800 (PST) Subject: [pypy-dev] GSoC 2015: cpyext project? In-Reply-To: <87wq68ikau.fsf@tsmithe.net> References: <87mw7bnnog.fsf@tsmithe.net> <87wq68ikau.fsf@tsmithe.net> Message-ID: Toby, > I've actually put out enquiries to the CERN people about a very similar > idea, just relating to PyCling -- which is a more general cousin of > cppyy, from what I can tell -- and so perhaps I could combine your > expertise with theirs. I'll just quickly answer here first, get into more detail later on the private e-mail (although that won't be for today anymore). We actually have cppyy on CPython with Cling as a backend. 
The nice thing about a C++ interpreter is that you can do things like:

$ python
Python 2.7.7 (default, Jun 20 2014, 13:47:02) [GCC] on linux2
Type "help", "copyright", "credits" or "license" for more information.
using my private settings ...
>>> import cppyy
>>> from cppyy.gbl import gInterpreter
>>> gInterpreter.Declare("template<typename T> class MyClass { public: T m_val; };")
True
>>> from cppyy.gbl import std, MyClass
>>> v = std.vector(MyClass(int))(1)
>>> v[0].m_val
0
>>> v[0].m_val = 2
>>> v[0].m_val
2
>>>

Clearly, it then also allows one to build a 'cppffi' as Armin has asked for.

The catch is that there is a boat load of refactoring to be done. The heavy lifting in the above module is in libCore, libCling, and libPyROOT, for example, which are all part of ROOT. (cppyy in PyPy is properly factored.)

When we refer to 'PyCling', we mean the above, but refactored. To first order, that can be done by stripping all ROOT bits out of PyROOT, but better would be that it utilizes the same backend as does cppyy in PyPy. (You can also use the AST directly, in theory, leaving only clang/llvm as a dependency, but we tried that and it doesn't work. I can get you all the gory details.)

There is more fun to be had than that, though. E.g. cppffi as already mentioned. But beyond that, fully automatically generated bindings get you only 95% of the way. Yes, you get everything bound, but it smells like C++ and is sometimes clunky. Pythonizations get you to 99%, e.g. the above session can be continued like so:

>>> for m in v:
... print m.m_val
...
2
>>>

b/c the PyROOT code recognizes the begin()/end() iterator paradigm. Smart, reflection-based pythonizations are a project in themselves.

Then to get to 100% requires some proper hooks for the programmer to fine-tune behavior, and although PyROOT has some of that, it's rather ad hoc (e.g. settings for memory ownership and GIL handling) and I've never taken the time to think that through, so that could be another fun project.
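The begin()/end() pythonization can be sketched in a few lines of plain Python. FakeVector and its deref helper below are invented stand-ins for a reflected C++ container; real cppyy/PyROOT does this against actual C++ iterator objects discovered via reflection:

```python
class FakeVector:
    """Invented stand-in for a reflected C++ std::vector binding."""
    def __init__(self, values):
        self._values = list(values)

    def begin(self):          # stands in for the C++ begin() iterator
        return 0

    def end(self):            # stands in for the C++ end() iterator
        return len(self._values)

    def deref(self, it):      # stands in for operator*
        return self._values[it]


def pythonize(cls):
    """If a bound class follows the begin()/end() paradigm, synthesize __iter__."""
    if hasattr(cls, "begin") and hasattr(cls, "end"):
        def __iter__(self):
            it = self.begin()
            while it != self.end():
                yield self.deref(it)
                it += 1       # stands in for operator++
        cls.__iter__ = __iter__
    return cls


pythonize(FakeVector)
assert [x for x in FakeVector([2, 3, 5])] == [2, 3, 5]
```

The same shape-detection idea extends to other C++ conventions (size() for len(), operator[] for indexing, and so on), which is what makes reflection-based pythonization a project in itself.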
As said, I'll get to the other e-mail tomorrow. Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From mail at tsmithe.net Wed Dec 3 22:14:09 2014 From: mail at tsmithe.net (Toby St Clere Smithe) Date: Wed, 03 Dec 2014 22:14:09 +0100 Subject: [pypy-dev] GSoC 2015: cpyext project? References: <87mw7bnnog.fsf@tsmithe.net> <87wq68ikau.fsf@tsmithe.net> Message-ID: <87lhmoiggu.fsf@tsmithe.net> Dear Wim, wlavrijsen at lbl.gov writes: > I'll just quickly answer here first, get into more detail later on the > private e-mail (although that won't be for today anymore). Sure. My comments here will also be mostly general (regarding what you've written below). I've suspected that most of what you write is the case, so thanks for clarifying that -- in particular regarding cppyy/PyCling/refactoring (this is fairly clear once you play with the various parts!). I also agree that it is very interesting, and I've also been wondering about the automated Pythonization bits; I agree that the automated bindings just give you something clunky. From my own point of view, I would again be very keen to work on something like that -- not only because it sounds rather fun, but also because it would save me a lot of work regarding PyViennaCL in future! I'm quite interested in the gory details and the fine-tuning, and think I could make quite a good GSoC proposal about this. It's also quite convenient that there is scope to work on it under the aegis of both PyPy and CERN, since that maximises the chances of organisation acceptance (not that it seems likely to me that either would be rejected). I look forward to your e-mail tomorrow! Best, Toby > We actually have cppyy on CPython with Cling as a backend. The nice thing > about a C++ interpreter is that you can do things like: > > $ python > Python 2.7.7 (default, Jun 20 2014, 13:47:02) [GCC] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > using my private settings ...
> >>> import cppyy
> >>> from cppyy.gbl import gInterpreter
> >>> gInterpreter.Declare("template<typename T> class MyClass { public: T m_val; };")
> True
> >>> from cppyy.gbl import std, MyClass
> >>> v = std.vector(MyClass(int))(1)
> >>> v[0].m_val
> 0
> >>> v[0].m_val = 2
> >>> v[0].m_val
> 2
> >>>
>
> Clearly, it then also allows to build a 'cppffi' as Armin has asked for.
>
> The catch is that there is a boat load of refactoring to be done. The heavy
> lifting in the above module is in libCore, libCling, and libPyROOT, for
> example, which are all part of ROOT. (cppyy in PyPy is properly factored.)
>
> When we refer to 'PyCling', we mean the above, but refactored. To first
> order, that can be done by stripping all ROOT bits out of PyROOT, but better
> would be that it utilizes the same backend as does cppyy in PyPy. (You can
> also use the AST directly, in theory, leaving only clang/llvm as dependency,
> but we tried that, but it doesn't work. I can get you all the gory details.)
>
> There is more fun to be had then that, though. E.g. cppffi as already
> mentioned. But beyond, fully automatically generated bindings get you 95%
> of the way only. Yes, you get everything bound, but it smells like C++ and
> is sometimes clunky. Pythonizations get you to 99%, e.g. the above session
> can be continued like so:
>
> >>> for m in v:
> ... print m.m_val
> ...
> 2
> >>>
>
> b/c the PyROOT code recognizes the begin()/end() iterator paradigm. Smart,
> reflection-based pythonizations are a project in themselves.
>
> Then to get to 100%, requires some proper hooks for the programmer to
> fine tune behavior, and although PyROOT has some of that, it's rather ad
> hoc (e.g. settings for memory ownership and GIL handling) and I've never
> taken to time to think that through, so that could be another fun project.
>
> Best regards,
> Wim

-- Toby St Clere Smithe http://tsmithe.net

From van.lindberg at gmail.com Fri Dec 5 16:38:31 2014 From: van.lindberg at gmail.com (VanL) Date: Fri, 5 Dec 2014 09:38:31 -0600 Subject: [pypy-dev] Unify lib_pypy by vendoring six Message-ID: Hi all, I've been doing some experiments with pypy and I would be interested in making parts of the codebase more 3.x compatible. As a first step, I notice that there are slight differences between the lib_pypy shipped in the 2.7 and 3.2 releases. How would people feel about reducing the duplication by consolidating the lib_pypy implementations?

The strategy would be:
- vendor six.py within lib_pypy
- unify implementation as much as possible, using either compatible syntax or six helpers
- if the implementation cannot be unified, putting individual implementations behind six.PY2 or six.PY3 conditionals

Thoughts? Thanks, Van -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Fri Dec 5 16:58:23 2014 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 5 Dec 2014 16:58:23 +0100 Subject: [pypy-dev] Unify lib_pypy by vendoring six In-Reply-To: References: Message-ID: Hi Van, 2014-12-05 16:38 GMT+01:00 VanL : > Hi all, > > I've been doing some experiments with pypy and I would interested in > making parts of the codebase more 3x compatible. As a first step, I notice > that there are slight differences between the lib_pypy shipped in the 2.7 > and 3.2 releases. How would people feel about reducing the duplication by > consolidating the lib_pypy implementations? > lib_pypy is a portion of the stdlib. It contains the modules that CPython implements in C, and that PyPy decided to implement in pure Python. They describe a different version of Python, and have different features. And why would you want to consolidate the code there, and not say in urllib2.py or unicodeobject.py?
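The conditional part of the proposed strategy looks roughly like this in practice. The snippet hand-rolls the PY2 flag so it does not depend on six itself (six provides PY2/PY3 ready-made), and ensure_text is a made-up example helper, not something taken from lib_pypy:

```python
import sys

PY2 = sys.version_info[0] == 2    # roughly what six.PY2 provides

# One shared implementation, with only the divergent bit behind the flag.
if PY2:
    text_type = unicode  # noqa: F821 - this name only exists on Python 2
else:
    text_type = str


def ensure_text(value, encoding="utf-8"):
    """Accept bytes or anything str()-able on both interpreters; return text."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return text_type(value)


assert ensure_text(b"abc") == "abc"
assert ensure_text(123) == "123"
```

The open question in the thread is whether sprinkling lib_pypy with such conditionals is cheaper to maintain than keeping the two versions on separate Mercurial branches.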
The strategy would be: > > - vendor six.py within lib_pypy > - unify implementation as much as possible, using either compatible syntax > or six helpers > - if the implementation cannot be unified, putting individual > implementations behind six.PY2 or six.PY3 conditionals > > Thoughts? > > Thanks, > Van > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.oberstein at tavendo.de Fri Dec 5 17:27:56 2014 From: tobias.oberstein at tavendo.de (Tobias Oberstein) Date: Fri, 5 Dec 2014 08:27:56 -0800 Subject: [pypy-dev] Looking for Python/Twisted developer to work on Open-source Message-ID: <634914A010D0B943A035D226786325D44974B62CA1@EXVMBX020-12.exch020.serverdata.net> Hi, we're looking for an experienced, dedicated Python / Twisted developer to work on http://crossbar.io/ helping us move faster. The code base is around 40k LOC right now, all open-source here https://github.com/crossbario/crossbar https://github.com/tavendo/AutobahnPython This stuff is all new, probably non-trivial, but I'd say technically exciting. Means: a "challenge" should be something that wakes you up;) We have big plans, and we're starting to see strong uptake - both community and commercial, in particular in IoT. You'll work remotely, as you like, when and where you like. Payment by hours for now, maybe more down the road. If you are interested, please respond by mail, including some interesting bits about you, plus your GitHub handle (or similar), expectations, time zone and rate. 
If you are not interested, please forward;) Cheers, /Tobias From van.lindberg at gmail.com Fri Dec 5 20:09:21 2014 From: van.lindberg at gmail.com (VanL) Date: Fri, 5 Dec 2014 13:09:21 -0600 Subject: [pypy-dev] Unify lib_pypy by vendoring six In-Reply-To: References: Message-ID: Hi Amaury, On Fri, Dec 5, 2014 at 9:58 AM, Amaury Forgeot d'Arc wrote: > lib_pypy is a portion of the stdlib. > It contains the modules that CPython implements in C, and that PyPy > decided to implement in pure Python. > > They describe a different version of Python, and have different features. > And why would you want to consolidate the code there, and not say in > urllib2.py or unicodeobject.py? > Two reasons: 1. Pypy bears the maintenance burden for the stuff in lib_pypy. Shared code means less maintenance. The rest of the stdlib is just copied over, so there is no maintenance burden. 2. Most of the APIs (and locations) for the things in lib_pypy *didn't* change very much, as opposed to things like urrlib2 where there was a large reorganization. This makes it a better candidate for consolidation. Thanks, Van -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Dec 6 15:01:16 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 6 Dec 2014 16:01:16 +0200 Subject: [pypy-dev] Unify lib_pypy by vendoring six In-Reply-To: References: Message-ID: On Fri, Dec 5, 2014 at 9:09 PM, VanL wrote: > Hi Amaury, > > On Fri, Dec 5, 2014 at 9:58 AM, Amaury Forgeot d'Arc > wrote: >> >> lib_pypy is a portion of the stdlib. >> It contains the modules that CPython implements in C, and that PyPy >> decided to implement in pure Python. >> >> They describe a different version of Python, and have different features. >> And why would you want to consolidate the code there, and not say in >> urllib2.py or unicodeobject.py? > > > Two reasons: > 1. Pypy bears the maintenance burden for the stuff in lib_pypy. Shared code > means less maintenance. 
The rest of the stdlib is just copied over, so there > is no maintenance burden. The stdlib might be copied over, but there is still quite a bit of burden with differing C interfaces, semantics, performance etc. Why did CPython not decide to go that way in the first place btw? > 2. Most of the APIs (and locations) for the things in lib_pypy *didn't* > change very much, as opposed to things like urllib2 where there was a large > reorganization. This makes it a better candidate for consolidation. > > Thanks, > > Van I'm -1 on the idea unless proven otherwise (it does add burden on people writing stuff for lib_pypy on python2 for example) From arigo at tunes.org Sat Dec 6 15:09:25 2014 From: arigo at tunes.org (Armin Rigo) Date: Sat, 6 Dec 2014 15:09:25 +0100 Subject: [pypy-dev] Unify lib_pypy by vendoring six In-Reply-To: References: Message-ID: Hi, On 6 December 2014 at 15:01, Maciej Fijalkowski wrote: > I'm -1 on the idea unless proven otherwise (it does add burden on > people writing stuff for lib_pypy on python2 for example) We mostly never write new stuff in lib_pypy on Python 2. But this makes it a perfect example of a directory where Mercurial shines with the current PyPy2-vs-PyPy3 split into branches: my guess is that the current solution is much better than the proposed change. The lib_pypy directory is like the Modules/*.c files of CPython: trying to share them with #ifdefs between CPython releases is a bad idea for maintenance reasons, and a proper use of Mercurial is better. A bientôt, Armin. From arigo at tunes.org Mon Dec 8 16:49:13 2014 From: arigo at tunes.org (Armin Rigo) Date: Mon, 8 Dec 2014 15:49:13 +0000 Subject: [pypy-dev] Poor performance for Krakatau In-Reply-To: References: Message-ID: Hi Robert, On 10 November 2014 at 05:27, Maciej Fijalkowski wrote: > I've been looking at krakatau performance for a while, it's almost > exclusively warmup time.
We are going to address it, I hope rather sooner than later :-) We added Krakatau to the official benchmark suite. It turns out not to be exclusively warmup time: after the program is fully warmed up, it is still almost 2 times slower than CPython. We'll look at it at some point now that it's on speed.pypy.org and annoying us regularly :-) A bientôt, Armin & Carl Friedrich From tbaldridge at gmail.com Tue Dec 9 07:01:12 2014 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Mon, 8 Dec 2014 23:01:12 -0700 Subject: [pypy-dev] Getting rid of "prebuilt instance X has no attribute Y" warnings Message-ID: I'm getting a ton of these sorts of warnings. They seem to go away when I either a) type hint the object via assert (gross) or b) access the attribute via a getter method. Is there a better way? Would there be a problem with somehow just turning this warning off? Thanks, Timothy -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmalcolm at redhat.com Sat Dec 13 03:13:10 2014 From: dmalcolm at redhat.com (David Malcolm) Date: Fri, 12 Dec 2014 21:13:10 -0500 Subject: [pypy-dev] Experiments with PyPy and libgccjit Message-ID: <1418436790.3830.16.camel@surprise> I'm the maintainer of a new feature for the (not-yet-released) GCC 5: libgccjit: a way to build gcc as a shared library, suitable for generating code in-process. See: https://gcc.gnu.org/wiki/JIT I've been experimenting with embedding it within PyPy - my thought was that gcc has great breadth of hardware support, so maybe PyPy could use libgccjit as a fallback backend for targets which don't yet have their own pypy jit backends. I'm attaching the work I've got so far, in patch form; I apologize for the rough work-in-progress nature of the patch. It has: * a toy example of calling libgccjit from cffi, to build and run code in process (see rpython/jit/backend/libgccjit/cffi_bindings.py).
* doing the same from rffi (see rpython/jit/backend/libgccjit/rffi_bindings.py and rpython/jit/backend/libgccjit/test/test_rffi_bindings.py) These seem to work: the translator builds binaries that call into my library, which builds machine code "on the fly". Is there a way to do this without going through the translation step? * the beginnings of a JIT backend: I hack up rpython/jit/backend/detect_cpu.py to always use: rpython/jit/backend/libgccjit/runner.py and this merely raises an exception, albeit dumping the operations seen in loops. My thinking is that I ought to be able to use the rffi bindings of libgccjit to implement the backend, and somehow turn the operations I'm seeing into calls into my libgccjit API. Does this sound useful, and am I on the right track here? Is there documentation about the meaning of the various kinds of operations within a to-be-JITted-loop? Thanks Dave -------------- next part -------------- A non-text attachment was scrubbed... Name: pypy-libgccjit-WIP.patch Type: text/x-patch Size: 26961 bytes Desc: not available URL: From 958816492 at qq.com Sat Dec 13 07:05:25 2014 From: 958816492 at qq.com (=?gb18030?B?y7w=?=) Date: Sat, 13 Dec 2014 14:05:25 +0800 Subject: [pypy-dev] Ask Pypy within virtualenv of Windows 7/8 for Help ---- bitpeach from china Message-ID: Dear Ms./Mr. Director / Dear Pypy Team: I'm an e-pal named bitpeach. I'm very interested in your work and admire PyPy. Therefore, I am trying to install PyPy and practice with it during my working and studying. But I have a problem: what worries me is that there are few versions of PyPy running on Windows. The problem is as follows: (1) I want to install PyPy without confusing its 3rd-party packages or libraries with the Python 2.7 already in my operating system, Windows 7/8 (32bit). So I chose to follow the tutorial and set up virtualenv, as described in "Installing using virtualenv".
(2) After installing virtualenv successfully, I needed to arrange a new space for PyPy, so I downloaded the PyPy available for Windows, shown as "Python2.7 compatible PyPy 2.4.0 - Windows binary (32bit)". (3) Then I extracted the pypy-2.4.0-win32.zip file to a normal folder and used a virtualenv command like ">virtualenv.exe -p \pathto\pypy.exe". The "-p $PATH" option means I want to choose PyPy as the default Python interpreter; otherwise it will choose the Python27 already installed in my Windows system. However, the command raises an error and fails to build a virtual environment for PyPy. From this I realized that the specific parameters of the command on Windows and on Unix/Linux are different. Although I notice that your tutorial shows $ virtualenv -p /opt/pypy-c-jit-41718-3fb486695f20-linux/bin/pypy my-pypy-env with a difference between Windows 7/8 and Unix/Linux, I still cannot solve the problem on Windows 7/8 and do not know how to build a virtual environment that appoints PyPy as the default Python interpreter on Windows 7/8. I remember your team and website are very good at this, and I cannot praise the speed of PyPy too highly. So you are the best and most professional. Hence I come to seek your help. Best Regards! Hope PyPy gets better and better, especially the versions for Windows! Sincerely yours! :-) bitpeach 2014-12-13 Sat. Email From China -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Dec 13 14:59:41 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 13 Dec 2014 15:59:41 +0200 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: <1418436790.3830.16.camel@surprise> References: <1418436790.3830.16.camel@surprise> Message-ID: Hi Dave. There is no documentation, but we can help you on IRC. Two things that pop into my mind: * Can libgccjit patch existing assembler? (it's necessary) * Can libgccjit tell us where on stack are GC roots?
(also necessary) On Sat, Dec 13, 2014 at 4:13 AM, David Malcolm wrote: > I'm the maintainer of a new feature for the (not-yet-released) GCC 5: > libgccjit: a way to build gcc as a shared library, suitable for > generating code in-process. See: > https://gcc.gnu.org/wiki/JIT > > I've been experimenting with embedding it within PyPy - my thought was > that gcc has great breadth of hardware support, so maybe PyPy could use > libgccjit as a fallback backend for targets which don't yet have their > own pypy jit backends. > > I'm attaching the work I've got so far, in patch form; I apologize for > the rough work-in-progress nature of the patch. It has: > > * a toy example of calling libgccjit from cffi, to build and > run code in process (see > rpython/jit/backend/libgccjit/cffi_bindings.py). > > * doing the same from rffi (see > rpython/jit/backend/libgccjit/rffi_bindings.py and > rpython/jit/backend/libgccjit/test/test_rffi_bindings.py) > These seem to work: the translator builds binaries that call > into my library, which builds machine code "on the fly". > Is there a way to do this without going through the > translation step? > > * the beginnings of a JIT backend: > I hack up rpython/jit/backend/detect_cpu.py to always use: > rpython/jit/backend/libgccjit/runner.py > and this merely raises an exception, albeit dumping the > operations seen in loops. > > My thinking is that I ought to be able to use the rffi bindings of > libgccjit to implement the backend, and somehow turn the operations I'm > seeing into calls into my libgccjit API. > > Does this sound useful, and am I on the right track here? > > Is there documentation about the meaning of the various kinds of > operations within a to-be-JITted-loop? 
> > Thanks > Dave > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From matti.picus at gmail.com Sat Dec 13 21:51:03 2014 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 13 Dec 2014 22:51:03 +0200 Subject: [pypy-dev] Ask Pypy within virtualenv of Windows 7/8 for Help ---- bitpeach from china Message-ID: <548CA6B7.3010605@gmail.com> The latest released virtualenv is version 1.11.6 which does not support pypy 2.4.0 and earlier on windows. A fix was merged into virtualenv after this version (released 17 May 2014) so any newer release will be fixed. In the mean time you can: 1. run "virtualenv -p path\to\pypy.exe new_path" and it will fail 2. copy by hand the directories lib_pypy and lib-python from path\to to new_path 3. rerun virtualenv, it should complete cleanly after installing setuptools and pip Matti > Dear Ms./Mr. Director / Dear Pypy Team: > I'm a e-pal named bitpeach. I'm so interested in your work and > admire the Pypy. > Therefore, I try to install Pypy and pratice with Pypy during my > working and studying. But I got a problem, what is worrying me is that > there is few versions of Pypy running on the Windows. So the problem > comes as follows: > (1)I want to instal Pypy but do not confuse the the 3rd packages > or libraries with Python2.7 already in my operate system Windows 7/8 > (32bit). Then I choose to follow the tutorial settings to install > VirtualEnv just like "Installing using virtualenv". > ... bitpeach > 2014-12-13 Sat. 
> Email From China > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From 958816492 at qq.com Sun Dec 14 08:52:30 2014 From: 958816492 at qq.com (思) Date: Sun, 14 Dec 2014 15:52:30 +0800 Subject: [pypy-dev] Thank You Matti Picus AND I may have some new questions ---- bitpeach from china In-Reply-To: <548CA6B7.3010605@gmail.com> References: <548CA6B7.3010605@gmail.com> Message-ID: Dear PyPy Directors or Team / Dear friend Ms. or Mr. Picus: Glad to hear from you. I really appreciate your email. Thank you, Picus. With your help, I have some new questions, as follows. (1) I checked the virtualenv version list at this URL, and 1.11.6 is the latest; there is no newer version. This means no released version can support pypy-2.4.0. (2) So I kept the complete pypy-2.4.0 folder as the old path, created a new folder, and copied the directories lib_pypy and lib-python into it. (3) Then I used the command "virtualenv -p \path\to\oldfolder\pypy.exe \path\to\newfolder". As the following pictures show, please allow me to explain. [3-1] I have already installed Python 2.7 and virtualenv 1.11.6. [3-2] I put a complete pypy-2.4.0 installation in the folder "e:\Python27\Scripts\pypy-2.4.0-win32" as the old path. The picture below presents this. (3-2 Picture) [3-3] I created a new folder named "VirtualenvPypy" and copied the directories lib_pypy & lib-python from "pypy-2.4.0-win32" into it. The picture below presents this. (3-3 Picture) [3-4] Finally, I used the command as below. The two pictures below present this. In (3-4 Picture B), the red one is the old path and the blue one the new path. If my command is wrong, please help me revise it. (3-4 Picture A) (3-4 Picture B) (4) The command failed.
And I guess that making pypy.exe the default interpreter this way may simply not work with Windows 7/8 and PyPy 2.4.0. I am also worried that most material in foreign and domestic forums covers pypy-2.4.0 with virtualenv on Linux, such as CentOS, so I really doubt whether my idea of installing pypy-2.4.0 in a virtualenv on Windows 7/8 is correct. All pictures are uploaded as attachments. (5) I hope more interested people who want to install pypy-2.4.0 with virtualenv on Windows 7/8 will join this discussion, and I wish you or the directors could help me. I really love this crystallization of wisdom, PyPy! It carries my great enthusiasm and interest along. Thank you! I'm eternally grateful for your help. bitpeach 2014-12-14 China ------------------OLD MAIL ------------------ From: "Matti Picus"; Date: 2014-12-14 (Sun.) 4:51 AM To: "PyPy Developer Mailing List"; Cc: "思" <958816492 at qq.com>; Subject: Re: [pypy-dev] Ask Pypy within virtualenv of Windows 7/8 for Help---- bitpeach from china The latest released virtualenv is version 1.11.6 which does not support pypy 2.4.0 and earlier on windows. A fix was merged into virtualenv after this version (released 17 May 2014) so any newer release will be fixed. In the mean time you can: 1. run "virtualenv -p path\to\pypy.exe new_path" and it will fail 2. copy by hand the directories lib_pypy and lib-python from path\to to new_path 3. rerun virtualenv, it should complete cleanly after installing setuptools and pip Matti > Dear Ms./Mr. Director / Dear Pypy Team: > I'm a e-pal named bitpeach. I'm so interested in your work and > admire the Pypy. > Therefore, I try to install Pypy and pratice with Pypy during my > working and studying. But I got a problem, what is worrying me is that > there is few versions of Pypy running on the Windows. So the problem > comes as follows: > (1)I want to instal Pypy but do not confuse the the 3rd packages > or libraries with Python2.7 already in my operate system Windows 7/8 > (32bit).
> Then I choose to follow the tutorial settings to install > VirtualEnv just like "Installing using virtualenv". > ... bitpeach > 2014-12-13 Sat. > Email From China > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From arigo at tunes.org Sun Dec 14 22:03:07 2014 From: arigo at tunes.org (Armin Rigo) Date: Sun, 14 Dec 2014 21:03:07 +0000 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: References: <1418436790.3830.16.camel@surprise> Message-ID: Hi, On 13 December 2014 at 13:59, Maciej Fijalkowski wrote: > * Can libgccjit tell us where on stack are GC roots? (also necessary) This constraint can be relaxed nowadays: it's enough e.g. if we tell gcc to reserve register %rbp to contain the jitframe object. That's the only GC root that's really necessary from pieces of generated assembler. A bientôt, Armin.
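Matti's three-step workaround from the virtualenv thread above can be scripted. This is a hedged sketch, not PyPy project code: the `prepare_pypy_env` helper and the paths are hypothetical, and it automates only step 2 (copying `lib_pypy` and `lib-python` into the new environment by hand) so that rerunning virtualenv can complete.

```python
import os
import shutil

def prepare_pypy_env(pypy_dir, env_dir):
    """Hypothetical helper for step 2 of the workaround: virtualenv 1.11.6
    fails on Windows before copying PyPy's stdlib directories itself, so we
    copy lib_pypy and lib-python into the new environment by hand."""
    names = ("lib_pypy", "lib-python")
    for name in names:
        src = os.path.join(pypy_dir, name)
        dst = os.path.join(env_dir, name)
        if os.path.isdir(src) and not os.path.isdir(dst):
            shutil.copytree(src, dst)
    # report which stdlib directories are now present in the environment
    return [n for n in names if os.path.isdir(os.path.join(env_dir, n))]
```

After this, rerunning `virtualenv -p path\to\pypy.exe new_path` should, per Matti's step 3, complete cleanly and install setuptools and pip.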
From alex.gaynor at gmail.com Sun Dec 14 22:15:00 2014 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 14 Dec 2014 21:15:00 +0000 Subject: [pypy-dev] stdlib-2.7.9! Message-ID: Hey all, Earlier today I created the 2.7.9 branch, with the copy of the 2.7.9 stdlib. http://buildbot.pypy.org/summary?branch=stdlib-2.7.9 is the branch summary. It's no surprise, the biggest work to be done is for the ssl module, 2.7.9 contains a complete backport of 3.4's ssl module. We have up through 3.2s version of the ssl module implemented on the py3k branch. I'd like some feedback from folks on how you think we should best handle finishing the 2.7.9 work. Should I copy the work from py3k, finish anything missing, and then when we get to python 3.4 on the py3k branch the work is just "already done"? Something else? Feedback please! Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From rymg19 at gmail.com Sun Dec 14 23:22:30 2014 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Sun, 14 Dec 2014 16:22:30 -0600 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: <1418436790.3830.16.camel@surprise> References: <1418436790.3830.16.camel@surprise> Message-ID: As awesome as this would be, I'd be surprised if this worked since LLVM didn't. On Fri, Dec 12, 2014 at 8:13 PM, David Malcolm wrote: > > I'm the maintainer of a new feature for the (not-yet-released) GCC 5: > libgccjit: a way to build gcc as a shared library, suitable for > generating code in-process. See: > https://gcc.gnu.org/wiki/JIT > > I've been experimenting with embedding it within PyPy - my thought was > that gcc has great breadth of hardware support, so maybe PyPy could use > libgccjit as a fallback backend for targets which don't yet have their > own pypy jit backends. > > I'm attaching the work I've got so far, in patch form; I apologize for > the rough work-in-progress nature of the patch. 
It has: > > * a toy example of calling libgccjit from cffi, to build and > run code in process (see > rpython/jit/backend/libgccjit/cffi_bindings.py). > > * doing the same from rffi (see > rpython/jit/backend/libgccjit/rffi_bindings.py and > rpython/jit/backend/libgccjit/test/test_rffi_bindings.py) > These seem to work: the translator builds binaries that call > into my library, which builds machine code "on the fly". > Is there a way to do this without going through the > translation step? > > * the beginnings of a JIT backend: > I hack up rpython/jit/backend/detect_cpu.py to always use: > rpython/jit/backend/libgccjit/runner.py > and this merely raises an exception, albeit dumping the > operations seen in loops. > > My thinking is that I ought to be able to use the rffi bindings of > libgccjit to implement the backend, and somehow turn the operations I'm > seeing into calls into my libgccjit API. > > Does this sound useful, and am I on the right track here? > > Is there documentation about the meaning of the various kinds of > operations within a to-be-JITted-loop? > > Thanks > Dave > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -- Ryan If anybody ever asks me why I prefer C++ to C, my answer will be simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was nul-terminated." Personal reality distortion fields are immune to contradictory evidence. - srean Check out my website: http://kirbyfan64.github.io/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fijall at gmail.com Sun Dec 14 23:42:22 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 15 Dec 2014 00:42:22 +0200 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: References: <1418436790.3830.16.camel@surprise> Message-ID: On Mon, Dec 15, 2014 at 12:22 AM, Ryan Gonzalez wrote: > As awesome as this would be, I'd be surprised if this worked since LLVM > didn't. and you're basing it on what precisely? LLVM didn't work for a variety of reasons, a myriad bugs being one of them From rymg19 at gmail.com Mon Dec 15 05:35:33 2014 From: rymg19 at gmail.com (Ryan) Date: Sun, 14 Dec 2014 22:35:33 -0600 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: References: <1418436790.3830.16.camel@surprise> Message-ID: <1f8b0407-4e19-4c96-9039-fcdc1bf1af2e@email.android.com> I don't know; it just seems weird, since LLVM and libgccjit seem to hold similar concepts (though there's a 99% chance I'm wrong; I just glanced over the libgccjit description). What I *really* wish PyPy could have would be a C-- backend. *That* would be insanely awesome and would probably blow the C backend out of the water. Maciej Fijalkowski wrote: >On Mon, Dec 15, 2014 at 12:22 AM, Ryan Gonzalez >wrote: >> As awesome as this would be, I'd be surprised if this worked since >LLVM >> didn't. > >and you're basing it on what precisely? > >LLVM didn't work for a variety of reasons, a myriad bugs being one of >them -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. Check out my website: http://kirbyfan64.github.io/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fijall at gmail.com Mon Dec 15 08:22:09 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 15 Dec 2014 09:22:09 +0200 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: <1f8b0407-4e19-4c96-9039-fcdc1bf1af2e@email.android.com> References: <1418436790.3830.16.camel@surprise> <1f8b0407-4e19-4c96-9039-fcdc1bf1af2e@email.android.com> Message-ID: On Mon, Dec 15, 2014 at 6:35 AM, Ryan wrote: > I don't know; it just seems weird, since LLVM and libgccjit seem to hold > similar concepts (though there's a 99% chance I'm wrong; I just glanced over > the libgccjit description). > > What I *really* wish PyPy could have would be a C-- backend. *That* would be > insanely awesome and would probably blow the C backend out of the water. You seem to have a lot of opinions. Can you back this one up with something? From amauryfa at gmail.com Mon Dec 15 11:01:18 2014 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 15 Dec 2014 11:01:18 +0100 Subject: [pypy-dev] stdlib-2.7.9! In-Reply-To: References: Message-ID: I suspect that some 2.7.9 changes should not go in 3.2, but are only compatible with a 3.3 or 3.4 stdlib... Is there a way to skip the merge so these changes directly go to the 3.3 branch? 2014-12-14 22:15 GMT+01:00 Alex Gaynor : > Hey all, > > Earlier today I created the 2.7.9 branch, with the copy of the 2.7.9 > stdlib. > > http://buildbot.pypy.org/summary?branch=stdlib-2.7.9 is the branch > summary. > > It's no surprise, the biggest work to be done is for the ssl module, 2.7.9 > contains a complete backport of 3.4's ssl module. > > We have up through 3.2s version of the ssl module implemented on the py3k > branch. I'd like some feedback from folks on how you think we should best > handle finishing the 2.7.9 work. > > Should I copy the work from py3k, finish anything missing, and then when > we get to python 3.4 on the py3k branch the work is just "already done"? > Something else? > > Feedback please! 
> Alex > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Dec 15 11:53:46 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 15 Dec 2014 12:53:46 +0200 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: References: <1418436790.3830.16.camel@surprise> <1f8b0407-4e19-4c96-9039-fcdc1bf1af2e@email.android.com> Message-ID: On Mon, Dec 15, 2014 at 9:22 AM, Maciej Fijalkowski wrote: > On Mon, Dec 15, 2014 at 6:35 AM, Ryan wrote: >> I don't know; it just seems weird, since LLVM and libgccjit seem to hold >> similar concepts (though there's a 99% chance I'm wrong; I just glanced over >> the libgccjit description). >> >> What I *really* wish PyPy could have would be a C-- backend. *That* would be >> insanely awesome and would probably blow the C backend out of the water. > > You seem to have a lot of opinions. Can you back this one up with something? To clarify my question: C-- looks like a cool idea, but not very actively developed. I would expect to run into bugs or just missing features. Some parts are obviously more suited for compilers than say C, but I would expect GCC (and to some extent LLVM) to be more mature and have better optimizations. I would need to see some evidence of C-- being used by someone else than original author before trying to evaluate it. 
Cheers, fijal From rymg19 at gmail.com Mon Dec 15 18:30:03 2014 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Mon, 15 Dec 2014 11:30:03 -0600 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: References: <1418436790.3830.16.camel@surprise> <1f8b0407-4e19-4c96-9039-fcdc1bf1af2e@email.android.com> Message-ID: On Mon, Dec 15, 2014 at 4:53 AM, Maciej Fijalkowski wrote: > > On Mon, Dec 15, 2014 at 9:22 AM, Maciej Fijalkowski > wrote: > > On Mon, Dec 15, 2014 at 6:35 AM, Ryan wrote: > >> I don't know; it just seems weird, since LLVM and libgccjit seem to hold > >> similar concepts (though there's a 99% chance I'm wrong; I just glanced > over > >> the libgccjit description). > >> > >> What I *really* wish PyPy could have would be a C-- backend. *That* > would be > >> insanely awesome and would probably blow the C backend out of the water. > > > > You seem to have a lot of opinions. Can you back this one up with > something? > I like to experiment with stuff that probably won't work. Can't help it. > > To clarify my question: > > C-- looks like a cool idea, but not very actively developed. I would > expect to run into bugs or just missing features. Some parts are > obviously more suited for compilers than say C, but I would expect GCC > (and to some extent LLVM) to be more mature and have better > optimizations. I would need to see some evidence of C-- being used by > someone else than original author before trying to evaluate it. > > Which is why I said *I wish*. It probably won't happen because Quick C-- (the only C-- compiler) has been abandoned. I think C-- would be great because it has an awesome runtime interface. You can traverse the stack, gather roots, mark specific variables as roots and others as non-roots...all built-in. > Cheers, > fijal > -- Ryan If anybody ever asks me why I prefer C++ to C, my answer will be simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was nul-terminated." 
Personal reality distortion fields are immune to contradictory evidence. - srean Check out my website: http://kirbyfan64.github.io/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Dec 16 14:25:52 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 16 Dec 2014 15:25:52 +0200 Subject: [pypy-dev] Experiments with PyPy and libgccjit In-Reply-To: References: <1418436790.3830.16.camel@surprise> <1f8b0407-4e19-4c96-9039-fcdc1bf1af2e@email.android.com> Message-ID: On Mon, Dec 15, 2014 at 7:30 PM, Ryan Gonzalez wrote: > On Mon, Dec 15, 2014 at 4:53 AM, Maciej Fijalkowski > wrote: >> >> On Mon, Dec 15, 2014 at 9:22 AM, Maciej Fijalkowski >> wrote: >> > On Mon, Dec 15, 2014 at 6:35 AM, Ryan wrote: >> >> I don't know; it just seems weird, since LLVM and libgccjit seem to >> >> hold >> >> similar concepts (though there's a 99% chance I'm wrong; I just glanced >> >> over >> >> the libgccjit description). >> >> >> >> What I *really* wish PyPy could have would be a C-- backend. *That* >> >> would be >> >> insanely awesome and would probably blow the C backend out of the >> >> water. >> > >> > You seem to have a lot of opinions. Can you back this one up with >> > something? > > > I like to experiment with stuff that probably won't work. Can't help it. > >> >> >> To clarify my question: >> >> C-- looks like a cool idea, but not very actively developed. I would >> expect to run into bugs or just missing features. Some parts are >> obviously more suited for compilers than say C, but I would expect GCC >> (and to some extent LLVM) to be more mature and have better >> optimizations. I would need to see some evidence of C-- being used by >> someone else than original author before trying to evaluate it. >> > > Which is why I said *I wish*. It probably won't happen because Quick C-- > (the only C-- compiler) has been abandoned. I think C-- would be great > because it has an awesome runtime interface. 
You can traverse the stack, > gather roots, mark specific variables as roots and others as non-roots...all > built-in. Yes, I like the idea too :-) but getting from an idea to a working implementation gives a lot of headaches, and the implementation might turn out not to be as cool as the idea. From arigo at tunes.org Fri Dec 19 00:16:18 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 18 Dec 2014 23:16:18 +0000 Subject: [pypy-dev] Getting rid of "prebuilt instance X has no attribute Y" warnings In-Reply-To: References: Message-ID: Hi Timothy, On 9 December 2014 at 06:01, Timothy Baldridge wrote: > I'm getting a ton of these sort of warnings. They seem to go away when I > either a) type hint the object via assert (gross) or b) access the attribute > via a getter method. Is there a better way? Would there be a problem with > somehow just turning this warning off? The problem is that attributes get moved unexpectedly to base classes. Then all instances of any subclass of that base class will have all the attributes, i.e. be unexpectedly fat. This is why e.g. in PyPy we added this to a few crucial base classes: class W_Root(object): _attrs_ = () # can also be __slots__ = () It crashes the annotator when attributes (other than those listed, which is none in that case) are unexpectedly moved to that base class. You're seeing the problem as warnings about prebuilt instances' attributes, but the problem more generally applies to all instances, prebuilt or not. How to fix that depends on your code. Usually you need to figure out a bit more context, i.e. seeing where the read or write of the attribute occurs and trying to understand why the annotator thinks the object in question can be of any subclass of the root class rather than only some specific subclass. Sometimes you really need "assert isinstance(x, Subclass)" but usually the problem can be fixed more cleanly. A bientôt, Armin.
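Armin's `_attrs_ = ()` trick is RPython-specific (it is enforced by the annotator at translation time), but the plain-Python analogue he mentions, `__slots__ = ()`, demonstrates the same idea and runs directly. A minimal sketch with a hypothetical class hierarchy:

```python
class W_Root(object):
    # An empty __slots__ declares "no attributes live on this base class":
    # an attribute accidentally written here raises an error instead of
    # silently making every subclass instance fatter.
    __slots__ = ()

class W_Int(W_Root):
    __slots__ = ("value",)   # the only attribute W_Int instances carry
    def __init__(self, value):
        self.value = value

w = W_Int(42)
try:
    w.extra = 1              # not declared anywhere: rejected
except AttributeError:
    print("attribute creep caught")
```

In RPython the corresponding failure happens at annotation time rather than at runtime, which is exactly the early warning `_attrs_` is meant to provide.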
From matti.picus at gmail.com Wed Dec 24 23:29:55 2014 From: matti.picus at gmail.com (Matti Picus) Date: Thu, 25 Dec 2014 00:29:55 +0200 Subject: [pypy-dev] recent commit to pypy/numpy build-shared-object Message-ID: <549B3E63.3040108@gmail.com> I would like to hear what you think about the build-shared-object branch on pypy/numpy. The goal is to build a pure shared object for cffi rather than a traditional python module. I used link_shared_lib() from the distutils ccompiler, but had to hack a bit to get it to play nicely with the numpy build system. Any comments would be appreciated. Matti From fijall at gmail.com Thu Dec 25 11:39:12 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 25 Dec 2014 12:39:12 +0200 Subject: [pypy-dev] recent commit to pypy/numpy build-shared-object In-Reply-To: <549B3E63.3040108@gmail.com> References: <549B3E63.3040108@gmail.com> Message-ID: On Thu, Dec 25, 2014 at 12:29 AM, Matti Picus wrote: > I would like to hear what you think about the build-shared-object branch on > pypy/numpy > The goal is to build a pure shared object for cffi rather than a traditional > python module > I used link_shared_lib() from the distutils ccompiler, but had to hack a > bit to get it to play nicely with the numpy build system. > Any comments would be appreciated. > Matti > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev Armin was working on something like that for cffi 1.0 or is it something completely different? From arigo at tunes.org Thu Dec 25 18:26:19 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 25 Dec 2014 18:26:19 +0100 Subject: [pypy-dev] Some work on virtualenv Message-ID: Hi all, Some work started on the virtualenv refactoring that some people here might be interested to hear about: https://github.com/pypa/virtualenv/pull/691 A bientôt, Armin.
From matti.picus at gmail.com Sun Dec 28 21:48:25 2014 From: matti.picus at gmail.com (Matti Picus) Date: Sun, 28 Dec 2014 22:48:25 +0200 Subject: [pypy-dev] ufuncapi branch Message-ID: <54A06C99.2060808@gmail.com> I have been plugging away at getting linalg support working via the ufunc capi in cpyext. It turns out that the branch can actually run much of linalg, and does not crash pypy. The ufunc api is very convoluted: it uses a function-selection mechanism based on dtypes, and function interface specification via signatures. I am sure I have not covered all the corners with the tests that exist in micronumpy, and the numpy tests seem very minimal as well, but it seems to work as advertised. Currently numpy's linalg uses the cpyext interface; my next step should be to use cffi instead via the extended frompyfunc() interface that supports most of the ufunc capi arguments. This work will happen in the cffi-linalg branch of pypy/numpy. I would like to merge the ufuncapi branch of pypy to default, which would make work on the pypy/numpy repo easier. Are there objections and/or does anyone know of a wider suite of tests of ufuncs? Matti Note that we now have a solution for getting non-ui matplotlib plots: - translate the pypy ufuncapi branch - set it up in a virtualenv - install the cffi-linalg branch of pypy/numpy - install github.com/mattip/matplotlib This is sufficient to run the python-benchmarks repo from https://github.com/numfocus/python-benchmarks From fijall at gmail.com Mon Dec 29 10:03:20 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 29 Dec 2014 11:03:20 +0200 Subject: [pypy-dev] ufuncapi branch In-Reply-To: <54A06C99.2060808@gmail.com> References: <54A06C99.2060808@gmail.com> Message-ID: I would say "go ahead". Do you want someone to review it a bit? Cheers, fijal On Sun, Dec 28, 2014 at 10:48 PM, Matti Picus wrote: > I have been plugging away at getting linalg support working via the ufunc > capi in cpyext.
> It turns out that the branch can actually run much of linalg, and does not > crash pypy. > The ufunc api is a very convoluted, it uses a function-selection mechanism > based on dtypes, and function interface specification via signatures. I am > sure I have not covered all the corners with the tests that exist in > micronumpy, and the numpy tests seem very minimal as well, but it seems to > work as advertised. > Currently numpy's linalg uses the cpyext interface, my next step should be > to use cffi instead via the extended frompyfunc() interface that supports > most of the ufunc capi arguments, this work will happen in the cffi-linalg > branch of pypy/numpy. > I would like to merge the ufuncapi branch of pypy to default, that would > make work on the pypy/numpy repo easier. Are there objections and/or does > anyone know of a wider suite of tests of ufuncs? > > Matti > > Note that we now have a solution for getting non-ui matplotlib plots: > - translate the pypy ufuncapi branch > - set it up in a virtualenv > - install the cffi-linalg branch of pypy/numpy > - install github.com/mattip/matplotlib > > This is sufficient to run the python-benchmarks repo from > https://github.com/numfocus/python-benchmarks > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From wouter.pypy at richtlijn.be Tue Dec 30 20:12:40 2014 From: wouter.pypy at richtlijn.be (Wouter van Heijst) Date: Tue, 30 Dec 2014 21:12:40 +0200 Subject: [pypy-dev] --shared on osx Message-ID: <20141230191240.GA2872@fice> Hei all, at the moment --shared on OSX is broken, merging the following PR: https://bitbucket.org/pypy/pypy/pull-request/293/shared-support-on-osx/ should take care of it. Is there anything I can do to move the pull request forward? 
Cheers, Wouter From tbaldridge at gmail.com Tue Dec 30 20:22:33 2014 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Tue, 30 Dec 2014 12:22:33 -0700 Subject: [pypy-dev] How do I get better jit_libffi traces? Message-ID: I'm trying to optimize the FFI functionality in Pixie, and I'm not sure how to proceed. From what I understand, the JIT generator is able to optimize away calls to jit_libffi and simply replace them with bare calls to the c functions. However, no matter how I hint or mark things as immutable, I seem to always have a call to "jit_ffi_call_int". What exactly triggers the JIT to remove that call? The part of the trace involved in the jit call looks like this: https://gist.github.com/halgari/31b188e8e4757ccb3218 Any ideas? Thanks, Timothy -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Dec 30 20:34:20 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 30 Dec 2014 21:34:20 +0200 Subject: [pypy-dev] How do I get better jit_libffi traces? In-Reply-To: References: Message-ID: I would expect cif_description to need to be constant (in your example it's i126, which I don't know where it comes from, but it's definitely not a constant). Simple promote should do the trick? On Tue, Dec 30, 2014 at 9:22 PM, Timothy Baldridge wrote: > I'm trying to optimize the FFI functionality in Pixie, and I'm not sure how > to proceed. From what I understand, the JIT generator is able to optimize > away calls to jit_libffi and simply replace them with bare calls to the c > functions. However, no matter how I hint or mark things as immutable, I > seem to always have a call to "jit_ffi_call_int". What exactly triggers the > JIT to remove that call? > > The part of the trace involved in the jit call looks like this: > > https://gist.github.com/halgari/31b188e8e4757ccb3218 > > Any ideas?
> > Thanks, > > Timothy > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > From tbaldridge at gmail.com Tue Dec 30 21:44:49 2014 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Tue, 30 Dec 2014 13:44:49 -0700 Subject: [pypy-dev] How do I get better jit_libffi traces? In-Reply-To: References: Message-ID: That did it! My traces are nice and small now. Thanks! Timothy On Tue, Dec 30, 2014 at 12:34 PM, Maciej Fijalkowski wrote: > I would expect cif_description to need to be constant (in your example > it's i126, which I don't know where it comes from, but it's definitely > not a constant). Simple promote should do the trick? > > On Tue, Dec 30, 2014 at 9:22 PM, Timothy Baldridge > wrote: > > I'm trying to optimize the FFI functionality in Pixie, and I'm not sure > how > > to proceed. From what I understand, the JIT generator is able to optimize > > away calls to jit_libffi and simply replace them with bare calls to the c > > functions. However, no matter how I hint or mark things as immutable, I > > seem to always have a call to "jit_ffi_call_int". What exactly triggers > the > > JIT to remove that call? > > > > The part of the trace involved in the jit call looks like this: > > > > https://gist.github.com/halgari/31b188e8e4757ccb3218 > > > > Any ideas? > > > > Thanks, > > > > Timothy > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > -- "One of the main causes of the fall of the Roman Empire was that, lacking zero, they had no way to indicate successful termination of their C programs." (Robert Firth)
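The fix Maciej suggests, promoting `cif_description` so the JIT sees it as a constant, looks roughly like the sketch below. Everything here is hypothetical except `jit.promote`, which is the real RPython hint; `do_ffi_call` and `_ffi_call_backend` are illustration-only names, and the fallback stub lets the snippet run as plain Python outside an RPython translation.

```python
try:
    from rpython.rlib.jit import promote   # the real hint, when translating with RPython
except ImportError:
    def promote(x):
        # Plain-Python stand-in: under the RPython JIT, promote() makes the
        # tracer specialize on the concrete value (emitting a guard on it).
        return x

def _ffi_call_backend(cif_description, funcaddr, exchange_buffer):
    # Hypothetical stand-in for the libffi call; echoes its inputs for the demo.
    return (cif_description, funcaddr, exchange_buffer)

def do_ffi_call(cif_description, funcaddr, exchange_buffer):
    # Once cif_description is promoted (a trace-time constant), the JIT can
    # turn the generic libffi call into a direct call to the C function,
    # which is what makes jit_ffi_call_int disappear from the trace.
    cif_description = promote(cif_description)
    return _ffi_call_backend(cif_description, funcaddr, exchange_buffer)
```

As Timothy's follow-up confirms, adding the promote at the call site is enough to shrink the traces.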