From fijal at genesilico.pl Wed Aug 1 12:28:42 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Wed, 01 Aug 2007 12:28:42 +0200 Subject: [pypy-dev] Stackless vs Erlang benchmarks Message-ID: <46B0605A.6010305@genesilico.pl> http://muharem.wordpress.com/2007/07/31/erlang-vs-stackless-python-a-first-benchmark/ Christian: with a dedication for you :) We should try pypy on this btw. :. From jgustak at gmail.com Thu Aug 2 18:08:01 2007 From: jgustak at gmail.com (Jakub Gustak) Date: Thu, 2 Aug 2007 18:08:01 +0200 Subject: [pypy-dev] scheme interpreter [status report] In-Reply-To: References: Message-ID: A first, a joke: (syntax-rules () ((i-have-spare day ...) (hack 'macros (day ....)))) So we have recursive macros expanding recursively, as they should. I was working on macros with ellipsis almost whole week i. I was changing approach several times. But it looks like flat ellipses (not nested) work well. All most important parts of macros are SyntaxRule.match method, or rather matchr, and W_Transformer.substitute. They raise exception when discover ellipsis to handle it at higher level. matchr is kinda handling nested ellipses, but substitute not yet. That's pretty it. I would like to get nested ellipses working and then start playing with continuations. Wish me luck. Cheers, Jakub Gustak From simon at arrowtheory.com Fri Aug 3 21:14:13 2007 From: simon at arrowtheory.com (Simon Burton) Date: Fri, 3 Aug 2007 12:14:13 -0700 Subject: [pypy-dev] rffi feature request Message-ID: <20070803121413.bd2d992a.simon@arrowtheory.com> I would like to expose some functions as external symbols when i build a .so def foo(i, j): return i+j foo._expose_ = [rffi.INT, rffi.INT] This is basically so I can write cpython extension modules in rpython. (and manually doing ref counting (etc.) on the cpython objects.) Simon. From simon at arrowtheory.com Sun Aug 5 03:14:03 2007 From: simon at arrowtheory.com (Simon Burton) Date: Sat, 4 Aug 2007 18:14:03 -0700 Subject: [pypy-dev] rffi feature request In-Reply-To: <20070803121413.bd2d992a.simon@arrowtheory.com> References: <20070803121413.bd2d992a.simon@arrowtheory.com> Message-ID: <20070804181403.f15cdbb6.simon@arrowtheory.com> On Fri, 3 Aug 2007 12:14:13 -0700 Simon Burton wrote: > > I would like to expose some functions as external > symbols when i build a .so > > def foo(i, j): > return i+j > > foo._expose_ = [rffi.INT, rffi.INT] It seems like this could also enable a plugin system for rpython, and for example, (c or rpython) extension modules for the PyPy interpreter. Simon. From cfbolz at gmx.de Sun Aug 5 22:00:43 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Sun, 05 Aug 2007 22:00:43 +0200 Subject: [pypy-dev] Stackless vs Erlang benchmarks In-Reply-To: <46B0605A.6010305@genesilico.pl> References: <46B0605A.6010305@genesilico.pl> Message-ID: <46B62C6B.5040306@gmx.de> Hi Maciek Maciek Fijalkowski wrote: > http://muharem.wordpress.com/2007/07/31/erlang-vs-stackless-python-a-first-benchmark/ > > Christian: with a dedication for you :) > > We should try pypy on this btw. seems a bit meaningless, given that one of erlang's most important strengths is the possibility of using it to transparently across multiple processes and especially multiple machines. 
Cheers, Carl Friedrich From paul.degrandis at gmail.com Tue Aug 7 03:46:28 2007 From: paul.degrandis at gmail.com (Paul deGrandis) Date: Mon, 6 Aug 2007 21:46:28 -0400 Subject: [pypy-dev] Trouble with ooparse_int/ooparse_float Message-ID: <9c0bb8a00708061846m265d5f30q7f47eac4442242e1@mail.gmail.com> Anto, Niko, and all, I've been trying to get some of the string and float tests to pass. The trouble I'm running into is that ooparse_float doesn't know how to parse ll_str_0, but for the life of me can't find where I need to be looking or where I should override the ooparse_float method. I feel like it should go in my opcodes.py file (translator/jvm). Also, one question I had about long conversions, in translator/jvm/test/test_float.py:22, there is a test for long conversion. Tracing the results shows I get the correct answer for the conversion, but my result is not an r_longlong, the tests returns a Java Long. Any tips or hints? Regards, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Tue Aug 7 14:09:26 2007 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 07 Aug 2007 14:09:26 +0200 Subject: [pypy-dev] Trouble with ooparse_int/ooparse_float In-Reply-To: <9c0bb8a00708061846m265d5f30q7f47eac4442242e1@mail.gmail.com> References: <9c0bb8a00708061846m265d5f30q7f47eac4442242e1@mail.gmail.com> Message-ID: <46B860F6.9030306@gmail.com> Hi Paul! Paul deGrandis wrote: > Anto, Niko, and all, > > I've been trying to get some of the string and float tests to pass. The > trouble I'm running into is that ooparse_float doesn't know how to parse > ll_str_0, but for the life of me can't find where I need to be looking > or where I should override the ooparse_float method. I feel like it > should go in my opcodes.py file (translator/jvm). I'm not sure to understand the question. ooparse_float is an opcode which converts strings to floats. To implement it, you must add the corresponding definition to translator/jvm/opcodes.py; the simplest thing to do is probably to call java.lang.Double.parseDouble(); I don't know the exact syntax to use, but I guess Niko could help here :-). > Also, one question I had about long conversions, in > translator/jvm/test/test_float.py:22, there is a test for long > conversion. Tracing the results shows I get the correct answer for the > conversion, but my result is not an r_longlong, the tests returns a Java > Long. Any tips or hints? This happens because the same tests are used to test both the llinterpreter and the backends: when run in the llinterpreter, the return type is effectively r_longlong and the test runs fine, but when translated by a backend such as cli or jvm, the result is simply printed to stdout, without any information about the type: thus, there is no way to test if the result was effectively a r_longlong. Look at the source of the test, in rpython/test/test_rfloat.py:69: you see that the type-checking is done by calling the is_of_type method; now, look how the CLI test framerwork implements this method, in cli/test/runtest.py:299: as you can see, it simply returns always true, because it has no chance to test it. What you need to do is simply to do the same in jvm/test/runtest.py. 
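For example, a minimal sketch of it could look like this (I'm assuming the JVM test wrapper mirrors the CLI one, so the exact class and module it belongs on may differ):

    # in jvm/test/runtest.py, on the wrapper object that holds the test result
    def is_of_type(self, t):
        # as in the CLI runtest: the translated program only prints the value
        # to stdout, so the static type cannot really be checked here anymore
        return True
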
ciao Anto From fijal at genesilico.pl Tue Aug 7 14:15:48 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Tue, 07 Aug 2007 14:15:48 +0200 Subject: [pypy-dev] rffi feature request In-Reply-To: <20070804181403.f15cdbb6.simon@arrowtheory.com> References: <20070803121413.bd2d992a.simon@arrowtheory.com> <20070804181403.f15cdbb6.simon@arrowtheory.com> Message-ID: <46B86274.2040401@genesilico.pl> Simon Burton wrote: > On Fri, 3 Aug 2007 12:14:13 -0700 > Simon Burton wrote: > > >> I would like to expose some functions as external >> symbols when i build a .so >> >> def foo(i, j): >> return i+j >> >> foo._expose_ = [rffi.INT, rffi.INT] >> > > It seems like this could also enable a plugin system for rpython, > and for example, (c or rpython) extension modules for the PyPy interpreter. > > Simon. > I kind of don't understand what are you trying to do. Could you explain in a bit more detail? (Ie are you trying to have rffi-rffi bridge between interpreter and compiled module?) Why not fix extcompiler instead? :. From simon at arrowtheory.com Wed Aug 8 18:14:40 2007 From: simon at arrowtheory.com (Simon Burton) Date: Wed, 8 Aug 2007 09:14:40 -0700 Subject: [pypy-dev] rffi feature request In-Reply-To: <46B86274.2040401@genesilico.pl> References: <20070803121413.bd2d992a.simon@arrowtheory.com> <20070804181403.f15cdbb6.simon@arrowtheory.com> <46B86274.2040401@genesilico.pl> Message-ID: <20070808091440.f8e06de1.simon@arrowtheory.com> On Tue, 07 Aug 2007 14:15:48 +0200 Maciek Fijalkowski wrote: > Simon Burton wrote: > > On Fri, 3 Aug 2007 12:14:13 -0700 > > Simon Burton wrote: > > > > > >> I would like to expose some functions as external > >> symbols when i build a .so > >> > >> def foo(i, j): > >> return i+j > >> > >> foo._expose_ = [rffi.INT, rffi.INT] > >> > > > > It seems like this could also enable a plugin system for rpython, > > and for example, (c or rpython) extension modules for the PyPy interpreter. > > > > Simon. > > > I kind of don't understand what are you trying to do. Could you explain > in a bit more detail? (Ie are you trying to have rffi-rffi bridge > between interpreter and compiled module?) Why not fix extcompiler instead? > > :. > well, the above code would produce: extern int foo(int i, int j) { return i+j; } (and perhaps an accompanying .h file) thereby providing an interface for other C programs. This is rffi producing rather than consuming a C interface. Simon. From jgustak at gmail.com Thu Aug 9 21:30:41 2007 From: jgustak at gmail.com (Jakub Gustak) Date: Thu, 9 Aug 2007 21:30:41 +0200 Subject: [pypy-dev] scheme interpreter [status report] In-Reply-To: References: Message-ID: Macros looks like working. One thing to be added: they are acting like first-class objects, but should not. Right now I am working on continuations. First a puzzle: What will return evaluation of s-expression like this: (call/cc call/cc) Or what will be bound to k after this s-expression: (define k (call/cc call/cc)) After some asking on #pypy I decided to create own implementation of continuations, and not try to use stackless. The reason is that coroutines are one-shot. So I created execution stack simulation, or rather stack for continuation frames. Right now not every macro uses it, so not every one is continuation-safe. If macro has defined method continue_tr it should get along with continuations well. Another issue: this implementations is stack-only so it introduce lot of overhead both on capturing and throwing continuations. 
And one more: every W_Procedure is continuations-safe but VERY slow, even when no capture occurs. I'd rather would like to think about how to throw continuations more generally, and not have to implement continue_tr for every W_Callable, than implementing capture differently. Or maybe try totally different approach e.g write every call in CPS, but it probably would require way too much changes right now. Cheers, Jakub Gustak From overminddl1 at gmail.com Fri Aug 10 03:21:21 2007 From: overminddl1 at gmail.com (OvermindDL1) Date: Thu, 9 Aug 2007 19:21:21 -0600 Subject: [pypy-dev] Stackless vs Erlang benchmarks In-Reply-To: <46B62C6B.5040306@gmx.de> References: <46B0605A.6010305@genesilico.pl> <46B62C6B.5040306@gmx.de> Message-ID: <3f49a9f40708091821u16f1055esb0777c2df4e1d4ac@mail.gmail.com> I made a library not long ago for stackless cpython meant to partially emulate that aspect of Erlang (called the library Pylang aptly enough, no I have no website, mostly personal use, I have never seen anyone in the community actively interested in such things from my Python experience, although if anyone voices opinions here then I could be convinced to release what I have). But you write your actors as if they are mini programs with their own even loops and so forth. They should not communicate by direct interaction of objects (unless you know beyond any doubt they will be in the same process, still better to just purely use message passing as I do with it) but rather strictly by message passing. Here is one of my tests that show the event loop and so forth (this test tests the message loop, does not show PID usage, network interaction, or anything else): _______________________________________________________ from ..core import getMessage # Yes, yes, I know, castrate me now # for using relative imports, this file is pylang/tests/test0.py and I do # need to do some file renaming and class shuffling if I release this from ..nameserver import NameServer # If I release, I need to rename # this as well since it does far more then the original design called for # This vars and this class are just to make ease of message passing # in this first test more simple, it is not necessary though TEST0_CONFIRMATION=0 TEST0_TESTSUBROUTINE=1 TEST0_ONLYSUBROUTINE=2 TEST0_EXIT=-1 class test0Message(object): def __init__(self, type): self.type=type def __str__(self): return "Message type: %i" % (self.type) def test0SubRoutine(): print "\t 0:Subroutine entered, waiting for signal only for me" e=getMessage(lambda e: isinstance(e, test0Message) and e.type==TEST0_ONLYSUBROUTINE) print "\t 0:Subroutine message receieved (should be %i):"%(TEST0_ONLYSUBROUTINE), e print "\t 0:Exiting subroutine" def test0Main(): while True: e=getMessage(timeout=1) if e is None: print "\t0:Timeout receieved" elif not isinstance(e, test0Message): print "\t0:Unknown message receieved, not of this test type" elif e.type==TEST0_CONFIRMATION: print "\t0:Confirmation received" elif e.type==TEST0_TESTSUBROUTINE: print "\t0:Testing subroutine" test0SubRoutine() elif e.type==TEST0_ONLYSUBROUTINE: print "0:ERROR: Only Subroutine message received when not in a callback" elif e.type==TEST0_EXIT: print "\t0:Exit request received, exiting process" return else: print "\t0:Unknown message type received, either need to set a condition to only get what you want, or dump this:", e def test0Generator(p): global exitScheduler print "Generator started, sending confirmation" p.sendMessage(test0Message(TEST0_CONFIRMATION)) Sleep(0.01) print "Generator sleeping for 1.5 
seconds" Sleep(1.5) print "Sending test subroutine" p.sendMessage(test0Message(TEST0_TESTSUBROUTINE)) Sleep(0.01) print "Sending confirmation" p.sendMessage(test0Message(TEST0_CONFIRMATION)) Sleep(0.01) print "Sending only subroutine" p.sendMessage(test0Message(TEST0_ONLYSUBROUTINE)) Sleep(0.01) print "Sending confirmation" p.sendMessage(test0Message(TEST0_CONFIRMATION)) Sleep(0.01) print "Sending untyped test0Message" p.sendMessage(test0Message(10)) Sleep(0.01) print "Sending exit message" p.sendMessage(test0Message(TEST0_EXIT)) Sleep(0.01) print "Generator exiting" NameServer.__nameserver__.shutdown() # This sends a close message to all open # Processes and continues for a bit, if they have not all died by the timeout then they # are sent the taskletkill exception, which will kill them regardless. Also shuts down # the server and other such things def runTest0(): print "\t0:Testing local process and local message communication" NameServer(serverNode=NullServer()) # NullServer does not host squat. You can # create servernodes to handle tcp, udp, some other communication type, equiv to # a driver in Erlang, everything on the mesh network needs to use the same thing # without using a gateway inside the mesh of some sort to connect different ones p=Process(test0Main)() Process(test0Generator)(p) # Not the proper way to spawn processes now, # should call spawn(), but this was an early testcase that is still useful as-is NameServer.__nameserver__.run() _______________________________________________________ And it works as expected. The testcase that just tests network setup and destruction (no message passing or anything, just creation, verification, and destruction) is this: _______________________________________________________ from ..nameserver import NameServer from ..node import TCPServer from ..core import Process, getMessage def tempKillAll(waitTime=50, ns=None): if not ns: ns = NameServer.__nameserver__ Sleep(waitTime%10) for i in xrange(int(waitTime)-(waitTime%10), 0, -10): print "%i seconds until death" %(i) Sleep(10) print "dieing" ns.shutdown() def testServer(timeout=30, cookie='nocookie', localname='myhostA'): NameServer(serverNode=TCPServer(localname=localname, cookie=cookie, host='', port=42586, backlog=5)) Process(tempKillAll)(timeout) NameServer.__nameserver__.run() def testClient(timeout=3, cookie='nocookie', localname='myhostB', remoteURL='pylang://myhostA:42586/'): # no the port in the url is not # necessary as it is the default port used by this anyway NameServer(serverNode=TCPServer(localname=localname, cookie=cookie, host='', port=42587, backlog=5)) Process(tempKillAll)(timeout) Sleep(0.1) NameServer.__nameserver__.createNewNode(remoteURL) NameServer.__nameserver__.run() _______________________________________________________ The URL in full would actually be "pylang://myhostB:nocookie at myhostA:42586/". But if any are omitted then it uses the local hostname, the cookie, the remote host, the default port, etc... To connect to a remote named process (not an anonymous process) then you could get its PID with: PID("pylang://myhostB:nocookie at myhostA:42586/someNamedProcess") and if the connection to the remote node is not made then it will be made. If you have a Node pointer to an already connected node then you can just call a method on it to get a PID to a remote named process. 
If you send a message through a PID (the main way of sending messages) then they may or may not arrive because, although you can get a PID to a named process for example, the process may not actually exist. You can query a process to see if it exists which involves calling a ping method on the PID which will actually perform a test on the remote node (which may be on the same process/node, or a networked computer or what-not) and send back a specially crafted message saying pong or pang for success or fail with the PID as the param. A PID can always be converted to a pylang URL and vice-versa (as even anonymous processes have a guid generated for them). There can also be a type param of the url (example: "pylang://myhostB:nocookie at myhostA:42586;tcp/", then 'tcp' would be the type, standard url syntax) and if so then it will try to create a connection to the remote node using that connection (tcp, udp, ssl, pipe, raknet, whateverisMade) instead of the default registered to the nameserver by the servernode param, useful for connecting to a mesh that uses a different server type.. Processes can choose to save themselves for transmission across a network, save to a db or file, etc..., and they will retain their GUID. When they are serialized they are not serialized as a whole but they send a structure with the method call which is returned as an initial message to a process when it is restarted so it can reconstruct itself from it, the guid is always included and handled transparently. Thus, a process can be 'pickled' quite easilly, but it must be built for it (not hard to do, just needs to be done), reason being is that although stackless allows full tasklets to be pickled, some process may have some rather... adverse effects to being pickled (like one that handled any form of resources, db, file, etc...) hence why I have it do it this other, more 'erlangish' method. The message loop function (I do apologize for jumping around in thoughts...) is not just a standard pull the first off the queue like in a normal OS message loop, but has a signature of: "message getMessage(conditionFn=lambda e:True, timeout=-1)". When getMessage is called then it starts testing the messages in the queue for the process it was called from in order they were received, testing each with the conditionFn (allows for nice complex testing for something like "lambda e: isinstance(e, test0Message) and e.type==TEST0_ONLYSUBROUTINE" or "lambda e: isinstance(e, tuple) and len(e)==2 and e[0]==14" or whatever to simple things like just returning true to get the next top thing as is the default to passing in another function. If it good to get and empty out the queue on occasion to make sure it does not fill up with messages you do not want, slowing it down slightly. The other param is timeout (in seconds). If timeout equals -1 then it will wait forever for a matching message. If timeout equals 0 then it just test and returns a message if one matches that it already has, or it returns None (due to zero timeout). If greater then zero then it tests for a matching message, if none, then it waits up to the timeout for a message, if one arrives by that time then it returns it immediately, if not then it returns None when it reaches the timeout. It is not a spin-wait nor does it delay receiving the message or anything of so forth. 
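To make those timeout rules concrete, a tiny usage sketch (illustrative only; the names simply follow the test code above, since the library is not released):

    def worker():
        while True:
            # wait up to 5 seconds for a 2-tuple tagged with 14
            e = getMessage(lambda e: isinstance(e, tuple) and len(e) == 2 and e[0] == 14,
                           timeout=5)
            if e is None:
                print "nothing matching arrived within 5 seconds"
                continue
            print "got work item:", e[1]
            # timeout=0 never blocks: it only returns something already queued,
            # handy for draining the queue on occasion as mentioned above
            leftover = getMessage(timeout=0)
            while leftover is not None:
                print "draining unwanted message:", leftover
                leftover = getMessage(timeout=0)
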
The nameserver runs a pure tasklet (no message queue, no nothing like that) that just checks a list, if the top-most item on it is past the time delay then it sends a message to it and removes it then tests the next and repeats, else it inserts itself back into the stackless queue and waits for the next cycle to come by. A Process, when it has a timeout but no messages match, then it adds itself to the afore-mentioned list as a tuple of (timeTheTimeoutOccurs, processWaitChannel) then sorts it then blocks on its wait channel. If a message arrives at the process it tests the wait channel to see if it is blocked, if so it returns the message on that channel so it can then be tested, if it does not match then the process re-blocks, if it does match then it removes itself from the wait list and returns the message. If an exception occurs (taskletkill for example), it still properly removes itself from the list before it re-raises. Many other things done, but those are the major parts. All listed above is either working (near all of it) but may still need to be refined in its interface and so forth, or is partially done (pickling is more... hands-on currently, the interface described above is the new one I am partially moved to). The purpose of this was just to describe that erlangs strength can also be a strength for python as well. I really would like to move this library to PyPy when PyPy becomes usable. I have written that linked test above as well (not in this library, but pure stackless, a vastly different way then the article did though) and erlang still beat stackless cpython, hopefully pypy will fare better. On 8/5/07, Carl Friedrich Bolz wrote: > Hi Maciek > > Maciek Fijalkowski wrote: > > http://muharem.wordpress.com/2007/07/31/erlang-vs-stackless-python-a-first-benchmark/ > > > > Christian: with a dedication for you :) > > > > We should try pypy on this btw. > > seems a bit meaningless, given that one of erlang's most important > strengths is the possibility of using it to transparently across > multiple processes and especially multiple machines. > > Cheers, > > Carl Friedrich > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From fijal at genesilico.pl Fri Aug 10 11:53:25 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Fri, 10 Aug 2007 11:53:25 +0200 Subject: [pypy-dev] rffi feature request In-Reply-To: <20070808091440.f8e06de1.simon@arrowtheory.com> References: <20070803121413.bd2d992a.simon@arrowtheory.com> <20070804181403.f15cdbb6.simon@arrowtheory.com> <46B86274.2040401@genesilico.pl> <20070808091440.f8e06de1.simon@arrowtheory.com> Message-ID: <46BC3595.10606@genesilico.pl> > well, the above code would produce: > > extern int foo(int i, int j) > { > return i+j; > } > > (and perhaps an accompanying .h file) > > thereby providing an interface for other C programs. > This is rffi producing rather than consuming a C interface. > > Simon. > Hey Simon. It's doable, and not even hard, but it requires work. I kind of don't see the use-case for me right now (but I perfectly see the use-case in general), so it's a bit unlikely that I'll try to implement it, although I might help you a bit. Cheers, fijal :. 
From anto.cuni at gmail.com Sat Aug 11 12:02:42 2007 From: anto.cuni at gmail.com (Antonio Cuni) Date: Sat, 11 Aug 2007 12:02:42 +0200 Subject: [pypy-dev] rffi feature request In-Reply-To: <20070808091440.f8e06de1.simon@arrowtheory.com> References: <20070803121413.bd2d992a.simon@arrowtheory.com> <20070804181403.f15cdbb6.simon@arrowtheory.com> <46B86274.2040401@genesilico.pl> <20070808091440.f8e06de1.simon@arrowtheory.com> Message-ID: <46BD8942.2030309@gmail.com> Simon Burton wrote: >>>> I would like to expose some functions as external >>>> symbols when i build a .so >>>> >>>> def foo(i, j): >>>> return i+j >>>> >>>> foo._expose_ = [rffi.INT, rffi.INT] >>>> > well, the above code would produce: > > extern int foo(int i, int j) > { > return i+j; > } > > (and perhaps an accompanying .h file) > > thereby providing an interface for other C programs. > This is rffi producing rather than consuming a C interface. I think that what you need is similar to what carbonpython does: basically, carbonpython is a frontend for the translation toolchain that takes all the exposed functions/classes in an input file and produce a .NET libraries to be reused by other programs. Functions are exposed with the @export decorator: @export(int, int) def foo(a, b): return a+b The frontend creates a TranslationDriver objects, but instead of calling driver.setup() it calls driver.setup_library(), which allows to pass more than one entry point. I think all this could be reused for your needs. Then, the next step is to teach the backend how to deal with libraries; for genc, this would probably mean not to mangle functions names, to create a companying .h and to modify the Makefile to produce a .so instead of an executable. About name mangling: one possible solution could be to mangle the names as now and put some #define in the .h to allow the programmer to use non-mangled names. Probably this is the less invasive solution. ciao Anto From jason at jasondavies.com Mon Aug 13 00:29:29 2007 From: jason at jasondavies.com (Jason Davies) Date: Sun, 12 Aug 2007 23:29:29 +0100 Subject: [pypy-dev] Lua Frontend Message-ID: <46BF89C9.8000309@jasondavies.com> Hello, A couple of years ago I wrote a JIT compiler for Lua [1] for my final-year dissertation. Looking through the project-ideas page [2] I noticed there might be interest in writing Lua interpreter using PyPy. I'd be interested in doing this as a personal project. How would I go about getting started? 
Jason [1] http://www.lua.org/ [2] http://codespeak.net/pypy/dist/pypy/doc/project-ideas.html#write-a-new-front-end From gbowyer at fastmail.co.uk Mon Aug 13 16:49:41 2007 From: gbowyer at fastmail.co.uk (Greg Bowyer) Date: Mon, 13 Aug 2007 15:49:41 +0100 Subject: [pypy-dev] Silly idea - Pypy on Xen Message-ID: Taking a look at this http://www.informationweek.com/news/showArticle.jhtml?articleID=201311330&pgno=1&queryText= I wonder if the same could be done for pypy (this is just high level thinking, nothing serious) From fijal at genesilico.pl Mon Aug 13 17:21:26 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Mon, 13 Aug 2007 17:21:26 +0200 Subject: [pypy-dev] Silly idea - Pypy on Xen In-Reply-To: References: Message-ID: <46C076F6.1090707@genesilico.pl> Greg Bowyer wrote: > Taking a look at this > http://www.informationweek.com/news/showArticle.jhtml?articleID=201311330&pgno=1&queryText= > > I wonder if the same could be done for pypy (this is just high level > thinking, nothing serious) > > Look at unununium project (dead for quite a while), they tried to have a python shell as a basic thing for an operating system and had quite a lot of nice concepts. Right now obvious approach would be to target some virtualised environment instead of hardware itself, so it's not something very new. There are questions what abstraction do you provide for hard drive, etc. etc. What pypy has unique here is RPython, which can easily map low level details onto high level. Just few thoughts Cheers, fijal :. From cfbolz at gmx.de Mon Aug 13 17:23:36 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Mon, 13 Aug 2007 17:23:36 +0200 Subject: [pypy-dev] Silly idea - Pypy on Xen In-Reply-To: <46C076F6.1090707@genesilico.pl> References: <46C076F6.1090707@genesilico.pl> Message-ID: <46C07778.6050009@gmx.de> Maciek Fijalkowski wrote: > Greg Bowyer wrote: >> Taking a look at this >> http://www.informationweek.com/news/showArticle.jhtml?articleID=201311330&pgno=1&queryText= >> >> I wonder if the same could be done for pypy (this is just high level >> thinking, nothing serious) >> >> > Look at unununium project (dead for quite a while), they tried to have a > python shell as a basic thing for an operating system and had quite a > lot of nice concepts. Right now obvious approach would be to target some > virtualised environment instead of hardware itself, so it's not > something very new. There are questions what abstraction do you provide > for hard drive, etc. etc. > > What pypy has unique here is RPython, which can easily map low level > details onto high level. Yes, I kind of agree. IF you want to write an OS in Python then it probably makes most sense to target some virtualized environment and you might want to use RPython. Of course it depends very much what you actually want :-). 
Cheers, Carl Friedrich From fijal at genesilico.pl Mon Aug 13 17:28:46 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Mon, 13 Aug 2007 17:28:46 +0200 Subject: [pypy-dev] Silly idea - Pypy on Xen In-Reply-To: <46C07778.6050009@gmx.de> References: <46C076F6.1090707@genesilico.pl> <46C07778.6050009@gmx.de> Message-ID: <46C078AE.50508@genesilico.pl> Carl Friedrich Bolz wrote: > Maciek Fijalkowski wrote: >> Greg Bowyer wrote: >>> Taking a look at this >>> http://www.informationweek.com/news/showArticle.jhtml?articleID=201311330&pgno=1&queryText= >>> >>> >>> I wonder if the same could be done for pypy (this is just high level >>> thinking, nothing serious) >>> >>> >> Look at unununium project (dead for quite a while), they tried to >> have a python shell as a basic thing for an operating system and had >> quite a lot of nice concepts. Right now obvious approach would be to >> target some virtualised environment instead of hardware itself, so >> it's not something very new. There are questions what abstraction do >> you provide for hard drive, etc. etc. >> >> What pypy has unique here is RPython, which can easily map low level >> details onto high level. > > Yes, I kind of agree. IF you want to write an OS in Python then it > probably makes most sense to target some virtualized environment and > you might want to use RPython. Of course it depends very much what you > actually want :-). > > Cheers, > > Carl Friedrich > > :. > Come on, OS is an outdated concept. Cheers, fijal :. From gbowyer at fastmail.co.uk Mon Aug 13 17:54:18 2007 From: gbowyer at fastmail.co.uk (Greg Bowyer) Date: Mon, 13 Aug 2007 16:54:18 +0100 Subject: [pypy-dev] Silly idea - Pypy on Xen In-Reply-To: <46C078AE.50508@genesilico.pl> References: <46C076F6.1090707@genesilico.pl> <46C07778.6050009@gmx.de> <46C078AE.50508@genesilico.pl> Message-ID: Maciek Fijalkowski wrote: > Carl Friedrich Bolz wrote: >> Maciek Fijalkowski wrote: >>> Greg Bowyer wrote: >>>> Taking a look at this >>>> http://www.informationweek.com/news/showArticle.jhtml?articleID=201311330&pgno=1&queryText= >>>> >>>> >>>> I wonder if the same could be done for pypy (this is just high level >>>> thinking, nothing serious) >>>> >>>> >>> Look at unununium project (dead for quite a while), they tried to >>> have a python shell as a basic thing for an operating system and had >>> quite a lot of nice concepts. Right now obvious approach would be to >>> target some virtualised environment instead of hardware itself, so >>> it's not something very new. There are questions what abstraction do >>> you provide for hard drive, etc. etc. >>> >>> What pypy has unique here is RPython, which can easily map low level >>> details onto high level. >> Yes, I kind of agree. IF you want to write an OS in Python then it >> probably makes most sense to target some virtualized environment and >> you might want to use RPython. Of course it depends very much what you >> actually want :-). >> >> Cheers, >> >> Carl Friedrich >> >> :. >> > Come on, OS is an outdated concept. > > Cheers, > fijal > > > :. 
> > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > I was thinking the selling point might be RPython From cfbolz at gmx.de Mon Aug 13 18:06:50 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Mon, 13 Aug 2007 18:06:50 +0200 Subject: [pypy-dev] Silly idea - Pypy on Xen In-Reply-To: <46C078AE.50508@genesilico.pl> References: <46C076F6.1090707@genesilico.pl> <46C07778.6050009@gmx.de> <46C078AE.50508@genesilico.pl> Message-ID: <46C0819A.6000302@gmx.de> Maciek Fijalkowski wrote: >> Yes, I kind of agree. IF you want to write an OS in Python then it >> probably makes most sense to target some virtualized environment and >> you might want to use RPython. Of course it depends very much what you >> actually want :-). >> >> Cheers, >> >> Carl Friedrich >> >> :. >> > Come on, OS is an outdated concept. Come on, you need a name to call the "thing-that-cares-for-your-resources-and-does-stuff" :-). Cheers, Carl Friedrich From cfbolz at gmx.de Mon Aug 13 18:17:21 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Mon, 13 Aug 2007 18:17:21 +0200 Subject: [pypy-dev] Lua Frontend In-Reply-To: <46BF89C9.8000309@jasondavies.com> References: <46BF89C9.8000309@jasondavies.com> Message-ID: <46C08411.1080902@gmx.de> Hi Jason, Jason Davies wrote: > A couple of years ago I wrote a JIT compiler for Lua [1] for my > final-year dissertation. Is this on the web somewhere? How did it work? > Looking through the project-ideas page [2] I > noticed there might be interest in writing Lua interpreter using PyPy. I think it would be interesting, yes. On the other hand, to me it is unclear whether this Lua interpreter would get any users, since the typical Lua users embed the Lua interpreter into a bigger project, which wouldn't be easily possible with a PyPy-based Lua-VM. > I'd be interested in doing this as a personal project. How would I go > about getting started? I would try to write an interpreter mirroring Lua's bytecode. This way you don't have to care for the tedious work of parsing and bytecode-compiling at first. I guess it is best to start with implementing some of the data structures of Lua, most notably tables (other data structures, like strings and floats are probably simple to implement, they just need to box rpython-level strings and floats). I guess you can have a naive implementation of tables in the beginning, though. Then you can probably start by implementing a simple interpreter for Lua's bytecode (probably by first writing some code that reads in Lua's bytecode and stores it nicely accessible somehow). If you are serious about this project and want to develop it under PyPy's umbrella you should ask for commit rights (e.g. by sending Michael Hudson a mail, micahel at gmail.com ). Cheers, Carl Friedrich From fijal at genesilico.pl Tue Aug 14 18:16:16 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Tue, 14 Aug 2007 18:16:16 +0200 Subject: [pypy-dev] [pypy-svn] r45656 - in pypy/branch/pypy-more-rtti-inprogress: . 
module/posix rlib rpython rpython/lltypesystem rpython/lltypesystem/module rpython/module rpython/module/test rpython/ootypesystem/module rpython/test translator translator/c translator/c/src translator/c/test translator/sandbox translator/sandbox/test In-Reply-To: <20070814161046.E284280E8@code0.codespeak.net> References: <20070814161046.E284280E8@code0.codespeak.net> Message-ID: <46C1D550.8020508@genesilico.pl> arigo at codespeak.net wrote: > For backup purposes, in-progress: convert more of the os module to rtti > and try to get rid of the rllib.ros module by making the native > interfaces RPythonic. This looks quite good in my opinion - seems that > we've finally learned a reasonable way to do external functions. > > Hooray! I would suggest to be consistent in names. Either rename it to rtti or keep using rffi (I know that fonts basically suck when it comes to this :) Seriously - I would like to move a bit rsocket, shall I do this on the branch or on trunk? Cheers, fijal :. From cfbolz at gmx.de Tue Aug 14 21:42:12 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 14 Aug 2007 21:42:12 +0200 Subject: [pypy-dev] Paper about accurate GC in C Message-ID: <46C20594.6050506@gmx.de> Hi all, the following paper describes a variant of the root-finding approach we use for the framework GC: http://citeseer.ist.psu.edu/henderson02accurate.html They use a "shadow stack" as well, but keep all their GCed locals always in it: The tradeoffs are slightly different than ours. They reach performance quite similar to that of the Boehm GC with a simple semispace collector. Chris Lattner told me that this is how LLVM lowers GC operations currently (since no backend directly supports the GC primitives in a better way). Cheers, Carl Friedrich From cfbolz at gmx.de Tue Aug 14 21:53:12 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 14 Aug 2007 21:53:12 +0200 Subject: [pypy-dev] [pypy-svn] r45656 - in pypy/branch/pypy-more-rtti-inprogress: . module/posix rlib rpython rpython/lltypesystem rpython/lltypesystem/module rpython/module rpython/module/test rpython/ootypesystem/module rpython/test translator translator/c translator/c/src translator/c/test translator/sandbox translator/sandbox/test In-Reply-To: <46C1D550.8020508@genesilico.pl> References: <20070814161046.E284280E8@code0.codespeak.net> <46C1D550.8020508@genesilico.pl> Message-ID: <46C20828.3010905@gmx.de> Maciek Fijalkowski wrote: > arigo at codespeak.net wrote: >> For backup purposes, in-progress: convert more of the os module to rtti >> and try to get rid of the rllib.ros module by making the native >> interfaces RPythonic. This looks quite good in my opinion - seems that >> we've finally learned a reasonable way to do external functions. >> >> > Hooray! Seconded: very cool! Thanks to all the people that are working on this (I am looking at you, Maciek :-) ). Cheers, Carl Friedrich From jgustak at gmail.com Fri Aug 17 14:35:20 2007 From: jgustak at gmail.com (Jakub Gustak) Date: Fri, 17 Aug 2007 14:35:20 +0200 Subject: [pypy-dev] scheme interpreter [status report] In-Reply-To: References: Message-ID: This is last scheme status report during SoC. So I will sum things up in the end. > Macros looks like working. > One thing to be added: they are acting like first-class objects, but should not. Found a bug, when ellipsis matched zero objects, fized now. Scheme48 acts differently in some cases. On the other hand in this cases mzScheme works just as expected. In this meaning macros are evil. 
There is no explicit specification how they should behave in r5rs. They look like added to the specification in last minute. Continuations: > Another issue: this implementations is stack-only so it introduce lot > of overhead both on capturing and throwing continuations. > > And one more: every W_Procedure is continuations-safe but VERY slow, > even when no capture occurs. I made them a little faster by decreasing number of operations, but still for every non-tail-call a new instance of ContinuationFrame is created and is stored on stack. On every return it is popped form stack and forgotten. When no capture will happen, it is time and memory waste. > I'd rather would like to think about how to throw continuations more > generally, and not have to implement continue_tr for every W_Callable, > than implementing capture differently. No progress in this direction. Still every W_Callable to be continuations friendly has to use eval_cf instead of eval and must implement continue_tr method. > Or maybe try totally different approach e.g write every call in CPS, > but it probably would require way too much changes right now. Nor here. Summarization: * Interactive interpreter (not RPythonic) lang/scheme/interactive.py * Can be translated using: translation/goal/targetscheme.py PyPy Scheme Interpreter features: * uses rpythonic packrat parser * every "syntax" definitions are implemented * quotations and quasi-quotations * delayed evaluation * proper tail-recursion * hygienic macros * partly working continuations (some macros are not continuations friendly yet). * no dynamic-wind procedure Known issues and "to be added": * lambda does not check if it is called with correct arguments number * macros are acting like first-class objects, though they shouldn't * missing input/output procedures * no support for chars, vectors * strings are supported, but no string-operations procedures * quite a few procedures are missing, but they can be easily added * no call-with-values and values * missing some "library syntax", can be defined using macros What's next? I will probably mysteriously disappear and became grave-digger or hermit. Never touch computer again and live happily ever after ;-). Cheers, Jakub Gustak From fijal at genesilico.pl Fri Aug 17 15:58:43 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Fri, 17 Aug 2007 15:58:43 +0200 Subject: [pypy-dev] scheme interpreter [status report] In-Reply-To: References: Message-ID: <46C5A993.4040901@genesilico.pl> Jakub Gustak wrote: > This is last scheme status report during SoC. So I will sum things up > in the end. > > Summarization: > * Interactive interpreter (not RPythonic) lang/scheme/interactive.py > * Can be translated using: translation/goal/targetscheme.py > > PyPy Scheme Interpreter features: > * uses rpythonic packrat parser > * every "syntax" definitions are implemented > * quotations and quasi-quotations > * delayed evaluation > * proper tail-recursion > * hygienic macros > * partly working continuations (some macros are not continuations friendly yet). > * no dynamic-wind procedure > > > I would say congratulations! (Although I don't know scheme that much to be able to evaluate this, even positively) Cheers, fijal :. 
From arigo at tunes.org Fri Aug 17 20:03:30 2007 From: arigo at tunes.org (Armin Rigo) Date: Fri, 17 Aug 2007 20:03:30 +0200 Subject: [pypy-dev] scheme interpreter [status report] In-Reply-To: References: Message-ID: <20070817180330.GA5301@code0.codespeak.net> Hi Jakub, On Fri, Aug 17, 2007 at 02:35:20PM +0200, Jakub Gustak wrote: > This is last scheme status report during SoC. So I will sum things up > in the end. Thanks for the good work you did during the summer! The result looks good to me, even with the missing features, but we'd really need a Scheme guy to guide us from now on, if we want to make the interpreter more usable. > > I'd rather would like to think about how to throw continuations more > > generally, and not have to implement continue_tr for every W_Callable, > > than implementing capture differently. > > No progress in this direction. > Still every W_Callable to be continuations friendly has to use eval_cf > instead of eval and must implement continue_tr method. > > > Or maybe try totally different approach e.g write every call in CPS, > > but it probably would require way too much changes right now. > > Nor here. Indeed, the current approach allows impressive-looking tests to pass, but the code is quite verbose and inefficient. We can start thinking at some point about using the stackless and/or GC-based cloning operations of RPython as a more orthogonal alternative, but that's an involved topic. > What's next? > I will probably mysteriously disappear and became grave-digger or > hermit. Never touch computer again and live happily ever after ;-). If you do, then farewell! In the event that you're interested to continue, though (:-), here are in summary some possible future directions: * get some Scheme people interested in the project, try to run real Scheme projects * implement more of the missing features * think how to do continuations with stackless or GC-based cloning * also, we could try to integrate pypy/lang/scheme and pypy/interpreter in one of the many possible ways, to offer a combined Python+Scheme environment... A bientot, Armin. From fijal at genesilico.pl Sat Aug 18 13:26:55 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Sat, 18 Aug 2007 13:26:55 +0200 Subject: [pypy-dev] PyPy work plan Message-ID: <46C6D77F.3020807@genesilico.pl> As you probably have observed already, for the last few months pypy is slowly approaching "being usable" state instead of "having more ultra-cool features", as we've got enough of them to be willing to use them. As we're moving towards maintaining pypy as an open source project I would suggest to look around parts of pypy and have list of people who are willing to maintain certain parts. Maintain not in a sense of developing those, but rather accepting/rejecting patches, keeping it up to date with other parts, etc. I would also suggest to move parts with no obvious maintainer somewhere else in svn (branch? tag?) not to confuse people who checkout whole svn repository with parts never used by anyone. This option would make it easier to: * have people entering project, since it would be obvious who to contact for different parts. * maintain whole codebase, since some cleanup changes are pervasive enough to break different parts. * this would also hopefully reduce whole code size. I've also checked in pypy parts (more or less) into http://codespeak.net/svn/pypy/dist/pypy/doc/maintainers.txt feel free to add yourself wherever you like and to modify this list as well What do you think? Cheers, fijal :. 
From pedronis at openendsystems.com Sat Aug 18 17:08:34 2007 From: pedronis at openendsystems.com (Samuele Pedroni) Date: Sat, 18 Aug 2007 17:08:34 +0200 Subject: [pypy-dev] PyPy work plan In-Reply-To: <46C6D77F.3020807@genesilico.pl> References: <46C6D77F.3020807@genesilico.pl> Message-ID: <46C70B72.4050704@openendsystems.com> Maciek Fijalkowski wrote: > > I've also checked in pypy parts (more or less) into > http://codespeak.net/svn/pypy/dist/pypy/doc/maintainers.txt > feel free to add yourself wherever you like and to modify this list as well > I looked at it, didn't really know how to fill it in without being confused or confusing. I removed it. > What do you think? > I think identifying orphaned areas, agree on them and act accordingly is useful. I suppose there are indeed specific areas with specific maintainers, I suppose listing those may be useful. Most of PyPy is too interconnected and demanding for anything but global ownership to make sense tough, at least at this point in time. My check-ins on the 4-5 of August are a case in point. From lac at openend.se Sat Aug 18 17:24:27 2007 From: lac at openend.se (Laura Creighton) Date: Sat, 18 Aug 2007 17:24:27 +0200 Subject: [pypy-dev] PyPy work plan In-Reply-To: Message from Maciek Fijalkowski of "Sat, 18 Aug 2007 13:26:55 +0200." <46C6D77F.3020807@genesilico.pl> References: <46C6D77F.3020807@genesilico.pl> Message-ID: <200708181524.l7IFORYH028243@theraft.openend.se> Hi Simon. I never like to quote somebody's email without informing them. I am about to use ICCARUS (why 2 Cs) as an example of what I am looking for. The rest is the middle of a discusison that you are not expected to understand. Thank you for ICCARUS. -- Laura In a message of Sat, 18 Aug 2007 13:26:55 +0200, Maciek Fijalkowski writes: > >As you probably have observed already, for the last few months pypy >is slowly approaching "being usable" state instead of "having more >ultra-cool >features", as we've got enough of them to be willing to use them. >As we're moving towards maintaining pypy as an open source project I woul >d >suggest to look around parts of pypy and have list >of people who are willing to maintain certain parts. Maintain not in a se >nse >of developing those, but rather accepting/rejecting patches, keeping it u >p >to date with other parts, etc. I would also suggest >to move parts with no obvious maintainer somewhere else in svn >(branch? tag?) not to confuse people who checkout whole svn repository >with parts never used by anyone. This option would make it easier to: > >* have people entering project, since it would be obvious who to > contact for different parts. > >* maintain whole codebase, since some cleanup changes are pervasive > enough to break different parts. > >* this would also hopefully reduce whole code size. > >I've also checked in pypy parts (more or less) into >http://codespeak.net/svn/pypy/dist/pypy/doc/maintainers.txt >feel free to add yourself wherever you like and to modify this list as we >ll > >What do you think? > >Cheers, >fijal As mentioned just in irc, I am unhappy with the dividing up of pypy into pieces with maintainers. That way the people who envision the small can dominate those who envision the large, and one of the strengths of this project is the sheer visionary power of some of its members. on the other hand, some tidying would be nice. and we sure as hell could be friendlier for newbies. I am now looking at code visualisation tools, and time visualisation tools. 
I get the idea that if we could visualise things better, we could undersgtand who knows what about what, and what else, then we would need less in the way of 'task management'. for static things I am thinking of: http://simile.mit.edu/timeline/ or everything at the top level simile.mit.edu domain. But then yesterday I read this on the pygame mailing list. ..... From: "Simon Wittber" To: pygame-users at seul.org Subject: [pygame] 3D Social Network Visualisation A few evening ago we demonstrated ICCARUS at a web party. It is a piece of software which provides a visualisation of social networks in three dimensions :-) It uses pygame and a Pyrexed OpenGL lib called GFX. You can see screencast here: http://scouta.com/faves/8raO17jT8Y3/ and a vid of the presentation here: http://blog.viddler.com/cdevroe/webjam-iccarus/ Thanks to python and pygame, this project was conceived and implemented over 3 days. ...... I thought this was way cool. Especially the 3 days part. I want our code base to work like this. Who knows what? stuff that everybody knows, stuff that carl freidrich and armin and maciek and samuele know, stuff that nobody knows but samuele, and even -- 'stuff that nobody knows' somebody did at one time but we all have forgotten by now. This is a half-baked proposal for code base management, normally I would not mention this at all, but some of us are getting grumpy, and I thought that other ideas, however weird might be welcome. Laura From lac at openend.se Sat Aug 18 17:30:01 2007 From: lac at openend.se (Laura Creighton) Date: Sat, 18 Aug 2007 17:30:01 +0200 Subject: [pypy-dev] scheme interpreter [status report] In-Reply-To: Message from Armin Rigo of "Fri, 17 Aug 2007 20:03:30 +0200." <20070817180330.GA5301@code0.codespeak.net> References: <20070817180330.GA5301@code0.codespeak.net> Message-ID: <200708181530.l7IFU1k8029132@theraft.openend.se> >> What's next? >> I will probably mysteriously disappear and became grave-digger or >> hermit. Never touch computer again and live happily ever after ;-). This sort of peace is good for 2 weeks, a month, even a year in severe burn-out cases. UNless you plan on being struck by lightning on your vacation in the hermitage or graveyard, I would expect the technical world to grab its claw around you once again. Know that you are welcome here whenever this happens. :-) Laura From santagada at gmail.com Sun Aug 19 00:22:22 2007 From: santagada at gmail.com (Leonardo Santagada) Date: Sat, 18 Aug 2007 19:22:22 -0300 Subject: [pypy-dev] Javascript Frontend (and mybe some unrelated future plans) Message-ID: I've been on a cave for some time, but I want to get back to work more on pypy but I don't know what to do with the javascript interpreter. While I was working with it it was getting better but almost all the architecture work i've done alone and I'm not happy with the results. The code looks kinda ugly and I don't really know how to get it into a good shape. Maybe I need someone to take a look at it. Also there is some stuff that I don't know what to do about like automatic semicolon insertion and regexes. And them there is the stuff that I really think is essential but that is missing wich is a simple RPython API to make native objects to javascript (that we could use to implement DOM, Regexes and XMLHttpRequests using rsockets). All those are internal architectural problems, them there are the external ones like, who would like to use our javascript interpreter? 
Web javascript guys for testing and to make their tools using js, or people wanting to know how to make an interpreter on top of pypy (people always says that a scheme interpreter would be easier for that). So those are my troubles with the js interpreter, as it is now I don't know what it adds to the pypy project. But I'm always a mail away and I still track the development of pypy and can maintain it. I still want to work more with pypy, just don't know what to do with the js frontend now. Maybe I should take another part of pypy to work with, I want to know what you guys have to say about all that i've written. As always I want to express my gratitude to everyone as you all really helped me a lot. I learned so much working on pypy, I don't know what to say but thanks. best wishes, -- Leonardo Santagada From arigo at tunes.org Sun Aug 19 12:33:09 2007 From: arigo at tunes.org (Armin Rigo) Date: Sun, 19 Aug 2007 12:33:09 +0200 Subject: [pypy-dev] pypy-dev@codespeak.net Message-ID: <20070819103308.GA28257@code0.codespeak.net> Hi all, Those that follow IRC already know it, but it's worth being announced a bit more widely: I've been working on a form of sandboxing for RPython programs, which now seems to work for the whole of PyPy. It's "sandboxing" as in "full virtualization", but done in normal C with no OS support at all. It's a two-processes model: we can translate PyPy to a special "pypy-c-sandbox" executable, which is safe in the sense that it doesn't do any library or system call - instead, whenever it would like to perform such an operation, it marshals the operation name and the arguments to its stdout and it waits for the marshalled result on its stdin. This pypy-c-sandbox process is meant to be run by an outer "controller" program that answers to these operation requests. The pypy-c-sandbox program is obtained by adding a transformation during translation, which turns all RPython-level external function calls into stubs that do the marshalling/waiting/unmarshalling. An attacker that tries to escape the sandbox is stuck within a C program that contains no external function call at all except to write to stdout and read from stdin. (It's still attackable, e.g. by exploiting segfault-like situations, but as far as I can tell - unlike CPython - any RPython program is really robust against this kind of attack, at least if we enable the extra checks that all RPython list and string indexing are in range. Alternatively, on Linux there is a lightweight OS-level sandboxing technique available by default - google for 'seccomp'.) The outer controller is a plain Python program that can run in CPython or a regular PyPy. It can perform any virtualization it likes, by giving the subprocess any custom view on its world. For example, while the subprocess thinks it's using file handles, in reality the numbers are created by the controller process and so they need not be (and probably should not be) real OS-level file handles at all. In the demo controller I've implemented there is simply a mapping from numbers to file-like objects. The controller answers to the "os_open" operation by translating the requested path to some file or file-like object in some virtual and completely custom directory hierarchy. The file-like object is put in the mapping with any unused number >= 3 as a key, and the latter is returned to the subprocess. The "os_read" operation works by mapping the pseudo file handle given by the subprocess back to a file-like object in the controller, and reading from the file-like object. 
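In pseudo-code, the controller's main loop boils down to something like this (just an illustrative sketch, not the actual pypy_interact code -- the wire format, the request shape and the lookup_virtual_file() helper are all invented here):

    import marshal
    from StringIO import StringIO

    class ToyController(object):
        def __init__(self, child_stdin, child_stdout):
            self.child_stdin = child_stdin     # we write marshalled results here
            self.child_stdout = child_stdout   # we read marshalled requests here
            self.open_files = {}               # pseudo file handle -> file-like object
            self.next_handle = 3

        def serve(self):
            while True:
                try:
                    opname, args = marshal.load(self.child_stdout)
                except EOFError:
                    return                     # the subprocess went away
                result = getattr(self, 'do_' + opname)(*args)
                marshal.dump(result, self.child_stdin)
                self.child_stdin.flush()

        def do_os_open(self, vpath, flags, mode):
            # 'vpath' is looked up in a completely custom virtual hierarchy
            f = StringIO(self.lookup_virtual_file(vpath))
            handle = self.next_handle
            self.next_handle += 1
            self.open_files[handle] = f
            return handle                      # a number we made up, not a real OS fd

        def do_os_read(self, handle, size):
            return self.open_files[handle].read(size)
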
Enough explanations, here's a how-to: For now, this lives in a branch at the 'pypy' level: cd ..../pypy-dist/pypy svn switch http://codespeak.net/svn/pypy/branch/pypy-more-rtti-inprogress/ In pypy/translator/goal: ./translate.py --sandbox --source targetpypystandalone.py --withoutmod-gc --withoutmod-_weakref (the gc and _weakref modules are disabled because they are a bit too unsafe, in the sense that they could allow bogus memory accesses) Then in the directory where the sources were created, compile with the extra RPython-level assertions enabled: make CFLAGS="-O2 -DRPY_ASSERT" mv testing_1 /some/path/pypy-c-sandbox Run it with the tools in the pypy/translator/sandbox directory: ./pypy_interact.py /some/path/pypy-c-sandbox [args...] Just like pypy-c, if you pass no argument you get the interactive prompt. In theory it's impossible to do anything bad or read a random file on the machine from this prompt. (There is no protection against using all the RAM or CPU yet.) To pass a script as an argument you need to put it in a directory along with all its dependencies, and ask pypy_interact to export this directory (read-only) to the subprocess' virtual /tmp directory with the --tmp=DIR option. Not all operations are supported; e.g. if you type os.readlink('...'), the controller crashes with an exception and the subprocess is killed. Other operations make the subprocess die directly with a "Fatal RPython error". None of this is a security hole; it just means that if you try to run some random program, it risks getting killed depending on the Python built-in functions it tries to call. By the way, as you should have realized, it's really independent from the fact that it's PyPy that we are translating. Any RPython program should do. I've successfully tried it on the JS interpreter. The controller is only called "pypy_interact" because it emulates a file hierarchy that makes pypy-c-sandbox happy - it contains (read-only) virtual directories like /bin/lib-python and /bin/pypy/lib and it pretends that the executable is /bin/pypy-c. A bientot, Armin. From grobinson at goombah.com Sun Aug 19 19:40:52 2007 From: grobinson at goombah.com (Gary Robinson) Date: Sun, 19 Aug 2007 13:40:52 -0400 Subject: [pypy-dev] How's the JIT coming along? Message-ID: <20070819134052388521.b4a47875@goombah.com> The docs for PyPy 1.0 it say: > A foreword of warning about the JIT of PyPy as of March 2007: single > functions doing integer arithmetic get great speed-ups; about > anything else will be a bit slower with the JIT than without. We are > working on this - you can even expect quick progress, because it is > mostly a matter of adding a few careful hints in the source code of > the Python interpreter of PyPy. Given the mention of the possibility of "quick progress", I'm wondering where the JIT is 5 months later... does anyone care to give a status report and/or any estimates of when it will be more broadly useful? 
Thanks, Gary -- Gary Robinson CTO Emergent Music, LLC grobinson at goombah.com 207-942-3463 Company: http://www.goombah.com Blog: http://www.garyrobinson.net From arigo at tunes.org Mon Aug 20 17:30:51 2007 From: arigo at tunes.org (Armin Rigo) Date: Mon, 20 Aug 2007 17:30:51 +0200 Subject: [pypy-dev] pypy-dev@codespeak.net In-Reply-To: <20070819103308.GA28257@code0.codespeak.net> References: <20070819103308.GA28257@code0.codespeak.net> Message-ID: <20070820153051.GA19399@code0.codespeak.net> Hi, On Sun, Aug 19, 2007 at 12:33:09PM +0200, Armin Rigo wrote: > Then in the directory where the sources were created, compile with the > extra RPython-level assertions enabled: > > make CFLAGS="-O2 -DRPY_ASSERT" > mv testing_1 /some/path/pypy-c-sandbox You can now just say 'make llsafer' instead. This enables a new flag, -DRPY_LL_ASSERT, which differs from RPY_ASSERT in some ways explained in translator/c/src/support.h and which is better suited for this situation. I would say that the resulting sandboxed PyPy is quite safe then - at most, it will abort() itself if you play too strange tricks with 'exec new.code(...)'. For paranoia bonus points you can enable both RPY_ASSERT and RPY_LL_ASSERT. For what it's worth, the -DRPY_LL_ASSERT inserts tons of checks everywhere, for an acceptable performance hit (~10%?). A bientot, Armin. From arigo at tunes.org Mon Aug 20 17:41:42 2007 From: arigo at tunes.org (Armin Rigo) Date: Mon, 20 Aug 2007 17:41:42 +0200 Subject: [pypy-dev] pypy-dev@codespeak.net In-Reply-To: <20070820153051.GA19399@code0.codespeak.net> References: <20070819103308.GA28257@code0.codespeak.net> <20070820153051.GA19399@code0.codespeak.net> Message-ID: <20070820154142.GA20520@code0.codespeak.net> Re-hi, On Mon, Aug 20, 2007 at 05:30:51PM +0200, Armin Rigo wrote: > > make CFLAGS="-O2 -DRPY_ASSERT" > > mv testing_1 /some/path/pypy-c-sandbox > > You can now just say 'make llsafer' instead. While I was at it I enabled RPY_LL_ASSERT automatically in all programs translated with --sandbox, assuming that better safety by default is a good idea in this case. So now you can say ./translate.py --sandbox and get a correctly compiled result for free. A bientot, Armin. From bea at changemaker.nu Tue Aug 21 15:54:28 2007 From: bea at changemaker.nu (=?ISO-8859-1?Q?Beatrice_D=FCring?=) Date: Tue, 21 Aug 2007 15:54:28 +0200 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <20070819134052388521.b4a47875@goombah.com> References: <20070819134052388521.b4a47875@goombah.com> Message-ID: <46CAEE94.1060004@changemaker.nu> Hi there Gary Robinson skrev: > The docs for PyPy 1.0 it say: > > >> A foreword of warning about the JIT of PyPy as of March 2007: single >> functions doing integer arithmetic get great speed-ups; about >> anything else will be a bit slower with the JIT than without. We are >> working on this - you can even expect quick progress, because it is >> mostly a matter of adding a few careful hints in the source code of >> the Python interpreter of PyPy. >> > > Given the mention of the possibility of "quick progress", I'm wondering where the JIT is 5 months later... does anyone care to give a status report and/or any estimates of when it will be more broadly useful? > > Thanks, > Gary > > > Sorry for not having responded on this issue even prior to your email. We have had discussions some of us to go for a sprint in end week of October, probably in Germany, where we would focus on clean-up work/refactoring and also to discuss plans for a 1.1 release (scope, content, time). 
The sprint plan need to be discussed in the upcoming week with a sprint announcement going out. _My guess_ is that the JIT of course would also be involved in the scope of that sprint (clean up/refactoring) and that the 1.1 release would also contain results from such a clean up. Some of the reasons for the "silence" in the JIT area is partly because of people having vacation and post-eu sabbaticals, we have been going to conferences such as (Europython, Dyla) to do talks about the JIT to get input as well as focusing on getting external functions, RPython and PyPy to work more smoothly (which was feedback from users and would be pypy-users at Europython in Vilnius). But yes - we might have been better at expressing this. If i might ask - what are the use cases you are thinking about when talking about the JIT (and/or the JIT generator?). Cheers Bea D?ring -- Beatrice D?ring, PMP Change Maker J?rntorget 3 413 04 Gothenburg www.changemaker.nu email: bea at changemaker.nu Phone: +46 31 7750940 Cellphone: +46 734 22 89 06 PyPy: www.pypy.org From simon at arrowtheory.com Tue Aug 21 18:06:37 2007 From: simon at arrowtheory.com (Simon Burton) Date: Tue, 21 Aug 2007 09:06:37 -0700 Subject: [pypy-dev] buffer class and ll2ctypes problem Message-ID: <20070821090637.f57b0323.simon@arrowtheory.com> I've been working with Richard on a new (rpython) buffer class. See attached for two approaches. The richbuf seems much clearer, but i'm not sure how to get other types out of it (like long long or short), or do byte swapping. I don't understand this much, can someone point the right direction ? Ie. which buf is the right direction to go. (I would also like to re-visit the array type in pypy/rpython/numpy, and perhaps think more generally about __getitem__ in rpython.) The rffi buffer (buffers.py) fails on CPython: $ python buffers.py Traceback (most recent call last): File "buffers.py", line 55, in test_run() File "buffers.py", line 51, in test_run assert demo(1234) == 1234 File "buffers.py", line 39, in demo buf.put_int32(value) File "buffers.py", line 31, in put_int32 intp = rffi.cast(INTP, buf) File "/home/simon/site-packages/pypy/rpython/lltypesystem/ll2ctypes.py", line 491, in force_cast cvalue = lltype2ctypes(value) File "/home/simon/site-packages/pypy/rpython/lltypesystem/ll2ctypes.py", line 336, in lltype2ctypes convert_struct(container) File "/home/simon/site-packages/pypy/rpython/lltypesystem/ll2ctypes.py", line 161, in convert_struct convert_struct(parent) File "/home/simon/site-packages/pypy/rpython/lltypesystem/ll2ctypes.py", line 165, in convert_struct cstruct = cls._malloc() File "/home/simon/site-packages/pypy/rpython/lltypesystem/ll2ctypes.py", line 89, in _malloc raise TypeError, "array length must be an int" TypeError: array length must be an int Simon. -------------- next part -------------- A non-text attachment was scrubbed... Name: buffers.py Type: text/x-python Size: 1328 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: richbuf.py Type: text/x-python Size: 1370 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Wed Aug 22 04:22:41 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 22 Aug 2007 11:22:41 +0900 Subject: [pypy-dev] How's the JIT coming along? 
In-Reply-To: <46CAEE94.1060004@changemaker.nu> References: <20070819134052388521.b4a47875@goombah.com> <46CAEE94.1060004@changemaker.nu> Message-ID: <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> Beatrice D?ring wrote: > Sorry for not having responded on this issue even prior to your email. > > We have had discussions some of us to go for a sprint in end week of > October, > probably in Germany, where we would focus on clean-up work/refactoring and > also to discuss plans for a 1.1 release (scope, content, time). The > sprint plan need to > be discussed in the upcoming week with a sprint announcement going out. > > _My guess_ is that the JIT of course would also be involved in the scope > of that sprint > (clean up/refactoring) and that the 1.1 release would also contain > results from such a clean up. > > Some of the reasons for the "silence" in the JIT area is partly because > of people having vacation > and post-eu sabbaticals, we have been going to conferences such as > (Europython, Dyla) to do > talks about the JIT to get input as well as focusing on getting external > functions, RPython and > PyPy to work more smoothly (which was feedback from users and would be > pypy-users > at Europython in Vilnius). But yes - we might have been better at > expressing this. > > If i might ask - what are the use cases you are thinking about when > talking about the JIT > (and/or the JIT generator?). I cannot speak for Garry, but I myself would be interested in pypy for numerical computing in python. Basically, there are cases where numpy is not enough and require coding in C. The two cases I am thinking are: - recursive algorithms: this means many functions calls, which are too expensive in python (eg: imagine you have a two buffers of many float x, and you want to compute f(x[i], [y[i], nu[i]) = x[i+1] = x[i] + nu[i] * (x[i] - y[i]); even using ctypes for the trivial computation in C kills performances because of the many calls) - algorithms which require many temporaries to be efficient in numpy. Both of them, if my understanding is right, would be perfect exemples of easy to optimize using JIT. David From santagada at gmail.com Wed Aug 22 06:43:01 2007 From: santagada at gmail.com (Leonardo Santagada) Date: Wed, 22 Aug 2007 01:43:01 -0300 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> References: <20070819134052388521.b4a47875@goombah.com> <46CAEE94.1060004@changemaker.nu> <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> Message-ID: <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> Em Aug 21, 2007, ?s 11:22 PM, David Cournapeau escreveu: > I cannot speak for Garry, but I myself would be interested in pypy for > numerical computing in python. Basically, there are cases where > numpy is > not enough and require coding in C. The two cases I am thinking are: > - recursive algorithms: this means many functions calls, which are > too expensive in python (eg: imagine you have a two buffers of many > float x, and you want to compute f(x[i], [y[i], nu[i]) = x[i+1] = x > [i] + > nu[i] * (x[i] - y[i]); even using ctypes for the trivial > computation in > C kills performances because of the many calls) > - algorithms which require many temporaries to be efficient in > numpy. > > Both of them, if my understanding is right, would be perfect > exemples of > easy to optimize using JIT. 
I'm guessing here, but to my knoledge to make python functions more lightweight using jit would only work by making tail call optimization, but for that you can also do the "same" optimization by using a explicit stack and not use function calls at all. -- Leonardo Santagada From david at ar.media.kyoto-u.ac.jp Wed Aug 22 07:56:15 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 22 Aug 2007 14:56:15 +0900 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> References: <20070819134052388521.b4a47875@goombah.com> <46CAEE94.1060004@changemaker.nu> <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> Message-ID: <46CBCFFF.2060708@ar.media.kyoto-u.ac.jp> Leonardo Santagada wrote: > > Em Aug 21, 2007, ?s 11:22 PM, David Cournapeau escreveu: >> I cannot speak for Garry, but I myself would be interested in pypy for >> numerical computing in python. Basically, there are cases where numpy is >> not enough and require coding in C. The two cases I am thinking are: >> - recursive algorithms: this means many functions calls, which are >> too expensive in python (eg: imagine you have a two buffers of many >> float x, and you want to compute f(x[i], [y[i], nu[i]) = x[i+1] = x[i] + >> nu[i] * (x[i] - y[i]); even using ctypes for the trivial computation in >> C kills performances because of the many calls) >> - algorithms which require many temporaries to be efficient in >> numpy. >> >> Both of them, if my understanding is right, would be perfect exemples of >> easy to optimize using JIT. > > I'm guessing here, but to my knoledge to make python functions more > lightweight using jit would only work by making tail call > optimization, but for that you can also do the "same" optimization by > using a explicit stack and not use function calls at all. Mmh, tail call optimization is useful for recursion, right ? I should not have used recursive in my post, actually, because the recursive I am talking about is not really what is meant in programming. The probleme in my case is not so much the stack, but the function call by itself. For example, the following python code: class A: def _foo(self): return None def foo(self): return self._foo() Is extremely slow compared to a compiled language like C. On my computer, I can only execute the above function around 1 millon times a second (it takes around 3500 cycles by using %timeit from ipython). This forces me to avoid functions in some performance intensive code, which is ugly. Basically, I am interested in the kind of things described there: http://www.avibryant.com/2006/09/index.html. David From cfbolz at gmx.de Wed Aug 22 08:24:12 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 22 Aug 2007 08:24:12 +0200 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> References: <20070819134052388521.b4a47875@goombah.com> <46CAEE94.1060004@changemaker.nu> <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> Message-ID: <46CBD68C.5010604@gmx.de> Leonardo Santagada wrote: > Em Aug 21, 2007, ?s 11:22 PM, David Cournapeau escreveu: >> I cannot speak for Garry, but I myself would be interested in pypy for >> numerical computing in python. Basically, there are cases where >> numpy is >> not enough and require coding in C. 
The two cases I am thinking are: >> - recursive algorithms: this means many functions calls, which are >> too expensive in python (eg: imagine you have a two buffers of many >> float x, and you want to compute f(x[i], [y[i], nu[i]) = x[i+1] = x >> [i] + >> nu[i] * (x[i] - y[i]); even using ctypes for the trivial >> computation in >> C kills performances because of the many calls) >> - algorithms which require many temporaries to be efficient in >> numpy. >> >> Both of them, if my understanding is right, would be perfect >> exemples of >> easy to optimize using JIT. > > I'm guessing here, but to my knoledge to make python functions more > lightweight using jit would only work by making tail call > optimization, but for that you can also do the "same" optimization by > using a explicit stack and not use function calls at all. I don't have time to go into this right now, but you are guessing wrong. With some sophistication you can also use the JIT to optimize normal calls (even inline them, with enough effort). It would require some more work, though. Cheers, Carl Friedrich From erik at medallia.com Wed Aug 22 14:01:27 2007 From: erik at medallia.com (Erik Gorset) Date: Wed, 22 Aug 2007 14:01:27 +0200 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <46CBCFFF.2060708@ar.media.kyoto-u.ac.jp> References: <20070819134052388521.b4a47875@goombah.com> <46CAEE94.1060004@changemaker.nu> <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> <46CBCFFF.2060708@ar.media.kyoto-u.ac.jp> Message-ID: <9E4AD833-8A8C-47AE-8CF4-FB0F961A3FC8@medallia.com> On Aug 22, 2007, at 7:56 AM, David Cournapeau wrote: > For example, the following python code: > > class A: > def _foo(self): > return None > def foo(self): > return self._foo() > > Is extremely slow compared to a compiled language like C. On my > computer, I can only execute the above function around 1 millon > times a > second (it takes around 3500 cycles by using %timeit from ipython). > This > forces me to avoid functions in some performance intensive code, which > is ugly. Basically, I am interested in the kind of things described > there: http://www.avibryant.com/2006/09/index.html. The self technology is extremely interesting. For more information, you can read Urs thesis [0], which is the bases for the cool optimization tricks in the strongtalk vm [1]. The important trick here is that the vm is able to do optimistic inlining based on type feedback during runtime, and deoptimize the compiled code when it guesses wrong or when you enter the debugger. I guess you can do the same thing with partial evaluation if you gather enough runtime statistics to figure out which functions to specialize/inline. More specific for your example, it will result in code looking something like this: class A_compiled: def foo(self): if type(self) == A: return None # _foo has been inlined else: return self._foo() # uncommon case Of course, the higher up you can start inlining, the less calls is needed and better performance is achieved: a = obj.foo() turns into: if type(obj) == A: a = None else: a = obj.foo() [0] http://www.cs.ucsb.edu/~urs/oocsb/self/papers/urs-thesis.html [1] http://strongtalk.org/ - Erik Gorset -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 1962 bytes Desc: not available URL: From grobinson at goombah.com Wed Aug 22 14:21:23 2007 From: grobinson at goombah.com (Gary Robinson) Date: Wed, 22 Aug 2007 08:21:23 -0400 Subject: [pypy-dev] How's the JIT coming along? Message-ID: <20070822082123472317.d675fd87@goombah.com> > If i might ask - what are the use cases you are thinking about when > talking about the JIT > (and/or the JIT generator?). Numerical computing. I have a lot of floating point arithmetic. I was also going to mention the function calling overhead that I see was already discussed (yesterday) in this thread! -- Gary Robinson CTO Emergent Music, LLC grobinson at goombah.com 207-942-3463 Company: http://www.goombah.com Blog: http://www.garyrobinson.net From cfbolz at gmx.de Wed Aug 22 16:03:48 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 22 Aug 2007 16:03:48 +0200 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <20070822082123472317.d675fd87@goombah.com> References: <20070822082123472317.d675fd87@goombah.com> Message-ID: <348899050708220703i1ad5a8bey1e863f4a288779eb@mail.gmail.com> Hi Gary, 2007/8/22, Gary Robinson : > > If i might ask - what are the use cases you are thinking about when > > talking about the JIT > > (and/or the JIT generator?). > > Numerical computing. I have a lot of floating point arithmetic. I was also going to mention the > function calling overhead that I see was already discussed (yesterday) in this thread! We are quite far from supporting floats, I fear. Our i386 assembly backend doesn't have support for floats so far. That doesn't mean that you cannot JIT functions that do float operations, it just means that you won't get any great speedups. Cheers, Carl Friedrich From cfbolz at gmx.de Wed Aug 22 16:09:21 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 22 Aug 2007 16:09:21 +0200 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <46CBCFFF.2060708@ar.media.kyoto-u.ac.jp> References: <20070819134052388521.b4a47875@goombah.com> <46CAEE94.1060004@changemaker.nu> <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> <46CBCFFF.2060708@ar.media.kyoto-u.ac.jp> Message-ID: <348899050708220709t4ab7eeadt46417b5087099048@mail.gmail.com> Hi David, 2007/8/22, David Cournapeau : > For example, the following python code: > > class A: > def _foo(self): > return None > def foo(self): > return self._foo() > > Is extremely slow compared to a compiled language like C. On my > computer, I can only execute the above function around 1 millon times a > second (it takes around 3500 cycles by using %timeit from ipython). This > forces me to avoid functions in some performance intensive code, which > is ugly. Basically, I am interested in the kind of things described > there: http://www.avibryant.com/2006/09/index.html. This page is talking about polymorphic inline caches. One of the more important building blocks of our JIT, "promotion", can be considered to be a generalization of polymorphic inline caches. So yes, PyPy's Jit will eventually be able to do things like this. Cheers, Carl Friedrich From cfbolz at gmx.de Wed Aug 22 16:13:23 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 22 Aug 2007 16:13:23 +0200 Subject: [pypy-dev] How's the JIT coming along? 
In-Reply-To: <9E4AD833-8A8C-47AE-8CF4-FB0F961A3FC8@medallia.com> References: <20070819134052388521.b4a47875@goombah.com> <46CAEE94.1060004@changemaker.nu> <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> <46CBCFFF.2060708@ar.media.kyoto-u.ac.jp> <9E4AD833-8A8C-47AE-8CF4-FB0F961A3FC8@medallia.com> Message-ID: <348899050708220713l32c9acc7s57bd36569af61af4@mail.gmail.com> Hi Erik, 2007/8/22, Erik Gorset : > On Aug 22, 2007, at 7:56 AM, David Cournapeau wrote: > > For example, the following python code: > > > > class A: > > def _foo(self): > > return None > > def foo(self): > > return self._foo() > > [snip] > More specific for your example, it will result in code looking > something like this: > > class A_compiled: > def foo(self): > if type(self) == A: > return None # _foo has been inlined > else: > return self._foo() # uncommon case > > Of course, the higher up you can start inlining, the less calls is > needed and > better performance is achieved: > > a = obj.foo() > > turns into: > > if type(obj) == A: > a = None > else: > a = obj.foo() those inline checks are not enough in the Python case. You also need to check that the method A.foo was not changed in the meantime. But yes, in principle this is the idea. It works even better in the case of integer-handling functions, because the inlined operations can in many cases be implemented by processor opcodes. Cheers, Carl Friedrich From erik at medallia.com Wed Aug 22 17:47:20 2007 From: erik at medallia.com (Erik Gorset) Date: Wed, 22 Aug 2007 17:47:20 +0200 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <348899050708220713l32c9acc7s57bd36569af61af4@mail.gmail.com> References: <20070819134052388521.b4a47875@goombah.com> <46CAEE94.1060004@changemaker.nu> <46CB9DF1.8060103@ar.media.kyoto-u.ac.jp> <4B0F48B0-D9AD-416E-81F6-3A839C002620@gmail.com> <46CBCFFF.2060708@ar.media.kyoto-u.ac.jp> <9E4AD833-8A8C-47AE-8CF4-FB0F961A3FC8@medallia.com> <348899050708220713l32c9acc7s57bd36569af61af4@mail.gmail.com> Message-ID: <8BD45F0D-F754-4B09-B206-1925E53A2653@medallia.com> On Aug 22, 2007, at 4:13 PM, Carl Friedrich Bolz wrote: > those inline checks are not enough in the Python case. You also need > to check that the method A.foo was not changed in the meantime. But > yes, in principle this is the idea. It works even better in the case > of integer-handling functions, because the inlined operations can in > many cases be implemented by processor opcodes. Yes, and one way to do this is to keep a dependency graph for the compiled code, and invalidate when a dependency changes. You can get away with checks at relatively few points, and you can also insert traps for assignments to objects which will (possibly) shadow compiled dependencies. -- Erik Gorset -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1962 bytes Desc: not available URL: From len-l at telus.net Wed Aug 22 19:42:12 2007 From: len-l at telus.net (Lenard Lindstrom) Date: Wed, 22 Aug 2007 10:42:12 -0700 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <20070822082123472317.d675fd87@goombah.com> References: <20070822082123472317.d675fd87@goombah.com> Message-ID: <46CC7574.5070500@telus.net> Gary Robinson wrote: >> If i might ask - what are the use cases you are thinking about when >> talking about the JIT >> (and/or the JIT generator?). >> > > Numerical computing. I have a lot of floating point arithmetic. 
I was also going to mention the function calling overhead that I see was already discussed (yesterday) in this thread! > > > I would say that Python function calls are not the only calls to consider. At some point the ctypes module will need porting to PyPy. A JIT could remove the overhead of libffi. -- Lenard Lindstrom From fijall at gmail.com Wed Aug 22 20:49:12 2007 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 22 Aug 2007 20:49:12 +0200 Subject: [pypy-dev] PyPy cleanups Message-ID: <693bc9ab0708221149i4819f01l3a4b7fd71574064b@mail.gmail.com> Hello, This is the list of possibly orphaned parts of pypy. We should consider each item here and think in detail what to do with them. They're mostly broken or not actively maintained. If nobody shows an interest to maintain them, deleting would be the best solution to avoid clutter. Also they pose a maintenance burden when we proceed with the needed large refactorings. Maybe it is also a solution to lazily delete them as they are broken by refactoring if nobody steps up. Of course deleted things can easily be brought back from svn history, if there is renewed interest. So the above may sound scarier than it is. * llvm backend - we need a maintainer for that * pyrex backend (llvm depends on it tough) * js interpreter - needs a lot of work, before we might have usecases for that. * common lisp backend * squeak backend * ext compiler - would need a large rewrite from rctypes to rffi * rctypes (both) - delete as soon as they would not be needed * logic objspace, including constraint libraries Cheers, fijal, Samuele, Simon & Carl Friedrich -------------- next part -------------- An HTML attachment was scrubbed... URL: From cfbolz at gmx.de Wed Aug 22 23:02:28 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 22 Aug 2007 23:02:28 +0200 Subject: [pypy-dev] How's the JIT coming along? In-Reply-To: <46CC7574.5070500@telus.net> References: <20070822082123472317.d675fd87@goombah.com> <46CC7574.5070500@telus.net> Message-ID: <46CCA464.7010301@gmx.de> Lenard Lindstrom wrote: > Gary Robinson wrote: >>> If i might ask - what are the use cases you are thinking about when >>> talking about the JIT >>> (and/or the JIT generator?). >>> >> Numerical computing. I have a lot of floating point arithmetic. I was >> also going to mention the function calling overhead that I see was >> already discussed (yesterday) in this thread! > > I would say that Python function calls are not the only calls to > consider. At some point the ctypes module will need porting to PyPy. A > JIT could remove the overhead of libffi. Absolutely. This is part of the more longer-term plans for the JIT (especially since it involves re-implementing ctypes, which will be its own kind of pain). In theory the JIT can not only remove some of the overhead of libffi but also be faster than statically compiled extension modules, since in some cases it can know in advance that some runtime type checks that a static extension does are always true and skip them. 
Cheers, Carl Friedrich From simon at arrowtheory.com Thu Aug 23 00:23:40 2007 From: simon at arrowtheory.com (Simon Burton) Date: Wed, 22 Aug 2007 15:23:40 -0700 Subject: [pypy-dev] rffi strict typing In-Reply-To: <20070720125254.GC19336@code0.codespeak.net> References: <20070719183117.97ad1ce3.simon@arrowtheory.com> <20070720125254.GC19336@code0.codespeak.net> Message-ID: <20070822152340.8075e11c.simon@arrowtheory.com> On Fri, 20 Jul 2007 14:52:54 +0200 Armin Rigo wrote: > > Hi Simon, > > On Thu, Jul 19, 2007 at 06:31:17PM -0700, Simon Burton wrote: > > The lltype's are very strict about types. > > Indeed, it seems worthwhile to loosen this restriction at least for the > purpose of calling external functions... Not sure how, but there are > possible hacks at least. Following some hints from Samuele, I am trying to wrap such functions in another function that does some casting. Here is my first attempt. It does no casting, but i already can't get it to work. def softwrapper(funcptr, arg_tps): unrolling_arg_tps = unrolling_iterable(enumerate(arg_tps)) def softfunc(*args): real_args = tuple([args[i] for i, tp in unrolling_arg_tps]) result = funcptr(*real_args) return softfunc where funcptr comes from a call to llexternal. It seems the tuple construction should not work (each args[i] is a different type), but instead i get: CallPatternTooComplex': '*' argument must be SomeTuple. Um.. My only guess now is to malloc a TUPLE_TYPE.. ?!? no idea.. (I am really sick of code generation and hope it doesn't come to that). Simon. From alexandre.fayolle at logilab.fr Thu Aug 23 11:18:50 2007 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Thu, 23 Aug 2007 11:18:50 +0200 Subject: [pypy-dev] PyPy cleanups In-Reply-To: <693bc9ab0708221149i4819f01l3a4b7fd71574064b@mail.gmail.com> References: <693bc9ab0708221149i4819f01l3a4b7fd71574064b@mail.gmail.com> Message-ID: <20070823091850.GB4812@logilab.fr> On Wed, Aug 22, 2007 at 08:49:12PM +0200, Maciej Fijalkowski wrote: > Hello, > > This is the list of possibly orphaned parts of pypy. > We should consider each item here and think in detail > what to do with them. They're mostly broken or not > actively maintained. If nobody shows an interest > to maintain them, deleting would be the best solution > to avoid clutter. Also they pose a maintenance burden > when we proceed with the needed large refactorings. > Maybe it is also a solution to lazily delete them as > they are broken by refactoring if nobody steps up. > > Of course deleted things can easily be brought back from > svn history, if there is renewed interest. So the above > may sound scarier than it is. > > * llvm backend - we need a maintainer for that > * pyrex backend (llvm depends on it tough) > * js interpreter - needs a lot of work, before > we might have usecases for that. > * common lisp backend > * squeak backend > * ext compiler - would need a large rewrite from rctypes to rffi > * rctypes (both) - delete as soon as they would not be needed > * logic objspace, including constraint libraries Please add aop.py and dbc.py (from pypy/lib) to that list. -- Alexandre Fayolle LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations D?veloppement logiciel sur mesure: http://www.logilab.fr/services Informatique scientifique: http://www.logilab.fr/science Reprise et maintenance de sites CPS: http://www.migration-cms.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: Digital signature URL: From simon at arrowtheory.com Thu Aug 23 19:07:37 2007 From: simon at arrowtheory.com (Simon Burton) Date: Thu, 23 Aug 2007 10:07:37 -0700 Subject: [pypy-dev] rffi strict typing In-Reply-To: <20070822152340.8075e11c.simon@arrowtheory.com> References: <20070719183117.97ad1ce3.simon@arrowtheory.com> <20070720125254.GC19336@code0.codespeak.net> <20070822152340.8075e11c.simon@arrowtheory.com> Message-ID: <20070823100737.1f3f56f9.simon@arrowtheory.com> On Wed, 22 Aug 2007 15:23:40 -0700 Simon Burton wrote: > > Following some hints from Samuele, I am trying to wrap such functions in > another function that does some casting. Here is the latest: def softwrapper(funcptr, arg_tps): unrolling_arg_tps = unrolling_iterable(enumerate(arg_tps)) def softfunc(*args): real_args = () for i, tp in unrolling_arg_tps: real_args = real_args + (args[i],) result = funcptr(*real_args) return result return softfunc When applied to llexternal's that have pointer-to-struct args the generated c code breaks; it decides to declare&use anonymous structs: long pypy_g_softfunc_star2_1(struct pypy__cairo_surface0 *l_stararg0_7, struct pypy_array3 *l_stararg1_7) { long l_v492; block0: l_v492 = cairo_surface_write_to_png(l_stararg0_7, l_stararg1_7); goto block1; block1: RPY_DEBUG_RETURN(); return l_v492; } where cairo_surface_write_to_png is declared: cairo_surface_write_to_png (cairo_surface_t *surface, const char *filename); I wonder if the annotator is getting confused by the real_args tuple growing... From fijal at genesilico.pl Thu Aug 23 19:13:26 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Thu, 23 Aug 2007 19:13:26 +0200 Subject: [pypy-dev] rffi strict typing In-Reply-To: <20070823100737.1f3f56f9.simon@arrowtheory.com> References: <20070719183117.97ad1ce3.simon@arrowtheory.com> <20070720125254.GC19336@code0.codespeak.net> <20070822152340.8075e11c.simon@arrowtheory.com> <20070823100737.1f3f56f9.simon@arrowtheory.com> Message-ID: <46CDC036.20506@genesilico.pl> Simon Burton wrote: > On Wed, 22 Aug 2007 15:23:40 -0700 > Simon Burton wrote: > > >> Following some hints from Samuele, I am trying to wrap such functions in >> another function that does some casting. >> > > Here is the latest: > > > def softwrapper(funcptr, arg_tps): > unrolling_arg_tps = unrolling_iterable(enumerate(arg_tps)) > def softfunc(*args): > real_args = () > for i, tp in unrolling_arg_tps: > real_args = real_args + (args[i],) > result = funcptr(*real_args) > return result > return softfunc > > When applied to llexternal's that have pointer-to-struct args > the generated c code breaks; it decides to declare&use anonymous > structs: > > long pypy_g_softfunc_star2_1(struct pypy__cairo_surface0 *l_stararg0_7, struct pypy_array3 *l_stararg1_7) { > long l_v492; > > block0: > l_v492 = cairo_surface_write_to_png(l_stararg0_7, l_stararg1_7); > goto block1; > > block1: > RPY_DEBUG_RETURN(); > return l_v492; > } > > where cairo_surface_write_to_png is declared: > > cairo_surface_write_to_png (cairo_surface_t *surface, > const char *filename); > > > I wonder if the annotator is getting confused by the real_args tuple growing... > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > > :. > > You need to create the structure which you push by hand. As well as array argument. :. 
From simon at arrowtheory.com Thu Aug 23 21:42:08 2007 From: simon at arrowtheory.com (Simon Burton) Date: Thu, 23 Aug 2007 12:42:08 -0700 Subject: [pypy-dev] rffi strict typing In-Reply-To: <46CDC036.20506@genesilico.pl> References: <20070719183117.97ad1ce3.simon@arrowtheory.com> <20070720125254.GC19336@code0.codespeak.net> <20070822152340.8075e11c.simon@arrowtheory.com> <20070823100737.1f3f56f9.simon@arrowtheory.com> <46CDC036.20506@genesilico.pl> Message-ID: <20070823124208.dfbcea83.simon@arrowtheory.com> On Thu, 23 Aug 2007 19:13:26 +0200 Maciek Fijalkowski wrote: > > You need to create the structure which you push by hand. As well as > array argument. No, structure comes from a call to another llexternal. It makes no sense to clone it. It should be an opaque struct anyway. Simon. From simon at arrowtheory.com Thu Aug 23 22:05:52 2007 From: simon at arrowtheory.com (Simon Burton) Date: Thu, 23 Aug 2007 13:05:52 -0700 Subject: [pypy-dev] rffi strict typing In-Reply-To: <46CDC036.20506@genesilico.pl> References: <20070719183117.97ad1ce3.simon@arrowtheory.com> <20070720125254.GC19336@code0.codespeak.net> <20070822152340.8075e11c.simon@arrowtheory.com> <20070823100737.1f3f56f9.simon@arrowtheory.com> <46CDC036.20506@genesilico.pl> Message-ID: <20070823130552.ce5e77bd.simon@arrowtheory.com> On Thu, 23 Aug 2007 19:13:26 +0200 Maciek Fijalkowski wrote: > > You need to create the structure which you push by hand. As well as > array argument. I stuck a test in test_rffi but didn't want to commit it. It works fine. (attached) Simon. -------------- next part -------------- A non-text attachment was scrubbed... Name: test_rffi.py Type: text/x-python Size: 6958 bytes Desc: not available URL: From fijal at genesilico.pl Fri Aug 24 10:06:01 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Fri, 24 Aug 2007 10:06:01 +0200 Subject: [pypy-dev] rffi strict typing In-Reply-To: <20070823124208.dfbcea83.simon@arrowtheory.com> References: <20070719183117.97ad1ce3.simon@arrowtheory.com> <20070720125254.GC19336@code0.codespeak.net> <20070822152340.8075e11c.simon@arrowtheory.com> <20070823100737.1f3f56f9.simon@arrowtheory.com> <46CDC036.20506@genesilico.pl> <20070823124208.dfbcea83.simon@arrowtheory.com> Message-ID: <46CE9169.2070307@genesilico.pl> Simon Burton wrote: > On Thu, 23 Aug 2007 19:13:26 +0200 > Maciek Fijalkowski wrote: > > >> You need to create the structure which you push by hand. As well as >> array argument. >> > > No, structure comes from a call to another llexternal. It makes no sense to clone it. > It should be an opaque struct anyway. > > Simon. > There is a COpaque and COpaquePtr for opaque structures. cheers, fijal :. From cfbolz at gmx.de Sat Aug 25 00:08:01 2007 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Sat, 25 Aug 2007 00:08:01 +0200 Subject: [pypy-dev] away for two weeks Message-ID: <46CF56C1.7020304@gmx.de> Hi all, I will be away from home (and probably from Internet) for about two weeks, starting from Sunday morning. 
Cheers, Carl Friedrich From exarkun at divmod.com Sat Aug 25 01:18:08 2007 From: exarkun at divmod.com (Jean-Paul Calderone) Date: Fri, 24 Aug 2007 19:18:08 -0400 Subject: [pypy-dev] translation broken in pypy-more-rtti-inprogress branch In-Reply-To: 0 Message-ID: <20070824231808.8162.305325374.divmod.quotient.1425@ohm> Just a heads up :) Here's the traceback I get: [translation:ERROR] Error: [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "./translate.py", line 272, in main [translation:ERROR] drv.proceed(goals) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/translator/driver.py", line 748, in proceed [translation:ERROR] return self._execute(goals, task_skip = self._maybe_skip()) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/translator/tool/taskengine.py", line 112, in _execute [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/translator/driver.py", line 265, in _do [translation:ERROR] res = func() [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/translator/driver.py", line 334, in task_rtype_lltype [translation:ERROR] crash_on_first_typeerror=insist) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/rtyper.py", line 180, in specialize [translation:ERROR] self.specialize_more_blocks() [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/rtyper.py", line 226, in specialize_more_blocks [translation:ERROR] self.specialize_block(block) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/rtyper.py", line 345, in specialize_block [translation:ERROR] self.translate_hl_to_ll(hop, varmapping) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/rtyper.py", line 474, in translate_hl_to_ll [translation:ERROR] resultvar = hop.dispatch() [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/rtyper.py", line 712, in dispatch [translation:ERROR] return translate_meth(self) [translation:ERROR] File "None", line 5, in translate_op_delitem [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/lltypesystem/rdict.py", line 319, in rtype_delitem [translation:ERROR] return hop.gendirectcall(ll_dict_delitem, v_dict, v_key) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/rtyper.py", line 750, in gendirectcall [translation:ERROR] return self.llops.gendirectcall(ll_function, *args_v) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/rtyper.py", line 908, in gendirectcall [translation:ERROR] rtyper.lowlevel_ann_policy) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/rpython/annlowlevel.py", line 103, in annotate_lowlevel_helper [translation:ERROR] return annotator.annotate_helper(ll_function, args_s, policy) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/annrpython.py", line 129, in annotate_helper [translation:ERROR] self.complete_helpers(policy) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/annrpython.py", line 175, in complete_helpers [translation:ERROR] self.complete() [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/annrpython.py", line 249, in complete [translation:ERROR] self.processblock(graph, block) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/annrpython.py", line 475, in processblock [translation:ERROR] self.flowin(graph, block) 
[translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/annrpython.py", line 535, in flowin [translation:ERROR] self.consider_op(block.operations[i]) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/annrpython.py", line 737, in consider_op [translation:ERROR] raise_nicer_exception(op, str(graph)) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/annrpython.py", line 734, in consider_op [translation:ERROR] resultcell = consider_meth(*argcells) [translation:ERROR] File "", line 3, in consider_op_call_args [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/unaryop.py", line 169, in call_args [translation:ERROR] return obj.call(getbookkeeper().build_args("call_args", args_s)) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/unaryop.py", line 582, in call [translation:ERROR] return bltn.analyser(*args_s, **kwds_s) [translation:ERROR] File "/home/exarkun/Projects/pypy/trunk/pypy/annotation/builtin.py", line 418, in malloc [translation:ERROR] assert (s_n is None or s_n.knowntype == int [translation:ERROR] AssertionError': [translation:ERROR] .. v1 = call_args((function malloc), ((2, ('zero',), False, False)), v0, new_size_0, (True)) [translation:ERROR] .. '(pypy.rpython.lltypesystem.rdict:475)ll_dict_resize__dicttablePtr' [translation:ERROR] Processing block: [translation:ERROR] block at 97 is a [translation:ERROR] in (pypy.rpython.lltypesystem.rdict:475)ll_dict_resize__dicttablePtr [translation:ERROR] containing the following operations: [translation:ERROR] v2 = simple_call((function typeOf), old_entries_0) [translation:ERROR] v0 = getattr(v2, ('TO')) [translation:ERROR] v1 = call_args((function malloc), ((2, ('zero',), False, False)), v0, new_size_0, (True)) [translation:ERROR] v3 = setattr(d_0, ('entries'), v1) [translation:ERROR] v4 = setattr(d_0, ('num_items'), (0)) [translation:ERROR] v5 = setattr(d_0, ('num_pristine_entries'), new_size_0) [translation:ERROR] --end-- [translation] start debugger... > /home/exarkun/Projects/pypy/trunk/pypy/annotation/builtin.py(418)malloc() -> assert (s_n is None or s_n.knowntype == int (Pdb+) This is from r45970 Jean-Paul From simon at arrowtheory.com Wed Aug 29 02:35:57 2007 From: simon at arrowtheory.com (Simon Burton) Date: Tue, 28 Aug 2007 17:35:57 -0700 Subject: [pypy-dev] buffer class and ll2ctypes problem In-Reply-To: <20070821090637.f57b0323.simon@arrowtheory.com> References: <20070821090637.f57b0323.simon@arrowtheory.com> Message-ID: <20070828173557.1f161eec.simon@arrowtheory.com> On Tue, 21 Aug 2007 09:06:37 -0700 Simon Burton wrote: > > I don't understand this much, can someone point the right direction ? > Ie. which buf is the right direction to go. failing that, does anyone have a random opinion ? Simon. From elmo13 at jippii.fi Thu Aug 30 21:10:57 2007 From: elmo13 at jippii.fi (=?ISO-8859-1?Q?Elmo_M=E4ntynen?=) Date: Thu, 30 Aug 2007 22:10:57 +0300 Subject: [pypy-dev] About adding a blist implementation... In-Reply-To: <20070830112103.d8da6bdc.simon@arrowtheory.com> References: <466FE95A.6020805@jippii.fi> <46779173.3050902@jippii.fi> <20070621181001.d9322aef.simon@arrowtheory.com> <467BD864.8040505@jippii.fi> <20070830112103.d8da6bdc.simon@arrowtheory.com> Message-ID: <46D71641.2050000@jippii.fi> Sorry, I got distracted, and now I have commitments (schoolwork) which prevent me from doing anything moderately involved. 
And no, I have only started porting the implementation and fixed some general multilist bugs (nothing grave). I will return to all this soon, probably after a month or two... If you need a btree, you'd better do it yourself anyway, and we'll see later if a refactoring is possible. Elmo Simon Burton wrote: > Elmo! > did you get anywhere with this ?? > > Simon. > > On Fri, 22 Jun 2007 17:10:44 +0300 > Elmo M?ntynen wrote: > > >> Leonardo Santagada wrote: >> >>> Em 21/06/2007, ?s 22:10, Simon Burton escreveu: >>> >>> >>> >>>> On Tue, 19 Jun 2007 11:18:59 +0300 >>>> Elmo M?ntynen wrote: >>>> >>>> >>>> >>>>> Since I'm adding a list implementation, I'd like to know what >>>>> benchmarks >>>>> have you used to test the other ones. Also, if you know apps with >>>>> heavy >>>>> usage of lists and some that use very long lists, I'd be >>>>> interested very >>>>> much. >>>>> >>>>> >>>> Can you explain what this blist is ? Is it a btree ? I think I need >>>> an rpython btree. >>>> >>>> Simon. >>>> >>>> >>> I dunno but this guy is probably talking about something like this >>> http://mail.python.org/pipermail/python-3000/2007-May/thread.html#7127 >>> >>> If it is not related I would also like to understand the diferences. >>> >>> -- >>> Leonardo Santagada >>> >>> Sent from my iPhone >>> >>> >>> >> Hi. >> It is an list implementation that has much better performance with very >> big lists, available as an extension to CPython, and soon included in >> pypy by me. >> This cheeseshop entry is a good place to start: >> http://www.python.org/pypi/blist/ >> >> The thread might be about integrading it into CPython or something, >> haven't read. >> >> It is based on btree, but the implementation I'm working on is based on >> the python prototype included in the tar distrib, which won't probably >> be useful if you specifically need a btree. In the future, maybe some >> refactoring would be useful, but for now I'm happy with the way I started. >> >> I'd still be interested in ways to test the different list >> implementations =) >> >> Elmo >> _______________________________________________ >> pypy-dev at codespeak.net >> http://codespeak.net/mailman/listinfo/pypy-dev >> From arigo at tunes.org Fri Aug 31 10:05:55 2007 From: arigo at tunes.org (Armin Rigo) Date: Fri, 31 Aug 2007 10:05:55 +0200 Subject: [pypy-dev] Talked at ESUG Message-ID: <20070831080555.GA19496@code0.codespeak.net> Hi all, As some of you know I've just given a talk at the ESUG (European Smalltalk User Group) conference, in Lugano. I mostly showed up for my talk only, staying only one night. The talk yesterday late afternoon went well; it was the demo part of the talk we gave at Dyla. Thanks to Roel for pushing us to give a presentation here - it seems quite unusual to have presentations that are given by "outsiders" of Smalltalk, and the organizers suggested to the audiance that I should get an extra applause for daring come and talk :-) Some people had hard about PyPy already, even though it's clear that in the Smalltalk situation the whole approach we took in PyPy is a bit overkill: it's a very stable language that existed and had good VMs for a very long time, with a relatively small core and a lot of things merely written in Smalltalk on top of it. This was the argument I received when I tried to discuss with someone working on XTC, a JIT compiler for Squeak (yet another) - the language is too small to bother with more general techniques, and he might be right for now. (I cannot seem to find his work by googling.) 
Where I failed to convince him was that PyPy was a possibly better alternative to "plans" like: all Smalltalk-like languages (Self, dialects, etc.) should be implemented in a single VM in a way that allows a single JIT for them all. Non-performance-related benefits of PyPy are also not completely clear for Smalltalk given that the language is more flexible than Python to start with. However, people generally see why we'd like such things for Python-like languages - I think it's a very positive thing. I also believe that presenting the PyPy approach to different communities can contribute just a little bit in the long run to push forward some ideas; for example, people may think twice about Slang or equivalents and see if it could be practical to make these languages just a bit higher level, e.g. move them above the GC. A final note is that talking about PyPy in live seems to be very important. Various people had spend some time looking around our web site because they had heard about PyPy, but mostly failed to understand what we were trying to do. For most talks we have given about PyPy I remember getting some feedback of people that now, at least, had got an overview about what we were doing. A bientot, Armin. From tismer at stackless.com Fri Aug 31 14:04:14 2007 From: tismer at stackless.com (Christian Tismer) Date: Fri, 31 Aug 2007 14:04:14 +0200 Subject: [pypy-dev] PyPy cleanups In-Reply-To: <693bc9ab0708221149i4819f01l3a4b7fd71574064b@mail.gmail.com> References: <693bc9ab0708221149i4819f01l3a4b7fd71574064b@mail.gmail.com> Message-ID: <46D803BE.50001@stackless.com> Maciej Fijalkowski wrote: > Hello, > ... > Of course deleted things can easily be brought back from > svn history, if there is renewed interest. So the above > may sound scarier than it is. No, it is scary for me. Things that I can't see are gone. The svn history argument does not help. I would prefer disabling the tests and moving unmaintained stuff into maybe a "unmaintained" folder or renaming ist's path to include a syllable for unmaintained. Deleting is final, in a sense. scared - ly y'rs -- chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From paul.degrandis at gmail.com Fri Aug 31 14:43:29 2007 From: paul.degrandis at gmail.com (Paul deGrandis) Date: Fri, 31 Aug 2007 08:43:29 -0400 Subject: [pypy-dev] PyPy cleanups In-Reply-To: <46D803BE.50001@stackless.com> References: <693bc9ab0708221149i4819f01l3a4b7fd71574064b@mail.gmail.com> <46D803BE.50001@stackless.com> Message-ID: <9c0bb8a00708310543s3efb3092ye03a56a49af5edd@mail.gmail.com> I like Christian's ideas. let's not delete them but move them to "pasture" or "unmaintained". Some people may want it, and someone (unfamilar to the project) may bring one of them back to life. Paul On 8/31/07, Christian Tismer wrote: > > Maciej Fijalkowski wrote: > > Hello, > > > ... > > > Of course deleted things can easily be brought back from > > svn history, if there is renewed interest. So the above > > may sound scarier than it is. > > No, it is scary for me. Things that I can't see are gone. > The svn history argument does not help. 
> > I would prefer disabling the tests and moving > unmaintained stuff into maybe a "unmaintained" folder or > renaming ist's path to include a syllable for unmaintained. > > Deleting is final, in a sense. > > scared - ly y'rs -- chris > > -- > Christian Tismer :^) > tismerysoft GmbH : Have a break! Take a ride on Python's > Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ > 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ > work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 > PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 > whom do you want to sponsor today? http://www.stackless.com/ > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tismer at stackless.com Fri Aug 31 16:55:28 2007 From: tismer at stackless.com (Christian Tismer) Date: Fri, 31 Aug 2007 16:55:28 +0200 Subject: [pypy-dev] PyPy cleanups In-Reply-To: <9c0bb8a00708310543s3efb3092ye03a56a49af5edd@mail.gmail.com> References: <693bc9ab0708221149i4819f01l3a4b7fd71574064b@mail.gmail.com> <46D803BE.50001@stackless.com> <9c0bb8a00708310543s3efb3092ye03a56a49af5edd@mail.gmail.com> Message-ID: <46D82BE0.2050503@stackless.com> Paul deGrandis wrote: > I like Christian's ideas. > > let's not delete them but move them to "pasture" or "unmaintained". > Some people may want it, and someone (unfamilar to the project) may bring > one of them back to life. And I'd rather go for this earlier than later. Not removing but renaming is not destructive, and may shake up people who thought "at some time, I will work on this" to consider a reaction. Things that are out of sight are forgotten very fast. Things that get a nasty name in their path are crying for maintenance, also expressing that people are sad to loose them. cheers - chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From fijal at genesilico.pl Fri Aug 31 20:49:15 2007 From: fijal at genesilico.pl (Maciek Fijalkowski) Date: Fri, 31 Aug 2007 20:49:15 +0200 Subject: [pypy-dev] I'm off for 4 weeks Message-ID: <46D862AB.40500@genesilico.pl> I'll be probably completely offline for like next 4 weeks, starting from monday. Cheers, fijal :.