From guido at python.org  Tue Jul  1 13:54:45 2003
From: guido at python.org (Guido van Rossum)
Date: Tue, 01 Jul 2003 07:54:45 -0400
Subject: [pypy-dev] Annotating space status
Message-ID: <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net>

I'm hoping Armin has some time to look into this; I probably won't.

There's this test that doesn't work any more:

    def dont_test_assign_local_w_flow_control(self):
        # XXX This test doesn't work any more -- disabled for now
        code = """
def f(n):
    total = 0
    for i in range(n):
        total = total + 1
    return n
"""
        x = self.codetest(code, 'f', [self.space.wrap(3)])
        self.assertEquals(type(x), W_Integer)

It worked before, when range(n) was enough to befuddle the symbolic
interpretation, but now, when you enable it (remove the "dont_" from
the name) it raises "TypeError: no result at all?!?!"

This seems to be because the next() call implicit in the for loop
doesn't ever raise IndeterminateCondition, because it is iterating
over a constant list (the list [0, 1, 2] returned by range(3)).  But
somehow frame unification decides to unify states after the 2nd time
through the loop -- and the alternative branch (to the end of the
loop) is never taken, so there is never a return value.

What's wrong here???

--Guido van Rossum (home page: http://www.python.org/~guido/)

From mwh at python.net  Tue Jul  1 15:29:00 2003
From: mwh at python.net (Michael Hudson)
Date: Tue, 01 Jul 2003 14:29:00 +0100
Subject: [pypy-dev] Re: [pypy-svn] rev 1073 - in pypy/trunk/src/pypy: interpreter/test objspace/std tool
In-Reply-To: <20030630154108.BA5935BBA2@thoth.codespeak.net> (hpk@codespeak.net's message of "Mon, 30 Jun 2003 17:41:08 +0200 (MEST)")
References: <20030630154108.BA5935BBA2@thoth.codespeak.net>
Message-ID: <2m8yris7dv.fsf@starship.python.net>

hpk at codespeak.net writes:

> Author: hpk
> Date: Mon Jun 30 17:41:07 2003
> New Revision: 1073
>
> Modified:
>    pypy/trunk/src/pypy/interpreter/test/test_interpreter.py
>    pypy/trunk/src/pypy/objspace/std/register_all.py   (props changed)
>    pypy/trunk/src/pypy/objspace/std/stringobject.py
>    pypy/trunk/src/pypy/tool/test.py
> Log:
> - improved the hack to perform specific objspace tests only when
>   actually running the tests with the respective objspace.
>   (i.e. objspace.ann* tests are skipped for StdObjSpace and
>   objspace.std* for AnnObjSpace)

I think *a* way to do this nicely is to have test.objspace() raise a
"TestSkip" exception when asked for a specific object space that is
not the one currently being tested.  This probably means rewriting
more of unittest.py, though (we want four possible outcomes: pass,
fail, error, skip).

Cheers,
M.

-- 
  same software, different verbosity settings (this one goes to
  eleven)                        -- the effbot on the martellibot

From hpk at trillke.net  Tue Jul  1 21:58:32 2003
From: hpk at trillke.net (holger krekel)
Date: Tue, 1 Jul 2003 21:58:32 +0200
Subject: [pypy-dev] unittest-module (was: some pypy-svn checkin)
In-Reply-To: <2m8yris7dv.fsf@starship.python.net>; from mwh@python.net on Tue, Jul 01, 2003 at 02:29:00PM +0100
References: <20030630154108.BA5935BBA2@thoth.codespeak.net> <2m8yris7dv.fsf@starship.python.net>
Message-ID: <20030701215832.T3869@prim.han.de>

[Michael Hudson Tue, Jul 01, 2003 at 02:29:00PM +0100]
> hpk at codespeak.net writes:
> > Log:
> > - improved the hack to perform specific objspace tests only when
> >   actually running the tests with the respective objspace.
> >   (i.e.
objspace.ann* tests are skipped for StdObjSpace and > > objspace.std* for AnnObjSpace) > > I think *a* way to do this nicely is to have test.objspace() raise a > "TestSkip" exception when asked for a specific object space that is > not the one currently being tested. I agree this is a good way. Also the "shouldStop" stuff would probably be better implemented by a Control-Flow Exception. > This probably means rewriting > more of unittest.py, though (we want four possible outcomes: pass, > fail, error, skip). yep, looking at the unittest.py source this probably means overriding even more methods. To be honest IMO we should make our life easier and just drag unittest.py from python-2.3, add "skips", remove the leading "__" from some names plus probably do some other cleanups/niceties. Doesn't seem like incredibly hard work and we may offer it back to cpython, later :-) What do you think? cheers, holger From pedronis at bluewin.ch Tue Jul 1 22:17:10 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Tue, 01 Jul 2003 22:17:10 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comca st.net> Message-ID: <5.2.1.1.0.20030701221104.0254dbc8@pop.bluewin.ch> At 07:54 01.07.2003 -0400, Guido van Rossum wrote: >I'm hoping Armin has some time to look into this; I probably won't. > >There's this test that doesn't work any more: > > def dont_test_assign_local_w_flow_control(self): > # XXX This test doesn't work any more -- disabled for now > code = """ >def f(n): > total = 0 > for i in range(n): > total = total + 1 > return n >""" > x = self.codetest(code, 'f', [self.space.wrap(3)]) > self.assertEquals(type(x), W_Integer) 'self.assertEquals(type(x), W_Integer)' should be 'self.assertEquals(type(x), W_Constant)' or we should have a 'return total' instead of 'return n' >It worked before, when range(n) was enough to befuddle the symbolic >interpretation, but now, when you enable it (remove the "dont_" from >the name) it raises "TypeError: no result at all?!?!" > >This seems to be because the next() call implicit in the for loop >doesn't ever raise IndeterminateCondition, because it is iterating >over a constant list (the list [0, 1, 2] returned by range(3)). But >somehome frame unification decides to unify states after the 2nd time >through the loop -- and the alternative branch (to the end of the >loop) is never taken, so there is never a return value. > >What's wrong here??? this should fix it: Index: pypy/objspace/ann/objspace.py =================================================================== --- pypy/objspace/ann/objspace.py (revision 1076) +++ pypy/objspace/ann/objspace.py (working copy) @@ -285,16 +285,16 @@ force = w_iterator.force w_iterator.force = None if force: + if isinstance(w_iterator, W_ConstantIterator): + try: + value = w_iterator.next() + except StopIteration: + raise NoValue # XXX could be also ExitFrame? 
+ else: + return self.wrap(value) return W_Anything() else: raise NoValue - if isinstance(w_iterator, W_ConstantIterator): - try: - value = w_iterator.next() - except StopIteration: - raise NoValue - else: - return self.wrap(value) raise IndeterminateCondition(w_iterator) def call(self, w_func, w_args, w_kwds): From pedronis at bluewin.ch Wed Jul 2 00:35:35 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Wed, 02 Jul 2003 00:35:35 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <5.2.1.1.0.20030701221104.0254dbc8@pop.bluewin.ch> References: <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comca st.net> Message-ID: <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> At 22:17 01.07.2003 +0200, Samuele Pedroni wrote: >this should fix it: truth to be told no, it would break the special helper function evalutation, which is (indirectly) tested by test_build_class only for the moment, a test that works only under 2.3 btw. I have committed a different fix: a cycle doesn't reach a fix point as long as there's a W_ConstantIterator which is being consumed. regards. From mwh at python.net Wed Jul 2 12:34:31 2003 From: mwh at python.net (Michael Hudson) Date: Wed, 02 Jul 2003 11:34:31 +0100 Subject: [pypy-dev] Re: unittest-module References: <20030630154108.BA5935BBA2@thoth.codespeak.net> <2m8yris7dv.fsf@starship.python.net> <20030701215832.T3869@prim.han.de> Message-ID: <2mk7b1qkso.fsf@starship.python.net> holger krekel writes: > [Michael Hudson Tue, Jul 01, 2003 at 02:29:00PM +0100] >> hpk at codespeak.net writes: >> > Log: >> > - improved the hack to perform specific objspace tests only when >> > actually running the tests with the respective objspace. >> > (i.e. objspace.ann* tests are skipped for StdObjSpace and >> > objspace.std* for AnnObjSpace) >> >> I think *a* way to do this nicely is to have test.objspace() raise a >> "TestSkip" exception when asked for a specific object space that is >> not the one currently being tested. > > I agree this is a good way. Also the "shouldStop" stuff would probably > be better implemented by a Control-Flow Exception. ? Not sure I follow. The same sort of control flow exceptions the annotation object space uses? >> This probably means rewriting >> more of unittest.py, though (we want four possible outcomes: pass, >> fail, error, skip). > > yep, looking at the unittest.py source this probably means > overriding even more methods. To be honest IMO we should make our > life easier and just drag unittest.py from python-2.3, add "skips", > remove the leading "__" from some names plus probably do some other > cleanups/niceties. Would this make life easier? As much as we bitch about unittest.py, it has been and will continue to be a "getting us from here to there" tool -- if we'd had to write a unittest module from scratch when we started the project, we either wouldn't have unittests for pypy or we wouldn't have pypy. It may be that one day we'll be able to remove "import unittest" from the top of tool/test.py, but that won't diminish the usefulness unittest.py has given us. I'm also not sure we're at the stage where a wholesale fork is the easier option -- yet. > Doesn't seem like incredibly hard work and we may offer it back to > cpython, later :-) This suffers from the distutils problem: we've no way of knowing what other people have done in the way of subclassing unittest.Foo, so even the slightest change carries the risk of breaking other people's test rigs. 
There might be a way to provide new behaviour by default whilst
presenting old behaviour to subclassers, but not trivially (it would
have been easier if both distutils and unittest had been written with
this in mind...).

Cheers,
M.

-- 
  Darned confusing, unless you have that magic ingredient coffee, of
  which I can pay you Tuesday for a couple pounds of extra-special
  grind today.                          -- John Mitchell, 11 Jan 1999

From hpk at trillke.net  Wed Jul  2 13:04:24 2003
From: hpk at trillke.net (holger krekel)
Date: Wed, 2 Jul 2003 13:04:24 +0200
Subject: [pypy-dev] Re: unittest-module
In-Reply-To: <2mk7b1qkso.fsf@starship.python.net>; from mwh@python.net on Wed, Jul 02, 2003 at 11:34:31AM +0100
References: <20030630154108.BA5935BBA2@thoth.codespeak.net> <2m8yris7dv.fsf@starship.python.net> <20030701215832.T3869@prim.han.de> <2mk7b1qkso.fsf@starship.python.net>
Message-ID: <20030702130424.B3869@prim.han.de>

[Michael Hudson Wed, Jul 02, 2003 at 11:34:31AM +0100]
> holger krekel writes:
>
> > [Michael Hudson Tue, Jul 01, 2003 at 02:29:00PM +0100]
> >> hpk at codespeak.net writes:
> >> > Log:
> >> > - improved the hack to perform specific objspace tests only when
> >> >   actually running the tests with the respective objspace.
> >> >   (i.e. objspace.ann* tests are skipped for StdObjSpace and
> >> >   objspace.std* for AnnObjSpace)
> >>
> >> I think *a* way to do this nicely is to have test.objspace() raise a
> >> "TestSkip" exception when asked for a specific object space that is
> >> not the one currently being tested.
> >
> > I agree this is a good way.  Also the "shouldStop" stuff would probably
> > be better implemented by a Control-Flow Exception.
>
> ?  Not sure I follow.  The same sort of control flow exceptions the
> annotation object space uses?

No, sorry, that was unclear.  I just meant *a* control-flow exception
(not the interpreter ones); more specifically I was referring to the
"shouldStop" attribute which is set on a TestResult instance and
checked for at upper layers.  I think the generally cleaner way is to
define an exception hierarchy like

    TestFlowException
     |
     |-- TestSkipException
     |
     |-- TestCaseAbortException
     |
     |-- TestFailureException (aka failureException)
     |
     |-- ...

and use them to bail out of lower layers without calling methods and
setting attributes you have to check at all "calling" places.

> >> This probably means rewriting
> >> more of unittest.py, though (we want four possible outcomes: pass,
> >> fail, error, skip).
> >
> > yep, looking at the unittest.py source this probably means
> > overriding even more methods.  To be honest IMO we should make our
> > life easier and just drag unittest.py from python-2.3, add "skips",
> > remove the leading "__" from some names plus probably do some other
> > cleanups/niceties.
>
> Would this make life easier?  As much as we bitch about unittest.py,
> it has been and will continue to be a "getting us from here to there"
> tool -- if we'd had to write a unittest module from scratch when we
> started the project, we either wouldn't have unittests for pypy or we
> wouldn't have pypy.  It may be that one day we'll be able to remove
> "import unittest" from the top of tool/test.py, but that won't
> diminish the usefulness unittest.py has given us.

Fully agreed.  I didn't mean to say (at all) that unittest.py hasn't
been useful to us.  But i doubt that the current tool/test.py +
std/unittest.py + interpreter/unittest_w.py is that easy and nice to
understand any more.
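A minimal, self-contained sketch of the skip-as-control-flow-exception idea
discussed in this thread.  The names here (TestSkip, objspace, run_case) are
hypothetical stand-ins, not the actual pypy tool/test.py or unittest API;
the sketch only shows how a fourth "skip" outcome can fall out of an
exception hierarchy like the one above.

    # Hedged sketch: a skip exception plus a tiny runner with four outcomes.
    # TestSkip, objspace() and run_case() are illustrative names only.

    class TestSkip(Exception):
        """Raised by a test (or a helper like test.objspace()) to skip it."""

    def objspace(name, current="std"):
        # Stand-in for the idea that asking for a foreign object space skips:
        if name != current:
            raise TestSkip("test requires the %r object space" % name)
        return "<%s object space>" % name

    def run_case(case):
        """Run one zero-argument test callable; report pass/fail/error/skip."""
        try:
            case()
        except TestSkip:
            return "skip"
        except AssertionError:
            return "fail"
        except Exception:
            return "error"
        return "pass"

    def test_ann_only():
        objspace("ann")          # skipped while the std space is under test

    def test_std_ok():
        assert objspace("std")

    for t in (test_ann_only, test_std_ok):
        print("%s -> %s" % (t.__name__, run_case(t)))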
And if we want to add more features then i think that having our own unittest2.py (or whatever) forked from the standard unittest.py makes some sense. After all, we have to care for different objectspaces and different "levels" (interpreter-level and application-level) and want different modes of operation (CTS). The current unittest.py doesn't make this easy, does it? > > Doesn't seem like incredibly hard work and we may offer it back to > > cpython, later :-) > > This suffers from the distutils problem: we've no way of knowing what > other people have done in the way of subclassing unittest.Foo, so even > the slightest change carries the risk of breaking other people's test > rigs. There might be a way to provide new behaviour by default whilst > presenting old behaviour to subclassers, but not trivially (it would > have been easier if both distutils and unittest had been written with > this in mind...). Agreed. So contributing it back isn't so interesting for the time beeing. If we integrated a test-coverage mechanism at some point then the situation might possibly change. I talked to some Zope3 and other people and nobody knows of such a tool, btw. cheers, holger From guido at python.org Wed Jul 2 18:29:00 2003 From: guido at python.org (Guido van Rossum) Date: Wed, 02 Jul 2003 12:29:00 -0400 Subject: [pypy-dev] Annotating space status In-Reply-To: Your message of "Wed, 02 Jul 2003 00:35:35 +0200." <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> References: <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comca st.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> Message-ID: <200307021629.h62GT1317007@pcp02138704pcs.reston01.va.comcast.net> > I have committed a different fix: a cycle doesn't reach a fix point as long > as there's a W_ConstantIterator which is being consumed. This works now, but it makes me worry. I think that the symbolic execution framework doesn't really cope very well with mutable objects; I expect that we'll get other problems like this. I wonder if you or Armin have a better idea on how to deal with this? I'd think that maintaining a 'changed-since-last-clone' flag in each object seems a pretty ad-hoc solution... --Guido van Rossum (home page: http://www.python.org/~guido/) From pedronis at bluewin.ch Wed Jul 2 19:09:23 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Wed, 02 Jul 2003 19:09:23 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <200307021629.h62GT1317007@pcp02138704pcs.reston01.va.comca st.net> References: <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comca st.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> Message-ID: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> At 12:29 02.07.2003 -0400, Guido van Rossum wrote: > > I have committed a different fix: a cycle doesn't reach a fix point as > long > > as there's a W_ConstantIterator which is being consumed. > >This works now, but it makes me worry. I think that the symbolic >execution framework doesn't really cope very well with mutable >objects; I expect that we'll get other problems like this. > >I wonder if you or Armin have a better idea on how to deal with this? >I'd think that maintaining a 'changed-since-last-clone' flag in each >object seems a pretty ad-hoc solution... 
the problem is our fix-point condition which is mostly based purely on types, and that we are trying to have it both ways, doing some mixture of constant propagation (normal eval) and type propagation, if you consider code like this: x = 0 while True: x = x + 2 type propagation can end, but constant evaluation will give an infinite loop. So we should clarify what we really want. One possibility is optinally instead of using types to decide fix-points, we stop if some bytecode (position) has been encountered more than N times. regards. From guido at python.org Wed Jul 2 21:18:08 2003 From: guido at python.org (Guido van Rossum) Date: Wed, 02 Jul 2003 15:18:08 -0400 Subject: [pypy-dev] Annotating space status In-Reply-To: Your message of "Wed, 02 Jul 2003 19:09:23 +0200." <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> References: <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comca st.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> Message-ID: <200307021918.h62JI8e17572@pcp02138704pcs.reston01.va.comcast.net> > the problem is our fix-point condition which is mostly based purely > on types, and that we are trying to have it both ways, doing some > mixture of constant propagation (normal eval) and type propagation, > if you consider code like this: > > x = 0 > while True: > x = x + 2 > > type propagation can end, but constant evaluation will give an > infinite loop. > > So we should clarify what we really want. > > One possibility is optinally instead of using types to decide > fix-points, we stop if some bytecode (position) has been encountered > more than N times. Right. We could differentiate this by using the differentiation of object space and execution context for evaluation of internal helpers. Those helpers want constant evaluation, and they already get a different object space (a subclass of the annotating object space). The space controls the execution context, and the execution context controls whether we check for fix-point conditions. The "real" code would continue to use the "normal" annotation space and would do full type propagation. Unfortunately I won't have time to work on this... --Guido van Rossum (home page: http://www.python.org/~guido/) From hpk at trillke.net Wed Jul 2 22:41:44 2003 From: hpk at trillke.net (holger krekel) Date: Wed, 2 Jul 2003 22:41:44 +0200 Subject: [pypy-dev] Re: Greetings Message-ID: <20030702224144.S3869@prim.han.de> Hello pypy-dev, i just noticed that Jonathan David Riehl has sent a message to pypy-dev at codespeak.com ^^^^^ which didn't work (codespeak.NET would be right). So i resent my reply to his message which actually cites all of his mail. cheers, holger ----- Forwarded message from holger krekel ----- Date: Wed, 2 Jul 2003 21:29:59 +0200 From: holger krekel To: Jonathan David Riehl Cc: pypy-dev at codespeak.com Subject: Re: Greetings In-Reply-To: ; from jriehl at cs.uchicago.edu on Wed, Jul 02, 2003 at 01:45:30PM -0500 Hello Jonathan! [Jonathan David Riehl Wed, Jul 02, 2003 at 01:45:30PM -0500] > Hi everyone, > I noticed Christian was trying to steer a newbie over on > python-dev over to PyPy, and I was thinking that I might be able to help > as well. Chris suggested I formally introduce myself to the group and see > what can be done. Good idea :-) > I've done work on translation of Python to C, as well as creating > other static analysis tools. Most of that work was done while I was at > NASA, where political kinks halted my work. 
Five years later, I have > finally managed to get back into a research environment, and I am now > working on a PhD at the University of Chicago, studying under Dave > Beazley. Makes me envious. > Instead of restarting work on my own Python compiler (Python to > C translation style), I figure I would see if we could work together. Right. That certainly makes sense. > The only problem I see is that I'm in the US, and attending > sprints may be difficult on a student's budget. Hopefully, we can still > have a tight development loop via IRC. After the last sprints and EuroPython everyone needs to catch some breath, i guess. I am usually hanging out on #pypy on irc.freenode.net. You are welcome :-) > So now to ask the obvious newbie questions: What needs to be > done? What is cool that could be done? How can I help? I'm also more > than happy to talk about what I have done as well, if anyone is curious. I can't answer too long because i am currently cooking :-) And probably Armin (Rigo) or others might give a more detailed answer, anyway. But it's always a good idea to check out the source code. Therefore you need a "subversion" client. Jens-Uwe Mager has prepared some client side installs: http://codespeak.net/pypy/doc/devel/howtosvn.html In this doc-directory (http://codespeak.net/pypy/doc) you'll also find some unsorted documentation which might help in understanding what we have been doing this year. Also, at least i am quite interested to get to know what you have done or what you found out about Python-to-C translators. cheers, holger ----- End forwarded message ----- From pedronis at bluewin.ch Wed Jul 2 23:30:40 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Wed, 02 Jul 2003 23:30:40 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <200307021918.h62JI8e17572@pcp02138704pcs.reston01.va.comca st.net> References: <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comca st.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> Message-ID: <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> At 15:18 02.07.2003 -0400, Guido van Rossum wrote: > > the problem is our fix-point condition which is mostly based purely > > on types, and that we are trying to have it both ways, doing some > > mixture of constant propagation (normal eval) and type propagation, > > if you consider code like this: > > > > x = 0 > > while True: > > x = x + 2 > > > > type propagation can end, but constant evaluation will give an > > infinite loop. > > > > So we should clarify what we really want. > > > > One possibility is optinally instead of using types to decide > > fix-points, we stop if some bytecode (position) has been encountered > > more than N times. > >Right. We could differentiate this by using the differentiation of >object space and execution context for evaluation of internal helpers. > >Those helpers want constant evaluation, and they already get a >different object space (a subclass of the annotating object space). >The space controls the execution context, and the execution context >controls whether we check for fix-point conditions. > >The "real" code would continue to use the "normal" annotation space >and would do full type propagation. another problem is that for the momement we unify between different invocation of a same function but we don't reflow the information trough the entire call-graph, we just traverse some part of it as needed: r = self.codetest("def f():\n" " x = g(1)\n" " y = g('1')\n" " return x\n" # vs. 
return y "def g(y):\n" " return y\n", 'f', []) print r in the 'return x' case r is W_Constant(1) in the 'return y' case r is W_Anything() this is sort of bogus, and here too we should clarify what we want: - reflow type info through the call-graph somehow (this means more code complexity) and get in both cases W_Anything() - don't unify beetween different invocation of a same function: maybe that's enough for our purposes and get W_Constant(1), W_Constant('1') respectively. From pedronis at bluewin.ch Thu Jul 3 00:13:44 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Thu, 03 Jul 2003 00:13:44 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> References: <200307021918.h62JI8e17572@pcp02138704pcs.reston01.va.comca st.net> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comca st.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> Message-ID: <5.2.1.1.0.20030702235114.02566818@pop.bluewin.ch> At 23:30 02.07.2003 +0200, Samuele Pedroni wrote: >- reflow type info through the call-graph somehow (this means more code >complexity) and get in both cases W_Anything() >- don't unify beetween different invocation of a same function: maybe >that's enough for our purposes and get W_Constant(1), W_Constant('1') >respectively. the simplest solution is to just reiterate the analysis globally up to a fix-point. From tismer at tismer.com Fri Jul 4 05:09:02 2003 From: tismer at tismer.com (Christian Tismer) Date: Fri, 04 Jul 2003 05:09:02 +0200 Subject: [pypy-dev] Re: Greetings In-Reply-To: <20030702224144.S3869@prim.han.de> References: <20030702224144.S3869@prim.han.de> Message-ID: <3F04EFCE.9020304@tismer.com> holger krekel wrote: > Hello pypy-dev, > > i just noticed that Jonathan David Riehl has sent a message > to pypy-dev at codespeak.com Sorry, this was probably my fault. BTW., I suggested that working on the compiler package might be a good idea. I'd personally appreciate very much to have Jon in this project, since I know his outstanding work since years. ciao - chris -- Christian Tismer :^) Mission Impossible 5oftware : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 89 09 53 34 home +49 30 802 86 56 pager +49 173 24 18 776 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From roccomoretti at netscape.net Fri Jul 4 05:22:39 2003 From: roccomoretti at netscape.net (Rocco Moretti) Date: Thu, 03 Jul 2003 23:22:39 -0400 Subject: [pypy-dev] Builtin Modules, Annotation Space Message-ID: <4C1DF9B7.1B0F9A0C.9ADE5C6A@netscape.net> Hello all, I've made some updates to the interpreter internals to generalize builtin module loading (with an eye toward getting an os module). These changes pass all tests on trivial and standard object spaces, but cause test_all.py to crash under -A during the ObjSpace initilization (it doesn't even get to the "Running tests under ..."). My best guess is that when I'm working with wrapped objects in the initilization code, I'm running afoul of what AnnObjSpace is doing, probably in the same way that running the interpreter app-level helper functions under AnnObjSpace does not work. (We want the code to execute, not be translated.) How should the people working on the interpreter internals deal with wrapped objects, such that we don't run afoul of AnnObjSpace? 
Are there guidelines as to what operations are allowed when?

For those who want to take a look, the diff of my changes should be
attached.  (I'm not committing it yet for obvious reasons.)

-Rocco

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: module_diff.txt

From roccomoretti at netscape.net  Fri Jul  4 05:37:38 2003
From: roccomoretti at netscape.net (Rocco Moretti)
Date: Thu, 03 Jul 2003 23:37:38 -0400
Subject: [pypy-dev] website update / pypy developments
Message-ID: <3914921B.51B1E169.9ADE5C6A@netscape.net>

holger krekel wrote:

> After some discussions especially with Armin
>  i now think that we should promote as a goal to
>
>    dynamically tie into/adapt arbitrary C-Libraries and
>    operating system calls.
>
>  The latter would basically mean that you could use Python directly to
>  drive your favourite OS.  Think rapidly specializing for whatever
>  embedded device without even a libc and doing that with 99% of the
>  code being written in Python.

I'm a little unclear about what you mean by this goal.  As best I can
tell, you envision being able to put a shared library facade over pypy
functions, and then have the C/whatever library function calls be able
to seamlessly call into pypy without recompilation, e.g. we could write
a glibc facade which runs pypy 'under the hood' and drive precompiled
linux programs with pypy.

Is this getting close to the idea? ... If so, you are either insanely
brilliant, or insanely insane - though I'd lean toward the former.

What-will-they-think-up-next-ly 'ers
-Rocco

From hpk at trillke.net  Fri Jul  4 07:16:14 2003
From: hpk at trillke.net (holger krekel)
Date: Fri, 4 Jul 2003 07:16:14 +0200
Subject: [pypy-dev] website update / pypy developments
In-Reply-To: <3914921B.51B1E169.9ADE5C6A@netscape.net>; from roccomoretti@netscape.net on Thu, Jul 03, 2003 at 11:37:38PM -0400
References: <3914921B.51B1E169.9ADE5C6A@netscape.net>
Message-ID: <20030704071614.J3869@prim.han.de>

[Rocco Moretti Thu, Jul 03, 2003 at 11:37:38PM -0400]
> holger krekel wrote:
>
> > After some discussions especially with Armin
> >  i now think that we should promote as a goal to
> >
> >    dynamically tie into/adapt arbitrary C-Libraries and
> >    operating system calls.
> >
> >  The latter would basically mean that you could use Python directly to
> >  drive your favourite OS.  Think rapidly specializing for whatever
> >  embedded device without even a libc and doing that with 99% of the
> >  code being written in Python.
>
> I'm a little unclear about what you mean by this goal.  As best I can
> tell, you envision being able to put a shared library facade over pypy
> functions, and then have the C/whatever library function calls be able
> to seamlessly call into pypy without recompilation, e.g. we could write
> a glibc facade which runs pypy 'under the hood' and drive precompiled
> linux programs with pypy.

That's also an interesting idea :-)

It's more the other way round: dynamically set up a "C-call" from
Python and execute it.  AFAIK you need assembler-written "trampoline"
functions that help you in doing this work.  (Thomas Heller's ctypes
module uses e.g. libffi on unices to do this.)

It seems to be a far-fetched goal but note that this was mentioned
under "EU-funding" because they apparently want "ambitious" goals.

Given we implement such a "trampoline" technique you can imagine all
sorts of nice stuff like having a small embedded device basically
running on linux+python.  PyPy would probably head for being a
runtime-system in this case rather than "just" a language
reimplementation.

> Is this getting close to the idea? ... If so, you are either insanely
> brilliant, or insanely insane - though I'd lean toward the former.

I guess we have all kinds of such productive people in the project :-)

cheers,

    holger

From arigo at tunes.org  Fri Jul  4 21:10:55 2003
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 4 Jul 2003 21:10:55 +0200
Subject: [pypy-dev] website update / pypy developments
In-Reply-To: <20030704071614.J3869@prim.han.de>
References: <3914921B.51B1E169.9ADE5C6A@netscape.net> <20030704071614.J3869@prim.han.de>
Message-ID: <20030704191055.GA11501@magma.unil.ch>

Hello Holger,

On Fri, Jul 04, 2003 at 07:16:14AM +0200, holger krekel wrote:
> > > After some discussions especially with Armin
> > > i now think that we should promote as a goal to
> > >
> > >    dynamically tie into/adapt arbitrary C-Libraries and
> > >    operating system calls.

You and I discussed this at EuroPython, but I expressed the thought
that this could be *one* of the various ambitious goals of the project.
Using libffi or some other custom machinery to call native C libraries
directly from Python is one step (though you can already do that with
ctypes, as you mentioned).  It is true that it is probably easier and
cleaner to do it with PyPy because we control low-level code emission
much more closely; some more hacking could probably get rid of glibc
altogether.

Getting rid of glibc is a neat idea, but all the other goals that we
discussed in pypy-dev and on the sprints are also important in other
respects and should not be disregarded if we want to promote PyPy as a
"software platform".  I, for example, tend to attach a lot of value to
the interpreter's great flexibility as an experimentation platform.
Others want to make object spaces with slightly or heavily customized
semantics, dream to plug-and-play their own syntax, or just build
lightweight versions of Python.  All these goals together are what make
a great and ambitious project!

A bientôt,

Armin.
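The "dynamically set up a C call from Python" idea that holger and Armin
describe above can be tried with Thomas Heller's ctypes (built on libffi).
A hedged sketch follows; it assumes a Unix-like system where find_library()
can locate libm and libc, and it is only meant to show the kind of call the
thread is talking about, nothing PyPy-specific.

    # Sketch: calling C library functions directly from Python via ctypes.
    # Assumes a Unix-like system; library names and paths are platform-dependent.
    import ctypes
    import ctypes.util

    libm = ctypes.CDLL(ctypes.util.find_library("m"))
    libm.cos.restype = ctypes.c_double        # declare the C-level signature
    libm.cos.argtypes = [ctypes.c_double]
    print(libm.cos(0.0))                      # 1.0, computed by the C library

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    print(libc.getpid())                      # a thin wrapper over a system call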
From arigo at tunes.org Fri Jul 4 21:56:56 2003 From: arigo at tunes.org (Armin Rigo) Date: Fri, 4 Jul 2003 21:56:56 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> Message-ID: <20030704195656.GA12186@magma.unil.ch> Hello Samuele, On Wed, Jul 02, 2003 at 11:30:40PM +0200, Samuele Pedroni wrote: > r = self.codetest("def f():\n" > " x = g(1)\n" > " y = g('1')\n" > " return x\n" # vs. return y > "def g(y):\n" > " return y\n", > 'f', []) > print r > > in the 'return x' case r is W_Constant(1) > in the 'return y' case r is W_Anything() > > this is sort of bogus, and here too we should clarify what we want: This can be fixed by replacing the current way the analysis is driven, which is by really doing nested eval_frame() calls to follow the nested function calls. Instead, we should probably have a "pool" of frames waiting to be analyzed, just like we currently do in CloningExecutionContext.eval_frame(), but for all the frames instead of for a particular call. For the problem with W_ConstantIterator I'm a bit confused. Where is the code that allows more than just one frame to be saved for the same bytecode position? In other words I don't understand how this code can unroll the loops in the helpers. More generally I'd say that the current situation is confused because we try to do the right thing for the specific goal of being able to translate RPython to C, but what the right thing exactly is in this case is still unclear. I think we should try to make a slightly more general but parametrizable AnnotationObjectSpace. A bient?t, Armin. From guido at python.org Fri Jul 4 22:05:53 2003 From: guido at python.org (Guido van Rossum) Date: Fri, 04 Jul 2003 16:05:53 -0400 Subject: [pypy-dev] Annotating space status In-Reply-To: Your message of "Fri, 04 Jul 2003 21:56:56 +0200." <20030704195656.GA12186@magma.unil.ch> References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> Message-ID: <200307042005.h64K5ru22895@pcp02138704pcs.reston01.va.comcast.net> > For the problem with W_ConstantIterator I'm a bit confused. Where is > the code that allows more than just one frame to be saved for the > same bytecode position? In other words I don't understand how this > code can unroll the loops in the helpers. I have a feeling that the real problem here may be that the value stack isn't properly cloned, which means that the frames stored in knownframes will share their objects with the frame being modified as part of the current frame evaluation. I'd bet that if you properly clone the value stack using clonecells(), Samuele's hack (adding a 'changed' flag to the W_ConstantIterator class) would no longer be necessary. 
Wishing I had time to code this up, --Guido van Rossum (home page: http://www.python.org/~guido/) From hpk at trillke.net Fri Jul 4 23:41:48 2003 From: hpk at trillke.net (holger krekel) Date: Fri, 4 Jul 2003 23:41:48 +0200 Subject: [pypy-dev] website update / pypy developments In-Reply-To: <20030704191055.GA11501@magma.unil.ch>; from arigo@tunes.org on Fri, Jul 04, 2003 at 09:10:55PM +0200 References: <3914921B.51B1E169.9ADE5C6A@netscape.net> <20030704071614.J3869@prim.han.de> <20030704191055.GA11501@magma.unil.ch> Message-ID: <20030704234148.M3869@prim.han.de> [Armin Rigo Fri, Jul 04, 2003 at 09:10:55PM +0200] > Hello Holger, > > On Fri, Jul 04, 2003 at 07:16:14AM +0200, holger krekel wrote: > > > > After some discussions especially with Armin > > > > i now think that we should promote as a goal to > > > > > > > > dynamically tie into/adapt arbritrary C-Libraries and > > > > operation system calls. > > You and me discussed this at EuroPython, but I expressed the thought that this > could be *one* of the various ambitious goals of the project. Using libffi or > some other custom machinery to call native C libraries directly from Python is > one step (though you can already do that with ctypes, as you mentioned). It is > true that it is probably easier and cleaner to do it with PyPy because we > control low-level code emission much more closely; some more hacking could > probably get us rid of glibc altogether. > > Getting rid of glibc is a neat idea, but all the other goals that we discussed > in pypy-dev and on the sprints are also important in other respects and should > not be disregarded if we want to promote PyPy as a "software platform". I, for > example, tend to attach a lot of value to the interpreter's great flexibility > as an experimentation platform. Others want to make object spaces with > slightly or heavily customized semantics, dream to plug-and-play their own > syntax, or just build lightweight versions of Python. All these goals together > are what make a great and ambitious project ! Well put! Actually i didn't mean to suddenly reduce the number of goals/dreams to just one. Then again, i specifically put C/system-call trampolins under "EU-funding" because i think connecting a high-level language like python directly to the kernel-level or C-libraries feels (to me) more like a "software platform" than a nice language with a nice flexible interpreter (which is nevertheless the pre-condition for c/sys trampolin integration, isn't it?). Speaking of EU-Funding. Should we setup a mailing-list for information exchange and discussion? If we are to really go for the October 15th deadline then we should start to rush or don't go for it this year (IMHO). cheers, holger From pedronis at bluewin.ch Sat Jul 5 00:21:59 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Sat, 05 Jul 2003 00:21:59 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <200307042005.h64K5ru22895@pcp02138704pcs.reston01.va.comca st.net> References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> Message-ID: <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> At 16:05 04.07.2003 -0400, Guido van Rossum wrote: > > For the problem with W_ConstantIterator I'm a bit confused. 
Where is > > the code that allows more than just one frame to be saved for the > > same bytecode position? In other words I don't understand how this > > code can unroll the loops in the helpers. > >I have a feeling that the real problem here may be that the value >stack isn't properly cloned, which means that the frames stored in >knownframes will share their objects with the frame being modified as >part of the current frame evaluation. I'd bet that if you properly >clone the value stack using clonecells(), Samuele's hack (adding a >'changed' flag to the W_ConstantIterator class) would no longer be >necessary. > >Wishing I had time to code this up, makes sense, done. From guido at python.org Sat Jul 5 01:31:15 2003 From: guido at python.org (Guido van Rossum) Date: Fri, 04 Jul 2003 19:31:15 -0400 Subject: [pypy-dev] Annotating space status In-Reply-To: Your message of "Sat, 05 Jul 2003 00:21:59 +0200." <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> Message-ID: <200307042331.h64NVFq23087@pcp02138704pcs.reston01.va.comcast.net> > >I have a feeling that the real problem here may be that the value > >stack isn't properly cloned, which means that the frames stored in > >knownframes will share their objects with the frame being modified as > >part of the current frame evaluation. I'd bet that if you properly > >clone the value stack using clonecells(), Samuele's hack (adding a > >'changed' flag to the W_ConstantIterator class) would no longer be > >necessary. > > > >Wishing I had time to code this up, > > makes sense, done. Thanks, cool. What did you mean by this comment? # XXX should we copy seq, and roll our own definition of identity? BTW I note that all __eq__ methods can be removed if we change __eq__ in W_Object to def __eq__(self, other): return type(other) is type(self) and other.__dict__ == self.__dict__ --Guido van Rossum (home page: http://www.python.org/~guido/) From pedronis at bluewin.ch Sat Jul 5 03:36:41 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Sat, 05 Jul 2003 03:36:41 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <200307042331.h64NVFq23087@pcp02138704pcs.reston01.va.comca st.net> References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> Message-ID: <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> At 19:31 04.07.2003 -0400, Guido van Rossum wrote: > > >I have a feeling that the real problem here may be that the value > > >stack isn't properly cloned, which means that the frames stored in > > >knownframes will share their objects with the frame being modified as > > >part of the current frame evaluation. 
I'd bet that if you properly > > >clone the value stack using clonecells(), Samuele's hack (adding a > > >'changed' flag to the W_ConstantIterator class) would no longer be > > >necessary. > > > > > >Wishing I had time to code this up, > > > > makes sense, done. > >Thanks, cool. r = self.codetest("def f(a):\n" " x = [1,2]\n" " if a:\n" " x.append(3)\n" " else:\n" " x.append(3)\n" " return x", 'f',[W_Anything()]) print r this prints W_Constant([1, 2, 3, 3]) the first obvious fix works up to: r = self.codetest("def f(a):\n" " x = [1,2]\n" " y = x\n" " if a:\n" " x.append(3)\n" " else:\n" " x.append(3)\n" " return y", 'f',[W_Anything()]) if we care about this, it seems that state cloning should be done on the state as a whole caring for identity like pickling or deepcopy do. From guido at python.org Sat Jul 5 03:58:49 2003 From: guido at python.org (Guido van Rossum) Date: Fri, 04 Jul 2003 21:58:49 -0400 Subject: [pypy-dev] Annotating space status In-Reply-To: Your message of "Sat, 05 Jul 2003 03:36:41 +0200." <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> Message-ID: <200307050158.h651wnc23419@pcp02138704pcs.reston01.va.comcast.net> > if we care about this, it seems that state cloning should be done on the > state as a whole caring for identity like pickling or deepcopy do. Maybe cloning should be done using deepcopy(), defining __deepcopy__(self, memo) in places that need a little help to avoid copying too much (e.g. references to the execution context clearly shouldn't be copied). --Guido van Rossum (home page: http://www.python.org/~guido/) From lac at strakt.com Sat Jul 5 10:16:38 2003 From: lac at strakt.com (Laura Creighton) Date: Sat, 05 Jul 2003 10:16:38 +0200 Subject: [pypy-dev] website update / pypy developments In-Reply-To: Message from holger krekel of "Fri, 04 Jul 2003 23:41:48 +0200." <20030704234148.M3869@prim.han.de> References: <3914921B.51B1E169.9ADE5C6A@netscape.net> <20030704071614.J3869@prim.han.de> <20030704191055.GA11501@magma.unil.ch> <20030704234148.M3869@prim.han.de> Message-ID: <200307050816.h658GcOs025505@ratthing-b246.strakt.com> In a message of Fri, 04 Jul 2003 23:41:48 +0200, holger krekel writes: >Speaking of EU-Funding. Should we setup a mailing-list for information >exchange and discussion? If we are to really go for the October 15th >deadline then we should start to rush or don't go for it this year (IMHO) >. > >cheers, > > holger I don't mind a separate mailing list, or if discussions happen here. Laura From hpk at trillke.net Sat Jul 5 15:54:59 2003 From: hpk at trillke.net (holger krekel) Date: Sat, 5 Jul 2003 15:54:59 +0200 Subject: [pypy-dev] side "sprint" Message-ID: <20030705155459.W3869@prim.han.de> hello pypy, some people already know it. I am hosting a small sprint at the same location as the first PyPy-Sprint, the Trillke-Gut in Hildesheim (germany) between 12th and 19th of July. This coding week will not focus primarily on pypy but more on the codespeak-infrastructure and whatever (python) programming tasks. 
Jens-Uwe and me plan among other things to enhance the codespeak server configuration and i have some pypy-related tasks on my list: - generate a pypy "documentation overview" which should show which document was updated last and by whom etc.pp. - implement functional tests for codespeak services (subversion/web/mail integration) - enhance the wiki, maybe convert it to ReST and use/enhance Subwiki from Greg Stein?! - enhance testing of pypy and have an automatically generated "status" webpage which shows which revision broke which tests under which object space etc.pp. - look to enhance the tool/test.py and unittest-infrastructure If you have any suggestions (the smaller the better!) or comments feel free to reply to this post. Everybody will be welcome between 12th and 19th of July. Actually, this week will also be interesting because musicians, writers and other artists will also meet and be productive (kind of trying the sprint-model in non-computer areas) and on 19th there is a big party. So this will probably be unlike the other more coding-intensive sprints. Actually, I don't expect many of the former pypy-sprinters because Goethenburg and and Louvain-La-Neuve/ EuroPython have been pretty intense and took a lot of time from everyone in the last two month. Of course there is enough room so you can probably join even on short-notice. And sorry for not posting this earlier but i was/am ill and had to care for some other stuff first. If you need more detailed information about the sprint-location and how to get there contact me privately. cheers, holger From arigo at tunes.org Sun Jul 6 13:54:50 2003 From: arigo at tunes.org (Armin Rigo) Date: Sun, 6 Jul 2003 13:54:50 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> References: <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> Message-ID: <20030706115450.GA14334@magma.unil.ch> Hello Samuele, On Sat, Jul 05, 2003 at 03:36:41AM +0200, Samuele Pedroni wrote: > r = self.codetest("def f(a):\n" > " x = [1,2]\n" > " if a:\n" > " x.append(3)\n" > " else:\n" > " x.append(3)\n" > " return x", > 'f',[W_Anything()]) > print r > > this prints W_Constant([1, 2, 3, 3]) Aaargh. This is getting messy. Again, we really need to clarify what we want. History: Originally, I was expecting the AnnotationObjectSpace to work exclusively on RPython-compliant code. In this case it seems that we can entierely avoid the problem of mutable objects. For example, the above code (after translation) would mean that we malloc() an array of two ints, then realloc() it to make room for a third one. The actual values in the array would never be part of a W_Xxx() wrapper. In other words W_Constant() would only be used for constant immutable objects. It is also the reason why I felt W_Anything() to be unnecessary: everything *should* be known (this might require using something like W_Union(W_Integer(), W_String()) at places). 
For this specific goal there is no need to target Pyrex or any particularly clever run-time environment because there is never anything more than ints and strings and structs being manipulated -- no W_Anything(). Now it seems that we shifted towards the more general goal of analyzing *any* Python code, reverting to W_Anything() if necessary. This is a cool goal too but we should clarify which one we are heading for. For reference the shift was caused by the app-helpers used by the interpreter. These were not meant to be written in any particularly restricted style, but still, working our way through them is necessary -- for example, we need to call decode_code_arguments() to follow where the arguments will end up, and we must do this in some annotating object space because the arguments are typically W_Integer() or W_String(), as opposed to real objects. Proposal: Maybe we can decide we don't know yet exactly what we want, and just special-case the functions in interpreter/*_app.py that we need for the analyzis of simple (RPython) programs. For example, we can special-case decode_code_arguments() by saying that whenever it is called (i.e. whenever the AnnotationObjectSpace analyzes a non-trivial call) then: * we collect the W_Xxx() arguments in real tuples and dicts * we get the real code object * we just call decode_code_arguments(), which will manipulate these W_Xxx() objects instead of real objects, but it doesn't matter to it If not enough information is known to prepare the arguments like that (for example, because we don't know the length of the argument tuple) then it is an error anyway: it is something we don't allow in RPython. Drawback: We can only process RPython program. Well, that was the original goal anyway. I think it is cleaner but it also means that it will take more time before we can actually process the whole of PyPy. The other (current) solution is more like we have a very general but hacky W_Anything() fall-back, that could quickly be complete enough to process arbitrary programs (provided however that we don't keep running into these mutable object problems, which is not clear to me). Both goals are interesting per se, but my opinion is that we should concentrate on the first one right now. A bient?t, Armin. From pedronis at bluewin.ch Sun Jul 6 14:27:47 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Sun, 06 Jul 2003 14:27:47 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <20030706115450.GA14334@magma.unil.ch> References: <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> Message-ID: <5.2.1.1.0.20030706142355.02538030@pop.bluewin.ch> At 13:54 06.07.2003 +0200, Armin Rigo wrote: >Hello Samuele, > >On Sat, Jul 05, 2003 at 03:36:41AM +0200, Samuele Pedroni wrote: > > r = self.codetest("def f(a):\n" > > " x = [1,2]\n" > > " if a:\n" > > " x.append(3)\n" > > " else:\n" > > " x.append(3)\n" > > " return x", > > 'f',[W_Anything()]) > > print r > > > > this prints W_Constant([1, 2, 3, 3]) > >Aaargh. This is getting messy. 
Again, we really need to clarify what we want. > >History: Originally, I was expecting the AnnotationObjectSpace to work >exclusively on RPython-compliant code. In this case it seems that we can >entierely avoid the problem of mutable objects. For example, the above code >(after translation) would mean that we malloc() an array of two ints, then >realloc() it to make room for a third one. The actual values in the array >would never be part of a W_Xxx() wrapper. In other words W_Constant() would >only be used for constant immutable objects. > >It is also the reason why I felt W_Anything() to be unnecessary: everything >*should* be known (this might require using something like >W_Union(W_Integer(), W_String()) at places). For this specific goal there is >no need to target Pyrex or any particularly clever run-time environment >because there is never anything more than ints and strings and structs being >manipulated -- no W_Anything(). > >Now it seems that we shifted towards the more general goal of analyzing *any* >Python code, reverting to W_Anything() if necessary. This is a cool goal too >but we should clarify which one we are heading for. > >For reference the shift was caused by the app-helpers used by the >interpreter. >These were not meant to be written in any particularly restricted style, but >still, working our way through them is necessary -- for example, we need to >call decode_code_arguments() to follow where the arguments will end up, and we >must do this in some annotating object space because the arguments are >typically W_Integer() or W_String(), as opposed to real objects. > >Proposal: Maybe we can decide we don't know yet exactly what we want, and just >special-case the functions in interpreter/*_app.py that we need for the >analyzis of simple (RPython) programs. For example, we can special-case >decode_code_arguments() by saying that whenever it is called (i.e. whenever >the AnnotationObjectSpace analyzes a non-trivial call) then: > > * we collect the W_Xxx() arguments in real tuples and dicts > * we get the real code object > * we just call decode_code_arguments(), which will manipulate these W_Xxx() >objects instead of real objects, but it doesn't matter to it > >If not enough information is known to prepare the arguments like that (for >example, because we don't know the length of the argument tuple) then it >is an >error anyway: it is something we don't allow in RPython. that's ok, it was already discussed as a possibility at Europython. >Drawback: We can only process RPython program. Well, that was the original >goal anyway. I think it is cleaner but it also means that it will take more >time before we can actually process the whole of PyPy. The other (current) >solution is more like we have a very general but hacky W_Anything() fall-back, >that could quickly be complete enough to process arbitrary programs (provided >however that we don't keep running into these mutable object problems, which >is not clear to me). > >Both goals are interesting per se, but my opinion is that we should >concentrate on the first one right now. I agree, but then W_Constant should refuse to wrap mutable objects or raise an exception when a mutating op is tried. I think the code should be explicit about what is not able to do. That means the above code should raise some exception complaining about the wrapped list or about .append. regards. 
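A toy sketch of the behaviour Samuele asks for above: a constant wrapper
that refuses mutable values up front.  The names (AnnotationError, the
simplified immutability check) are made up for illustration and are not the
real W_Constant from pypy/objspace/ann/objspace.py.

    class AnnotationError(Exception):
        """Raised when the annotating analysis meets a value it cannot model."""

    # Deliberately simplistic immutability check: a tuple holding a list would
    # still slip through; the point is only to show the intended failure mode.
    IMMUTABLE_TYPES = (int, float, str, tuple, type(None))

    class W_Constant(object):
        def __init__(self, value):
            if not isinstance(value, IMMUTABLE_TYPES):
                raise AnnotationError(
                    "refusing to wrap mutable constant %r" % (value,))
            self.value = value

    def try_wrap(value):
        try:
            return W_Constant(value)
        except AnnotationError:
            return None

    print(try_wrap(42))        # a W_Constant instance: immutable values are fine
    print(try_wrap([1, 2]))    # None: the mutable list from the example is refused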
From arigo at tunes.org Sun Jul 6 14:37:26 2003 From: arigo at tunes.org (Armin Rigo) Date: Sun, 6 Jul 2003 14:37:26 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <5.2.1.1.0.20030706142355.02538030@pop.bluewin.ch> References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> <5.2.1.1.0.20030706142355.02538030@pop.bluewin.ch> Message-ID: <20030706123726.GB14334@magma.unil.ch> Hello Samuele, On Sun, Jul 06, 2003 at 02:27:47PM +0200, Samuele Pedroni wrote: > >> r = self.codetest("def f(a):\n" > >> " x = [1,2]\n" > >> " if a:\n" > >> " x.append(3)\n" > >> " else:\n" > >> " x.append(3)\n" > >> " return x", > >> 'f',[W_Anything()]) > >> print r > > I agree, but then W_Constant should refuse to wrap mutable objects or raise > an exception when a mutating op is tried. I think the code should be > explicit about what is not able to do. > > That means the above code should raise some exception complaining about the > wrapped list or about .append. I agree about the exceptions. In the above example however the 'x' should simply not be wrapped in a W_Constant() at all, but instead it should be a W_List(W_Integer()), where the W_Integer() is obtained by the union of W_Constant(1) and W_Constant(2). About .append, we should probably complain, just to emphasis the point that this is not an efficient operation at all (a realloc()). If people really want the same result in RPython, then "x += [3]" makes the point more clearly. A bient?t, Armin. From guido at python.org Sun Jul 6 14:50:47 2003 From: guido at python.org (Guido van Rossum) Date: Sun, 06 Jul 2003 08:50:47 -0400 Subject: [pypy-dev] Annotating space status In-Reply-To: Your message of "Sun, 06 Jul 2003 13:54:50 +0200." <20030706115450.GA14334@magma.unil.ch> References: <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> <20030706115450.GA14334@magma.unil.ch> Message-ID: <200307061250.h66ColL04775@pcp02138704pcs.reston01.va.comcast.net> Armin ends a long, eloquent and well thought-out message with: > Drawback: We can only process RPython program. Well, that was the original > goal anyway. I think it is cleaner but it also means that it will take more > time before we can actually process the whole of PyPy. The other (current) > solution is more like we have a very general but hacky W_Anything() fall-back, > that could quickly be complete enough to process arbitrary programs (provided > however that we don't keep running into these mutable object problems, which > is not clear to me). > > Both goals are interesting per se, but my opinion is that we should > concentrate on the first one right now. 
I would find processing arbitrary programs cooler, and if I had time to contribute coding time (which I don't) I would want to pursue that. I think the problems with mutable objects can be fixed entirely by using deepcopy after each step. Deepcopying, BTW, ses a memo to keep track of object identities, so it will preserve the (perhaps useful to the analysis) knowledge that two W_Anything() objects found in different local variables or stack positions are in fact the same object. But since I don't have time (alas), I think it's fine to pursue the goals of those who do (like Armin an Samuele) first. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Sun Jul 6 14:52:17 2003 From: guido at python.org (Guido van Rossum) Date: Sun, 06 Jul 2003 08:52:17 -0400 Subject: [pypy-dev] Annotating space status In-Reply-To: Your message of "Sun, 06 Jul 2003 14:37:26 +0200." <20030706123726.GB14334@magma.unil.ch> References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> <5.2.1.1.0.20030706142355.02538030@pop.bluewin.ch> <20030706123726.GB14334@magma.unil.ch> Message-ID: <200307061252.h66CqHr04794@pcp02138704pcs.reston01.va.comcast.net> > About .append, we should probably complain, just to emphasis the point that > this is not an efficient operation at all (a realloc()). If people really want > the same result in RPython, then "x += [3]" makes the point more clearly. BTW, where is the formal definition or RPython? Or if there isn't one, where can I find out more about it? --Guido van Rossum (home page: http://www.python.org/~guido/) From pedronis at bluewin.ch Sun Jul 6 16:19:05 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Sun, 06 Jul 2003 16:19:05 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <200307061250.h66ColL04775@pcp02138704pcs.reston01.va.comca st.net> References: <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> <20030706115450.GA14334@magma.unil.ch> Message-ID: <5.2.1.1.0.20030706161104.02535408@pop.bluewin.ch> At 08:50 06.07.2003 -0400, Guido van Rossum wrote: >Armin ends a long, eloquent and well thought-out message with: > > > Drawback: We can only process RPython program. Well, that was the original > > goal anyway. I think it is cleaner but it also means that it will take more > > time before we can actually process the whole of PyPy. The other > (current) > > solution is more like we have a very general but hacky W_Anything() > fall-back, > > that could quickly be complete enough to process arbitrary programs > (provided > > however that we don't keep running into these mutable object problems, > which > > is not clear to me). 
> > > > Both goals are interesting per se, but my opinion is that we should > > concentrate on the first one right now. > >I would find processing arbitrary programs cooler, and if I had time >to contribute coding time (which I don't) I would want to pursue that. dealing with arbitrary programs means dealing up-front with dynamic dispatch, then this become relevant: Effective Interprocedural Optimization of Object-Oriented Languages (Ph.D. Thesis, 1998) David Grove http://www.cs.washington.edu/research/projects/cecil/www/pubs/grove-thesis.html From lac at strakt.com Sun Jul 6 17:13:02 2003 From: lac at strakt.com (Laura Creighton) Date: Sun, 6 Jul 2003 17:13:02 +0200 Subject: [pypy-dev] OSCON paper Message-ID: <200307061513.h66FD27s019812@ratthing-b246.strakt.com> I am refactoring it, with an eye to producing something which is more like an introduction to PyPy. This can then get used in part for the EU funding proposal, while I can make slides out of it for OSCON. As for slides -- if any of you have any pictures which you would like used, let me know. Otherwise I am going to select some: if you are going to present a paper from a team, it helps to show pictures so people get to' know who the team is beyond a long list of names on the Paper. I would also like to use Armin's slide of: Can I have it, please? I also think that the dis example is rather instructive, but I can't remember exactly what you ran. I think that, as a presentation, it would work to first run a CPython version, and then start the PyPy version. Then start to answer questions from the audience about PyPy and wait for the expected cheer when we start to get output. What do you think? Too corny? Also is there any code which you would like to be shown? It is difficult now for me to see PyPy with fresh eyes, so it is hard for me to see where I should begin to explain PyPy code wise. Ah well, to work where I will try to install of this on a fresh machine. Laura From arigo at tunes.org Sun Jul 6 17:47:43 2003 From: arigo at tunes.org (Armin Rigo) Date: Sun, 6 Jul 2003 17:47:43 +0200 Subject: [pypy-dev] OSCON paper In-Reply-To: <200307061513.h66FD27s019812@ratthing-b246.strakt.com> References: <200307061513.h66FD27s019812@ratthing-b246.strakt.com> Message-ID: <20030706154743.GC14334@magma.unil.ch> Hello Laura, On Sun, Jul 06, 2003 at 05:13:02PM +0200, Laura Creighton wrote: > let me know. Otherwise I am going to select some: if you are going to > present a paper from a team, it helps to show pictures so people get to' > know who the team is beyond a long list of names on the Paper. I found on-line a photo that somebody took of the PyPy sprinters (at the end of my talk when I awkwardly asked for people to group in front). http://www.grisby.org/Photos/140/2_Wed_Day/img_4182c.jpg.html > I would also like to use Armin's slide of: > > I will do that. > I also think that the dis example is rather instructive, but I can't > remember exactly what you ran. in /pypy/trunk/src: python pypy/interpreter/py.py -S goals/dis-pregoal.py > I think that, as a presentation, it > would work to first run a CPython version, and then start the PyPy version. > Then start to answer questions from the audience about PyPy and wait for > the expected cheer when we start to get output. What do you think? It depends on the speed of your machine. At EuroPython I didn't have that nice 2GHz machine that starts printing stuff after "only" 5 to 10 seconds. > Also is there any code which you would like to be shown? 
It is > difficult now for me to see PyPy with fresh eyes, so it is hard for > me to see where I should begin to explain PyPy code wise. Maybe: opcode.py, with its nice forest of little functions, for each of the opcodes, should look weirdly familiar to the big switch in ceval.c for people who know it; and also an example of the forest of little functions in objspace.std.xxxobject.py, for example listobject.py, to implement all the operations on an object. From lac at strakt.com Sun Jul 6 18:38:08 2003 From: lac at strakt.com (Laura Creighton) Date: Sun, 6 Jul 2003 18:38:08 +0200 Subject: [pypy-dev] suggestion for infastructure Sprint Message-ID: <200307061638.h66Gc8ha021322@ratthing-b246.strakt.com> While I am thinking of it ... I would like it if automatically generated pypy docs also came with a 'click here to get to the PyPy home page' link. Laura From hpk at trillke.net Sun Jul 6 19:02:41 2003 From: hpk at trillke.net (holger krekel) Date: Sun, 6 Jul 2003 19:02:41 +0200 Subject: [pypy-dev] suggestion for infastructure Sprint In-Reply-To: <200307061638.h66Gc8ha021322@ratthing-b246.strakt.com>; from lac@strakt.com on Sun, Jul 06, 2003 at 06:38:08PM +0200 References: <200307061638.h66Gc8ha021322@ratthing-b246.strakt.com> Message-ID: <20030706190241.A3869@prim.han.de> [Laura Creighton Sun, Jul 06, 2003 at 06:38:08PM +0200] > While I am thinking of it ... > I would like it if automatically generated pypy docs also came with a > 'click here to get to the PyPy home page' link. the docs should be "framed" like the other pypy-pages IMO. So we could have a "doc" menu-item or so ... holger From guenter.jantzen at netcologne.de Sun Jul 6 23:37:50 2003 From: guenter.jantzen at netcologne.de (=?iso-8859-1?Q?G=FCnter_Jantzen?=) Date: Sun, 6 Jul 2003 23:37:50 +0200 Subject: [pypy-dev] Annotating space status References: <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch><5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch><200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net><5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch><5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch><5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch><20030704195656.GA12186@magma.unil.ch><5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch><5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch><5.2.1.1.0.20030706142355.02538030@pop.bluewin.ch> <20030706123726.GB14334@magma.unil.ch> <200307061252.h66CqHr04794@pcp02138704pcs.reston01.va.comcast.net> Message-ID: <006001c34406$db31ea80$0100a8c0@Lieschen> Hello friends, > > About .append, we should probably complain, just to emphasis the point that > > this is not an efficient operation at all (a realloc()). If people really want > > the same result in RPython, then "x += [3]" makes the point more clearly. > > BTW, where is the formal definition or RPython? Or if there isn't > one, where can I find out more about it? > About RPython read please: http://codespeak.net/pypy/doc/objspace/restrictedpy.html But as Chris was saying: "Some hints are here, but this is a moving target..." I think sooner or later we have to iterate towards a more formal definition otherwise our Standard-Objectspace will be coded in all different interpretations of RPython (like annual rings of a tree) The problem with the document above is that it describes mainly a restriction in the use of types, not the restrictions of behaviour. BTW I remember one of our goals was to target not only C. I see a danger when RPython mimicries to much C (in its informal definition - see code examples below). 
Maybe I miss *again* the point, but I think at this time we can't know if "append" will be mapped to "realloc". For example, when we target Java or C++ we will have string classes or Collections in our target language. These classes encapsulate memory allocation as a low-level detail. If we want, we can encapsulate this for the C target too; we can write C libraries with some helper functions. The first C++ compilers (Glockenspiel) generated intermediate C code. It's possible to write "C with classes". Not too complicated. I am only thinking about string or list "classes" implemented by a collection of functions which take an object "self" as their first argument. (Strange - this reminds me somehow of "stringobject.py" or "listobject.py"). Ok, it could be slow. Then we will need different RPythons for different targets. Not so beautiful ... I think at this time we should be optimistic and use one beautiful RPython which is restricted in its dynamics but does not care about memory allocation in the target language. Now the code example: the implementation of __repr__ for string objects. The comment shows an unrestricted implementation variant; then follows the implementation in RPython, which mimics C: ..objspace/std/stringobject.py [line 886 - line 954]
-----------------------------------------------------------------------------------------------
#for comparison and understanding of the underlying algorithm the unrestricted implementation
#def repr__String(space, w_str):
#    u_str = space.unwrap(w_str)
#
#    quote = '\''
#    if '\'' in u_str and not '"' in u_str:
#        quote = '"'
#
#    u_repr = quote
#
#    for i in range(len(u_str)):
#        c = u_str[i]
#        if c == '\\' or c == quote: u_repr+= '\\'+c
#        elif c == '\t': u_repr+= '\\t'
#        elif c == '\r': u_repr+= '\\r'
#        elif c == '\n': u_repr+= '\\n'
#        elif not _isreadable(c) :
#            u_repr+= '\\' + hex(ord(c))[-3:]
#        else:
#            u_repr += c
#
#    u_repr += quote
#
#    return space.wrap(u_repr)

def repr__String(space, w_str):
    u_str = space.unwrap(w_str)

    quote = '\''
    if '\'' in u_str and not '"' in u_str:
        quote = '"'

    buflen = 2
    for i in range(len(u_str)):
        c = u_str[i]
        if c in quote+"\\\r\t\n" :
            buflen+= 2
        elif _isreadable(c) :
            buflen+= 1
        else:
            buflen+= 4

    buf = [' ']* buflen
    buf[0] = quote
    j=1
    for i in range(len(u_str)):
        #print buflen-j
        c = u_str[i]
        if c in quote+"\\\r\t\n" :
            buf[j]= '\\'
            j+=1
            if c == quote or c=='\\':
                buf[j] = c
            elif c == '\t':
                buf[j] = 't'
            elif c == '\r':
                buf[j] = 'r'
            elif c == '\n':
                buf[j] = 'n'
            j +=1
        elif not _isreadable(c) :
            buf[j]= '\\'
            j+=1
            for x in hex(ord(c))[-3:]:
                buf[j] = x
                j+=1
        else:
            buf[j] = c
            j+=1

    buf[j] = quote
    return space.wrap("".join(buf))
-----------------------------------------------------------------------------------------------
kind regards Günter From lac at strakt.com Sun Jul 6 23:43:30 2003 From: lac at strakt.com (Laura Creighton) Date: Sun, 6 Jul 2003 23:43:30 +0200 Subject: [pypy-dev] will anything break if we rename pypy/trunk/doc/EU funding Message-ID: <200307062143.h66LhUQT028815@theraft.strakt.com> to to pypy/trunk/doc/EU_funding? My tools cannot easily tolerate spaces in filenames.
Laura From hpk at trillke.net Sun Jul 6 23:57:58 2003 From: hpk at trillke.net (holger krekel) Date: Sun, 6 Jul 2003 23:57:58 +0200 Subject: [pypy-dev] will anything break if we rename pypy/trunk/doc/EU funding In-Reply-To: <200307062143.h66LhUQT028815@theraft.strakt.com>; from lac@strakt.com on Sun, Jul 06, 2003 at 11:43:30PM +0200 References: <200307062143.h66LhUQT028815@theraft.strakt.com> Message-ID: <20030706235758.C3869@prim.han.de> [Laura Creighton Sun, Jul 06, 2003 at 11:43:30PM +0200] > to to pypy/trunk/doc/EU_funding? My tools cannot easily tolerate > spaces in filenames. IMHO go ahead holger From arigo at tunes.org Mon Jul 7 11:40:58 2003 From: arigo at tunes.org (Armin Rigo) Date: Mon, 7 Jul 2003 11:40:58 +0200 Subject: [pypy-dev] Annotating space status In-Reply-To: <006001c34406$db31ea80$0100a8c0@Lieschen> References: <20030706123726.GB14334@magma.unil.ch> <200307061252.h66CqHr04794@pcp02138704pcs.reston01.va.comcast.net> <006001c34406$db31ea80$0100a8c0@Lieschen> Message-ID: <20030707094058.GB14678@magma.unil.ch> Hello Günter, On Sun, Jul 06, 2003 at 11:37:50PM +0200, Günter Jantzen wrote: > BTW I remember one of our goals was to target not only C. > I see a danger when RPython mimicries to much C (in its informal > definition - see code examples below). > Maybe I miss *again* the point, but I think at this time we can't know if > "append" will be mapped to "realloc". You are right about this. Your example for repr__String() shows easily enough that sticking to C-like code too closely leads to bad code (more complex and obscure and much more error-prone). We may extend RPython to allow more operations, or alternatively to avoid code duplication we may use the objects we have already defined in stringobject.py and listobject.py to do that, and write more operations as helpers. For example, the first version of repr__String() would be nice as just an application-level function. Similarly, an algorithm that needs a lot of list.append() can be written at the application-level, where it is not restricted at all. Turning helpers into something efficient at translation time is a different issue (Holger is working on some cool ideas about them) than merely extending the definition of RPython. More about it later... A bientôt, Armin. From mwh at python.net Mon Jul 7 13:12:01 2003 From: mwh at python.net (Michael Hudson) Date: Mon, 07 Jul 2003 12:12:01 +0100 Subject: [pypy-dev] Re: side "sprint" References: <20030705155459.W3869@prim.han.de> Message-ID: <2msmpioake.fsf@starship.python.net> holger krekel writes: > - look to enhance the tool/test.py and unittest-infrastructure I did some of that this weekend :-) Trains with power for laptops are *cool*. Hope to check in soon. > Actually, I don't expect many of the former pypy-sprinters because Gothenburg > and Louvain-La-Neuve/ EuroPython have been pretty intense and took > a lot of time from everyone in the last two months. Of course there is enough > room so you can probably join even on short-notice. Would love to, but just too impractical. Cheers, M. -- My hat is lined with tinfoil for protection in the unlikely event that the droid gets his PowerPoint presentation working. -- Alan W.
Frame, alt.sysadmin.recovery From mwh at python.net Mon Jul 7 13:12:34 2003 From: mwh at python.net (Michael Hudson) Date: Mon, 07 Jul 2003 12:12:34 +0100 Subject: [pypy-dev] Re: Annotating space status References: <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> <20030706115450.GA14334@magma.unil.ch> <5.2.1.1.0.20030706161104.02535408@pop.bluewin.ch> Message-ID: <2mptkmoajh.fsf@starship.python.net> Samuele Pedroni writes: >>I would find processing arbitrary programs cooler, and if I had time >>to contribute coding time (which I don't) I would want to pursue that. > > dealing with arbitrary programs means dealing up-front with dynamic dispatch, > then this become relevant: > > Effective Interprocedural Optimization of Object-Oriented Languages > (Ph.D. Thesis, 1998) > David Grove > > http://www.cs.washington.edu/research/projects/cecil/www/pubs/grove-thesis.html Um, I'd say so :-) Have you actually read much of the thesis? Cheers, M. -- > You're already using asyncore so you can't really be worried > about complexity . (-8 .helps which, demand on backwards work to brain my rewired I've -- Jeremy Hylton & Richie Hindle From lac at strakt.com Mon Jul 7 14:07:09 2003 From: lac at strakt.com (Laura Creighton) Date: Mon, 7 Jul 2003 14:07:09 +0200 Subject: [pypy-dev] stale link Message-ID: <200307071207.h67C79wP026675@ratthing-b246.strakt.com> in http://codespeak.net/pypy/doc/sprintinfo/LouvainLaNeuveReport.html 'iterobject' is a stale link. From arigo at tunes.org Mon Jul 7 14:16:02 2003 From: arigo at tunes.org (Armin Rigo) Date: Mon, 7 Jul 2003 14:16:02 +0200 Subject: [pypy-dev] stale link In-Reply-To: <200307071207.h67C79wP026675@ratthing-b246.strakt.com> References: <200307071207.h67C79wP026675@ratthing-b246.strakt.com> Message-ID: <20030707121602.GC14678@magma.unil.ch> Hello Laura, On Mon, Jul 07, 2003 at 02:07:09PM +0200, Laura Creighton wrote: > in http://codespeak.net/pypy/doc/sprintinfo/LouvainLaNeuveReport.html > 'iterobject' is a stale link. Fixed. (We should really be able to use the revision number in the links.) Armin From jum at anubis.han.de Mon Jul 7 14:36:37 2003 From: jum at anubis.han.de (Jens-Uwe Mager) Date: Mon, 7 Jul 2003 14:36:37 +0200 Subject: [pypy-dev] stale link In-Reply-To: <20030707121602.GC14678@magma.unil.ch> References: <200307071207.h67C79wP026675@ratthing-b246.strakt.com> <20030707121602.GC14678@magma.unil.ch> Message-ID: <20030707123636.GA2960@anubis> On Mon, Jul 07, 2003 at 14:16 +0200, Armin Rigo wrote: > Hello Laura, > > On Mon, Jul 07, 2003 at 02:07:09PM +0200, Laura Creighton wrote: > > in http://codespeak.net/pypy/doc/sprintinfo/LouvainLaNeuveReport.html > > 'iterobject' is a stale link. > > Fixed. (We should really be able to use the revision number in the links.) You can. I attach a message from the svn mailing list that describes what can be done to specify a particular revision. -- Jens-Uwe Mager -------------- next part -------------- An embedded message was scrubbed... From: Ben Collins-Sussman Subject: Re: Is it possible to browser a certain srelease of a svn repository? 
Date: 05 May 2003 09:03:00 -0500 Size: 3290 URL: From pedronis at bluewin.ch Mon Jul 7 15:54:09 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Mon, 07 Jul 2003 15:54:09 +0200 Subject: [pypy-dev] Re: Annotating space status In-Reply-To: <2mptkmoajh.fsf@starship.python.net> References: <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <200307011154.h61BsjI13807@pcp02138704pcs.reston01.va.comcast.net> <5.2.1.1.0.20030702002556.0254dbc8@pop.bluewin.ch> <5.2.1.1.0.20030702185646.00aa7990@pop.bluewin.ch> <5.2.1.1.0.20030702231607.00aa7990@pop.bluewin.ch> <20030704195656.GA12186@magma.unil.ch> <5.2.1.1.0.20030705002142.0252cf90@pop.bluewin.ch> <5.2.1.1.0.20030705032005.02549468@pop.bluewin.ch> <20030706115450.GA14334@magma.unil.ch> <5.2.1.1.0.20030706161104.02535408@pop.bluewin.ch> Message-ID: <5.2.1.1.0.20030707152332.02519df0@pop.bluewin.ch> At 12:12 07.07.2003 +0100, Michael Hudson wrote: >Samuele Pedroni writes: > > >>I would find processing arbitrary programs cooler, and if I had time > >>to contribute coding time (which I don't) I would want to pursue that. > > > > dealing with arbitrary programs means dealing up-front with dynamic > dispatch, > > then this become relevant: > > > > Effective Interprocedural Optimization of Object-Oriented Languages > > (Ph.D. Thesis, 1998) > > David Grove > > > > > http://www.cs.washington.edu/research/projects/cecil/www/pubs/grove-thesis.html > >Um, I'd say so :-) > >Have you actually read much of the thesis? > yup, but some time ago, so I would have to reskim it to give a summary. Although the bottom line is that type inference/analysis for general Python would be a research topic on its own. There are tenuous traces of people doing just that http://www.ai.mit.edu/projects/dynlangs/Talks/star-killer.htm the cited Ole Agesen's algorithm is described here: http://www.sunlabs.com/technical-reports/1996/abstract-52.html David Grove thesis generalizes, improves on that. Once I saw a paper/thesis on doing type analysis for the Jython compiler, but I can't find it anymore (I should have downloaded it at that time), maybe is the one listed here: http://www.cs.princeton.edu/~bwk/iw.html regards. From mwh at python.net Mon Jul 7 17:45:57 2003 From: mwh at python.net (Michael Hudson) Date: Mon, 7 Jul 2003 17:45:57 +0200 Subject: [pypy-dev] type problems Message-ID: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> I wrote this on the train yesterday. Some of it is a bit out of date from talking on IRC today, but not all of it. A tip for making the multimethod implementation more comprehensible: separate finding the method to call from calling it. Also, consistently using common lisp terminology would help *me*, if no-one else. And, finally, presuming that the implementation isn't going to change drastically another four or five times, documentation would be good! As Samuele and Armin are probably too close to write this, perhaps I'll have a stab. Also, disentangling them from the StdObjSpace implementation (if possible) would probably reduce the general level of incomprehension. I want a spec for the applicable method computation algorithm -- a sufficiently detailed description to allow implementation by someone unfamiliar with our present one. I would also like an explanation of why our algorithm is so flaming complicated. (Having pdbed through a method selection, I do now have some understanding of why we're so slow!!) Refactor! Refactor! Refactor! 
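As an illustration of the split asked for above (compute the applicable methods first, call them afterwards), here is a rough, purely hypothetical sketch -- the names are invented, it is not the actual StdObjSpace code, and it deliberately ignores delegation and specificity ordering, which are exactly the parts that need a written spec:

    from __future__ import generators   # for the Python 2.2 of the time

    class FailedToImplement(Exception):
        pass

    class MultiMethodSketch:
        def __init__(self, operationname):
            self.operationname = operationname
            self.implementations = []        # list of (signature, function)

        def register(self, function, *signature):
            self.implementations.append((signature, function))

        def compute_applicable_methods(self, args):
            # Yield matching implementations in registration order.  A real
            # version would order candidates by specificity and follow
            # delegation chains here.
            for signature, function in self.implementations:
                if len(signature) == len(args):
                    for cls, arg in zip(signature, args):
                        if not isinstance(arg, cls):
                            break
                    else:
                        yield function

        def __call__(self, *args):
            # Calling is now just a thin loop; FailedToImplement plays the
            # role of falling through to the next candidate.
            for function in self.compute_applicable_methods(args):
                try:
                    return function(*args)
                except FailedToImplement:
                    pass
            raise TypeError("%s: no applicable method" % self.operationname)

    # Example:
    #   add = MultiMethodSketch('add')
    #   add.register(lambda x, y: x + y, int, int)
    #   add(2, 3)   => 5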
And, after all that, the problem I was chasing wasn't in the multimethod implementation after all. Heh. There are problems will the space.call multimethod, or rather the bound variants of it: you can't translate o.__call__(a, b, c) into space.call(o, a, b, c) ! Another irregular multimethod? I've also hacked on the test rig some more, adding an (optional) interactive mode that you get dropped into when there are failures. Currently it allows you to print each traceback individually (or as a group) and drop into a pdb session at the point of each failure. I've also added a skeletal PyPy-specialized version of pdb, which will probably grow over time... Debugging PyPy is ludicrously complicated: I know Python as well as almost anyone. I know more about PyPy than just a very few people. I'm fairly smart. And *I* get hopelessly lost in the internals. What can we do about this? Cheers, M. From Nicolas.Chauvat at logilab.fr Mon Jul 7 16:58:45 2003 From: Nicolas.Chauvat at logilab.fr (Nicolas Chauvat) Date: Mon, 7 Jul 2003 16:58:45 +0200 Subject: [pypy-dev] Re: unittest-module In-Reply-To: <20030702130424.B3869@prim.han.de> References: <20030630154108.BA5935BBA2@thoth.codespeak.net> <2m8yris7dv.fsf@starship.python.net> <20030701215832.T3869@prim.han.de> <2mk7b1qkso.fsf@starship.python.net> <20030702130424.B3869@prim.han.de> Message-ID: <20030707145845.GA31286@logilab.fr> On Wed, Jul 02, 2003 at 01:04:24PM +0200, holger krekel wrote: > Agreed. So contributing it back isn't so interesting for the time beeing. > If we integrated a test-coverage mechanism at some point then the > situation might possibly change. I talked to some Zope3 and other people and > nobody knows of such a tool, btw. Google says: http://www.garethrees.org/2001/12/04/python-coverage/ http://www.mems-exchange.org/software/sancho/ http://omniorb.sourceforge.net/cgi-bin/moin.cgi/TestCoverageAnalysis http://www.geocities.com/drew_csillag/pycover.html And I'd like to add that this may also prove useful: http://www.logilab.org/pylint/ Hope this helps, -- Nicolas Chauvat http://www.logilab.com - "Mais o? est donc Ornicar ?" - LOGILAB, Paris (France) From Nicolas.Chauvat at logilab.fr Mon Jul 7 17:03:01 2003 From: Nicolas.Chauvat at logilab.fr (Nicolas Chauvat) Date: Mon, 7 Jul 2003 17:03:01 +0200 Subject: [pypy-dev] website update / pypy developments In-Reply-To: <200307050816.h658GcOs025505@ratthing-b246.strakt.com> References: <3914921B.51B1E169.9ADE5C6A@netscape.net> <20030704071614.J3869@prim.han.de> <20030704191055.GA11501@magma.unil.ch> <20030704234148.M3869@prim.han.de> <200307050816.h658GcOs025505@ratthing-b246.strakt.com> Message-ID: <20030707150301.GB31286@logilab.fr> On Sat, Jul 05, 2003 at 10:16:38AM +0200, Laura Creighton wrote: > In a message of Fri, 04 Jul 2003 23:41:48 +0200, holger krekel writes: > > >Speaking of EU-Funding. Should we setup a mailing-list for information > >exchange and discussion? If we are to really go for the October 15th > >deadline then we should start to rush or don't go for it this year (IMHO) > >. > > > >cheers, > > > > holger > > I don't mind a separate mailing list, or if discussions happen here. I'd welcome a separate mailing list to help sort out development and EU-funding issues. -- Nicolas Chauvat http://www.logilab.com - "Mais o? est donc Ornicar ?" 
- LOGILAB, Paris (France) From pedronis at bluewin.ch Mon Jul 7 18:56:55 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Mon, 07 Jul 2003 18:56:55 +0200 Subject: [pypy-dev] type problems In-Reply-To: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> Message-ID: <5.2.1.1.0.20030707180007.02582dc0@pop.bluewin.ch> I agree with this. At 17:45 07.07.2003 +0200, Michael Hudson wrote: >I wrote this on the train yesterday. Some of it is a bit out of date from >talking on IRC today, but not all of it. > >A tip for making the multimethod implementation more comprehensible: >separate finding the method to call from calling it. Also, consistently >using common lisp terminology would help *me*, if no-one else. it makes sense in general, one problem is that delegation is very specific to our multimethods, there is no such notion in CL,Dylan,... > And, finally, presuming that the implementation isn't going to change > drastically another four or five times, documentation would be good! As > Samuele and Armin are probably too close to write this, perhaps I'll have > a stab. Also, disentangling them from the StdObjSpace implementation (if > possible) would probably reduce the general level of incomprehension. > >I want a spec for the applicable method computation algorithm -- a >sufficiently detailed description to allow implementation by someone >unfamiliar with our present one. I would also like an explanation of why >our algorithm is so flaming complicated. a formal spec for what it happens would be very good indeed. If we cannot write it then we should change the code so that we can write one. If dispatch table compression is the way we want to follow it becomes important e.g. to know whether our rules are monotonic (table compression algos I know about need that) and to reformulate delegation relationships as some kind of subtyping rel. regards. From mwh at python.net Mon Jul 7 19:07:07 2003 From: mwh at python.net (Michael Hudson) Date: Mon, 07 Jul 2003 18:07:07 +0100 Subject: [pypy-dev] Re: type problems References: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> <5.2.1.1.0.20030707180007.02582dc0@pop.bluewin.ch> Message-ID: <2m1xx2nu4k.fsf@starship.python.net> Samuele Pedroni writes: > I agree with this. > > At 17:45 07.07.2003 +0200, Michael Hudson wrote: >> I wrote this on the train yesterday. Some of it is a bit out of >> date from talking on IRC today, but not all of it. >> >> A tip for making the multimethod implementation more comprehensible: >> separate finding the method to call from calling it. Also, >> consistently using common lisp terminology would help *me*, if >> no-one else. > > it makes sense in general, one problem is that delegation is very > specific to our multimethods, there is no such notion in CL,Dylan,... Well, OK, but in other places. Applicable methods is one notion that certainly is transferable. I also notice that raising FailedToImplement is very like calling call-next-method except that control flow never returns. I think it makes sense to have compute-applicable-methods or whatever be a generator. > > >> And, finally, presuming that the implementation isn't going to >> change drastically another four or five times, documentation would >> be good! As Samuele and Armin are probably too close to write this, >> perhaps I'll have a stab. Also, disentangling them from the >> StdObjSpace implementation (if possible) would probably reduce the >> general level of incomprehension. 
>> >> I want a spec for the applicable method computation algorithm -- a >> sufficiently detailed description to allow implementation by someone >> unfamiliar with our present one. I would also like an explanation >> of why our algorithm is so flaming complicated. > > a formal spec for what it happens would be very good indeed. If we > cannot write it then we should change the code so that we can write > one. This is part of the plan :-) > If dispatch table compression is the way we want to follow it > becomes important e.g. to know whether our rules are monotonic > (table compression algos I know about need that) and to reformulate > delegation relationships as some kind of subtyping rel. If you say so :-) I do notice delegation is what screws a naive attempt to memoize more of the applicable method computation. Cheers, M. -- [3] Modem speeds being what they are, large .avi files were generally downloaded to the shell server instead[4]. [4] Where they were usually found by the technical staff, and burned to CD. -- Carlfish, asr From m at moshez.org Tue Jul 8 10:44:50 2003 From: m at moshez.org (Moshe Zadka) Date: 8 Jul 2003 08:44:50 -0000 Subject: [pypy-dev] X Range Object Message-ID: <20030708084450.30944.qmail@green.zadka.com> Here is a diff vs. revision 1116. It contains patches to the objspace module, but I wasn't sure how to add new names. Thanks, Moshe Zadka diff -urN -x.svn -x'*.pyc' src.old/pypy/objspace/std/objspace.py src/pypy/objspace/std/objspace.py --- src.old/pypy/objspace/std/objspace.py 2003-07-08 08:36:48.000000000 +0000 +++ src/pypy/objspace/std/objspace.py 2003-07-08 07:58:21.000000000 +0000 @@ -260,6 +260,11 @@ import moduleobject return moduleobject.W_ModuleObject(self, w_name) + def newxrange(self, w_start, w_end, w_step): + # w_step may be a real None + import xrangeobject + return xrangeobject.W_XRangeObject(self, w_start, w_end, w_step) + def newstring(self, chars_w): try: chars = [chr(self.unwrap(w_c)) for w_c in chars_w] diff -urN -x.svn -x'*.pyc' src.old/pypy/objspace/std/test/test_xrangeobject.py src/pypy/objspace/std/test/test_xrangeobject.py --- src.old/pypy/objspace/std/test/test_xrangeobject.py 1970-01-01 00:00:00.000000000 +0000 +++ src/pypy/objspace/std/test/test_xrangeobject.py 2003-07-08 08:35:28.000000000 +0000 @@ -0,0 +1,70 @@ +import autopath +from pypy.objspace.std.xrangeobject import W_XRangeObject +from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.objspace import NoValue +from pypy.tool import test + +class TestW_XRangeObject(test.TestCase): + + def setUp(self): + self.space = test.objspace('std') + + def tearDown(self): + pass + + def test_is_true(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(0), w(1)) + self.assertEqual(self.space.is_true(w_range), False) + w_range = W_XRangeObject(self.space, w(0), w(1), w(1)) + self.assertEqual(self.space.is_true(w_range), True) + w_range = W_XRangeObject(self.space, w(0), w(5), w(1)) + self.assertEqual(self.space.is_true(w_range), True) + + def test_len(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(0), w(1)) + self.assertEqual_w(self.space.len(w_range), w(0)) + w_range = W_XRangeObject(self.space, w(0), w(1), w(1)) + self.assertEqual_w(self.space.len(w_range), w(1)) + w_range = W_XRangeObject(self.space, w(0), w(5), w(1)) + self.assertEqual_w(self.space.len(w_range), w(5)) + + def test_getitem(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(2), w(1)) + 
self.assertEqual_w(self.space.getitem(w_range, w(0)), w(0)) + self.assertEqual_w(self.space.getitem(w_range, w(1)), w(1)) + self.assertEqual_w(self.space.getitem(w_range, w(-2)), w(0)) + self.assertEqual_w(self.space.getitem(w_range, w(-1)), w(1)) + self.assertRaises_w(self.space.w_IndexError, + self.space.getitem, w_range, w(2)) + self.assertRaises_w(self.space.w_IndexError, + self.space.getitem, w_range, w(42)) + self.assertRaises_w(self.space.w_IndexError, + self.space.getitem, w_range, w(-3)) + + def test_iter(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(3), w(1)) + w_iter = self.space.iter(w_range) + self.assertEqual_w(self.space.next(w_iter), w(0)) + self.assertEqual_w(self.space.next(w_iter), w(1)) + self.assertEqual_w(self.space.next(w_iter), w(2)) + self.assertRaises(NoValue, self.space.next, w_iter) + self.assertRaises(NoValue, self.space.next, w_iter) + + def test_contains(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(3), w(1)) + self.assertEqual_w(self.space.contains(w_range, w(0)), + self.space.w_True) + self.assertEqual_w(self.space.contains(w_range, w(1)), + self.space.w_True) + self.assertEqual_w(self.space.contains(w_range, w(11)), + self.space.w_False) + self.assertEqual_w(self.space.contains(w_range, w_range), + self.space.w_False) + +if __name__ == '__main__': + test.main() diff -urN -x.svn -x'*.pyc' src.old/pypy/objspace/std/xrangeobject.py src/pypy/objspace/std/xrangeobject.py --- src.old/pypy/objspace/std/xrangeobject.py 1970-01-01 00:00:00.000000000 +0000 +++ src/pypy/objspace/std/xrangeobject.py 2003-07-08 08:32:53.000000000 +0000 @@ -0,0 +1,88 @@ +""" +Reviewed 03-06-21 + +slice object construction tested, OK +indices method tested, OK +""" + +from pypy.objspace.std.objspace import * +from pypy.interpreter.appfile import AppFile +from pypy.interpreter.extmodule import make_builtin_func +from pypy.objspace.std.instmethobject import W_InstMethObject +from xrangetype import W_XRangeType +from intobject import W_IntObject + +#appfile = AppFile(__name__, ["objspace.std"]) + + +class W_XRangeObject(W_Object): + statictype = W_XRangeType + + def __init__(w_self, space, w_start, w_stop, w_step): + W_Object.__init__(w_self, space) + w_self.w_start = w_start or W_IntObject(space, 0) + w_self.w_stop = w_stop + w_self.w_step = w_step or W_IntObject(space, 1) + + def tolist(w_self): + stop = w_self.space.unwrap(w_self.w_stop) + start = w_self.space.unwrap(w_self.w_start) + step = w_self.space.unwrap(w_self.w_step) + num = (stop-start)//step + # does this count as "using range in a loop"? 
+ # RPython is poorly speced + return w_self.space.newlist([w_self.space.wrap(i*w_step+w_start) + for i in range(num)]) + +registerimplementation(W_XRangeObject) + +def getattr__XRange_ANY(space, w_range, w_attr): + if space.is_true(space.eq(w_attr, space.wrap('start'))): + if w_range.w_start is None: + return space.w_None + else: + return w_range.w_start + if space.is_true(space.eq(w_attr, space.wrap('stop'))): + if w_range.w_stop is None: + return space.w_None + else: + return w_range.w_stop + if space.is_true(space.eq(w_attr, space.wrap('step'))): + if w_range.w_step is None: + return space.w_None + else: + return w_range.w_step + if space.is_true(space.eq(w_attr, space.wrap('tolist'))): + w_builtinfn = make_builtin_func(space, W_XRangeObject.tolist) + return W_InstMethObject(space, w_builtinfn, w_range, w_range.statictype) + + raise FailedToImplement(space.w_AttributeError) + +def len__XRange(space, w_range): + stop = space.unwrap(w_range.w_stop) + start = space.unwrap(w_range.w_start) + step = space.unwrap(w_range.w_step) + num = (stop-start)//step + return W_IntObject(space, num) + +def getitem__XRange_Int(space, w_range, w_index): + stop = space.unwrap(w_range.w_stop) + start = space.unwrap(w_range.w_start) + step = space.unwrap(w_range.w_step) + num = (stop-start)//step + idx = w_index.intval + if idx < 0: + idx += num + if idx < 0 or idx >= num: + raise OperationError(space.w_IndexError, + space.wrap("xrange index out of range")) + return W_IntObject(space, start+step*idx) + +def iter__XRange(space, w_range): + # Someday, I'll write a smarter iterator + # This is like the old 2.2 slow iterator, rather than the new + # 2.3 fast iterator + import iterobject + return iterobject.W_SeqIterObject(space, w_range) + +register_all(vars()) diff -urN -x.svn -x'*.pyc' src.old/pypy/objspace/std/xrangetype.py src/pypy/objspace/std/xrangetype.py --- src.old/pypy/objspace/std/xrangetype.py 1970-01-01 00:00:00.000000000 +0000 +++ src/pypy/objspace/std/xrangetype.py 2003-07-08 08:00:45.000000000 +0000 @@ -0,0 +1,33 @@ +from pypy.objspace.std.objspace import * +from typeobject import W_TypeObject + +class W_XRangeType(W_TypeObject): + + typename = 'range' + +registerimplementation(W_XRangeType) + + +def type_new__XRangeType_XRangeType_ANY_ANY(space, w_basetype, w_xrangetype, w_args, w_kwds): + if space.is_true(w_kwds): + raise OperationError(space.w_TypeError, + space.wrap("no keyword arguments expected")) + args = space.unpackiterable(w_args) + start = space.w_None + stop = space.w_None + step = space.w_None + if len(args) == 1: + stop, = args + elif len(args) == 2: + start, stop = args + elif len(args) == 3: + start, stop, step = args + elif len(args) > 3: + raise OperationError(space.w_TypeError, + space.wrap("xrange() takes at most 3 arguments")) + else: + raise OperationError(space.w_TypeError, + space.wrap("xrange() takes at least 1 argument")) + return space.newxrange(start, stop, step), True + +register_all(vars()) -- Moshe Zadka -- http://moshez.org/ Buffy: I don't like you hanging out with someone that... short. Riley: Yeah, a lot of young people nowadays are experimenting with shortness. 
Agile Programming Language -- http://www.python.org/ From m at moshez.org Tue Jul 8 10:58:20 2003 From: m at moshez.org (Moshe Zadka) Date: 8 Jul 2003 08:58:20 -0000 Subject: [pypy-dev] XRange Object [Fixed Patch] Message-ID: <20030708085820.31421.qmail@green.zadka.com> OK, I fixed the problem of not registering in the builtins, and now moshez at green:~/devel/pypy/src/pypy$ python2.2 interpreter/py.py -S Python 2.2.2 (#1, Nov 21 2002, 08:18:14) [GCC 2.95.4 20011002 (Debian prerelease)] in pypy PyPyConsole / StdObjSpace >>> for i in xrange(0,1,1): ... print i ... 0 [Though, this being my first playing around with PyPy, I must say I was a bit taken aback by just how slow it is :)] Diff still vs. 1116 Thanks and sorry for the slight spamming, Moshe diff -urN -x.svn -x'*.pyc' src.old/pypy/objspace/std/objspace.py src/pypy/objspace/std/objspace.py --- src.old/pypy/objspace/std/objspace.py 2003-07-08 08:36:48.000000000 +0000 +++ src/pypy/objspace/std/objspace.py 2003-07-08 08:53:16.000000000 +0000 @@ -67,6 +67,7 @@ from stringtype import W_StringType from typetype import W_TypeType from slicetype import W_SliceType + from xrangetype import W_XRangeType return [value for key, value in result.__dict__.items() if not key.startswith('_')] # don't look @@ -260,6 +261,11 @@ import moduleobject return moduleobject.W_ModuleObject(self, w_name) + def newxrange(self, w_start, w_end, w_step): + # w_step may be a real None + import xrangeobject + return xrangeobject.W_XRangeObject(self, w_start, w_end, w_step) + def newstring(self, chars_w): try: chars = [chr(self.unwrap(w_c)) for w_c in chars_w] diff -urN -x.svn -x'*.pyc' src.old/pypy/objspace/std/test/test_xrangeobject.py src/pypy/objspace/std/test/test_xrangeobject.py --- src.old/pypy/objspace/std/test/test_xrangeobject.py 1970-01-01 00:00:00.000000000 +0000 +++ src/pypy/objspace/std/test/test_xrangeobject.py 2003-07-08 08:35:28.000000000 +0000 @@ -0,0 +1,70 @@ +import autopath +from pypy.objspace.std.xrangeobject import W_XRangeObject +from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.objspace import NoValue +from pypy.tool import test + +class TestW_XRangeObject(test.TestCase): + + def setUp(self): + self.space = test.objspace('std') + + def tearDown(self): + pass + + def test_is_true(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(0), w(1)) + self.assertEqual(self.space.is_true(w_range), False) + w_range = W_XRangeObject(self.space, w(0), w(1), w(1)) + self.assertEqual(self.space.is_true(w_range), True) + w_range = W_XRangeObject(self.space, w(0), w(5), w(1)) + self.assertEqual(self.space.is_true(w_range), True) + + def test_len(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(0), w(1)) + self.assertEqual_w(self.space.len(w_range), w(0)) + w_range = W_XRangeObject(self.space, w(0), w(1), w(1)) + self.assertEqual_w(self.space.len(w_range), w(1)) + w_range = W_XRangeObject(self.space, w(0), w(5), w(1)) + self.assertEqual_w(self.space.len(w_range), w(5)) + + def test_getitem(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(2), w(1)) + self.assertEqual_w(self.space.getitem(w_range, w(0)), w(0)) + self.assertEqual_w(self.space.getitem(w_range, w(1)), w(1)) + self.assertEqual_w(self.space.getitem(w_range, w(-2)), w(0)) + self.assertEqual_w(self.space.getitem(w_range, w(-1)), w(1)) + self.assertRaises_w(self.space.w_IndexError, + self.space.getitem, w_range, w(2)) + self.assertRaises_w(self.space.w_IndexError, + self.space.getitem, w_range, w(42)) + 
self.assertRaises_w(self.space.w_IndexError, + self.space.getitem, w_range, w(-3)) + + def test_iter(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(3), w(1)) + w_iter = self.space.iter(w_range) + self.assertEqual_w(self.space.next(w_iter), w(0)) + self.assertEqual_w(self.space.next(w_iter), w(1)) + self.assertEqual_w(self.space.next(w_iter), w(2)) + self.assertRaises(NoValue, self.space.next, w_iter) + self.assertRaises(NoValue, self.space.next, w_iter) + + def test_contains(self): + w = self.space.wrap + w_range = W_XRangeObject(self.space, w(0), w(3), w(1)) + self.assertEqual_w(self.space.contains(w_range, w(0)), + self.space.w_True) + self.assertEqual_w(self.space.contains(w_range, w(1)), + self.space.w_True) + self.assertEqual_w(self.space.contains(w_range, w(11)), + self.space.w_False) + self.assertEqual_w(self.space.contains(w_range, w_range), + self.space.w_False) + +if __name__ == '__main__': + test.main() diff -urN -x.svn -x'*.pyc' src.old/pypy/objspace/std/xrangeobject.py src/pypy/objspace/std/xrangeobject.py --- src.old/pypy/objspace/std/xrangeobject.py 1970-01-01 00:00:00.000000000 +0000 +++ src/pypy/objspace/std/xrangeobject.py 2003-07-08 08:32:53.000000000 +0000 @@ -0,0 +1,88 @@ +""" +Reviewed 03-06-21 + +slice object construction tested, OK +indices method tested, OK +""" + +from pypy.objspace.std.objspace import * +from pypy.interpreter.appfile import AppFile +from pypy.interpreter.extmodule import make_builtin_func +from pypy.objspace.std.instmethobject import W_InstMethObject +from xrangetype import W_XRangeType +from intobject import W_IntObject + +#appfile = AppFile(__name__, ["objspace.std"]) + + +class W_XRangeObject(W_Object): + statictype = W_XRangeType + + def __init__(w_self, space, w_start, w_stop, w_step): + W_Object.__init__(w_self, space) + w_self.w_start = w_start or W_IntObject(space, 0) + w_self.w_stop = w_stop + w_self.w_step = w_step or W_IntObject(space, 1) + + def tolist(w_self): + stop = w_self.space.unwrap(w_self.w_stop) + start = w_self.space.unwrap(w_self.w_start) + step = w_self.space.unwrap(w_self.w_step) + num = (stop-start)//step + # does this count as "using range in a loop"? 
+ # RPython is poorly speced + return w_self.space.newlist([w_self.space.wrap(i*w_step+w_start) + for i in range(num)]) + +registerimplementation(W_XRangeObject) + +def getattr__XRange_ANY(space, w_range, w_attr): + if space.is_true(space.eq(w_attr, space.wrap('start'))): + if w_range.w_start is None: + return space.w_None + else: + return w_range.w_start + if space.is_true(space.eq(w_attr, space.wrap('stop'))): + if w_range.w_stop is None: + return space.w_None + else: + return w_range.w_stop + if space.is_true(space.eq(w_attr, space.wrap('step'))): + if w_range.w_step is None: + return space.w_None + else: + return w_range.w_step + if space.is_true(space.eq(w_attr, space.wrap('tolist'))): + w_builtinfn = make_builtin_func(space, W_XRangeObject.tolist) + return W_InstMethObject(space, w_builtinfn, w_range, w_range.statictype) + + raise FailedToImplement(space.w_AttributeError) + +def len__XRange(space, w_range): + stop = space.unwrap(w_range.w_stop) + start = space.unwrap(w_range.w_start) + step = space.unwrap(w_range.w_step) + num = (stop-start)//step + return W_IntObject(space, num) + +def getitem__XRange_Int(space, w_range, w_index): + stop = space.unwrap(w_range.w_stop) + start = space.unwrap(w_range.w_start) + step = space.unwrap(w_range.w_step) + num = (stop-start)//step + idx = w_index.intval + if idx < 0: + idx += num + if idx < 0 or idx >= num: + raise OperationError(space.w_IndexError, + space.wrap("xrange index out of range")) + return W_IntObject(space, start+step*idx) + +def iter__XRange(space, w_range): + # Someday, I'll write a smarter iterator + # This is like the old 2.2 slow iterator, rather than the new + # 2.3 fast iterator + import iterobject + return iterobject.W_SeqIterObject(space, w_range) + +register_all(vars()) diff -urN -x.svn -x'*.pyc' src.old/pypy/objspace/std/xrangetype.py src/pypy/objspace/std/xrangetype.py --- src.old/pypy/objspace/std/xrangetype.py 1970-01-01 00:00:00.000000000 +0000 +++ src/pypy/objspace/std/xrangetype.py 2003-07-08 08:55:04.000000000 +0000 @@ -0,0 +1,33 @@ +from pypy.objspace.std.objspace import * +from typeobject import W_TypeObject + +class W_XRangeType(W_TypeObject): + + typename = 'xrange' + +registerimplementation(W_XRangeType) + + +def type_new__XRangeType_XRangeType_ANY_ANY(space, w_basetype, w_xrangetype, w_args, w_kwds): + if space.is_true(w_kwds): + raise OperationError(space.w_TypeError, + space.wrap("no keyword arguments expected")) + args = space.unpackiterable(w_args) + start = space.w_None + stop = space.w_None + step = space.w_None + if len(args) == 1: + stop, = args + elif len(args) == 2: + start, stop = args + elif len(args) == 3: + start, stop, step = args + elif len(args) > 3: + raise OperationError(space.w_TypeError, + space.wrap("xrange() takes at most 3 arguments")) + else: + raise OperationError(space.w_TypeError, + space.wrap("xrange() takes at least 1 argument")) + return space.newxrange(start, stop, step), True + +register_all(vars()) -- Moshe Zadka -- http://moshez.org/ Buffy: I don't like you hanging out with someone that... short. Riley: Yeah, a lot of young people nowadays are experimenting with shortness. 
Agile Programming Language -- http://www.python.org/ From hpk at trillke.net Tue Jul 8 11:30:38 2003 From: hpk at trillke.net (holger krekel) Date: Tue, 8 Jul 2003 11:30:38 +0200 Subject: [pypy-dev] XRange Object [Fixed Patch] In-Reply-To: <20030708085820.31421.qmail@green.zadka.com>; from m@moshez.org on Tue, Jul 08, 2003 at 08:58:20AM -0000 References: <20030708085820.31421.qmail@green.zadka.com> Message-ID: <20030708113038.P3869@prim.han.de> Hey Moshe! [Moshe Zadka Tue, Jul 08, 2003 at 08:58:20AM -0000] > OK, I fixed the problem of not registering in the builtins, and now > > moshez at green:~/devel/pypy/src/pypy$ python2.2 interpreter/py.py -S > Python 2.2.2 (#1, Nov 21 2002, 08:18:14) > [GCC 2.95.4 20011002 (Debian prerelease)] in pypy > PyPyConsole / StdObjSpace > >>> for i in xrange(0,1,1): > ... print i > ... > 0 > > [Though, this being my first playing around with PyPy, I must say I > was a bit taken aback by just how slow it is :)] The slower it is now all the more faster it gets later :-) > Diff still vs. 1116 > > Thanks and sorry for the slight spamming, thanks you for the patch. But i think that "xrange" should really be a builtin rather than a first-class type on stdobjspace. Thanks to the last sprint we now have generators so this is rather easy to do (in module/builtin_app.py). Btw, is there a reason we don't compile with generators turned on in "interpreter/appfile.py"? I just did a quick try at copy/modify of range into "xrange" and it seems to work fine. Now, i would like to check *that* in but i am not sure about the "append"-problem. The implementer of "range" took great care to not use "list.append" but to compute the size of the list before hand. But filter/map/zip.. use list.append anyway and i think we should just make "range" use "xrange" the simple way arr = [] for i in xrange(x,y,step): arr.append(i) return arr or arr = [] map(arr.append, xrange(x, y, step) return arr IOW, i think we should allow list.append for the time beeing because it is neccessary to work anyway e.g. for map applied to an iterator/generator. cheers, holger From hpk at trillke.net Tue Jul 8 11:46:24 2003 From: hpk at trillke.net (holger krekel) Date: Tue, 8 Jul 2003 11:46:24 +0200 Subject: [pypy-dev] XRange Object [Fixed Patch] In-Reply-To: <20030708113038.P3869@prim.han.de>; from hpk@trillke.net on Tue, Jul 08, 2003 at 11:30:38AM +0200 References: <20030708085820.31421.qmail@green.zadka.com> <20030708113038.P3869@prim.han.de> Message-ID: <20030708114624.Q3869@prim.han.de> [holger krekel Tue, Jul 08, 2003 at 11:30:38AM +0200] > > arr = [] > for i in xrange(x,y,step): > arr.append(i) > return arr > > or > > arr = [] > map(arr.append, xrange(x, y, step) > return arr or simply def range(x, y=None, step=1): return list(xrange(x,y,step) :-) holger From m at moshez.org Tue Jul 8 12:21:52 2003 From: m at moshez.org (Moshe Zadka) Date: 8 Jul 2003 10:21:52 -0000 Subject: [pypy-dev] XRange Object [Fixed Patch] In-Reply-To: <20030708113038.P3869@prim.han.de> References: <20030708113038.P3869@prim.han.de>, <20030708085820.31421.qmail@green.zadka.com> Message-ID: <20030708102152.802.qmail@green.zadka.com> On Tue, 8 Jul 2003, holger krekel wrote: > thanks you for the patch. But i think that "xrange" should really > be a builtin rather than a first-class type on stdobjspace. I disagree. For example, it should support methods such as tolist() and getitem(). 
Maybe you want a smarter iterator method...which is fine, look at the comment I put in the iterator method [roughly "this is a standin because I wanted to do the simplest thing"] -- Moshe Zadka -- http://moshez.org/ Buffy: I don't like you hanging out with someone that... short. Riley: Yeah, a lot of young people nowadays are experimenting with shortness. Agile Programming Language -- http://www.python.org/ From hpk at trillke.net Tue Jul 8 13:32:24 2003 From: hpk at trillke.net (holger krekel) Date: Tue, 8 Jul 2003 13:32:24 +0200 Subject: [pypy-dev] XRange Object [Fixed Patch] In-Reply-To: <20030708102152.802.qmail@green.zadka.com>; from m@moshez.org on Tue, Jul 08, 2003 at 10:21:52AM -0000 References: <20030708113038.P3869@prim.han.de>, <20030708085820.31421.qmail@green.zadka.com> <20030708113038.P3869@prim.han.de> <20030708102152.802.qmail@green.zadka.com> Message-ID: <20030708133224.S3869@prim.han.de> [Moshe Zadka Tue, Jul 08, 2003 at 10:21:52AM -0000] > On Tue, 8 Jul 2003, holger krekel wrote: > > > thanks you for the patch. But i think that "xrange" should really > > be a builtin rather than a first-class type on stdobjspace. > > I disagree. For example, it should support methods such as tolist() and > getitem(). Maybe you want a smarter iterator method...which is fine, > look at the comment I put in the iterator method [roughly "this is > a standin because I wanted to do the simplest thing"] moshez and me (and michael) are continuing this discussion on IRC (freenode: #pypy) and will report back any results. Basically Moshez is right that "xrange" is more than a generator (you never learn enough :-). But probably it's still possible to stuff the xrange-implementation completely into builtin_app.py cheers, holger From arigo at tunes.org Wed Jul 9 12:43:30 2003 From: arigo at tunes.org (Armin Rigo) Date: Wed, 9 Jul 2003 12:43:30 +0200 Subject: [pypy-dev] Application-level helpers Message-ID: <20030709104330.GA25854@magma.unil.ch> Hello everybody, Holger came up with a nice idea about application-level helpers, which he experimented on an svn branch: http://codespeak.net/svn/pypy/branch/pypy-noappfile/pypy/ The idea is already a few days old and I promised to write about it earlier, but I'm a bit busy until tomorrow so I'll keep it short :-) Holger clarified the role and definition of what is currently found in the _app.py files. We are trying to do several things at once there, and it is true that we often only need a convenient way to write code without constantly wrapping and unwrapping objects. The plan is to be able to just write plain Python functions in the code, and call them "at application-level". There should be no need for the functions to be in a separate _app file if it is clear that it is application-level. For example, dictobject.py currently contains the magic code:

    def dict_update__Dict_Dict(space, w_self, w_other):
        w_self.space.gethelper(applicationfile).call("dict_update", [w_self, w_other])

Instead it could directly contain:

    def app_dict_update(d,o):
        for k in o.keys():
            d[k] = o[k]
    dict_update__Dict_Dict = appfunction(app_dict_update)

This means that app_dict_update() is a helper function that you write at application-level (hence the app_ prefix), whereas dict_update__Dict_Dict is a wrapper around it that you can call from interpreter-level with an extra "space" first argument. The change seems to be minor, but consider that helpers would now be together with the place they really belong to. They could be used directly as methods in classes, too.
In short they make the bridge between interpreter-level and application-level code much smaller. The application-level code can also be allowed to call not only other application-level code, but interpreter-level one, all transparently (because they essentially live in the same globals). This clarification also means that it is an independent issue to know what we should write by strictly sticking to RPython or not. Application-level code is essentially just a way to avoid wraps. All interpreter-level code should be in RPython, but then some application-level code could be easy enough for the analyzer to understand too. If they are, the code generator can make efficient code from them too (cool!), i.e. it would not matter if the code was defined at interpreter-level or application-level. For very complex helpers, there is always the option to fall back to emitting frozen bytecode and interpreting it. I believe that this also clarifies some of the issues about the AnnotationObjSpace. More about it later... A bient?t, Armin. From guenter.jantzen at netcologne.de Wed Jul 9 13:24:38 2003 From: guenter.jantzen at netcologne.de (=?iso-8859-1?Q?G=FCnter_Jantzen?=) Date: Wed, 9 Jul 2003 13:24:38 +0200 Subject: [pypy-dev] Application-level helpers References: <20030709104330.GA25854@magma.unil.ch> Message-ID: <002a01c3460c$b00279d0$0100a8c0@Lieschen> Hello Armin, thank you for this and your last mail, now I understand better how it could be done. I think we should do sometimes CodeReviews and have "good examples", because the decision what will be delegated to helpers will be technical and somehow artificial. I like it, that it can be done easier now and at the place whereit is necessary. I still would be more happy when allocating of buffers could be done transparently in the target language, but it seems not so urgent now. kind regards G?nter From guenter.jantzen at netcologne.de Wed Jul 9 13:40:20 2003 From: guenter.jantzen at netcologne.de (=?iso-8859-1?Q?G=FCnter_Jantzen?=) Date: Wed, 9 Jul 2003 13:40:20 +0200 Subject: [pypy-dev] type problems References: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> Message-ID: <003a01c3460e$e182fb90$0100a8c0@Lieschen> > > Debugging PyPy is ludicrously complicated: I know Python as well as > almost anyone. I know more about PyPy than just a very few people. > I'm fairly smart. And *I* get hopelessly lost in the internals. What > can we do about this? Hallo Michael, sometimes when interactive debugging is complicated, it can be helpful to avoid it and to use some kind of logging mechanism instead. I remember a project with a huge framework. It was very difficult to understand when and how our functions where called and what was expected. One member of our team started only to write a little printf at the entry and exit of every function. When he changed something he compared the logfiles with a regression test. After some weeks he could teach us how to use the framework. I think someone who is *fairly smart* ;-) could introduce a similar logging mechanism without a lot of boilerplate print statements G?nter From guido at python.org Wed Jul 9 17:39:23 2003 From: guido at python.org (Guido van Rossum) Date: Wed, 09 Jul 2003 11:39:23 -0400 Subject: [pypy-dev] Application-level helpers In-Reply-To: Your message of "Wed, 09 Jul 2003 12:43:30 +0200." <20030709104330.GA25854@magma.unil.ch> References: <20030709104330.GA25854@magma.unil.ch> Message-ID: <200307091539.h69FdNk11047@pcp02138704pcs.reston01.va.comcast.net> > [...] 
For example, dictobject.py currently contains the magic code: > > def dict_update__Dict_Dict(space, w_self, w_other): > w_self.space.gethelper(applicationfile).call("dict_update", [w_self, > w_other]) > > Instead it could directly contain: > > def app_dict_update(d,o): > for k in o.keys(): > d[k] = o[k] > dict_update__Dict_Dict = appfunction(app_dict_update) Hurray! Wonderful!! > This means that app_dict_update() is a helper function that you > write at application-level (hence the app_ prefix), whereas > dict_update__Dict_Dict is a wrapper around it that you can call from > interpreter-level with an extra "space" first argument. > > The change seems to be minor, but consider that helpers would now be > together with the place they really belong to. They could be used > directly as methods in classes, too. In short they make the bridge > between interpreter-level and application-level code much > smaller. The application-level code can also be allowed to call not > only other application-level code, but interpreter-level one, all > transparently (because they essentially live in the same globals). So e.g. if there was another function, app_dict_merge(d, o), it could call app_dict_update(d, o) directly, right? Great! --Guido van Rossum (home page: http://www.python.org/~guido/) From pedronis at bluewin.ch Wed Jul 9 18:13:38 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Wed, 09 Jul 2003 18:13:38 +0200 Subject: [pypy-dev] Re: type problems In-Reply-To: <2m1xx2nu4k.fsf@starship.python.net> References: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> <5.2.1.1.0.20030707180007.02582dc0@pop.bluewin.ch> Message-ID: <5.2.1.1.0.20030709171840.0248a3d0@pop.bluewin.ch> At 18:07 07.07.2003 +0100, Michael Hudson wrote: >Samuele Pedroni writes: > > > If dispatch table compression is the way we want to follow it > > becomes important e.g. to know whether our rules are monotonic > > (table compression algos I know about need that) and to reformulate > > delegation relationships as some kind of subtyping rel. > >If you say so :-) > >I do notice delegation is what screws a naive attempt to memoize more >of the applicable method computation. > >Cheers, what I was thinking is that we should see whether a kind of class precedence list (CL, Dylan terminology), MRO can be derived taking delegation into account, e.g. for BoolObject it would be something like Bool Int Any |here starts delegation| Float it would not be just a list of types but a list of pairs (type,delegation_func). With complex it would likely become: Bool Int Any Float Complex. If we can construct such a list, then all the usual MM stuff applies more naturally. regards. From tismer at tismer.com Wed Jul 9 18:14:18 2003 From: tismer at tismer.com (Christian Tismer) Date: Wed, 09 Jul 2003 18:14:18 +0200 Subject: [pypy-dev] Application-level helpers In-Reply-To: <20030709104330.GA25854@magma.unil.ch> References: <20030709104330.GA25854@magma.unil.ch> Message-ID: <3F0C3F5A.9080109@tismer.com> Armin Rigo wrote: [about getting rid of the app files -- great!] Holger, this is a nice idea! > This clarification also means that it is an independent issue to know what we > should write by strictly sticking to RPython or not. Application-level code is > essentially just a way to avoid wraps. All interpreter-level code should be in > RPython, but then some application-level code could be easy enough for the > analyzer to understand too. If they are, the code generator can make efficient > code from them too (cool!), i.e. 
it would not matter if the code was defined > at interpreter-level or application-level. For very complex helpers, there is > always the option to fall back to emitting frozen bytecode and interpreting > it. Very very cool implications!! Let me try: If the analyser is able to produce interpreter-level code from application-level code in an efficient way, then this is the equivalent of producing C code from a Python function, in the "real world". Furthermore, since the interpreter-level code is actually what is "the C library", this code can now be optimized for a specific Python function. In other words, we have broken the "C library barrier", and reached one of the most ambitious goals of this project. (In theory. We just need the analyser :-) in-the-hope-that-this-conclusion-is-correct--ly y'rs - chris -- Christian Tismer :^) Mission Impossible 5oftware : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 89 09 53 34 home +49 30 802 86 56 pager +49 173 24 18 776 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From hpk at trillke.net Wed Jul 9 20:27:02 2003 From: hpk at trillke.net (holger krekel) Date: Wed, 9 Jul 2003 20:27:02 +0200 Subject: [pypy-dev] Application-level helpers In-Reply-To: <3F0C3F5A.9080109@tismer.com>; from tismer@tismer.com on Wed, Jul 09, 2003 at 06:14:18PM +0200 References: <20030709104330.GA25854@magma.unil.ch> <3F0C3F5A.9080109@tismer.com> Message-ID: <20030709202702.A3869@prim.han.de> Hi Christian, [Christian Tismer Wed, Jul 09, 2003 at 06:14:18PM +0200] > Armin Rigo wrote: > > This clarification also means that it is an independent issue to know what we > > should write by strictly sticking to RPython or not. Application-level code is > > essentially just a way to avoid wraps. All interpreter-level code should be in > > RPython, but then some application-level code could be easy enough for the > > analyzer to understand too. If they are, the code generator can make efficient > > code from them too (cool!), i.e. it would not matter if the code was defined > > at interpreter-level or application-level. For very complex helpers, there is > > always the option to fall back to emitting frozen bytecode and interpreting > > it. > > Very very cool implications!! > > Let me try: > If the analyser is able to produce interpreter-level > code from application-level code in an efficient > way, then this is the equivalent of producing > C code from a Python function, in the "real world". I'd say it's even slightly better. The analyzer should be able to produce interpreter-level pyrex-annotated python code. From that we get C-level code that "really runs" and can possibly be imported from cpython! I think that could even be worth a first "development release". It's probably only 50 times slower or so :-) > Furthermore, since the interpreter-level code is > actually what is "the C library", this code > can now be optimized for a specific Python function. It at least means that using the builtins from a python-function shouldn't spoil optimization efforts. If the AnnSpace would annotate any python code (interpreter-level, app-level, RPython, whatever[*]) then this might even have further implications. And somehow aiming at this goal seems simpler because you know what you need to provide :-) (but i don't really know how feasible this is, does anyone?) 
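To illustrate what "interpreter-level pyrex-annotated python code" could mean in practice, here is a trivial RPython-ish helper in plain Python, with the Pyrex rendition an analyzer would aim for shown as a comment; both are hand-written guesses, not actual translator output:

    # A helper simple enough for the annotator: every name keeps one
    # inferable type on every path through the function.
    def my_sum(n):
        total = 0
        i = 0
        while i < n:
            total = total + i
            i = i + 1
        return total

    # The Pyrex the analyzer could emit for it might look roughly like:
    #
    #   def my_sum(int n):
    #       cdef int i, total
    #       total = 0
    #       i = 0
    #       while i < n:
    #           total = total + i
    #           i = i + 1
    #       return total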
> In other words, we have broken the "C library barrier", > and reached one of the most ambitious goals of this > project. (In theory. We just need the analyser :-) i like to believe that, too, but there are still some vaguenesses to fight :-) cheers, holger [*] 'whatever' possibly the app-space itself so that it also can pass through its machinery From jriehl at cs.uchicago.edu Wed Jul 9 22:40:21 2003 From: jriehl at cs.uchicago.edu (Jonathan David Riehl) Date: Wed, 9 Jul 2003 15:40:21 -0500 (CDT) Subject: [pypy-dev] Re: Greetings In-Reply-To: <20030702212959.R3869@prim.han.de> Message-ID: On Wed, 2 Jul 2003, holger krekel wrote: > In this doc-directory (http://codespeak.net/pypy/doc) you'll also find some > unsorted documentation which might help in understanding what we have been > doing this year. Still trying to decode the docs, but the RPython stuff is starting to look familiar (I was going to something similar). I know Chris mentioned working on a pure-Python compiler (one that didn't use sre), but if anyone has a good starting point in mind, please let me know. > Also, at least i am quite interested to get to know what you have done > or what you found out about Python-to-C translators. I was building dataflow models of Python, which I thought might be useful for type inference. There was a paper on PyFront published in the IPC 8 proceedings: http://www.foretec.com/python/workshops/1998-11/proceedings/papers/riehl/riehl.html I actually didn't bother to schedule the DAG's (I didn't know anything about that topic at the time) and ended up emitting the basic blocks by simply inlining calls to the Python API. One thing about the dataflow representation that I liked was that it made constant folding a snap (by virtue of its construction, it was already done for you) and I think I could have easily done common subexpression elimination. The one bad thing about the representation was it's functional representation (iterative loops would translate to recursive functions), since I still don't know much about elimination of recursion on the back end. -Jon From hpk at trillke.net Thu Jul 10 01:49:08 2003 From: hpk at trillke.net (holger krekel) Date: Thu, 10 Jul 2003 01:49:08 +0200 Subject: [pypy-dev] type problems In-Reply-To: <003a01c3460e$e182fb90$0100a8c0@Lieschen>; from guenter.jantzen@netcologne.de on Wed, Jul 09, 2003 at 01:40:20PM +0200 References: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> <003a01c3460e$e182fb90$0100a8c0@Lieschen> Message-ID: <20030710014907.C3869@prim.han.de> Hi Guenter, [G?nter Jantzen Wed, Jul 09, 2003 at 01:40:20PM +0200] > > > > Debugging PyPy is ludicrously complicated: I know Python as well as > > almost anyone. I know more about PyPy than just a very few people. > > I'm fairly smart. And *I* get hopelessly lost in the internals. What > > can we do about this? > > Hallo Michael, > > sometimes when interactive debugging is complicated, it can be helpful to > avoid it and to use some kind of logging mechanism instead. That may be a good idea. I am currently hunting down a deeply nested Exception problem and knowing which frames get created executing which code helps in understanding what's going on. Without lots of print-statements this would be very difficult (not that it really gets easy :-). 
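One cheap way to get the entry/exit logging Guenter suggests, without sprinkling print statements everywhere, is a small wrapper applied by hand to the functions of interest. The sketch below is illustrative and the names are made up:

    import sys

    _depth = [0]

    def logged(func, out=sys.stderr):
        # wrap func so that entering and leaving it is written to 'out',
        # indented by the current call depth
        def wrapper(*args, **kwds):
            out.write("%s-> %s\n" % ("  " * _depth[0], func.__name__))
            _depth[0] += 1
            try:
                return func(*args, **kwds)
            finally:
                _depth[0] -= 1
                out.write("%s<- %s\n" % ("  " * _depth[0], func.__name__))
        return wrapper

    # usage (Python 2.2 style, no decorator syntax):
    #   eval_code = logged(eval_code)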
cheers, holger From hfastjava at yahoo.com Thu Jul 10 14:19:23 2003 From: hfastjava at yahoo.com (Hunter Peress) Date: Thu, 10 Jul 2003 05:19:23 -0700 (PDT) Subject: [pypy-dev] specfic questions for getting into the code for first time Message-ID: <20030710121923.8128.qmail@web41305.mail.yahoo.com> I ran pypy -S the first time two days ago. before, i had been lurking on the mailing lists and irc channel. I was met with extreme apathy and active negativity and downright avoidance yesterday in the irc. It should be evident that I am a kind and extroverted person that wants to help the project. over 4 hours of repeating questions on irc, i was able to glean just a little bit of information. Here are my questions: Specific example: could someone kindly explain how you can tell that test_stringobject.py uses more appspace tests than test_intobject.py. ok test_int doesnt seem to use w() anywhere, could that be a correct correlation (hpk originally told me that some low hanging fruit could be picked off by converting test_intobject.py to use more appspace tests) Code symbology: Types are implemented by the following convention: W_TypeObject eg (W_SliceObject) but whats all the w_ ....._w stuff like w_globals or self.failUnless_w(space.lt(w('a'), w('b'))) ...eg what is the distinction between upper and lower W ? maybe w means coming from the appspace ? whereas W means only in objspace ? and _w means going into appspace ? im also curious about how w() and space.wrap() differ? Tying it all together: it would also help me (and thereby anyone else thats new) if a full working function could be commented on, this is why i made the : http://codespeak.net/moin/pypy/moin.cgi/WalkThroughFunction page. Goals: I immediately dived into PyPy -S and found that int->Long promotion wasnt working, that there is not yet a longobject.py implementation and that Longs dont even seem to work that well. all from: fact = lambda n:[1,0][n>0] or fact(n-1)*n then i tried fact(100) fact(1L) and fact(1) i want to work on these issues so that i can know how to swim around the source code, and then move on. I am a very good abstract thinker as well as being technically competent, i hope to help pypy because it is insanely interesting to me. gooday. -Hunter __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From hfastjava at yahoo.com Thu Jul 10 14:32:05 2003 From: hfastjava at yahoo.com (Hunter Peress) Date: Thu, 10 Jul 2003 05:32:05 -0700 (PDT) Subject: [pypy-dev] RE: type problems Message-ID: <20030710123205.94724.qmail@web41311.mail.yahoo.com> holger krekel That may be a good idea. I am currently hunting down a deeply nested Exception problem and knowing which frames get created executing which code helps in understanding what's going on. Without lots of print-statements this would be very difficult (not that it really gets easy :-). are you tracking these frames from within the appspace or object space ? it would probably be helpful to have a tool like ping's cgitb (which he/I ) hacked into a text-mode version, and to have it for both object and appspace, no? __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! 
http://sbc.yahoo.com From arigo at tunes.org Thu Jul 10 15:49:15 2003 From: arigo at tunes.org (Armin Rigo) Date: Thu, 10 Jul 2003 15:49:15 +0200 Subject: [pypy-dev] Re: Greetings In-Reply-To: References: <20030702212959.R3869@prim.han.de> Message-ID: <20030710134915.GA414@magma.unil.ch> Hello Jonathan, On Wed, Jul 09, 2003 at 03:40:21PM -0500, Jonathan David Riehl wrote: > I actually didn't bother to schedule the DAG's (I didn't know anything > about that topic at the time) and ended up emitting the basic blocks by > simply inlining calls to the Python API. This looks similar to what we want to do here, and have started working on in the annotation object space. We build information about the code by interpreting it somehow. What is nice with having a whole interpreter already here in the first place is that we can reuse it for this analysis. Constant folding is indeed completely trivial. > thing about the representation was it's functional representation > (iterative loops would translate to recursive functions), since I still > don't know much about elimination of recursion on the back end. This too is the point here. The analysis essentially produces a functional representation. It is probably a good thing to have. Modern compilers tend to use this representation too (they call it SSA, Single-Step Assignment), which is that disregarding a source code variable name all assignments essentially create a new variable. The remaining question is how you close loops, i.e. how the new variable created by assignment inside the loop should be identified with the old variable of the same name when we jump back. At this point using recursive functions is only one solution; if you are targeting a low-level language (like C with gotos) all you need to do is copy values back into the original variables (i.e. merge the end-of-loop state with the start-of-loop state), and jump back. As we plan to target Pyrex first we will need to think about this, as Pyrex has neither gotos nor good (tail-end) recursion. If someone comes up with an algorithm to translate an arbitrary bunch of code with gotos back into nested ifs and while loops he's most welcome :-) A bientot, Armin. From hpk at trillke.net Thu Jul 10 16:01:41 2003 From: hpk at trillke.net (holger krekel) Date: Thu, 10 Jul 2003 16:01:41 +0200 Subject: [pypy-dev] builtin refactoring branch Message-ID: <20030710160141.M3869@prim.han.de> Hello PyPy, in the last days i experimentally tried some approaches to "fixing" problems regarding the crossing of app-level/interpreter-level boundary. While Armin already posted how we'd probably like to have the situation (easily intermingling app-level and interpreter-level without _app files) there are some concepts to clarify. I think it makes sense to try to clarify the issues by focusing on "builtins" first because they imply interesting cases: - interpreter-level functions are exposed to app-space - some builtins are implemented at application level and are visible at application-level (currently: map, range ...) I guess we want to be able to write the helpers in as much as "allowed" plain python. Where Annspace defines "allowed". (note that AnnSpace must understand exactly those builtins/app-space stuff like decode_code_arguments that the interpreter uses itself to initialize and to operate). - we may want to call builtins - defined at whatever level - directly and uniformly from interpreter-level without having to know the distinction on the caller side. 
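Coming back to Armin's question above about rendering a flow graph (the functional, static single assignment style representation) without gotos: one well-known fallback is to number the basic blocks and drive them from a single while loop, copying values back into the loop-carried names at the end of each block. The following is a hand-written sketch of the shape such generated code could take, not actual translator output:

    # An arbitrary flow graph rendered as a block-dispatch loop,
    # which needs neither gotos nor recursion (so it would fit Pyrex).
    def generated_f(n):
        block = 0
        while True:
            if block == 0:            # entry block
                i = 0
                total = 0
                block = 1
            elif block == 1:          # loop header
                if i < n:
                    block = 2
                else:
                    block = 3
            elif block == 2:          # loop body: merge back into i, total
                total = total + i
                i = i + 1
                block = 1
            elif block == 3:          # exit block
                return total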
The latest commits to the builtinrefactor branch made it possible to define "xrange" as an app-level class. This idea was a result of some IRC-discussions with Moshez (thanks!), Michael and some others. For more detailed info, please read the branch-commit info because i have to leave now... Ah, yes, if you are interested you can try the code out by doing svn switch http://codespeak.net/svn/pypy/branch/builtinrefactor/pypy in your 'pypy' directory (be careful if you have mods in your tree). cheers, holger From mwh at python.net Thu Jul 10 18:52:07 2003 From: mwh at python.net (Michael Hudson) Date: Thu, 10 Jul 2003 17:52:07 +0100 Subject: [pypy-dev] Re: type problems References: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> <5.2.1.1.0.20030707180007.02582dc0@pop.bluewin.ch> <5.2.1.1.0.20030709171840.0248a3d0@pop.bluewin.ch> Message-ID: <2mznjml3yg.fsf@starship.python.net> Samuele Pedroni writes: > At 18:07 07.07.2003 +0100, Michael Hudson wrote: >>Samuele Pedroni writes: >> >> > If dispatch table compression is the way we want to follow it >> > becomes important e.g. to know whether our rules are monotonic >> > (table compression algos I know about need that) and to reformulate >> > delegation relationships as some kind of subtyping rel. >> >>If you say so :-) >> >>I do notice delegation is what screws a naive attempt to memoize more >>of the applicable method computation. >> >>Cheers, > > what I was thinking is that we should see whether a kind of class > precedence list (CL, Dylan terminology), MRO can be derived taking > delegation into account, e.g. for BoolObject it would be something like > > Bool Int Any |here starts delegation| Float > > it would not be just a list of types but a list of pairs > (type,delegation_func). > > With complex it would likely become: > > Bool Int Any Float Complex. > > If we can construct such a list, then all the usual MM stuff applies > more naturally. Sure. This makes sense. But I'm not sure how (app-side) inheritance fits into the picture. I guess the "delegation_func" could do this? I'm also growing more convinced that what's now W_TypeObject.lookup should be a multimethod (and lookup_exactly_here, too, I guess). Cheers, mwh -- Get out your salt shakers folks, this one's going to take more than one grain. -- Ator in an Ars Technica news item From bokr at oz.net Thu Jul 10 22:02:48 2003 From: bokr at oz.net (Bengt Richter) Date: Thu, 10 Jul 2003 13:02:48 -0700 Subject: [pypy-dev] Re: Greetings In-Reply-To: <20030710134915.GA414@magma.unil.ch> References: <20030702212959.R3869@prim.han.de> Message-ID: <5.0.2.1.1.20030710104347.00a6a140@mail.oz.net> At 15:49 2003-07-10 +0200, you (Armin Rigo) wrote: >Hello Jonathan, > >On Wed, Jul 09, 2003 at 03:40:21PM -0500, Jonathan David Riehl wrote: >> I actually didn't bother to schedule the DAG's (I didn't know anything >> about that topic at the time) and ended up emitting the basic blocks by >> simply inlining calls to the Python API. > >This looks similar to what we want to do here, and have started working on in >the annotation object space. We build information about the code by >interpreting it somehow. What is nice with having a whole interpreter already >here in the first place is that we can reuse it for this analysis. Constant >folding is indeed completely trivial. > >> thing about the representation was it's functional representation >> (iterative loops would translate to recursive functions), since I still >> don't know much about elimination of recursion on the back end. 
> >This too is the point here. The analysis essentially produces a functional >representation. It is probably a good thing to have. Modern compilers tend to >use this representation too (they call it SSA, Single-Step Assignment), which >is that disregarding a source code variable name all assignments essentially >create a new variable. The remaining question is how you close loops, i.e. how Your SSA description makes me wonder again about a question that's occurred to me in the past: If all program values were stored-to and retrieved-from fifo pipes, what would be the minimum number of pipes needed to implement a given program, and what would be their various minimum capacities? E.g., the rhs of an assignment in a loop generates a sequence of fifo reads (maybe from different fifos or the same) and the lhs (maybe also some reads here if target expression) a write (or possibly writes). IIRC I've used a limited version of this in hand-optimizing PDP-11/45 (ok, some time ago ;-) assembler in a loop, so as to have all [reg]++ addressing modulo the memory fifo spaces. (IIRC some values were pointers for indirect [reg] access elsewhere, so it wasn't a truly pure piped variable-value thing). Hm, need to think how this affects modern CPU cache logic though...OTOH, sequential access is good for interleaved memory. Hm, wonder if a limited set of fast hardware fifos could do any good, or how big a set you'd need... >the new variable created by assignment inside the loop should be identified >with the old variable of the same name when we jump back. At this point using >recursive functions is only one solution; if you are targeting a low-level >language (like C with gotos) all you need to do is copy values back into the >original variables (i.e. merge the end-of-loop state with the start-of-loop >state), and jump back. IIRC, I wound up with a "pipe" ready to be emptied, so I didn't need to "copy values back." (I may also have arranged it so the pipe length was more than the size a useful struct (pre-c assembler offsets equiv) and I was also guaranteed that a final "pipe" read access address was the base address for a non-wrapped contiguous image of the struct too. No holds barred in those days ;-) Don't know if there's anything useful to you in the above, but I thought I'd share the reminiscence ;-) >As we plan to target Pyrex first we will need to think about this, as Pyrex >has neither gotos nor good (tail-end) recursion. If someone comes up with an >algorithm to translate an arbitrary bunch of code with gotos back into nested >ifs and while loops he's most welcome :-) I guess there's always a state machine that can implement arbitrary spaghetti. Is that what you have in mind? Regards, Bengt Richter From mwh at python.net Fri Jul 11 13:25:13 2003 From: mwh at python.net (Michael Hudson) Date: Fri, 11 Jul 2003 12:25:13 +0100 Subject: [pypy-dev] Re: type problems References: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> <003a01c3460e$e182fb90$0100a8c0@Lieschen> <20030710014907.C3869@prim.han.de> Message-ID: <2misq9l2zq.fsf@starship.python.net> holger krekel writes: > Hi Guenter, > > [G?nter Jantzen Wed, Jul 09, 2003 at 01:40:20PM +0200] >> > >> > Debugging PyPy is ludicrously complicated: I know Python as well as >> > almost anyone. I know more about PyPy than just a very few people. >> > I'm fairly smart. And *I* get hopelessly lost in the internals. What >> > can we do about this? 
>> >> Hallo Michael, >> >> sometimes when interactive debugging is complicated, it can be helpful to >> avoid it and to use some kind of logging mechanism instead. > > That may be a good idea. I am currently hunting down a deeply > nested Exception problem and knowing which frames get created executing > which code helps in understanding what's going on. Without lots of > print-statements this would be very difficult (not that it really gets > easy :-). I think the difficulty here might be trying to work out what to log. You'd have to log *just the right amount* of information. I guess it wouldn't be that hard to use a trace or profile function to save a call tree, but then the problem becomes one of data presentation and viusalization (I expect) -- and maybe memory consumption... Cheers, M. -- There are two kinds of large software systems: those that evolved from small systems and those that don't work. -- Seen on slashdot.org, then quoted by amk From mwh at python.net Fri Jul 11 15:35:13 2003 From: mwh at python.net (Michael Hudson) Date: Fri, 11 Jul 2003 14:35:13 +0100 Subject: [pypy-dev] Re: specfic questions for getting into the code for first time References: <20030710121923.8128.qmail@web41305.mail.yahoo.com> Message-ID: <2mfzldkwz2.fsf@starship.python.net> Hunter Peress writes: > I was met with extreme apathy and active negativity and downright > avoidance yesterday in the irc. You were being pretty annoying, or at least you managed to significantly irritate Moshe, Holger and myself, and while it might just be us being cruel heartless bastards, it's also possible you were doing something wrong. > Code symbology: > Types are implemented by the following convention: > W_TypeObject eg > (W_SliceObject) but whats all the w_ ....._w stuff like w_globals or > self.failUnless_w(space.lt(w('a'), w('b'))) > > ...eg what is the distinction between upper and lower W ? maybe w > means coming from the appspace ? whereas W means only in objspace ? > and _w means going into appspace ? W_ usually is the name of a class, w_ usually the name of an instance (of one of the W_ classes, in the StdObjSpace). things named _w are usually containers that while not being wrapped containers contain objects that are wrapped. Oh, apart from the failUnless_w and similar methods on test.TestCase -- these are variants of the usual TestCase methods that test for appspace truth or appspace equality. As you might be able to see, there aren't any really hard and fast rules here. > > im also curious about how w() and space.wrap() differ? Many bits of code just do w = space.wrap and the beginning to reduce clutter later on. So they don't differ. > Tying it all together: > it would also help me (and thereby anyone else thats new) if a full > working function could be commented on, this is why i made the : > http://codespeak.net/moin/pypy/moin.cgi/WalkThroughFunction page. It's a little hard to see what would be done here. A thorough explanation of it would require explaining lots of things about multimethods, which a) would be a LOT of work b) might be pointless as we're not sure we have the details of multimethods right yet. > I immediately dived into PyPy -S and found that int->Long promotion > wasnt working, that there is not yet a longobject.py implementation > and that Longs dont even seem to work that well. 
all from: > > fact = lambda n:[1,0][n>0] or fact(n-1)*n > > then i tried fact(100) fact(1L) and fact(1) > > i want to work on these issues so that i can know how to swim around > the source code, and then move on. Hmm. This might be quite hard for a first project, on thinking about it. You should probably look at how the int->float delegation works, but, like I said I'm not sure the relavent details are totally settled yet. Cheers, M. -- ROOSTA: Ever since you arrived on this planet last night you've been going round telling people that you're Zaphod Beeblebrox, but that they're not to tell anyone else. -- The Hitch-Hikers Guide to the Galaxy, Episode 7 From pedronis at bluewin.ch Fri Jul 11 16:06:23 2003 From: pedronis at bluewin.ch (Samuele Pedroni) Date: Fri, 11 Jul 2003 16:06:23 +0200 Subject: [pypy-dev] Re: type problems In-Reply-To: <2mznjml3yg.fsf@starship.python.net> References: <1A04C73C-B092-11D7-ABFF-0003931DF95C@python.net> <5.2.1.1.0.20030707180007.02582dc0@pop.bluewin.ch> <5.2.1.1.0.20030709171840.0248a3d0@pop.bluewin.ch> Message-ID: <5.2.1.1.0.20030711154128.024fc2d8@pop.bluewin.ch> At 17:52 10.07.2003 +0100, Michael Hudson wrote: >Samuele Pedroni writes: > > > what I was thinking is that we should see whether a kind of class > > precedence list (CL, Dylan terminology), MRO can be derived taking > > delegation into account, e.g. for BoolObject it would be something like > > > > Bool Int Any |here starts delegation| Float > > > > it would not be just a list of types but a list of pairs > > (type,delegation_func). > > > > With complex it would likely become: > > > > Bool Int Any Float Complex. > > > > If we can construct such a list, then all the usual MM stuff applies > > more naturally. > >Sure. This makes sense. But I'm not sure how (app-side) inheritance >fits into the picture. it's a separated issue, or better it is implemented (already now) on top of MM,W_* objects and delegation: 1) W_UserObjects delegate to the parent builtin type 2) all the rest is handled by the logic implementing the various space multi-methods for W_UserObjects. > I guess the "delegation_func" could do this? now all W_* objects' classes have a dispatchclass attribute that is used for MM dispatch (typically w_x.dispatchclass == w_x.__class__, except for the synthetic classes used for user-defined types, for which dispatchclass is W_UserObject), it should probably become a dispatchcpl. So dispatchcpl for a user subclass of int would be set to: W_UserObject W_IntObject ... The only problem is that it should possible to define conversion delegation releationships (i.e. those that are not parent type relationships) where the target type is defined, not where the source type, i.e. just like now Int-Float delegation is defined together with Float not Int. This means that the dispatchcpl lists can be constructed only after all information has been gathered and not locally where/when each single type is defined. >I'm also growing more convinced that what's now W_TypeObject.lookup >should be a multimethod (and lookup_exactly_here, too, I guess). lookup probably not, in CPython it's a polymorphic function which basically deal with lists of objects having __dict__s. lookup_exactly_here likely, in the general case it should use getdict, and then there would be a specialized version for types that deals with sliced multimethods. regards. 
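A tiny sketch of the "list of (type, delegation_func) pairs" Samuele describes, together with a lookup that walks it. All names here are crude stand-ins invented for illustration, and the dispatch is single-argument only to keep it short; the real multimethod machinery is of course more involved:

    class W_ANY(object): intval = 0          # stand-ins for the real
    class W_IntObject(W_ANY): pass           # wrapped-object classes
    class W_BoolObject(W_IntObject): pass
    class W_FloatObject(W_ANY): floatval = 0.0

    def int_to_float(w_int):                 # hypothetical converter
        w_f = W_FloatObject()
        w_f.floatval = float(w_int.intval)
        return w_f

    # dispatch/conversion precedence list for Bool: parent types first,
    # then conversion delegation
    W_BoolObject.dispatchcpl = [
        (W_BoolObject,  None),
        (W_IntObject,   None),
        (W_ANY,         None),
        (W_FloatObject, int_to_float),       # delegation starts here
    ]

    def lookup_single(registry, w_obj):
        for cls, convert in w_obj.__class__.dispatchcpl:
            if cls in registry:
                if convert is not None:
                    w_obj = convert(w_obj)
                return registry[cls], w_obj
        raise KeyError("no applicable implementation")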
From arigo at tunes.org Sat Jul 12 23:08:50 2003 From: arigo at tunes.org (Armin Rigo) Date: Sat, 12 Jul 2003 14:08:50 -0700 (PDT) Subject: [pypy-dev] Re: [pypy-svn] rev 1121 - in pypy/branch/builtinrefactor/pypy:interpreter module module/test In-Reply-To: <20030712095054.1BA185A31D@thoth.codespeak.net>; from hpk@codespeak.net on Sat, Jul 12, 2003 at 11:50:54AM +0200 References: <20030712095054.1BA185A31D@thoth.codespeak.net> Message-ID: <20030712210850.AD8D81F6F1@bespin.org> Hello Holger, On Sat, Jul 12, 2003 at 11:50:54AM +0200, hpk at codespeak.net wrote: > - there is a new (experimental, of course!) approach to executing > code objects. Actually the "app2interp" method when called > makes up an "AppBuiltinCode" instance which is a simple form > of a code object. Instead of the whole "build_arguments" to-dict-locals- > and-then-get-it-out-of-locals business it constructs arguments > in a straightforward manner. Some more refactoring would be nice here. I'm posting here something we discussed at the end of the latest sprint: from the point of view of decoding the arguments, a frame is just a repository of 'fast locals' slots. Hence the idea that instead of decoding arguments and building a real 'locals' dictionary, we directly put the arguments in a frame-like class. As it only makes sense to use a real PyFrame instance when we are about to call a Python (app-level) function, we could use a base class for the general case: + ActivationFrame + PyFrame [the current frame class] + BuiltinFrame + ExecutableCode [new proposed name for PyBaseCode] + PyCode [new proposed name for PyByteCode] + BuiltinCode [new proposed name for PyBuiltinCode] ExecutableCode instances are used as the 'func_code' attribute of function objects; they would have methods to create an ActivationFrame instance of the correct corresponding class, and then the ActivationFrame's list of 'fast locals' would be populated with the generic ExecutableCode.build_arguments(). The base ActivationFrame is essentially nothing more than a 'fast locals' container. It is easy to either use that as the real 'fast locals' during interpretation in PyFrame, or pick the arguments from the list in BuiltinFrame to pass them to the built-in interpreter-level function. I think it looks like a nice way to minimize the number of times the arguments must be copied around. Armin From arigo at tunes.org Sat Jul 12 23:08:52 2003 From: arigo at tunes.org (Armin Rigo) Date: Sat, 12 Jul 2003 14:08:52 -0700 (PDT) Subject: [pypy-dev] builtin refactoring branch In-Reply-To: <20030710160141.M3869@prim.han.de>; from hpk@trillke.net on Thu, Jul 10, 2003 at 04:01:41PM +0200 References: <20030710160141.M3869@prim.han.de> Message-ID: <20030712210852.67D0C1F6F9@bespin.org> Hello, Here are some suggestions for mixing application-level and interpreter-level functions. This starts from Holger's, from the builtinrefactor branch, and from ideas discussed with Michael on #pypy. The issue is related to how we want to control the wrapping of internal interpreter objects. The relation becomes clear when you think about defining a method of some PyPy class at the application-level: the 'self' argument needs to be wrapped to be visible. For example, PyBaseCode.app_decode_code_arguments() needs 'self' to access the 'co_xxx' attributes of the PyBaseCode instance. 
Here is a suggestion mixing properties and CPython's structmember.c (comments following the code): class PyByteCode(Wrappable): app_co_code = appfield_str('co_code', readonly=True) app_co_argcount = appfield_int('co_argcount', readonly=True) def __init__(self): self.co_code = '' self.co_argcount = 0 def app_decode(self, argtuple): # app-level code, implicit wrapping assert len(argtuple) == self.co_argcount return argtuple decode = app2interp(app_decode) def eval(self, space, w_args): w_args = self.decode(space, w_args) w_result = space.eval_frame(w_args) return w_result app_eval = appfunc(eval) The base class 'Wrappable' contains the logic that allows the interpreter to control the behavior of wrapped versions of its instances. More about it below. The essential idea is that only the class attributes starting with 'app_' are visible to wrapped objects. To read or write the attribute xyz, a wrapped Wrappable instance will look for a class attribute 'app_xyz'. It can be: * an appfield_t instance, the equivalent of CPython's member list. Above, the definition for 'app_co_code' means that the wrapped 'self' has a 'co_code' attribute which can be read from the interpreter-level 'co_code' attribute and wrapped. If the attribute were not read-only, it could be written to but would only accept a string, which would be unwrapped before it is stored in the interpreter-level 'co_code'. * app_decode() is a function, which is just executed at the application-level as a method with the extra 'self' argument. The app_decode() function is not supposed to be called directly from the interpreter-level, but it can be called through the 'decode' name thanks to the bridge 'app2interp(app_decode)'. * conversely, eval() is just an interpreter-level method, but it can be called from application-level thanks to the definition of app_eval(). This is the equivalent of CPython's method table. It only works if eval() has a "standard" signature: 'self', followed by 'space', followed by wrapped arguments only. We should probably design a way to describe more about the signature in the call to appfunc(), like specifying that some arguments are strings or integers -- the declarative equivalent of PyArg_ParseTuple(). * a way to define properties (the getset list of CPython) would be nice to have too. * finally, for convenience, an app_xyz attribute could be just a simple wrappable constant, e.g. "app_CO_VARARGS = CO_VARARGS" to make it visible at application-level. The trick of systematically adding an app_ prefix is useful to define special attribute names (__xyz__) that would otherwise have another meaning at interpreter-level. For example, a module definition could start: class Builtin(BuiltinModule): app___name__ = '__builtin__' This makes the above constant visible at application-level under the name '__name__'. Some of the above also applies to global helpers (as opposed to methods): global variables called app_xyz are visible under the name xyz in the application-level helpers. (It seems more regular this way, and generally cleaner than making all global variables visible by default to helpers, without requiring nor allowing the app_ prefix.) Finally, as an extension of the idea, special method names 'app___xyz__' could be used to control the details of the operations on internal classes, if needed; for example, to define attribute reading without using appfield: def my_custom_getattr(self, space, w_attr): .... 
app___getattr__ = appfunc(my_custom_getattr) or even defining getattr() itself as an application-level helper (!): def app___getattr__(self, attr): ... About the implementation: the interpreter-level wrappable classes should inherit from Wrappable, which can be checked for in object spaces wrap() methods. This integrates with multimethods or whatever a particular object space uses for dispatch with a trick similar to StdObjSpace's current W_CPythonObject: we would have, say, a W_InternalObject class that works like W_CPythonObject but just uses generic code from pypy.interpreter.something to do the operations. I guess this could replace W_CPythonObject. A bientot, Armin. From arigo at tunes.org Sat Jul 12 23:08:56 2003 From: arigo at tunes.org (Armin Rigo) Date: Sat, 12 Jul 2003 14:08:56 -0700 (PDT) Subject: [pypy-dev] RE: type problems In-Reply-To: <20030710123205.94724.qmail@web41311.mail.yahoo.com>; from hfastjava@yahoo.com on Thu, Jul 10, 2003 at 05:32:05AM -0700 References: <20030710123205.94724.qmail@web41311.mail.yahoo.com> Message-ID: <20030712210856.5B19F1F6F1@bespin.org> Hello Hunter, On Thu, Jul 10, 2003 at 05:32:05AM -0700, Hunter Peress wrote: > That may be a good idea. I am currently hunting down a deeply > nested Exception problem and knowing which frames get created executing > which code helps in understanding what's going on. Without lots of > print-statements this would be very difficult (not that it really gets > easy :-). There are several problems with the current way exceptions are printed. For example the tracebacks are incredibly long. This is due to the fact that a number of nested interpreter-level calls are needed for each application-level call. Internally, the OperationError can record which lines in the traceback correspond to which application-level call; the problem is mainly how to display it reasonably. Another problem is that application-level frames and multimethod calls are extremely confusing in the traceback -- you can't even know which multimethod was called, because they all involve the same code in multimethod.py. I would suggest we add debugging commands in the code of pypy; something that would allow us to build a custom traceback, and also possibly record the call tree. I'm thinking about explicit "pypy.debug.enter/leave" calls around multimethod calls and calls to application-level functions. Customizing pdb also looks like a good idea. Uncaught exceptions should automatically throw us into our debugger. A bientot, Armin. From hpk at trillke.net Sat Jul 12 23:42:02 2003 From: hpk at trillke.net (holger krekel) Date: Sat, 12 Jul 2003 23:42:02 +0200 Subject: [pypy-dev] RE: type problems In-Reply-To: <20030712210856.5B19F1F6F1@bespin.org>; from arigo@tunes.org on Sat, Jul 12, 2003 at 02:08:56PM -0700 References: <20030710123205.94724.qmail@web41311.mail.yahoo.com> <20030712210856.5B19F1F6F1@bespin.org> Message-ID: <20030712234202.W3869@prim.han.de> Hello Armin, [Armin Rigo Sat, Jul 12, 2003 at 02:08:56PM -0700] > Hello Hunter, > > On Thu, Jul 10, 2003 at 05:32:05AM -0700, Hunter Peress wrote: > > That may be a good idea. I am currently hunting down a deeply > > nested Exception problem and knowing which frames get created executing > > which code helps in understanding what's going on. Without lots of > > print-statements this would be very difficult (not that it really gets > > easy :-). ... not your fault, but Mr. Hunter Press cited me without proper indication ... > There are several problems with the current way exceptions are printed. 
For > example the tracebacks are incredibly long. This is due to the fact that a > number of nested interpreter-level calls are needed for each application-level > call. Internally, the OperationError can record which lines in the traceback > correspond to which application-level call; the problem is mainly how to > display it reasonably. Another problem is that application-level frames and > multimethod calls are extremely confusing in the traceback -- you can't even > know which multimethod was called, because they all involve the same code in > multimethod.py. It really depends on what you want to debug, e.g. opcodes, interpeter-level or object-space level stuff or interactions between all of this etc. > I would suggest we add debugging commands in the code of pypy; something that > would allow us to build a custom traceback, and also possibly record the call > tree. I'm thinking about explicit "pypy.debug.enter/leave" calls around > multimethod calls and calls to application-level functions. I think that having easy-to-use "aspect-oriented" traces would be nice. Actually i'd like to say: def __init__(self): t = functrace('frame construction') t.dump('self.co_*') # or so def eval_code(self): t = functrace('frame') and this means that 1) functrace gets the name of the function by relying on 'self' beeing in the parent's frame locals (you may explicitely specify that of course) 2) the given strings are the "aspects" which you can use to parametrize what is beeing displayed 3) we rely on DECREF(t) when the scope is left (you don't want to do a try-finally block all the time, do you?). or alternatively call "t.close()" explicitely such a mechanism - reyling somewhat on introspection - would allow for a pretty flexible tracing parametrization depending on what you want to debug. > Customizing pdb also looks like a good idea. Uncaught exceptions should > automatically throw us into our debugger. Didn't Michael already do something like that? cheers, holger From mwh at python.net Mon Jul 14 12:42:07 2003 From: mwh at python.net (Michael Hudson) Date: Mon, 14 Jul 2003 11:42:07 +0100 Subject: [pypy-dev] Re: type problems References: <20030710123205.94724.qmail@web41311.mail.yahoo.com> <20030712210856.5B19F1F6F1@bespin.org> <20030712234202.W3869@prim.han.de> Message-ID: <2msmp9qtj4.fsf@starship.python.net> holger krekel writes: > I think that having easy-to-use "aspect-oriented" traces would be nice. > Actually i'd like to say: > > def __init__(self): > t = functrace('frame construction') > > t.dump('self.co_*') # or so > > def eval_code(self): > t = functrace('frame') > > and this means that > > 1) functrace gets the name of the function by relying on 'self' beeing in > the parent's frame locals (you may explicitely specify that of course) > > 2) the given strings are the "aspects" which you can use to parametrize > what is beeing displayed > > 3) we rely on DECREF(t) when the scope is left (you don't want to > do a try-finally block all the time, do you?). or alternatively > call "t.close()" explicitely Can't you use Python's setprofile/settrace stuff for this? Would make PyPy even slower, of course :-) > such a mechanism - reyling somewhat on introspection - would allow for > a pretty flexible tracing parametrization depending on what you want > to debug. Yeah, should be possible. >> Customizing pdb also looks like a good idea. Uncaught exceptions should >> automatically throw us into our debugger. > > Didn't Michael already do something like that? Eh, I started. 
There's not much useful there, yet. Cheers, mwh -- I've even been known to get Marmite *near* my mouth -- but never actually in it yet. Vegamite is right out. UnicodeError: ASCII unpalatable error: vegamite found, ham expected -- Tim Peters, comp.lang.python From mwh at python.net Mon Jul 14 19:57:09 2003 From: mwh at python.net (Michael Hudson) Date: Mon, 14 Jul 2003 18:57:09 +0100 Subject: [pypy-dev] Re: [pypy-svn] rev 1121 - in pypy/branch/builtinrefactor/pypy:interpreter module module/test References: <20030712095054.1BA185A31D@thoth.codespeak.net> <20030712210850.AD8D81F6F1@bespin.org> Message-ID: <2moezxq9e2.fsf@starship.python.net> Armin Rigo writes: > Some more refactoring would be nice here. I'm posting here something we > discussed at the end of the latest sprint: from the point of view of decoding > the arguments, a frame is just a repository of 'fast locals' slots. Hence the > idea that instead of decoding arguments and building a real 'locals' > dictionary, we directly put the arguments in a frame-like class. As it only > makes sense to use a real PyFrame instance when we are about to call a Python > (app-level) function, we could use a base class for the general case: > > + ActivationFrame > + PyFrame [the current frame class] > + BuiltinFrame > > + ExecutableCode [new proposed name for PyBaseCode] > + PyCode [new proposed name for PyByteCode] > + BuiltinCode [new proposed name for PyBuiltinCode] > > ExecutableCode instances are used as the 'func_code' attribute of function > objects; they would have methods to create an ActivationFrame instance of the > correct corresponding class, and then the ActivationFrame's list of 'fast > locals' would be populated with the generic ExecutableCode.build_arguments(). > The base ActivationFrame is essentially nothing more than a 'fast locals' > container. It is easy to either use that as the real 'fast locals' during > interpretation in PyFrame, or pick the arguments from the list in BuiltinFrame > to pass them to the built-in interpreter-level function. I think it looks like > a nice way to minimize the number of times the arguments must be copied > around. me too Cheers, M. -- Clue: You've got the appropriate amount of hostility for the Monastery, however you are metaphorically getting out of the safari jeep and kicking the lions. -- coonec -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html From hpk at trillke.net Tue Jul 15 16:45:30 2003 From: hpk at trillke.net (holger krekel) Date: Tue, 15 Jul 2003 16:45:30 +0200 Subject: [pypy-dev] new pypy-funding mailing list Message-ID: <20030715164530.H3869@prim.han.de> hello pypy, i have just created the pypy-funding mailing list. The PyPy-funding list is a platform for focused discussions about funding issues such as the EU-funding proposal. I rather randomly subscribed a few people to see if everything works. They should have got a welcome-message. If you are interested to discuss those issues and/or spent some time/though trying to get funding then please subscribe here http://codespeak.net/mailman/listinfo/pypy-funding If you have problems, mail me. cheers, holger From amaury.forgeotdarc at ubitrade.com Wed Jul 16 12:14:36 2003 From: amaury.forgeotdarc at ubitrade.com (amaury.forgeotdarc at ubitrade.com) Date: Wed, 16 Jul 2003 12:14:36 +0200 Subject: [pypy-dev] Someone changed the wiki FrontPage to an old version Message-ID: Hello, I keep an interested eye on the PyPy project, and I have just seen that the wiki's FrontPage is considerably shorter than a few days ago. 
http://codespeak.net/moin/pypy/moin.cgi/RecentChanges shows that the content of FrontPage was reverted to an old version. The page now talks about the "next sprint scheduled in Belgium" ;-) and links to the documentation disappeared. Is this intentional? Hopefully the diffs with the previous version are still available. You're doing a very impressive work... continue! -- Amaury Forgeot d'Arc From hpk at trillke.net Wed Jul 16 13:35:51 2003 From: hpk at trillke.net (holger krekel) Date: Wed, 16 Jul 2003 13:35:51 +0200 Subject: [pypy-dev] Someone changed the wiki FrontPage to an old version In-Reply-To: ; from amaury.forgeotdarc@ubitrade.com on Wed, Jul 16, 2003 at 12:14:36PM +0200 References: Message-ID: <20030716133551.I3869@prim.han.de> [amaury.forgeotdarc at ubitrade.com Wed, Jul 16, 2003 at 12:14:36PM +0200] > Hello, > > I keep an interested eye on the PyPy project, and I have just seen > that the wiki's FrontPage is considerably shorter than a few days ago. > > http://codespeak.net/moin/pypy/moin.cgi/RecentChanges > shows that the content of FrontPage was reverted to an old version. > > The page now talks about the "next sprint scheduled in Belgium" ;-) > and links to the documentation disappeared. > > Is this intentional? > Hopefully the diffs with the previous version are still available. Hmmm, it seems that some search engine (inktomisearch) has traversed the site and "tried" the revert links. I reverted back to the last valid change from Anna. Probably we should try to teach the search engines not to follow certain links or so. cheers, holger From arigo at tunes.org Thu Jul 17 11:20:48 2003 From: arigo at tunes.org (Armin Rigo) Date: Thu, 17 Jul 2003 11:20:48 +0200 Subject: [pypy-dev] Someone changed the wiki FrontPage to an old version In-Reply-To: <20030716133551.I3869@prim.han.de> References: <20030716133551.I3869@prim.han.de> Message-ID: <20030717092048.GA31071@magma.unil.ch> Hello Holger, On Wed, Jul 16, 2003 at 01:35:51PM +0200, holger krekel wrote: > > I keep an interested eye on the PyPy project, and I have just seen > > that the wiki's FrontPage is considerably shorter than a few days ago. > > Hmmm, it seems that some search engine (inktomisearch) has traversed > the site and "tried" the revert links. I receive automatic notifications of Wiki changes; I am a bit surprised to have been lately receiving several notifications each day, all about the front page. I would strongly suggest to be sure that following random links cannot interfere with the Wiki content. I tend to think that *no* action should *ever* be triggered by mere links, but only by form buttons (preferrably POST ones). A bientot, Armin. From hpk at trillke.net Thu Jul 17 12:30:26 2003 From: hpk at trillke.net (holger krekel) Date: Thu, 17 Jul 2003 12:30:26 +0200 Subject: [pypy-dev] Someone changed the wiki FrontPage to an old version In-Reply-To: <20030717092048.GA31071@magma.unil.ch>; from arigo@tunes.org on Thu, Jul 17, 2003 at 11:20:48AM +0200 References: <20030716133551.I3869@prim.han.de> <20030717092048.GA31071@magma.unil.ch> Message-ID: <20030717123026.P3869@prim.han.de> [Armin Rigo Thu, Jul 17, 2003 at 11:20:48AM +0200] > Hello Holger, > > On Wed, Jul 16, 2003 at 01:35:51PM +0200, holger krekel wrote: > > > I keep an interested eye on the PyPy project, and I have just seen > > > that the wiki's FrontPage is considerably shorter than a few days ago. > > > > Hmmm, it seems that some search engine (inktomisearch) has traversed > > the site and "tried" the revert links. 
> > I receive automatic notifications of Wiki changes; I am a bit surprised to > have been lately receiving several notifications each day, all about the front > page. > > I would strongly suggest to be sure that following random links cannot > interfere with the Wiki content. I tend to think that *no* action should > *ever* be triggered by mere links, but only by form buttons (preferrably POST > ones). Problem is that this is rather deeply in the moinmoin code and i am not sure i want to mess with it at the moment. It is strange, though, that all other search engines seem to ignore those particular links except in the recent case. If anybody cares enough please open an issue in the tracker and assign it to me. cheers, holger From hpk at trillke.net Fri Jul 18 15:14:14 2003 From: hpk at trillke.net (holger krekel) Date: Fri, 18 Jul 2003 15:14:14 +0200 Subject: [pypy-dev] new (documentation) infrastructure Message-ID: <20030718151414.A21766@prim.han.de> hello pypy, if you just want to see some results of the infrastructure-hacks: http://codespeak.net/pypy/?doc jum and me hacked the previous days on various bits of the (web)- infrastructure for the pypy project. One outcome is the spawn of a new "vpath" project which encapsulates local filesystem pathes as well as versioned subversion pathes. It makes it very easy to e.g. save/load objects to filesystem pathes (or even from the svn-repo) in a uniform "expected" way. Here is a real-world Example for gathering all (previously generated) "subversion info" and the ReST-generated html from a filesystem subtree. from vpath.local import Path from vpath.filters import endswidth, nodotdir docpath = Path('/projects/pypy/doc') for path in docpath.visit(endswith('.txt'), rfunc=nodotdir): infopath = path.newsuffix('.svninfo') info = infopath.load() info.htmlpath = path.newsuffix('.html') infolist.append(info) The idea to put this "vpath" module into its own project was made possible by a new svn-concept we discovered: you can stick a property called "svn:externals" to a directory. This property contains name-url pairs. The 'name' is put into the namespace of the directory and the bound object is taken from "importing" what the 'url' points to. Thus if you want to use the "vpath" module in your project you just do svn propset "svn:externals" \ "vpath http://codespeak.net/svn/vpath/trunk/dist" . then subsequently on "svn up" you'll get the trunk-revision of 'vpath' - externally fetched (it needn't be in the same repo). If you modify the vpath-module (because you have commit rights) the others working on it get a diff. This makes it extremely easy to couple projects in a version-controlled way. Using this and some other fresh infrastructure we now have a "doc" item in the menu-bar of the pypy-site http://codespeak.net/pypy/?doc which should make it easy to browse documentation and spot which documents are outdated etc. The web-page reflects the "trunk" version of our "doc"-directory. If you checkin new ReST-docs then on the server-side a script will generate all information which is subsequently used from the cgi to display it dynamically. Now i have a deal to offer. If someone writes a "plugin" (or whatever) for ReST that accepts parameters and is allowed to return some html fragment which is inserted in-place (where the plugin is invoked) then i'll do a syntax-highlighted cross-referenced "source-view" of the pypy source, including presenting "document-links" to a source-file (which are configured using some svn-property like "pypy:doc"). 
Both projects are probably a bit of work but it would certainly help to increase the "cross-referencedness" of documentation and source code (including inlining snippets from a source-file to have up-to-date examples).

Please bear with me if i messed up the web-site in some places. The messiest part definitely was to get the CSS-related rendering "right".

so much for now,

cheers, holger

From arigo at tunes.org Sat Jul 19 19:38:39 2003
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 19 Jul 2003 19:38:39 +0200
Subject: [pypy-dev] new (documentation) infrastructure
In-Reply-To: <20030718151414.A21766@prim.han.de>
References: <20030718151414.A21766@prim.han.de>
Message-ID: <20030719173839.GA16552@magma.unil.ch>

Hello Holger,

Is there a standard way to write documentation attached to a source file? Should we have doc/doc_xxx.txt files just like we have test/test_xxx.py files for each source file xxx.py? Or should we have a subtree of the top-level doc/ that mimics the pypy/ subtree?

A bientot,

Armin.

From hpk at trillke.net Sun Jul 20 16:44:59 2003
From: hpk at trillke.net (holger krekel)
Date: Sun, 20 Jul 2003 16:44:59 +0200
Subject: [pypy-dev] new (documentation) infrastructure
In-Reply-To: <20030719173839.GA16552@magma.unil.ch>; from arigo@tunes.org on Sat, Jul 19, 2003 at 07:38:39PM +0200
References: <20030718151414.A21766@prim.han.de> <20030719173839.GA16552@magma.unil.ch>
Message-ID: <20030720164459.C21766@prim.han.de>

[Armin Rigo Sat, Jul 19, 2003 at 07:38:39PM +0200]
> Hello Holger,
>
> Is there a standard way to write documentation attached to a source file?
> Should we have doc/doc_xxx.txt files just like we have test/test_xxx.py files
> for each source file xxx.py? Or should we have a subtree of the top-level doc/
> that mimics the pypy/ subtree?

One possibility is to set a "pypy:doc" property to contain a list of related/dedicated documents. This would also allow sticking documents to directories, which might be what we usually want.

cheers, holger

From hpk at trillke.net Thu Jul 24 21:46:47 2003
From: hpk at trillke.net (holger krekel)
Date: Thu, 24 Jul 2003 21:46:47 +0200
Subject: [pypy-dev] another website update (design-wise)
Message-ID: <20030724214647.R32350@prim.han.de>

hello everybody,

just a short notice that another cleanup update of the web-page took place. it's still far from perfect but now the layout has been unified to not use tables (yay!). Btw, the diffs for webpage updates are not sent to pypy-svn (unless somebody screams for those).

At least the layout of the page elements should now be pretty stable across all the applications we use (some cgi-scripts, roundup, moin, mailman). I think it's a good time to clean up the doc-hierarchy because it is now nicely visible from the web :-)

cheers, holger

From lac at strakt.com Thu Jul 24 23:40:28 2003
From: lac at strakt.com (Laura Creighton)
Date: Thu, 24 Jul 2003 23:40:28 +0200
Subject: [pypy-dev] another website update (design-wise)
In-Reply-To: Message from holger krekel of "Thu, 24 Jul 2003 21:46:47 +0200." <20030724214647.R32350@prim.han.de>
References: <20030724214647.R32350@prim.han.de>
Message-ID: <200307242140.h6OLeSq3028439@ratthing-b246.strakt.com>

I think things look a lot better, but doc ends up looking like a flat namespace, which it isn't. As we add things the left hand side is already getting too long. I think we need two things.
First of all, we need a 'suggested order in which to read the docs' and secondly, we need a short description of each document so that people who want to just read a few documents don't have to browse the lot.

The lhs clickables are written in a font that has serifs. The rhs, and the docs themselves, are written in a serif-less font. I would prefer a serif font for papers, but not enough to kick up a fuss. Is this the sort of thing which one can specify using one's own browser, or is this something which is set from codespeak? Just curious.

Some of us have some nice slides of us working, et al. Should we put those in?

Laura

From hpk at trillke.net Fri Jul 25 08:32:20 2003
From: hpk at trillke.net (holger krekel)
Date: Fri, 25 Jul 2003 08:32:20 +0200
Subject: [pypy-dev] another website update (design-wise)
In-Reply-To: <200307242140.h6OLeSq3028439@ratthing-b246.strakt.com>; from lac@strakt.com on Thu, Jul 24, 2003 at 11:40:28PM +0200
References: <20030724214647.R32350@prim.han.de> <200307242140.h6OLeSq3028439@ratthing-b246.strakt.com>
Message-ID: <20030725083220.U32350@prim.han.de>

Hello Laura,

[Laura Creighton Thu, Jul 24, 2003 at 11:40:28PM +0200]
> I think things look a lot better, but doc ends up looking like a flat
> namespace, which it isn't. As we add things the left hand side is
> already getting too long. I think we need two things. First of all, we
> need a 'suggested order in which to read the docs' and secondly, we
> need a short description of each document so that people who want to
> just read a few documents don't have to browse the lot.

Our documents generally need some overhaul and i hope that their visibility helps with this. I think that in time we can turn the lhs navigation into a tree-like structure (with open/close of subtrees). But right now flat seems to be better than nested imho.

> The lhs clickables are written in a font that has serifs. The rhs,
> and the docs themselves, are written in a serif-less font. I would prefer
> a serif font for papers, but not enough to kick up a fuss. Is this the
> sort of thing which one can specify using one's own browser, or is this
> something which is set from codespeak? Just curious.

This is set from the CSS and i just modified it per your suggestion.

> Some of us have some nice slides of us working, et al. Should we put those
> in?

Yes, please. They won't be shown automatically because they are probably not text files, though. Maybe in time we should put the "most recently modified" list on the lhs and make an annotated tree-like overview in the main doc screen (including non-text files)?

cheers, holger

From hpk at trillke.net Mon Jul 28 14:26:18 2003
From: hpk at trillke.net (holger krekel)
Date: Mon, 28 Jul 2003 14:26:18 +0200
Subject: [pypy-dev] opcodes bundled in a class?
Message-ID: <20030728142617.L32350@prim.han.de>

hello pypy,

with the builtinrefactor branch i am at the point where i want to eliminate the 'appfile' concept altogether in favour of intermingling app-level and interp-level code directly. And before i start to fix the stdobjspace (which still isn't working) i'd like to get rid of 'opcode_app.py' so that the interpreter is 'appfile'-free.

It's easier to do this when the implementation of the opcodes (interp- or app-level) lives in methods of a to-be-created "Opcode" class.
That's because a class can naturally be instantiated (with a space argument) and all app-level functions can be processed so that they end up being an 'InterpretedFunction' instance which you can seamlessly/natively invoke at interp-level. e.g.

    def BUILD_CLASS(self, f):
        w_meths = f.valuestack.pop()
        w_bases = f.valuestack.pop()
        w_name = f.valuestack.pop()
        w_newclass = self.build_class(w_name, w_bases, w_meths, f.w_globals)
        f.valuestack.push(w_newclass)

    # not callable as is, takes no self-argument
    def app_build_class(name, bases, meths, globals):
        ...
        return metaclass(name, bases, namespace)

I also think that our internal 'InterpretedFunction' or 'Code' object should be responsible for delivering an 'opcode' class specific to its code and space. This would also allow using a different opcode-set (and implementation) for certain functions. Eventually it might be possible to implement or invent bytecodes at app-level. The key task here is to have a compiler package that produces code objects with a specific 'Opcode' class indication.

However, is anybody against putting the opcodes/helpers in a class?

Note that the 'visibility' feature that Armin mentioned earlier is currently not done. But there already is a concept called 'AppVisibleModule', also living in the gateway module. It wraps itself into an app-visible module. E.g. the 'builtin' or 'sys' module are classes that inherit from 'AppVisibleModule'. At wrapping-time all 'w_*' attributes are made visible on the corresponding wrapped module instance. Thus you can access the same (wrapped) object from app-level or interp-level. There is some wrapping taking place but just look at the docstring :-)

cheers, holger

From hpk at trillke.net Wed Jul 30 13:14:25 2003
From: hpk at trillke.net (holger krekel)
Date: Wed, 30 Jul 2003 13:14:25 +0200
Subject: [pypy-dev] funding and so
Message-ID: <20030730131425.Z32350@prim.han.de>

hello pypy-devers,

i have long wanted to clarify my position on pypy funding issues. During the Louvain-La-Neuve sprint i was probably the most skeptical/critical person regarding getting EU-funding for the pypy-project and its developers. That wasn't because i dislike the idea of getting EU-funding in itself. Quite the opposite. But I thought that the group of developers who attended our three sprints weren't organized and prepared enough to actually get the funding proposal into good shape. I know from friends and EU-experienced people that setting up an EU project usually takes half a year (at best). The other problem i saw was that quickly moving into a development model where some people get paid and others don't is a risk in itself. The deadline seems to be the 15th of October, btw.

Some people understood that i am just plainly against getting funding, which is not true. In fact i want to help make funding possible but i am not going to lead much of the effort there. Laura and Jacob have been active in preparing the start of a proposal but i don't know if they will find enough time to move it forward. I sure hope so. Also Anders has been investing time and contacting people, which is great. Still, we should make up our minds if and how we want to go for the October 15th deadline.

My priorities with PyPy definitely lie in development and dev-infrastructure issues, also because i am already co-managing a bigger housing/artists project in my spare time (some of you know it) and also have to take care of earning money myself.
cheers, holger

From jriehl at cs.uchicago.edu Wed Jul 30 21:57:01 2003
From: jriehl at cs.uchicago.edu (Jonathan David Riehl)
Date: Wed, 30 Jul 2003 14:57:01 -0500 (CDT)
Subject: [pypy-dev] Python parsers...
Message-ID: 

Hi all,

I have a Python parser done in "pure" Python (no C extension module dependencies, etc...). I even have an implementation of pgen in Python. Now, I am wondering what the next step is. Shall I continue onward to bytecode compilation? Eventually, I am going to need some instruction about integrating my code in with pypy. I would be more than happy to press onward and attempt to reimplement a static Python to C translator, and I am also interested in more blue-sky stuff like adding optional static typing or playing with type inference. Let me know what y'all think.

Thanks!
-Jon

From jeremy at alum.mit.edu Thu Jul 31 00:09:41 2003
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 30 Jul 2003 18:09:41 -0400
Subject: [pypy-dev] Python parsers...
In-Reply-To: 
Message-ID: 

> Hi all,
> I have a Python parser done in "pure" Python (no C extension
> module dependencies, etc...). I even have an implementation of pgen in
> Python. Now, I am wondering what the next step is. Shall I continue
> onward to bytecode compilation? Eventually, I am going to
> need some instruction about integrating my code in with pypy.

Are you interested in integrating it into the compiler package? The Python CVS trunk is now wide open for development, and the compiler package could definitely use some tidying up.

Jeremy

From hpk at trillke.net Thu Jul 31 00:31:37 2003
From: hpk at trillke.net (holger krekel)
Date: Thu, 31 Jul 2003 00:31:37 +0200
Subject: [pypy-dev] Python parsers...
In-Reply-To: ; from jeremy@alum.mit.edu on Wed, Jul 30, 2003 at 06:09:41PM -0400
References: 
Message-ID: <20030731003137.F32350@prim.han.de>

[Jeremy Hylton Wed, Jul 30, 2003 at 06:09:41PM -0400]
> > Hi all,
> > I have a Python parser done in "pure" Python (no C extension
> > module dependencies, etc...). I even have an implementation of pgen in
> > Python. Now, I am wondering what the next step is. Shall I continue
> > onward to bytecode compilation? Eventually, I am going to
> > need some instruction about integrating my code in with pypy.
>
> Are you interested in integrating it into the compiler package? The Python
> CVS trunk is now wide open for development, and the compiler package could
> definitely use some tidying up.

note that we mirror the Python-CVS trunk into our svn-repository. This makes it easy to have a compiler-package within pypy and merge back and forth with CPython's compiler package.

However, i am not sure how many design goals a pypy-compiler and cpython-compiler will share in the longer run. E.g. we probably want the pypy-compiler to be configurable at run time, e.g.

    from pypy import compiler
    compiler.set_minimal()   # only compile very basic constructs
    compiler.allow_listcomprehension()
    compiler.allow_classes()

    # set a different way to compile try/except/finally constructs [*]
    compiler.set_exception_compiler(MyExcCompiler)

    compiler.compile(testsource) # ...

and CPython's compiler package probably doesn't have such design goals. But then again, it wouldn't hurt if CPython's compiler package could do such things :-)

cheers, holger

[*] this e.g. means that a code object effectively determines opcode (and possibly frame) implementations. Probably 'MyExcCompiler' could just provide the implementations for its "own" opcodes and the machinery would make them available at interpretation-time.
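To illustrate the footnote a bit, here is a very rough sketch of what "a code object determines its opcode implementations" could look like. The class and method names are invented for this mail and don't correspond to anything in the current source:

    class DefaultOpcodes:
        # interp-level implementations, one method per opcode name
        def __init__(self, space):
            self.space = space

        def POP_TOP(self, f):
            f.valuestack.pop()

    class MyExcOpcodes(DefaultOpcodes):
        # an experimental compiler ships its own opcode set by
        # overriding just the opcodes it compiles differently
        def SETUP_FINALLY(self, f):
            pass   # alternative try/finally handling would go here

    class Code:
        # the compiler records which opcode set this code object wants
        OpcodeClass = DefaultOpcodes

    class Frame:
        def __init__(self, space, code):
            self.valuestack = []
            self.code = code
            # the machinery instantiates the opcode set per space
            self.opcodes = code.OpcodeClass(space)

        def dispatch(self, opname):
            # look up and run the implementation at interpretation-time
            getattr(self.opcodes, opname)(self)

'MyExcCompiler' would then simply emit code objects with OpcodeClass set to MyExcOpcodes and the interpreter wouldn't need to know anything special about it.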
From jriehl at cs.uchicago.edu Thu Jul 31 01:20:23 2003
From: jriehl at cs.uchicago.edu (Jonathan David Riehl)
Date: Wed, 30 Jul 2003 18:20:23 -0500 (CDT)
Subject: [pypy-dev] Python parsers...
In-Reply-To: <20030731003137.F32350@prim.han.de>
Message-ID: 

On Thu, 31 Jul 2003, holger krekel wrote:

> [Jeremy Hylton Wed, Jul 30, 2003 at 06:09:41PM -0400]
> >
> > Are you interested in integrating it into the compiler package? The Python
> > CVS trunk is now wide open for development, and the compiler package could
> > definitely use some tidying up.

Sure thing. Let me know what you'd like done. Perhaps we can fire up the compiler SIG one more time? (*grin*)

> note that we mirror the Python-CVS trunk into our svn-repository.
> This makes it easy to have a compiler-package within pypy and merge
> back and forth with CPython's compiler package.

Heh, this would make three repositories for my code. I already have it integrated into my Basil CVS tree. :) I guess the more the merrier!

> However, i am not sure how many design goals a pypy-compiler and
> cpython-compiler will share in the longer run. E.g. we probably
> want the pypy-compiler to be configurable at run time, e.g.
>
>     from pypy import compiler
>     compiler.set_minimal()   # only compile very basic constructs
>     compiler.allow_listcomprehension()
>     compiler.allow_classes()
>
>     # set a different way to compile try/except/finally constructs [*]
>     compiler.set_exception_compiler(MyExcCompiler)
>
>     compiler.compile(testsource) # ...
>
> and CPython's compiler package probably doesn't have such
> design goals. But then again, it wouldn't hurt if CPython's
> compiler package could do such things :-)

This is cool stuff Holger. Could you give me pointers as to where I should head with this? I've had pretty firm ideas in the past, but I'm not sure how they'd be received. For example, using the parse tree of the input grammar, I either have or could easily create a tool that would build a base tree walker class, and then developers could derive custom behavior from there. Perhaps the compiler would not be stateful (in your example you use several method/function calls to set compiler state) so much as a module that exposes multiple tree walker classes. A lot of this technology is key to the Basil project, as I was going to build a control flow and data flow analysis toolset for C/C++/Python.

FYI, as I stated above, the code (or most of it, anyway) is in the Basil SF CVS repository (basil.sf.net) if anyone wants a sneak peek (see the basil.lang.python & basil.parsing packages.) The actual Python parser is still hidden in a test of the PyPgen stuff and is built at run time, but I should get around to pretty-printing it and packaging it soon.

Thanks!
-Jon

From jeremy at alum.mit.edu Thu Jul 31 05:19:34 2003
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 30 Jul 2003 23:19:34 -0400
Subject: [pypy-dev] Python parsers...
In-Reply-To: <20030731003137.F32350@prim.han.de>
Message-ID: 

> However, i am not sure how many design goals a pypy-compiler and
> cpython-compiler will share in the longer run. E.g. we probably
> want the pypy-compiler to be configurable at run time, e.g.
>
>     from pypy import compiler
>     compiler.set_minimal()   # only compile very basic constructs
>     compiler.allow_listcomprehension()
>     compiler.allow_classes()
>
>     # set a different way to compile try/except/finally constructs [*]
>     compiler.set_exception_compiler(MyExcCompiler)
>
>     compiler.compile(testsource) # ...
>
> and CPython's compiler package probably doesn't have such
> design goals. But then again, it wouldn't hurt if CPython's
> compiler package could do such things :-)

My original goal for the compiler package was to make it possible to experiment with variants and extensions to the core language. The best place to start was a compiler for the current language, and I haven't had much time to pursue it beyond that. But it is exactly in line with the long-term goals for the compiler package.

I can help a little. I'll have more spare time now that the 2.3 release is done, but I also want to finish off Python's ast branch. It would be nice if the new py-parser could generate that AST with less effort than the current transformer.

Jeremy