From anto.cuni at gmail.com  Sun Sep  3 12:51:30 2006
From: anto.cuni at gmail.com (Antonio Cuni)
Date: Sun, 03 Sep 2006 12:51:30 +0200
Subject: [pypy-dev] Reference letters for a PhD application
Message-ID: <44FAB3B2.5060809@gmail.com>

Hi all,

as you might know, I have not yet decided what to do when my time at
HHU ends; one of the possibilities is to start a PhD here in Genoa, so
I am filling in the application form.  To apply I need one to three
reference letters describing my abilities, especially in research.  I
was wondering whether someone who holds an official position in a
company or university would be willing to write such a letter (hoping
it will contain a good evaluation :-).

The template for the letters is here:
http://www.disi.unige.it/dottorato/AMMISSIONE/adm-rules/RecommendationLetter.html

Thanks a lot for the help,
ciao
Anto


From Ben.Young at risk.sungard.com  Mon Sep  4 12:59:48 2006
From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com)
Date: Mon, 4 Sep 2006 11:59:48 +0100
Subject: [pypy-dev] Non dynalloc framework roots
Message-ID: 

Hi PyPyrs,

This is mainly for Carl or Michael as it concerns the framework GC,
but I was just interested in how easy it would be to track the roots
without doing any dynamic memory allocation.  I came up with the
following code, and I am interested to see how easily it could be
translated to llpython or whatever the lowest level is called.  If
someone can give me some hints, I may even have a go at implementing
it myself!
Cheers,
Ben

#include <assert.h>

// Code

typedef struct _GCFuncNode {
    int count;
    void*** pointers;
    struct _GCFuncNode* prev;
} GCFuncNode;

GCFuncNode* GC_top_node;

/* Push a per-function record of root addresses onto the global chain. */
void GC_push_func_node(GCFuncNode* node, int count, void*** pointers)
{
    GCFuncNode* old_top;
    node->count = count;
    node->pointers = pointers;
    old_top = GC_top_node;
    GC_top_node = node;
    node->prev = old_top;
}

void GC_pop_func_node(void)
{
    GC_top_node = GC_top_node->prev;
}

typedef struct _GCRootItr {
    GCFuncNode* cur_node;
    int cur_ptr;
} GCRootItr;

void GC_init_root_itr(GCRootItr* itr)
{
    itr->cur_node = GC_top_node;
    itr->cur_ptr = 0;
}

/* Address of the current root, or NULL when the chain is exhausted. */
void** GC_itr_cur(GCRootItr* itr)
{
    if (itr->cur_node) {
        return itr->cur_node->pointers[itr->cur_ptr];
    } else {
        return (void**)0;
    }
}

void GC_itr_next(GCRootItr* itr)
{
    assert(itr->cur_ptr < itr->cur_node->count);
    // Find next non-null pointer
    while (1) {
        itr->cur_ptr++;
        if (itr->cur_ptr == itr->cur_node->count) {
            itr->cur_node = itr->cur_node->prev;
            itr->cur_ptr = 0;
            return;
        }
        if (*itr->cur_node->pointers[itr->cur_ptr] != (void*)0) {
            return;
        }
    }
}

// Example

typedef void* PyObjPtr;

void example_func_2(PyObjPtr a, PyObjPtr b, int x, int y);

void example_func_1(int x, int y)
{
    PyObjPtr a = 0, b = 0, c = 0;

    // GC code
    GCFuncNode node;
    void** GC_local_vars[] = {&a, &b, &c};
    GC_push_func_node(&node, sizeof(GC_local_vars)/sizeof(void*),
                      GC_local_vars);
    // end GC code

    // Do stuff with a, b, c
    example_func_2(a, b, 3, y);

    // GC code
    GC_pop_func_node();
    // end GC code
}

void example_func_2(PyObjPtr a, PyObjPtr b, int x, int y)
{
    PyObjPtr i = 0, j = 0;

    // GC code
    GCFuncNode node;
    void** GC_local_vars[] = {&i, &j};
    GC_push_func_node(&node, sizeof(GC_local_vars)/sizeof(void*),
                      GC_local_vars);
    // end GC code

    // GC code
    GC_pop_func_node();
    // end GC code
}

int main(int argc, char* argv[])
{
    return 0;
}


From arigo at tunes.org  Fri Sep  8 13:41:27 2006
From: arigo at tunes.org (Armin Rigo)
Date: Fri, 8 Sep 2006 13:41:27 +0200
Subject: [pypy-dev] PyPy JIT frontend/backend
Message-ID: 
<20060908114127.GB8257@code0.codespeak.net>

Hi all,

The JIT work finally reached an important step in the past week: there
is an RPython interface between the JIT frontend and any possible
machine-code-level backend.  The interface (called RGenOp and To Be
Documented Soon, Tm) has two implementations so far:

* codegen/llgraph/rgenop.py, which produces flow graphs again (don't
  look, it's quite obscure to do that while still being RPython
  enough).  It is for testing only, so far.

* codegen/i386/ri386genop.py, which produces i386 ("IA32", more
  precisely) machine code into mmap-ed memory blocks.

There are some basic tests about using the interface in
i386/test/test_ri386genop.py.  The tests there have the following
structure:

* def make_xxx(): this is an example RPython function that calls the
  rgenop interface to generate some simple code.

* def test_xxx_interpret(): tries to run make_xxx() with the
  codegen/llgraph implementation, and then llinterprets the produced
  graph.

* def test_xxx_direct(): tries to run make_xxx() with the codegen/i386
  implementation, and then executes the machine code.  Gets nice
  segfaults if the machine code is wrong.

* def test_xxx_compile(): compiles the make_xxx() function into a
  stand-alone executable (via the normal genc route).  This is where
  we check that our ri386genop implementation is really RPython code
  only.

A hint about the "token" thingy: there are many rgenop.xxxToken()
static methods that exist basically because we cannot dynamically
handle, in RPython code, objects like low-level types.  So a "token"
is whatever RPython value the back-end needs in order to perform a
specific operation on that type.  Each variant of token can be
anything - it's returned by rgenop only to be passed back to it - but
they must be RPython values.  For example, 'fieldToken(T, name)' makes
a token that is passed back to methods like genop_getfield().  For the
i386 backend, the fieldToken is simply an integer: the offset of the
field in the structure T.
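(To make the offset-as-token idea concrete, here is a small standalone
C sketch - not PyPy's actual code; the struct and all names are
invented for illustration.  The "token" for a field is precomputed
once as a plain byte offset, and the generic getfield helper uses only
that integer, no type information.)

#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical struct standing in for some low-level structure T. */
typedef struct { int refcount; double value; } T;

/* fieldToken(T, 'value'): computed once, ahead of time -- just an offset. */
static size_t field_token_value = 0;  /* set below via offsetof */

/* genop_getfield() analogue: consumes only the integer token. */
static double getfield_double(void* obj, size_t token)
{
    double out;
    memcpy(&out, (char*)obj + token, sizeof out);
    return out;
}

int main(void)
{
    field_token_value = offsetof(T, value);
    T t = { 1, 3.5 };
    assert(getfield_double(&t, field_token_value) == 3.5);
    return 0;
}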
This offset is used by genop_getfield() to encode the field access in
the machine instruction.  In this way, genop_getfield() is a
completely RPython-friendly function, and only fieldToken() needs to
be special (a memo static method).

More in the above-promised documentation :-)


A bientot,

Armin


From arigo at tunes.org  Sat Sep  9 11:56:27 2006
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 9 Sep 2006 11:56:27 +0200
Subject: [pypy-dev] Non dynalloc framework roots
In-Reply-To: 
References: 
Message-ID: <20060909095627.GA29130@code0.codespeak.net>

On Mon, Sep 04, 2006 at 11:59:48AM +0100, Ben.Young at risk.sungard.com wrote:
> This is mainly for Carl or Michael as it concerns the framework gc, but I
> was just interested how easy it would be to track the roots without doing
> any dynamic memory allocation. I came up with the following code, which I
> am interested to see how easily it could be translated to llpython or
> whatever the lowest level is called.

> // GC code
> GCFuncNode node;
> void** GC_local_vars[] = {&a, &b, &c};
> GC_push_func_node(&node, sizeof(GC_local_vars)/sizeof(void*),
>                   GC_local_vars);
> // end GC code

Yes, it makes sense.  As usual, it's not completely clear whether it
is better or worse than our current approach.  One thing I fear is
that taking the address of all the local variables is going to kill
all gcc optimizations on them.  A more annoying problem is that - in
any case - we cannot take addresses of local variables in our flow
graphs, so far.  There is not even a notion of "all the local
variables" of a graph, only of a block.  It would require quite some
tweaking.  It would be nice to try out...

An intermediate solution between what you suggest and what we do now
would be to keep purely-local variables (i.e. no taking pointers to
them) and save them around calls instead of around the whole function,
but instead of saving to a global stack, save into a precomputed
graph-local Array type with enough items for the maximum number of
variables that need to be saved.
Then, instead of saving/restoring the roots in the global root stack,
they would be saved/restored into the local Array.

I guess the node and GC_local_vars could both be packed in a single
local var-sized Struct, with a 'prev' pointer and an Array at the end.
In terms of flow graphs, to obtain local variables of Struct type, we
use a 'stack' flavor of malloc:

    v = flavored_malloc(S, 'stack')

It gives you a pointer to an otherwise-invisible local variable of
type S.  In other words, in the C code it becomes:

    struct S invisible;
    struct S* v = &invisible;  /* constant pointer used for all accesses */

I'm not sure any more if we support 'stack'-flavored mallocs of
var-sized structures and arrays, but that should be easy to fix.

The meat of the work is to extend the class FrameworkGCTransformer in
pypy.rpython.memory.gctransform.  See StacklessFrameworkGCTransformer
for an example of providing an alternate StackRootIterator.  Its
push_root/pop_root static methods should save/restore a local variable
into/from some place where the StackRootIterator instance can find
them later.  (For now, pop_root can be a no-op as long as we can't
support moving GCs... but hopefully that is going to change soon,
thanks to mwh.)


A bientot,

Armin


From arigo at tunes.org  Mon Sep 11 19:03:27 2006
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 11 Sep 2006 19:03:27 +0200
Subject: [pypy-dev] DLS 2006 Call for Participation
Message-ID: <20060911170327.GA19754@code0.codespeak.net>

Hi all,

Here is the call for participation for the DLS conference.  We have a
30-minute slot there.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Dynamic Languages Symposium 2006

** Call for Participation **

Portland, Oregon, United States, October 23, 2006
http://www.dcl.hpi.uni-potsdam.de/dls2006/
http://www.oopsla.org/2006/program/program/dynamic_languages_symposium.html

Part of OOPSLA 2006, being held October 22-26 in historic Portland,
Oregon (USA).
You can learn all about OOPSLA at www.oopsla.org, and download the
Advance Program PDF at
http://www.oopsla.org/2006//program/oopsla_06_advance_program.pdf

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

** Dynamic Languages Symposium Program **

 8:30 -  9:30  Invited Talk 1
               Openness and simplicity in dynamic systems implementation
               Ian Piumarta

 9:30 - 10:00  Break

10:00 - 11:30  Research Papers 1
               PyPy's Approach to Virtual Machine Construction
               Armin Rigo and Samuele Pedroni

               Runtime Synthesis of High-Performance Code from
               Scripting Languages
               Christopher Mueller and Andrew Lumsdaine

               Interlanguage Migration: From Scripts to Programs
               Sam Tobin-Hochstadt and Matthias Felleisen

11:30 - 13:00  Break

13:00 - 14:00  Invited Talk 2
               Perl 6
               Audrey Tang

14:00 - 14:30  Break

14:30 - 16:00  Research Papers 2
               Hop, a Language for Programming the Web 2.0
               Manuel Serrano, Erick Gallesio, and Florian Loitsch

               Ambient References: Addressing Objects in Mobile Networks
               Tom Van Cutsem, Jessie Dedecker, Stijn Mostinckx,
               Elisa Gonzalez Boix, Theo D'Hondt, and Wolfgang De Meuter

               Hardware Transactional Memory Support for Lightweight
               Dynamic Language Evolution
               Nicholas Riley and Craig Zilles

16:00 - 16:15  Short Break

16:15 - 17:15  Invited Talk 3
               Data Refactoring for Amateurs
               Avi Bryant

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

** Dynamic Languages Symposium Invited Talks **

Openness and simplicity in dynamic systems implementation
Ian Piumarta

The talk will describe a basis for constructing systems (programming
languages, environments and applications) in which users can be
encouraged to adapt the characteristics of the system to match their
needs (rather than the other way round).
Such systems can be evolved from a pair of abstractions for state
(objects communicating by messaging) and behaviour (first-class
functions) that are mutually supporting: objects form structures
representing symbolic expressions that fully describe the message
sequencing and sending that are needed to implement objects.  The
result is extreme late-binding (nothing in the system is immune from
dynamic modification) and extreme simplicity (each abstraction can be
written down in a handful of lines of mathematics, and only slightly
more lines of code).

Ian Piumarta is a computer scientist at Viewpoints Research Institute.
He spends much of his time designing and building systems whose
implementations are maximally open, reflexive, dynamically
self-describing and understandable.  He can be contacted at squeakland
dot org.

Perl 6
Audrey Tang

Perl is a general-purpose language, known for its vast number of
freely available libraries.  The Perl 6 project was started to improve
the language's support for multi-paradigmatic programming, while
retaining compatibility with the existing code base.  This talk
discusses how Perl 6 attempts to reconcile various competing paradigms
in the field of programming language design, such as static vs.
dynamic typechecking, nominal vs. structural subtyping, prototype vs.
class-based objects, and lazy vs. eager evaluation.  Moreover, this
talk also covers the design and development of Pugs, a self-hosting
Perl 6 implementation bootstrapped from Haskell, targeting multiple
runtime environments, including Perl 5, JavaScript and Parrot.

Audrey Tang is a Taiwanese free software programmer, best known for
initiating and leading the Pugs project, a joint effort from the
Haskell and Perl communities to implement the Perl 6 language.
She is also known for internationalization and localization
contributions to several Free Software programs, including SVK, Kwiki,
Request Tracker and Slash, as well as heading Traditional Chinese
translation efforts for various Open Source-related books.  On the
CPAN, Tang initiated over 100 Perl projects, including the popular
Perl Archive Toolkit (PAR), a cross-platform packaging and deployment
tool for Perl 5.  She is also responsible for setting up smoke test
and digital signature systems for CPAN.  Tang is a high school dropout
and a vocal proponent of autodidacticism and individualist anarchism.

Data Refactoring for Amateurs
Avi Bryant

Agile software development methodologies such as Extreme Programming
advocate iterative design via incremental, test-driven code extension
and automated refactorings.  When the goal is to allow non-developers
to build their own solutions, even in a limited way, this approach to
incrementality becomes even more important -- non-developers generally
have even less of the design experience necessary to make reasonable
decisions up front, and need real use and concrete examples to guide
their decisions.

Dabble DB is a commercial data management tool aimed at casual
business users.  It encourages users to evolve data models slowly over
time, starting with untyped and de-normalized models and proceeding to
more sophisticated models only as the need becomes apparent.  We
introduce a set of data refactorings designed to support this usage
pattern, and show selected examples of their real-world use.

Avi Bryant is the co-CEO of Smallthought Systems Inc., a Vancouver
startup focused on web-based collaboration tools.  He is the author
and maintainer of the Seaside web application framework, and is active
in the open source Squeak Smalltalk community.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Dynamic Languages Symposium is part of OOPSLA, the premier gathering
of professionals from industry and academia, all sharing their
experiences with today's object technologies and their offshoots.
OOPSLA appeals to practitioners, researchers, students, educators, and
managers, all of whom discover a wealth of information and the chance
to meet others with similar interests and varied experiences and
knowledge.

You can mold your own OOPSLA experience, attending your choices of
technical papers, practitioner reports, expert panels, demonstrations,
essays, lightning talks, formal and informal educator symposia,
workshops, and diverse tutorials and certificate courses from
world-class experts.  The popular Onward! track presents
out-of-the-box thinking at the frontiers of computing.  Posters
discuss late-breaking results, culminating in the ACM Student Research
Competition.  Try your hand at solving the DesignFest(R) design
challenge.  And of course there are plenty of social opportunities for
mingling and professional networking.

Go to the web (www.oopsla.org) today to reserve your place at OOPSLA
'06.  See you in Portland!

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


From mwh at python.net  Mon Sep 11 23:19:11 2006
From: mwh at python.net (Michael Hudson)
Date: Mon, 11 Sep 2006 22:19:11 +0100
Subject: [pypy-dev] DLS 2006 Call for Participation
References: <20060911170327.GA19754@code0.codespeak.net>
Message-ID: <2mwt8aksog.fsf@starship.python.net>

Armin Rigo writes:

> Runtime Synthesis of High-Performance Code from Scripting Languages
> Christopher Mueller and Andrew Lumsdaine

This is (a) interesting, (b) potentially stealable :-) and (c) looks
frighteningly like translator/asm/ppcgen ...
I don't know whether it's more surprising that they don't seem to know
about what I did, or that I didn't know about their work :-)

Cheers,
mwh

-- 
  : exploding like a turd
  Never had that happen to me, I have to admit.  They do that often in
  your world?                        -- Eric The Read & Dave Brown, asr


From mwh at python.net  Tue Sep 12 09:12:51 2006
From: mwh at python.net (Michael Hudson)
Date: Tue, 12 Sep 2006 08:12:51 +0100
Subject: [pypy-dev] DLS 2006 Call for Participation
In-Reply-To: <4505E377.5080502@scottdial.com>
References: <20060911170327.GA19754@code0.codespeak.net>
 <2mwt8aksog.fsf@starship.python.net> <4505E377.5080502@scottdial.com>
Message-ID: <1B078E8F-A125-4824-ADEA-56E766D2F52D@python.net>

On 11 Sep 2006, at 23:30, Scott Dial wrote:

> Michael Hudson wrote:
>> Armin Rigo writes:
>>> Runtime Synthesis of High-Performance Code from Scripting Languages
>>> Christopher Mueller and Andrew Lumsdaine
>> This is (a) interesting (b) potentially stealable :-) and (c) looks
>> frighteningly like translator/asm/ppcgen ... I don't know whether it's
>> more surprising that they don't seem to know what I did or that I
>> didn't know about that :-)
>
> (a) yes,
> (b) not at the moment (restrictive license)

Yes, I saw that, but I get the impression that we could probably beg :)

> (c) hmm..
>
> And as for not being aware of it, well, AFAICT, this was very recently
> demonstrated at SciPy 2006 (Aug 17th), and I think you can be forgiven
> for not having heard of it before, because I don't think anyone else
> had either.

Oh right.  Then they should at least have heard of PPY:
http://www.python.net/crew/mwh/hacks/PPY.html

It would at least have saved some time typing numbers out of the back
of the PPC architecture manual...

> You can look at it all here: http://www.osl.iu.edu/~chemuell/new/sp.php

Yeah, it was that that finally convinced me that it _wasn't_ based on
my code :-)  It also looks like it might be easier to convert into
RPython than my code.
> I haven't met Chris, but I feel like this is one of those "small
> world" moments, since he literally works in the same building as me.

Heh :)

Cheers,
mwh


From Ben.Young at risk.sungard.com  Wed Sep 13 11:22:06 2006
From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com)
Date: Wed, 13 Sep 2006 10:22:06 +0100
Subject: [pypy-dev] Non dynalloc framework roots
In-Reply-To: <20060909095627.GA29130@code0.codespeak.net>
Message-ID: 

Hi Armin,

Thanks for the reply!

pypy-dev-bounces at codespeak.net wrote on 09/09/2006 10:56:27:

> On Mon, Sep 04, 2006 at 11:59:48AM +0100, Ben.Young at risk.sungard.com wrote:
> > This is mainly for Carl or Michael as it concerns the framework gc, but I
> > was just interested how easy it would be to track the roots without doing
> > any dynamic memory allocation. I came up with the following code, which I
> > am interested to see how easily it could be translated to llpython or
> > whatever the lowest level is called.
>
> > // GC code
> > GCFuncNode node;
> > void** GC_local_vars[] = {&a, &b, &c};
> > GC_push_func_node(&node, sizeof(GC_local_vars)/sizeof(void*),
> >                   GC_local_vars);
> > // end GC code
>
> Yes, it makes sense.  As usual it's not completely clear if it is better
> or worse than our current approach.  One thing I fear is that taking the
> address of all the local variables is going to kill all gcc
> optimizations on them.  A more annoying problem is that - anyway - we
> cannot take addresses of local variables in our flow graphs, so far.
> There is not even a notion of "all the local variables" of a graph; only
> of a block.  It would require quite some tweaking.  It would be nice to
> try out...

Hmm, I hadn't thought about killing gcc optimizations.  I guess the
only way to find out is to do it and see what happens!

> An intermediate solution between what you suggest and what we do now
> would be to still use purely-local variables (i.e.
> no taking pointers to them) and save them around calls instead of
> around the whole function, but instead of saving to a global stack,
> save in a precomputed graph-local Array type with enough items for
> the maximum number of variables that need to be saved.  Then, instead
> of saving/restoring the roots in the global root stack, they would be
> saved/restored into the local Array.
>
> I guess the node and GC_local_vars could both be packed in a single
> local var-sized Struct, with a 'prev' pointer, and an Array at the end.
> In terms of flow graphs, to obtain local variables of Struct type, we
> use a 'stack' flavor of malloc:
>
>     v = flavored_malloc(S, 'stack')
>
> It gives you a pointer to an otherwise-invisible local variable of type
> S.  In other words in the C code it becomes:
>
>     struct S invisible;
>     struct S* v = &invisible;  /* constant pointer used for all accesses */
>
> I'm not sure any more if we support 'stack'-flavored mallocs of
> var-sized structures and arrays, but that should be easy to fix.
>
> The meat of the work is to extend the class FrameworkGCTransformer in
> pypy.rpython.memory.gctransform.  See StacklessFrameworkGCTransformer
> for an example of providing an alternate StackRootIterator.  Its
> push_root/pop_root static methods should save/restore a local variable
> into/from some place where the StackRootIterator instance can find them
> later.  (For now, pop_root can be a no-op as long as we can't support
> moving GCs... but hopefully that is going to change soon, thanks to
> mwh).
>

I think I understand.  If I get some time I'll have a play with some
code.  I'm currently looking at how annotation works, to see how easy
it would be to get generator support in RPython as well.  It would
allow me to fix all the iter... methods in the multidict code easily.
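(For reference, the intermediate scheme Armin sketches could look
roughly like this in the generated C - a sketch only, with invented
names: each function gets one stack-allocated record with a 'prev'
pointer and a root array, locals stay ordinary variables, and roots
are copied in and out only around calls.)

#include <assert.h>

typedef void* PyObjPtr;

/* Hypothetical frame record: linked like the GCFuncNode above, but the
   roots are *copies* of the locals, filled in only around calls. */
typedef struct FrameRoots {
    struct FrameRoots* prev;
    int count;
    PyObjPtr roots[3];          /* sized for this function's max roots */
} FrameRoots;

static FrameRoots* top_frame;

/* What a collector would do: walk the chain of saved roots. */
static int count_live_roots(void)
{
    int n = 0;
    for (FrameRoots* f = top_frame; f; f = f->prev)
        n += f->count;
    return n;
}

static void some_call(void) { assert(count_live_roots() == 2); }

static void example(void)
{
    PyObjPtr a = 0, b = 0;              /* ordinary locals: no &a, &b */
    FrameRoots frame = { top_frame, 0, {0} };
    top_frame = &frame;

    /* ... compute with a and b at full optimization ... */

    /* save roots only around the call */
    frame.roots[0] = a; frame.roots[1] = b; frame.count = 2;
    some_call();
    a = frame.roots[0]; b = frame.roots[1]; frame.count = 0;

    top_frame = frame.prev;
}

int main(void) { example(); return 0; }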
Cheers,
Ben

>
> A bientot,
>
> Armin
> _______________________________________________
> pypy-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev
>


From arigo at tunes.org  Mon Sep 18 09:54:28 2006
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 18 Sep 2006 09:54:28 +0200
Subject: [pypy-dev] schedule()
Message-ID: <20060918075428.GA19879@code0.codespeak.net>

Hi Aurelien,

It seems that the logic objspace has had a schedule() function for a
long time now, instead of just doing the preemptive scheduling
automatically.  As we've always said, it is very easy to do the
preemptive scheduling bit, but as you never followed up on that, I
guess I have to ask again.  Do you want me to add a hook in the logic
objspace that calls schedule() every Nth bytecode instruction?


A bientot,

Armin


From aurelien.campeas at logilab.fr  Mon Sep 18 10:47:26 2006
From: aurelien.campeas at logilab.fr (Aurélien Campéas)
Date: Mon, 18 Sep 2006 10:47:26 +0200
Subject: [pypy-dev] schedule()
In-Reply-To: <20060918075428.GA19879@code0.codespeak.net>
References: <20060918075428.GA19879@code0.codespeak.net>
Message-ID: <20060918084726.GB31821@crater.logilab.fr>

Hi Armin,

On Mon, Sep 18, 2006 at 09:54:28AM +0200, Armin Rigo wrote:
> Hi Aurelien,
>
> It seems that the logic objspace has had a schedule() function for a
> long time now, instead of just doing the preemptive scheduling
> automatically.  As we've always said, it is very easy to do the
> preemptive scheduling bit, but as you never followed up on that, I guess

Well, it's nice to know.  I didn't ask further because it seemed
neither 'very easy' nor high-priority (from my pov) at the time (a few
months ago).

> I have to ask again.  Do you want me to add a hook in the logic objspace
> that calls schedule() every Nth bytecode instruction?

Sure.

Thanks,
Aurélien.
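(The hook discussed above - calling schedule() every Nth bytecode -
amounts to a countdown in the dispatch loop.  A schematic C sketch,
with hypothetical names and an arbitrary N, not the logic objspace's
actual code:)

#include <assert.h>

#define SCHEDULE_INTERVAL 100   /* the "N": hypothetical value */

static int ticker = SCHEDULE_INTERVAL;
static int schedule_calls = 0;

/* Stand-in for the objspace's schedule(): switch to the next thread. */
static void schedule(void) { schedule_calls++; }

/* Called once per bytecode from the dispatch loop. */
static void bytecode_trace_hook(void)
{
    if (--ticker == 0) {
        ticker = SCHEDULE_INTERVAL;
        schedule();
    }
}

int main(void)
{
    int i;
    for (i = 0; i < 1000; i++)   /* "interpret" 1000 bytecodes */
        bytecode_trace_hook();
    assert(schedule_calls == 10);
    return 0;
}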
From cfbolz at gmx.de  Mon Sep 18 20:01:08 2006
From: cfbolz at gmx.de (Carl Friedrich Bolz)
Date: Mon, 18 Sep 2006 20:01:08 +0200
Subject: [pypy-dev] PyPy Sprint Announcement, Duesseldorf 30 Oct - 5 Nov
Message-ID: <450EDEE4.2020601@gmx.de>

Hi all!

The next PyPy sprint will be held in the Computer Science department
of Heinrich-Heine Universitaet Duesseldorf from the 30th of October to
the 5th of November 2006.

Topics and goals
----------------

The topics of the sprint are not fixed yet.  We will progress on the
subjects that we are currently working on, while giving a special
priority to any topic that "non-core" people find interesting.  There
are many topics that could fit both categories :-)  Here are some
examples:

* Just-In-Time work.  Two sub-topics:

  - write and/or optimize a machine-code backend (we have 386 only so
    far)

  - work on turning simple interpreters into JIT compilers (we cannot
    do this for the whole of the PyPy interpreter yet; we're getting
    there small step by small step).

* Optimization of core Python data types, making full use of PyPy's
  flexible architecture and python-implemented (and then translated)
  type system.  (We already have various dict and str implementations.)

* "Next-step stuff" that will require some thinking and design:

  - distribution (where a single program runs on multiple machines)

  - persistence (save an "image" of a running program, or a part of it)

  - security (in many possible senses of the word)

* Working on the py.test testing tool:

  - py.test recently grew some distribution features which are still
    rough around the edges and could use improvement

  - there are some more ideas for py.test features around, like adding
    profiling capabilities (and more)

* Work on the PyPy build tool: there are plans to provide a tool that
  allows one to flexibly configure PyPy and to request builds from a
  set of build servers.  If there is interest there could be work in
  this area.
* and, as always, there is the topic of implementing or completing
  core extension modules (e.g. socket...).  This is hacking with a mix
  of ctypes and RPython.

Location
--------

The sprint will (probably) take place in a seminar room of the
geography department (which is getting assimilated by the CS
department and is below it).  It is in building 25.12 of the
university campus.  For travel instructions see
http://stups.cs.uni-duesseldorf.de/anreise/esbahn.php

Registration
------------

If you'd like to come, please subscribe to the `pypy-sprint mailing
list`_ and drop a note about your interests and post any questions.
More organisational information will be sent to that list.  We'll keep
a list of `people`_ which we'll update (you can do so yourself if you
have codespeak commit rights).

.. _`pypy-sprint mailing list`: http://codespeak.net/mailman/listinfo/pypy-sprint
.. _`people`: http://codespeak.net/pypy/extradoc/sprintinfo/ddorf2006b/people.html

Cheers,

Carl Friedrich Bolz & the PyPy team