From tomspur at fedoraproject.org Sun Aug 1 12:10:51 2010 From: tomspur at fedoraproject.org (Thomas Spura) Date: Sun, 1 Aug 2010 18:10:51 +0200 Subject: [IPython-dev] correct test-suite In-Reply-To: References: <20100718171412.42f4e970@earth> <20100726091850.218c47ad@earth> <20100729105446.522921bc@earth> Message-ID: <20100801181051.0bf3fc6a@earth> On Sat, 31 Jul 2010 16:40:28 -0700, Fernando Perez wrote: > Great, this is much better > > On Thu, Jul 29, 2010 at 1:54 AM, Thomas Spura > wrote: > > I wrote a little script that creates an ipython*.xz, so I can build > > a random snapshot from ipython as an rpm package and install it > > properly to run iptest on it. > > > > Now the failures are down to 2; hopefully you can see why they are > > failing: > > These are 'trivial' failures, for some reason on your system the > output is garbled, but in fact the test is running OK. Could you > please open a ticket with your OS details and this failure? Though > it's not serious, I'd like to have it fixed nonetheless so we don't > get these false positives. I plan to update to F-14 in a few weeks, so I can retest this with python 2.7. So I'll hold off on filing a ticket until I can check whether python 2.7 has the same problem (if it works there, it's ignorable). > But in practice you are OK, as all tests are really passing and these > are failures of the test detection, not of the underlying condition > being tested. Great. Now I've tested "my_bundled_libs" branch again and the first commit there is ready to get applied. There are the same tests failing for me... That's the commit: http://github.com/tomspur/ipython/commit/eba34c585ae2309f74ed83c8f0348792a0c20bb5 The next commit on that branch is not ready to merge, because I might be too optimistic with using Python's own uuid module instead of guid... 
So don't merge this commit, just the one from above: http://github.com/tomspur/ipython/commit/dedc4193e3ebc206999922d9cbeb79ac75ae16ee Thanks Thomas From wackywendell at gmail.com Tue Aug 3 18:31:03 2010 From: wackywendell at gmail.com (Wendell Smith) Date: Tue, 03 Aug 2010 18:31:03 -0400 Subject: [IPython-dev] Terminal-based frontend Message-ID: <4C5898A7.9090409@gmail.com> Hi all! I'm still working steadily away at a terminal frontend with fancy stuff like input coloring, auto-help, auto-complete, etc. - I was calling it 'curses-based' earlier, but now that its urwid-based, it isn't based on the curses library... but its the same idea. If you want to look at my progress, please use 'git clone git://github.com/wackywendell/ipyurwid.git', and follow the directions in the README. Anyways, although there is plenty still to do with the widgets, it's come far enough for me to start worrying about integrating into ipython. I haven't touched that as I've been waiting for the dust to settle around pyzmq, but... I think its time. What I'm looking for ideally is a ipython-frontend class I can use that lets me send and receive input/output, and also has methods for running help on an object, source lookup, completion possibilities for a word, etc. I noticed the old frontend classes are gone; what are the plans to replace them? I'd be happy to be involved with that, too, if that would be helpful - but I don't want to step on anyone's toes, and also there are, I'm sure, a lot of different things that are wanted out of that... what's the status on that? How could I be helpful there? Let me know... -Wendell From satra at mit.edu Wed Aug 4 11:44:54 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Wed, 4 Aug 2010 11:44:54 -0400 Subject: [IPython-dev] space after tabbing Message-ID: hi folks, on 0.10.1, if i hit tab after typing something like: import sci it does import scipyS where S stands for a white space. this is on ubuntu. i'm wondering if i'm missing a particular package. 
cheers, satra -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Wed Aug 4 12:31:22 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 4 Aug 2010 11:31:22 -0500 Subject: [IPython-dev] space after tabbing In-Reply-To: References: Message-ID: On Wed, Aug 4, 2010 at 10:44 AM, Satrajit Ghosh wrote: > hi folks, > > on 0.10.1, if i hit tab after typing something like: > > import sci > > it does > > import scipyS > > where S stands for a white space. > > this is on ubuntu. i'm wondering if i'm missing a particular package. > > cheers, > > satra > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > There is a bug report about this issue at https://bugs.launchpad.net/ipython/+bug/470824 It seems like this bug was fixed. I lag behind the developments these days. My IPython v0.10 -rev1210 still has this issue. I also see that github (http://github.com/ipython/ipython/issues) page is listing only new issues, right? -- Gökhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Wed Aug 4 12:42:25 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 4 Aug 2010 11:42:25 -0500 Subject: [IPython-dev] space after tabbing In-Reply-To: References: Message-ID: On Wed, Aug 4, 2010 at 11:31 AM, Gökhan Sever wrote: > > It seems like this bug was fixed. I lag behind the developments these > days. My IPython v0.10 -rev1210 still has this issue. Not meant to be misleading; this was a Python related issue. (Mine is at v2.6.2, and apparently fixed at 2.6.4). I have yet to upgrade to Fedora 13 :) Gökhan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Wed Aug 4 15:16:17 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 4 Aug 2010 12:16:17 -0700 Subject: [IPython-dev] space after tabbing In-Reply-To: References: Message-ID: On Wed, Aug 4, 2010 at 9:42 AM, Gökhan Sever wrote: > > Not meant to be misleading; this was a Python related issue. (Mine is at > v2.6.2, and apparently fixed at 2.6.4). > I have yet to upgrade to Fedora 13 :) It's indeed a python bug, mercifully fixed in python 2.6.5. It drove me bonkers for a while. Cheers, f From wackywendell at gmail.com Wed Aug 4 16:03:49 2010 From: wackywendell at gmail.com (Wendell Smith) Date: Wed, 04 Aug 2010 16:03:49 -0400 Subject: [IPython-dev] Terminal-based frontend Message-ID: <4C59C7A5.5010501@gmail.com> Hi all! I'm still working steadily away at a terminal frontend with fancy stuff like input coloring, auto-help, auto-complete, etc. - I was calling it 'curses-based' earlier, but now that its urwid-based, it isn't based on the curses library... but its the same idea. If you want to look at my progress, please use 'git clone git://github.com/wackywendell/ipyurwid.git', and follow the directions in the README. Anyways, although there is plenty still to do with the widgets, it's come far enough for me to start worrying about integrating into ipython. I haven't touched that as I've been waiting for the dust to settle around pyzmq, but... I think its time. What I'm looking for ideally is a ipython-frontend class I can use that lets me send and receive input/output, and also has methods for running help on an object, source lookup, completion possibilities for a word, etc. I noticed the old frontend classes are gone; what are the plans to replace them? I'd be happy to be involved with that, too, if that would be helpful - but I don't want to step on anyone's toes, and also there are, I'm sure, a lot of different things that are wanted out of that... what's the status on that? How could I be helpful there? Let me know... 
-Wendell P.S. I sent this yesterday, but I didn't even receive it from the list, so I'm sending it again. I apologize to anyone who receives it twice... -------------- next part -------------- An HTML attachment was scrubbed... URL: From justin.t.riley at gmail.com Thu Aug 5 09:51:55 2010 From: justin.t.riley at gmail.com (Justin Riley) Date: Thu, 05 Aug 2010 09:51:55 -0400 Subject: [IPython-dev] SciPy Sprint summary In-Reply-To: References: <4C42B09F.50106@gmail.com> <4C43455F.1050508@gmail.com> <4C45B72F.5020000@gmail.com> Message-ID: <4C5AC1FB.4060708@gmail.com> On 07/23/2010 10:07 PM, Satrajit Ghosh wrote: > i think it still makes sense to add it in. it should be identical to the > --queue option in that it's a switch. unfortunately, i do know of a lot > of places where tcsh is the default shell! So let me back up a bit. The reason you were having issues when your shell was /bin/csh (SGE's default shell) was because of the following line in the *generated* job script from ipcluster: eid=$(($SGE_TASK_ID - 1)) This line doesn't work in tcsh/csh but it does in sh/bash/zsh and this is the real underlying issue around needing to mess with the shell in the generated job script at all. The solution I suggested was to set /bin/sh as the default shell for all generated job scripts submitted to PBS/SGE/LSF. It is possible to specify the shell for a given job to use regardless of the queueing system's default (e.g. PBS/LSF will obey the job script's shebang #!, SGE needs #$ -S /bin/sh) and I modified the ipcluster.py code to force /bin/sh for PBS/SGE/LSF when using the generated code. I did this because it doesn't matter what shell the queueing system (or user) uses by default given that we're being explicit about wanting /bin/sh AND /bin/sh exists on every *NIX system I've encountered (although that assumption may be wrong). 
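[Editor's note] The approach Justin describes can be sketched as a minimal generated job script. The `#$ -S /bin/sh` directive and the `eid=$(($SGE_TASK_ID - 1))` arithmetic are from the thread; the echoed output and the commented-out launch command are hypothetical illustration, and `SGE_TASK_ID` is set by hand here only so the sketch runs outside SGE (the queueing system sets it for real array jobs):

```shell
#!/bin/sh
#$ -S /bin/sh    # SGE ignores the shebang, so force /bin/sh explicitly
# SGE sets SGE_TASK_ID for array jobs; hard-coded here for illustration
SGE_TASK_ID=3
# POSIX arithmetic expansion -- this is the line that breaks under csh/tcsh
eid=$(($SGE_TASK_ID - 1))
echo "engine id: $eid"
# ipengine --engine-id=$eid   # hypothetical engine-launch command
```

Because the shell is forced to /bin/sh in the script itself, the user's (or the queueing system's) default shell never matters.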
Other templates used by ipcluster.py have /bin/sh in the shebang as well so the assumption about /bin/sh existing has been made elsewhere within the ipcluster.py code. Having a --shell option doesn't really fix the issue because the user will still need to know that they need to change the shell for the generated code in the first place (because of the non-standard decrement operation above). Ideally the user shouldn't need to think about the shell used to execute the generated job script (their own, sure) and really shouldn't care as long as the job script runs and launches the engines. This combined with the fact that /bin/sh is everywhere and we can force it is why I'm somewhat hesitant to add a --shell option for the generated code. If the user is passing their own script, they have full control of the shell and much more anyway so I'm only discussing the --shell option in the context of the generated job scripts. Does this make sense? ~Justin From satra at mit.edu Thu Aug 5 11:23:24 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Thu, 5 Aug 2010 11:23:24 -0400 Subject: [IPython-dev] SciPy Sprint summary In-Reply-To: <4C5AC1FB.4060708@gmail.com> References: <4C42B09F.50106@gmail.com> <4C43455F.1050508@gmail.com> <4C45B72F.5020000@gmail.com> <4C5AC1FB.4060708@gmail.com> Message-ID: hi justin, Having a --shell option doesn't really fix the issue because the user > will still need to know that they need to change the shell for the > generated code in the first place (because of the non-standard decrement > operation above). Ideally the user shouldn't need to think about the > shell used to execute the generated job script (their own, sure) and > really shouldn't care as long as the job script runs and launches the > engines. This combined with the fact that /bin/sh is everywhere and we > can force it is why I'm somewhat hesitant to add a --shell option for > the generated code. 
If the user is passing their own script, they have > full control of the shell and much more anyway so I'm only discussing > the --shell option in the context of the generated job scripts. > > Does this make sense? > yes. cheers, satra -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Thu Aug 5 11:28:28 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Thu, 5 Aug 2010 11:28:28 -0400 Subject: [IPython-dev] space after tabbing In-Reply-To: References: Message-ID: thanks fernando. i guess i'll have to wait for that update. we still have 2.6.4 where at least on ubuntu it doesn't appear to be fixed. ivan: i'm expecting it to stop and not put the extra space. cheers, satra On Wed, Aug 4, 2010 at 3:16 PM, Fernando Perez wrote: > On Wed, Aug 4, 2010 at 9:42 AM, Gökhan Sever > wrote: > > > > Not meant to be misleading; this was a Python related issue. (Mine is at > > v2.6.2, and apparently fixed at 2.6.4). > > I have yet to upgrade to Fedora 13 :) > > It's indeed a python bug, mercifully fixed in python 2.6.5. It drove > me bonkers for a while. > > Cheers, > > f > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Aug 6 15:08:16 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 6 Aug 2010 12:08:16 -0700 Subject: [IPython-dev] space after tabbing In-Reply-To: References: Message-ID: On Thu, Aug 5, 2010 at 8:28 AM, Satrajit Ghosh wrote: > thanks fernando. i guess i'll have to wait for that update. we still have > 2.6.4 where at least on ubuntu it doesn't appear to be fixed. > Ubuntu 10.4 ships with python 2.6.5, where they fixed the bug. But Ubuntu 9.10 still has python 2.6.4, and they didn't backport the fix, unfortunately. So if you're not on Ubuntu 10.4, I think you're stuck with that stupid space... 
Cheers, f From fperez.net at gmail.com Fri Aug 6 20:37:41 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 6 Aug 2010 17:37:41 -0700 Subject: [IPython-dev] Terminal-based frontend In-Reply-To: <4C59C7A5.5010501@gmail.com> References: <4C59C7A5.5010501@gmail.com> Message-ID: Hey Wendell, Sorry for the slow reply... On Wed, Aug 4, 2010 at 1:03 PM, Wendell Smith wrote: > > I'm still working steadily away at a terminal frontend with fancy stuff like > input coloring, auto-help, auto-complete, etc. - I was calling it > 'curses-based' earlier, but now that its urwid-based, it isn't based on the > curses library... but its the same idea. If you want to look at my progress, > please use 'git clone git://github.com/wackywendell/ipyurwid.git', and > follow the directions in the README. > > Anyways, although there is plenty still to do with the widgets, it's come > far enough for me to start worrying about integrating into ipython. I > haven't touched that as I've been waiting for the dust to settle around > pyzmq, but... I think its time. > > What I'm looking for ideally is a ipython-frontend class I can use that lets > me send and receive input/output, and also has methods for running help on > an object, source lookup, completion possibilities for a word, etc. I > noticed the old frontend classes are gone; what are the plans to replace > them? I'd be happy to be involved with that, too, if that would be helpful - > but I don't want to step on anyone's toes, and also there are, I'm sure, a > lot of different things that are wanted out of that... what's the status on > that? How could I be helpful there? > > Let me know... Fantastic or horrible timing, depending on how you want to think of it :) Fantastic because we're doing a TON of work in this direction, horrible because things are changing *right now* under you in this regard, and it may be a few weeks before they stabilize. 
And during this big 'digging in the foundation' period, we may be a bit bandwidth-constrained to coordinate things. All the same, I'll try to summarize things now in the best way possible, so you can both see what's going on and provide feedback. We hope to have precisely what you are asking for in place soon (some of it already exists), so as long as you don't mind the mess on the floor, come on in :) The short version: we have currently three separate clients being built for IPython, all on top of the little zeromq prototype that Brian and I wrote a few months ago. Two of these have been built as part of the Google Summer of Code project by Gerardo and Omar, one by Evan Patterson from Enthought. Here are their branches: - Evan: a qt-based widget with a very 'terminal-like' feel that feeds single blocks of code to the kernel. http://github.com/epatters/ipython/tree/qtfrontend/IPython/frontend/qt/ - Gerardo: a qt-based widget with a more 'notebook-like' feel that feeds cells (lists of blocks) to the kernel. http://github.com/muzgash/ipython/tree/ipythonqt-km/IPython/frontend/qt/nb/ - Omar: a terminal-based frontend that feels much like today's ipython. http://github.com/omazapa/ipython/tree/master/IPython/zmq/ These three ultimately are meant to share a lot of code, but because things are all happening at the same time, right now their merge status is a bit funky. But the immediate plan is to finalize the kernel/frontend protocol and have enough of a kernel in place with the official protocol for all clients to be able to work against a common interface. As we work through, we'll see what can be refactored into common codes for all frontends, what can be common to similar frontends (qt ones, terminal ones, etc) and what will be unique to each. Probably the best place for you to look at right now is Evan's work, where the main abstractions are falling into place, and which Gerardo and Omar are gradually merging into their own lines. 
Also, we're doing our best to hang out in #ipython on the freenode IRC server when we're actively doing IPython work, so feel free to jump in there with questions for any of us. Best regards, f From andresete.chaos at gmail.com Sat Aug 7 17:47:07 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Sat, 7 Aug 2010 16:47:07 -0500 Subject: [IPython-dev] About KernelManager Message-ID: Hi all. I am having problems with kernelmanager from evan's repo. I am writing a simple piece of code:

# -*- coding: utf-8 -*-
from kernelmanager import KernelManager
from session import Session
import zmq
xreq_addr = ('127.0.0.1', 5555)
sub_addr = ('127.0.0.1', 5556)
rep_addr = ('127.0.0.1', 5557)
context = zmq.Context()
session = Session()
km = KernelManager(xreq_addr, sub_addr, rep_addr, context, session)
km.start_channels()
km.xreq_channel_class.execute("print 1")

and the error is:

km.xreq_channel_class.execute(code)
TypeError: unbound method execute() must be called with XReqSocketChannel instance as first argument (got str instance instead)

how should I use kernelmanager ? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sat Aug 7 19:20:32 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 7 Aug 2010 16:20:32 -0700 Subject: [IPython-dev] About KernelManager In-Reply-To: References: Message-ID: Hi Omar, 2010/8/7 Omar Andrés Zapata Mesa :
> # -*- coding: utf-8 -*-
> from kernelmanager import KernelManager
> from session import Session
> import zmq
> xreq_addr = ('127.0.0.1',5555)
> sub_addr = ('127.0.0.1', 5556)
> rep_addr = ('127.0.0.1', 5557)
> context = zmq.Context()
> session = Session()
> km = KernelManager(xreq_addr, sub_addr, rep_addr,context,session)
> km.start_channels()
> km.xreq_channel_class.execute("print 1")

You're calling directly the class object, instead of the channel object. 
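[Editor's note] The class-vs-instance mistake Fernando points out can be shown with a self-contained toy. The class and attribute names below are modeled on the thread (the real KernelManager wires its channels up over zmq; this sketch does not):

```python
class XReqSocketChannel:
    def execute(self, code):
        return "submitted: " + code

class KernelManager:
    # class attribute naming the channel *class*, as in the real code
    xreq_channel_class = XReqSocketChannel

    def start_channels(self):
        # start_channels() is what turns the class into a live channel object
        self.xreq_channel = self.xreq_channel_class()

km = KernelManager()
km.start_channels()

# Wrong: calling through the class object -- no instance is bound as `self`,
# so Python raises a TypeError (the "unbound method" error in Omar's traceback)
try:
    KernelManager.xreq_channel_class.execute("print 1")
except TypeError:
    print("execute() needs a channel instance, not the class")

# Right: call through the channel instance the manager created
print(km.xreq_channel.execute("print 1"))
```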
Here's an example in Evan's code where he calls execute(): http://github.com/epatters/ipython/blob/qtfrontend/IPython/frontend/qt/console/frontend_widget.py#L129 Cheers, f From wackywendell at gmail.com Sun Aug 8 10:43:09 2010 From: wackywendell at gmail.com (Wendell Smith) Date: Sun, 08 Aug 2010 10:43:09 -0400 Subject: [IPython-dev] Terminal-based frontend In-Reply-To: References: <4C59C7A5.5010501@gmail.com> Message-ID: <4C5EC27D.1040700@gmail.com> Excellent, sounds great! I'm busy for a few days, but I will be looking into that soon. Thanks, Wendell On 08/06/2010 08:37 PM, Fernando Perez wrote: > Hey Wendell, > > Sorry for the slow reply... > > On Wed, Aug 4, 2010 at 1:03 PM, Wendell Smith wrote: >> I'm still working steadily away at a terminal frontend with fancy stuff like >> input coloring, auto-help, auto-complete, etc. - I was calling it >> 'curses-based' earlier, but now that its urwid-based, it isn't based on the >> curses library... but its the same idea. If you want to look at my progress, >> please use 'git clone git://github.com/wackywendell/ipyurwid.git', and >> follow the directions in the README. >> >> Anyways, although there is plenty still to do with the widgets, it's come >> far enough for me to start worrying about integrating into ipython. I >> haven't touched that as I've been waiting for the dust to settle around >> pyzmq, but... I think its time. >> >> What I'm looking for ideally is a ipython-frontend class I can use that lets >> me send and receive input/output, and also has methods for running help on >> an object, source lookup, completion possibilities for a word, etc. I >> noticed the old frontend classes are gone; what are the plans to replace >> them? I'd be happy to be involved with that, too, if that would be helpful - >> but I don't want to step on anyone's toes, and also there are, I'm sure, a >> lot of different things that are wanted out of that... what's the status on >> that? How could I be helpful there? 
>> >> Let me know... > > Fantastic or horrible timing, depending on how you want to think of it > :) Fantastic because we're doing a TON of work in this direction, > horrible because things are changing *right now* under you in this > regard, and it may be a few weeks before they stabilize. And during > this big 'digging in the foundation' period, we may be a bit > bandwidth-constrained to coordinate things. All the same, I'll try to > summarize things now in the best way possible, so you can both see > what's going on and provide feedback. We hope to have precisely what > you are asking for in place soon (some of it already exists), so as > long as you don't mind the mess on the floor, come on in :) > > The short version: we have currently three separate clients being > built for IPython, all on top of the little zeromq prototype that > Brian and I wrote a few months ago. Two of these have been built as > part of the Google Summer of Code project by Gerardo and Omar, one by > Evan Patterson from Enthought. Here are their branches: > > - Evan: a qt-based widget with a very 'terminal-like' feel that feeds > single blocks of code to the kernel. > http://github.com/epatters/ipython/tree/qtfrontend/IPython/frontend/qt/ > > - Gerardo: a qt-based widget with a more 'notebook-like' feel that > feeds cells (lists of blocks) to the kernel. > http://github.com/muzgash/ipython/tree/ipythonqt-km/IPython/frontend/qt/nb/ > > - Omar: a terminal-based frontend that feels much like today's ipython. > http://github.com/omazapa/ipython/tree/master/IPython/zmq/ > > > These three ultimately are meant to share a lot of code, but because > things are all happening at the same time, right now their merge > status is a bit funky. But the immediate plan is to finalize the > kernel/frontend protocol and have enough of a kernel in place with the > official protocol for all clients to be able to work against a common > interface. 
As we work through, we'll see what can be refactored into > common codes for all frontends, what can be common to similar > frontends (qt ones, terminal ones, etc) and what will be unique to > each. > > Probably the best place for you to look at right now is Evan's work, > where the main abstractions are falling into place, and which Gerardo > and Omar are gradually merging into their own lines. > > Also, we're doing our best to hang out in #ipython on the freenode IRC > server when we're actively doing IPython work, so feel free to jump in > there with questions for any of us. > > Best regards, > > f From fperez.net at gmail.com Tue Aug 10 04:02:14 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 10 Aug 2010 01:02:14 -0700 Subject: [IPython-dev] Message spec draft more fleshed out Message-ID: Hi folks, here: http://github.com/ipython/ipython/blob/106bc2e0587d315db67988c1803b8574fc54463a/docs/source/development/messaging.txt is a more fleshed out message spec document for feedback. I'd especially like to hear from Omar and Gerardo if you notice any important point missing, since you've been thinking a fair bit about this. For the stdin socket I changed a little bit things from how Omar implemented it: http://github.com/omazapa/ipython/blob/master/IPython/zmq/kernel.py#L496 so that we have full message headers in here as well, but it's the same idea. This document is still in evolution, but we think we're starting to have a decent specification of the protocol, so now any and all comments are valid. If we've missed anything important, forgotten something or misdesigned anything at this level, speak up :) Cheers, f From andresete.chaos at gmail.com Tue Aug 10 09:58:29 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Tue, 10 Aug 2010 08:58:29 -0500 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: Hi all. this is great!! 
but I think we need another message type to get object information, something like object_info_request; this would let the frontend use the magic "?" O. 2010/8/10 Fernando Perez > Hi folks, > > here: > > > http://github.com/ipython/ipython/blob/106bc2e0587d315db67988c1803b8574fc54463a/docs/source/development/messaging.txt > > is a more fleshed out message spec document for feedback. I'd > especially like to hear from Omar and Gerardo if you notice any > important point missing, since you've been thinking a fair bit about > this. > > For the stdin socket I changed a little bit things from how Omar > implemented it: > > http://github.com/omazapa/ipython/blob/master/IPython/zmq/kernel.py#L496 > > so that we have full message headers in here as well, but it's the same > idea. > > This document is still in evolution, but we think we're starting to > have a decent specification of the protocol, so now any and all > comments are valid. If we've missed anything important, forgotten > something or misdesigned anything at this level, speak up :) > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Tue Aug 10 14:55:25 2010 From: benjaminrk at gmail.com (MinRK) Date: Tue, 10 Aug 2010 11:55:25 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: This is great. There are a few additional functionalities I need on top of this, that I have added to the message spec I use in my parallel code. I have multiple clients, and need unique message ids, so clearly ints are inadequate. I switched msg_id to also be a uuid. I could certainly generate unique msg ids in the controller by combining the msg id and the session id, which should be unique. 
Since I need to inspect messages on the way, and don't want to have to unpack the content of the message, I can't send the whole message as one json object. For this, I split it, such that the headers and content are sent separately. msg_type is added to the header for this. I need to be able to send data without copying, and for that I added a 'buffers' element at the top level of a message. I also added an apply_message type, for using Brian's apply model. I will write up how the apply stuff works later (I expect there will be some discussion and rearrangement of some of it). I also added, but no longer use, a subheader, which allows senders of messages to extend the header. I needed this because when the Controller parses a message destined for an engine, it shouldn't unpack the content of the message, only the header. Since the routing is now handled purely in zmq, I don't currently have a need for the subheader, but I can certainly imagine it being useful. This is not so much a part of the root message format as it is a part of the session.send() api. On Tue, Aug 10, 2010 at 01:02, Fernando Perez wrote: > Hi folks, > > here: > > > http://github.com/ipython/ipython/blob/106bc2e0587d315db67988c1803b8574fc54463a/docs/source/development/messaging.txt > > is a more fleshed out message spec document for feedback. I'd > especially like to hear from Omar and Gerardo if you notice any > important point missing, since you've been thinking a fair bit about > this. > > For the stdin socket I changed a little bit things from how Omar > implemented it: > > http://github.com/omazapa/ipython/blob/master/IPython/zmq/kernel.py#L496 > > so that we have full message headers in here as well, but it's the same > idea. > > This document is still in evolution, but we think we're starting to > have a decent specification of the protocol, so now any and all > comments are valid. 
If we've missed anything important, forgotten > something or misdesigned anything at this level, speak up :) > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue Aug 10 16:50:55 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 10 Aug 2010 13:50:55 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: Hi Ville, On Tue, Aug 10, 2010 at 1:40 AM, Ville M. Vainio wrote: > On Tue, Aug 10, 2010 at 11:02 AM, Fernando Perez wrote: >> Hi folks, >> >> here: >> >> http://github.com/ipython/ipython/blob/106bc2e0587d315db67988c1803b8574fc54463a/docs/source/development/messaging.txt >> >> is a more fleshed out message spec document for feedback. I'd > > Have you considered using google protocol buffers? I imagine you would > get vastly superior throughput with that (even comparing to pickles?). > > http://code.google.com/apis/protocolbuffers/ For the main code it's probably not worth the price of an extra C/SWIG dependency (zmq is already one, the less we have the better off we are), as for this we don't really have major performance worries. All that is sent are strings for the screen. When the parallel part gets refactored to use zmq, it may be worth looking at these. Brian mentioned that he's seen people looking into pbuffers over zmq for high-performance work, so that's definitely worth keeping in mind for that stage. Cheers, f From fperez.net at gmail.com Tue Aug 10 16:51:53 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 10 Aug 2010 13:51:53 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: Hi Omar, 2010/8/10 Omar Andrés Zapata Mesa : > Hi all. > > this is great!! 
> but I think we need another message type to get object information, > something like object_info_request; this would let the frontend use the magic > "?" You're absolutely right, I'm in the process of reviewing the doc now and will add exactly that request as its own call. Evan also made a case for the same thing, so it's clear we do need it. Thanks! Cheers, f From fperez.net at gmail.com Tue Aug 10 20:23:36 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 10 Aug 2010 17:23:36 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: Hey Min, On Tue, Aug 10, 2010 at 11:55 AM, MinRK wrote: > This is great. > There are a few additional functionalities I need on top of this, that I > have added to the message spec I use in my parallel code. > I have multiple clients, and need unique message ids, so clearly ints are > inadequate. I switched msg_id to also be a uuid. I could certainly generate > unique msg ids in the controller by combining the msg id and the session id, > which should be unique. OK, should we just switch to these right away? If so, does this sound like the right way to make them: uuid.uuid1(os.getpid()) We'd obviously cache the pid, but this lets us seed the uuid1 call with the pid of each client. Alternatively we can call uuid4(), and trust that the probability of collisions is low enough. I sort of like better the idea of seeding with a known quantity; we could combine hostid and pid if we want to be extra safe, but I don't think it's worth worrying about that level of low-probability of collision, is it? > Since I need to inspect messages on the way, and don't want to have to > unpack the content of the message, I can't send the whole message as one > json object. For this, I split it, such that the headers and content are > sent separately. msg_type is added to the header for this. Should we list the multipart spec separately? 
This would be only for 'data carrying' messages, or would it be for all communications? > I need to be able to send data without copying, and for that I added a > 'buffers' element at the top level of a message. I also added an > apply_message type, for using Brian's apply model. I will write up how the > apply stuff works later (I expect there will be some discussion and > rearrangement of some of it). > I also added, but no longer use, a subheader, which allows senders of > messages to extend the header. I needed this when the Controller parses a > message destined for an engine, it shouldn't unpack the content of the > message, only the header. Since the routing is now handled purely in zmq, I > don't currently have a need for the subheader, but I can certainly imagine > it being useful. This is not so much a part of the root message format as > it is a part of the session.send() api. I'm finishing up the doc, it would be good if you could write up these ideas into it so we have all the design in one place... I'll ping soon with the finished draft. Cheers, f From benjaminrk at gmail.com Wed Aug 11 03:03:12 2010 From: benjaminrk at gmail.com (MinRK) Date: Wed, 11 Aug 2010 00:03:12 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: On Tue, Aug 10, 2010 at 17:23, Fernando Perez wrote: > Hey Min, > > On Tue, Aug 10, 2010 at 11:55 AM, MinRK wrote: > > This is great. > > There are a few additional functionalities I need on top of this, that I > > have added to the message spec I use in my parallel code. > > I have multiple clients, and need unique message ids, so clearly ints are > > inadequate. I switched msg_id to also be a uuid. I could certainly > generate > > unique msg ids in the controller by combining the msg id and the session > id, > > which should be unique. > > OK, should we just switch to these right away? If so, does this sound > like the right way to make them:
> It's nice to have int access at the client level, and I have that built in to my client, but the real IDs used by the system are all uuids. This is fairly easy to implement. > > uuid.uuid1(os.getpid()) > > We'd obviously cache the pid, but this lets us seed the uuid1 call > with the pid of each client. Alternatively we can call uuid4(), and > trust that the probability of collisions is low enough. I sort of like > better the idea of seeding with a known quantity; we could combine > hostid and pid if we want to be extra safe, but I don't think it's > worth worrying about that level of low-probability of collision, is > it? > uuid1(pid) is nice because it makes collision impossible on a single machine. However, the pid seed replaces a 48b section of the UUID, and the rest is time based. PIDs generally exist in a very small range (most likely low thousands unless you have infinite uptime, in which case it is approximately a random 15b number). If you are on many machines, rather than many processes in one machine, the PID section is a dramatic reduction in randomness, as is the time-based segment, which should be quite similar across machines.

UUID Sections (from RFC 4122):
timestamp: 60b, resolution = 0.1us
version:    4b, constant
clock_seq: 16b, treat as random
node:      48b, assigned from 15b PID range

So for two uuids generated on different machines within 1ms (relative internal clock time), the probability of collision is at least: (likelihood of timestamp match) * (clock_seq match) * (PID match) = 1:(1e4 * 2^16 * 2^15) ~ 1 in 1e13. Treating uuid4 as fully random (124b), the likelihood of the same two uuids is 1:2^124 ~ 1 in 1e37. Much less likely. Running on a single machine with 8 engines (using my zmq IPython cluster), I generated 100k UUIDs on each, as fast as I could, first with uuid1(1), and second with uuid4(). I reliably had at least 1, and up to 5 collisions with the uuid1 case. I never encountered a collision with uuid4.
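The field-level claims above can be checked directly against the stdlib uuid module; this is a small sketch, where uuid1's `node` argument stands in for the pid seed in the proposed `uuid.uuid1(os.getpid())`:

```python
import uuid

# Seeded uuid1: the 48b node section is exactly the seed, so two
# processes passing the same value collide on that whole section and
# only the timestamp and clock_seq fields keep their ids distinct.
u1 = uuid.uuid1(node=1)

# uuid4 is (pseudo)random apart from the fixed version/variant bits.
u4 = uuid.uuid4()

assert u1.node == 1
assert u1.version == 1
assert u4.version == 4
```

The `node=1` seed mirrors the uuid1(1) case from the experiment above, where every engine shares the same seed; with distinct pids per machine the node sections differ, which is what makes same-machine collisions impossible.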
I also discovered that uuid1 with a specified seed is notably faster than uuid4 (22us vs 33us on my machine). > > Since I need to inspect messages on the way, and don't want to have to > > unpack the content of the message, I can't send the whole message as one > > json object. For this, I split it, such that the headers and content are > > sent separately. msg_type is added to the header for this. > > Should we list the multipart spec separately? This would be only for > 'data carrying' messages, or would it be for all communications? > It's not just for data carrying messages, it's relevant for all snooped messages passing through a controller. The controller should never need to unpack the content of a message, since it could be a massive code block in an execute_request or a big fat reply. Currently, all messages are sent this way in my code, but that doesn't need to be the case. It does need to be the case for all messages sent from the client to the kernel, since those are the ones whose headers are inspected. > > > I need to be able to send data without copying, and for that I added a > > 'buffers' element at the top level of a message. I also added an > > apply_message type, for using Brian's apply model. I will write up how > the > > apply stuff works later (I expect there will be some discussion and > > rearrangement of some of it). > > I also added, but no longer use, a subheader, which allows senders of > > messages to extend the header. I needed this when the Controller parses > a > > message destined for an engine, it shouldn't unpack the content of the > > message, only the header. Since the routing is now handled purely in zmq, > I > > don't currently have a need for the subheader, but I can certainly > imagine > > it being useful. This is not so much a part of the root message format > as > > it is a part of the session.send() api. 
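The header/content split described above can be sketched with plain json framing; the helper names `serialize` and `peek_header` are illustrative only, not the actual session.send() api:

```python
import json

def serialize(msg):
    # Hypothetical helper: split one logical message into separate wire
    # frames so a controller can route on the header without ever
    # deserializing the (possibly large) content frame.
    header = dict(msg['header'], msg_type=msg['msg_type'])
    return [json.dumps(header).encode(), json.dumps(msg['content']).encode()]

def peek_header(frames):
    # A snooping controller only ever unpacks frame 0.
    return json.loads(frames[0])

msg = {
    'header': {'msg_id': 'uuid-1234', 'session': 'uuid-abcd'},
    'msg_type': 'execute_request',
    'content': {'code': 'a = 10', 'silent': False},
}
frames = serialize(msg)
assert peek_header(frames)['msg_type'] == 'execute_request'
```

The zero-copy 'buffers' element would simply ride along as additional raw frames after the content frame, untouched by either serializer.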
> > I'm finishing up the doc, it would be good if you could write up these > ideas into it so we have all the design in one place... I'll ping > soon with the finished draft. > I will write them up (and have already done some in my fork). I am travelling now, but will be back in Berkeley Thursday. I might have some good writing time on the plane, though. > > Cheers, > > f > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Aug 11 03:39:05 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 11 Aug 2010 00:39:05 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: Howdy, On Tue, Aug 10, 2010 at 1:02 AM, Fernando Perez wrote: > Hi folks, > > here: > > http://github.com/ipython/ipython/blob/106bc2e0587d315db67988c1803b8574fc54463a/docs/source/development/messaging.txt > > is a more fleshed out message spec document for feedback. I'd > especially like to hear from Omar and Gerardo if you notice any > important point missing, since you've been thinking a fair bit about > this. Thanks a lot for the feedback so far. It took a lot more work than I'd thought, but I think we now have a fairly solid first pass at a *complete* design and messaging spec (excluding the parallel computing part). Here's the last version I just put up: http://github.com/ipython/ipython/blob/8dbbf5e225c816fe2b74c5756ab0b3a558cd9303/docs/source/development/messaging.txt but if you prefer to read civilized HTML I built and pushed the nightlies: http://ipython.scipy.org/doc/nightly/html/development/messaging.html At this point, please do pound on this document. This should be our *real* spec, the actual code should match it, and it should be complete. We'll be implementing off of this, so anything that I've missed, please do point it out. Thanks for any feedback!
f From benjaminrk at gmail.com Wed Aug 11 15:14:10 2010 From: benjaminrk at gmail.com (MinRK) Date: Wed, 11 Aug 2010 12:14:10 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: A note on the heartbeat section. I don't know if you guys are using the heartbeat messages, but my heartbeat monitor doesn't use Python messages at all. It's pure ZMQ, so it never enters Python code, and packing up of reply messages isn't available. The monitor sends out a single zmq message (right now, it is a str of the monitor's lifetime in seconds), and gets the same message right back, prefixed with the zmq identity of the XREQ socket in the heartbeat process. This can be a uuid, or even a full message, but I don't see a need for packing up a message when the sender and receiver are the exact same Python object. The model is this: monitor.send(str(self.lifetime)) # '1.2345678910' and the monitor receives some number of messages of the form: ['uuid-abcd-dead-beef', '1.2345678910'] where the first part is the zmq.IDENTITY of the heart's XREQ on the engine, and the rest is the message sent by the monitor. No Python code ever has any access to the message between the monitor's send, and the monitor's recv. -MinRK On Wed, Aug 11, 2010 at 00:39, Fernando Perez wrote: > Howdy, > > On Tue, Aug 10, 2010 at 1:02 AM, Fernando Perez > wrote: > > Hi folks, > > > > here: > > > > > http://github.com/ipython/ipython/blob/106bc2e0587d315db67988c1803b8574fc54463a/docs/source/development/messaging.txt > > > > is a more fleshed out message spec document for feedback. I'd > > especially like to hear from Omar and Gerardo if you notice any > > important point missing, since you've been thinking a fair bit about > > this. > > Thanks a lot for the feedback so far. It took a lot more work than > I'd thought, but I think we now have a fairly solid first pass at a > *complete* design and messaging spec (excluding the parallel computing > part). 
Here's the last version I just put up: > > > http://github.com/ipython/ipython/blob/8dbbf5e225c816fe2b74c5756ab0b3a558cd9303/docs/source/development/messaging.txt > > but if you prefer to read civilized HTML I built and pushed the nightlies: > > http://ipython.scipy.org/doc/nightly/html/development/messaging.html > > > At this point, please do pound on this document. This should be our > *real* spec, the actual code should match it, and it should be > complete. We'll be implementing off of this, so anything that I've > missed, please do point it out. > > Thanks for any feedback! > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Wed Aug 11 17:50:25 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Wed, 11 Aug 2010 14:50:25 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: Min, On Wed, Aug 11, 2010 at 12:14 PM, MinRK wrote: > A note on the heartbeat section. > > I don't know if you guys are using the heartbeat messages, but my heartbeat > monitor doesn't use Python messages at all. It's pure ZMQ, so it never > enters Python code, and packing up of reply messages isn't available. The > monitor sends out a single zmq message (right now, it is a str of the > monitor's lifetime in seconds), and gets the same message right back, > prefixed with the zmq identity of the XREQ socket in the heartbeat process. > This can be a uuid, or even a full message, but I don't see a need for > packing up a message when the sender and receiver are the exact same Python > object. > > Very good points. I think we should just copy this description into the message spec. 
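The heartbeat model above can be simulated without sockets; `heart_echo` here is a hypothetical stand-in for what zmq itself does with the heart's XREQ identity (in the real implementation no Python runs on the heart side at all):

```python
def heart_echo(identity, frames):
    # Model of the zmq echo for a heart's XREQ socket: the payload comes
    # back unchanged, prefixed with the socket's zmq.IDENTITY.
    return [identity] + list(frames)

# monitor.send(str(self.lifetime))
ping = ['1.2345678910']
pong = heart_echo('uuid-abcd-dead-beef', ping)
assert pong == ['uuid-abcd-dead-beef', '1.2345678910']
```

Matching the identity prefix against the set of registered hearts is all the monitor needs to do on receive, which is why no message packing/unpacking is required.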
> The model is this: > monitor.send(str(self.lifetime)) # '1.2345678910' > and the monitor receives some number of messages of the form: > ['uuid-abcd-dead-beef', '1.2345678910'] > where the first part is the zmq.IDENTITY of the heart's XREQ on the engine, > and the rest is the message sent by the monitor. No Python code ever has > any access to the message between the monitor's send, and the monitor's > recv. > > Cheers, Brian > -MinRK > > On Wed, Aug 11, 2010 at 00:39, Fernando Perez wrote: > >> Howdy, >> >> On Tue, Aug 10, 2010 at 1:02 AM, Fernando Perez >> wrote: >> > Hi folks, >> > >> > here: >> > >> > >> http://github.com/ipython/ipython/blob/106bc2e0587d315db67988c1803b8574fc54463a/docs/source/development/messaging.txt >> > >> > is a more fleshed out message spec document for feedback. I'd >> > especially like to hear from Omar and Gerardo if you notice any >> > important point missing, since you've been thinking a fair bit about >> > this. >> >> Thanks a lot for the feedback so far. It took a lot more work than >> I'd thought, but I think we now have a fairly solid first pass at a >> *complete* design and messaging spec (excluding the parallel computing >> part). Here's the last version I just put up: >> >> >> http://github.com/ipython/ipython/blob/8dbbf5e225c816fe2b74c5756ab0b3a558cd9303/docs/source/development/messaging.txt >> >> but if you prefer to read civilized HTML I built and pushed the nightlies: >> >> http://ipython.scipy.org/doc/nightly/html/development/messaging.html >> >> >> At this point, please do pound on this document. This should be our >> *real* spec, the actual code should match it, and it should be >> complete. We'll be implementing off of this, so anything that I've >> missed, please do point it out. >> >> Thanks for any feedback! 
>> >> f >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Aug 12 04:13:04 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 12 Aug 2010 01:13:04 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: On Wed, Aug 11, 2010 at 2:50 PM, Brian Granger wrote: > > Very good points. I think we should just copy this description into the > message spec. I just updated the doc and pushed to trunk and a build with Min's text: http://ipython.scipy.org/doc/nightly/html/development/messaging.html#heartbeat-for-kernels Modulo final feedback, that design spec is reasonably complete, as far as I'm concerned. Thanks! f From ellisonbg at gmail.com Fri Aug 13 16:27:57 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 13 Aug 2010 13:27:57 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: I have this in a browser tab and will review it soon. Cheers, Brian On Thu, Aug 12, 2010 at 1:13 AM, Fernando Perez wrote: > On Wed, Aug 11, 2010 at 2:50 PM, Brian Granger wrote: >> >> Very good points. I think we should just copy this description into the >> message spec. > > I just updated the doc and pushed to trunk and a build with Min's text: > > http://ipython.scipy.org/doc/nightly/html/development/messaging.html#heartbeat-for-kernels > > Modulo final feedback, that design spec is reasonably complete, as far > as I'm concerned. > > Thanks!
> > f > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From wackywendell at gmail.com Fri Aug 13 18:04:50 2010 From: wackywendell at gmail.com (Wendell Smith) Date: Fri, 13 Aug 2010 18:04:50 -0400 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: Message-ID: <4C65C182.7000807@gmail.com> Hello all, I have been looking all these documents over, and wondering if perhaps we could have some object (descended from KernelManager) that would be constructed to perfectly match the message spec, such that any message received would translate to a functional call (i.e. MessageManager.execute(self, header, code, silent=False)), to make it easy for someone to simply write an object that descends from MessageManager (or whatever we call it) and fill in the methods. This would also then serve as a message spec - it should be created such that it can receive any valid message and sends only (and can send all) valid messages. Of course, this may not make sense, and I may not know what I'm talking about - I don't know much about the zmq communication, and was sort of hoping to stay focused on the fancy console frontend without delving too deeply into that, but if others agree with me but no one with better knowledge wants to do it, I would be happy to write the necessary code myself, but again, I'm probably not the one best able to do it. Speaking of combined code, it would also be nice to have a frontend.pygmentize module that covers pygments coloring for input, output, prompts, and tracebacks, providing a lexer and a style from config, (formatters would depend on the frontend), and also perhaps some object that takes a formatter and provides all these tools for the frontend, perhaps even descending from KernelManager and just having methods that manage these. That would be nice. 
I could work on that too, and would be happy to - I just noticed that at least Evan and I have written pygments code, and it would be nice to avoid code duplication. Anyways, I just feel like we've got 4 people working away on 4 frontends without too much communication going on about useful common code, and it would be nice to get that sort of work delegated out before we all go and write our own versions of the same tools. Please let me know if this makes sense and is a good idea - I am certainly not the most knowledgeable here, and if I seem to be missing something, please let me know! -Wendell On 08/13/2010 04:27 PM, Brian Granger wrote: > I have this in a browser tab and will review it soon. > > Cheers, > > Brian > > On Thu, Aug 12, 2010 at 1:13 AM, Fernando Perez wrote: >> On Wed, Aug 11, 2010 at 2:50 PM, Brian Granger wrote: >>> Very good points. I think we should just copy this description into the >>> message spec. >> I just updated the doc and pushed to trunk and a build with Min's text: >> >> http://ipython.scipy.org/doc/nightly/html/development/messaging.html#heartbeat-for-kernels >> >> Modulo final feedback, that design spec is reasonably complete, as far >> as I'm concerned. >> >> Thanks! >> >> f >> > > From ellisonbg at gmail.com Fri Aug 13 19:16:40 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 13 Aug 2010 16:16:40 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: <4C65C182.7000807@gmail.com> References: <4C65C182.7000807@gmail.com> Message-ID: Wendell, On Fri, Aug 13, 2010 at 3:04 PM, Wendell Smith wrote: > Hello all, > > I have been looking all these documents over, and wondering if perhaps > we could have some object (descended from KernelManager) that would be > constructed to perfectly match the message spec, such that any message > received would translate to a functional call (i.e.
> MessageManager.execute(self, header, code, silent=False)), to make it > easy for someone to simply write an object that descends from > MessageManager (or whatever we call it) and fill in the methods. This > would also then serve as a message spec - it should be created such that > it can receive any valid message and sends only (and can send all) valid > messages. The KernelManager classes and ZMQChannel classes are about as close as we can get for now. There are a couple of different issues: 1. The handler methods that you are talking about need to be called in the main thread. But all of the channels are receiving msgs in a second thread. The call_handlers method needs to be overridden in a way that causes the true handler methods to be called in the other thread. Each toolkit has its own way of calling functions in other threads, so that is difficult to do in a general way. Furthermore, in a terminal where there is no event loop, there really isn't a way of calling a method in a different thread. Thus, the yet-to-be-written subclasses of KernelManager and the channels will have to simply put the received message in a Queue and the main terminal thread will have to poll that queue. But an important question is this: does curses have a way of calling functions in the main thread? If not, we will have to develop a custom KernelManager and channel classes that use Queues and polling. I have started some of this in blockingkernelmanager. 2. We are moving towards a model where "The message is the API". Thus, we don't want to hide the actual messages from frontend code. You really need all of that information and because of (1), we can't really easily put it into handler methods. 3. We still don't quite know what is needed by different frontends, so it is difficult to identify the common code yet. As time goes on, we may realize all frontends use similar logic and we can abstract that out properly at the time. But for now we are in the wild-wild-west. 4.
Everything is truly asynchronous. It takes a while to get used to this fact as it is *very* different than the old terminal based IPython. Because each frontend will handle that asynchronicity in different ways, it is very difficult to come up with abstractions beyond the messages that are universal. > Of course, this may not make sense, and I may not know what I'm talking > about - I don't know much about the zmq communication, and was sort of > hoping to stay focused on the fancy console frontend without delving too > deeply into that, but if others agree with me but no one with better > knowledge wants to do it, I would be happy to write the necessary code > myself, but again, I'm probably not the one best able to do it. > > Speaking of combined code, it would also be nice to have a > frontend.pygmentize module that covers pygments coloring for input, > output, prompts, and tracebacks, providing a lexer and a style from > config, (formatters would depend on the frontend), and also perhaps some > object that takes a formatter and provides all these tools for the > frontend, perhaps even descending from KernelManager and just having > methods that manage these. That would be nice. I could work on that too, > and would be happy to - I just noticed that at least Evan and I have > written pygments code, and it would be nice to avoid code duplication. I agree, and if you want to take what Evan has done and create a > common base class > that all frontends can use and appropriate frontend subclasses for qt, > curses, terminal > that would be great. Not sure it should (at least yet) be descending > from KernelManager though. > Anyways, I just feel like we've got 4 people working away on 4 frontends > without too much communication going on about useful common code, and it > would be nice to get that sort of work delegated out before we all go > and write our own versions of the same tools. Part of the challenge we are having is that there is so much code > being written currently that none of us can keep up with it all. Our > current model is that my kernelmanager branch is the "common code > base" that all 4 frontends are using.
Part of the challenge we are having is that there is so much code being written currently that none of us can keep up with it all. Our current model is that my kernelmanager branch is the "common code base" that all 4 frontends are using. There is definitely repeated code in the various frontends and over time we will move that into the common base. But you are stepping in very early in the process, before all the APIs are very solid. But part of what we want and need is for the various frontend developers to give it a shot and let us know what things they need in the common base. But I think the way that needs to go is that each frontend does it on their own first, to see what works best for them and then we try to identify the commonalities. > Please let me know if this makes sense and is a good idea - I am > certainly not the most knowledgeable here, and if I seem to be missing > something, please let me know! No, you have definitely noticed the most important point - this stuff is not simple and it is definitely not done! But please let us know if you have questions. Cheers, Brian > -Wendell > > On 08/13/2010 04:27 PM, Brian Granger wrote: >> I have this in a browser tab and will review it soon. >> >> Cheers, >> >> Brian >> >> On Thu, Aug 12, 2010 at 1:13 AM, Fernando Perez ?wrote: >>> On Wed, Aug 11, 2010 at 2:50 PM, Brian Granger ?wrote: >>>> Very good points. ?I think we should just copy this description into the >>>> message spec. >>> I just updated the doc and pushed to trunk and a build with Min's text: >>> >>> http://ipython.scipy.org/doc/nightly/html/development/messaging.html#heartbeat-for-kernels >>> >>> Modulo final feedback, that design spec is reasonably complete, as far >>> as I'm concerned. >>> >>> Thanks! >>> >>> f >>> >> >> > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. 
Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From wackywendell at gmail.com Fri Aug 13 23:40:17 2010 From: wackywendell at gmail.com (Wendell Smith) Date: Fri, 13 Aug 2010 23:40:17 -0400 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: <4C65C182.7000807@gmail.com> Message-ID: <4C661021.3030909@gmail.com> Dear Brian, Thank you for your answer and your patience... you're really helping me understand this, and I appreciate it! As for curses... I've switched to the urwid library, which, by the way, I have already mostly ported to py3k... and the urwid library is set up to use any sort of asynchronous main loop you want, with a basic main loop written into it, a tornado-based main loop, and a select-based mainloop already written, and it's flexible, so one could write a main loop on one's own. Input is non-blocking. As for my previous idea, maybe I'm still not understanding, but perhaps we could still have a basic system with a kernelmanager object, a send_receive function, and two queues, in and out. The send_receive function reads messages from the out_queue and sends them, and then receives messages from zmq through the ports and then puts them on the in_queue, never really looking at what messages are coming in and out, just putting them on the queues. The kernel manager object could then be set up exactly as I said before, except that it has an additional method, process_messages, in which it reads a message from the in_queue, determines which method to call, and calls it; the methods do their magic, printing, receiving input, whatever, and some put messages on the out_queue. As I see it, this sounds great: the queues can be from Queue.Queue, and then everything is thread safe, as the send_receive function could be on one thread, the kernelmanager methods on the other, and the two would interact only through the thread-safe queues.
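The two-queue design just described might be sketched as follows; all class and method names here are illustrative rather than the real KernelManager api, the `wire` object is a hypothetical stand-in for the zmq channels, and Python 3's `queue` module plays the role of Queue.Queue:

```python
import queue

class QueuedKernelManager:
    """Sketch: the network thread and the UI thread share only two
    thread-safe queues, never touching each other's state directly."""

    def __init__(self):
        self.in_queue = queue.Queue()    # messages arriving from the kernel
        self.out_queue = queue.Queue()   # messages waiting to be sent

    def send_receive(self, wire, max_msgs=1):
        # Network-thread side: drain pending sends, then pull up to
        # max_msgs replies onto in_queue without inspecting any message.
        while not self.out_queue.empty():
            wire.send(self.out_queue.get_nowait())
        for _ in range(max_msgs):
            msg = wire.recv()
            if msg is None:
                break
            self.in_queue.put(msg)

    def process_messages(self, max_msgs=1):
        # UI-thread side: dispatch each queued message by msg_type.
        handled = []
        for _ in range(max_msgs):
            try:
                msg = self.in_queue.get_nowait()
            except queue.Empty:
                break
            handler = getattr(self, 'handle_' + msg['msg_type'], lambda m: None)
            handler(msg)
            handled.append(msg['msg_type'])
        return handled

class DemoManager(QueuedKernelManager):
    def __init__(self):
        super().__init__()
        self.outputs = []

    def handle_execute_reply(self, msg):
        self.outputs.append(msg['content'])

class FakeWire:
    # Stand-in for the zmq sockets, for illustration only.
    def __init__(self, replies):
        self.replies = list(replies)
        self.sent = []

    def send(self, m):
        self.sent.append(m)

    def recv(self):
        return self.replies.pop(0) if self.replies else None

km = DemoManager()
km.out_queue.put({'msg_type': 'execute_request', 'content': {'code': 'a = 1'}})
wire = FakeWire([{'msg_type': 'execute_reply', 'content': {'status': 'ok'}}])
km.send_receive(wire)
assert km.process_messages() == ['execute_reply']
assert km.outputs == [{'status': 'ok'}]
```

In a real frontend the two methods would run on different threads (or be polled from the main loop), which is exactly the max_msgs/timeout alternation proposed next.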
For an asynchronous approach, you have an option to set max_msgs and timeout for both the send_receive function and the process_messages method, and call them alternately with max_msgs = 1, timeout = 0. This then would make the frontend programmer's job easy: all they need to do is get their main loop to frequently call send_receive and kernelmanager.process_messages, with max_num and timeout set appropriately, and then fill in the other methods from the kernel manager to provide output and input. This is a bit simplified... we might need prioritized queues, or send_receive might need to add tags to the message to say which channel they came in on... but that shouldn't be difficult, if the rest of it makes sense. Would this work? If not, what am I missing? Thank you again for your patience, Wendell On 08/13/2010 07:16 PM, Brian Granger wrote: > Wendell, > > On Fri, Aug 13, 2010 at 3:04 PM, Wendell Smith wrote: >> Hello all, >> >> I have been looking all these documents over, and wondering if perhaps >> we could have some object (descended from KernelManager) that would be >> constructed to perfectly match the message spec, such that any message >> received would translate to a functional call (i.e. >> MessageManager.execute(self, header, code, silent=False)), to make it >> easy for someone to simply write an object that descends from >> MessageManager (or whatever we call it) and fill in the methods. This >> would also then serve as a message spec - it should be created such that >> it can receive any valid message and sends only (and can send all) valid >> messages. > The KernelManager classes and ZMQChannel classes are about as close as > we can get for now. There are a couple of different issues: > > 1. The handler methods that you are talking about need to be called in > the main thread. But all of the channels are receiving msgs in a > second thread. 
The call_handlers method needs to be overridden in a > way that causes the true handler methods to be called in the other > thread. Each toolkit has its own way of calling functions in other > threads, so that is difficult to do in a general way. Furthermore, in > a terminal where there is no event loop, there really isn't a way of > calling a method in a different thread. Thus, the yet-to-be-written > subclasses of KernelManager and the channels will have to simply put > the received message in a Queue and the main terminal thread will have > to poll that queue. But an important question is this: does curses > have a way of calling functions in the main thread? If not, we will > have to develop a custom KernelManager and channel classes that use > Queues and polling. I have started some of this in > blockingkernelmanager. > > 2. We are moving towards a model where "The message is the API". Thus, > we don't want to hide the actual messages from frontend code. You > really need all of that information and because of (1), we can't > really easily put it into handler methods. > > 3. We still don't quite know what is needed by different frontends, > so it is difficult to identify the common code yet. As time goes on, > we may realize all frontends use similar logic and we can abstract > that out properly at the time. But for now we are in the > wild-wild-west. > > 4. Everything is truly asynchronous. It takes a while to get used to > this fact as it is *very* different than the old terminal based > IPython. Because each frontend will handle that asynchronicity in > different ways, it is very difficult to come up with abstractions > beyond the messages that are universal.
> >> Of course, this may not make sense, and I may not know what I'm talking >> about - I don't know much about the zmq communication, and was sort of >> hoping to stay focused on the fancy console frontend without delving too >> deeply into that, but if others agree with me but no one with better >> knowledge wants to do it, I would be happy to write the necessary code >> myself, but again, I'm probably not the one best able to do it. >> >> Speaking of combined code, it would also be nice to have a >> frontend.pygmentize module that covers pygments coloring for input, >> output, prompts, and tracebacks, providing a lexer and a style from >> config, (formatters would depend on the frontend), and also perhaps some >> object that takes a formatter and provides all these tools for the >> frontend, perhaps even descending from KernelManager and just having >> methods that manage these. That would be nice. I could work on that too, >> and would be happy too - I just noticed that at least Evan and I have >> written pygments code, and it would be nice to avoid code duplication. > I agree, and if you want to take what Evan has done and create a > common base class > that all frontends can use and appropriate frontend subclasses for qt, > curse, terminal > that would be great. Not sure it should (at least yet) be descending > from KernelManager though. > >> Anyways, I just feel like we've got 4 people working away on 4 frontends >> without too much communication going on about useful common code, and it >> would be nice to get that sort of work delegated out before we all go >> and write our own versions of the same tools. > Part of the challenge we are having is that there is so much code > being written currently that none of us can keep up with it all. Our > current model is that my kernelmanager branch is the "common code > base" that all 4 frontends are using. 
There is definitely repeated > code in the various frontends and over time we will move that into the > common base. But you are stepping in very early in the process, > before all the APIs are very solid. But part of what we want and need > is for the various frontend developers to give it a shot and let us > know what things they need in the common base. But I think the way > that needs to go is that each frontend does it on their own first, to > see what works best for them and then we try to identify the > commonalities. > >> Please let me know if this makes sense and is a good idea - I am >> certainly not the most knowledgeable here, and if I seem to be missing >> something, please let me know! > No, you have definitely noticed the most important point - this stuff > is not simple and it is definitely not done! But please let us know > if you have questions. > > Cheers, > > Brian > >> -Wendell >> >> On 08/13/2010 04:27 PM, Brian Granger wrote: >>> I have this in a browser tab and will review it soon. >>> >>> Cheers, >>> >>> Brian >>> >>> On Thu, Aug 12, 2010 at 1:13 AM, Fernando Perez wrote: >>>> On Wed, Aug 11, 2010 at 2:50 PM, Brian Granger wrote: >>>>> Very good points. I think we should just copy this description into the >>>>> message spec. >>>> I just updated the doc and pushed to trunk and a build with Min's text: >>>> >>>> http://ipython.scipy.org/doc/nightly/html/development/messaging.html#heartbeat-for-kernels >>>> >>>> Modulo final feedback, that design spec is reasonably complete, as far >>>> as I'm concerned. >>>> >>>> Thanks! 
>>>> >>>> f >>>> >>> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > From ellisonbg at gmail.com Sat Aug 14 12:48:15 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 14 Aug 2010 09:48:15 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: <4C661021.3030909@gmail.com> References: <4C65C182.7000807@gmail.com> <4C661021.3030909@gmail.com> Message-ID: On Fri, Aug 13, 2010 at 8:40 PM, Wendell Smith wrote: > Dear Brian, > > Thank you for your answer and your patience... you're really helping me > understand this, and I appreciate it! No problem! > As for curses... I've switched to the urwid library, which, by the way, > I have already mostly ported to py3k... and the urwid library is set up > to use any sort of asynchronous main loop you want, with a basic main > loop written into it, a tornado-based main loop, and a select-based > mainloop already written, and it's flexible, so one could write a main > loop on one's own. Input is non-blocking. That is great - it sounds quite flexible. One interesting point is that we are already using a tornado based event loop in the kernel manager. There might be some nice ways of integrating things without the current threaded channels we are using now. > As for my previous idea, maybe I'm still not understanding, but perhaps > we could still have a basic system with a kernelmanager object, a > send_receive function, and two queues, in and out. The send_receive > function reads messages from the out_queue and sends them, and then > receives messages from zmq through the ports and then puts them on the > in_queue, never really looking at what messages are coming in and out, > just putting them on the queues. 
The kernel manager object could then be > set up exactly as I said before, except that it has an additional > method, process_messages, in which it reads a message from the in_queue, > determines which method to call, and calls it; the methods do their > magic, printing, receiving input, whatever, and some put messages on the > out_queue. We have definitely thought about doing this type of thing. However, we didn't want to make that the only way of handling things because Qt has the signals/slots thing that works really well across threads. I think we should do something like you are proposing in subclasses, but with one change. The KernelManager is only responsible for running the kernel process and creating the channels. The APIs you are talking about will be on the channels themselves. But, you should subclass the channels, create those queues (the input queues are already there) and then override call_handlers to simply put the messages on the out queue. Then you can implement the callbacks and process_messages. It should work quite well and integrate well with the event loop you end up using. > As I see it, this sounds great: the queues can be from Queue.Queue, and > then everything is thread safe, as the send_receive function could be on > one thread, the kernelmanager methods on the other, and the two would > interact only through the thread-safe queues. For an asynchronous > approach, you have an option to set max_msgs and timeout for both the > send_receive function and the process_messages method, and call them > alternately with max_msgs = 1, timeout = 0. This then would make the > frontend programmer's job easy: all they need to do is get their main > loop to frequently call send_receive and kernelmanager.process_messages, > with max_num and timeout set appropriately, and then fill in the other > methods from the kernel manager to provide output and input. > > This is a bit simplified... 
we might need prioritized > code in the various frontends and over time we will move that into the > common base. But you are stepping in very early in the process, > before all the APIs are very solid. But part of what we want and need > is for the various frontend developers to give it a shot and let us > know what things they need in the common base.
Furthermore, in >> a terminal where there is no event loop, there really isn't a way of >> calling a method in a different thread. Thus, the yet-to-be-written >> subclasses of KernelManager and the channels will have to simply put >> the received message in a Queue and the main terminal thread will have >> to poll that queue. But an important question is this: does curses >> have a way of calling functions in the main thread? If not, we will >> have to develop a custom KernelManager and channel classes that use >> Queues and polling. I have started some of this in >> blockingkernelmanager. >> >> 2. We are moving towards a model where "the message is the API". Thus, >> we don't want to hide the actual messages from frontend code. You >> really need all of that information and because of (1), we can't >> really easily put it into handler methods. >> >> 3. We still don't quite know what is needed by different frontends, >> so it is difficult to identify the common code yet. As time goes on, >> we may realize all frontends use similar logic and we can abstract >> that out properly at the time. But for now we are in the >> wild-wild-west. >> >> 4. Everything is truly asynchronous. It takes a while to get used to >> this fact as it is *very* different than the old terminal based >> IPython. Because each frontend will handle that asynchronicity in >> different ways, it is very difficult to come up with abstractions >> beyond the messages that are universal. >> >>> Of course, this may not make sense, and I may not know what I'm talking >>> about - I don't know much about the zmq communication, and was sort of >>> hoping to stay focused on the fancy console frontend without delving too >>> deeply into that, but if others agree with me but no one with better >>> knowledge wants to do it, I would be happy to write the necessary code >>> myself, but again, I'm probably not the one best able to do it. 
>>> >>> Speaking of combined code, it would also be nice to have a >>> frontend.pygmentize module that covers pygments coloring for input, >>> output, prompts, and tracebacks, providing a lexer and a style from >>> config, (formatters would depend on the frontend), and also perhaps some >>> object that takes a formatter and provides all these tools for the >>> frontend, perhaps even descending from KernelManager and just having >>> methods that manage these. That would be nice. I could work on that too, >>> and would be happy to - I just noticed that at least Evan and I have >>> written pygments code, and it would be nice to avoid code duplication. >> I agree, and if you want to take what Evan has done and create a >> common base class >> that all frontends can use and appropriate frontend subclasses for qt, >> curses, terminal >> that would be great. Not sure it should (at least yet) be descending >> from KernelManager though. >> >>> Anyways, I just feel like we've got 4 people working away on 4 frontends >>> without too much communication going on about useful common code, and it >>> would be nice to get that sort of work delegated out before we all go >>> and write our own versions of the same tools. >> Part of the challenge we are having is that there is so much code >> being written currently that none of us can keep up with it all. Our >> current model is that my kernelmanager branch is the "common code >> base" that all 4 frontends are using. There is definitely repeated >> code in the various frontends and over time we will move that into the >> common base. But you are stepping in very early in the process, >> before all the APIs are very solid. But part of what we want and need >> is for the various frontend developers to give it a shot and let us >> know what things they need in the common base. 
But I think the way >> that needs to go is that each frontend does it on their own first, to >> see what works best for them and then we try to identify the >> commonalities. >> >>> Please let me know if this makes sense and is a good idea - I am >>> certainly not the most knowledgeable here, and if I seem to be missing >>> something, please let me know! >> No, you have definitely noticed the most important point - this stuff >> is not simple and it is definitely not done! But please let us know >> if you have questions. >> >> Cheers, >> >> Brian >> >>> -Wendell >>> >>> On 08/13/2010 04:27 PM, Brian Granger wrote: >>>> I have this in a browser tab and will review it soon. >>>> >>>> Cheers, >>>> >>>> Brian >>>> >>>> On Thu, Aug 12, 2010 at 1:13 AM, Fernando Perez wrote: >>>>> On Wed, Aug 11, 2010 at 2:50 PM, Brian Granger wrote: >>>>>> Very good points. I think we should just copy this description into the >>>>>> message spec. >>>>> I just updated the doc and pushed to trunk and a build with Min's text: >>>>> >>>>> http://ipython.scipy.org/doc/nightly/html/development/messaging.html#heartbeat-for-kernels >>>>> >>>>> Modulo final feedback, that design spec is reasonably complete, as far >>>>> as I'm concerned. >>>>> >>>>> Thanks! >>>>> >>>>> f >>>>> >>>> >>> _______________________________________________ >>> IPython-dev mailing list >>> IPython-dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/ipython-dev >>> >> >> > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From muzgash.lists at gmail.com Mon Aug 16 11:38:19 2010 From: muzgash.lists at gmail.com (Gerardo Gutierrez) Date: Mon, 16 Aug 2010 10:38:19 -0500 Subject: [IPython-dev] iptest issue Message-ID: Hi everyone. 
I'm writing some tests for ipythonqt and when I try to run iptest I get this message:

muzgash at yggdrasil:~/Projects/GSoC/IPythonQt/ipython/IPython/frontend/qt/nb$ iptest IPython.frontend.qt.nb
Traceback (most recent call last):
  File "/usr/local/bin/iptest", line 9, in <module>
    load_entry_point('ipython==0.11.alpha1.git', 'console_scripts', 'iptest')()
  File "/usr/local/lib/python2.6/dist-packages/ipython-0.11.alpha1.git-py2.6.egg/IPython/testing/iptest.py", line 439, in main
    run_iptest()
  File "/usr/local/lib/python2.6/dist-packages/ipython-0.11.alpha1.git-py2.6.egg/IPython/testing/iptest.py", line 375, in run_iptest
    TestProgram(argv=argv, plugins=plugins)
  File "/usr/lib/pymodules/python2.6/nose/core.py", line 113, in __init__
    argv=argv, testRunner=testRunner, testLoader=testLoader)
  File "/usr/lib/python2.6/unittest.py", line 816, in __init__
    self.parseArgs(argv)
  File "/usr/lib/pymodules/python2.6/nose/core.py", line 164, in parseArgs
    self.createTests()
  File "/usr/lib/pymodules/python2.6/nose/core.py", line 178, in createTests
    self.test = self.testLoader.loadTestsFromNames(self.testNames)
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 442, in loadTestsFromNames
    return unittest.TestLoader.loadTestsFromNames(self, names, module)
  File "/usr/lib/python2.6/unittest.py", line 613, in loadTestsFromNames
    suites = [self.loadTestsFromName(name, module) for name in names]
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 394, in loadTestsFromName
    discovered=discovered)
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 316, in loadTestsFromModule
    tests.extend(self.loadTestsFromDir(module_path))
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 165, in loadTestsFromDir
    entry_path, discovered=True)
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 394, in loadTestsFromName
    discovered=discovered)
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 316, in loadTestsFromModule
    tests.extend(self.loadTestsFromDir(module_path))
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 157, in loadTestsFromDir
    entry_path, discovered=True)
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 394, in loadTestsFromName
    discovered=discovered)
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 305, in loadTestsFromModule
    test_classes + test_funcs)
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 304, in <lambda>
    tests = map(lambda t: self.makeTest(t, parent=module),
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 511, in makeTest
    return self.loadTestsFromTestClass(obj)
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 475, in loadTestsFromTestClass
    for case in filter(wanted, dir(cls))]
  File "/usr/lib/pymodules/python2.6/nose/loader.py", line 521, in makeTest
    return MethodTestCase(obj)
  File "/usr/lib/pymodules/python2.6/nose/case.py", line 328, in __init__
    self.inst = self.cls()
TypeError: __init__() takes no arguments (1 given)

I've modified (correctly, I think) the setupbase.py file:

add_package(packages, 'frontend.qt.nb', tests=True)

I don't have a clue of what's happening. Thanks in advance. Best regards. -- Gerardo Gutiérrez Gutiérrez Physics student Universidad de Antioquia Computational physics and astrophysics group (FACom) Computational science and development branch (FACom-dev) Linux user #492295 From justin.t.riley at gmail.com Tue Aug 17 11:13:03 2010 From: justin.t.riley at gmail.com (Justin Riley) Date: Tue, 17 Aug 2010 11:13:03 -0400 Subject: [IPython-dev] SciPy Sprint summary In-Reply-To: References: <4C42B09F.50106@gmail.com> <4C43455F.1050508@gmail.com> <4C45B72F.5020000@gmail.com> Message-ID: <4C6AA6FF.30009@gmail.com> Hi Matthieu, Excellent :D Thanks for testing/reporting, ~Justin On 08/16/2010 07:21 AM, Matthieu Brucher wrote: > Hi Justin, > > I've finally managed to get some time and access to the same cluster. 
> It works like a charm, with a job array, so 100% fine :) > > Matthieu > > 2010/7/24 Matthieu Brucher: >>> Matthieu, I updated my 0.10.1-sge branch to address the LSF shell >>> redirection issue. Basically I create a bsub wrapper that does the >>> shell redirection and then pass the wrapper to getProcessOutput. I >>> don't believe Twisted's getProcessOutput will handle stdin redirection >>> so this is my solution for now. Would you mind testing this new code >>> with LSF? >> >> I'll have to wait until August, I'm on vacation with no access to the cluster ;) >> >> Matthieu >> -- >> Information System Engineer, Ph.D. >> Blog: http://matt.eifelle.com >> LinkedIn: http://www.linkedin.com/in/matthieubrucher >> > > > From andrei.avk at gmail.com Tue Aug 17 11:38:19 2010 From: andrei.avk at gmail.com (AK) Date: Tue, 17 Aug 2010 11:38:19 -0400 Subject: [IPython-dev] open() function? Message-ID: <4C6AACEB.9050609@gmail.com> Hi, why is the open() function used in IPython imported from the posix module instead of the built-in open()? (Ubuntu 10.04) Thanks, -ak From fperez.net at gmail.com Tue Aug 17 18:47:59 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 17 Aug 2010 15:47:59 -0700 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge Message-ID: Hi all (esp. frontend authors), In this branch http://github.com/fperez/ipython/tree/blockbreaker I've now completed a functional first pass on complete IPython input support, so that frontends can convert statically all 'ipython syntax' that can be determined statically. This includes all escapes (%, ?, !, !!, etc) plus things like 'a =! ls' and removal of python/ipython prompts from pasted input. 
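The kind of static transformation just described can be illustrated with a toy rewriter. This is not the actual inputsplitter code, and the rewritten call targets (`get_ipython().system`, `.magic`, and a hypothetical `.help`) are simplified stand-ins for what IPython really emits:

```python
import re

# Illustrative escape table only -- IPython's real rules live in
# IPython/core/inputsplitter.py and are considerably richer.
ESCAPES = {
    '!': 'get_ipython().system(%r)',
    '%': 'get_ipython().magic(%r)',
    '?': 'get_ipython().help(%r)',   # hypothetical helper name
}

def transform_line(line):
    """Rewrite one line of 'ipython syntax' into plain Python."""
    # Handle 'a =! ls' style shell-capture assignments first.
    m = re.match(r'\s*(\w+)\s*=\s*!(.*)', line)
    if m:
        name, cmd = m.groups()
        return '%s = get_ipython().system(%r)' % (name, cmd.strip())
    # Then leading escape characters.
    first = line.lstrip()[:1]
    if first in ESCAPES:
        return ESCAPES[first] % line.lstrip()[1:].strip()
    # Plain Python passes through untouched.
    return line

print(transform_line('!ls -l'))   # get_ipython().system('ls -l')
print(transform_line('a =! ls'))  # a = get_ipython().system('ls')
```

The point of doing this statically, as the branch does, is that a frontend can rewrite input before sending it to the kernel, with no execution machinery involved.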
Rather than being a mess of multiple little functions scattered all over ipython and interleaved with execution, logging, etc, everything now is in a single file: http://github.com/fperez/ipython/blob/blockbreaker/IPython/core/inputsplitter.py and it's got a solid set of tests: http://github.com/fperez/ipython/blob/blockbreaker/IPython/core/tests/test_inputsplitter.py which give 100% test coverage:

(blockbreaker)amirbar[tests]> nosetests --with-coverage --cover-erase --with-doctest --cover-package=IPython.core.inputsplitter IPython.core.inputsplitter test_inputsplitter.py
Name                         Stmts   Exec   Cover   Missing
----------------------------------------------------------
IPython.core.inputsplitter     240    240    100%
----------------------------------------------------------------------
Ran 57 tests in 0.132s

OK

So a few things: - from anyone: code review/feedback is welcome before I proceed to merge this. The only significant feature missing is probably the creation of a couple of objects to provide extensibility for user input filters, but I want to delay that until we land this in real use, so we see better what the right interface should be. For now, frontends have a tool they can use and their part of the API should be pretty stable (modulo any fixes that may be pointed in review). - from frontend authors: let me know if using this gives you any troubles, or if you see any missing feature that could make your life easier. This took a lot of work, but it's a major, critical chunk of ipython that is now well isolated, specified and tested. Since so much of what we do is provide extended syntax for daily use, rationalizing this was long overdue and I'm glad we took the time to do it right. This will let us shed tons of tricky, untestable codepaths. I should note that I didn't write this completely from scratch: while the code architecture is new, I reused all the existing little functions that did the low-level work (especially many tricky regexps). 
But as I integrated those, I added tests for each and every line. This gives us the benefit of a clean rethinking, without having gotten trapped into a 'second system syndrome' madness. On to the kernel :) Cheers, f From fperez.net at gmail.com Tue Aug 17 19:25:35 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 17 Aug 2010 16:25:35 -0700 Subject: [IPython-dev] iptest issue In-Reply-To: References: Message-ID: On Mon, Aug 16, 2010 at 8:38 AM, Gerardo Gutierrez wrote: > > > I've modified correctly (I think) setupbase.py file: > > add_package(packages, 'frontend.qt.nb',tests=True) > > I don't have a clue of what's hapening. No idea, I've never seen that error... Try running the tests just with nose: nosetests etc... Using iptest is necessary for the more esoteric stuff that needs a running, hidden ipython instance, but hopefully all new code can be tested by more normal mechanisms. Cheers, f From fperez.net at gmail.com Tue Aug 17 20:18:41 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 17 Aug 2010 17:18:41 -0700 Subject: [IPython-dev] open() function? In-Reply-To: <4C6AACEB.9050609@gmail.com> References: <4C6AACEB.9050609@gmail.com> Message-ID: On Tue, Aug 17, 2010 at 8:38 AM, AK wrote: > Hi, why is open() function used in ipython is imported from posix module > instead of the built-in open()? (Ubuntu 10.04)? Why do you say that? This is what I see running the IPython in 10.04: (python-bare)amirbar[virtualenv]> /usr/bin/ipython Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) Type "copyright", "credits" or "license" for more information. IPython 0.10 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: open? 
Type: builtin_function_or_method Base Class: Namespace: Python builtin Docstring: open(name[, mode[, buffering]]) -> file object Open a file using the file() type, returns a file object. This is the preferred way to open a file. I don't find any direct posix imports in the source either: amirbar[IPython]> grin posix amirbar[IPython]> Cheers f From andrei.avk at gmail.com Tue Aug 17 20:26:24 2010 From: andrei.avk at gmail.com (AK) Date: Tue, 17 Aug 2010 20:26:24 -0400 Subject: [IPython-dev] open() function? In-Reply-To: References: <4C6AACEB.9050609@gmail.com> Message-ID: <4C6B28B0.8000808@gmail.com> On 08/17/2010 08:18 PM, Fernando Perez wrote: > On Tue, Aug 17, 2010 at 8:38 AM, AK wrote: >> Hi, why is open() function used in ipython is imported from posix module >> instead of the built-in open()? (Ubuntu 10.04)? > > Why do you say that? This is what I see running the IPython in 10.04: > > (python-bare)amirbar[virtualenv]> /usr/bin/ipython > Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) > Type "copyright", "credits" or "license" for more information. > > IPython 0.10 -- An enhanced Interactive Python. > ? -> Introduction and overview of IPython's features. > %quickref -> Quick reference. > help -> Python's own help system. > object? -> Details about 'object'. ?object also works, ?? prints more. > > In [1]: open? > Type: builtin_function_or_method > Base Class: > Namespace: Python builtin > Docstring: > open(name[, mode[, buffering]]) -> file object > > Open a file using the file() type, returns a file object. This is the > preferred way to open a file. > > > I don't find any direct posix imports in the source either: > > amirbar[IPython]> grin posix > amirbar[IPython]> > > Cheers > > f > Ahh, sorry - I get it, I added import_all("os string re") to ipy_user_conf.py and posix module methods are available in os module. Mystery solved... 
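For the record, the resolution makes sense: on POSIX platforms the os module re-exports its low-level functions wholesale from the C-level posix module, so injecting all of os's names into the interactive namespace shadows the builtin open() with posix.open(). A quick check (guarded so it only runs on POSIX systems):

```python
import os

# On POSIX platforms, os.open and friends are literally the posix
# module's functions, re-exported -- so an import_all("os ...") style
# namespace dump surfaces "posix" functions over builtins.
same = None
if os.name == 'posix':
    import posix
    same = os.open is posix.open
    print(same)  # True on CPython
```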
thanks, -andrei From fperez.net at gmail.com Tue Aug 17 21:07:34 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 17 Aug 2010 18:07:34 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: <4C661021.3030909@gmail.com> References: <4C65C182.7000807@gmail.com> <4C661021.3030909@gmail.com> Message-ID: Hi Wendell, On Fri, Aug 13, 2010 at 8:40 PM, Wendell Smith wrote: > As for curses... I've switched to the urwid library, which, by the way, > I have already mostly ported to py3k... and the urwid library is set up > to use any sort of asynchronous main loop you want, with a basic main > loop written into it, a tornado-based main loop, and a select-based > mainloop already written, and it's flexible, so one could write a main > loop on one's own. Input is non-blocking. This is great news, I'm very happy to hear you've taken the py3k lead there. Numpy is just coming out with py3 support, scipy will follow soon, and I imagine at that point matplotlib will start worrying about py3k. So we want to have a clear story on that front. > As for my previous idea, maybe I'm still not understanding, but perhaps > we could still have a basic system with a kernelmanager object, a > send_receive function, and two queues, in and out. The send_receive > function reads messages from the out_queue and sends them, and then > receives messages from zmq through the ports and then puts them on the > in_queue, never really looking at what messages are coming in and out, > just putting them on the queues. The kernel manager object could then be > set up exactly as I said before, except that it has an additional > method, process_messages, in which it reads a message from the in_queue, > determines which method to call, and calls it; the methods do their > magic, printing, receiving input, whatever, and some put messages on the > out_queue. 
> > As I see it, this sounds great: the queues can be from Queue.Queue, and > then everything is thread safe, as the send_receive function could be on > one thread, the kernelmanager methods on the other, and the two would > interact only through the thread-safe queues. For an asynchronous > approach, you have an option to set max_msgs and timeout for both the > send_receive function and the process_messages method, and call them > alternately with max_msgs = 1, timeout = 0. This then would make the > frontend programmer's job easy: all they need to do is get their main > loop to frequently call send_receive and kernelmanager.process_messages, > with max_num and timeout set appropriately, and then fill in the other > methods from the kernel manager to provide output and input. Brian already gave you several key details, so I won't repeat too much. I do agree that having a functional api is desirable, especially to make calling some messages easier from code, with the functional api ensuring that all fields in the message header are properly filled without every caller having to fill every field manually. So we'll need to play with this as we understand better the needs and commonalities of all the frontends. Please do keep us posted of your progress and don't hesitate to ask. It's a bit unfortunate in a sense that we have 2 Google Summer of Code students, Evan and you, simultaneously needing this without Brian, Min and me having had the time to complete the architectural cleanup, but we'll do our best to minimize the feel of duplicated/wasted effort from everyone. Also, we're being pretty good about hanging out on #ipython in the freenode IRC server whenever we work on ipython, so if you have a quick question and you see any of us there, don't hesitate to ping. With my finishing of the syntax work and Brian having just started the kernel code, we're getting closer to a common layer. 
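A functional helper of the sort described, one that fills the header in automatically so callers never assemble it by hand, might look like this. The field names below follow the draft spec under discussion in this thread, but treat them as an assumption, not the final wire format:

```python
import uuid

def make_message(msg_type, session, username='user', **content):
    """Build a message dict with the header filled in automatically.

    Callers supply only the type, session, and content fields; the
    msg_id and header layout are handled here in one place.
    (Field names are assumptions based on the draft spec, not final.)
    """
    return {
        'header': {
            'msg_id': str(uuid.uuid4()),
            'session': session,
            'username': username,
            'msg_type': msg_type,
        },
        'msg_type': msg_type,
        'content': content,
    }

msg = make_message('execute_request', session='abc123', code='1 + 1')
print(msg['header']['msg_type'], msg['content']['code'])
```

With a helper like this, frontend code sends well-formed messages without duplicating header logic at every call site, which is exactly the kind of repetition a functional API is meant to remove.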
With a bit of patience and a lot of hard work, we should have soon something both fun and powerful. Cheers, f From wackywendell at gmail.com Tue Aug 17 23:50:48 2010 From: wackywendell at gmail.com (Wendell Smith) Date: Tue, 17 Aug 2010 23:50:48 -0400 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: <4C6B5898.8030901@gmail.com> Wow, that sounds excellent! I won't have time to look at it until next week, but that sounds like it will be fun to play with! -Wendell On 08/17/2010 06:47 PM, Fernando Perez wrote: > Hi all (esp. frontend authors), > > In this branch > http://github.com/fperez/ipython/tree/blockbreaker > > I've now completed a functional first pass on complete IPython input > support, so that frontends can convert statically all 'ipython syntax' > that can be determined statically. This includes all escapes (%, ?, > !, !!, etc) plus things like 'a =! ls' and removal of python/ipython > prompts from pasted input. > > Rather than being a mess of multiple little functions scattered all > over ipython and interleaved with execution, logging, etc, everything > now is in a single file: > > http://github.com/fperez/ipython/blob/blockbreaker/IPython/core/inputsplitter.py > > and it's got a solid set of tests: > > http://github.com/fperez/ipython/blob/blockbreaker/IPython/core/tests/test_inputsplitter.py > > which give 100% test coverage: > > (blockbreaker)amirbar[tests]> nosetests --with-coverage --cover-erase > --with-doctest --cover-package=IPython.core.inputsplitter > IPython.core.inputsplitter test_inputsplitter.py > Name Stmts Exec Cover Missing > ---------------------------------------------------------- > IPython.core.inputsplitter 240 240 100% > ---------------------------------------------------------------------- > Ran 57 tests in 0.132s > > OK > > > So a few things: > > - from anyone: code review/feedback is welcome before I proceed to > merge this. 
The only significant feature missing is probably the > creation of a couple of objects to provide extensibility for user > input filters, but I want to delay that until we land this in real > use, so we see better what the right interface should be. For now, > frontends have a tool they can use and their part of the API should be > pretty stable (modulo any fixes that may be pointed in review). > > - from frontend authors: let me know if using this gives you any > troubles, or if you see any missing feature that could make your life > easier. > > > This took a lot of work, but it's a major, critical chunk of ipython > that is now well isolated, specified and tested. Since so much of > what we do is provide extended syntax for daily use, rationalizing > this was long overdue and I'm glad we took the time to do it right. > This will let us shed tons of tricky, untestable codepaths. > > I should note that I didn't write this completely from scratch: while > the code architecture is new, I reused all the existing little > functions that did the low-level work (especially many tricky > regexps). But as I integrated those, I added tests for each and every > line. This gives us the benefit of a clean rethinking, without having > gotten trapped into a 'second system syndrome' madness. > > On to the kernel :) > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From wackywendell at gmail.com Wed Aug 18 00:10:12 2010 From: wackywendell at gmail.com (Wendell Smith) Date: Wed, 18 Aug 2010 00:10:12 -0400 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: References: <4C65C182.7000807@gmail.com> <4C661021.3030909@gmail.com> Message-ID: <4C6B5D24.6090604@gmail.com> Hi Fernando, Thanks for the reply... it's been really nice to see how helpful and responsive this team has been! 
I'm kinda busy for the next week or two, but then I'll get back to my frontend... it still has some issues before integration, and I think I'm going to let the API settle a bit before I try and get too involved. I do still have one more question... why are we doing so much in the channel threads? Why not have the channels simply send and receive, not even looking at the messages, and have hookups to call methods on some object in the main thread that will do the message parsing? That way the frontend managers only have to hook up 'message_received' and 'send_message' between the two threads, and aren't messing about too much with those threads... once they make those simple connections (through queues, signals, etc.), then one would only have to fill in methods on the main API object. That way seems to have two advantages: firstly, the inter-thread communication is limited to only 4 or 5 methods, making thread errors less likely, and secondly, the main API is then on one object, clearly in the main thread. So... I assume there's a good reason for doing everything in the channel threads... but I can't see it. Could someone explain? thanks, Wendell On 08/17/2010 09:07 PM, Fernando Perez wrote: > Hi Wendell, > > On Fri, Aug 13, 2010 at 8:40 PM, Wendell Smith wrote: >> As for curses... I've switched to the urwid library, which, by the way, >> I have already mostly ported to py3k... and the urwid library is set up >> to use any sort of asynchronous main loop you want, with a basic main >> loop written into it, a tornado-based main loop, and a select-based >> mainloop already written, and it's flexible, so one could write a main >> loop on one's own. Input is non-blocking. > This is great news, I'm very happy to hear you've taken the py3k lead > there. Numpy is just coming out with py3 support, scipy will follow > soon, and I imagine at that point matplotlib will start worrying about > py3k. So we want to have a clear story on that front. 
> >> As for my previous idea, maybe I'm still not understanding, but perhaps >> we could still have a basic system with a kernelmanager object, a >> send_receive function, and two queues, in and out. The send_receive >> function reads messages from the out_queue and sends them, and then >> receives messages from zmq through the ports and then puts them on the >> in_queue, never really looking at what messages are coming in and out, >> just putting them on the queues. The kernel manager object could then be >> set up exactly as I said before, except that it has an additional >> method, process_messages, in which it reads a message from the in_queue, >> determines which method to call, and calls it; the methods do their >> magic, printing, receiving input, whatever, and some put messages on the >> out_queue. >> >> As I see it, this sounds great: the queues can be from Queue.Queue, and >> then everything is thread safe, as the send_receive function could be on >> one thread, the kernelmanager methods on the other, and the two would >> interact only through the thread-safe queues. For an asynchronous >> approach, you have an option to set max_msgs and timeout for both the >> send_receive function and the process_messages method, and call them >> alternately with max_msgs = 1, timeout = 0. This then would make the >> frontend programmer's job easy: all they need to do is get their main >> loop to frequently call send_receive and kernelmanager.process_messages, >> with max_msgs and timeout set appropriately, and then fill in the other >> methods from the kernel manager to provide output and input. > Brian already gave you several key details, so I won't repeat too much. > > I do agree that having a functional api is desirable, especially to > make calling some messages easier from code, with the functional api > ensuring that all fields in the message header are properly filled > without every caller having to fill every field manually.
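The two-queue design in the quoted passage can be sketched with nothing but the Python standard library. The names send_receive, process_messages, in_queue and out_queue follow the description above, and plain lists stand in for the zmq sockets, so this is a conceptual sketch rather than the code that was eventually written:

```python
import queue

class KernelManagerSketch:
    """Toy kernel-manager side: reads messages from in_queue,
    dispatches on msg_type, and may put replies on out_queue."""
    def __init__(self, in_queue, out_queue):
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.printed = []

    def process_messages(self, max_msgs=1, timeout=0.1):
        # Would run on the main thread; pulls at most max_msgs messages.
        for _ in range(max_msgs):
            try:
                msg = self.in_queue.get(timeout=timeout)
            except queue.Empty:
                return
            handler = getattr(self, 'handle_' + msg['msg_type'], None)
            if handler is not None:
                handler(msg)

    def handle_stream(self, msg):
        self.printed.append(msg['content'])

def send_receive(wire_in, wire_out, in_queue, out_queue, max_msgs=1):
    """Toy I/O side: shuttles messages between the 'wire' (plain lists
    standing in for zmq sockets here) and the queues, without ever
    inspecting the message contents."""
    for _ in range(max_msgs):
        while not out_queue.empty():
            wire_out.append(out_queue.get())
        if wire_in:
            in_queue.put(wire_in.pop(0))

# One round trip, single-threaded for clarity; queue.Queue makes the
# same hand-off safe when the two functions live on different threads.
in_q, out_q = queue.Queue(), queue.Queue()
km = KernelManagerSketch(in_q, out_q)
send_receive([{'msg_type': 'stream', 'content': 'hello'}], [], in_q, out_q)
km.process_messages()
print(km.printed)  # ['hello']
```

The appeal of the scheme is exactly the point made above: the only cross-thread surface is the pair of thread-safe queues.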
So we'll > need to play with this as we understand better the needs and > commonalities of all the frontends. > > Please do keep us posted of your progress and don't hesitate to ask. > It's a bit unfortunate in a sense that we have 2 Google Summer of Code > students, Evan and you simultaneously needing this without Brian, Min > and I having had the time to complete the architectural cleanup, but > we'll do our best to minimize the feel of duplicated/wasted effort > from everyone. > > Also, we're being pretty good about hanging out on #ipython in the > freenode IRC server whenever we work on ipython, so if you have a > quick question and you see any of us there, don't hesitate to ping. > > With my finishing of the syntax work and Brian having just started the > kernel code, we're getting closer to a common layer. With a bit of > patience and a lot of hard work, we should have soon something both > fun and powerful. > > Cheers, > > f From fperez.net at gmail.com Wed Aug 18 04:45:38 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 18 Aug 2010 01:45:38 -0700 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: On Tue, Aug 17, 2010 at 3:47 PM, Fernando Perez wrote: > Hi all (esp. frontend authors), > > In this branch > http://github.com/fperez/ipython/tree/blockbreaker > > I've now completed a functional first pass on complete IPython input > support, so that frontends can convert statically all 'ipython syntax' > that can be determined statically. This includes all escapes (%, ?, > !, !!, etc) plus things like 'a =! ls' and removal of python/ipython > prompts from pasted input. OK, thanks a lot to Brian for the careful code review! This is now merged into trunk. Some small fixes remain to be done, but I want this in the hands of the frontend authors sooner rather than later. The remaining cleanups shouldn't affect the public API. Please let me know if you find any problems with it, esp.
Evan who's likely the first who will merge this... Cheers, f From fperez.net at gmail.com Wed Aug 18 04:56:04 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 18 Aug 2010 01:56:04 -0700 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: On Wed, Aug 18, 2010 at 1:45 AM, Fernando Perez wrote: > This is now merged into trunk. Some small fixes remain to be done, > but I want this in the hands of the frontend authors sooner rather > than later. The remaining cleanups shouldn't affect the public API. > > Please let me know if you find any problems with it, esp. Evan who's > likely the first who will merge this... ARGH. Sorry: I just saw a weird test failure and pulled the update. I'm too tired to debug it now but I don't want to break trunk in the middle of Brian's work. Since I can't fix this tomorrow morning, best to just pull it until I can push it cleanly into trunk. Lesson: never, never, ever push into trunk without *actually* running the test suite. You'd think I would know this by now... Ouch. Evan, you can instead merge from my branch as before: http://github.com/fperez/ipython/tree/blockbreaker and I'll add whatever is necessary to fix up the remaining part of the test suite before I push to trunk itself. But none of that should affect you. Apologies if anyone pulled from trunk in the few minutes the botched update was visible. Cheers, f From epatters at enthought.com Wed Aug 18 14:05:18 2010 From: epatters at enthought.com (Evan Patterson) Date: Wed, 18 Aug 2010 13:05:18 -0500 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: Hi Fernando, I've merged from your branch, but the IPythonInputSplitter is behaving strangely: whenever I give multi-line input, push_accepts_more always returns True. The base InputSplitter still works as expected. This may have something to do with the test failure you noticed....
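The behavior being debugged here, deciding whether buffered input is complete or whether the frontend should keep prompting for more lines, is conceptually what the standard library's codeop module provides. A rough sketch of the idea follows; this is not IPython's InputSplitter code, which additionally tracks indentation and IPython-specific syntax:

```python
import codeop

def push_accepts_more(source):
    """Rough analogue of InputSplitter.push_accepts_more: return True
    when the interpreter should keep reading lines for this input."""
    try:
        # compile_command returns None while the input is incomplete.
        return codeop.compile_command(source) is None
    except SyntaxError:
        # Input that can never become valid: stop accepting, report it.
        return False

print(push_accepts_more('if True:'))               # True: needs a body
print(push_accepts_more('if True:\n    x = 1\n'))  # False: complete block
print(push_accepts_more('x = ('))                  # True: open paren
```

A frontend's read loop can call a check like this after every line and keep showing the continuation prompt while it returns True.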
Evan On Wed, Aug 18, 2010 at 3:56 AM, Fernando Perez wrote: > On Wed, Aug 18, 2010 at 1:45 AM, Fernando Perez > wrote: > > This is now merged into trunk. Some small fixes remain to be done, > > but I want this in the hands of the frontend authors sooner rather > > than later. The remaining cleanups shouldn't affect the public API. > > > > Please let me know if you find any problems with it, esp. Evan who's > > likely the first who will merge this... > > ARGH. Sorry: I just saw a weird test failure and pulled the update. > I'm too tired to debug it now but I don't want to break trunk in the > middle of Brian's work. Since I can't fix this tomorrow morning, best > to just pull it until I can push it cleanly into trunk. > > Lesson: never, never, ever push into trunk without *actually* running > the test suite. You'd think I would know this by now... Ouch. > > Evan, you can instead merge from my branch as before: > > http://github.com/fperez/ipython/tree/blockbreaker > > and I'll add whatever is necessary to fix up the remaining part of the > test suite before I push to trunk itself. But none of that should > affect you. > > Apologies if anyone pulled from trunk in the few minutes the botched > update was visible. > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Aug 18 15:02:46 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 18 Aug 2010 12:02:46 -0700 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: Hey Evan, On Wed, Aug 18, 2010 at 11:05 AM, Evan Patterson wrote: > I've merged from your branch, but the IPythonInputSplitter is behaving > strangely: whenever I give multi-line input, push_accepts_more always return > True. 
The base InputSplitter still works as expected. > > This may have something to do with the test failure you noticed.... > OK, thanks for the report. I think I didn't test carefully enough the *multi line* input case, only the line-at-a-time. I'll look into this in a few hours and will report back, I have a couple of local things I need to take care of now. Cheers, f From ellisonbg at gmail.com Fri Aug 20 11:48:19 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 20 Aug 2010 08:48:19 -0700 Subject: [IPython-dev] So much for SGE... Message-ID: http://insidehpc.com/2010/08/20/sun-gridengine-now-100-less-free/ Cheers, Brian -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From satra at mit.edu Fri Aug 20 12:01:10 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Fri, 20 Aug 2010 12:01:10 -0400 Subject: [IPython-dev] So much for SGE... In-Reply-To: References: Message-ID: well OGE and LSF will now be relegated to the privileged :) I think torque/pbs just got a huge boost. cheers, satra On Fri, Aug 20, 2010 at 11:48 AM, Brian Granger wrote: > http://insidehpc.com/2010/08/20/sun-gridengine-now-100-less-free/ > > Cheers, > > Brian > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Fri Aug 20 14:16:17 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 20 Aug 2010 11:16:17 -0700 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: Howdy, On Wed, Aug 18, 2010 at 12:02 PM, Fernando Perez wrote: > > OK, thanks for the report. I think I didn't test carefully enough the > *multi line* input case, only the line-at-a-time. I'll look into this > in a few hours and will report back, I have a couple of local things I > need to take care of now. > I just merged blockbreaker into trunk, all tests pass. Later today I'll set up a shared newkernel branch in upstream so we can all sync more easily. Cheers, f From ellisonbg at gmail.com Fri Aug 20 14:55:23 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 20 Aug 2010 11:55:23 -0700 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: Evan and Fernando, I have merged both the updated trunk and Evan's qtfrontend branch into my newkernel branch. BUT, let's not create a new branch yet because Evan and I will be committing to our branches all day while you are at a meeting. Thus, until we can all sync properly, let's continue to use my newkernel branch as the main development branch for this work. Maybe later tonight once Evan and I are done coding and are fully synch'd again, we can create the new branch. Also: * The %edit magic is working. * All magics that use paging are now working. Evan will have to update the frontends to do something more sensible than the basic things I am doing (e.g., I just have the frontend print for paging for now). In terms of the original timeline that we outlined with Eric, we are pretty close to being on target. The only places we are lacking a bit are: * We said we would have exceptions coming back in a structured form by this week. * We said we would start the GUI integration stuff.
* We said we would have tab completion working. For very basic things, it is working, but the more I have played with it, the more I have found things that don't work. Even though technically, we are a little behind, I feel really good because I think we are further ahead in other important ways. The payload system has worked extremely well and has made things like edit and paging way easier than we thought. The payload system should also make it easy to get the other magics working (not sure which don't work right now). For next week here is what we have listed on the timeline: * Additional magics ported. I think we can just start to go through all of our magics and get the rest working. This is tedious, but shouldn't be too bad with the payload stuff in place. * Plotting support starts to work. This will be our big task for next week I think and could be a lot of work. * Finish the things we didn't finish from this week. * Other little things that have come up (I have a list). I have to run some errands this afternoon, but I will be around later. Should we plan on talking again on Monday at 9 am? Cheers, Brian On Fri, Aug 20, 2010 at 11:16 AM, Fernando Perez wrote: > Howdy, > > On Wed, Aug 18, 2010 at 12:02 PM, Fernando Perez wrote: >> >> OK, thanks for the report. I think I didn't test carefully enough the >> *multi line* input case, only the line-at-a-time. I'll look into this >> in a few hours and will report back, I have a couple of local things I >> need to take care of now. >> > > I just merged blockbreaker into trunk, all tests pass. > > Later today I'll set up a shared newkernel branch in upstream so we can > all sync more easily. > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D.
Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From epatters at enthought.com Fri Aug 20 15:16:19 2010 From: epatters at enthought.com (Evan Patterson) Date: Fri, 20 Aug 2010 14:16:19 -0500 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: Thanks for the detailed report, Brian. Let's plan on a 9 am PST Monday morning conference call. I will merge with you soon and begin integrating %edit and paging in the frontend. Evan On Fri, Aug 20, 2010 at 1:55 PM, Brian Granger wrote: > Evan and Fernando, > > I have merged both the updated trunk and Evan's qtfrontend branch into > my newkernel branch. BUT, let's not create a new branch yet because > Evan and I will be committing to our branches all day while you are at > a meeting. Thus, until we can all sync properly, let's continue to > use my newkernel branch as the main development branch for this work. > Maybe later tonight once Evan and I are done coding and are fully > synch'd again, we can create the new branch. > > Also: > > * The %edit magic is working. > * All magics that use paging are now working. > > Evan will have to update the frontends to do something more sensible > than the basic things I am doing (eg, I just have the frontend print > for paging for now). > In terms of the original timeline that we outlined with Eric, we are > pretty close to being on target. The only places we are lacking a bit > are: > > * We said we would have exceptions coming back in a structured form by > this week. > * We said we would start the GUI integration stuff. > * We said we would have tab completion working. For very basic > things, it is working, but the more I have played with it, the more I > have found that doesn't work. > > Even though technically, we are a little behind, I feel really good > because I think we are further ahead in other important ways. 
The > payload system has worked extremely well and has made things like edit > and paging way easier than we thought. The payload system should also > make it easy to get the other magics working (not sure which don't > work right now). > > For next week here is what we have listed on the timeline: > > * Additional magics ported. I think we can just start to go through > all of our magics and get the rest working. This is tedious, but > shouldn't be too bad with > the payload stuff in place. > * Plotting support starts to work. This will be our big task for next > week I think and could be a lot of work. > * Finish the things we didn't finish from this week. > * Other little things that have come up (I have a list). > > I have to run some errands this afternoon, but I will be around later. > Should we plan on talking again on Monday at 9 am? > > Cheers, > > Brian > > On Fri, Aug 20, 2010 at 11:16 AM, Fernando Perez > wrote: > > Howdy, > > > > On Wed, Aug 18, 2010 at 12:02 PM, Fernando Perez > wrote: > >> > >> OK, thanks for the report. I think I didn't test carefully enough the > >> *multi line* input case, only the line-at-a-time. I'll look into this > >> in a few hours and will report back, I have a couple of local things I > >> need to take care of now. > >> > > > > I just merged blockbreaker into trunk, all tests pass. > > > > Later today I'll set up a shared newkenel branch in upstream so we can > > all sync more easily. > > > > Cheers, > > > > f > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Fri Aug 20 21:01:48 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 20 Aug 2010 18:01:48 -0700 Subject: [IPython-dev] Full input syntax support finished and ready for review/merge In-Reply-To: References: Message-ID: On Fri, Aug 20, 2010 at 11:55 AM, Brian Granger wrote: > > > I have merged both the updated trunk and Evan's qtfrontend branch into > my newkernel branch. BUT, let's not create a new branch yet because > Evan and I will be committing to our branches all day while you are at > a meeting. Thus, until we can all sync properly, let's continue to > use my newkernel branch as the main development branch for this work. > Maybe later tonight once Evan and I are done coding and are fully > synch'd again, we can create the new branch. Sounds good, I'll ping you tonight to sync. I've created a top-level entry point again because we should maintain IPython/scripts/ipython for the foreseeable future as the default entry point for a terminal. It's annoying to switch branches and have the entry points move around. Once we stabilize the main frontends we'll think about good naming/organization for these, but for now the default entry point should maintain a backwards-compatible location. > Also: [...] Great work! And excellent summary. > I have to run some errands this afternoon, but I will be around later. > Should we plan on talking again on Monday at 9 am? Sounds good. Cheers, f From fperez.net at gmail.com Sun Aug 22 17:23:18 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 22 Aug 2010 14:23:18 -0700 Subject: [IPython-dev] newkernel branch ready Message-ID: Hi folks, I just pushed a public branch: http://github.com/ipython/ipython/tree/newkernel that merges Evan's and Brian's recent kernel work and all the inputsplitter work as well. Now that we are all working on the newkernel, we need a common shared space to work on, otherwise the N^2 cross-pulling will make us go mad.
Once this stabilizes, we'll merge it into trunk itself, but for now trunk will remain unmodified. Cheers, f From justin.t.riley at gmail.com Mon Aug 23 02:25:08 2010 From: justin.t.riley at gmail.com (Justin Riley) Date: Mon, 23 Aug 2010 02:25:08 -0400 Subject: [IPython-dev] So much for SGE... In-Reply-To: References: Message-ID: Wow that sucks. I wonder if this applies to versions prior to 6.2u6? ~Justin On Fri, Aug 20, 2010 at 12:01 PM, Satrajit Ghosh wrote: > well OGE and LSF will now be relegated to the privileged :) I think > torque/pbs just got a huge boost. > > cheers, > > satra > > > On Fri, Aug 20, 2010 at 11:48 AM, Brian Granger wrote: >> >> http://insidehpc.com/2010/08/20/sun-gridengine-now-100-less-free/ >> >> Cheers, >> >> Brian >> >> -- >> Brian E. Granger, Ph.D. >> Assistant Professor of Physics >> Cal Poly State University, San Luis Obispo >> bgranger at calpoly.edu >> ellisonbg at gmail.com >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > From fperez.net at gmail.com Mon Aug 23 03:31:42 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 23 Aug 2010 00:31:42 -0700 Subject: [IPython-dev] Exceptions and completion support in newkernel Message-ID: Hi folks, I've just pushed a bunch of work here: http://github.com/fperez/ipython/tree/newkernel that we should be able to merge tomorrow into the new shared newkernel branch. This updates the exception handling mechanism to use proper structured tracebacks throughout, as well as providing a refactoring of tab completion to make it more independent of readline and more suitable for use in a messaging environment. Evan, a few notes: - I didn't know what to do here. 
Not a big deal, we can take care of it later: http://github.com/fperez/ipython/blob/newkernel/IPython/frontend/qt/console/frontend_widget.py#L297 - For now, tracebacks are ansi-loaded, so I disabled html reporting: http://github.com/fperez/ipython/blob/newkernel/IPython/frontend/qt/console/ipython_widget.py#L115 Evan, let me know how it goes with this, and tomorrow we can push it to upstream/newkernel after review. Cheers, f From satra at mit.edu Mon Aug 23 10:46:32 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Mon, 23 Aug 2010 10:46:32 -0400 Subject: [IPython-dev] So much for SGE... In-Reply-To: References: Message-ID: you can still download 6.2u5 but the page describing the sun opensource license does not appear to exist any more. cheers, satra ps. OGE == PBS (keyboard warping effect) On Mon, Aug 23, 2010 at 2:25 AM, Justin Riley wrote: > Wow that sucks. > > I wonder if this applies to versions prior to 6.2u6? > > ~Justin > > On Fri, Aug 20, 2010 at 12:01 PM, Satrajit Ghosh wrote: > > well OGE and LSF will now be relegated to the privileged :) I think > > torque/pbs just got a huge boost. > > > > cheers, > > > > satra > > > > > > On Fri, Aug 20, 2010 at 11:48 AM, Brian Granger > wrote: > >> > >> http://insidehpc.com/2010/08/20/sun-gridengine-now-100-less-free/ > >> > >> Cheers, > >> > >> Brian > >> > >> -- > >> Brian E. Granger, Ph.D. > >> Assistant Professor of Physics > >> Cal Poly State University, San Luis Obispo > >> bgranger at calpoly.edu > >> ellisonbg at gmail.com > >> _______________________________________________ > >> IPython-dev mailing list > >> IPython-dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Mon Aug 23 14:20:40 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 23 Aug 2010 11:20:40 -0700 Subject: [IPython-dev] Spurious newline in prompts in newkernel? Message-ID: Hey guys, [Evan might know what's going on here] I'm seeing in the newkernel branch extra newlines somewhere: - what trunk looks like: In [1]: np.arange(2,20,3,np.float64) Out[1]: array([ 2., 5., 8., 11., 14., 17.]) In [2]: - what newkernel produces: In [1]: np.arange(2,20,3,np.float64) Out[1]: array([ 2., 5., 8., 11., 14., 17.]) In [2]: There's one extra newline somewhere in the prompts. I'm guessing it's the one before the input prompts, but I could be wrong. This isn't a big deal right now, I just mention it in case you know what happened, as I recall Evan was playing with separators recently. Cheers, f From ellisonbg at gmail.com Mon Aug 23 14:23:50 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 23 Aug 2010 11:23:50 -0700 Subject: [IPython-dev] Spurious newline in prompts in newkernel? In-Reply-To: References: Message-ID: Hmmm, here is what 0.10 looks like for me: In [1]: a = 10 In [2]: a Out[2]: 10 In [3]: On Mon, Aug 23, 2010 at 11:20 AM, Fernando Perez wrote: > Hey guys, > > [Evan might know what's going on here] I'm seeing in the newkernel > branch extra newlines somewhere: > > - what trunk looks like: > > In [1]: np.arange(2,20,3,np.float64) > Out[1]: array([ 2., 5., 8., 11., 14., 17.]) > > In [2]: > > - what newkernel produces: > In [1]: np.arange(2,20,3,np.float64) > > Out[1]: array([ 2., 5., 8., 11., 14., 17.]) > > > In [2]: > > > There's one extra newline somewhere in the prompts. I'm guessing it's > the one before the input prompts, but I could be wrong. > > This isn't a big deal right now, I just mention it in case you know > what happened, as I recall Evan was playing with separators recently.
> > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From epatters at enthought.com Mon Aug 23 14:45:05 2010 From: epatters at enthought.com (Evan Patterson) Date: Mon, 23 Aug 2010 13:45:05 -0500 Subject: [IPython-dev] Spurious newline in prompts in newkernel? In-Reply-To: References: Message-ID: When we were discussing 'input_sep' and 'output_set' last week, I remarked facetiously to Fernando that I thought I was losing my mind because I remembered seeing what Fernando has listed for the trunk version, but when I tried it (on newkernel) I saw the output he has listed above. I'm starting to suspect that maybe something *has* changed along the way. Evan On Mon, Aug 23, 2010 at 1:23 PM, Brian Granger wrote: > Hmmm, here is what 0.10 looks like for me: > > In [1]: a = 10 > > In [2]: a > > Out[2]: 10 > > > In [3]: > > > > On Mon, Aug 23, 2010 at 11:20 AM, Fernando Perez > wrote: > > Hey guys, > > > > [Evan might know what's going on here] I'm seeing in the newkernel > > branch extra newlines somewhere: > > > > - what trunk looks like: > > > > In [1]: np.arange(2,20,3,np.float64) > > Out[1]: array([ 2., 5., 8., 11., 14., 17.]) > > > > In [2]: > > > > - what newkernel produces: > > In [1]: np.arange(2,20,3,np.float64) > > > > Out[1]: array([ 2., 5., 8., 11., 14., 17.]) > > > > > > In [2]: > > > > > > There's one extra newline somewhere in the prompts. I'm guessing it's > > the one before the input prompts, but I could be wrong. > > > > This isn't a big deal right now, I just mention it in case you know > > what happened, as I recall Evan was playing with separators recently. 
> > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Mon Aug 23 14:50:33 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 23 Aug 2010 11:50:33 -0700 Subject: [IPython-dev] Spurious newline in prompts in newkernel? In-Reply-To: References: Message-ID: I think I probably just messed up the defaults in 0.11 (trunk). I think we should follow whatever 0.10 does unless we want to reconsider the defaults. Brian On Mon, Aug 23, 2010 at 11:45 AM, Evan Patterson wrote: > When we were discussing 'input_sep' and 'output_set' last week, I remarked > facetiously to Fernando that I thought I was losing my mind because I > remembered seeing what Fernando has listed for the trunk version, but when I > tried it (on newkernel) I saw the output he has listed above. > > I'm starting to suspect that maybe something *has* changed along the way. > > Evan > > On Mon, Aug 23, 2010 at 1:23 PM, Brian Granger wrote: >> >> Hmmm, here is what 0.10 looks like for me: >> >> In [1]: a = 10 >> >> In [2]: a >> >> Out[2]: 10 >> >> >> In [3]: >> >> >> >> On Mon, Aug 23, 2010 at 11:20 AM, Fernando Perez >> wrote: >> > Hey guys, >> > >> > [Evan might know what's going on here] I'm seeing in the newkernel >> > branch extra newlines somewhere: >> > >> > - what trunk looks like: >> > >> > In [1]: np.arange(2,20,3,np.float64) >> > Out[1]: array([ 2., 5., 8., 11., 14., 17.]) >> > >> > In [2]: >> > >> > - what newkernel produces: >> > In [1]: np.arange(2,20,3,np.float64) >> > >> > Out[1]: array([ 2., 5., 8., 11., 14., 17.]) >> > >> > >> > In [2]: >> > >> > >> > There's one extra newline somewhere in the prompts. I'm guessing it's >> > the one before the input prompts, but I could be wrong. >> > >> > This isn't a big deal right now, I just mention it in case you know >> > what happened, as I recall Evan was playing with separators recently. >> > >> > Cheers, >> > >> > f >> > _______________________________________________ >> > IPython-dev mailing list >> > IPython-dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> > >> >> >> >> -- >> Brian E. Granger, Ph.D. >> Assistant Professor of Physics >> Cal Poly State University, San Luis Obispo >> bgranger at calpoly.edu >> ellisonbg at gmail.com >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From fperez.net at gmail.com Mon Aug 23 20:53:54 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 23 Aug 2010 17:53:54 -0700 Subject: [IPython-dev] Spurious newline in prompts in newkernel? In-Reply-To: References: Message-ID: On Mon, Aug 23, 2010 at 11:23 AM, Brian Granger wrote: > Hmmm, here is what 0.10 looks like for me: > > In [1]: a = 10 > > In [2]: a > > Out[2]: 10 > > Those aren't the defaults, that's picking up your config. If you start an ipython 0.10.x without a ~/.ipython dir, you get: (ipython-0.10.1)uqbar[~]> mv .ipython/ .ipython.tmp (ipython-0.10.1)uqbar[~]> ip ********************************************************************** Welcome to IPython.
I will try to create a personal configuration directory where you can customize many aspects of IPython's functionality in: /home/fperez/.ipython Initializing from configuration: /home/fperez/usr/opt/virtualenv/ipython-0.10.1/lib/python2.6/site-packages/IPython/UserConfig Successful installation! [.... snipped] ********************************************************************** Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) Type "copyright", "credits" or "license" for more information. IPython 0.10.1 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: a=10 In [2]: a Out[2]: 10 In [3]: Do you really want to exit ([y]/n)? (ipython-0.10.1)uqbar[~]> I chose those defaults in my ocd-driven frenzy to: - visually group related input/output together - visually separate each input from its neighbors This set of defaults seemed to be the best balance between using vertical space conservatively while providing some separation between commands. We can tweak things later to get the above behavior back, no worries. I just wanted to note it so we don't forget. Cheers, f From ellisonbg at gmail.com Mon Aug 23 23:09:32 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 23 Aug 2010 20:09:32 -0700 Subject: [IPython-dev] Spurious newline in prompts in newkernel? In-Reply-To: References: Message-ID: Haha, yes, I do have a config file that I was messing with the other day. I will change the defaults in trunk/newkernel back to that of 0.10. Cheers, Brian On Mon, Aug 23, 2010 at 5:53 PM, Fernando Perez wrote: > On Mon, Aug 23, 2010 at 11:23 AM, Brian Granger wrote: >> Hmmm, here is what 0.10 looks like for me: >> >> In [1]: a = 10 >> >> In [2]: a >> >> Out[2]: 10 >> >> > > Those aren't the defaults, that's picking up your config. 
If you
> start an ipython 0.10.x without a ~/.ipython dir, you get:
>
> (ipython-0.10.1)uqbar[~]> mv .ipython/ .ipython.tmp
> (ipython-0.10.1)uqbar[~]> ip
> **********************************************************************
> Welcome to IPython. I will try to create a personal configuration directory
> where you can customize many aspects of IPython's functionality in:
>
> /home/fperez/.ipython
> Initializing from configuration:
> /home/fperez/usr/opt/virtualenv/ipython-0.10.1/lib/python2.6/site-packages/IPython/UserConfig
>
> Successful installation!
>
> [.... snipped]
> **********************************************************************
> Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
> Type "copyright", "credits" or "license" for more information.
>
> IPython 0.10.1 -- An enhanced Interactive Python.
> ?         -> Introduction and overview of IPython's features.
> %quickref -> Quick reference.
> help      -> Python's own help system.
> object?   -> Details about 'object'. ?object also works, ?? prints more.
>
> In [1]: a=10
>
> In [2]: a
> Out[2]: 10
>
> In [3]:
> Do you really want to exit ([y]/n)?
> (ipython-0.10.1)uqbar[~]>
>
> I chose those defaults in my ocd-driven frenzy to:
>
> - visually group related input/output together
>
> - visually separate each input from its neighbors
>
> This set of defaults seemed to be the best balance between using
> vertical space conservatively while providing some separation between
> commands.
>
> We can tweak things later to get the above behavior back, no worries.
> I just wanted to note it so we don't forget.
>
> Cheers,
>
> f
>

--
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From ellisonbg at gmail.com Mon Aug 23 23:33:56 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 23 Aug 2010 20:33:56 -0700
Subject: [IPython-dev] Spurious newline in prompts in newkernel?
In-Reply-To:
References:
Message-ID:

OK, I fixed this and it will go into newkernel with my next merge.

Brian

On Mon, Aug 23, 2010 at 8:09 PM, Brian Granger wrote:
> Haha, yes, I do have a config file that I was messing with the other
> day.  I will change the defaults in trunk/newkernel back to that of
> 0.10.
>
> Cheers,
>
> Brian
>
> On Mon, Aug 23, 2010 at 5:53 PM, Fernando Perez wrote:
>> On Mon, Aug 23, 2010 at 11:23 AM, Brian Granger wrote:
>>> Hmmm, here is what 0.10 looks like for me:
>>>
>>> In [1]: a = 10
>>>
>>> In [2]: a
>>>
>>> Out[2]: 10
>>>
>>>
>>
>> Those aren't the defaults, that's picking up your config.  If you
>> start an ipython 0.10.x without a ~/.ipython dir, you get:
>>
>> (ipython-0.10.1)uqbar[~]> mv .ipython/ .ipython.tmp
>> (ipython-0.10.1)uqbar[~]> ip
>> **********************************************************************
>> Welcome to IPython. I will try to create a personal configuration directory
>> where you can customize many aspects of IPython's functionality in:
>>
>> /home/fperez/.ipython
>> Initializing from configuration:
>> /home/fperez/usr/opt/virtualenv/ipython-0.10.1/lib/python2.6/site-packages/IPython/UserConfig
>>
>> Successful installation!
>>
>> [.... snipped]
>> **********************************************************************
>> Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
>> Type "copyright", "credits" or "license" for more information.
>>
>> IPython 0.10.1 -- An enhanced Interactive Python.
>> ?         -> Introduction and overview of IPython's features.
>> %quickref -> Quick reference.
>> help      -> Python's own help system.
>> object?   -> Details about 'object'. ?object also works, ?? prints more.
>>
>> In [1]: a=10
>>
>> In [2]: a
>> Out[2]: 10
>>
>> In [3]:
>> Do you really want to exit ([y]/n)?
>> (ipython-0.10.1)uqbar[~]> >> >> I chose those defaults in my ocd-driven frenzy to: >> >> - visually group related input/output together >> >> - visually separate each input from its neighbors >> >> This set of defaults seemed to be the best balance between using >> vertical space conservatively while providing some separation between >> commands. >> >> We can tweak things later to get the above behavior back, no worries. >> I just wanted to note it so we don't forget. >> >> Cheers, >> >> f >> > > > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Tue Aug 24 00:24:02 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 23 Aug 2010 21:24:02 -0700 Subject: [IPython-dev] Message spec draft more fleshed out In-Reply-To: <4C6B5D24.6090604@gmail.com> References: <4C65C182.7000807@gmail.com> <4C661021.3030909@gmail.com> <4C6B5D24.6090604@gmail.com> Message-ID: On Tue, Aug 17, 2010 at 9:10 PM, Wendell Smith wrote: > ?Hi Fernando, > > Thanks for the reply... it's been really nice to see how helpful and > responsive this team has been! > > I'm kinda busy for the next week or two, but then I'll get back to my > frontend... it still has some issues before integration, and I think I'm > going to let the API settle a bit before I try and get too involved. OK great. > I do still have one more question... why are we doing so much in the > channel threads? Why not have the channels simply send and receive, not > even looking at the messages, and have hookups to call methods on some > object in the main thread that will do the message parsing? Currently that is really all that the channels are doing. I don't think they actually look at the msg formats at all anymore. 
They used to do that, but I took that logic out of them.

> That way the
> frontend managers only have to hook up 'message_received' and
> 'send_message' between the two threads, and aren't messing about too
> much with those threads... once they make those simple connections
> (through queues, signals, etc.), then one would only have to fill in
> methods on the main API object. That way seems to have two advantages:
> firstly, the inter-thread communication is limited to only 4 or 5
> methods, making thread errors less likely, and secondly, the main API is
> then on one object, clearly in the main thread.
>
> So... I assume there's a good reason for doing everything in the channel
> threads... but I can't see it. Could someone explain?

We could do everything in one thread, but that would require much more careful thinking about how to integrate our zmq event loops (that are currently in the channel threads) with the main event loop on the GUI. This is a much more subtle and difficult problem that we end up avoiding with the channel threads. We truly do need event loops for each channel. We could combine them into 1 event loop, but we like the idea of keeping them separate, because some clients may want to only use a subset of the channels.

Cheers,

Brian

> thanks,
> Wendell
>
>
> On 08/17/2010 09:07 PM, Fernando Perez wrote:
>> Hi Wendell,
>>
>> On Fri, Aug 13, 2010 at 8:40 PM, Wendell Smith wrote:
>>> As for curses... I've switched to the urwid library, which, by the way,
>>> I have already mostly ported to py3k... and the urwid library is set up
>>> to use any sort of asynchronous main loop you want, with a basic main
>>> loop written into it, a tornado-based main loop, and a select-based
>>> mainloop already written, and it's flexible, so one could write a main
>>> loop on one's own. Input is non-blocking.
>> This is great news, I'm very happy to hear you've taken the py3k lead
>> there.
Numpy is just coming out with py3 support, scipy will follow
>> soon, and I imagine at that point matplotlib will start worrying about
>> py3k.  So we want to have a clear story on that front.
>>
>>> As for my previous idea, maybe I'm still not understanding, but perhaps
>>> we could still have a basic system with a kernelmanager object, a
>>> send_receive function, and two queues, in and out. The send_receive
>>> function reads messages from the out_queue and sends them, and then
>>> receives messages from zmq through the ports and then puts them on the
>>> in_queue, never really looking at what messages are coming in and out,
>>> just putting them on the queues. The kernel manager object could then be
>>> set up exactly as I said before, except that it has an additional
>>> method, process_messages, in which it reads a message from the in_queue,
>>> determines which method to call, and calls it; the methods do their
>>> magic, printing, receiving input, whatever, and some put messages on the
>>> out_queue.
>>>
>>> As I see it, this sounds great: the queues can be from Queue.Queue, and
>>> then everything is thread safe, as the send_receive function could be on
>>> one thread, the kernelmanager methods on the other, and the two would
>>> interact only through the thread-safe queues. For an asynchronous
>>> approach, you have an option to set max_msgs and timeout for both the
>>> send_receive function and the process_messages method, and call them
>>> alternately with max_msgs = 1, timeout = 0. This then would make the
>>> frontend programmer's job easy: all they need to do is get their main
>>> loop to frequently call send_receive and kernelmanager.process_messages,
>>> with max_msgs and timeout set appropriately, and then fill in the other
>>> methods from the kernel manager to provide output and input.
>> Brian already gave you several key details, so I won't repeat too much.
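[Editorial sketch: the two-queue kernelmanager design quoted above can be written out roughly as follows. All names here are made up for illustration only, not IPython's actual API; `transport` stands in for whatever object does the real wire I/O (a zmq socket wrapper in IPython's case), assumed to offer `send(msg)` and `poll() -> list of msgs`.]

```python
import queue


class KernelManagerSketch:
    """Rough sketch of the two-queue frontend design quoted above."""

    def __init__(self):
        self.in_queue = queue.Queue()    # messages arriving from the kernel
        self.out_queue = queue.Queue()   # messages waiting to be sent
        self.outputs = []                # collected output, for demonstration

    # ---- main-thread side: touches nothing but the two queues ----
    def execute(self, code):
        # Enqueue an outgoing request; the network thread will send it.
        self.out_queue.put({'type': 'execute_request', 'code': code})

    def process_messages(self, max_msgs=10):
        """Drain up to max_msgs pending messages and dispatch each one."""
        for _ in range(max_msgs):
            try:
                msg = self.in_queue.get_nowait()
            except queue.Empty:
                return
            handler = getattr(self, 'handle_' + msg['type'], None)
            if handler is not None:
                handler(msg)

    def handle_output(self, msg):
        self.outputs.append(msg['text'])

    # ---- network-thread side: shuttles messages, never parses them ----
    def send_receive(self, transport, timeout=0.1):
        """One pump iteration; meant to be called in a loop in a worker
        thread.  It never looks inside the messages it moves."""
        try:
            transport.send(self.out_queue.get(timeout=timeout))
        except queue.Empty:
            pass
        for msg in transport.poll():
            self.in_queue.put(msg)
```

The point of the layout is the one Wendell makes: the worker thread and the main thread share nothing but the two thread-safe queues, so the inter-thread surface stays tiny and all parsing happens on one object in the main thread.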
>> >> I do agree that having a functional api is desirable, especially to >> make calling some messages easier from code, with the functional api >> ensuring that all fields in the message header are properly filled >> without every caller having to fill every field manually. ?So we'll >> need to play with this as we understand better the needs and >> commonalities of all the frontends. >> >> Please do keep us posted of your progress and don't hesitate to ask. >> It's a bit unfortunate in a sense that we have 2 Google Summer of Code >> students, Evan and you simultaneously needing this without Brian, Min >> and I having had the time to complete the architectural cleanup, but >> we'll do our best to minimize the feel of duplicated/wasted effort >> from everyone. >> >> Also, we're being pretty good about hanging out on #ipython in the >> freenode IRC server whenever we work on ipython, so if you have a >> quick question and you see any of us there, don't hesitate to ping. >> >> With my finishing of the syntax work and Brian having just started the >> kernel code, we're getting closer to a common layer. ?With a bit of >> patience and a lot of hard work, we should have soon something both >> fun and powerful. >> >> Cheers, >> >> f > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Tue Aug 24 00:35:51 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 23 Aug 2010 21:35:51 -0700 Subject: [IPython-dev] Monkeying around.. Message-ID: Interesting new site: http://monkeyanalytics.com Cheers, Brian -- Brian E. Granger, Ph.D. 
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From fperez.net at gmail.com Tue Aug 24 02:31:56 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 23 Aug 2010 23:31:56 -0700
Subject: [IPython-dev] Spurious newline in prompts in newkernel?
In-Reply-To:
References:
Message-ID:

On Mon, Aug 23, 2010 at 8:33 PM, Brian Granger wrote:
> OK, I fixed this and it will go into newkernel with my next merge.
>

Thanks :)

f

From fperez.net at gmail.com Tue Aug 24 03:18:36 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 24 Aug 2010 00:18:36 -0700
Subject: [IPython-dev] Monkeying around..
In-Reply-To:
References:
Message-ID:

On Mon, Aug 23, 2010 at 9:35 PM, Brian Granger wrote:
> Interesting new site:
>
> http://monkeyanalytics.com
>

Neat, thanks for the link!

f

From almar.klein at gmail.com Tue Aug 24 11:06:37 2010
From: almar.klein at gmail.com (Almar Klein)
Date: Tue, 24 Aug 2010 17:06:37 +0200
Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's
Message-ID:

Hi Fernando and others,

I'm developing an IDE for Python (http://code.google.com/p/iep/) that is capable of integrating the event loop of several GUI toolkits. On a side note, I used much code of IPython as inspiration on how to do that, so thanks for that.

I saw in the IPython documentation that IPython users can detect whether IPython hijacked the event loop as follows (for wx):

try:
    from IPython import appstart_wx
    appstart_wx(app)
except ImportError:
    app.MainLoop()

A very nifty feature indeed. However, building further on this, wouldn't it be nice if people could perform this trick regardless of in which IDE or shell the code is running? Therefore I propose to insert an object in the GUI's module to indicate that the GUI event loop does not need to be entered.
I currently use for my IDE:

import wx
if not hasattr(wx, '_integratedEventLoop'):
    app = wx.PySimpleApp()
    app.MainLoop()

Currently, _integratedEventLoop is a string with the value 'IEP', indicating who hijacked the main loop. I'm not sure what IPython's appstart_* function does, but the inserted object might just as well be a function that needs to be called (using the app instance as an argument, but how to call it for fltk or gtk then?).

I'm interested to know what you think of this idea.
  Almar

PS: for PyQt4, I would propose inserting the object both in the PyQt4 and PyQt4.QtGui namespaces.

From jorgen.stenarson at bostream.nu Tue Aug 24 13:11:00 2010
From: jorgen.stenarson at bostream.nu (Jörgen Stenarson)
Date: Tue, 24 Aug 2010 19:11:00 +0200
Subject: [IPython-dev] Tab-completion in master
Message-ID: <4C73FD24.70807@bostream.nu>

Hi,

I have a problem with tab-completion in the master branch when running on windows with pyreadline. If there are both magics and regular functions matching I get a list of matches but the line I started with is removed, if there is only magics or regular matches it works as expected. I think in 0.10 or earlier magics were only matched with an explicit % at the beginning of the line.

example:

In [1]: r
%reset           reversed         raw_input        %rehashx
%rep             reduce           reload           round
return           repr             range            README.txt
raise            %run             ren              %r
%reload_ext      %reset_selective rmdir

In [1]:

with longer string

In [1]: res
%reset           %reset_selective

In [1]: %reset

Is the behaviour the same on Linux?
/Jörgen

From fperez.net at gmail.com Tue Aug 24 13:18:36 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 24 Aug 2010 10:18:36 -0700
Subject: [IPython-dev] Tab-completion in master
In-Reply-To: <4C73FD24.70807@bostream.nu>
References: <4C73FD24.70807@bostream.nu>
Message-ID:

On Tue, Aug 24, 2010 at 10:11 AM, Jörgen Stenarson wrote:
> Hi,
>
> I have a problem with tab-completion in the master branch when running
> on windows with pyreadline. If there are both magics and regular
> functions matching I get a list of matches but the line I started with
> is removed, if there is only magics or regular matches it works as
> expected. I think in 0.10 or earlier magics were only matched with an
> explicit % at the beginning of the line.
>
> example:
> In [1]: r
> %reset           reversed         raw_input        %rehashx
> %rep             reduce           reload           round
> return           repr             range            README.txt
> raise            %run             ren              %r
> %reload_ext      %reset_selective rmdir
>
> In [1]:
>
> with longer string
>
> In [1]: res
> %reset           %reset_selective
>
> In [1]: %reset
>
>
> Is the behaviour the same on Linux?

Yes, I see the same:

In [1]: r
%r                %rundemo          repr
%rehashx          raise             return
%reload_ext       range             reversed
%rep              raw_input         rich_ipython_widget.pyc
%reset            re                rm
%reset_selective  reduce            rmdir
%run              reload            round

In [1]: res
%reset           %reset_selective

In [1]: %reset

but that's actually fairly similar to what 0.10 gives me:

IPython 0.10.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object'. ?object also works, ?? prints more.

In [1]: r
raise      raw_input  reload     return     rm         round
range      reduce     repr       reversed   rmdir

In [1]: %reset

I don't find it to be a big bother, what part of that behavior don't you like? The auto-prepending of '%'?
I think that would be fairly easy to fix, though it's been around for a while without complaints...

I'm happy to look at improving the behavior, I just want to make sure I understand what direction you'd like to see it improved in.

Regards,

f

From jorgen.stenarson at bostream.nu Tue Aug 24 13:36:51 2010
From: jorgen.stenarson at bostream.nu (Jörgen Stenarson)
Date: Tue, 24 Aug 2010 19:36:51 +0200
Subject: [IPython-dev] Tab-completion in master
In-Reply-To:
References: <4C73FD24.70807@bostream.nu>
Message-ID: <4C740333.1010809@bostream.nu>

Fernando Perez wrote 2010-08-24 19:18:
> On Tue, Aug 24, 2010 at 10:11 AM, Jörgen Stenarson
> wrote:
>> Hi,
>>
>> I have a problem with tab-completion in the master branch when running
>> on windows with pyreadline. If there are both magics and regular
>> functions matching I get a list of matches but the line I started with
>> is removed, if there is only magics or regular matches it works as
>> expected. I think in 0.10 or earlier magics were only matched with an
>> explicit % at the beginning of the line.
>>
>> example:
>> In [1]: r
>> %reset           reversed         raw_input        %rehashx
>> %rep             reduce           reload           round
>> return           repr             range            README.txt
>> raise            %run             ren              %r
>> %reload_ext      %reset_selective rmdir
>>
>> In [1]:
>>
>> with longer string
>>
>> In [1]: res
>> %reset           %reset_selective
>>
>> In [1]: %reset
>>
>>
>> Is the behaviour the same on Linux?
>
> Yes, I see the same:
>
> In [1]: r
> %r                %rundemo          repr
> %rehashx          raise             return
> %reload_ext       range             reversed
> %rep              raw_input         rich_ipython_widget.pyc
> %reset            re                rm
> %reset_selective  reduce            rmdir
> %run              reload            round
>
> In [1]: res
> %reset           %reset_selective
>
> In [1]: %reset
>
> but that's actually fairly similar to what 0.10 gives me:
>
> IPython 0.10.1 -- An enhanced Interactive Python.
> ?         -> Introduction and overview of IPython's features.
> %quickref -> Quick reference.
> help      -> Python's own help system.
> object?   -> Details about 'object'. ?object also works, ??
prints more.
>
> In [1]: r
> raise      raw_input  reload     return     rm         round
> range      reduce     repr       reversed   rmdir
>
> In [1]: %reset
>
>
> I don't find it to be a big bother, what part of that behavior don't
> you like? The auto-prepending of '%'? I think that would be fairly
> easy to fix, though it's been around for a while without complaints...
>
> I'm happy to look at improving the behavior, I just want to make sure
> I understand what direction you'd like to see it improved in.
>
> Regards,
>
> f
>

The main problem is that, as it works now (at least for me), it actually deletes everything I have written if there are matches both in magics and regular. For example if I try to complete on 'color' when in pylab mode then I get a list of 5 matches, 2 magics and 3 regular, but I get to start all over again and type in color again:

In [1]: color
colorbar    colors      %colors     colormaps   %color_info

In [1]:                          <--Empty line

I find it quite annoying to have to retype things in cases like this.

I think it would be nice to get all matches including magics, but not if the price is that I lose things I have typed.

/Jörgen

From fperez.net at gmail.com Tue Aug 24 14:07:36 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 24 Aug 2010 11:07:36 -0700
Subject: [IPython-dev] Tab-completion in master
In-Reply-To: <4C740333.1010809@bostream.nu>
References: <4C73FD24.70807@bostream.nu> <4C740333.1010809@bostream.nu>
Message-ID:

Hey,

On Tue, Aug 24, 2010 at 10:36 AM, Jörgen Stenarson wrote:
> The main problem is that, as it works now (at least for me), it
> actually deletes everything I have written if there are matches both in
> magics and regular. For example if I try to complete on 'color' when in
> pylab mode then I get a list of 5 matches, 2 magics and 3 regular, but I get
> to start all over again and type in color again:
>
> In [1]: color
> colorbar    colors      %colors     colormaps   %color_info
>
> In [1]:
<--Empty line
>
> I find it quite annoying to have to retype things in cases like this.
>
> I think it would be nice to get all matches including magics, but not if the
> price is that I lose things I have typed.

Ah, I certainly don't see the deletion in Linux:

amirbar[~]> ip -pylab
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
Type "copyright", "credits" or "license" for more information.

IPython 0.11.alpha1.git -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object'. ?object also works, ?? prints more.

Welcome to pylab, a matplotlib-based Python environment [backend: Qt4Agg].
For more information, type 'help(pylab)'.

In [1]: color
colorbar     %color_info  colormaps    colors       %colors

In [1]: color

So in your case, the 'color' typed so far disappears? That's certainly pretty annoying... I wonder why that is happening and why it's different on Windows than Linux, that's quite odd. No idea here so far, sorry. I am working on some of the tab completion stuff now, mostly for the network, but I'll keep an eye out for anything that sticks out. I'm working on the newkernel branch anyway, so nothing I've done should have touched trunk on that front.

Mmh, sorry not to have a better idea right now. But if this is behaving like that on Windows, we *definitely* need to fix it.

f

From fperez.net at gmail.com Tue Aug 24 14:56:32 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 24 Aug 2010 11:56:32 -0700
Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's
In-Reply-To:
References:
Message-ID:

Hi Almar,

On Tue, Aug 24, 2010 at 8:06 AM, Almar Klein wrote:
> I'm developing an IDE for Python (http://code.google.com/p/iep/) that is
> capable of integrating the event loop of several GUI toolkits. On a side
> note, I used much code of IPython as inspiration on how to do that, so
> thanks for that.

Welcome!
It looks like iep has some very nice features. I'm glad IPython has been useful to you, both in code and in inspiration; I see you have put in magics, the ? syntax, etc. Great :)

> I saw in the IPython documentation that IPython users can detect whether
> IPython hijacked the event loop as follows (for wx):
>
> try:
>     from IPython import appstart_wx
>     appstart_wx(app)
> except ImportError:
>     app.MainLoop()
>
> A very nifty feature indeed. However, building further on this, wouldn't it
> be nice if people could perform this trick regardless of in which IDE or
> shell the code is running? Therefore I propose to insert an object in the
> GUI's module to indicate that the GUI event loop does not need to be
> entered. I currently use for my IDE:
>
> import wx
> if not hasattr(wx, '_integratedEventLoop'):
>     app = wx.PySimpleApp()
>     app.MainLoop()
>
> Currently, _integratedEventLoop is a string with the value 'IEP', indicating
> who hijacked the main loop. I'm not sure what IPython's appstart_* function
> does, but the inserted object might just as well be a function that needs to
> be called (using the app instance as an argument, but how to call it for
> fltk or gtk then?).
>
> I'm interested to know what you think of this idea.
>   Almar
>
> PS: for PyQt4, I would propose inserting the object both in the PyQt4 and
> PyQt4.QtGui namespaces.

This certainly sounds like a viable idea: it's basically having a simple convention for how third-party code can detect whether the main GUI loop is already controlled.

I should note that *right now* Brian is re-working our GUI support, because for the new 2-process work we can't hook into PyOS_InputHook anymore (since the kernel isn't running in a terminal that reads user input anymore, but instead listening on a network port).
I'm working on a different part of the code right now so he may pitch in later, but this is just to say that the details of how we do things in the new zmq-based 2 process code may differ somewhat. But regardless of what changes we end up having in console vs. network, I do think that standardizing a 'protocol' for applications that can expose an interactive event loop to sync with user's gui code is definitely what we want to have. Once a clean solution is in place, matplotlib, chaco and friends can be adjusted to a common standard and work with ipython, iep, etc. in a single shot. On a different topic: I downloaded iep's hg tip to have a look, but I realized that your code is GPL, so I preferred not to go much deeper into it. I would like to at least ask that you consider releasing your code with a license that makes it easier to share code between iep and ipython, numpy, matplotlib, etc. You mention how code and ideas in ipython have benefitted you in various places, and I think that's great. However, by building a GPL code, you are in fact creating an asymmetric relationship: you can use our code and ideas, but we can't use yours. IPython, numpy, matplotlib, scipy, mayavi, chaco and all the other scientific python tools you benefit from daily are all released under the BSD license (like Python itself), which makes it very easy to share code across all of them. But a single (small or large) application that is GPL in this ecosystem becomes a one-way street: that project can use all the others, but it doesn't give anything back. I obviously respect your decision to release your code as GPL, it is your legal right to do so. I would only ask that you consider how the hundreds of thousands of lines of code combined in ipython, mpl, numpy, scipy, etc (and the time this community has contributed to create and maintain them) have benefitted you when working and creating IEP, and how you'd like to participate in this community as a fellow contributor. 
We've built a great community of projects that all share back and forth with each other; it would be great if IEP was a new member of this same community instead of only taking from it.

All the best,

f

From almar.klein at gmail.com Tue Aug 24 16:38:10 2010
From: almar.klein at gmail.com (Almar Klein)
Date: Tue, 24 Aug 2010 22:38:10 +0200
Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's
In-Reply-To:
References:
Message-ID:

> This certainly sounds like a viable idea: it's basically having a
> simple convention for how third-party code can detect whether the
> main GUI loop is already controlled.
>
> I should note that *right now* Brian is re-working our GUI support,
> because for the new 2-process work we can't hook into PyOS_InputHook
> anymore (since the kernel isn't running in a terminal that reads user
> input anymore, but instead listening on a network port). I'm working
> on a different part of the code right now so he may pitch in later,
> but this is just to say that the details of how we do things in the
> new zmq-based 2 process code may differ somewhat.

Interesting. IEP runs its interpreter in a different process also. You (or Brian) might be interested in the channels module which IEP uses for communication (via a socket, full Unicode support). You'd be happy to know I chose to license it separately as BSD, since I thought it might be useful for other projects.
http://code.google.com/p/iep/wiki/Channels

> But regardless of what changes we end up having in console vs.
> network, I do think that standardizing a 'protocol' for applications
> that can expose an interactive event loop to sync with user's gui code
> is definitely what we want to have. Once a clean solution is in
> place, matplotlib, chaco and friends can be adjusted to a common
> standard and work with ipython, iep, etc. in a single shot.

Great to hear you're interested.
As soon as things fall into place for IPython, we should get in contact and discuss how we want to do that. On a different topic: I downloaded iep's hg tip to have a look, but I > realized that your code is GPL, so I preferred not to go much deeper > into it. I would like to at least ask that you consider releasing > your code with a license that makes it easier to share code between > iep and ipython, numpy, matplotlib, etc. You mention how code and > ideas in ipython have benefitted you in various places, and I think > that's great. However, by building a GPL code, you are in fact > creating an asymmetric relationship: you can use our code and ideas, > but we can't use yours. IPython, numpy, matplotlib, scipy, mayavi, > chaco and all the other scientific python tools you benefit from daily > are all released under the BSD license (like Python itself), which > makes it very easy to share code across all of them. But a single > (small or large) application that is GPL in this ecosystem becomes a > one-way street: that project can use all the others, but it doesn't > give anything back. > > I obviously respect your decision to release your code as GPL, it is > your legal right to do so. I would only ask that you consider how the > hundreds of thousands of lines of code combined in ipython, mpl, > numpy, scipy, etc (and the time this community has contributed to > create and maintain them) have benefitted you when working and > creating IEP, and how you'd like to participate in this community as a > fellow contributor. We've built a great community of projects that > all share back and forth with each other, it would be great if IEP was > a new member of this same community instead of only taking from it. > You bring forward compelling arguments. I will seriously reconsider the license. I find this license landscape quite difficult to comprehend sometimes. I mean, GPL has it going for it that it protects the code from being used commercially, which is good right? 
At least if I should believe Richard Stallman :) In a landscape dominated by GPL code this would make sense, since projects would be able to borrow from each other. However, you're right: in the Python landscape BSD is the norm, which means a GPL project would not "fit in". Please note that it's not my intention to only "take", or I would not have released IEP in the first place. The only problem is that other projects cannot easily borrow code from IEP if they're not GPL itself. I'll need to give this some thought. Cheers, Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue Aug 24 18:08:52 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 24 Aug 2010 15:08:52 -0700 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: Hi Almar, On Tue, Aug 24, 2010 at 1:38 PM, Almar Klein wrote: > Interesting. IEP runs its interpreter in a different processes also. You (or > Brian) might be interested in the channels module which IEP uses for > communication (via a socket, full Unicode support). You'd be happy to know I > choose to license it separately as BSD, since I thought it might be useful > for other projects. > http://code.google.com/p/iep/wiki/Channels Cool! We might find ways of making our APIs more compatible with that, though for the implementation we're pretty sure we're going to stick with zeromq, at least for a while: zeromq manages the messaging 100% in C++ without any python dependencies, which means that the messaging layer can continue to manage data even when engines are busy running C extension code. For us, that's an important feature. Zeromq is being developed by a really strong team with years of experience in high-performance networking, and it's an entire messaging architecture that we do benefit from at multiple points (witness Min's recent work). 
But it seems like channels offers some of the same basic ideas in pure python, so it would be cool if we could find a common API so that we could have a non-zmq multiprocess version (even if it had a few limitations), using channels.py, and the full zmq-based one for the rest. So many thanks for pointing this out, it could be really nice to have a pure-python fallback mode, so that even the multi-process setups could be run just on top of the stdlib, even if losing a little bit of speed and robustness compared to zmq. Interesting... A few minor notes: - typo in the site, it reads 'DSB' license - Since this is new code, may I suggest you use PEP-8 naming conventions? While in places like code that inherits from Wx or Qt one has no option but following its own naming scheme, these days most python code has standardized on PEP-8 naming style (ClassNames and functions_or_methods). It would be good to see new code (especially code landing for py3) arriving in a consistent style with this. >> But regardless of what changes we end up having in console vs. >> network, I do think that standardizing a 'protocol' for applications >> that can expose an interactive event loop to sync with user's gui code >> is definitely what we want to have. Once a clean solution is in >> place, matplotlib, chaco and friends can be adjusted to a common >> standard and work with ipython, iep, etc. in a single shot. > > Great to hear you're interested. As soon as things fall into place for > IPython, we should get in contact and discuss how we want to do that. Yup! >> On a different topic: I downloaded iep's hg tip to have a look, but I >> realized that your code is GPL, so I preferred not to go much deeper >> into it. I would like to at least ask that you consider releasing >> your code with a license that makes it easier to share code between >> iep and ipython, numpy, matplotlib, etc. 
You mention how code and >> ideas in ipython have benefitted you in various places, and I think >> that's great. However, by building a GPL code, you are in fact >> creating an asymmetric relationship: you can use our code and ideas, >> but we can't use yours. IPython, numpy, matplotlib, scipy, mayavi, >> chaco and all the other scientific python tools you benefit from daily >> are all released under the BSD license (like Python itself), which >> makes it very easy to share code across all of them. But a single >> (small or large) application that is GPL in this ecosystem becomes a >> one-way street: that project can use all the others, but it doesn't >> give anything back. >> >> I obviously respect your decision to release your code as GPL, it is >> your legal right to do so. I would only ask that you consider how the >> hundreds of thousands of lines of code combined in ipython, mpl, >> numpy, scipy, etc (and the time this community has contributed to >> create and maintain them) have benefitted you when working and >> creating IEP, and how you'd like to participate in this community as a >> fellow contributor. We've built a great community of projects that >> all share back and forth with each other, it would be great if IEP was >> a new member of this same community instead of only taking from it. > > You bring forward compelling arguments. I will seriously reconsider the > license. > > I find this license landscape quite difficult to comprehend sometimes. I > mean, GPL has it going for it that it protects the code from being used > commercially, which is good right? At least if I should believe Richard > Stallman :) In a landscape dominated by GPL code this would make sense, > since projects would be able to borrow from each other. However, you're > right: in the Python landscape BSD is the norm, which means a GPL project > would not "fit in". Many thanks for giving it some thought. 
It is indeed a matter of 'ecosystems': in the python one, BSD is a very natural fit and GPL projects actually create islands with one-way flow of code. I'll be happy to discuss it further if you have any other questions or ideas. > Please note that it's not my intention to only "take", or I would not have > released IEP in the first place. The only problem is that other projects > cannot easily borrow code from IEP if they're not GPL itself. I'll need to > give this some thought. Certainly, and I apologize if the tone of that last comment of mine wasn't quite right, I only realized it after re-reading it. I certainly appreciate your contribution, and would quite possibly use IEP (as a *user*) regardless of the GPL/BSD issue. Users can and will benefit from your contribution, and for that alone you are already to be thanked. My comment had a narrow scope: regarding the 'taking' of *code* being only one way. I didn't mean to imply a selfish or unethical attitude on your part, and sorry if my wording wasn't the best. Regards, f From fperez.net at gmail.com Wed Aug 25 01:02:22 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 24 Aug 2010 22:02:22 -0700 Subject: [IPython-dev] Update: tab completion and better debugging in newkernel Message-ID: Hi folks, [ mainly for Evan with our time difference ] I've merged with your changes and pushed to the newkernel branch. The code now has tab completion support on several things, I'm working on fixing the inputsplitter problems you saw. Cheers, f From fperez.net at gmail.com Wed Aug 25 02:50:55 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 24 Aug 2010 23:50:55 -0700 Subject: [IPython-dev] Update: tab completion and better debugging in newkernel In-Reply-To: References: Message-ID: On Tue, Aug 24, 2010 at 10:02 PM, Fernando Perez wrote: > > I've merged with your changes and pushed to the newkernel branch. 
The > code now has tab completion support on several things, I'm working on > fixing the inputsplitter problems you saw. I've also fixed the multiline inputsplitter problem you saw, adding a test in the process. It was indeed a bug only visible when the input mode was set for blocks. By the way, I changed 'append' and 'replace' to read 'line' and 'block', which are far more intuitive and descriptive of their true purpose. So please merge with upstream/newkernel before getting started tomorrow. Regards, f From almar.klein at gmail.com Wed Aug 25 04:10:44 2010 From: almar.klein at gmail.com (Almar Klein) Date: Wed, 25 Aug 2010 10:10:44 +0200 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: Hi Fernando, On 25 August 2010 00:08, Fernando Perez wrote: > Hi Almar, > > On Tue, Aug 24, 2010 at 1:38 PM, Almar Klein > wrote: > > Interesting. IEP runs its interpreter in a different processes also. You > (or > > Brian) might be interested in the channels module which IEP uses for > > communication (via a socket, full Unicode support). You'd be happy to > know I > > choose to license it separately as BSD, since I thought it might be > useful > > for other projects. > > http://code.google.com/p/iep/wiki/Channels > > Cool! We might find ways of making our APIs more compatible with > that, though for the implementation we're pretty sure we're going to > stick with zeromq, at least for a while: zeromq manages the messaging > 100% in C++ without any python dependencies, which means that the > messaging layer can continue to manage data even when engines are busy > running C extension code. For us, that's an important feature. > Zeromq is being developed by a really strong team with years of > experience in high-performance networking, and it's an entire > messaging architecture that we do benefit from at multiple points > (witness Min's recent work). 
> > But it seems like channels offers some of the same basic ideas in pure > python, so it would be cool if we could find a common API so that we > could have a non-zmq multiprocess version (even if it had a few > limitations), using channels.py, and the full zmq-based one for the > rest. > > So many thanks for pointing this out, it could be really nice to have > a pure-python fallback mode, so that even the multi-process setups > could be run just on top of the stdlib, even if losing a little bit of > speed and robustness compared to zmq. Interesting... > This zmq looks interesting indeed. I should take a look at it in the future. A common API, that's an interesting idea. We might even cooperate on creating a package specifically for this kind of inter process communication, that would use zmq if it can and falls back to pure Python otherwise. Thinking of wilder ideas, it might even be possible to share a common interpreter (with which I mean the code running in the second process). Such that only the way of controlling it is different. Whether one uses IPython or IEP, under the hood it's the same thing. There are of course some fundamental differences between IEP and IPython (for example IEP needs to be able to run a selection of code), but who knows? - typo in the site, it reads 'DSB' license > Woops, thanks for noticing. - Since this is new code, may I suggest you use PEP-8 naming > conventions? While in places like code that inherits from Wx or Qt > one has no option but following its own naming scheme, these days most > python code has standardized on PEP-8 naming style (ClassNames and > functions_or_methods). It would be good to see new code (especially > code landing for py3) arriving in a consistent style with this. > You're right. I will change the names today. > >> On a different topic: I downloaded iep's hg tip to have a look, but I > >> realized that your code is GPL, so I preferred not to go much deeper > >> into it. 
I would like to at least ask that you consider releasing > >> your code with a license that makes it easier to share code between > >> iep and ipython, numpy, matplotlib, etc. You mention how code and > >> ideas in ipython have benefitted you in various places, and I think > >> that's great. However, by building a GPL code, you are in fact > >> creating an asymmetric relationship: you can use our code and ideas, > >> but we can't use yours. IPython, numpy, matplotlib, scipy, mayavi, > >> chaco and all the other scientific python tools you benefit from daily > >> are all released under the BSD license (like Python itself), which > >> makes it very easy to share code across all of them. But a single > >> (small or large) application that is GPL in this ecosystem becomes a > >> one-way street: that project can use all the others, but it doesn't > >> give anything back. > >> > >> I obviously respect your decision to release your code as GPL, it is > >> your legal right to do so. I would only ask that you consider how the > >> hundreds of thousands of lines of code combined in ipython, mpl, > >> numpy, scipy, etc (and the time this community has contributed to > >> create and maintain them) have benefitted you when working and > >> creating IEP, and how you'd like to participate in this community as a > >> fellow contributor. We've built a great community of projects that > >> all share back and forth with each other, it would be great if IEP was > >> a new member of this same community instead of only taking from it. > > > > You bring forward compelling arguments. I will seriously reconsider the > > license. > > > > I find this license landscape quite difficult to comprehend sometimes. I > > mean, GPL has it going for it that it protects the code from being used > > commercially, which is good right? 
At least if I should believe Richard > > Stallman :) In a landscape dominated by GPL code this would make sense, > > since projects would be able to borrow from each other. However, you're > > right: in the Python landscape BSD is the norm, which means a GPL project > > would not "fit in". > > Many thanks for giving it some thought. It is indeed a matter of > 'ecosystems': in the python one, BSD is a very natural fit and GPL > projects actually create islands with one-way flow of code. I'll be > happy to discuss it further if you have any other questions or ideas. > Well, I have one question: can I use the BSD license even though IEP uses PyQt (which is GPL)? > Please note that it's not my intention to only "take", or I would not have > > released IEP in the first place. The only problem is that other projects > > cannot easily borrow code from IEP if they're not GPL itself. I'll need > to > > give this some thought. > > Certainly, and I apologize if the tone of that last comment of mine > wasn't quite right, I only realized it after re-reading it. I > certainly appreciate your contribution, and would quite possibly use > IEP (as a *user*) regardless of the GPL/BSD issue. Users can and will > benefit from your contribution, and for that alone you are already to > be thanked. My comment had a narrow scope: regarding the 'taking' of > *code* being only one way. I didn't mean to imply a selfish or > unethical attitude on your part, and sorry if my wording wasn't the > best. > Well, your words had a sharp edge to it, but I understood what you meant. No hard feelings :) Almar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsdale24 at gmail.com Wed Aug 25 12:54:10 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Wed, 25 Aug 2010 12:54:10 -0400 Subject: [IPython-dev] question about multiprocessing Message-ID: Against Robert Kern's characteristically sage and eloquently communicated advice, I contributed a feature to h5py which, at import time, attempts to determine if the import is occurring in an IPython session and, if so, registers a custom completer. I think Robert's exact response was: "Eww." It came back and bit me today, as I was starting to port some parallel processing code from parallelpython to the multiprocessing package. A simple test script would raise a PicklingError if the master development branch of ipython was installed. Part of the issue was a change in behavior of IPython.core.ipapi.get(): it used to return None if there was no global instance of InteractiveShell, now it creates one if it doesn't exist. That is easy enough to deal with, we can call IPython.core.iplib.InteractiveShell.initialized() instead. Here is a simple script that reproduces the error without importing h5py. Note that the ipython import and call to get() are lame, and the script will run without errors if those two lines are commented out.

from multiprocessing import Pool
import IPython.core.ipapi as ip

ip.get()

def update(i):
    print i

def f(i):
    return i*i

if __name__ == '__main__':
    pool = Pool()
    for i in range(10):
        pool.apply_async(f, [i], callback=update)
    pool.close()
    pool.join()

I don't understand the issue here. Maybe it's a situation that should never crop up in the real world. But in case it is important, I thought I should bring it to the devs' attention. 
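A note on the mechanism (an illustrative sketch, not part of Darren's script): multiprocessing ships each task to the worker pool by pickling the callable and its arguments, so any unpicklable state that gets dragged along raises exactly this kind of error. The thread lock below is a hypothetical stand-in for whatever unpicklable machinery the InteractiveShell instance introduces:

```python
import pickle
import threading

# Plain data round-trips through pickle fine, which is why the script
# works when the two IPython lines are commented out.
payload = pickle.dumps([1, 2, 3])
assert pickle.loads(payload) == [1, 2, 3]

# multiprocessing pickles everything it ships to a worker, so an object
# holding unpicklable state (a thread lock here, as a stand-in) fails.
class HoldsLock(object):
    def __init__(self):
        self.lock = threading.Lock()  # locks cannot be pickled

try:
    pickle.dumps(HoldsLock())
    raised = False
except (TypeError, AttributeError, pickle.PicklingError):
    raised = True
assert raised
```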
Cheers, Darren From jorgen.stenarson at bostream.nu Wed Aug 25 13:00:40 2010 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Wed, 25 Aug 2010 19:00:40 +0200 Subject: [IPython-dev] Tab-completion in master In-Reply-To: References: <4C73FD24.70807@bostream.nu> <4C740333.1010809@bostream.nu> Message-ID: <4C754C38.5050901@bostream.nu> Fernando Perez skrev 2010-08-24 20:07: > Hey, > > On Tue, Aug 24, 2010 at 10:36 AM, Jörgen Stenarson > wrote: >> The main problem, as it works now (at least for me), is that it >> actually deletes everything I have written if there are matches both in >> magics and regular. For example if I try to complete on 'color' when in >> pylab mode then I get a list of 5 matches 2 magics and 3 regular but I get >> to start all over again and type in color again: >> >> In [1]: color >> colorbar colors %colors colormaps %color_info >> >> In [1]:<--Empty line >> >> I find it quite annoying to have to retype things in cases like this. >> >> I think it would be nice to get all matches including magics but not if the >> price is that I lose things I have typed. > > Ah, I certainly don't see the deletion in Linux: > > amirbar[~]> ip -pylab > Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) > Type "copyright", "credits" or "license" for more information. > > IPython 0.11.alpha1.git -- An enhanced Interactive Python. > ? -> Introduction and overview of IPython's features. > %quickref -> Quick reference. > help -> Python's own help system. > object? -> Details about 'object'. ?object also works, ?? prints more. > > Welcome to pylab, a matplotlib-based Python environment [backend: Qt4Agg]. > For more information, type 'help(pylab)'. > > In [1]: color > colorbar %color_info colormaps colors %colors > > In [1]: color > > So in your case, the 'color' typed so far disappears? That's > certainly pretty annoying... > > I wonder why that is happening and why it's different on windows than > linux, that's quite odd. 
No idea here so far, sorry. > > I am working on some of the tab completion stuff now, but mostly for > the network, but I'll keep an eye out for anything that sticks out. > I'm working on the newkernel branch anyway, so nothing I've done > should have touched trunk on that front. > > Mmh, sorry not to have a better idea right now. But if this is > behaving like that on windows, we *definitely* need to fix it. > > f > My main problem was a bug in pyreadline when there is no commonprefix among the completions. Now nothing disappears in this case. /Jörgen From fperez.net at gmail.com Wed Aug 25 13:06:07 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Aug 2010 10:06:07 -0700 Subject: [IPython-dev] Tab-completion in master In-Reply-To: <4C754C38.5050901@bostream.nu> References: <4C73FD24.70807@bostream.nu> <4C740333.1010809@bostream.nu> <4C754C38.5050901@bostream.nu> Message-ID: On Wed, Aug 25, 2010 at 10:00 AM, Jörgen Stenarson wrote: > My main problem was a bug in pyreadline when there is no commonprefix among > the completions. Now nothing disappears in this case. Ah, OK, good to know :) f From robert.kern at gmail.com Wed Aug 25 14:31:28 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Aug 2010 13:31:28 -0500 Subject: [IPython-dev] question about multiprocessing In-Reply-To: References: Message-ID: On 8/25/10 11:54 AM, Darren Dale wrote: > Against Robert Kern's characteristically sage and eloquently > communicated advice, I contributed a feature to h5py which, at import > time, attempts to determine if the import is occurring in an IPython > session and, if so, registers a custom completer. I think Robert's > exact response was: "Eww." Nelson: Ha ha! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From almar.klein at gmail.com Wed Aug 25 16:08:46 2010 From: almar.klein at gmail.com (Almar Klein) Date: Wed, 25 Aug 2010 22:08:46 +0200 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: On 25 August 2010 10:10, Almar Klein wrote: > Hi Fernando, > > On 25 August 2010 00:08, Fernando Perez wrote: > >> Hi Almar, >> >> On Tue, Aug 24, 2010 at 1:38 PM, Almar Klein >> wrote: >> > Interesting. IEP runs its interpreter in a different processes also. You >> (or >> > Brian) might be interested in the channels module which IEP uses for >> > communication (via a socket, full Unicode support). You'd be happy to >> know I >> > choose to license it separately as BSD, since I thought it might be >> useful >> > for other projects. >> > http://code.google.com/p/iep/wiki/Channels >> >> Cool! We might find ways of making our APIs more compatible with >> that, though for the implementation we're pretty sure we're going to >> stick with zeromq, at least for a while: zeromq manages the messaging >> 100% in C++ without any python dependencies, which means that the >> messaging layer can continue to manage data even when engines are busy >> running C extension code. For us, that's an important feature. >> Zeromq is being developed by a really strong team with years of >> experience in high-performance networking, and it's an entire >> messaging architecture that we do benefit from at multiple points >> (witness Min's recent work). >> >> But it seems like channels offers some of the same basic ideas in pure >> python, so it would be cool if we could find a common API so that we >> could have a non-zmq multiprocess version (even if it had a few >> limitations), using channels.py, and the full zmq-based one for the >> rest. 
>> >> So many thanks for pointing this out, it could be really nice to have >> a pure-python fallback mode, so that even the multi-process setups >> could be run just on top of the stdlib, even if losing a little bit of >> speed and robustness compared to zmq. Interesting... >> > > This zmq looks interesting indeed. I should take a look at it in the > future. > > A common API, that's an interesting idea. We might even cooperate on > creating a package specifically for this kind of inter process > communication, that would use zmq if it can and falls back to pure Python > otherwise. > > Thinking of wilder ideas, it might even be possible to share a common > interpreter (with which I mean the code running in the second process). Such > that only the way of controlling it is different. Whether one uses IPython > or IEP, under the hood it's the same thing. There are of course some > fundamental differences between IEP and IPython (for example IEP needs to be > able to run a selection of code), but who knows? > > > - typo in the site, it reads 'DSB' license >> > > Woops, thanks for noticing. > > > - Since this is new code, may I suggest you use PEP-8 naming >> conventions? While in places like code that inherits from Wx or Qt >> one has no option but following its own naming scheme, these days most >> python code has standardized on PEP-8 naming style (ClassNames and >> functions_or_methods). It would be good to see new code (especially >> code landing for py3) arriving in a consistent style with this. >> > > You're right. I will change the names today. > > > > >> >> On a different topic: I downloaded iep's hg tip to have a look, but I >> >> realized that your code is GPL, so I preferred not to go much deeper >> >> into it. I would like to at least ask that you consider releasing >> >> your code with a license that makes it easier to share code between >> >> iep and ipython, numpy, matplotlib, etc. 
You mention how code and >> >> ideas in ipython have benefitted you in various places, and I think >> >> that's great. However, by building a GPL code, you are in fact >> >> creating an asymmetric relationship: you can use our code and ideas, >> >> but we can't use yours. IPython, numpy, matplotlib, scipy, mayavi, >> >> chaco and all the other scientific python tools you benefit from daily >> >> are all released under the BSD license (like Python itself), which >> >> makes it very easy to share code across all of them. But a single >> >> (small or large) application that is GPL in this ecosystem becomes a >> >> one-way street: that project can use all the others, but it doesn't >> >> give anything back. >> >> >> >> I obviously respect your decision to release your code as GPL, it is >> >> your legal right to do so. I would only ask that you consider how the >> >> hundreds of thousands of lines of code combined in ipython, mpl, >> >> numpy, scipy, etc (and the time this community has contributed to >> >> create and maintain them) have benefitted you when working and >> >> creating IEP, and how you'd like to participate in this community as a >> >> fellow contributor. We've built a great community of projects that >> >> all share back and forth with each other, it would be great if IEP was >> >> a new member of this same community instead of only taking from it. >> > >> > You bring forward compelling arguments. I will seriously reconsider the >> > license. >> > >> > I find this license landscape quite difficult to comprehend sometimes. I >> > mean, GPL has it going for it that it protects the code from being used >> > commercially, which is good right? At least if I should believe Richard >> > Stallman :) In a landscape dominated by GPL code this would make sense, >> > since projects would be able to borrow from each other. However, you're >> > right: in the Python landscape BSD is the norm, which means a GPL >> project >> > would not "fit in". 
>> >> Many thanks for giving it some thought. It is indeed a matter of >> 'ecosystems': in the python one, BSD is a very natural fit and GPL >> projects actually create islands with one-way flow of code. I'll be >> happy to discuss it further if you have any other questions or ideas. >> > > Well, I have one question: can I use the BSD license even though IEP uses > PyQt (which is GPL)? > > > > Please note that it's not my intention to only "take", or I would not >> have >> > released IEP in the first place. The only problem is that other projects >> > cannot easily borrow code from IEP if they're not GPL itself. I'll need >> to >> > give this some thought. >> >> Certainly, and I apologize if the tone of that last comment of mine >> wasn't quite right, I only realized it after re-reading it. I >> certainly appreciate your contribution, and would quite possibly use >> IEP (as a *user*) regardless of the GPL/BSD issue. Users can and will >> benefit from your contribution, and for that alone you are already to >> be thanked. My comment had a narrow scope: regarding the 'taking' of >> *code* being only one way. I didn't mean to imply a selfish or >> unethical attitude on your part, and sorry if my wording wasn't the >> best. >> > > Well, your words had a sharp edge to it, but I understood what you meant. > No hard feelings :) > > Almar > On whether I'd be allowed to use the BSD license, I found this exception to the GPL license, that is used by both Nokia and Riverbank: http://doc.trolltech.com/4.4/license-gpl-exceptions.html As far as I can see, item 2 (at the bottom) covers it already, because IEP links to the precompiled binaries. Almar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Wed Aug 25 16:53:20 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Aug 2010 13:53:20 -0700 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: Hi Almar, On Wed, Aug 25, 2010 at 1:10 AM, Almar Klein wrote: > Hi Fernando, > This zmq looks interesting indeed. I should take a look at it in the future. > > A common API, that's an interesting idea. We might even cooperate on > creating a package specifically for this kind of inter process > communication, that would use zmq if it can and falls back to pure Python > otherwise. > > Thinking of wilder ideas, it might even be possible to share a common > interpreter (with which I mean the code running in the second process). Such > that only the way of controlling it is different. Whether one uses IPython > or IEP, under the hood it's the same thing. There are of course some > fundamental differences between IEP and IPython (for example IEP needs to be > able to run a selection of code), but who knows? Not wild at all, in fact all the recent work from the Google SoC students, as well as the stuff Brian, Evan and I are currently working on, goes precisely in that direction. We're basically specifying an 'IPython protocol' as described here: http://ipython.scipy.org/doc/nightly/html/development/messaging.html That messaging spec fully defines how to interact with an ipython kernel that has all the goodies we see today in the terminal, and any number of frontends can talk to one. The 'newkernel' branch in the ipython repo has a Qt implementation of a client that uses this spec, and it's getting to be pretty functional already. We're doing a ton of work on this over the next few weeks. So yes, these ideas are definitely in line with what is being built in ipython right now. >> - Since this is new code, may I suggest you use PEP-8 naming >> conventions? 
While in places like code that inherits from Wx or Qt >> one has no option but following its own naming scheme, these days most >> python code has standardized on PEP-8 naming style (ClassNames and >> functions_or_methods). It would be good to see new code (especially >> code landing for py3) arriving in a consistent style with this. > > You're right. I will change the names today. Awesome, thanks. If you want a quick reference, our coding and documentation guides may be handy: http://ipython.scipy.org/doc/nightly/html/development/coding_guide.html http://ipython.scipy.org/doc/nightly/html/development/doc_guide.html > Well, your words had a sharp edge to it, but I understood what you meant. No > hard feelings :) Thanks again for not reacting to the edge I added to my language when it wasn't necessary :) All the best, f From fperez.net at gmail.com Wed Aug 25 16:59:34 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Aug 2010 13:59:34 -0700 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: On Wed, Aug 25, 2010 at 1:08 PM, Almar Klein wrote: > On whether I'd be allowed to use the BSD license, I found this exception to > the GPL license, that is used by both Nokia and Riverbank: > http://doc.trolltech.com/4.4/license-gpl-exceptions.html > > As far as I can see, item 2 (at the bottom) covers it already, because IEP > links to the precompiled binaries. > I'm no lawyer by any stretch of the imagination, but it seems to me that you comply with 1A, 1B and 1C, so that you are indeed allowed to use the BSD license (just like ipython or matplotlib do, both of which have small amounts of Qt-using code in them). I read 2 as: if you want to link and distribute *non open-source* applications, you must use the commercial Qt license. 
But again, not a lawyer here :) It's also worth keeping in mind that PySide is slowly but surely improving, so a pure LGPL set of python bindings for Qt is also on the horizon... Regards, f From fperez.net at gmail.com Wed Aug 25 17:09:15 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Aug 2010 14:09:15 -0700 Subject: [IPython-dev] question about multiprocessing In-Reply-To: References: Message-ID: Hey Darren, On Wed, Aug 25, 2010 at 9:54 AM, Darren Dale wrote: > > I don't understand the issue here. Maybe it's a situation that should > never crop up in the real world. But in case it is important, I > thought I should bring it to the devs attention. Thanks! This is indeed a bug we have: http://github.com/ipython/ipython/issues/86 and I've added your example as well so we can track it down. Regards, f From fperez.net at gmail.com Wed Aug 25 18:55:56 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Aug 2010 15:55:56 -0700 Subject: [IPython-dev] Qt frontend idea Message-ID: Hey Evan, how easy would it be to add support for Ctrl-L to clear the widget like a terminal does? This is useful when you want to be able to start typing at the top again with a clean screen. It's particularly good when teaching, and now that we have popup info overlays, being at the top makes it more likely that the popup overlay won't end up below the bottom of the monitor. Just an idea... Cheers, f From epatters at enthought.com Wed Aug 25 23:02:03 2010 From: epatters at enthought.com (Evan Patterson) Date: Wed, 25 Aug 2010 20:02:03 -0700 Subject: [IPython-dev] Qt frontend idea In-Reply-To: References: Message-ID: The functionality for this is actually already there, but it's untested. I'll add a keybinding for it tomorrow and make sure that it works. Also, the call tip popup is supposed to have logic to re-position itself above the input line if it would be below the bottom of the monitor. If this isn't the behavior that you're seeing, let me know. 
Evan On Wed, Aug 25, 2010 at 3:55 PM, Fernando Perez wrote: > Hey Evan, > > how easy would it be to add support for Ctrl-L to clear the widget > like a terminal does? This is useful when you want to be able to > start typing at the top again with a clean screen. It's particularly > good when teaching, and now that we have popup info overlays, being at > the top makes it more likely that the popup overlay won't end up below > the bottom of the monitor. > > Just an idea... > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Aug 25 23:29:03 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Aug 2010 20:29:03 -0700 Subject: [IPython-dev] Qt frontend idea In-Reply-To: References: Message-ID: On Wed, Aug 25, 2010 at 8:02 PM, Evan Patterson wrote: > The functionality for this is actually already there, but it's untested. > I'll add a keybinding for it tomorrow and make sure that it works. Great, thanks! Once it's working, we can implement the %clear magic to return on the payload a message that says something like 'clear screen' for each frontend to do the right thing (it's different on a terminal than in Qt, for example). > Also, the call tip popup is supposed to have logic to re-position itself > above the input line if it would be below the bottom of the monitor. If this > isn't the behavior that you're seeing, let me know. No, it's certainly not working right here. If the terminal is close to the bottom and my input line is at the bottom of the terminal itself, I can hardly read anything from the popup, as most of it is 'below the monitor'. I could show you tomorrow with a teamviewer setup if you want. 
Cheers, f From Fernando.Perez at berkeley.edu Thu Aug 26 02:01:35 2010 From: Fernando.Perez at berkeley.edu (Fernando Perez) Date: Wed, 25 Aug 2010 23:01:35 -0700 Subject: [IPython-dev] GUI support added. In-Reply-To: References: Message-ID: [ Cc-ing the dev list so the power figures below get recorded where Google will find them ] On Wed, Aug 25, 2010 at 22:08, Brian Granger wrote: > I just pushed GUI support for Qt, Tk and Wx into ipython/newkernel. I > think we are doing pretty good overall with the GUI support for now. > We just need lots of testing. I have tried many of the matplotlib > examples and most of them work fine. Evan, if you can try some big > trait apps, that would be great. We should also try some Mayavi > examples as well. Right now I have tuned the polling time on the GUI > timers so that the CPU usage is below 1% for the kernel. This is > about what the frontend itself is as well. This is fantastic, great job! As I mentioned before, CPU load isn't the only metric we need to look at, the key one is the number of CPU wakeups-from-idle per second induced by an app, that's what kills battery life. A linux laptop running on battery (you don't get this info on AC power) has the 'powertop' utility written by Intel to show who's keeping the CPU awake. Some numbers I've seen from quick testing on my new laptop (core i5 ultra low voltage, running in 'powersave' mode): - plain python shell: doesn't even register in powertop. - IPython 0.10.1, no pylab/thread support: same - IPython 0.10.1, with pylab using qt4 backend: same - IPython 0.10.1, with pylab using Wx backend: 10 wakeups per second. - IPython newkernel at the terminal (no zmq), no pylab: doesn't register - IPython newkernel at the terminal (no zmq), with pylab/qt4: same This is *fantastic* news. I'm not sure what changes are in the code that may explain this, but it seems that the one-process one (with pyosinputhook and qt4) is behaving better than I remember it from a while ago.
Maybe it's just my memory, but I seem to recall it showed up more in powertop. Or maybe not, Qt has been OK all along and it's Wx that's the bad guy: - IPython newkernel at the terminal (no zmq), with pylab/wx: bad news: ~50 wakeups per second, the worst offender program in the whole computer, only second to the (linux) kernel itself. Indeed, Wx is bad: with -wthread it already gave ~10 wakeups per second, and with PyOSInputHook it's ~50. Nasty... Basically, Wx is a wakeup hog that will kill any battery. The good news is that in one process, even Qt is very well behaved and gives no detectable power signature. Now, when we run ipythonqt, which brings out two processes, messages flying around and a full qt app, we do eat more power. Here are the numbers (in all cases we have the Qt app for the frontend, zmq, and possibly some gui toolkit active in the kernel): - no pylab: ~37 - pylab tk: same - pylab qt: same - pylab wx: same The good news from this: enabling gui support in the new system has no net power cost. The bad news: even with no gui support, the power signature of the combined qt frontend/zmq communications/2 processes is pretty noticeable. One more reason to keep around the lightweight one-process guy: if you're on a plane trying to get every last ounce of battery out, it's a good option. Similar to how I switch window managers from Gnome to Awesome when I need to maximize battery life, this simply means that we'll have a range of interface options. The fancier ones have a power cost, and the more spartan ones will be very efficient. > Some notes: > > * Wx and Tk work out of the box with the matplotlib in EPD. Great. > * For Qt, we are going to have to patch matplotlib. I am attaching my > patched qt backend. This is just a draft of the patch and we may have > to add additional logic. OK, let's work on this one a bit, and when ready we'll get in touch with MPL.
> * During the process of merging with newkernel I found some things: > - The default color scheme for the crash handler was set to Linux. > I have changed this to LightBG on the Mac so the crash tracebacks are > not invisible. Yes, good call. Sorry I forgot to do that yesterday, I enabled it and never went back to clean it up. > - I ran PyFlakes on some files and found some bugs (ultratb, > entry_point, etc.). These bugs were not discovered because they were > in parts of the code > that are not run usually. Let's make it a habit of running > PyFlakes before any merge. It is amazing the things that it will > catch! Yup, good point! I keep it on my Emacs setup all the time, I just forgot to run it (it's just a keystroke, I don't know why I got out of the habit). Pyflakes is definitely something to run regularly. > - The names rprint/rprinte are great for quick debugging shortcuts. > But these are now showing up in production code. Could we alias them > to raw_print_out and > raw_print_err and use the longer names in production code so 6 > months from now we don't have to go looking up what these functions > do? I am fine keeping the > short names around for quick debugging though. Yup. In fact, I'll rename them just raw_print and raw_print_err, the normal one doesn't really need a separate name.

> # Patch to backend_qt4.py
> # I have changed the _create_qApp function to the following:

def _create_qApp():
    """
    Only one qApp can exist at a time, so check before creating one.
    """
    if QtGui.QApplication.startingUp():
        if DEBUG: print "Starting up QApplication"
        global qApp
        app = QtGui.QApplication.instance()
        if app is None:
            qApp = QtGui.QApplication([" "])
            QtCore.QObject.connect(qApp, QtCore.SIGNAL("lastWindowClosed()"),
                                   qApp, QtCore.SLOT("quit()"))
            # remember that matplotlib created the qApp - will be used by show()
            _create_qApp.qAppCreatedHere = True
        else:
            qApp = app
            _create_qApp.qAppCreatedHere = False

OK, we'll pound on the Qt code a little more until it feels robust. Cheers, and thanks again for the great job! f From epatters at enthought.com Thu Aug 26 10:33:46 2010 From: epatters at enthought.com (Evan Patterson) Date: Thu, 26 Aug 2010 09:33:46 -0500 Subject: [IPython-dev] Qt frontend idea In-Reply-To: References: Message-ID: On Wed, Aug 25, 2010 at 10:29 PM, Fernando Perez wrote: > On Wed, Aug 25, 2010 at 8:02 PM, Evan Patterson > wrote: > > The functionality for this is actually already there, but it's untested. > > I'll add a keybinding for it tomorrow and make sure that it works. > > Great, thanks! > > Once it's working, we can implement the %clear magic to return on the > payload a message that says something like 'clear screen' for each > frontend to do the right thing (it's different on a terminal than in > Qt, for example). > This is done. Note that Ctrl-L actually clears the screen rather than moving the prompt to the top of the widget. This may not be what we want. > > Also, the call tip popup is supposed to have logic to re-position itself > above the input line if it would be below the bottom of the monitor. If > this > > isn't the behavior that you're seeing, let me know. > > No, it's certainly not working right here. If the terminal is close > to the bottom and my input line is at the bottom of the terminal > itself, I can hardly read anything from the popup, as most of it is 'below the monitor'. > > I could show you tomorrow with a teamviewer setup if you want.
> Now that I actually look at the code, I see that the logic isn't even there, which explains why it doesn't work. I'm pretty sure this was implemented at one point... in any case, I'll get this working soon. > Cheers, > > f > From epatters at enthought.com Thu Aug 26 11:34:55 2010 From: epatters at enthought.com (Evan Patterson) Date: Thu, 26 Aug 2010 10:34:55 -0500 Subject: [IPython-dev] Qt frontend idea In-Reply-To: References: Message-ID: On Thu, Aug 26, 2010 at 9:33 AM, Evan Patterson wrote: > On Wed, Aug 25, 2010 at 10:29 PM, Fernando Perez wrote: >> On Wed, Aug 25, 2010 at 8:02 PM, Evan Patterson >> wrote: >> > The functionality for this is actually already there, but it's untested. >> > I'll add a keybinding for it tomorrow and make sure that it works. >> >> Great, thanks! >> >> Once it's working, we can implement the %clear magic to return on the >> payload a message that says something like 'clear screen' for each >> frontend to do the right thing (it's different on a terminal than in >> Qt, for example). >> > > This is done. Note that Ctrl-L actually clears the screen rather than > moving the prompt to the top of the widget. This may not be what we want. > > >> > Also, the call tip popup is supposed to have logic to re-position itself >> > above the input line if it would be below the bottom of the monitor. If >> this >> > isn't the behavior that you're seeing, let me know. >> >> No, it's certainly not working right here. If the terminal is close >> to the bottom and my input line is at the bottom of the terminal >> itself, I can hardly read anything from the popup, as most of it is >> 'below the monitor'. >> >> I could show you tomorrow with a teamviewer setup if you want. >> > > Now that I actually look at the code, I see that the logic isn't even > there, which explains why it doesn't work. I'm pretty sure this was > implemented at one point...
in any case, I'll get this working soon. > It turns out that code was never there (I don't know where I get these ideas), but it is now. Let me know if you have problems. > > >> Cheers, >> >> f >> > > From fperez.net at gmail.com Thu Aug 26 14:01:41 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 26 Aug 2010 11:01:41 -0700 Subject: [IPython-dev] Qt frontend idea In-Reply-To: References: Message-ID: On Thu, Aug 26, 2010 at 8:34 AM, Evan Patterson wrote: > > It turns out that code was never there (I don't know where I get these ideas), > but it is now. Let me know if you have problems. Works perfectly, thanks! f From ellisonbg at gmail.com Fri Aug 27 00:54:21 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 26 Aug 2010 21:54:21 -0700 Subject: [IPython-dev] GUI support added. In-Reply-To: References: Message-ID: On Wed, Aug 25, 2010 at 11:01 PM, Fernando Perez wrote: > [ Cc-ing the dev list so the power figures below get recorded where > Google will find them ] > > On Wed, Aug 25, 2010 at 22:08, Brian Granger wrote: >> I just pushed GUI support for Qt, Tk and Wx into ipython/newkernel. I >> think we are doing pretty good overall with the GUI support for now. >> We just need lots of testing. I have tried many of the matplotlib >> examples and most of them work fine. Evan, if you can try some big >> trait apps, that would be great. We should also try some Mayavi >> examples as well. Right now I have tuned the polling time on the GUI >> timers so that the CPU usage is below 1% for the kernel. This is >> about what the frontend itself is as well. > > This is fantastic, great job! > > As I mentioned before, CPU load isn't the only metric we need to look > at, the key one is the number of CPU wakeups-from-idle per second > induced by an app, that's what kills battery life.
A linux laptop > running on battery (you don't get this info on AC power) has the > 'powertop' utility written by Intel to show who's keeping the CPU > awake. Some numbers I've seen from quick testing on my new laptop > (core i5 ultra low voltage, running in 'powersave' mode): > > - plain python shell: doesn't even register in powertop. > - IPython 0.10.1, no pylab/thread support: same > - IPython 0.10.1, with pylab using qt4 backend: same > - IPython 0.10.1, with pylab using Wx backend: 10 wakeups per second. > - IPython newkernel at the terminal (no zmq), no pylab: doesn't register > - IPython newkernel at the terminal (no zmq), with pylab/qt4: same This is quite good news. I am glad the Qt stuff looks good. I am not too surprised though because the Qt inputhook does not do the polling that the wx one does. > This is *fantastic* news. I'm not sure what changes are in the code > that may explain this, but it seems that the one-process one (with > pyosinputhook and qt4) is behaving better than I remember it from a > while ago. Maybe it's just my memory, but I seem to recall it showed > up more in powertop. Or maybe not, Qt has been OK all along and it's > Wx that's the bad guy: > > - IPython newkernel at the terminal (no zmq), with pylab/wx: bad news: > ~50 wakeups per second, the worst offender program in the whole > computer, only second to the (linux) kernel itself. > > Indeed, Wx is bad: with -wthread it already gave ~10 wakeups per > second, and with PyOSInputHook it's ~50. Nasty... Basically, Wx is a > wakeup hog that will kill any battery. > > The good news is that in one process, even Qt is very well behaved and > gives no detectable power signature. > > Now, when we run ipythonqt, which brings out two processes, messages > flying around and a full qt app, we do eat more power.
Here are the > numbers (in all cases we have the Qt app for the frontend, zmq, and > possibly some gui toolkit active in the kernel): > > - no pylab: ~37 > - pylab tk: same > - pylab qt: same > - pylab wx: same Fernando, this is great that you looked at these stats. It is really helpful to get an idea of this. But, I would like to know if the issue is from the frontend or the kernel. Is there any chance you could repeat the 2 process tests and get separate stats for the frontend and kernel? I think we may be able to improve the situation, but I first need to know which process to look at. > The good news from this: enabling gui support in the new system has no > net power cost. The bad news: even with no gui support, the power > signature of the combined qt frontend/zmq communications/2 processes > is pretty noticeable. > > One more reason to keep around the lightweight one-process guy: if > you're on a plane trying to get every last ounce of battery out, it's > a good option. Similar to how I switch window managers from Gnome to > Awesome when I need to maximize battery life, this simply means that > we'll have a range of interface options. The fancier ones have a > power cost, and the more spartan ones will be very efficient. > >> Some notes: >> >> * Wx and Tk work out of the box with the matplotlib in EPD. > > Great. > >> * For Qt, we are going to have to patch matplotlib. I am attaching my >> patched qt backend. This is just a draft of the patch and we may have >> to add additional logic. > > OK, let's work on this one a bit, and when ready we'll get in touch with MPL. I submitted a patch tonight for that stuff. >> * During the process of merging with newkernel I found some things: >> - The default color scheme for the crash handler was set to Linux. >> I have changed this to LightBG on the Mac so the crash tracebacks are >> not invisible. > > Yes, good call. Sorry I forgot to do that yesterday, I enabled it and never went back to clean it up.
> >> - I ran PyFlakes on some files and found some bugs (ultratb, >> entry_point, etc.). These bugs were not discovered because they were >> in parts of the code >> that are not run usually. >> Let's make it a habit of running >> PyFlakes before any merge. It is amazing the things that it will >> catch! Yes, it is a pretty nifty tool. > Yup, good point! I keep it on my Emacs setup all the time, I just > forgot to run it (it's just a keystroke, I don't know why I got out of > the habit). Pyflakes is definitely something to run regularly. > >> - The names rprint/rprinte are great for quick debugging shortcuts. >> But these are now showing up in production code. Could we alias them >> to raw_print_out and >> raw_print_err and use the longer names in production code so 6 >> months from now we don't have to go looking up what these functions >> do? I am fine keeping the >> short names around for quick debugging though. > > Yup. In fact, I'll rename them just raw_print and raw_print_err, the > normal one doesn't really need a separate name. Sounds good, thanks.

>> # Patch to backend_qt4.py
>> # I have changed the _create_qApp function to the following:
>
> def _create_qApp():
>     """
>     Only one qApp can exist at a time, so check before creating one.
>     """
>     if QtGui.QApplication.startingUp():
>         if DEBUG: print "Starting up QApplication"
>         global qApp
>         app = QtGui.QApplication.instance()
>         if app is None:
>             qApp = QtGui.QApplication([" "])
>             QtCore.QObject.connect(qApp, QtCore.SIGNAL("lastWindowClosed()"),
>                                    qApp, QtCore.SLOT("quit()"))
>             # remember that matplotlib created the qApp - will be used by show()
>             _create_qApp.qAppCreatedHere = True
>         else:
>             qApp = app
>             _create_qApp.qAppCreatedHere = False
>
> OK, we'll pound on the Qt code a little more until it feels robust. > > Cheers, and thanks again for the great job!
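Stripped of Qt specifics, the guard in the quoted _create_qApp patch is just: reuse the application object if one is already running, otherwise create it and remember that we did. A toolkit-free sketch of that pattern (the App class and create_app helper below are illustrative stand-ins, not Qt or matplotlib code):

```python
# Toolkit-free sketch of the guard in the patched _create_qApp above.
# 'App' is an illustrative stand-in class, not a real Qt object.
class App(object):
    _running = None  # the single application instance, if any

    @classmethod
    def instance(cls):
        return cls._running

    def __init__(self):
        App._running = self


def create_app():
    app = App.instance()
    if app is None:
        # no app yet: create one and note that we created it
        app = App()
        create_app.created_here = True
    else:
        # an app is already running: reuse it
        create_app.created_here = False
    return app


a = create_app()  # first call creates the app
b = create_app()  # second call reuses it
print(a is b)
```

The created_here flag mirrors qAppCreatedHere in the patch: a later show() can use it to decide whether it owns the event loop.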
Cheers, Brian > f > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Fri Aug 27 01:18:03 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 26 Aug 2010 22:18:03 -0700 Subject: [IPython-dev] Subtle ls bug :( Message-ID: Can you both try the following: Just keep typing ls return. Every so often I get:

In [28]: ls
---------------------------------------------------------------------------
IOError                                   Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/6.2/Doc/ in ()

/Users/bgranger/Documents/Computation/IPython/code/ipython/IPython/zmq/zmqshell.pyc in system(self, cmd)
     61         sys.stderr.flush()
     62         p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
---> 63         for line in p.stdout.read().split('\n'):
     64             if len(line) > 0:
     65                 print line

IOError: [Errno 4] Interrupted system call

This only seems to show up when running the qt GUI mode. I think this one will be fun to debug :) Brian -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Fri Aug 27 01:46:54 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 26 Aug 2010 22:46:54 -0700 Subject: [IPython-dev] good news bad news Message-ID: I have been playing more with the Qt Frontend that has GUI support. So far all of the testing I have done recently is with the Qt eventloop enabled in the kernel. Good news: * almost all of the matplotlib examples that I have tried are working. I would say we are over 90% working with mpl examples. Bad news: * I have started to try out tvtk/mayavi examples and most of them fail miserably, either freezing or crashing the kernel entirely. I think an in between thing to try is trait based apps. Evan, do you have some good examples of trait apps we can try in the kernel?
Like we had to modify matplotlib, I expect we will have to patch traits and Mayavi as well. Tomorrow I am going to start shifting gears to working on the zmq stuff so we can heartbeat the kernel. Cheers, Brian -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From gael.varoquaux at normalesup.org Fri Aug 27 02:50:49 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 27 Aug 2010 08:50:49 +0200 Subject: [IPython-dev] good news bad news In-Reply-To: References: Message-ID: <20100827065049.GA17758@phare.normalesup.org> On Thu, Aug 26, 2010 at 10:46:54PM -0700, Brian Granger wrote: > * I have started to try out tvtk/mayavi examples and most of them fail > miserably, either freezing or crashing the kernel entirely. Maybe I am totally out of scope, and I am going to say something stupid, but did you comment out the line starting the event loop (mlab.show, or mayavi.standalone)? Last time I looked, I couldn't find a non-kludgy way of having these commands work with the latest IPython. My 2 cents, Gaël From efiring at hawaii.edu Fri Aug 27 03:20:08 2010 From: efiring at hawaii.edu (Eric Firing) Date: Thu, 26 Aug 2010 21:20:08 -1000 Subject: [IPython-dev] good news bad news In-Reply-To: References: Message-ID: <4C776728.10603@hawaii.edu> On 08/26/2010 07:46 PM, Brian Granger wrote: > I have been playing more with the Qt Frontend that has GUI support. > So far all of the testing I have done recently is with the Qt > eventloop enabled in the kernel. > > Good news: > > * almost all of the matplotlib examples that I have tried are working. > I would say we are over 90% working with mpl examples. Brian, One key bit of recently-attained mpl behavior is that show() is supposed to block if and only if mpl is not in interactive mode, and show() can be called repeatedly in either case. Have you checked whether this is still working?
Eric > * I have started to try out tvtk/mayavi examples and most of them fail > miserably, either freezing or crashing the kernel entirely. > > I think an in between thing to try is trait based apps. Evan, do you > have some good examples of trait apps we can try in the kernel? Like > we had to modify matplotlib, I expect we will have to patch traits and > Mayavi as well. Tomorrow I am going to start shifting gears to > working on the zmq stuff so we can heartbeat the kernel. > > Cheers, > > Brian > From epatters at enthought.com Fri Aug 27 10:07:09 2010 From: epatters at enthought.com (Evan Patterson) Date: Fri, 27 Aug 2010 09:07:09 -0500 Subject: [IPython-dev] Subtle ls bug :( In-Reply-To: References: Message-ID: A summer or two ago I was writing some socket code and ran into precisely this problem. When a Unix process receives a signal and is in the middle of a call that can potentially block forever (reading from a pipe in our case), it will interrupt the kernel-level system call involved. The solution usually employed is to simply restart the call if it is interrupted. I think the following should be sufficient:

import errno

def read_no_interrupt(f):
    while True:
        try:
            return f.read()
        except IOError, err:
            if err.errno != errno.EINTR:
                raise

Evan On Fri, Aug 27, 2010 at 12:18 AM, Brian Granger wrote: > > Can you both try the following: > > Just keep typing ls return. Every so often I get: > > In [28]: ls > --------------------------------------------------------------------------- > IOError                                   Traceback (most recent call last) > /Library/Frameworks/Python.framework/Versions/6.2/Doc/ console> in () > > /Users/bgranger/Documents/Computation/IPython/code/ipython/IPython/zmq/zmqshell.pyc > in system(self, cmd) >      61         sys.stderr.flush() >      62         p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE) > ---> 63         for line in p.stdout.read().split('\n'): >      64             if len(line) > 0: >      65                 print line > > IOError: [Errno 4] Interrupted system call > > This only seems to show up when running the qt GUI mode. I think this one will be fun to debug :) > > Brian > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com From epatters at enthought.com Fri Aug 27 10:17:06 2010 From: epatters at enthought.com (Evan Patterson) Date: Fri, 27 Aug 2010 09:17:06 -0500 Subject: [IPython-dev] good news bad news In-Reply-To: References: Message-ID: On Fri, Aug 27, 2010 at 12:46 AM, Brian Granger wrote: > I have been playing more with the Qt Frontend that has GUI support. > So far all of the testing I have done recently is with the Qt > eventloop enabled in the kernel. > > Good news: > > * almost all of the matplotlib examples that I have tried are working. > I would say we are over 90% working with mpl examples. > * I have started to try out tvtk/mayavi examples and most of them fail > miserably, either freezing or crashing the kernel entirely. > > I think an in between thing to try is trait based apps. Evan, do you > have some good examples of trait apps we can try in the kernel? Like > we had to modify matplotlib, I expect we will have to patch traits and > Mayavi as well. Tomorrow I am going to start shifting gears to > working on the zmq stuff so we can heartbeat the kernel. One Traits application to try is the RST editor, which is included in ETS. It's more than a toy application, but still much lighter than Mayavi. The RST editor is in AppTools; the entry point is enthought.rst.app. Evan
> Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > From gokhansever at gmail.com Fri Aug 27 12:41:17 2010 From: gokhansever at gmail.com (Gökhan Sever) Date: Fri, 27 Aug 2010 11:41:17 -0500 Subject: [IPython-dev] iptest on 0.11.alpha1.git Message-ID: A simple question before showing the test suite results: What is the git equivalent of svn revision number? Is it the commit id @ http://github.com/ipython/ipython ********************************************************************** Test suite completed for system with the following information: IPython version: 0.11.alpha1.git BZR revision : 0 Platform info : os.name -> posix, sys.platform -> linux2 : Linux-2.6.33.6-147.2.4.fc13.i686-i686-with-fedora-13-Goddard Python info : 2.6.4 (r264:75706, Jun 4 2010, 18:20:16) [GCC 4.4.4 20100503 (Red Hat 4.4.4-2)] Tools and libraries available at test time: curses foolscap gobject gtk pexpect twisted wx wx.aui zope.interface Ran 9 test groups in 93.470s Status: ERROR - 1 out of 9 test groups failed.
---------------------------------------- Runner failed: IPython.core You may wish to rerun this one individually, with: /usr/bin/python /home/g/Desktop/python-repo/ipython/IPython/testing/iptest.py IPython.core [g at a testing]$ python iptest.py IPython.core >f2("a b c") >f1("a", "b", "c") >f1(1,2,3) >f2(4) ..............................Out[83]: 'get_ipython().system("true ")\n' Out[85]: 'get_ipython().system("d:/cygwin/top ")\n' Out[86]: 'no change' Out[87]: '"no change"\n' Out[89]: 'get_ipython().system("true")\n' Out[90]: [''] Out[91]: 'get_ipython().magic("sx true")\n' Out[92]: [''] Out[93]: 'get_ipython().magic("sx true")\n' Out[95]: 'get_ipython().magic("lsmagic ")\n' Out[97]: 'get_ipython().magic("lsmagic ")\n' Out[99]: 'get_ipython().system(" true")\n' Out[101]: 'x=1 # what?\n' Out[103]: 'if 1:\n !true\n' Out[105]: 'if 1:\n lsmagic\n' Out[107]: 'if 1:\n an_alias\n' Out[109]: 'if 1:\n get_ipython().system("true")\n' Out[111]: 'if 2:\n get_ipython().magic("lsmagic ")\n' Out[113]: 'if 1:\n get_ipython().system("true ")\n' Out[114]: [''] Out[115]: 'if 1:\n get_ipython().magic("sx true")\n' Out[117]: 'if 1:\n /fun 1 2\n' Out[119]: 'if 1:\n ;fun 1 2\n' Out[121]: 'if 1:\n ,fun 1 2\n' Out[123]: 'if 1:\n ?fun 1 2\n' Out[125]: 'len "abc"\n' >autocallable() Out[126]: 'called' Out[127]: 'autocallable()\n' >list("1", "2", "3") Out[129]: 'list("1", "2", "3")\n' >list("1 2 3") Out[130]: ['1', ' ', '2', ' ', '3'] Out[131]: 'list("1 2 3")\n' >len(range(1,4)) Out[132]: 3 Out[133]: 'len(range(1,4))\n' >list("1", "2", "3") Out[135]: 'list("1", "2", "3")\n' >list("1 2 3") Out[136]: ['1', ' ', '2', ' ', '3'] Out[137]: 'list("1 2 3")\n' >len(range(1,4)) Out[138]: 3 Out[139]: 'len(range(1,4))\n' >len("abc") Out[140]: 3 Out[141]: 'len("abc")\n' >len("abc"); Out[143]: 'len("abc");\n' >len([1,2]) Out[144]: 2 Out[145]: 'len([1,2])\n' Out[146]: True Out[147]: 'call_idx [1]\n' >call_idx(1) Out[148]: True Out[149]: 'call_idx(1)\n' Out[150]: Out[151]: 'len \n' >list("1", "2", "3") 
Out[153]: 'list("1", "2", "3")\n' >list("1 2 3") Out[154]: ['1', ' ', '2', ' ', '3'] Out[155]: 'list("1 2 3")\n' >len(range(1,4)) Out[156]: 3 Out[157]: 'len(range(1,4))\n' >len("abc") Out[158]: 3 Out[159]: 'len("abc")\n' >len("abc"); Out[161]: 'len("abc");\n' >len([1,2]) Out[162]: 2 Out[163]: 'len([1,2])\n' Out[164]: True Out[165]: 'call_idx [1]\n' >call_idx(1) Out[166]: True Out[167]: 'call_idx(1)\n' >len() Out[169]: 'len()\n' .......................................................................................................................S.......# print "bar" # ....>f(1) ...................................................................................F.F.. ====================================================================== FAIL: Test that object's __del__ methods are called on exit. ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/home/g/Desktop/python-repo/ipython/IPython/testing/decorators.py", line 225, in skipper_func return f(*args, **kwargs) File "/home/g/Desktop/python-repo/ipython/IPython/core/tests/test_run.py", line 155, in test_obj_del tt.ipexec_validate(self.fname, 'object A deleted') File "/home/g/Desktop/python-repo/ipython/IPython/testing/tools.py", line 252, in ipexec_validate nt.assert_equals(out.strip(), expected_out.strip()) AssertionError: '\x1b[?1034hobject A deleted' != 'object A deleted' >> raise self.failureException, \ (None or '%r != %r' % ('\x1b[?1034hobject A deleted', 'object A deleted')) ====================================================================== FAIL: IPython.core.tests.test_run.TestMagicRunSimple.test_tclass ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File 
"/home/g/Desktop/python-repo/ipython/IPython/testing/decorators.py", line 225, in skipper_func return f(*args, **kwargs) File "/home/g/Desktop/python-repo/ipython/IPython/core/tests/test_run.py", line 169, in test_tclass tt.ipexec_validate(self.fname, out) File "/home/g/Desktop/python-repo/ipython/IPython/testing/tools.py", line 252, in ipexec_validate nt.assert_equals(out.strip(), expected_out.strip()) AssertionError: "\x1b[?1034hARGV 1-: ['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first" != "ARGV 1-: ['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first" >> raise self.failureException, \ (None or '%r != %r' % ("\x1b[?1034hARGV 1-: ['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first", "ARGV 1-: ['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first")) ---------------------------------------------------------------------- Ran 254 tests in 3.268s FAILED (SKIP=1, failures=2) -- Gökhan From fperez.net at gmail.com Fri Aug 27 13:06:27 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 27 Aug 2010 10:06:27 -0700 Subject: [IPython-dev] iptest on 0.11.alpha1.git In-Reply-To: References: Message-ID: Hey, On Fri, Aug 27, 2010 at 9:41 AM, Gökhan Sever wrote: > A simple question before showing the test suite results: > What is the git equivalent of svn revision number? Is it the commit id > @ http://github.com/ipython/ipython Yes, the commit hash. The first 7 or 8 characters typically suffice, unless you have a gigantic git repo like the linux kernel, which already needs 10 or 11 to ensure uniqueness because it has collisions on the first 9 characters. I know about this test failure, it's something really odd that *only* happens on Fedora. I have no idea why, I've been able to reproduce it with Fedora virtual machines but never on any other system.
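For reference, the '\x1b[?1034h' prefix in the failing assertions above is a terminal control sequence (emitted by some readline/terminfo combinations at startup), so the tests really are passing apart from that noise. A hedged sketch, not IPython's actual fix, of normalizing captured output before comparison:

```python
import re

# Matches ANSI/VT100 CSI escape sequences such as '\x1b[?1034h'.
# Stripping them before comparing captured output would turn these
# false positives into passes without touching the real assertion.
ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;?]*[a-zA-Z]')

def strip_ansi(text):
    """Remove ANSI escape sequences from captured terminal output."""
    return ANSI_ESCAPE.sub('', text)

print(strip_ansi('\x1b[?1034hobject A deleted'))  # -> object A deleted
```

With both sides of the comparison passed through strip_ansi, the two failing Fedora cases above would compare equal.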
We'll try to revisit it later, but it's an old one and I can't dive into a Fedora expedition right now. It may be a fairly tricky bug to understand. It would be great if you want to help out, I just want to give you a fair warning of what may be involved. Cheers, f From robert.kern at gmail.com Fri Aug 27 13:21:52 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Aug 2010 12:21:52 -0500 Subject: [IPython-dev] good news bad news In-Reply-To: References: Message-ID: On 8/27/10 12:46 AM, Brian Granger wrote: > I have been playing more with the Qt Frontend that has GUi support. > So far all of the testing I have done recently is with the Qt > eventloop enabled in the kernel. > > Good news: > > * also all of the matplotlib examples that I have tried are working. > I would say we are over 90% working with mpl examples. > * I have started to try out tvtk/mayavi examples and most of them fail > miserably, either freezing or crashing the kernel entirely. > > I think an in between thing to try is trait based apps. Evan, do you > have some good examples of trait apps we can try in the kernel? from enthought.traits.api import HasTraits, Float class Foo(HasTraits): x = Float() f = Foo() f.edit_traits() > Like > we had to modify matplotlib, I expect we will have to patch traits and > Mayavi as well. Possibly. Both Traits and Pyface (and thus Envisage Workbench apps like Mayavi) do try to check if there is a QApplication already existing before creating one, but we may not be doing exactly the right things. Search for QApplication in enthought.traits.ui.qt4.toolkit and enthought.pyface.ui.qt4.init to see what we do. Are you sure that you have ETS_TOOLKIT=qt4 set? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ellisonbg at gmail.com Fri Aug 27 16:35:34 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 27 Aug 2010 13:35:34 -0700 Subject: [IPython-dev] good news bad news In-Reply-To: References: Message-ID: On Fri, Aug 27, 2010 at 10:21 AM, Robert Kern wrote: > On 8/27/10 12:46 AM, Brian Granger wrote: >> I have been playing more with the Qt Frontend that has GUi support. >> So far all of the testing I have done recently is with the Qt >> eventloop enabled in the kernel. >> >> Good news: >> >> * also all of the matplotlib examples that I have tried are working. >> I would say we are over 90% working with mpl examples. >> * I have started to try out tvtk/mayavi examples and most of them fail >> miserably, either freezing or crashing the kernel entirely. >> >> I think an in between thing to try is trait based apps. ?Evan, do you >> have some good examples of trait apps we can try in the kernel? > > from enthought.traits.api import HasTraits, Float > > class Foo(HasTraits): > ? ? x = Float() > > f = Foo() > f.edit_traits() I will give this a shot. >> Like >> we had to modify matplotlib, I expect we will have to patch traits and >> Mayavi as well. > > Possibly. Both Traits and Pyface (and thus Envisage Workbench apps like Mayavi) > do try to check if there is a QApplication already existing before creating one, > but we may not be doing exactly the right things. Search for QApplication in > enthought.traits.ui.qt4.toolkit and enthought.pyface.ui.qt4.init to see what we do. I looked at this code already and it looks like it is handling the QApplication creation OK, but I need to look a bit more at how it is starting the event loop. > Are you sure that you have ETS_TOOLKIT=qt4 set? Nope, that would probably make a huge difference! Thanks! Brian > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > ?that is made terrible by our own mad attempt to interpret it as though it had > ?an underlying truth." 
> ? -- Umberto Eco > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From almar.klein at gmail.com Fri Aug 27 18:17:34 2010 From: almar.klein at gmail.com (Almar Klein) Date: Sat, 28 Aug 2010 00:17:34 +0200 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: On 25 August 2010 22:59, Fernando Perez wrote: > On Wed, Aug 25, 2010 at 1:08 PM, Almar Klein > wrote: > > On whether I'd be allowed to use the BSD license, I found this exception > to > > the GPL license, that is used by both Nokia and Riverbank: > > http://doc.trolltech.com/4.4/license-gpl-exceptions.html > > > > As for as I can see, item 2 (at the bottom) covers it already, because > IEP > > links to the precompiled binaries. > > > > I'm no lawyerr by any stretch of the imagination, but it seems to me > that you comply with 1A, 1B and 1C, so that you are indeed allowed to > use the BSD license (just like ipython or matplotlib do, both of which > have small amounts of Qt-using code in them). Allrighty. I've thought about this for the past couple of days, and read some information on the internet. I've concluded that you're right and have decided that I will change IEP's license to BSD in the next release, and also for my visualization project . So thank you for bringing this to my attention :) Cheers, Almar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From almar.klein at gmail.com Fri Aug 27 19:10:15 2010 From: almar.klein at gmail.com (Almar Klein) Date: Sat, 28 Aug 2010 01:10:15 +0200 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: Hi, > Thinking of wilder ideas, it might even be possible to share a common > > interpreter (with which I mean the code running in the second process). > Such > > that only the way of controlling it is different. Whether one uses > IPython > > or IEP, under the hood it's the same thing. There are of course some > > fundamental differences between IEP and IPython (for example IEP needs to > be > > able to run a selection of code), but who knows? > > Not wild at all, in fact all the recent work from the Google SoC > students, as well as the stuff Brian, Evan and I are currently working > on, goes precisely in that direction. We're basically specifying an > 'IPython protocol' as described here: > > http://ipython.scipy.org/doc/nightly/html/development/messaging.html > > That messaging spec fully defines how to interact with an ipython > kernel that has all the goodies we see today in the terminal, and any > number of frontends can talk to one. The 'newkernel' branch in the > ipython repo has a Qt implementation of a client that uses this spec, > and it's getting to be pretty functional already. We're doing a ton > of work on this over the next few weeks. > I've read some of the documentation for the new stuff you're working on. It all sounds really well thought through, and am looking forwards for the results. I've got a couple of questions though. - I see the possibilities of distributed computing by connecting multiple kernels to a single client. However, I don't get why you would want to connect multiple clients to a single kernel at the same time? - I saw an example in which you're kind of going towards a Mathematica/Sage type of UI. Is this what you're really aiming at, or is this one possible front end? 
I'm asking because IEP has more of a Matlab kind of UI, with an editor from which the user can run code (selected lines or cells: code between two lines starting with two ##'s). Would that be compatible with the kernel you're designing? - About the heartbeat thing to detect whether kernels are still alive. I use a similar concept in the channels module. I actually never realized that this would fail if Python is running extension code. However, I do run Cython code that takes about a minute to run without problems. Is that because it's Cython and the Python interpreter is still involved? I'll do some tests running Cython and C code next week. Since I think it's interesting to see that we've taken rather different approaches to do (more or less) the same thing, I'll share some background on what I do in IEP: I use one Channels instance from the channels.py module, which means all communication goes over one socket. However, I can use as many as 128 different channels each way. Instead of a messaging format, I use a channel for each task. By the way, I'm not saying my method is better; yours is probably more "scalable", mine requires no/little message processing. So from the kernel's perspective, I have one receiving channel for stdin, two sending for stdout and stderr, one receiving for control (mostly debugging at the moment) and one sending for status messages (whether busy/ready, and debug info). Lastly there's one receiving and one sending channel for introspection requests and responses. To receive code, sys.stdin is replaced with a receivingChannel from channels.py, which is non-blocking. The readline() method (which is what raw_input() uses) *is* blocking, so that raw_input() behaves appropriately. The remote process runs an interpreter loop. Each iteration the interpreter checks (non-blocking) the stdin for a command to be run. If there is, it does so using almost the same code in code.py. Next (if required) process GUI events.
Next produce prompt if necessary, and send status. In another thread, there is a loop that listens for introspection requests (auto-completion, calltips, docs). This code is all in iepRemote1.py and iepRemote2.py if you're interested in the details. Cheers, Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sat Aug 28 03:29:53 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 28 Aug 2010 00:29:53 -0700 Subject: [IPython-dev] !commands in the zmq model Message-ID: Howdy, I've spent hours on getting a better implementation of !cmd in the multiprocess model, that: 1. shows output as it happens 2. is cleanly interruptible. We can interrupt subprocesses fine with our current (subprocess.Popen-based) implementation, but we can't get their output cleanly as it happens. So in the usual: for i in range(200): print i, sys.stdout.flush() time.sleep(0.01) print 'done!' We only see *all* the numbers at the very end. I haven't found any way around this with subprocess, and neither did our old ipythonx implementation done by Gael; as best as I understand popen it simply can't be done: in pipe mode, the C stdio library does block-buffered io on all process execution and there's no clean way I can find, to read from a pipe with a timeout. But the good news is that I think I have an implementation that will work on *nix (linux/mac), using pexpect. On Windows we'll have to fall back to subprocess.popen(), with its limitations. In addition, the pexpect implementation gives us the benefit of correctly formatted 'ls' output, reverting the surprise Brian had initially from ls being only on one column. Pexpect creates a proper pseudo-tty, and knows how to read from it very intelligently. 
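The pseudo-tty mechanism Fernando describes, allocating a pty and reading the master side with a timeout instead of blocking on a pipe, can be sketched with only the standard library. This is a Unix-only illustration of what pexpect wraps, not the code going into IPython, and `run_in_pty` is a made-up name:

```python
import os
import pty
import select
import subprocess

def run_in_pty(argv, timeout=0.05):
    """Run argv attached to a pseudo-tty and collect output as it
    arrives.  Stdlib-only sketch of the mechanism pexpect wraps;
    Unix-only, and the helper name is illustrative."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(argv, stdin=slave, stdout=slave,
                            stderr=slave, close_fds=True)
    os.close(slave)                 # only the child keeps the slave end
    chunks = []
    while True:
        # Unlike a blocking pipe read, select() gives us a timeout,
        # so output can be processed/echoed incrementally.
        ready, _, _ = select.select([master], [], [], timeout)
        if ready:
            try:
                data = os.read(master, 1024)
            except OSError:         # Linux raises EIO at pty EOF
                break
            if not data:
                break
            chunks.append(data)
        elif proc.poll() is not None:
            break                   # child exited, nothing pending
    os.close(master)
    proc.wait()
    return b''.join(chunks)

print(run_in_pty(['echo', 'hello']))
```

Because the child sees a real tty, programs like `ls` also revert to their interactive (columnar) output format, which is the behavior noted above.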
It's a shame it doesn't exist on windows, but it's apparently a very non-trivial task to port it (I remember hearing William Stein several times comment on how good it would be for Sage to have pexpect on Windows, and knowing them, if it was doable they would have already done it). I may not finish this today, I'm too exhausted, but if anyone knows this type of problem well, pitch in. I'm sure I'll make good use of any help... Cheers, f ps - for reference, the current implementation is along the lines of:

from __future__ import print_function

import sys
import pexpect

self = get_ipython()

if 1:
    def system( cmd):
        cmd = self.var_expand(cmd, depth=2).strip()
        sh = '/bin/bash'
        timeout = 0.05 # seconds
        pcmd = '%s -c %r' % (sh, cmd)
        try:
            child = pexpect.run(pcmd, logfile=sys.stdout)

            ## child = pexpect.spawn(sh, ['-c', cmd])
            ## while True:
            ##     res = child.expect([pexpect.TIMEOUT, pexpect.EOF], timeout)
            ##     if res==0:
            ##         #pass
            ##         print(child.before, end='')
            ##     elif res==1:
            ##         break

        except KeyboardInterrupt:
            print('\nInterrupted command: %r.' % cmd, file=sys.stderr)

            #return child

From tomspur at fedoraproject.org Sat Aug 28 08:21:20 2010 From: tomspur at fedoraproject.org (Thomas Spura) Date: Sat, 28 Aug 2010 14:21:20 +0200 Subject: [IPython-dev] iptest on 0.11.alpha1.git In-Reply-To: References: Message-ID: <20100828142120.18fbef4d@earth> On Fri, 27 Aug 2010 10:06:27 -0700 Fernando Perez wrote: > Hey, > > On Fri, Aug 27, 2010 at 9:41 AM, Gökhan Sever > wrote: > > A simple question before showing the test suite results: > > What is git equivalent of svn revision number? Is it the commit id > > @ http://github.com/ipython/ipython > > Yes, the commit hash. The first 7 or 8 characters typically suffice, > unless you have a gigantic git repo like the linux kernel, which > already needs 10 or 11 to ensure uniqueness because it has collisions > on the first 9 characters.
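The hash-abbreviation behaviour described in the quoted reply can be tried directly. This sketch builds a throwaway repository first so the commands work anywhere; the temp path, identity, and commit message are arbitrary:

```shell
# Create a throwaway repo so the commands below work anywhere.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit"

git rev-parse HEAD            # full 40-character commit hash
git rev-parse --short=8 HEAD  # first 8 chars: the svn-revision equivalent
git log -1 --format=%h        # git's own default abbreviation
```

Note that git lengthens the abbreviation automatically when the requested prefix would be ambiguous, which is exactly the linux-kernel situation mentioned above.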
The commit hash will be noted in the version at the startup automatically after this commit: http://github.com/tomspur/ipython/commit/e2e56f2917d941051e99a893f3e26989b78aaa53 And the branch name will be there after this commit: http://github.com/tomspur/ipython/commit/7b5f6ed4abd9308b9fc2a2071a756b0aba3a680b Both are ready for review for inclusion into the master branch: http://github.com/tomspur/ipython/commits/ready_for_merge Thomas From gael.varoquaux at normalesup.org Sat Aug 28 08:51:56 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 28 Aug 2010 14:51:56 +0200 Subject: [IPython-dev] !commands in the zmq model In-Reply-To: References: Message-ID: <20100828125156.GA582@phare.normalesup.org> On Sat, Aug 28, 2010 at 12:29:53AM -0700, Fernando Perez wrote: > We can interrupt subprocesses fine with our current > (subprocess.Popen-based) implementation, but we can't get their output > cleanly as it happens. So in the usual: > for i in range(200): > print i, > sys.stdout.flush() > time.sleep(0.01) > print 'done!' > We only see *all* the numbers at the very end. I haven't found any way > around this with subprocess, and neither did our old ipythonx > implementation done by Gael; as best as I understand popen it simply > can't be done: in pipe mode, the C stdio library does block-buffered > io on all process execution and there's no clean way I can find, to > read from a pipe with a timeout. Hum, I thought that I had it working. I can't test right now, but I seem to remember I had spend quite a lot of time on that. I am not claiming that my code was terribly good, ( :> "I write code, I look at it a year later, I vomit", Michael A., CACR) but if I remember correctly, there might be a few tricks to try and keep. * For pure Python code, as your example above, the trick was to use what I have called the 'RedirectorOutputTrap' that registers a callback on writing to the sys.stdout/sys.stderr. 
It is implemented in IPython.kernel.core.redirector_output_trap. * For subprocesses, I had to resort to 2 threads, one executing the process, the other polling its stdout (ugly). This logic can be found in IPython.frontend.process.pipedprocess. It also adds a bit of logic to be able to kill the process under Windows, which you might want to keep in mind. I am not sure if this code can be of some use to you or not, but just in case. I believe that it might be one of the few useful things that came out of my work. It actually even seems to have tests :$ Cheers, Gaël From gokhansever at gmail.com Sat Aug 28 12:25:52 2010 From: gokhansever at gmail.com (Gökhan Sever) Date: Sat, 28 Aug 2010 11:25:52 -0500 Subject: [IPython-dev] iptest on 0.11.alpha1.git In-Reply-To: References: Message-ID: On Fri, Aug 27, 2010 at 12:06 PM, Fernando Perez wrote: > Hey, > > On Fri, Aug 27, 2010 at 9:41 AM, G?khan Sever > wrote: > > A simple question before showing the test suite results: > > What is git equivalent of svn revision number? Is it the commit id > > @ http://github.com/ipython/ipython > > Yes, the commit hash. The first 7 or 8 characters typically suffice, > unless you have a gigantic git repo like the linux kernel, which > already needs 10 or 11 to ensure uniqueness because it has collisions > on the first 9 characters. > Nice that with this question asked, we've already gotten something implemented showing this information on the main entry, which I find very useful. > > I know about this test failure, it's something really odd that *only* > happens on Fedora. I have no idea why, I've been able to reproduce > it with Fedora virtual machines but never on any other system. > > We'll try to revisit it later, but it's an old one and I can't dive > into a Fedora expedition right now. It may be a fairly tricky bug to > understand. It would be great if you want to help out, I just want to > give you a fair warning of what may be involved.
With virtual OSes it is very easy to test devel-level software. Overall, don't many buildbots use virtual OSes and arches to test their software? I still think it is a good idea to give a very brief information update to the end-users (even a few lines and a screenshot would suffice) with a recommendation to test these in a virtual OS for the curious like me --this was a partial reply to your (Fernando's) previous response. From the mailing lists for the Linux OS it seems like many use Ubuntu. I know there are a few who use Fedora and Redhat. There is serious support on Fedora for Python (http://fedoraproject.org/wiki/Python_in_Fedora_13). I might not be much help besides reporting my observations to fix those test errors, but probably I could ask the Fedora Python maintainer to provide some insight. Speaking of OSes, I think it might be a good idea to start a simple poll or an actively edited document to list people and their OS choice for their academic/developmental Python work. What do you think? -- Gökhan From ellisonbg at gmail.com Sat Aug 28 13:22:26 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 28 Aug 2010 10:22:26 -0700 Subject: [IPython-dev] !commands in the zmq model In-Reply-To: References: Message-ID: Fernando, This is really great news. The simple implementation that was there covered the basic cases, but left a lot to be desired. On Sat, Aug 28, 2010 at 12:29 AM, Fernando Perez wrote: > Howdy, > > I've spent hours on getting a better implementation of > > !cmd > > in the multiprocess model, that: > > 1. shows output as it happens > 2. is cleanly interruptible. This is really wonderful. Thanks for putting in the effort on this. > We can interrupt subprocesses fine with our current > (subprocess.Popen-based) implementation, but we can't get their output > cleanly as it happens. ?So in the usual: > > ? ?for i in range(200): > ? ? ? ?print i, > ? ? ?
?sys.stdout.flush() > ? ? ? ?time.sleep(0.01) > ? ?print 'done!' > > We only see *all* the numbers at the very end. I haven't found any way > around this with subprocess, and neither did our old ipythonx > implementation done by Gael; as best as I understand popen it simply > can't be done: in pipe mode, the C stdio library does block-buffered > io on all process execution and there's no clean way I can find, to > read from a pipe with a timeout. Bummer, I thought it was smarter than that, but oh well. > But the good news is that I think I have an implementation that will > work on *nix (linux/mac), using pexpect. ?On Windows we'll have to > fall back to subprocess.popen(), with its limitations. I think this is a very reasonable compromise. Maybe it will motivate some passionate Windows users to look further at a Windows pexpect ;-) > In addition, the pexpect implementation gives us the benefit of > correctly formatted 'ls' output, reverting the surprise Brian had > initially from ls being only on one column. ?Pexpect creates a proper > pseudo-tty, and knows how to read from it very intelligently. This is awesome! > It's a shame it doesn't exist on windows, but it's apparently a very > non-trivial task to port it (I remember hearing William Stein several > times comment on how good it would be for Sage to have pexpect on > Windows, and knowing them, if it was doable they would have already > done it). > > I may not finish this today, I'm too exhausted, but if anyone knows > this type of problem well, pitch in. ?I'm sure I'll make good use of > any help... I am going to be focusing on the GUI stuff and ZMQ/PyZMQ stuff this weekend. I will be around on IRC and working if you want to chat. Cheers, Brian > Cheers, > > f > > ps - for reference, the current implementation is along the lines of: > > from __future__ import print_function > > import sys > import pexpect > > self = get_ipython() > > if 1: > ? ?def system( cmd): > ? ? ? 
?cmd = self.var_expand(cmd, depth=2).strip() > ? ? ? ?sh = '/bin/bash' > ? ? ? ?timeout = 0.05 # seconds > ? ? ? ?pcmd = '%s -c %r' % (sh, cmd) > ? ? ? ?try: > ? ? ? ? ? ?child = pexpect.run(pcmd, logfile=sys.stdout) > > ? ? ? ? ? ?## child = pexpect.spawn(sh, ['-c', cmd]) > ? ? ? ? ? ?## while True: > ? ? ? ? ? ?## ? ? res = child.expect([pexpect.TIMEOUT, pexpect.EOF], timeout) > ? ? ? ? ? ?## ? ? if res==0: > ? ? ? ? ? ?## ? ? ? ? #pass > ? ? ? ? ? ?## ? ? ? ? print(child.before, end='') > ? ? ? ? ? ?## ? ? elif res==1: > ? ? ? ? ? ?## ? ? ? ? break > > ? ? ? ?except KeyboardInterrupt: > ? ? ? ? ? ?print('\nInterrupted command: %r.' % cmd, file=sys.stderr) > > ? ? ? ? ? ?#return child > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Sat Aug 28 15:42:45 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 28 Aug 2010 12:42:45 -0700 Subject: [IPython-dev] Uniform GUI support across matplotlib, ets and ipython Message-ID: Hi all, As you may know, this summer we have been working on a new two process IPython that has a beautiful Qt frontend GUI and a ZMQ based messaging layer between that GUI and the new IPython kernel. Many thanks to Enthought for funding this effort! We are currently in the process of adding GUI event loop integration to the ipython kernel so users can do interactive plotting like they can with the regular ipython. You may also remember that last summer we implemented a new PyOs_InputHook based GUI integration for the regular ipython. This has not been released yet, but all of this will be released in the upcoming 0.11 release. I am emailing everyone because we see that there is a need for all of us to agree on two things: 1. 
How to detect if a GUI application object has been created by someone else. 2. How to detect if a GUI event loop is running. Currently there is code in both ETS and matplotlib that fails to handle these things properly in certain cases. With IPython 0.10, this was not a problem because we used to hijack/monkeypatch the GUI eventloops after we started them. In 0.11, we will no longer be doing that. To address these issues, we have created a standalone module that implements the needed logic: http://github.com/ipython/ipython/blob/newkernel/IPython/lib/guisupport.py This module is heavily commented and introduces a new informal protocol that all of us can use to detect if event loops are running. This informal protocol is inspired by how some of this is handled inside ETS. Our idea is that all projects will simply copy this module into their code and ship it. It is lightweight and does not depend on IPython or other top-level imports. As you will see, we have implemented the logic for wx and qt4; we will need help with other toolkits. An important point is that matplotlib and ets WILL NOT WORK with the upcoming release of IPython unless changes are made to their respective codebases. We consider this a draft and are more than willing to modify the design or approach as appropriate. One thing that we have not thought about yet is how to continue to support 0.10 within this model. The good news amidst all of this is that the quality and stability of the GUI support in IPython is orders of magnitude better than that in the 0.10 series.
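The informal protocol the guisupport.py module describes boils down to two things: reuse an existing application object instead of creating a second one, and have whoever starts the event loop advertise that fact with a flag on the application object. The following is a toolkit-free sketch of that idea; `FakeApp` and these exact function names are illustrative stand-ins (the real module provides qt4 and wx variants):

```python
# Toolkit-free sketch of the informal protocol described above: the
# code that starts an event loop advertises it with a flag on the
# application object.  FakeApp and these exact function names are
# illustrative; guisupport.py has the real qt4/wx implementations.

class FakeApp(object):
    """Stand-in for wx.App / QtGui.QApplication."""
    pass

_app = None

def get_app():
    """Reuse an existing application object; create one only if
    nobody (IPython, matplotlib, ETS, ...) has made one yet."""
    global _app
    if _app is None:
        _app = FakeApp()
    return _app

def is_event_loop_running(app):
    """True only if whoever started the loop set the flag."""
    return getattr(app, '_in_event_loop', False)

def start_event_loop(app):
    """Start the loop unless someone else is already running it."""
    if not is_event_loop_running(app):
        app._in_event_loop = True   # a real toolkit would block here

app = get_app()
start_event_loop(app)
assert get_app() is app             # a second caller reuses the same app
assert is_event_loop_running(app)   # and can detect the running loop
```

The point of the flag is that each cooperating project (IPython, matplotlib, ETS) can both skip creating a duplicate application object and skip starting a second, nested event loop.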
Cheers, Brian PS: If you are curious, here is a bit of background on the issues related to the PyOS_Inputhook stuff: http://mail.scipy.org/pipermail/ipython-dev/2010-July/006330.html From ellisonbg at gmail.com Sat Aug 28 15:46:45 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 28 Aug 2010 12:46:45 -0700 Subject: [IPython-dev] Subtle ls bug :( In-Reply-To: References: Message-ID: Fernando, Will this be fixed with your new pexpect based version, or do we still have an issue on Windows that we need to fix? Cheers, Brian On Fri, Aug 27, 2010 at 7:07 AM, Evan Patterson wrote: > A summer or two ago I was writing some socket code and ran into > precisely this problem. When a Unix process receives a signal and is in > the middle of a call that can potentially block forever (reading from > a pipe in our case), it will interrupt the kernel-level system call > involved. The solution usually employed is to simply restart the call > if it is interrupted. I think the following should be sufficient:
>
> import errno
>
> def read_no_interrupt(f):
>     while True:
>         try:
>             return f.read()
>         except IOError, err:
>             if err.errno != errno.EINTR:
>                 raise
>
> Evan
> On Fri, Aug 27, 2010 at 12:18 AM, Brian Granger wrote: >> >> Can you both try the following: >> >> Just keep typing ls return. ?Every so often I get: >> >> In [28]: ls >> --------------------------------------------------------------------------- >> IOError ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? Traceback (most recent call last) >> /Library/Frameworks/Python.framework/Versions/6.2/Doc/<ipython console> in <module>() >> >> /Users/bgranger/Documents/Computation/IPython/code/ipython/IPython/zmq/zmqshell.pyc >> in system(self, cmd) >> ? ? 61 ? ? ? ? sys.stderr.flush() >> ? ? 62 ? ? ? ? p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE) >> ---> 63 ? ? ? ? for line in p.stdout.read().split('\n'): >> ? ? 64 ? ? ? ? ? ? if len(line) > 0: >> ? ? 65 ? ? ? ? ? ? ? ?
print line >> >> IOError: [Errno 4] Interrupted system call >> >> This only seems to show up when running the qt GUI mode. ?I think this >> one will be fun to debug :) >> >> Brian >> >> -- >> Brian E. Granger, Ph.D. >> Assistant Professor of Physics >> Cal Poly State University, San Luis Obispo >> bgranger at calpoly.edu >> ellisonbg at gmail.com > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From fperez.net at gmail.com Sat Aug 28 15:52:32 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 28 Aug 2010 12:52:32 -0700 Subject: [IPython-dev] Subtle ls bug :( In-Reply-To: References: Message-ID: On Sat, Aug 28, 2010 at 12:46 PM, Brian Granger wrote: > Fernando, > > Will this be fixed with your new pexpect based version, or do we still > have an issue on Windows that we need to fix? I haven't tested on Windows, but since we have no pexpect on Windows, the implementation there will likely be similar to what we have, and likely to need some work. Did Evan's trick help? Cheers, f From ellisonbg at gmail.com Sat Aug 28 17:06:17 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 28 Aug 2010 14:06:17 -0700 Subject: [IPython-dev] good news bad news In-Reply-To: References: Message-ID: On Fri, Aug 27, 2010 at 1:35 PM, Brian Granger wrote: > On Fri, Aug 27, 2010 at 10:21 AM, Robert Kern wrote: >> On 8/27/10 12:46 AM, Brian Granger wrote: >>> I have been playing more with the Qt Frontend that has GUi support. >>> So far all of the testing I have done recently is with the Qt >>> eventloop enabled in the kernel. >>> >>> Good news: >>> >>> * also all of the matplotlib examples that I have tried are working. >>> I would say we are over 90% working with mpl examples. >>> * I have started to try out tvtk/mayavi examples and most of them fail >>> miserably, either freezing or crashing the kernel entirely. 
>>> >>> I think an in between thing to try is trait based apps. ?Evan, do you >>> have some good examples of trait apps we can try in the kernel? >> >> from enthought.traits.api import HasTraits, Float >> >> class Foo(HasTraits): >> ? ? x = Float() >> >> f = Foo() >> f.edit_traits() > > I will give this a shot. This now works. >>> Like >>> we had to modify matplotlib, I expect we will have to patch traits and >>> Mayavi as well. >> >> Possibly. Both Traits and Pyface (and thus Envisage Workbench apps like Mayavi) >> do try to check if there is a QApplication already existing before creating one, >> but we may not be doing exactly the right things. Search for QApplication in >> enthought.traits.ui.qt4.toolkit and enthought.pyface.ui.qt4.init to see what we do. > > I looked at this code already and it looks like it is handling the > QApplication creation OK, but I need to look a bit more at how it is > starting the event loop. > >> Are you sure that you have ETS_TOOLKIT=qt4 set? > > Nope, that would probably make a huge difference! ?Thanks! This did help some issues, but some remain. Brian > Brian > >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless enigma >> ?that is made terrible by our own mad attempt to interpret it as though it had >> ?an underlying truth." >> ? -- Umberto Eco >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > -- Brian E. Granger, Ph.D. 
Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From robert.kern at gmail.com Sat Aug 28 20:17:17 2010 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Aug 2010 19:17:17 -0500 Subject: [IPython-dev] !commands in the zmq model In-Reply-To: References: Message-ID: On 2010-08-28 02:29 , Fernando Perez wrote: > It's a shame it doesn't exist on windows, but it's apparently a very > non-trivial task to port it (I remember hearing William Stein several > times comment on how good it would be for Sage to have pexpect on > Windows, and knowing them, if it was doable they would have already > done it). I believe the current development is here: http://sage.math.washington.edu/home/goreckc/sage/wexpect/ You may want to check with the author, Chris K. Gorecki for information. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ellisonbg at gmail.com Sun Aug 29 15:24:11 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 29 Aug 2010 12:24:11 -0700 Subject: [IPython-dev] [matplotlib-devel] Uniform GUI support across matplotlib, ets and ipython In-Reply-To: <854193.34240.qm@web62407.mail.re1.yahoo.com> References: <854193.34240.qm@web62407.mail.re1.yahoo.com> Message-ID: On Sat, Aug 28, 2010 at 8:12 PM, Michiel de Hoon wrote: > I implemented an event loop in the MacOSX backend and the PyOS_ImportHook event loop in PyGTK, so I've been interested in this topic. Yes, and you were quite helpful last summer when i was trying to understand the PyOS_InputHook logic. I appreciated that greatly! > If I understand guisupport.py correctly, IPython runs the backend-specific event loop. Have you considered to implement an event loop in IPython and to run that instead of a backend-specific event loop? 
> Then you won't have to iterate the event loop, and you can run multiple
> GUI backends (PyGTK, PyQt, Tkinter, ...) at the same time. The latter may
> work with the current guisupport.py, but is fragile, because running one
> of the backend-specific event loops may inadvertently run code from a
> different backend.

Yes, we do run the native event loops of the GUI toolkit requested. There are a few reasons we haven't gone in the direction you are mentioning (although it has crossed our minds):

1. We are not *that* passionate about GUI event loops. I would say our philosophy with event loops is "the simplest solution possible that is robust."

2. While it might be nice to be able to run multiple event loops, in most cases users can survive fine without this feature. This is especially true with more and more people migrating to Qt because of the license change.

3. We are just barely at the point of getting the new PyOS_InputHook and two-process kernel GUI support working robustly with matplotlib/traits/mayavi/etc. It is a 2xNxMxP testing nightmare, with 2 ways IPython can run the event loop x N toolkits x M projects x P platforms. Simply installing all possible combinations would probably take a couple of weeks, let alone debugging it all. I envy Matlab developers, who simply have to test their plotting on a few platforms. We will be lucky to cover matplotlib/traits/mayavi on just qt4/wx on Mac/Linux/Windows for the 0.11 release.

4. Integrating multiple event loops is either 1) super subtle and difficult (if you actually start all the event loops involved) or 2) tends to create solutions that busy-poll or consume non-trivial CPU power. The wx-based PyOS_InputHook and our two-process GUI support are already good examples of this: we have to work pretty hard to create things that are responsive but that don't consume 100% of the CPU.
To reduce the CPU usage of the wx PyOS_InputHook, we actually dynamically scale back the polling time depending on how often the user is triggering GUI events.

5. It is not just about integrating GUI event loops. We also have multiple other event loops in our apps that handle networking.

Cheers,

Brian

> --Michiel.
>
> --- On Sat, 8/28/10, Brian Granger wrote:
>
>> From: Brian Granger
>> Subject: [matplotlib-devel] Uniform GUI support across matplotlib, ets and ipython
>> To: matplotlib-devel at lists.sourceforge.net, "IPython Development list", enthought-dev at enthought.com, "Evan Patterson"
>> Date: Saturday, August 28, 2010, 3:42 PM
>>
>> Hi all,
>>
>> As you may know, this summer we have been working on a new two-process
>> IPython that has a beautiful Qt frontend GUI and a ZMQ-based messaging
>> layer between that GUI and the new IPython kernel. Many thanks to
>> Enthought for funding this effort!
>>
>> We are currently in the process of adding GUI event loop integration
>> to the IPython kernel so users can do interactive plotting like they
>> can with the regular ipython. You may also remember that last summer
>> we implemented a new PyOS_InputHook based GUI integration for the
>> regular ipython. This has not been released yet, but all of this will
>> be released in the upcoming 0.11 release.
>>
>> I am emailing everyone because we see that there is a need for all of
>> us to agree on two things:
>>
>> 1. How to detect if a GUI application object has been created by someone else.
>> 2. How to detect if a GUI event loop is running.
>>
>> Currently there is code in both ETS and matplotlib that fails to
>> handle these things properly in certain cases. With IPython 0.10,
>> this was not a problem because we used to hijack/monkeypatch the GUI
>> event loops after we started them. In 0.11, we will no longer be doing
>> that.
>> To address these issues, we have created a standalone module that
>> implements the needed logic:
>>
>> http://github.com/ipython/ipython/blob/newkernel/IPython/lib/guisupport.py
>>
>> This module is heavily commented and introduces a new informal
>> protocol that all of us can use to detect if event loops are
>> running. This informal protocol is inspired by how some of this is
>> handled inside ETS. Our idea is that all projects will simply copy
>> this module into their code and ship it. It is lightweight and does
>> not depend on IPython or other top-level imports. As you will see, we
>> have implemented the logic for wx and qt4; we will need help with
>> other toolkits. An important point is that matplotlib and ETS WILL
>> NOT WORK with the upcoming release of IPython unless changes are made
>> to their respective codebases. We consider this a draft and are more
>> than willing to modify the design or approach as appropriate. One
>> thing that we have not thought about yet is how to continue to support
>> 0.10 within this model.
>>
>> The good news amidst all of this is that the quality and stability of
>> the GUI support in IPython is orders of magnitude better than that in
>> the 0.10 series.
>>
>> Cheers,
>>
>> Brian
>>
>> PS: If you are curious, here is a bit of background on the issues
>> related to the PyOS_InputHook stuff:
>>
>> http://mail.scipy.org/pipermail/ipython-dev/2010-July/006330.html
>>
>> ------------------------------------------------------------------------------
>> Sell apps to millions through the Intel(R) Atom(Tm) Developer Program
>> Be part of this innovative community and reach millions of netbook users
>> worldwide. Take advantage of special opportunities to increase revenue and
>> speed time-to-market. Join now, and jumpstart your future.
>> http://p.sf.net/sfu/intel-atom-d2d
>> _______________________________________________
>> Matplotlib-devel mailing list
>> Matplotlib-devel at lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel

--
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From fperez.net at gmail.com Mon Aug 30 01:20:27 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 29 Aug 2010 22:20:27 -0700 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID:

Hi Almar,

On Fri, Aug 27, 2010 at 3:17 PM, Almar Klein wrote:
> Allrighty. I've thought about this for the past couple of days, and read
> some information on the internet. I've concluded that you're right and
> have decided that I will change IEP's license to BSD in the next release,
> and also for my visualization project. So thank you for bringing this to my
> attention :)

Fantastic! I really would like to thank you for taking my (not always worded as well as I should have) comments to heart, thinking about them, and ultimately making this decision. While I would have equally respected your right to go the other way, this makes me very happy and I hope it will be the start of a fruitful collaboration between projects!

Sincerely yours,

Fernando.

From fperez.net at gmail.com Mon Aug 30 01:42:54 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 29 Aug 2010 22:42:54 -0700 Subject: [IPython-dev] !commands in the zmq model In-Reply-To: <20100828125156.GA582@phare.normalesup.org> References: <20100828125156.GA582@phare.normalesup.org> Message-ID:

On Sat, Aug 28, 2010 at 5:51 AM, Gael Varoquaux wrote:
> Hum, I thought that I had it working. I can't test right now, but I seem
> to remember I had spent quite a lot of time on that.
>
> I am not claiming that my code was terribly good, ( :> "I write code, I
> look at it a year later, I vomit", Michael A., CACR) but if I remember
> correctly, there might be a few tricks to try and keep.
>
> * For pure Python code, as your example above, the trick was to use what
>   I have called the 'RedirectorOutputTrap' that registers a callback on
>   writing to the sys.stdout/sys.stderr. It is implemented in
>   IPython.kernel.core.redirector_output_trap.

For pure Python in-process, we're OK because the zmq objects nicely carry all stdout/err over the network, so we're in good shape. But I was using that same little Python code as a test, by calling it in a separate subprocess, to see how rapid async output and interrupts could be handled.

> * For subprocesses, I had to resort to 2 threads, one executing the
>   process, the other polling its stdout (ugly). This logic can be found
>   in IPython.frontend.process.pipedprocess. It also adds a bit of logic
>   to be able to kill the process under Windows, which you might want to
>   keep in mind.

Mmh, I've been testing your ipythonx implementation, and it definitely does *not* show subprocess output asynchronously as it happens. At least not with the example above run in a subprocess, and it won't let me interrupt the subprocess either (I see the KeyboardInterrupt happen only at the end, and coming from the ipythonx code):

0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 done!
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
/home/fperez/ipython/branches/0.10.1/IPython/frontend/wx/wx_frontend.pyc in _on_key_down(self, event, skip)
    401             if self.debug:
    402                 print >>sys.__stderr__, 'Raising KeyboardInterrupt'
--> 403             raise KeyboardInterrupt
    404             # XXX: We need to make really sure we
    405             # get back to a prompt.

KeyboardInterrupt:

No worries, getting that kind of code to work is devilishly hard. For now, the pexpect solution works very well; I'll report separately on that. And on Windows, people will need to live with the suboptimal piped behavior, until someone either ports pexpect to Windows or writes something as good based on pipes (if it can be done).

> I am not sure if this code can be of some use to you or not, but just in
> case. I believe that it might be one of the few useful things that came
> out of my work. It actually even seems to have tests :$

Thanks for pointing it out though, I wasn't sure where it was. We may very well still use it to have at least as-robust-as-possible a solution on Windows, so it's great that you brought it up. On *nix I think we'll go with the seemingly magical pexpect, which lets us have our cake and eat it too.

Take care,

f

From fperez.net at gmail.com Mon Aug 30 01:43:20 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 29 Aug 2010 22:43:20 -0700 Subject: [IPython-dev] !commands in the zmq model In-Reply-To: References: Message-ID:

On Sat, Aug 28, 2010 at 5:17 PM, Robert Kern wrote:
>
> I believe the current development is here:
>
>   http://sage.math.washington.edu/home/goreckc/sage/wexpect/
>
> You may want to check with the author, Chris K. Gorecki,
> for information.

Thanks much for the pointer, I just asked.
Cheers,

f

From fperez.net at gmail.com Mon Aug 30 01:55:06 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 29 Aug 2010 22:55:06 -0700 Subject: [IPython-dev] !commands in the zmq model In-Reply-To: References: Message-ID:

On Sat, Aug 28, 2010 at 12:29 AM, Fernando Perez wrote:
> ps - for reference, the current implementation is along the lines of:
>
> from __future__ import print_function
>
> import sys
> import pexpect
>
> self = get_ipython()
>
> if 1:
>     def system(cmd):
>         cmd = self.var_expand(cmd, depth=2).strip()
>         sh = '/bin/bash'
>         timeout = 0.05  # seconds
>         pcmd = '%s -c %r' % (sh, cmd)
>         try:
>             child = pexpect.run(pcmd, logfile=sys.stdout)
>         except KeyboardInterrupt:
>             print('\nInterrupted command: %r.' % cmd, file=sys.stderr)

OK, I realized there was a problem with the lovingly simple pexpect.run() form: *we* get the KeyboardInterrupt, but the subprocess doesn't. So while it's better than nothing, it would be nice to let the subprocess handle SIGINT correctly, since it's quite likely that it may know what to do.
This more elaborate solution seems like a good compromise to me: we still kill the subprocess (in case something stubborn just decides to ignore SIGINT), but first we send it a gentle SIGINT to give it a chance to act on it:

def system2(cmd):
    cmd = self.var_expand(cmd, depth=2).strip()
    sh = '/bin/bash'
    timeout = 0.05  # seconds
    pats = [pexpect.TIMEOUT, pexpect.EOF]
    out_size = 0
    try:
        child = pexpect.spawn(sh, ['-c', cmd])
        while True:
            res = child.expect_list(pats, timeout)
            print(child.before[out_size:], end='')
            out_size = len(child.before)
            if res == 1:
                break
    except KeyboardInterrupt:
        out_size = len(child.before)
        child.sendline(chr(3))
        res = child.expect_list(pats, timeout)
        print(child.before[out_size:], end='')
        child.terminate(force=True)

In a bunch of tests using this code:

import time, sys
try:
    for i in range(200):
        print str(i)[-1],
        print >> sys.stderr, '.',
        sys.stdout.flush()
        sys.stderr.flush()
        time.sleep(0.01)
    print 'done!'
except KeyboardInterrupt:
    print 'kbint in sleep loop'

it behaves *exactly* like we'd want. Full async output, correct interleaving of stdout and stderr, we do see the print from the script after the SIGINT come out, and then the subprocess terminates. By putting in there the child.terminate() call after the .sendline(chr(3)), we ensure that even an ill-behaved thing that does

while True:
    try:
        whatever()
    except KeyboardInterrupt:
        pass

can actually be killed from within our shell. In all my tests this is now working 100%, so I'll clean things up and will put it in. And for Windows things will unfortunately be much less satisfactory, but without some pretty deep expertise on the Windows APIs to get something like pexpect to run there, I think we're stuck.
Cheers,

f

From fperez.net at gmail.com Mon Aug 30 03:14:39 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 30 Aug 2010 00:14:39 -0700 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID:

Hi Almar,

On Fri, Aug 27, 2010 at 4:10 PM, Almar Klein wrote:
> I've read some of the documentation for the new stuff you're working on. It
> all sounds really well thought through, and I'm looking forward to the
> results. I've got a couple of questions though.

Thanks! I hope it's well thought out, it's definitely *very* thought out, but unfortunately those two things aren't always the same :) Critical feedback very, very welcome.

> - I see the possibilities of distributed computing by connecting multiple
> kernels to a single client. However, I don't get why you would want to
> connect multiple clients to a single kernel at the same time?

Collaboration: you're working on a problem and would like to discuss it with a colleague. She opens a frontend pointed to your same kernel and voilà, you're sharing a set of data and code you can both work on, type code into, make plots from, etc. Think of it like desktop sharing but for code.

Ad-hoc monitoring of a computation: you have a kernel you left in the office running a long computation. From the bar, you log in with your Android frontend, view the information it's printing, and log out knowing that everything is OK. Or you stop it when you realize something went crazy.

Ad-hoc continuation of work: you go home for the day and leave a session open at work. All of a sudden you have an idea and would like to test it, but it depends on a bunch of long computations you've already run at work and variables that are sitting in that session. No problem, just connect to it, try something out and disconnect again when satisfied.

Monitoring: you can set up a 'read-only' client that monitors a kernel and publishes its output somewhere (logs, http, sms, whatever).
There's plenty more, I'm sure. These are just a few that quickly come to mind.

> - I saw an example in which you're kind of going towards a Mathematica/Sage
> type of UI. Is this what you're really aiming at, or is this one possible
> front end? I'm asking because IEP has more of a Matlab kind of UI, with an
> editor from which the user can run code (selected lines or cells: code
> between two lines starting with two ##'s). Would that be compatible with the
> kernel you're designing?

Absolutely! We want *both* types of interface. Evan's frontend is more of a terminal widget that could be embedded in an IDE, while Gerardo's has more the feel of a Qt-based notebook. And obviously as soon as an HTTP layer is written, something like the Sage notebook becomes the next step. Several of us are long-time Mathematica users and use Sage regularly, so those interfaces have obviously shaped our views a lot. But what we're trying to build is a *protocol* and infrastructure to make multiple types of client possible.

> - About the heartbeat thing to detect whether kernels are still alive. I use
> a similar concept in the channels module. I actually never realized that
> this would fail if Python is running extension code. However, I do run
> Cython code that takes about a minute to run without problems. Is that
> because it's Cython and the Python interpreter is still involved? I'll do
> some tests running Cython and C code next week.

The question is whether your messaging layer can continue to function if you have a long-running computation that's not in Python. You can easily see that by just calling a large SVD, eigenvalue decomposition or FFT from scipy, things that are easy to make big and that are locked inside some Fortran routine for a long time. In that scenario, your program will not touch the Python parts until the Fortran (or pure C) finishes.
Whether that's detrimental to your overall app or not depends on how the other parts handle one component being unresponsive for a while.

In our case obviously the kernel itself remains unresponsive, but the important part is that the networking doesn't suffer. So we have enough information to take action even in the face of an unresponsive kernel.

> Since I think it's interesting to see that we've taken rather different
> approaches to do (more or less) the same thing, I'll share some background
> on what I do in IEP:
>
> I use one Channels instance from the channels.py module, which means all
> communication goes over one socket. However, I can use as many as 128
> different channels each way. Instead of a messaging format, I use a channel
> for each task. By the way, I'm not saying my method is better; yours is
> probably more "scalable", mine requires no/little message processing. So
> from the kernel's perspective, I have one receiving channel for stdin, two
> sending for stdout and stderr, one receiving for control (mostly debugging
> at the moment) and one sending for status messages (whether busy/ready, and
> debug info). Lastly there's one receiving and one sending channel for
> introspection requests and responses.

Interesting... But I imagine each channel requires a socket pair, right? In that case you'll definitely have problems if you want to have hundreds/thousands of kernels, as you'll eventually run out of ports for connections. Since that's a key part of IPython, we need a design that scales well in that direction from the get-go. But I see how your approach provides you with important benefits in certain contexts.

> To receive code, sys.stdin is replaced with a receivingChannel from
> channels.py, which is non-blocking. The readline() method (which is what
> raw_input() uses) *is* blocking, so that raw_input() behaves appropriately.
>
> The remote process runs an interpreter loop.
> Each iteration the interpreter checks (non-blocking) the stdin for a
> command to be run. If there is, it does so using almost the same code in
> code.py. Next (if required) it processes GUI events. Next it produces a
> prompt if necessary, and sends status. In another thread, there is a loop
> that listens for introspection requests (auto-completion, calltips, docs).

What happens if the user wants to execute code in the remote process that itself calls raw_input()? For example, can one call pdb in post-mortem mode in the remote process?

In any case, thanks a lot for your interest! Especially now with your license change, it would be wonderful if the two projects could collaborate more closely.

All the best,

f

From fperez.net at gmail.com Mon Aug 30 03:18:55 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 30 Aug 2010 00:18:55 -0700 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID:

Hi Almar,

returning to your original thread, which got a little sidetracked by our licensing discussion... :)

On Tue, Aug 24, 2010 at 8:06 AM, Almar Klein wrote:
> I'm developing an IDE for Python (http://code.google.com/p/iep/) that is
> capable of integrating the event loop of several GUI toolkits. On a side
> note, I used much of IPython's code as inspiration on how to do that, so
> thanks for that.
>
> I saw in the IPython documentation that IPython users can detect whether
> IPython hijacked the event loop as follows (for wx):
>
> try:
>     from IPython import appstart_wx
>     appstart_wx(app)
> except ImportError:
>     app.MainLoop()
>
> A very nifty feature indeed. However, building further on this, wouldn't it
> be nice if people could perform this trick regardless of which IDE or
> shell the code is running in? Therefore I propose to insert an object in the
> GUI's module to indicate that the GUI event loop does not need to be
> entered.
> I currently use for my IDE:
>
> import wx
> if not hasattr(wx, '_integratedEventLoop'):
>     app = wx.PySimpleApp()
>     app.MainLoop()
>
> Currently, _integratedEventLoop is a string with the value 'IEP', indicating
> who hijacked the main loop. I'm not sure what IPython's appstart_* function
> does, but the inserted object might just as well be a function that needs to
> be called (using the app instance as an argument, but how to call it for
> fltk or gtk then?).
>
> I'm interested to know what you think of this idea.

Well, Brian just implemented more or less this very same thing:

http://github.com/ipython/ipython/blob/newkernel/IPython/lib/guisupport.py

We decided to call the attribute '_in_event_loop' instead, partly for PEP-8 reasons but especially because Enthought was already using that name. Absent a good reason to deviate from their chosen name, which is already in a very large codebase, we figured we'd use that.

So let's hope we can get all GUI projects to agree on this approach, and it should become possible with only minimal work on the part of authors to coexist well with the event loops of various toolkits and interactive apps like IPython or IEP.
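The informal protocol being discussed here can be sketched roughly as follows. This is an illustration only, not the actual guisupport.py API (whose helpers are per-toolkit); the one thing taken from the thread is the `_in_event_loop` attribute name:

```python
# Rough sketch of the '_in_event_loop' convention discussed above.
# Function names are illustrative; only the attribute name comes from
# the thread.

def is_event_loop_running(app):
    """True if an embedding environment (IPython, IEP, ...) owns the loop."""
    return bool(getattr(app, '_in_event_loop', False))

def start_event_loop(app, run_main_loop):
    """Enter the toolkit's main loop only if nobody else already did."""
    if not is_event_loop_running(app):
        app._in_event_loop = True   # announce that the loop is now running
        try:
            run_main_loop()         # e.g. app.MainLoop() for wx
        finally:
            app._in_event_loop = False
```

The point of the convention is that both sides only ever touch one agreed-upon attribute, so no project needs to import any other project to cooperate.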
Regards,

f

From fperez.net at gmail.com Mon Aug 30 03:32:20 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 30 Aug 2010 00:32:20 -0700 Subject: [IPython-dev] iptest on 0.11.alpha1.git In-Reply-To: <20100828142120.18fbef4d@earth> References: <20100828142120.18fbef4d@earth> Message-ID:

On Sat, Aug 28, 2010 at 5:21 AM, Thomas Spura wrote:
> The commit hash will be noted in the version at the startup
> automatically after this commit:
> http://github.com/tomspur/ipython/commit/e2e56f2917d941051e99a893f3e26989b78aaa53
>
> And the branch name will be there after this commit:
> http://github.com/tomspur/ipython/commit/7b5f6ed4abd9308b9fc2a2071a756b0aba3a680b

Mmh, actually the more I think about this, the more reluctant I am to have anything that opens a subprocess or imports a whole repo-reading module just to get a version string. Operations like that have an impact on startup time, and IPython is already slow enough to start up as it is.

We should think carefully about how to get that information in the absolutely fastest, most static way possible in all cases. Recording it in a file with post-commit hooks or something like the old update-revno tool we had for bzr, but NOT with dynamic code run at initialization time. I really want our startup time to go down, not up.

A long time ago I had a regular habit of starting up plain python and then ipython, to compare the difference. It used to be nearly undetectable; now it's noticeable, and doing this kind of repo-spelunking at startup time isn't going to help matters.

Sorry to go back a bit on your efforts, but I want to make sure that we don't end up harming the everyday experience just to get a fancy version string.
Regards,

f

From almar.klein at gmail.com Mon Aug 30 04:51:44 2010 From: almar.klein at gmail.com (Almar Klein) Date: Mon, 30 Aug 2010 10:51:44 +0200 Subject: [IPython-dev] Kernel-client communication Message-ID:

Hi Fernando,

> > - I see the possibilities of distributed computing by connecting multiple
> > kernels to a single client. However, I don't get why you would want to
> > connect multiple clients to a single kernel at the same time?
>
> Collaboration: you're working on a problem and would like to discuss
> it with a colleague. She opens a frontend pointed to your same kernel
> and voilà, you're sharing a set of data and code you can both work on,
> type code into, make plots from, etc. Think of it like desktop
> sharing but for code.
>
> Ad-hoc monitoring of a computation: you have a kernel you left in the
> office running a long computation. From the bar, you log in with your
> Android frontend, view the information it's printing, and log out
> knowing that everything is OK. Or you stop it when you realize
> something went crazy.
>
> Ad-hoc continuation of work: you go home for the day and leave a
> session open at work. All of a sudden you have an idea and would like
> to test it, but it depends on a bunch of long computations you've
> already run at work and variables that are sitting in that session.
> No problem, just connect to it, try something out and disconnect again
> when satisfied.
>
> Monitoring: you can set up a 'read-only' client that monitors a kernel
> and publishes its output somewhere (logs, http, sms, whatever).
>
> There's plenty more, I'm sure. These are just a few that quickly come to
> mind.

Ah right. Although I'm not sure how often one would use this in practice, it's certainly a nice feature, and seems to open up a range of possibilities. I can imagine this requirement makes things considerably harder to implement, but since you're designing a whole new protocol from scratch, it's probably a good choice to include it now.
> > - I saw an example in which you're kind of going towards a > Mathematica/Sage > > type of UI. Is this what you're really aiming at, or is this one possible > > front end? I'm asking because IEP has more of a Matlab kind of UI, with > an > > editor from which the user can run code (selected lines or cells: code > > between two lines starting with two ##'s). Would that be compatible with > the > > kernel you're designing? > > Absolutely! We want *both* types of interface. Evan's frontend is > more of a terminal widget that could be embedded in an IDE, while > Gerardo's has more the feel of a Qt-based notebook. And obviously as > soon as an HTTP layer is written, something like the Sage notebook > becomes the next step. Several of us are long-time Mathematica users > and use Sage regularly, so those interfaces have obviously shaped our > views a lot. But what we're trying to build is a *protocol* and > infrastructure to make multiple types of client possible. > Great! > - About the heartbeat thing to detect whether kernels are still alive. I > use > > a similar concept in the channels module. I actually never realized that > > this would fail if Python is running extension code. However, I do run > > Cython code that takes about a minute to run without problems. Is that > > because it's Cython and the Python interpreter is still involved? I'll do > > some test running Cython and C code next week. > > The question is whether your messaging layer can continue to function > if you have a long-running computation that's not in Python. You can > easily see that by just calling a large SVD, eigenvalue decomposition > or FFT from scipy, things that are easy to make big and that are > locked inside some Fortran routine for a long time. In that scenario, > your program will not touch the python parts until the Fortran (or > pure C) finish. 
> Whether that's detrimental to your overall app or not
> depends on how the other parts handle one component being unresponsive
> for a while.
>
> In our case obviously the kernel itself remains unresponsive, but the
> important part is that the networking doesn't suffer. So we have
> enough information to take action even in the face of an unresponsive
> kernel.

I'm quite new to networking, so sorry if this sounds stupid: Other than the heartbeat stuff not working, would it also have other effects? I mean, data cannot be sent or received, so might network buffers overflow or something? Further, am I right that the heartbeat is not necessary when communicating between processes on the same box using 'localhost' (since some network layers are bypassed)? That would give a short-term solution for IEP.

> > Since I think it's interesting to see that we've taken rather different
> > approaches to do (more or less) the same thing, I'll share some background
> > on what I do in IEP:
> >
> > I use one Channels instance from the channels.py module, which means all
> > communication goes over one socket. However, I can use as many as 128
> > different channels each way. Instead of a messaging format, I use a channel
> > for each task. By the way, I'm not saying my method is better; yours is
> > probably more "scalable", mine requires no/little message processing. So
> > from the kernel's perspective, I have one receiving channel for stdin, two
> > sending for stdout and stderr, one receiving for control (mostly debugging
> > at the moment) and one sending for status messages (whether busy/ready, and
> > debug info). Lastly there's one receiving and one sending channel for
> > introspection requests and responses.
>
> Interesting... But I imagine each channel requires a socket pair,
> right? In that case you'll definitely have problems if you want
> to have hundreds/thousands of kernels, as you'll eventually run out
> of ports for connections.
Since that's a key part of ipython, we need > a design that scales well in that direction from the get-go. But I > see how your approach provides you with important benefits in certain > contexts. > No, that's the great thing! All channels are multiplexed over the same socket pair. When writing a message to a channel, it is put in a queue, adding a small header to indicate the channel id. There is a single thread that sends and receives messages over the socket. It just pops the messages from the queue and sends them to the other side. At the receiver side, the messages are distributed to the queue corresponding to the right channel. So there's one 'global' queue on the sending side and one queue per channel on the receiver side. > > To receive code, sys.stdin is replaced with a receivingChannel from > > channels.py, which is non-blocking. The readline() method (which is what > > raw_input() uses) *is* blocking, so that raw_input() behaves > appropriately. > > > > The remote process runs an interpreter loop. Each iteration the > interpreter > > checks (non-blocking) the stdin for a command to be run. If there is, it > > does so using almost the same code in code.py. Next (if required) process > > GUI events. Next produce prompt if necessary, and send status. In another > > thread, there is a loop that listens for introspection requests > > (auto-completion, calltips, docs). > > What happens if the user wants to execute in the remote process which > itself calls raw_input()? For example, can one call pdb in > post-mortem mode in the remote process? > All goes well. Calling code that uses raw_input(), simply uses the sys.stdin.readline() method. Where the sys.stdin is simply a ReceivingChannel instance from channels.py. In any case, thanks a lot for your interest! > > Especially now with your license change, it would be wonderful if the > two projects could collaborate more closely. > I'm looking forward to it! 
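The multiplexing scheme Almar describes — a small per-message header carrying the channel id, a single socket, and per-channel queues on the receiving side — could be sketched like this (a toy illustration, not the actual channels.py):

```python
# Toy sketch of channel multiplexing: frame = 1-byte channel id +
# 4-byte payload length + payload, all sent over a single stream;
# the receiver fans messages out into one queue per channel.
import struct
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2 (the era of this thread)

def pack_frame(channel_id, payload):
    # Prefix the payload with channel id and length for routing.
    return struct.pack('>BI', channel_id, len(payload)) + payload

def unpack_frame(frame):
    channel_id, length = struct.unpack('>BI', frame[:5])
    return channel_id, frame[5:5 + length]

class Demultiplexer(object):
    """Receiver side: route each frame to its channel's queue."""
    def __init__(self, n_channels=128):
        self.queues = [queue.Queue() for _ in range(n_channels)]

    def feed(self, frame):
        channel_id, payload = unpack_frame(frame)
        self.queues[channel_id].put(payload)
```

Because every channel shares the one stream, only a single port pair is consumed per kernel, which is the property being contrasted with one-socket-per-channel designs.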
Cheers, Almar PS: I changed the topic name of this e-mail to something that better represents what we're discussing :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From almar.klein at gmail.com Mon Aug 30 05:09:30 2010 From: almar.klein at gmail.com (Almar Klein) Date: Mon, 30 Aug 2010 11:09:30 +0200 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: On 30 August 2010 09:18, Fernando Perez wrote: > Hi Almar, > > returning to your original thread, which got a little sidetracked by > our licensing discussion... :) > > On Tue, Aug 24, 2010 at 8:06 AM, Almar Klein > wrote: > > I'm developing an IDE for Python (http://code.google.com/p/iep/) that is > > capable of integrating the event loop of several GUI toolkits. On a side > > note, I used much of IPython's code as inspiration on how to do that, so > > thanks for that. > > > > I saw in the IPython documentation that IPython users can detect whether > > IPython hijacked the event loop as follows (for wx): > > > > try: > > from IPython import appstart_wx > > appstart_wx(app) > > except ImportError: > > app.MainLoop() > > > > A very nifty feature indeed. However, building further on this, wouldn't > it > > be nice if people could perform this trick regardless of which IDE or > > shell the code is running in? Therefore I propose to insert an object in the > > GUI's module to indicate that the GUI event loop does not need to be > > entered. I currently use for my IDE: > > > > import wx > > if not hasattr(wx, '_integratedEventLoop'): > > app = wx.PySimpleApp() > > app.MainLoop() > > > > Currently, _integratedEventLoop is a string with the value 'IEP', indicating > > who hijacked the main loop. 
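[Editorial note: the `_integratedEventLoop` convention quoted above amounts to a sentinel attribute on the toolkit module — the IDE sets it, user scripts test for it. A self-contained sketch, using a fake module in place of wx so it stays runnable; none of the names below are real wx API.]

```python
import sys
import types

# Stand-in for a GUI toolkit module (wx, qt4, gtk, ...).
gui = types.ModuleType('fakegui')
sys.modules['fakegui'] = gui

def ide_integrates(toolkit, name='IEP'):
    # IDE side: hijack the event loop and leave a marker behind,
    # recording who hijacked it.
    toolkit._integratedEventLoop = name

def user_start_loop(toolkit):
    # User-script side: start the native main loop only if no
    # IDE or shell has already integrated one.
    if hasattr(toolkit, '_integratedEventLoop'):
        return 'loop handled by %s' % toolkit._integratedEventLoop
    return 'starting native main loop'

print(user_start_loop(gui))   # no IDE has touched the module yet
ide_integrates(gui)
print(user_start_loop(gui))   # now IEP owns the loop
```

This prints 'starting native main loop' first and 'loop handled by IEP' second, which is exactly the branch Almar's wx snippet takes.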
I'm not sure what IPython's appstart_* > function > > does, but the inserted object might just as well be a function that needs > to > > be called (using the app instance as an argument, but how to call it for > > fltk or gtk then?). > > > > I'm interested to know what you think of this idea. > > Well, Brian just implemented more or less this very same thing: > > http://github.com/ipython/ipython/blob/newkernel/IPython/lib/guisupport.py > > We decided to call the attribute '_in_event_loop' instead, partly for > PEP-8 reasons but especially because Enthought was already using that > name. Absent a good reason to deviate from their chosen name, which > is already in a very large codebase, we figured we'd use that. > > So let's hope we can get all GUI projects to agree on this approach, > and it should become possible with only minimal work on the part of > authors to coexist well with the event loops of various toolkits and > interactive apps like IPython or IEP. > Great. I was not aware of Enthought using this, so I'm happy to adopt that name instead. Still, may I suggest the following: IPython or IEP, or any environment, could inject a function 'start_event_loop' in the module namespace of the GUI toolkit it integrates. My main argument for this is that it would be independent of IPython or any specific library or IDE. The user can then simply call: import wx if hasattr(wx, 'start_event_loop'): wx.start_event_loop() else: # Start the "native" way app = wx.PySimpleApp(*args, **kwargs) app.MainLoop() Regards, Almar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tomspur at fedoraproject.org Mon Aug 30 07:11:27 2010 From: tomspur at fedoraproject.org (Thomas Spura) Date: Mon, 30 Aug 2010 13:11:27 +0200 Subject: [IPython-dev] iptest on 0.11.alpha1.git In-Reply-To: References: <20100828142120.18fbef4d@earth> Message-ID: <20100830131127.0b4c38ec@earth> On Mon, 30 Aug 2010 00:32:20 -0700 Fernando Perez wrote: > On Sat, Aug 28, 2010 at 5:21 AM, Thomas Spura > wrote: > > The commit hash will be noted in the version at the startup > > automatically after this commit: > > http://github.com/tomspur/ipython/commit/e2e56f2917d941051e99a893f3e26989b78aaa53 > > > > And the branch name will be there after this commit: > > http://github.com/tomspur/ipython/commit/7b5f6ed4abd9308b9fc2a2071a756b0aba3a680b > > > > Mmh, actually the more I think about this, the more reluctant I am to > have anything that opens a subprocess or imports a whole repo-reading > module just to get a version string. Operations like that have an > impact on startup time, and IPython is already slow enough to start up > as it is. > > We should think carefully about how to get that information in the > absolutely fastest, most static way possible in all cases. Recording > it in a file with post-commit hooks or something like the old > update-revno tool we had for bzr, but NOT with dynamic code run at > initialization time. I rewrote the update-revno so that it does the right thing with git now: Replace branch name and revision number with the git-branch and git-revision. Unfortunately I haven't yet found a good git command to create an archive with the not-yet-committed changes... > I really want our startup time to go down, not up. A long time ago I > had a regular habit of starting up plain python and then ipython, to > compare the difference. It used to be nearly undetectable, now it's > noticeable, and doing this kind of repo-spelunking at startup time > isn't going to help matters. 
Hmm, yes, it would be great to have a good startup time, but what that patch does is make it worse when running in development mode. When releasing a version, none of that will even get touched... I could try to get the development startup time down too, but I think it's not that important to think about maybe half a second more in development startup. It would be far better to set the release flag and compare that timing, wouldn't it? Thomas From tomspur at fedoraproject.org Mon Aug 30 07:59:31 2010 From: tomspur at fedoraproject.org (Thomas Spura) Date: Mon, 30 Aug 2010 13:59:31 +0200 Subject: [IPython-dev] iptest on 0.11.alpha1.git In-Reply-To: <20100830131127.0b4c38ec@earth> References: <20100828142120.18fbef4d@earth> <20100830131127.0b4c38ec@earth> Message-ID: <20100830135931.25d70548@earth> On Mon, 30 Aug 2010 13:11:27 +0200 Thomas Spura wrote: > On Mon, 30 Aug 2010 00:32:20 -0700 > Fernando Perez wrote: > > > On Sat, Aug 28, 2010 at 5:21 AM, Thomas Spura > > wrote: > > > The commit hash will be noted in the version at the startup > > > automatically after this commit: > > > http://github.com/tomspur/ipython/commit/e2e56f2917d941051e99a893f3e26989b78aaa53 > > > > > > And the branch name will be there after this commit: > > > http://github.com/tomspur/ipython/commit/7b5f6ed4abd9308b9fc2a2071a756b0aba3a680b > > > > > > > > Mmh, actually the more I think about this, the more reluctant I am > > to have anything that opens a subprocess or imports a whole > > repo-reading module just to get a version string. Operations like > > that have an impact on startup time, and IPython is already slow > > enough to start up as it is. > > > > We should think carefully about how to get that information in the > > absolutely fastest, most static way possible in all cases. > > Recording it in a file with post-commit hooks or something like the > > old update-revno tool we had for bzr, but NOT with dynamic code run > > at initialization time. 
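[Editorial note: the static scheme Fernando asks for in the quote above — a hook records the revision once, and startup merely imports the result — fits in a few lines. A sketch with hypothetical names (`git_info`, `_git_version.py`); the actual update_revnum.py may well differ.]

```python
import subprocess

def git_info():
    """Ask git for the current branch and short commit hash.

    Meant to be run from a post-commit hook or a release script,
    never at interpreter startup time.
    """
    def run(*args):
        return subprocess.check_output(('git',) + args).decode().strip()
    branch = run('rev-parse', '--abbrev-ref', 'HEAD')
    commit = run('rev-parse', '--short', 'HEAD')
    return branch, commit

def render_version_module(branch, commit):
    """Return the text of a tiny module the hook would write out.

    Startup then just imports the generated file, which costs no
    more than importing any hard-coded string constant.
    """
    return "branch = %r\ncommit = %r\n" % (branch, commit)

# The hook would do: open('_git_version.py', 'w').write(...)
print(render_version_module('master', 'e2e56f2'), end='')
```

Splitting the git calls from the file rendering keeps the subprocess cost strictly inside the hook, which is the whole point of Fernando's objection.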
> I rewrote the update-revno so that it does the right thing with git > now: Replace branch name and revision number with the git-branch and > git-revision. Unfortunately I haven't yet found a good git command to > create an archive with the not-yet-committed changes... > > > I really want our startup time to go down, not up. A long time ago > > I had a regular habit of starting up plain python and then ipython, > > to compare the difference. It used to be nearly undetectable, now > > it's noticeable, and doing this kind of repo-spelunking at startup > > time isn't going to help matters. > > Hmm, yes, it would be great to have a good startup time, but what that > patch does is make it worse when running in development mode. When > releasing a version, none of that will even get touched... > > I could try to get the development startup time down too, but I think > it's not that important to think about maybe half a second more in > development startup. > It would be far better to set the release flag and compare that > timing, wouldn't it? This is now done with "./update_revnum.py" after this commit: http://github.com/tomspur/ipython/commit/412b73175bb654f87653338bc3085c31ec7081db The problem with make-tarball remains: currently it makes a tarball of the latest committed changes, but not the changes made by update_revnum... :( Thomas From ellisonbg at gmail.com Mon Aug 30 14:06:06 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 30 Aug 2010 11:06:06 -0700 Subject: [IPython-dev] [matplotlib-devel] Uniform GUI support across matplotlib, ets and ipython In-Reply-To: <894265.20993.qm@web62401.mail.re1.yahoo.com> References: <894265.20993.qm@web62401.mail.re1.yahoo.com> Message-ID: On Mon, Aug 30, 2010 at 7:10 AM, Michiel de Hoon wrote: > Hi Brian, > Thanks for your reply. I agree that integrating multiple event loops is not essential for most users. But if you are not integrating multiple event loops, then why do you need poll? 
In the two process kernel we do currently integrate two event loops: 1. Our networking event loop that is based on zeromq/pyzmq 2. A single GUI event loop from wx, qt4, etc. We do this by triggering an iteration of our networking event loop on a periodic GUI timer. So we definitely have to face multiple event loop integration, but it is much simpler when you only have 1 GUI event loop involved. Cheers, Brian > Best, > --Michiel. > > > --- On Sun, 8/29/10, Brian Granger wrote: > >> From: Brian Granger >> Subject: Re: [matplotlib-devel] Uniform GUI support across matplotlib, ets and ipython >> To: "Michiel de Hoon" >> Cc: matplotlib-devel at lists.sourceforge.net, "IPython Development list" , enthought-dev at enthought.com, "Evan Patterson" >> Date: Sunday, August 29, 2010, 3:24 PM >> On Sat, Aug 28, 2010 at 8:12 PM, >> Michiel de Hoon >> wrote: >> > I implemented an event loop in the MacOSX backend and >> the PyOS_InputHook event loop in PyGTK, so I've been >> interested in this topic. >> >> Yes, and you were quite helpful last summer when I was >> trying to >> understand the PyOS_InputHook logic. I appreciated that >> greatly! >> >> > If I understand guisupport.py correctly, IPython runs >> the backend-specific event loop. Have you considered >> implementing an event loop in IPython and running that instead >> of a backend-specific event loop? Then you won't have to >> iterate the event loop, and you can run multiple GUI >> backends (PyGTK, PyQT, Tkinter, ...) at the same time. The >> latter may work with the current guisupport.py, but is >> fragile, because running one of the backend-specific event >> loops may inadvertently run code from a different backend. >> >> Yes, we do run the native event loops of the GUI toolkit >> requested. >> There are a few reasons we haven't gone the direction you >> are >> mentioning (although it has crossed our minds): >> >> 1. We are not *that* passionate about GUI event >> loops. 
I would say >> our philosophy with event loops is "the simplest solution >> possible >> that is robust." >> 2.? While it might be nice to be able to run multiple >> event loops, in >> most cases users can survive fine without this >> feature.? This is >> especially true with more and more people migrating to Qt >> because of >> the license change. >> 3.? We are just barely at the point of getting the new >> PyOS_InputHook >> and two process kernel GUI support working robustly with >> matplotlib/traits/mayavi/etc.? It is an 2xNxMxP >> testing nightmare with >> 2 ways IPython can run the event loop x N toolkits x M >> projects x P >> platforms.? Simply installing all possible >> combinations would probably >> take a couple of weeks time, let alone debugging it >> all.? I envy >> matlab developers that simple have to test their plotting >> on a few >> platforms.? We will be lucky to cover >> matplotlib/traits/mayavi on just >> qt4/wx on Mac/Linux/windows for the 0.11 release. >> 4.? Integrating multiple event loops is either 1) >> super subtle and >> difficult (if you actually start all the event loops >> involved) or 2) >> tends to create solutions that busy poll or consume >> non-trivial CPU >> power.? The wx based PyOS_Inputhook and our two >> process GUI support >> are already great examples of this.? We have to work >> pretty hard to >> create things that are responsive but that don't consume >> 100% of the >> CPU.? To reduce the CPU usage of the wx PyOS_InputHook >> we actually >> dynamically scale back the polling time depending on how >> often the >> user is triggering GUI events. >> 5.? It is not just about integrating GUI event >> loops.? We also have >> multiple other event loops in our apps that handle >> networking. >> >> Cheers, >> >> Brian >> >> >> > --Michiel. 
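[Editorial note: the design Brian describes earlier in the thread — the networking event loop iterated from a periodic GUI timer — is easier to see with a toy model. The kernel side exposes one non-blocking iteration; a QTimer, wx.Timer, or similar calls it every few milliseconds. Everything below is an illustrative stand-in, not IPython's actual code.]

```python
import queue

class ToyKernel:
    """Stand-in for the networking side: one non-blocking 'iteration'
    that processes whatever messages arrived since the last tick."""

    def __init__(self):
        self.inbox = queue.Queue()
        self.handled = []

    def do_one_iteration(self):
        # Equivalent of one pass through the zmq event loop: drain all
        # pending messages without blocking, then return control to
        # the GUI so it stays responsive.
        while True:
            try:
                msg = self.inbox.get_nowait()
            except queue.Empty:
                return
            self.handled.append(msg)

kernel = ToyKernel()
kernel.inbox.put('execute_request: 1+1')

# In a real frontend this callback would hang off a periodic GUI
# timer; here we simply invoke it by hand to show the flow.
def on_gui_timer():
    kernel.do_one_iteration()

on_gui_timer()
print(kernel.handled)  # ['execute_request: 1+1']
```

Because `do_one_iteration` never blocks, a hung GUI delays message handling but never deadlocks the networking side — which is the property Brian highlights.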
>> > >> > --- On Sat, 8/28/10, Brian Granger >> wrote: >> > >> >> From: Brian Granger >> >> Subject: [matplotlib-devel] Uniform GUI support >> across matplotlib, ets and ipython >> >> To: matplotlib-devel at lists.sourceforge.net, >> "IPython Development list" , >> enthought-dev at enthought.com, >> "Evan Patterson" >> >> Date: Saturday, August 28, 2010, 3:42 PM >> >> Hi all, >> >> >> >> As? you may know, this summer we have been >> working on >> >> a new two >> >> process IPython that has a beautiful Qt frontend >> GUI and a >> >> ZMQ based >> >> messaging layer between that GUI and the new >> IPython >> >> kernel.? Many >> >> thanks to Enthought for funding this effort! >> >> >> >> We are currently in the process of adding GUI >> event loop >> >> integration >> >> to the ipython kernel so users can do interactive >> plotting >> >> like they >> >> can with the regular ipython.? You may also >> remember >> >> that last summer >> >> we implemented a new PyOs_InputHook based GUI >> integration >> >> for the >> >> regular ipython.? This has not been released yet, >> but >> >> all of this will >> >> be released in the upcoming 0.11 release. >> >> >> >> I am emailing everyone because we see that there >> is a need >> >> for all of >> >> us to agree on two things: >> >> >> >> 1.? How to detect if a GUI application object has >> been >> >> created by someone else. >> >> 2.? How to detect if a GUI event loop is >> running. >> >> >> >> Currently there is code in both ETS and matplotlib >> that >> >> fails to >> >> handle these things properly in certain cases. >> With >> >> IPython 0.10, >> >> this was not a problem because we used to >> >> hijack/monkeypatch the GUI >> >> eventloops after we started them.? In 0.11, we >> will no >> >> longer be doing >> >> that.? 
To address these issues, we have created >> a >> >> standalone module >> >> that implements the needed logic: >> >> >> >> http://github.com/ipython/ipython/blob/newkernel/IPython/lib/guisupport.py >> >> >> >> This module is heavily commented and introduces a >> new >> >> informal >> >> protocol that all of us can use to detect if >> event >> >> loops are >> >> running. This informal protocol is inspired by >> how >> >> some of this is >> >> handled inside ETS. Our idea is that all >> projects >> >> will simply copy >> >> this module into their code and ship it. It is >> >> lightweight and does >> >> not depend on IPython or other top-level >> imports. As >> >> you will see, we >> >> have implemented the logic for wx and qt4, we will >> need >> >> help with >> >> other toolkits. An important point is that >> matplotlib >> >> and ets WILL >> >> NOT WORK with the upcoming release of IPython >> unless >> >> changes are made >> >> to their respective codebases. We consider this >> a >> >> draft and are more >> >> than willing to modify the design or approach as >> >> appropriate. One >> >> thing that we have not thought about yet is how to >> continue >> >> to support >> >> 0.10 within this model. >> >> >> >> The good news amidst all of this is that the >> quality and >> >> stability of >> >> the GUI support in IPython is orders of magnitude >> better >> >> than that in >> >> the 0.10 series. >> >> >> >> Cheers, >> >> >> >> Brian >> >> >> >> PS: If you are curious, here is a bit of >> background >> >> on the issues >> >> related to the PyOS_Inputhook stuff: >> >> >> >> http://mail.scipy.org/pipermail/ipython-dev/2010-July/006330.html >> >> >> >> >> ------------------------------------------------------------------------------ >> >> Sell apps to millions through the Intel(R) >> Atom(Tm) >> >> Developer Program >> >> Be part of this innovative community and reach >> millions of >> >> netbook users >> >> worldwide. 
Take advantage of special opportunities >> to >> >> increase revenue and >> >> speed time-to-market. Join now, and jumpstart your >> future. >> >> http://p.sf.net/sfu/intel-atom-d2d >> >> _______________________________________________ >> >> Matplotlib-devel mailing list >> >> Matplotlib-devel at lists.sourceforge.net >> >> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel >> >> >> > >> > >> > >> > >> >> >> >> -- >> Brian E. Granger, Ph.D. >> Assistant Professor of Physics >> Cal Poly State University, San Luis Obispo >> bgranger at calpoly.edu >> ellisonbg at gmail.com >> > > > > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From fperez.net at gmail.com Tue Aug 31 01:28:11 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 30 Aug 2010 22:28:11 -0700 Subject: [IPython-dev] Kernel-client communication In-Reply-To: References: Message-ID: On Mon, Aug 30, 2010 at 1:51 AM, Almar Klein wrote: > Ah right. Although I'm not sure how often one would use such this in > practice, it's certainly a nice feature, and seems to open op a range of > possibilities. I can imagine this requirement makes things considerably > harder to implement, but since you're designing a whole new protocol from > scratch, it's probably a good choice to include it now. And the whole thing fits naturally in our design for tools that enable both interactive/collaborative computing and distributed/parallel work within one single framework. After all, it's just manipulating namespaces :) >> In our case obviously the kernel itself remains unresponsive, but the >> important part is that the networking doesn't suffer. ?So we have >> enough information to take action even in the face of an unresponsive >> kernel. > > I'm quite a new to networking, so sorry for if this sounds stupid: Other > than the heartbeat stuff not working, would it also have other effects? 
I > mean, data can not be send or received, so would maybe network buffers > overflow or anything? Depending on how you implemented your networking layer, you're likely to lose data. And you'll need to ensure that your api recovers gracefully from half-sent messages, unreplied messages, etc. Getting a robust and efficient message transport layer written is not easy work. It takes expertise and detailed knowledge, coupled with extensive real-world experience, to do it right. We simply decided to piggy back on some of the best that was out there, rather than trying to rewrite our own. The features we gain from zmq (it's not just the low-level performance, it's also the simple but powerful semantics of their various socket types, which we've baked into the very core of our design) are well worth the price of a C dependency in this case. > Further, am I right that the heartbeat is not necessary when communicating > between processes on the same box using 'localhost' (since some network > layers are bypassed)? That would give a short term solution for IEP. Yes, on local host you can detect the process via other mechanisms. The question is whether the system recovers gracefully from dropped messages or incomplete connections. You do need to engineer that into the code itself, so that you don't lock up your client when the kernel becomes unresponsive, for example. I'm sure we still have corner cases in our code where we can lock up, it's not easy to prevent all such occurrences. > No, that's the great thing! All channels are multiplexed over the same > socket pair. When writing a message to a channel, it is put in a queue, > adding a small header to indicate the channel id. There is a single thread > that sends and receives messages over the socket. It just pops the messages > from the queue and sends them to the other side. At the receiver side, the > messages are distributed to the queue corresponding to the right channel. 
So > there's one 'global' queue on the sending side and one queue per channel on > the receiver side. Ah, excellent! It seems your channels are similar to our message types, we simply dispatch on the message type (a string) with the appropriate handler. The twist in ipython is that we have made the various types of zmq sockets an integral part of the design: req/rep for stdin control, xrep/xreq for execution requests multiplexed across clients, and pub/sub for side effects (things that don't fit in a functional paradigm). We thus have a very strong marriage between the abstractions that zmq exposes and our design. Honestly, I sometimes feel as if zmq had been designed for us, because it makes certain things we'd wanted for a very long time almost embarrassingly easy. Thanks a lot for sharing your ideas, it's always super useful to look at these questions from multiple perspectives. Regards, f From fperez.net at gmail.com Tue Aug 31 01:29:20 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 30 Aug 2010 22:29:20 -0700 Subject: [IPython-dev] Uniform way of integrating event loops among different IDE's In-Reply-To: References: Message-ID: On Mon, Aug 30, 2010 at 2:09 AM, Almar Klein wrote: > Still, may I suggest the following: IPython or IEP, or any environment, > could inject a function 'start_event_loop' in the module namespace of the > GUI toolkit it integrates. My main argument for this is that it would be > independent of IPython or any specific library or IDE. The user can then > simply call: > > import wx > if hasattr(wx, 'start_event_loop'): >     wx.start_event_loop() > else: >     # Start the "native" way >     app = wx.PySimpleApp(*args, **kwargs) >     app.MainLoop() > I mentioned it to Brian on IRC and he saw a catch with this idea, but I'm not sure of the details. Over the next couple of days we can hash it over here, thanks a lot for the feedback. 
We certainly want a solution that covers all the bases for all projects, so we can all reuse a common approach. Regards, f From fperez.net at gmail.com Tue Aug 31 01:30:52 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 30 Aug 2010 22:30:52 -0700 Subject: [IPython-dev] iptest on 0.11.alpha1.git In-Reply-To: <20100830135931.25d70548@earth> References: <20100828142120.18fbef4d@earth> <20100830131127.0b4c38ec@earth> <20100830135931.25d70548@earth> Message-ID: On Mon, Aug 30, 2010 at 4:59 AM, Thomas Spura wrote: > This is now done with "./update_revnum.py" after this commit: > http://github.com/tomspur/ipython/commit/412b73175bb654f87653338bc3085c31ec7081db > > Stays the problem with make-tarball: Currently it will make a tarball > of the latest commited changes, but not the changes made by > update_revnum... :( Thanks! It may be a few days before I have a chance to merge this, as right now all my bandwidth is going into the zmq code. But I'm not forgetting :) Regards, f