From takowl at gmail.com Sat Apr 2 18:02:28 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 2 Apr 2011 23:02:28 +0100 Subject: [IPython-dev] Splitting input by AST nodes Message-ID: Our current method of splitting input into 'blocks' is...well, one of the comments starts "HACK!!!". It's also the cause of at least issue 306*, and trying to change it is likely to lead us into a minefield of other similar problems. I propose that we redo this with a system making use of the AST (abstract syntax tree) capabilities introduced in Python 2.6. Using the ast module, we can parse a code string into an AST, and then use the nodes from that instead of code blocks. AST nodes can be compiled into standard Python code objects to be run with exec. This has a couple of consequences which I've thought of so far: firstly, there's no easy way to get back the code as a string from an AST node (third party modules exist, but I get the impression they're rather hackish themselves). So we could no longer do things like "if the last node is two lines or less, execute it differently". Secondly, "a = 11; b = 12" is currently understood as one block, but in the AST it would be two nodes. Since this would obviously be a fairly major change to a key part of IPython, I thought I'd get some feedback before I start trying to implement it. Does using AST nodes instead of blocks sound useful, or is there a good reason why it wouldn't work? Thanks, Thomas * https://github.com/ipython/ipython/issues/306 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Sat Apr 2 21:06:53 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 2 Apr 2011 18:06:53 -0700 Subject: [IPython-dev] Splitting input by AST nodes In-Reply-To: References: Message-ID: Fernando knows the most about this code, but I understood that our input splitter actually works on IPython code (with magics and !ls, etc.), not plain Python code.
Thus, we can't use the AST splitter. But this code is *super* subtle, so let's see what Fernando thinks. Cheers, Brian On Sat, Apr 2, 2011 at 3:02 PM, Thomas Kluyver wrote: > Our current methods of splitting input into 'blocks' is...well, one of the > comments starts "HACK!!!". It's also the cause of at least issue 306*, and > trying to change it is likely to lead us into a minefield of other similar > problems. > > I propose that we redo this with a system making use of the AST (abstract > syntax tree) capabilities introduced in Python 2.6. Using the ast module, we > can parse a code string into an AST, and then use the nodes from that > instead of code blocks. AST nodes can be compiled into standard Python code > objects to be run with exec. This has a couple of consequences which I've > thought of so far: firstly, there's no easy way to get back the code as a > string from an ast node (third party modules exist, but I get the impression > they're rather hackish themselves). So we could no longer do things like "if > the last node is two lines or less, execute it differently". Secondly, "a = > 11; b = 12" is currently understood as one block, but in the AST it would be > two nodes. > > Since this would obviously be a fairly major change to a key part of > IPython, I thought I'd get some feedback before I start trying to implement > it. Does using AST nodes instead of blocks sound useful, or is there a good > reason why it wouldn't work? > > Thanks, > Thomas > > > * https://github.com/ipython/ipython/issues/306 > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From fperez.net at gmail.com Sun Apr 3 02:07:32 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 2 Apr 2011 23:07:32 -0700 Subject: [IPython-dev] Pushed to master... 
In-Reply-To: References: Message-ID: Hi Thomas, On Thu, Mar 31, 2011 at 4:25 PM, Thomas Kluyver wrote: > Apologies for pushing my experimental change directly to master. I had the > wrong branch checked out. I've pushed another commit to undo it - I > understand that it's bad practice to remove a commit once it's been > published. No worries, it's no big deal. One way to reduce the likelihood of this happening is to have your local copy of master *not* be a tracking branch. That would mean you'd manually need to do git push origin master when pushing, but that's something you *know* you are doing, whereas a simple git push on a tracking branch is far more likely to happen accidentally. So in general, unless you like high-wire acts without a safety net, http://www.dailymotion.com/video/x4v15s_slackline-basejump-par-dean-potter_sport it's probably a good idea to have local master not track. Cheers, f From fperez.net at gmail.com Sun Apr 3 02:55:31 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 2 Apr 2011 23:55:31 -0700 Subject: [IPython-dev] Splitting input by AST nodes In-Reply-To: References: Message-ID: Howdy, On Sat, Apr 2, 2011 at 3:02 PM, Thomas Kluyver wrote: > Our current methods of splitting input into 'blocks' is...well, one of the > comments starts "HACK!!!". It's also the cause of at least issue 306*, and > trying to change it is likely to lead us into a minefield of other similar > problems. > > I propose that we redo this with a system making use of the AST (abstract > syntax tree) capabilities introduced in Python 2.6. Using the ast module, we > can parse a code string into an AST, and then use the nodes from that > instead of code blocks. AST nodes can be compiled into standard Python code > objects to be run with exec.
This has a couple of consequences which I've > thought of so far: firstly, there's no easy way to get back the code as a > string from an ast node (third party modules exist, but I get the impression > they're rather hackish themselves). So we could no longer do things like "if > the last node is two lines or less, execute it differently". Secondly, "a = > 11; b = 12" is currently understood as one block, but in the AST it would be > two nodes. > > Since this would obviously be a fairly major change to a key part of > IPython, I thought I'd get some feedback before I start trying to implement > it. Does using AST nodes instead of blocks sound useful, or is there a good > reason why it wouldn't work? This is a great idea in principle, but as Brian points out, the issue is the extended ipython syntax. As you can see in inputsplitter, the split_blocks method works by calling the .push() method, and the IPythonInputSplitter subclass overrides this method to extend what is considered valid syntax, by transforming things out. Now, this doesn't mean the job can't be done: the good thing here is that we have a very solid test suite for this code, and test coverage is excellent: (master)dreamweaver[tests]> nosetests --with-coverage --cover-package=IPython.core.inputsplitter -vvs test_inputsplitter.py [...] Name Stmts Exec Cover Missing ---------------------------------------------------------- IPython.core.inputsplitter 296 294 99% 520-521 ---------------------------------------------------------------------- Ran 83 tests in 0.104s OK In fact, before doing anything I'd *strongly* recommend getting that back to 100% by adding a test that exercises lines 520-521. You wouldn't believe how many times I found bugs by propping up test coverage back from 98% or 99% to 100% when I wrote that stuff... And those lines are in the heart of split_blocks, so it would be best to start with full coverage before breaking anything.
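For concreteness, the core of the AST-node approach under discussion can be sketched with nothing but the stdlib ast module on plain Python input. This is only an illustration, not IPython's actual implementation (it ignores the extended IPython syntax entirely), and it is written for modern Python 3, where ast.Module also takes a type_ignores list that the 2.6-era API did not have:

```python
import ast

source = "a = 11; b = 12\ntotal = a + b"
tree = ast.parse(source, mode="exec")
print(len(tree.body))  # 3 statement nodes: the semicolon line becomes two Assign nodes

ns = {}
for node in tree.body:
    # Compile each statement node on its own by wrapping it in a Module;
    # the nodes already carry line/column info from ast.parse.
    block = ast.Module(body=[node], type_ignores=[])
    exec(compile(block, "<input>", "exec"), ns)

print(ns["total"])  # 23
```

Note how "a = 11; b = 12" really does split into two nodes here, which is exactly the behaviour change Thomas flags above.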
Now, the job *can* be done; the trick will be to decouple all the syntactic transformations from the block analysis. And keep in mind that .push(), or at least push_accepts_more(), is a key part of the public API, because line-oriented frontends need it (or something like it) to decide when to stop accepting input and start execution. But while it's true that this code is critical, it's also so well tested that I'm not terribly concerned about you trying to improve it. If you can keep that test suite passing first (and later obviously the whole system-wide one), we should be in good shape. Before you start though, in addition to 100% coverage, have a look at the test suite and see if you can think of any important cases we might have missed. I tried to cover all the bases, and over time Robert has also added more tests, but spend a bit of time thinking if we missed something. If you make that test suite really good, ultimately that's all we care about. The internal implementation is really a detail, and even if you need to juggle around the api a little bit it's no big deal (you can fix the calling code as needed). Those tests are our 'contract' on what we do syntax-wise and block-splitting-wise, so as long as you can still pass it, how you get there is up to you :) And getting rid of that double-pass hack would be very good indeed. Needless to say, do this on a branch ;) Good luck, and keep us posted! f From takowl at gmail.com Sun Apr 3 11:26:12 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 3 Apr 2011 16:26:12 +0100 Subject: [IPython-dev] Pushed to master... In-Reply-To: References: Message-ID: On 3 April 2011 07:07, Fernando Perez wrote: > No worries, it's no big deal. One way to reduce the likelihood of > this happening is to have your local copy of master *not* be a > tracking branch. That would mean you'd manually need to do > OK, that makes sense.
For the record, to do this you have to edit the .git/config file, and delete the section telling it about the remote for master. And of course then you have to do git pull origin master to update it. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Sun Apr 3 11:39:55 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 3 Apr 2011 16:39:55 +0100 Subject: [IPython-dev] Splitting input by AST nodes In-Reply-To: References: Message-ID: On 3 April 2011 07:55, Fernando Perez wrote: > This is a great idea in principle, but as Brian points out, the issue > is the extended ipython syntax. As you can see in inputsplitter, the > split_blocks method works by calling the .push() method, and the > IPythonInputSplitter subclass overrides this method to extend what is > considered valid syntax, by transforming things out. > My approach to this would be simply to transform input lines as they arrive into valid Python syntax, then parse that to an AST. In fact, we already do this - but then we use the linenos of the AST nodes to split the code string up into blocks (which is what goes wrong in #306). To what extent is the contract set in stone? E.g. I don't think I can easily preserve the "if last block is two lines or less, run it interactively, otherwise, just run it in exec mode" behaviour. But I remember this was mentioned at Sage days, and someone put forward the suggestion that it would be more consistent to always run the last block interactively (which is probably what I'll implement). I'll have a go at making this work. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Sun Apr 3 12:30:03 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 3 Apr 2011 09:30:03 -0700 Subject: [IPython-dev] Splitting input by AST nodes In-Reply-To: References: Message-ID: On Sun, Apr 3, 2011 at 8:39 AM, Thomas Kluyver wrote: > To what extent is the contract set in stone? E.g. I don't think I can easily > preserve the "if last block is two lines or less, run it interactively, > otherwise, just run it in exec mode" behaviour. But I remember this was > mentioned at Sage days, and someone put forward the suggestion that it would > be more consistent to always run the last block interactively (which is > probably what I'll implement). No, that part of the 'contract' is totally flexible, and I actually agree that the two-line heuristic probably needs to go overboard. What's not negotiable is the syntactic support for magics and other special prefixes and transformations, as encoded by the tests in test_inputsplitter. Cheers, f From fperez.net at gmail.com Sun Apr 3 12:32:21 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 3 Apr 2011 09:32:21 -0700 Subject: [IPython-dev] Pushed to master... In-Reply-To: References: Message-ID: On Sun, Apr 3, 2011 at 8:26 AM, Thomas Kluyver wrote: > And of course then you have to do git pull origin master to update it. Yes. For convenience, you could define a little git alias 'pm=pull origin master' to shorten that common operation, leaving you with the safety of having to manually type out the more uncommon (and dangerous) push one. Cheers, f From fperez.net at gmail.com Sun Apr 3 16:22:13 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 3 Apr 2011 13:22:13 -0700 Subject: [IPython-dev] Ipython frontend for two process In-Reply-To: References: Message-ID: Hey Omar, 2011/3/29 Thomas Kluyver : > Thanks, that's working now.
As for tab completion, if you look in > IPython/core/completer.py, at line 865, uncomment the DEBUG = True line, and > it should give you more info on what's going wrong. This looks like a great start, thanks! As Thomas mentions, that's the easiest way to activate debugging for tab completion. What I recommend is that you study the qt frontend for behavior, since effectively what you are doing is implementing the same basic architecture. The qt console converts a tab key event into a completion call to the kernel, in your case you'll need to do that via readline by registering a completer. I've put up the original zmq example as its own repo here: https://github.com/fperez/zmq-pykernel so that you can easily see how all that works. This is the old code you may have already seen, but having it available in an easy-to-reference permanent location is probably a good idea. Let us know if any of this doesn't make sense... Cheers, f From mietkins7 at gmail.com Sun Apr 3 16:29:48 2011 From: mietkins7 at gmail.com (mietkins) Date: Sun, 03 Apr 2011 22:29:48 +0200 Subject: [IPython-dev] Missing ssh module (branch newparallel) Message-ID: In [2]: from IPython.parallel.client import Client --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /ipython/ in () ----> 1 from IPython.parallel.client import Client /usr/local/lib/python2.6/dist-packages/IPython/parallel/__init__.py in () 17 18 from .asyncresult import * ---> 19 from .client import Client 20 from .dependency import * 21 from .remotefunction import * /usr/local/lib/python2.6/dist-packages/IPython/parallel/client.py in () 28 Dict, List, Bool, Str, Set) 29 from IPython.external.decorator import decorator ---> 30 from IPython.external.ssh import tunnel 31 32 from . import error ImportError: No module named ssh -- All the best, Martin -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-Missing-ssh-module-after-install.patch Type: application/octet-stream Size: 770 bytes Desc: not available URL: From andresete.chaos at gmail.com Sun Apr 3 17:42:04 2011 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Sun, 3 Apr 2011 16:42:04 -0500 Subject: [IPython-dev] Ipython frontend for two process In-Reply-To: References: Message-ID: Hi Fernando. OK, I am studying the qt code and the new link. I hope to have some results tomorrow. PS: I have the raw_input code ready too. Thanks On Sun, Apr 3, 2011 at 3:22 PM, Fernando Perez wrote: > Hey Omar, > > 2011/3/29 Thomas Kluyver : > > Thanks, that's working now. As for tab completion, if you look in > > IPython/core/completer.py, at line 865, uncomment the DEBUG = True line, > and > > it should give you more info on what's going wrong. > > This looks like a great start, thanks! As Thomas mentions, that's the > easiest way to activate debugging for tab completion. What I > recommend is that you study the qt frontend for behavior, since > effectively what you are doing is implementing the same basic > architecture. The qt console converts a tab key event into a > completion call to the kernel, in your case you'll need to do that via > readline by registering a completer. > > I've put up the original zmq example as its own repo here: > > https://github.com/fperez/zmq-pykernel > > so that you can easily see how all that works. This is the old code > you may have already seen, but having it available in an > easy-to-reference permanent location is probably a good idea. > > Let us know if any of this doesn't make sense... > > Cheers, > > f > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Sun Apr 3 18:05:19 2011 From: benjaminrk at gmail.com (MinRK) Date: Sun, 3 Apr 2011 15:05:19 -0700 Subject: [IPython-dev] Missing ssh module (branch newparallel) In-Reply-To: References: Message-ID: Fixed and pushed, thanks.
On Sun, Apr 3, 2011 at 13:29, mietkins wrote: > > In [2]: from IPython.parallel.client import Client > --------------------------------------------------------------------------- > ImportError                               Traceback (most recent call last) > /ipython/ in () > ----> 1 from IPython.parallel.client import Client > > /usr/local/lib/python2.6/dist-packages/IPython/parallel/__init__.py in > <module>() >     17 >     18 from .asyncresult import * > ---> 19 from .client import Client >     20 from .dependency import * >     21 from .remotefunction import * > > /usr/local/lib/python2.6/dist-packages/IPython/parallel/client.py in > <module>() >     28                                     Dict, List, Bool, Str, Set) >     29 from IPython.external.decorator import decorator > ---> 30 from IPython.external.ssh import tunnel >     31 >     32 from . import error > > ImportError: No module named ssh > > > -- > All the best, Martin > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > From andresete.chaos at gmail.com Sun Apr 3 23:09:30 2011 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Sun, 3 Apr 2011 22:09:30 -0500 Subject: [IPython-dev] Ipython frontend for two process In-Reply-To: References: Message-ID: Hi all. I am having a weird problem with tab completion. Description: with the completer added to ipython's master repo https://github.com/ipython/ipython/blob/master/IPython/zmq/completer.py I have timing problems, because it doesn't use complete_request messages in the new model, so I created my own completer, which is very simple: https://github.com/omazapa/ipython/blob/frontend-logging/IPython/frontend/terminal/completer.py using the new methods and messages. Note that line 45 has #print self.matches commented out; it is for debugging the returned list, and the print shows that the content is good.
In the normal case readline gets the matches list and prints it to stdout, but that is not working. Any ideas? 2011/4/3 Omar Andrés Zapata Mesa > Hi Fernando. > > Ok I am studing the qt code and the new link. I hope have some result > tomorrow. > > PD: I have ready raw_input code too. > > Thanks > > On Sun, Apr 3, 2011 at 3:22 PM, Fernando Perez wrote: > >> Hey Omar, >> >> 2011/3/29 Thomas Kluyver : >> > Thanks, that's working now. As for tab completion, if you look in >> > IPython/core/completer.py, at line 865, uncomment the DEBUG = True line, >> and >> > it should give you more info on what's going wrong. >> >> This looks like a great start, thanks! As Thomas mentions, that's the >> easiest way to activate debugging for tab completion. What I >> recommend is that you study the qt frontend for behavior, since >> effectively what you are doing is implementing the same basic >> architecture. The qt console converts a tab key event into a >> completion call to the kernel, in your case you'll need to do that via >> readline by registering a completer. >> >> I've put up the original zmq example as its own repo here: >> >> https://github.com/fperez/zmq-pykernel >> >> >> so that you can easily see how all that works. This is the old code >> you may have already seen, but having it available in an >> easy-to-reference permanent location is probably a good idea. >> >> Let us know if any of this doesn't make sense... >> >> Cheers, >> >> f >> > > > -------------- next part -------------- An HTML attachment was scrubbed...
If so, where is it located? I'll probably need to modify it, since I don't really have a terminal as such, but rather a GUI application... Thanks. -- Hugo Gagnon From robert.kern at gmail.com Mon Apr 4 16:58:35 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 04 Apr 2011 15:58:35 -0500 Subject: [IPython-dev] Column format In-Reply-To: <1301936681.15222.1437324861@webmail.messagingengine.com> References: <1301936681.15222.1437324861@webmail.messagingengine.com> Message-ID: On 4/4/11 12:04 PM, Hugo Gagnon wrote: > Hello, > > Is there a function in the ipython API that takes in a list of strings > to output and returns a nicely formatted list of lines depending on the > terminal's width? If so where is it located so I'll probably need to > modify it since I don't really have a terminal as is but rather a GUI > application... https://bitbucket.org/robertkern/kernmagic/src/tip/kernmagic/utils.py#cl-92 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From benjaminrk at gmail.com Mon Apr 4 18:05:22 2011 From: benjaminrk at gmail.com (MinRK) Date: Mon, 4 Apr 2011 15:05:22 -0700 Subject: [IPython-dev] [Announcement] Imminent removal of IPython.kernel in favor of new IPython.parallel Message-ID: Attn: users of IPython.kernel, gist: IPython.kernel will be removed from master this week, by this Pull Request on GitHub: https://github.com/ipython/ipython/pull/325 We have been working hard on the new 0MQ-based interactive parallel computing tools, which should be merged into master very soon. Part of this transition is the complete removal of the twisted-based IPython.kernel package. This removal should happen in the next week, so this serves as fair warning to anyone using IPython.kernel from git master - it's about to disappear.
The main reason for not keeping IPython.kernel in 0.11 as a deprecated package is the revised configuration system also added in 0.11. It would require significant work to bring the kernel code up to date, and we are not prepared to spend the necessary time and effort fixing a package that we intend to remove soon anyway. For information on the new IPython.parallel, see the revised docs: http://minrk.github.com/ipython-doc/newparallel/parallelz To test the current state of the code, it is found in the 'newparallel' branch of the main IPython git repo on GitHub. The docs will be moved to the main doc location once newparallel is merged into master in the next week or two. -Min RK A disclaimer, because I have been asked about this before, and don't want to do the Twisted folks a disservice: While our new code is much faster than our Twisted code, most of the reasons for that are design decisions *we* made in the old code, not issues with Twisted itself. From andresete.chaos at gmail.com Mon Apr 4 18:23:49 2011 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Mon, 4 Apr 2011 17:23:49 -0500 Subject: [IPython-dev] applying to GSoC Message-ID: Hi guys, I am writing a blog to apply to GSoC 2011: http://ipython-http.blogspot.com/ Is ipython participating this year? The proposed mentors would be Fernando Perez and Brian Granger. Please give suggestions and feedback. Thanks! -- Omar Andres Zapata Mesa Systems Engineering Student Universidad de Antioquia At Medellin - Colombia Linux User #490962 -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Mon Apr 4 18:35:00 2011 From: satra at mit.edu (Satrajit Ghosh) Date: Mon, 4 Apr 2011 18:35:00 -0400 Subject: [IPython-dev] [Announcement] Imminent removal of IPython.kernel in favor of new IPython.parallel In-Reply-To: References: Message-ID: hi min, great work on the parallel stuff.
a few questions though: regarding using ipclusterz with pbs/sge/torque, i think the parallel docs refer to the old style of the user providing a template, but this is actually not necessary right? (at least from my cursory look at the code, it seems to contain the 0.10.1 upgrade to generate a default template) also are you planning to include the options from the 0.10.1 series? (e.g., -e for ssh, -q for the queues, lsf support). if not, i'll try to create those when i have some time unless somebody gets to them before i do. cheers, satra On Mon, Apr 4, 2011 at 6:05 PM, MinRK wrote: > Attn: users of IPython.kernel, > > gist: IPython.kernel will be removed from master this week, by this > Pull Request on GitHub: > https://github.com/ipython/ipython/pull/325 > > We have been working hard on the new 0MQ-based interactive parallel > computing tools, which should be merged into master very soon. Part > of this transition is the complete removal of the twisted-based > IPython.kernel package. This removal should happen in the next week, > so this serves as fair warning to anyone using IPython.kernel from git > master - it's about to disappear. > > The main reason for not keeping IPython.kernel in 0.11 as a deprecated > package is the revised configuration system also added in 0.11. It > would require significant work to bring the kernel code up to date, > and we are not prepared to spend the necessary time and effort fixing > a package that we intend to remove soon anyway. > > For information on the new IPython.parallel, see the revised docs: > http://minrk.github.com/ipython-doc/newparallel/parallelz > > To test the current state of the code, it is found in the > 'newparallel' branch of the main IPython git repo on GitHub. > > Which will be moved to the main doc location once newparallel is > merged into master in the next week or two. 
> > -Min RK > > A disclaimer, because I have been asked about this before, and don't > want to do the Twisted folks a disservice: While our new code is much > faster than our Twisted code, most of the reasons for that are design > decisions *we* made in the old code, not issues with Twisted itself. > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hans_meine at gmx.net Tue Apr 5 03:25:53 2011 From: hans_meine at gmx.net (Hans Meine) Date: Tue, 5 Apr 2011 09:25:53 +0200 Subject: [IPython-dev] Column format In-Reply-To: References: <1301936681.15222.1437324861@webmail.messagingengine.com> Message-ID: <201104050925.53924.hans_meine@gmx.net> On Monday, 4 April 2011 at 22:58:35, Robert Kern wrote: > On 4/4/11 12:04 PM, Hugo Gagnon wrote: > > Hello, > > > > Is there a function in the ipython API that takes in a list of strings > > to output and returns a nicely formatted list of lines depending on the > > terminal's width? If so where is it located so I'll probably need to > > modify it since I don't really have a terminal as is but rather a GUI > > application... > > https://bitbucket.org/robertkern/kernmagic/src/tip/kernmagic/utils.py#cl-92 Is there a good reason for this: isinstance(strings[i], str) I would have expected sth. more like: isinstance(strings[i], basestring) (which would lead to unicode being returned if the input contained unicode strings).
Nice util function otherwise, Hans From robert.kern at gmail.com Tue Apr 5 11:31:50 2011 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Apr 2011 10:31:50 -0500 Subject: [IPython-dev] Column format In-Reply-To: <201104050925.53924.hans_meine@gmx.net> References: <1301936681.15222.1437324861@webmail.messagingengine.com> <201104050925.53924.hans_meine@gmx.net> Message-ID: On 4/5/11 2:25 AM, Hans Meine wrote: > On Monday, 4 April 2011 at 22:58:35, Robert Kern wrote: >> On 4/4/11 12:04 PM, Hugo Gagnon wrote: >>> Hello, >>> >>> Is there a function in the ipython API that takes in a list of strings >>> to output and returns a nicely formatted list of lines depending on the >>> terminal's width? If so where is it located so I'll probably need to >>> modify it since I don't really have a terminal as is but rather a GUI >>> application... >> >> https://bitbucket.org/robertkern/kernmagic/src/tip/kernmagic/utils.py#cl-92 > > Is there a good reason for this: > isinstance(strings[i], str) > I would have expected sth. more like: > isinstance(strings[i], basestring) > (which would lead to unicode being returned if the input contained unicode > strings). > > Nice util function otherwise, I took the function from cmd.py, which uses str. I don't think anything bad would happen if you used basestring and passed unicode strings. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Wed Apr 6 03:26:07 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 6 Apr 2011 00:26:07 -0700 Subject: [IPython-dev] Scientific Python at SIAM CSE 2011 conference Message-ID: Hi all, sorry for the massive cross-post, but since all these projects were highlighted with talks at this event, I figured there would be interest...
Hans-Petter Langtangen, Randy LeVeque and I organized a set of Python-focused sessions at the recent SIAM Computational Science and Engineering conference, with talks on numpy/scipy, cython, matplotlib, ipython, sympy, as well as application-oriented talks on astronomy and femhub. For those interested: - The slides: http://fperez.org/events/2011_siam_cse/ - A blog post: http://blog.fperez.org/2011/04/python-goes-to-reno-siam-cse-2011.html - Some pictures: https://picasaweb.google.com/fdo.perez/SIAMCSE2011InReno# Back to being quiet... f From andresete.chaos at gmail.com Thu Apr 7 18:25:20 2011 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Thu, 7 Apr 2011 17:25:20 -0500 Subject: [IPython-dev] About GSoC 2011 Message-ID: Hi all. I applied to GSoC with an IPython proposal; please give some feedback and suggestions. My blog is http://ipython-http.blogspot.com/ I think I can do a good job with the prototype by James Gao and Brian Granger, and with discussing web interface standards, art and features for ipython on the mailing list. Thanks. O. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jason-sage at creativetrax.com Thu Apr 7 21:27:09 2011 From: jason-sage at creativetrax.com (Jason Grout) Date: Thu, 07 Apr 2011 20:27:09 -0500 Subject: [IPython-dev] messaging protocol Message-ID: <4D9E646D.1020504@creativetrax.com> I've been pondering the messaging spec quite a bit over the last few weeks and meshing it in with a project in Sage (a single-cell server, like a one-off ipython or sage web notebook). Our architecture is that we have a big database sitting on the server side that basically acts as a cache of the zeromq messages between the kernel and the web client. It is easy enough to store the messages in the database, but we have a problem that you guys don't have since you use zeromq end-to-end. Once we put a list of messages into the database, we don't know what the right order for the messages is.
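The ordering problem can be made concrete with a tiny sketch: a per-session counter stamped into each header at send time is enough for a datastore to recover send order, however the messages come back. Everything below (the Session class, the msg_order field, the header layout) is a hypothetical illustration, not the actual IPython message spec:

```python
import itertools
import uuid

class Session:
    """Toy message factory: stamps each header with a per-session
    counter so a datastore can later recover send order.
    Field names here are illustrative, not the real spec."""
    def __init__(self, session_id):
        self.session_id = session_id
        self._counter = itertools.count()

    def msg(self, msg_type, content):
        return {
            "header": {
                "session": self.session_id,
                "msg_id": uuid.uuid4().hex,   # opaque; format left to the client
                "msg_order": next(self._counter),
                "msg_type": msg_type,
            },
            "content": content,
        }

s = Session("abc123")
first = s.msg("execute_request", {"code": "a = 1"})
second = s.msg("execute_reply", {"status": "ok"})

# However the datastore returns them, sorting by msg_order
# restores the order in which they were sent.
restored = sorted([second, first], key=lambda m: m["header"]["msg_order"])
print(restored[0]["header"]["msg_type"])  # execute_request
```

The point of the separate counter is that msg_id stays an opaque, unordered identifier while ordering lives in its own field.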
So here is a suggestion/request/proposal for the API: PROPOSAL: Can we add another field to the header of a message: a msg_order (or msg_counter?) field, which (across messages with the same header['session']) is guaranteed to be an increasing integer signifying the order of messages? You might ask why we don't just use the msg_id field for this. Well, it seems nice for us to make the msg_id be a mongodb id object, which is just a hash. It seems elegant to let the client or server dictate how the msg_id's are formed with no insistence on a particular format or structure for the msg_id, other than it be unique among messages in the same session. You must have had a related problem storing history, though. How did you sort the messages in the history database? On the other hand, since the proposed msg_order *is* guaranteed to be unique across all messages in a session, maybe there's nothing lost in (just us?) insisting that the msg_id be an increasing integer sequence. Thanks, Jason -- Jason Grout From fperez.net at gmail.com Thu Apr 7 23:13:28 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 7 Apr 2011 20:13:28 -0700 Subject: [IPython-dev] tests hanging on parallel Message-ID: Hey Min, with zmq 2.1.4 and pyzmq 2.1.4, I'm getting: amirbar[junk]> iptest -vvs IPython.parallel Doctest: IPython.parallel.client.remotefunction.parallel ... ok Doctest: IPython.parallel.client.remotefunction.remote ... ok Doctest: IPython.parallel.client.view.DirectView ... ok Doctest: IPython.parallel.client.view.View ... ok Doctest: IPython.parallel.controller.dependency.require ... ok never completes... any ideas? Cheers, f From benjaminrk at gmail.com Fri Apr 8 00:23:28 2011 From: benjaminrk at gmail.com (MinRK) Date: Thu, 7 Apr 2011 21:23:28 -0700 Subject: [IPython-dev] tests hanging on parallel In-Reply-To: References: Message-ID: What happens if you just do: 'ipcontroller -p iptest' ?
On Thu, Apr 7, 2011 at 20:13, Fernando Perez wrote: > Hey Min, > > with zmq 2.1.4 and pyzmq 2.1.4, I'm getting: > > amirbar[junk]> iptest -vvs IPython.parallel > Doctest: IPython.parallel.client.remotefunction.parallel ... ok > Doctest: IPython.parallel.client.remotefunction.remote ... ok > Doctest: IPython.parallel.client.view.DirectView ... ok > Doctest: IPython.parallel.client.view.View ... ok > Doctest: IPython.parallel.controller.dependency.require ... ok > > never completes... any ideas? > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Apr 8 00:49:17 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 7 Apr 2011 21:49:17 -0700 Subject: [IPython-dev] tests hanging on parallel In-Reply-To: References: Message-ID: On Thu, Apr 7, 2011 at 9:23 PM, MinRK wrote: > What happens if you just do: > 'ipcontroller -p iptest' Aha, thanks! It was user error: it was picking up the system copies of the ipcontroller script, which points to kernel, but off my master install. So, big mess :) All good now, all tests pass. Great job! f From benjaminrk at gmail.com Fri Apr 8 00:59:27 2011 From: benjaminrk at gmail.com (MinRK) Date: Thu, 7 Apr 2011 21:59:27 -0700 Subject: [IPython-dev] tests hanging on parallel In-Reply-To: References: Message-ID: Okay, good. I just pushed some better checking for controller/engines failing to start in the tests, so it should raise an error, instead of hanging. It's always harder to get tests to behave nicely when something's wrong than when everything works. -MinRK On Thu, Apr 7, 2011 at 21:49, Fernando Perez wrote: > On Thu, Apr 7, 2011 at 9:23 PM, MinRK wrote: > > What happens if you just do: > > 'ipcontroller -p iptest' > > Aha, thanks! 
It was user error: it was picking up the system copies > of the ipcontroller script, which points to kernel, but off my master > install. So, big mess :) > > All good now, all tests pass. Great job! > > f > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Apr 8 01:04:41 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 7 Apr 2011 22:04:41 -0700 Subject: [IPython-dev] tests hanging on parallel In-Reply-To: References: Message-ID: On Thu, Apr 7, 2011 at 9:59 PM, MinRK wrote: > I just pushed some better checking for controller/engines failing to start > in the tests, so it should raise an error, instead of hanging. Great, thanks. > It's always harder to get tests to behave nicely when something's wrong than > when everything works. Indeed, especially because all the possible failure modes can't really be predicted in advance, so it's not always obvious how to protect against them. Thanks! f From ellisonbg at gmail.com Fri Apr 8 01:42:25 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 7 Apr 2011 22:42:25 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: <4D9E646D.1020504@creativetrax.com> References: <4D9E646D.1020504@creativetrax.com> Message-ID: Jason, I will reply more later, but the msg_id can be anything. Couldn't you just use a prefix like: msg_counter-msg_uuid 01-123124214 Brian On Thu, Apr 7, 2011 at 6:27 PM, Jason Grout wrote: > I've been pondering the messaging spec quite a bit over the last few > weeks and meshing it in with a project in Sage (a single-cell server, > like a one-off ipython or sage web notebook). Our architecture is that > we have a big database sitting on the server side that basically acts as > a cache of the zeromq messages between the kernel and the web client. It > is easy enough to store the messages in the database, but we have a > problem that you guys don't have since you use zeromq end-to-end.
Once > we put a list of messages into the database, we don't know what the > right order for the messages is. So here is a > suggestion/request/proposal for the API: > > PROPOSAL: Can we add another field to the header of a message: a msg_order (or > msg_counter?) field, which (across messages with the same > header['session']) is guaranteed to be an increasing integer signifying > the order of messages? > > You might ask why we don't just use the msg_id field for this. Well, it > seems nice for us to make the msg_id be a mongodb id object, which is > just a hash. It seems elegant to let the client or server dictate > how the msg_id's are formed with no insistence on a particular format or > structure for the msg_id, other than it be unique among messages in the > same session. > > You must have had a related problem storing history, though. How did > you sort the messages in the history database? > > On the other hand, since the proposed msg_order *is* guaranteed to be > unique across all messages in a session, maybe there's nothing lost in > (just us?) insisting that the msg_id be an increasing integer sequence. > > Thanks, > > Jason > > -- > Jason Grout > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From benjaminrk at gmail.com Fri Apr 8 02:12:39 2011 From: benjaminrk at gmail.com (MinRK) Date: Thu, 7 Apr 2011 23:12:39 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: <4D9E646D.1020504@creativetrax.com> References: <4D9E646D.1020504@creativetrax.com> Message-ID: On Thu, Apr 7, 2011 at 18:27, Jason Grout wrote: > I've been pondering the messaging spec quite a bit over the last few > weeks and meshing it in with a project in Sage (a single-cell server, > like a one-off ipython or sage web notebook).
Our architecture is that > we have a big database sitting on the server side that basically acts as > a cache of the zeromq messages between the kernel and the web client. It > is easy enough to store the messages in the database, but we have a > problem that you guys don't have since you use zeromq end-to-end. Once > we put a list of messages into the database, we don't know what the > right order for the messages is. So here is a > suggestion/request/proposal for the API: > > PROPOSAL: Can we add another field to the header of a, a msg_order (or > msg_counter?) field, which (across messages with the same > header['session']) is guaranteed to be an increasing integer signifying > the order of messages? > The parallel code uses a slightly more advanced Session object (the thing that builds messages) that includes the notion of an extensible 'subheader'. With this, you can put any extra (jsonable) information you want into the header. > > You might ask why we don't just use the msg_id field for this. Well, it > seems nice for us to make the msg_id be a mongodb id object, which is > just a hash. It seems elegant to let the client or server dictate what > how the msg_id's are formed with no insistence on a particular format or > structure for the msg_id, other than it be unique among messages in the > same session. The Client's Session object (or whoever builds the message) creates the msg_id, and the only restriction is that it be jsonable. The Client building the message can use any scheme they like to build msg_ids, same with the Server and its replies, and they don't even have to be the same as each other. In fact, I'm not even sure there's anywhere in the code that actually requires that msg_id be unique (though *we* do, for various reasons). > > You must have had a related problem storing history, though. How did > you sort the messages in the history database? > the IPython console has an execution_count (the number that determines what goes in 'In [nn]'. 
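[To make the subheader idea just described concrete: here is a sketch of a message dict carrying Jason's proposed msg_order counter alongside an opaque msg_id. The msg_order field and the helper below are hypothetical, not part of the documented spec or the actual Session API.]

```python
import uuid

def make_message(msg_type, content, session, counter, parent=None):
    """Build a message dict in the general shape of the IPython spec,
    with an extra msg_order field carried subheader-style in the header.
    """
    return {
        "header": {
            "msg_id": str(uuid.uuid4()),   # opaque id; need not be sortable
            "session": session,
            "msg_order": counter,          # monotonically increasing per session
        },
        "parent_header": parent or {},
        "msg_type": msg_type,
        "content": content,
    }

# Messages fetched from a database in arbitrary order...
session = "abc123"
msgs = [make_message("stream", {"data": str(i)}, session, counter=i)
        for i in (2, 0, 1)]

# ...can be put back into send order by the counter, not the opaque msg_id.
ordered = sorted(msgs, key=lambda m: m["header"]["msg_order"])
```

The point of the sketch is that ordering lives in a dedicated integer field, so msg_id stays free to be a mongodb hash, a UUID, or anything else.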
the sqlite database stores the session ID and execution counter for every entry. > > On the other hand, since the proposed msg_order *is* guaranteed to be > unique across all messages in a session, maybe there's nothing lost in > (just us?) insisting that the msg_id be an increasing integer sequence. > Originally, the msg_id was just an integer, but that was replaced with UUIDs to allow for simultaneous clients. If you don't have more than one client per kernel, there is no reason not to use integers for msg_id. -MinRK > > Thanks, > > Jason > > -- > Jason Grout > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Apr 8 03:21:23 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 00:21:23 -0700 Subject: [IPython-dev] About GSoC 2011 In-Reply-To: References: Message-ID: Hi Omar, 2011/4/7 Omar Andrés Zapata Mesa : > I applied to GSoC with an IPython proposal, please give some feedback and > suggestions, > my blog is http://ipython-http.blogspot.com/ > I think I can do a good job with the prototype of James Gao, Brian Granger > and discussing web interface standards for ipython, art and features in > the mailing list. I'm sorry I've been silent; I had a family health emergency that sent me mostly offline for a few days. I know you needed feedback, and I apologize for leaving you waiting (other people were also bottlenecking on me). I'm very happy to see you wanting to contribute further, but I think that it's going to be very difficult for this to happen as a gsoc project this year. The first reason is that, as I had said earlier, as a second-year student you would be expected to clear a much higher bar of entry to the program than a new student.
While you have made a good effort recently to push forward on completing your work from last year, which is great, that hasn't been finished yet. With help from the core team your logging work is now merged, but there's still no complete implementation of the terminal client ready for final merge. I played with your prototype and gave some feedback (as did Thomas), and you have a good start there, but it's certainly not code ready for merge yet. Note that I am really willing to continue helping you with feedback to improve this until it's merged, if you still want to work on it. It's useful code that we do need, so it needs to happen, and it would be great to have you work on it if you find it interesting. In addition to the completion of that project, there's a second reason that is also very important: the *main* point of gsoc is not to develop one piece of code, but rather to grow the community of core contributors to a project. In the last round of mentor list discussions, they have very strongly emphasized how important it is that students who are accepted have shown a real participation in a project, measured in multiple ways (especially for students who have been around for a while). That means fixing bugs, contributing code reviews, making small developments, writing documentation, etc. It may not be as much fun or sexy as diving into a big, standalone idea, but that's much more the reality of everyday work in a project. You can see for example how Thomas showed up out of the blue contributing pull requests at first for the python3 branch, then for small things, and these days if you look at the log there's a ton of major, critical work by him (he just completely refactored one of the most delicate pieces of code we had, the input handling stuff). But he dove into all aspects of the project, including the thankless jobs of flushing the crazy backlog of pull requests I had allowed to accumulate after the India sprints, as well as doing massive bug triage. 
That work is never as fun as designing some new cool app, but it's a necessary part of sustaining a project in the long term. In no small part thanks to Thomas' efforts, we now have *zero* open pull requests, and we're down to *four* critical bugs for releasing 0.11. That's the kind of participation in a project that brings a member in and makes an enormous difference. And I think you will find that if you engage the project like this, you will actually learn more, and find it more fun in the long run, than only working on one specific idea, because you'll get to really participate in the entire effort. Keep in mind that if you are still interested in participating in the project, we'd be thrilled to have you continue here. The terminal work you've started is still very much needed, and working on that will help you to become a regular ipython contributor. With sufficient code merged in the project and a real record of project contributions and participation, you would have a much stronger case for applying next year, for example. I realize this isn't what you wanted to hear, but I hope you understand the reasons for this, and I'm happy to chat further about this, on- or off-list. But I want to reply publicly because this is not something directed in any way at you personally, but rather a statement of how I think the project must handle students who want to return (as well as informing potentially interested ones in the future). All the best, f ps - thanks to Min and Brian for feedback on my reply before sending it; this is an important part of attempting to run a project well, so I wanted to make sure I said things in the clearest, but kindest way possible.
From fperez.net at gmail.com Fri Apr 8 03:26:16 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 00:26:16 -0700 Subject: [IPython-dev] [Announcement] Imminent removal of IPython.kernel in favor of new IPython.parallel In-Reply-To: References: Message-ID: Hey Satra, On Mon, Apr 4, 2011 at 3:35 PM, Satrajit Ghosh wrote: > regarding using ipclusterz with pbs/sge/torque, i think the parallel docs > refer to the old style of the user providing a template, but this is > actually not necessary right? (at least from my cursory look at the code, it > seems to contain the 0.10.1 upgrade to generate a default template) > > also are you planning to include the options from the 0.10.1 series? (e.g., > -e for ssh, -q for the queues, lsf support). if not, i'll try to create > those when i have some time unless somebody gets to them before i do. Min knows the details of this far better than I, so I'll let him reply on the details. I just wanted to let you know that we just completed the twisted removal and parallel merge. The new parallel code can be imported as IPython.parallel. So if you want to dive into it, now it's in master and we'll just continue refining it there for 0.11 hopefully soon. Cheers, f From fperez.net at gmail.com Fri Apr 8 03:43:24 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 00:43:24 -0700 Subject: [IPython-dev] Heads-up: Twisted code removed, IPython.parallel (zmq-based) now in master Message-ID: Hi all, just a heads-up, that all Twisted code is now completely gone from IPython, and the awesome new parallel code that replaces it has been merged today. It can be imported as IPython.parallel. 
The three most relevant links from the docs are probably: http://ipython.github.com/ipython-doc/dev/parallel/index.html http://ipython.github.com/ipython-doc/dev/development/parallel_messages.html http://ipython.github.com/ipython-doc/dev/development/parallel_connections.html Dependencies: while the qt console works with zmq 2.0.10.1 or newer, the parallel code requires zmq 2.1.4 (current). We'll be beating this code a lot in the next few months over a few releases, so the apis are open for feedback, improvement, suggestions, etc. But we're very excited both with the functionality and the performance, and we see this as a stable foundation that will be supported for a long time, so please do test it and complain on anything that doesn't work. I want to acknowledge Min's extraordinary work on this front: when Brian and I started building the zmq code for interactive work just about 1 year ago, it was clear to us these ideas would have a lot of potential for parallel computing, but we always imagined we would at best *start* work on that topic around summer 2011. That come early spring this has already been completely done to the point where we can merge it and begin polishing it for release is crazy, and only makes sense if you know Min's talent. It's a privilege to have him working on IPython. Cheers, f From fperez.net at gmail.com Fri Apr 8 03:55:59 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 00:55:59 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. Message-ID: I know, everybody laughs when I talk about releases... But for the two of you still listening: - 0.10.2 will be out asap, literally as soon as I find a block of a few hours free. I thought I'd have the rc released last week but some family health problems have made the last week very unpleasant for me, and I'm only now getting back on track. I should be able to cut the 0.10.2 final on Saturday at the latest. 
- 0.11: we now have no pending pull requests and just a few critical bugs. We do need to give some time to shake out in user testing the massive merges of the past few days, today I discussed with Thomas some important things that need to be done to the sqlite history code, and I have a few local things as well, but since all the big stuff is done, we should be looking at pushing 0.11 out the door finally in just a few weeks. If you have anything on your local trees that you think is in good shape for 0.11, try to make a pull request before too long (though we'll announce the release freeze in advance, we're not quite there yet). So now is the time to really start playing with master. Install zeromq/pyzmq 2.1.4 and take it for a spin. Anything that breaks, let us know by filing a bug report. If you think we have already a bug but not listed as critical, please let us know and we'll look into raising its priority. We want to focus on flushing only the critical bugs before cutting out 0.11, so that we can start a quicker release cycle after 0.11. The plan will be to try and push small releases after 0.11 to the point where we are happy with the API, and then simply start a stabilization series like matplotlib had, 0.99.x, leading to 1.0. I don't want to make any promises on when 1.0 will be out, but ideally it would be by this summer. But we'll see, I've broken those enough times that the joke isn't funny anymore. Many thanks to everyone who has jumped in recently with so much great work to get us to this state. I particularly want to thank Thomas, whose massive clearing job initially really got us 'unstuck' from behind a pile of accumulated pull requests and bugs, and who now has moved into doing brain surgery right at the core, improving some of our most delicate code in really nice ways (the recent AST inputsplitter refactor). 
Cheers, f From jason-sage at creativetrax.com Fri Apr 8 03:58:26 2011 From: jason-sage at creativetrax.com (Jason Grout) Date: Fri, 08 Apr 2011 02:58:26 -0500 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> Message-ID: <4D9EC022.3010603@creativetrax.com> On 4/8/11 12:42 AM, Brian Granger wrote: > Jason, > > I will reply more later, but the msg_id can be anything. Couldn't you > just use a prefix like: > > msg_counter-msg_uuid > > 01-123124214 I guess for that matter, I could make the message id a tuple (or dictionary): (normal hash, counter) That would work fine too. Sorry for the noise--I'm still exploring how general this protocol is. Thanks, Jason From fperez.net at gmail.com Fri Apr 8 04:00:11 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 01:00:11 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: <4D9EC022.3010603@creativetrax.com> References: <4D9E646D.1020504@creativetrax.com> <4D9EC022.3010603@creativetrax.com> Message-ID: On Fri, Apr 8, 2011 at 12:58 AM, Jason Grout wrote: > That would work fine too. Sorry for the noise--I'm still exploring how > general this protocol is. No worries at all, it's not noise: we want precisely to explore the boundaries of the protocol, to see what kind of extension/flexibility is necessary to accommodate other use cases beyond ipython's internal one.
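[One caveat worth noting on Brian's counter-prefix idea above: if the composite ids are compared as plain strings, '10-...' sorts before '2-...' unless the counter is zero-padded. A sketch, with helper names made up for illustration:]

```python
import uuid

def make_msg_id(counter):
    # Zero-pad the counter so that plain string comparison of the ids
    # also sorts them in send order ("000002" < "000010", whereas "2" > "10").
    return "%06d-%s" % (counter, uuid.uuid4().hex)

def msg_order(msg_id):
    # Recover the counter for explicit numeric sorting.
    return int(msg_id.split("-", 1)[0])

ids = [make_msg_id(n) for n in (10, 2, 7)]
# With padding, lexicographic and numeric orderings agree.
assert sorted(ids) == sorted(ids, key=msg_order)
```

Jason's tuple form (counter, hash) avoids the padding issue entirely, since tuples compare element-wise and the integer counter sorts numerically.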
So keep pinging, we can always say no ;) Best, f From jason-sage at creativetrax.com Fri Apr 8 04:04:30 2011 From: jason-sage at creativetrax.com (Jason Grout) Date: Fri, 08 Apr 2011 03:04:30 -0500 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> Message-ID: <4D9EC18E.50903@creativetrax.com> On 4/8/11 1:12 AM, MinRK wrote: > > > On Thu, Apr 7, 2011 at 18:27, Jason Grout > wrote: > > I've been pondering the messaging spec quite a bit over the last few > weeks and meshing it in with a project in Sage (a single-cell server, > like a one-off ipython or sage web notebook). Our architecture is that > we have a big database sitting on the server side that basically acts as > a cache of the zeromq messages between the kernel and the web client. It > is easy enough to store the messages in the database, but we have a > problem that you guys don't have since you use zeromq end-to-end. Once > we put a list of messages into the database, we don't know what the > right order for the messages is. So here is a > suggestion/request/proposal for the API: > > PROPOSAL: Can we add another field to the header of a, a msg_order (or > msg_counter?) field, which (across messages with the same > header['session']) is guaranteed to be an increasing integer signifying > the order of messages? > > > The parallel code uses a slightly more advanced Session object (the thing > that builds messages) that includes the notion of an extensible 'subheader'. > With this, you can put any extra (jsonable) information you want into > the header. Interesting. I was going off of the documented spec. Is this subheader information going to make it into the documented spec, or is it an extension to the spec? Also, do you envision the various fields (like user_variables or the session field of the header) being required? 
For example, our simplified execute message is simply: {"header":{"msg_id":"DB_ID"}, "msg_type":"execute_request", "content":{"code":"CODE USER TYPES IN"}} where the omitted fields have the "obvious" defaults (silent=false, all other fields empty strings, lists, or dictionaries). > > You might ask why we don't just use the msg_id field for this. Well, it > seems nice for us to make the msg_id be a mongodb id object, which is > just a hash. It seems elegant to let the client or server dictate what > how the msg_id's are formed with no insistence on a particular format or > structure for the msg_id, other than it be unique among messages in the > same session. > > The Client's Session object (or whoever builds the message) creates the > msg_id, > and the only restriction is that it be jsonable. The Client building > the message > can use any scheme they like to build msg_ids, same with the Server and > its replies, > and they don't even have to be the same as each other. > > In fact, I'm not even sure there's anywhere in the code that actually > requires that msg_id be unique > (though *we* do, for various reasons). > > You must have had a related problem storing history, though. How did > you sort the messages in the history database? > > the IPython console has an execution_count (the number that determines > what goes in 'In [nn]'. the sqlite database stores the session ID and > execution counter for every entry. I was thinking more of multiple output messages for a single input (assuming that you stored multiple outputs in the database). That is our current problem, since our use-case will always have an execution_count of 1.
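[The minimal-message idea above could be sketched as a small fill-in helper on the receiving side; the helper itself is hypothetical, though the extra field names follow the documented execute_request content of the time (silent, user_variables, user_expressions).]

```python
def fill_execute_defaults(msg):
    """Expand a minimal execute_request into a full message.

    A sketch of the 'obvious defaults' idea: any field the sender
    omitted gets an empty/false default on receive.
    """
    content = dict(msg.get("content", {}))
    content.setdefault("code", "")
    content.setdefault("silent", False)
    content.setdefault("user_variables", [])
    content.setdefault("user_expressions", {})
    header = dict(msg.get("header", {}))
    header.setdefault("session", "")
    return {
        "header": header,
        "parent_header": msg.get("parent_header", {}),
        "msg_type": msg["msg_type"],
        "content": content,
    }

minimal = {"header": {"msg_id": "DB_ID"},
           "msg_type": "execute_request",
           "content": {"code": "2+2"}}
full = fill_execute_defaults(minimal)
```

With such a helper at each endpoint, clients only ever have to construct the fields they actually care about.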
Thanks, Jason From fperez.net at gmail.com Fri Apr 8 04:17:21 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 01:17:21 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: <4D9EC18E.50903@creativetrax.com> References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On Fri, Apr 8, 2011 at 1:04 AM, Jason Grout wrote: > Interesting. I was going off of the documented spec. Is this subheader > information going to make it into the documented spec, or is it an > extension to the spec? We just haven't completely reconciled the documented spec with the parallel code. The spec was mostly written when we designed the model for the qt console. Min then used those ideas and built upon them the parallel code, but he implemented some new stuff as he needed. One of the tasks ahead of us for the next few weeks (hey, we just merged the parallel code a couple of hours ago! :) is precisely to reconcile all of this, so that things are as unified as possible between what interactive kernels and parallel engines talking to a controller use. > Also, do you envision the various fields (like user_variables or the > session field of the header) being required? For example, our > simplified execute message is simply: > > {"header":{"msg_id":"DB_ID"}, > "msg_type":"execute_request", > "content":{"code":"CODE USER TYPES IN"}} > > where the omitted fields have the "obvious" defaults (silent=false, all > other fields empty strings, lists, or dictionaries). I don't recall right now (and I need to crash) if we implemented this, but I always wanted to have object send functions that would only need the minimum info and would otherwise fill in the other defaults as needed. It seems sensible to have matching behavior, where we also accept minimal messages and fill the rest with defaults upon receive. > I was thinking more of multiple output messages for a single input > (assuming that you stored multiple outputs in the database).
That is > our current problem, since our use-case will always have an > execution_count of 1. We've been talking about changing the counter logic to increment always, so that multiple outputs from a single input block get different numbers (what Mathematica does). It adds a little complexity to the code, but I think it's a cleaner solution in the long run. Having multiple Out[4] outputs is really kind of not very nice... Cheers, f From fperez.net at gmail.com Fri Apr 8 04:18:46 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 01:18:46 -0700 Subject: [IPython-dev] [IPython-User] Heads-up: Twisted code removed, IPython.parallel (zmq-based) now in master In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 1:09 AM, Mario Ceresa wrote: > Fernando: this is exciting news! Indeed :) > We'll try to have zmq 2.1.4 pushed in fedora soon, so we can test the changes. That's great, thanks! Having that version shipped in distros will make things go a lot smoother for users, thanks. Cheers, f From takowl at gmail.com Fri Apr 8 05:43:55 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 8 Apr 2011 10:43:55 +0100 Subject: [IPython-dev] messaging protocol In-Reply-To: <4D9EC18E.50903@creativetrax.com> References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On 8 April 2011 09:04, Jason Grout wrote: > I was thinking more of multiple output messages for a single input > (assuming that you stored multiple outputs in the database). That is > our current problem, since our use-case will always have an > execution_count of 1. At present, if you get multiple outputs (e.g. by doing "12*3; 88*14"), they are stored in a JSON list in the database (if you have output logging enabled, which it isn't by default). I'm not quite sure what you mean by always having an execution count of 1. Is this the aleph prototype* or something similar?
The way I envisaged history working, if the namespace is preserved from command to command, the execution count increases. If it isn't, each run is kind of a new session. The interface for retrieving history works better with sessions with more than one command, but there's nothing to stop you making sessions with a single cell in each. * For those who haven't seen aleph: http://aleph.sagemath.org/static/sagealeph.html Best wishes, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From satra at mit.edu Fri Apr 8 08:32:10 2011 From: satra at mit.edu (Satrajit Ghosh) Date: Fri, 8 Apr 2011 08:32:10 -0400 Subject: [IPython-dev] [Announcement] Imminent removal of IPython.kernel in favor of new IPython.parallel In-Reply-To: References: Message-ID: hey fernando, min did answer my questions. i'll start working on some of the compatibility changes soon. cheers, satra On Fri, Apr 8, 2011 at 3:26 AM, Fernando Perez wrote: > Hey Satra, > > On Mon, Apr 4, 2011 at 3:35 PM, Satrajit Ghosh wrote: > > regarding using ipclusterz with pbs/sge/torque, i think the parallel docs > > refer to the old style of the user providing a template, but this is > > actually not necessary right? (at least from my cursory look at the code, > it > > seems to contain the 0.10.1 upgrade to generate a default template) > > > > also are you planning to include the options from the 0.10.1 series? > (e.g., > > -e for ssh, -q for the queues, lsf support). if not, i'll try to create > > those when i have some time unless somebody gets to them before i do. > > Min knows the details of this far better than I, so I'll let him reply > on the details. I just wanted to let you know that we just completed > the twisted removal and parallel merge. The new parallel code can be > imported as IPython.parallel. So if you want to dive into it, now > it's in master and we'll just continue refining it there for 0.11 > hopefully soon. 
> > Cheers, > > f > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Fri Apr 8 09:16:44 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 8 Apr 2011 14:16:44 +0100 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On 8 April 2011 08:55, Fernando Perez wrote: > - 0.11: we now have no pending pull requests and just a few critical > bugs. > Let's go into some more detail on what we need to do for 0.11: - We wanted to revise the config system, which I think Brian was working on. - Resolving history performance issues (me) - General testing. Critical bugs: - 336: Missing figure development/figs/iopubfade.png for docs (Min; should be an easy fix) - 8: Ensure %gui qt works with new Mayavi and pylab (Fernando, I think you were looking into this at sage days. You've commented there that you want Brian's input.) - 297: Shouldn't use pexpect for subprocesses in in-process terminal frontend (Reported by Min, no comments there yet) - 175: Qt console needs configuration support (Brian, is this part of your config plans?) - 66: Update the main What's New document to reflect work on 0.11 (Brian, Fernando) [Seems it would almost be easier to write a What's not new document!] There are 12 high priority bugs ( https://github.com/ipython/ipython/issues/labels/prio-high). Should any of these be considered blocking? Anything else that I've missed? Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Apr 8 12:10:21 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 09:10:21 -0700 Subject: [IPython-dev] [Announcement] Imminent removal of IPython.kernel in favor of new IPython.parallel In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 5:32 AM, Satrajit Ghosh wrote: > > min did answer my questions. 
i'll start working on some of the compatibility > changes soon. Great, thanks! f From benjaminrk at gmail.com Fri Apr 8 12:13:50 2011 From: benjaminrk at gmail.com (MinRK) Date: Fri, 8 Apr 2011 09:13:50 -0700 Subject: [IPython-dev] [Announcement] Imminent removal of IPython.kernel in favor of new IPython.parallel In-Reply-To: References: Message-ID: I managed to fail to send my reply to the list, so here it is for mailing list records: On Mon, Apr 4, 2011 at 15:35, Satrajit Ghosh wrote: > hi min, > > great work on the parallel stuff. a few questions though: > > regarding using ipclusterz with pbs/sge/torque, i think the parallel docs > refer to the old style of the user providing a template, but this is > actually not necessary right? (at least from my cursory look at the code, it > seems to contain the 0.10.1 upgrade to generate a default template) I did include your default templates from 0.10.1, so it is not necessary to specify a template for simple cases, but I imagine many users will still need to. I'll make sure the doc reflects this. And yes, queue is one of the configurable parameters. I have been using the default templates on SGE with no problems so far, so they do appear to work for at least fairly simple cases. > > also are you planning to include the options from the 0.10.1 series? (e.g., > -e for ssh, -q for the queues, lsf support). if not, i'll try to create > those when i have some time unless somebody gets to them before i do. Configuration of ipcluster (aside from 'n') is not exposed to the command line, so you must always use a config file with ipcluster, but we do allow pretty extensive control over how things execute. Looking at 0.10.1, it appears that LSF can be supported with the PBS launchers purely with existing configurables (PBS and SGE certainly could, but defaults were added for convenience). 
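The template mechanism described above — a default batch-submission script with placeholders that the launcher fills in — can be sketched roughly as follows. The template text and placeholder names here are invented for illustration; they are not the actual 0.10.1/0.11 defaults.

```python
# A hypothetical default PBS-style template with {name} placeholders,
# of the kind a cluster launcher could fill in for simple cases.
default_template = """#!/bin/sh
#PBS -N ipengine
#PBS -q {queue}
#PBS -V
{program} {extra_args}
"""

def render(template, **ns):
    # str.format is enough for simple {name} placeholders; users who
    # need more control can supply their own template instead.
    return template.format(**ns)

script = render(default_template,
                queue="default",
                program="ipengine",
                extra_args="--profile=pbs")
```

Users with simple needs never see the template; those with site-specific requirements override it, which is the compromise the thread describes.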
SSH launching and killing is still more primitive than 0.10.1 (though I have used it to launch 80 engines on a cluster this morning), so if you want to add LSF defaults and improve SSH, help filling in the gaps is certainly welcome. An important point is that, unlike in 0.10, it is not acceptable to require the Controller to be local. -MinRK On Fri, Apr 8, 2011 at 09:10, Fernando Perez wrote: > On Fri, Apr 8, 2011 at 5:32 AM, Satrajit Ghosh wrote: > > > > min did answer my questions. i'll start working on some of the > compatibility > > changes soon. > > Great, thanks! > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Fri Apr 8 12:14:47 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 8 Apr 2011 09:14:47 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 6:16 AM, Thomas Kluyver wrote: > On 8 April 2011 08:55, Fernando Perez wrote: >> >> - 0.11: we now have no pending pull requests and just a few critical >> bugs. > > Let's go into some more detail on what we need to do for 0.11: > > - We wanted to revise the config system, which I think Brian was working on. I will coordinate with Fernando on this. Once this is done, it will take some work to update the top level ipython programs. > - Resolving history performance issues (me) > - General testing. > > Critical bugs: > - 336: Missing figure development/figs/iopubfade.png for docs (Min; should > be an easy fix) > - 8: Ensure %gui qt works with new Mayavi and pylab (Fernando, I think you > were looking into this at sage days. You've commented there that you want > Brian's input.) 
> - 297: Shouldn't use pexpect for subprocesses in in-process terminal > frontend (Reported by Min, no comments there yet) > - 175: Qt console needs configuration support (Brian, is this part of your > config plans?) > - 66: Update the main What's New document to reflect work on 0.11 (Brian, > Fernando) > [Seems it would almost be easier to write a What's not new document!] > > There are 12 high priority bugs > (https://github.com/ipython/ipython/issues/labels/prio-high). Should any of > these be considered blocking? I consider 291 a blocker, but it will be easy to fix. I plan on working on that soon. > Anything else that I've missed? > > Thomas > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From fperez.net at gmail.com Fri Apr 8 12:50:58 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 09:50:58 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 9:14 AM, Brian Granger wrote: > > I consider 291 a blocker, but it will be easy to fix. I plan on > working on that soon. > Yup, I bumped it to critical so we can use that list to track what we focus on. f From fperez.net at gmail.com Fri Apr 8 13:32:44 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 10:32:44 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 6:16 AM, Thomas Kluyver wrote: > Critical bugs: > - 336: Missing figure development/figs/iopubfade.png for docs (Min; should > be an easy fix) > - 8: Ensure %gui qt works with new Mayavi and pylab (Fernando, I think you > were looking into this at sage days.
You've commented there that you want > Brian's input.) I think the final answer is simply that we can work well with pylab/mayavi on Qt, but arbitrary random Qt apps won't work well. Even using the inputhook support I got strange lockups if I tried to rerun a Qt app multiple times. There *may* be a way out of this, but I don't know what it is, and it will require more Qt expertise than I have. On the up side, the support for interactive traits apps, mayavi and matplotlib (even both simultaneously) seems *very* robust. Those are our primary 'customers' for now, so on that front we're OK. We'd be happy to continue improving the gui support if someone who knows that stuff well pitches in, but not by adding the old nasty threading hacks, that were very brittle. > - 297: Shouldn't use pexpect for subprocesses in in-process terminal > frontend (Reported by Min, no comments there yet) Just commented, that's mine. > - 175: Qt console needs configuration support (Brian, is this part of your > config plans?) Yes, that should be part of the config work, I'll help there too. > - 66: Update the main What's New document to reflect work on 0.11 (Brian, > Fernando) > [Seems it would almost be easier to write a What's not new document!] I know :) > There are 12 high priority bugs > (https://github.com/ipython/ipython/issues/labels/prio-high). Should any of > these be considered blocking? I just went through them, commented on some, and these two would be my only candidates to promoting to critical: https://github.com/ipython/ipython/issues/29 https://github.com/ipython/ipython/issues/296 The first one I'm reluctant to, see comments for details. But it would be great to see it fixed once and for all (and at least even a KnownFailure test would be already an improvement). The second one is a real regression that I think we really should fix. If you think you can take a shot at it, go ahead and make it critical and pound on it. > Anything else that I've missed? 
The only other thing I have in mind is this idea I've just filed, but I don't want to make it a blocker unless Evan (or anyone else with good Qt chops) happens to have time/bandwidth to work on it. Otherwise it will just sit there. I'll send a separate email about that to ping our Qt gurus: https://github.com/ipython/ipython/issues/338 Beyond that, nothing else comes to mind right now. Thanks a lot for the triage! Cheers, f From takowl at gmail.com Fri Apr 8 13:49:56 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 8 Apr 2011 18:49:56 +0100 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On 8 April 2011 18:32, Fernando Perez wrote: > I think the final answer is simply that we can work well with > pylab/mayavi on Qt, but arbitrary random Qt apps won't work well. > Even using the inputhook support I got strange lockups if I tried to > rerun a Qt app multiple times. There *may* be a way out of this, but > I don't know what it is, and it will require more Qt expertise than I > have. > OK. It seems fair enough that we can consider that fixed (or at least non-blocking), then. > I just went through them, commented on some, and these two would be my > only candidates to promoting to critical: > > https://github.com/ipython/ipython/issues/29 > https://github.com/ipython/ipython/issues/296 > > The first one I'm reluctant to, see comments for details. But it > would be great to see it fixed once and for all (and at least even a > KnownFailure test would be already an improvement). > > The second one is a real regression that I think we really should fix. > If you think you can take a shot at it, go ahead and make it critical > and pound on it. > The first one (pickling classes defined interactively) I agree with what you've said in the comments: we've lived with it for some time already, and we don't know of an easy way to fix it. 
I suggest we don't block on it for 0.11, we can always decide to block a later release on it if we want. Of course, if someone has a moment of inspiration on how to do it, that'd be great. The second one (automatic calling of pdb) I'll try to have a look at. At the least, if it's not working, we shouldn't be offering the function. Also, I notice that %pdb's docstring refers to ipythonrc, so I guess a review of the docs wouldn't go amiss. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Apr 8 13:54:34 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 10:54:34 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 10:49 AM, Thomas Kluyver wrote: [...] agreed on all counts, sounds like a good plan. Cheers, f From ellisonbg at gmail.com Fri Apr 8 14:19:33 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 8 Apr 2011 11:19:33 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On Fri, Apr 8, 2011 at 1:17 AM, Fernando Perez wrote: > On Fri, Apr 8, 2011 at 1:04 AM, Jason Grout wrote: >> Interesting. ?I was going off of the documented spec. ?Is this subheader >> information going to make it into the documented spec, or is it an >> extension to the spec? > > We just haven't completely reconciled the documented spec with the > parallel code. ?The spec was mostly written when we designed the model > for the qt console. Min then used those ideas and built upon them the > parallel code, but he implemented some new stuff as he needed. > > One of the tasks ahead of us for the next few weeks (hey, we just > merged the parallel code a couple of hours ago! 
:) is precisely to > reconcile all of this, so that things are as unified as possible > between what interactive kernels and parallel engines talking to a > controller use. > >> Also, do you envision the various fields (like user_variables or the >> session field of the header) being required? For example, our >> simplified execute message is simply: >> >> {"header":{"msg_id":"DB_ID"}, >> "msg_type":"execute_request", >> "content":{"code":"CODE USER TYPES IN"}} >> >> where the omitted fields have the "obvious" defaults (silent=false, all >> other fields empty strings, lists, or dictionaries). > > I don't recall right now (and I need to crash) if we implemented this, > but I always wanted to have object send functions that would only need > the minimum info and would otherwise fill in the other defaults as > needed. It seems sensible to have matching behavior, where we also > accept minimal messages and fill the rest with defaults upon receive. > >> I was thinking more of multiple output messages for a single input >> (assuming that you stored multiple outputs in the database). That is >> our current problem, since our use-case will always have an >> execution_count of 1. > > We've been talking about changing the counter logic to increment > always, so that multiple outputs from a single input block get > different numbers (what Mathematica does). It adds a little > complexity to the code, but I think it's a cleaner solution in the > long run. Having multiple Out[4] outputs is really kind of not very > nice... But hold on a second. I thought that we had made the decision that the display hook was never to be triggered twice for a given input cell. Has that changed? A few weeks ago, the code had regressed into the state where the display hook is called multiple times, but I thought that was a bug we were going to fix.
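The "accept minimal messages and fill the rest with defaults upon receive" idea quoted earlier in this thread could look something like this on the receiving side. The field names follow the quoted execute_request example; the particular default values and the helper name are assumptions for illustration, not the documented spec.

```python
# Hypothetical defaults for an execute_request-style message; a real
# implementation would take these from the message spec.
DEFAULTS = {
    "header": {"msg_id": "", "session": "", "username": ""},
    "parent_header": {},
    "msg_type": "",
    "content": {"code": "", "silent": False,
                "user_variables": [], "user_expressions": {}},
}

def fill_defaults(msg, defaults=DEFAULTS):
    """Return a copy of msg with any omitted fields filled in.

    Note: for brevity this shares mutable default values; a real
    implementation would deep-copy them.
    """
    full = {}
    for key, default in defaults.items():
        value = msg.get(key, default)
        if isinstance(default, dict) and isinstance(value, dict):
            # Recurse so partially-specified sub-dicts also get defaults.
            value = fill_defaults(value, default)
        full[key] = value
    return full

minimal = {"header": {"msg_id": "DB_ID"},
           "msg_type": "execute_request",
           "content": {"code": "print('hi')"}}

full = fill_defaults(minimal)
```

With this in place, senders like the Sage notebook can emit only the fields they care about, and the receiver normalizes the message before acting on it.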
Brian > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From ellisonbg at gmail.com Fri Apr 8 14:34:45 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 8 Apr 2011 11:34:45 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: Two other potential ones: * I don't see an issue for the display hook being triggered multiple times. Based on previous discussions, I would consider that a critical problem in master currently. * The docs are quite a mess in that we have not updated them to reflect all the 0.11 work. * We have stopped updating the "What's new" document (was changes.txt). Are we ditching this completely or do we want to at least summarize our work for 0.11 there? * The scripts that get the top level scripts installed in the MSI installer are likely broken (they are super fragile). Brian On Fri, Apr 8, 2011 at 10:32 AM, Fernando Perez wrote: > On Fri, Apr 8, 2011 at 6:16 AM, Thomas Kluyver wrote: >> Critical bugs: >> - 336: Missing figure development/figs/iopubfade.png for docs (Min; should >> be an easy fix) >> - 8: Ensure %gui qt works with new Mayavi and pylab (Fernando, I think you >> were looking into this at sage days. You've commented there that you want >> Brian's input.) > > I think the final answer is simply that we can work well with > pylab/mayavi on Qt, but arbitrary random Qt apps won't work well. > Even using the inputhook support I got strange lockups if I tried to > rerun a Qt app multiple times. There *may* be a way out of this, but > I don't know what it is, and it will require more Qt expertise than I > have.
> > On the up side, the support for interactive traits apps, mayavi and > matplotlib (even both simultaneously) seems *very* robust. Those are > our primary 'customers' for now, so on that front we're OK. We'd be > happy to continue improving the gui support if someone who knows that > stuff well pitches in, but not by adding the old nasty threading > hacks, that were very brittle. > >> - 297: Shouldn't use pexpect for subprocesses in in-process terminal >> frontend (Reported by Min, no comments there yet) > > Just commented, that's mine. > >> - 175: Qt console needs configuration support (Brian, is this part of your >> config plans?) > > Yes, that should be part of the config work, I'll help there too. > >> - 66: Update the main What's New document to reflect work on 0.11 (Brian, >> Fernando) >> [Seems it would almost be easier to write a What's not new document!] > > I know :) > >> There are 12 high priority bugs >> (https://github.com/ipython/ipython/issues/labels/prio-high). Should any of >> these be considered blocking? > > I just went through them, commented on some, and these two would be my > only candidates to promoting to critical: > > https://github.com/ipython/ipython/issues/29 > https://github.com/ipython/ipython/issues/296 > > The first one I'm reluctant to, see comments for details. But it > would be great to see it fixed once and for all (and at least even a > KnownFailure test would be already an improvement). > > The second one is a real regression that I think we really should fix. > If you think you can take a shot at it, go ahead and make it critical > and pound on it. > >> Anything else that I've missed? > > The only other thing I have in mind is this idea I've just filed, but > I don't want to make it a blocker unless Evan (or anyone else with > good Qt chops) happens to have time/bandwidth to work on it. > Otherwise it will just sit there.
I'll send a separate email about > that to ping our Qt gurus: > > https://github.com/ipython/ipython/issues/338 > > > Beyond that, nothing else comes to mind right now. Thanks a lot for the triage! > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From fperez.net at gmail.com Fri Apr 8 14:56:10 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 11:56:10 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 11:34 AM, Brian Granger wrote: > Two other potential ones: You have an interesting definition of two :) > * I don't see an issue for the display hook being triggered multiple > times. Based on previous discussions, I would consider that a > critical problem in master currently. Yes, we do need an issue to track this discussion and record whatever we end up doing... > * The docs are quite a mess in that we have not updated them to > reflect all the 0.11 work. Very true. Though we can't really block release on this, or we'll never get to it. But any progress on this front would be very, very welcome. > * We have stopped updating the "What's new" document (was > changes.txt). Are we ditching this completely or do we want to at > least summarize our work for 0.11 there? That one we already have: https://github.com/ipython/ipython/issues/#issue/66 Anyone can pitch in to update that doc, else I'll need to take a week off for writing :) > * The scripts that get the top level scripts installed in the MSI > installer are likely broken (they are super fragile). Yup, good catch. If you can open a couple of issues for those, great, I have to run now...
f From ellisonbg at gmail.com Fri Apr 8 15:08:19 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 8 Apr 2011 12:08:19 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 11:56 AM, Fernando Perez wrote: > On Fri, Apr 8, 2011 at 11:34 AM, Brian Granger wrote: >> Two other potential ones: > > You have an interesting definition of two :) I do quantum mechanics, where a two state system can store more than a bit of information... >> * I don't see an issue for the display hook being triggered multiple >> times. Based on previous discussions, I would consider that a >> critical problem in master currently. > > Yes, we do need an issue to track this discussion and record whatever > we end up doing... OK great. >> * The docs are quite a mess in that we have not updated them to >> reflect all the 0.11 work. > > Very true. Though we can't really block release on this, or we'll > never get to it. But any progress on this front would be very, very > welcome. OK >> * We have stopped updating the "What's new" document (was >> changes.txt). Are we ditching this completely or do we want to at >> least summarize our work for 0.11 there? > > That one we already have: > > https://github.com/ipython/ipython/issues/#issue/66 > > Anyone can pitch in to update that doc, else I'll need to take a week > off for writing :) I vote that we start to only update the document at release time and simply summarize the work that was done. Describing every single change is too tedious and is really what git/github is for. >> * The scripts that get the top level scripts installed in the MSI >> installer are likely broken (they are super fragile). > > Yup, good catch. > > If you can open a couple of issues for those, great, I have to run now... Haha, I too have to run, but *will* try to handle the issues for this later. Cheers, Brian > f > -- Brian E.
Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From takowl at gmail.com Fri Apr 8 16:21:11 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 8 Apr 2011 21:21:11 +0100 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On 8 April 2011 19:19, Brian Granger wrote: > But hold on a second. I thought that we had made the decision that > display hook was never to be triggered twice for a given input cell. > Has that changed? A few weeks ago, the code had regressed into the > multiple display hook being called state, but I thought that was a bug > we were going to fix. > > Brian > I think we worked out that what had actually been written was code to ensure that only one block per cell (the last one) was able to produce output. But if that block was, say "for a in range(5): a", it could still produce multiple outputs. As far as I know, we've never prevented that. In fact, now that we're using AST instead of code blocks, we could actually do what you suggest. We could check the last node, and only run it interactively if it was a single expression. Whether or not that's what we want to do, I don't know: any views? Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Fri Apr 8 16:28:04 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 8 Apr 2011 21:28:04 +0100 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On 8 April 2011 20:08, Brian Granger wrote: > I vote that we start to only update the document at release time and > simply summarize the work that was done. Describing every single > change is too tedious and is really what git/github is for. > Definitely agree. Especially for this release, there's far too much changed to meaningfully describe it all in detail. 
Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Apr 8 16:32:49 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 13:32:49 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On Fri, Apr 8, 2011 at 1:21 PM, Thomas Kluyver wrote: > > In fact, now that we're using AST instead of code blocks, we could actually > do what you suggest. We could check the last node, and only run it > interactively if it was a single expression. Whether or not that's what we > want to do, I don't know: any views? That's a very good point. The fragile heuristics we had were precisely because we lacked this information. But I really do like this suggestion, because I think it provides the most intuitive semantics. Things like: for i in range(10): plot(foo[i]) won't produce 10 different Out[] outputs, and yet any last-block expression, even if it contains some multiline string or other complex formatting that makes it be more than one *line of text* will still be executed interactively, yielding just one result. I'm very much +1 on this idea. Big benefit of your recent tackling inputsplitter!! Awesome. f From fperez.net at gmail.com Fri Apr 8 16:34:09 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Apr 2011 13:34:09 -0700 Subject: [IPython-dev] Release plans, yet again. And a road to 1.0, believe it or not. In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 1:28 PM, Thomas Kluyver wrote: > Definitely agree. Especially for this release, there's far too much changed > to meaningfully describe it all in detail. > Oh, certainly. But it's going to be enough work, that I think if someone wants to get started making an outline or putting important points we shouldn't forget, they can go ahead right away...
Otherwise the final writeup is going to be a pain, even if we only stick to the 'big ideas'. Cheers, f From takowl at gmail.com Fri Apr 8 16:49:41 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 8 Apr 2011 21:49:41 +0100 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On 8 April 2011 21:32, Fernando Perez wrote: > That's a very good point. The fragile heuristics we had were > precisely because we lacked this information. But I really do like > this suggestion, because I think it provides the most intuitive > semantics. OK, I'll add it to my todo list ;-). Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Apr 8 17:05:34 2011 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 08 Apr 2011 16:05:34 -0500 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On 4/8/11 3:32 PM, Fernando Perez wrote: > On Fri, Apr 8, 2011 at 1:21 PM, Thomas Kluyver wrote: >> >> In fact, now that we're using AST instead of code blocks, we could actually >> do what you suggest. We could check the last node, and only run it >> interactively if it was a single expression. Whether or not that's what we >> want to do, I don't know: any views? > > That's a very good point. The fragile heuristics we had were > precisely because we lacked this information. But I really do like > this suggestion, because I think it provides the most intuitive > semantics. Things like: > > for i in range(10): > plot(foo[i]) > > won't produce 10 different Out[] outputs, and yet any last-block > expression, even if it contains some multline string or other complex > formatting that makes it be more than one *line of text* will still be > executed interactively, yielding just one result. > > I'm very much +1 on this idea. 
Big benefit of your recent tackling > inputsplitter!! Awesome. I don't think we need to solve this in the splitter. You can do everything you need to do in the display trap. The logic is very simple. The display trap sets the displayhook before the code In[N] is executed. Every time the displayhook gets called during this execution, the display trap records the object and its formatted representation, overwriting whatever was there. Once the execution is done, *then* the formatted representation is given to the reply message (or printed if in the terminal frontend) and the history DB (if it is storing outputs), and the object is shoved into Out[N]. Then the display trap is cleared before In[N+1] is executed. This allows code like "x; y = x+1" to put x into Out[N] even though the last AST node is not an expression. It's not *entirely* clear to me that this is a real use case, but I think it would be less surprising behavior. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From takowl at gmail.com Fri Apr 8 17:24:48 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 8 Apr 2011 22:24:48 +0100 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On 8 April 2011 22:05, Robert Kern wrote: > This allows code like "x; y = x+1" to put x into Out[N] even though the > last AST > node is not an expression. It's not *entirely* clear to me that this is a > real > use case, but I think it would be less surprising behavior. > We could also on the AST side make the last node that's an expression run interactively. Do some more people want to pitch in on what they consider the clearest behaviour for multiline cells? Thomas -------------- next part -------------- An HTML attachment was scrubbed... 
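The two suggestions in this exchange — checking whether the last AST node is an expression, and capturing whatever the displayhook records during execution — can be sketched together with the ast module. This is an illustrative sketch of the idea under discussion, not IPython's actual code; the function name and the bare displayhook swap are invented for the example.

```python
# Run a cell node-by-node: every node is exec'd normally except a
# trailing expression, which is compiled in 'single' mode so the
# displayhook fires exactly once for the whole cell.
import ast
import sys

def run_cell(cell, ns):
    results = []                        # what the "display trap" recorded
    old_hook = sys.displayhook
    sys.displayhook = results.append    # crude stand-in for a display trap
    try:
        nodes = ast.parse(cell).body
        if nodes and isinstance(nodes[-1], ast.Expr):
            to_exec, to_run = nodes[:-1], nodes[-1:]
        else:
            to_exec, to_run = nodes, []
        for node in to_exec:
            code = compile(ast.Module(body=[node], type_ignores=[]),
                           '<cell>', 'exec')
            exec(code, ns)
        for node in to_run:
            # 'single' mode routes the expression's value to sys.displayhook
            code = compile(ast.Interactive(body=[node]), '<cell>', 'single')
            exec(code, ns)
    finally:
        sys.displayhook = old_hook
    return results
```

With this split, "a = 11; b = 12" parses into two Assign nodes and produces no output, a trailing bare expression yields exactly one Out[] value, and a loop like "for i in range(10): plot(foo[i])" is a single statement node that never reaches the displayhook. Robert Kern's variant would instead leave the hook installed for all nodes and keep only the last value recorded.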
URL: From takowl at gmail.com Fri Apr 8 19:40:21 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 9 Apr 2011 00:40:21 +0100 Subject: [IPython-dev] Heads-up: Twisted code removed, IPython.parallel (zmq-based) now in master In-Reply-To: References: Message-ID: Thanks, Min for doing this. Now that we're using pyzmq, we can also start thinking about having our parallel code on Python 3. I've copied the latest changes across into the IPython 3k repository, and fixed up the simplest bugs. We're not quite there yet, but most of the tests are now passing. If anyone has a burning desire to try parallel computing with Python 3, then you're welcome to come and fix up the problems you find: https://github.com/ipython/ipython-py3k Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Fri Apr 8 19:54:16 2011 From: benjaminrk at gmail.com (MinRK) Date: Fri, 8 Apr 2011 16:54:16 -0700 Subject: [IPython-dev] Heads-up: Twisted code removed, IPython.parallel (zmq-based) now in master In-Reply-To: References: Message-ID: On Fri, Apr 8, 2011 at 16:40, Thomas Kluyver wrote: > Thanks, Min for doing this. > > Now that we're using pyzmq, we can also start thinking about having our > parallel code on Python 3. I've copied the latest changes across into the > IPython 3k repository, and fixed up the simplest bugs. We're not quite there > yet, but most of the tests are now passing. If anyone has a burning desire > to try parallel computing with Python 3, then you're welcome to come and fix > up the problems you find: > That's very exciting. I'll dig around there, and see what I find. It hopefully shouldn't be too much, but that would certainly be something to brag about. 
> > https://github.com/ipython/ipython-py3k > > Thanks, > Thomas > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andresete.chaos at gmail.com Sat Apr 9 01:15:39 2011 From: andresete.chaos at gmail.com (Omar Andrés Zapata Mesa) Date: Sat, 9 Apr 2011 00:15:39 -0500 Subject: [IPython-dev] About GSoC 2011 In-Reply-To: References: Message-ID: Oh, thanks Fernando. I am not worried about GSoC; if GSoC is not possible, I obviously want to continue working with the IPython team. I love the IPython project, and I am working to write good code and to be an active member. My plan is to continue with the two-process frontend and to write a good web interface! I am thinking about how to implement Brian's web interface, which would be a very nice way to improve the ipython-http framework that I have partially working. My absence from the list these days and my slow progress on the frontend are because of the situation at my university, which you know about: the teachers are accelerating the evaluation process because of the political situation. But soon I will be up to date, and I will show you good progress. I will cancel my application for GSoC soon; I'm not worried about that. Thanks for letting me be a part of this. 2011/4/8 Fernando Perez > Hi Omar, > > 2011/4/7 Omar Andrés Zapata Mesa : > > I applied to GSoC with an IPython proposal; please give some feedback and > > suggestions, > > my blog is http://ipython-http.blogspot.com/ > > I think I can do a good job with the prototype of James Gao, Brian Granger > > and discussing web interface standards for ipython, art and features in > > the mailing list.
> > I'm sorry I've been silent, I had a family health emergency that sent > me mostly offline for a few days, I know you needed feedback, and I > apologize for leaving you waiting (as well as other people were also > bottlenecking on me). > > I'm very happy to see you wanting to contribute further, but I think > that it's going to be very difficult for this to happen as a gsoc > project this year. > > The first reason is that, as I had said earlier, as a second-year > student you would be expected to clear a much higher bar of entry to > the program than a new student. While you have made a good effort > recently to push forward on completing your work from last year, which > is great, that hasn't been finished yet. With help from the core team > your logging work is now merged, but there's still no complete > implementation of the terminal client ready for final merge. I played > with your prototype and gave some feedback (as did Thomas), and you > have a good start there, but it's certainly not code ready for merge > yet. > > Note that I am really willing to continue helping you with feedback to > improve this until it's merged, if you still want to work on it. It's > useful code that we do need, so it needs to happen, and it would be > great to have you work on it if you find it interesting. > > In addition to the completion of that project, there's a second reason > that is also very important: the *main* point of gsoc is not to > develop one piece of code, but rather to grow the community of core > contributors to a project. In the last round of mentor list > discussions, they have very strongly emphasized how important it is > that students who are accepted have shown a real participation in a > project, measured in multiple ways (especially for students who have > been around for a while). That means fixing bugs, contributing code > reviews, making small developments, writing documentation, etc. 
It
> may not be as much fun or sexy as diving into a big, standalone idea,
> but that's much more the reality of everyday work in a project.
>
> You can see for example how Thomas showed up out of the blue
> contributing pull requests at first for the python3 branch, then for
> small things, and these days if you look at the log there's a ton of
> major, critical work by him (he just completely refactored one of the
> most delicate pieces of code we had, the input handling stuff). But
> he dove into all aspects of the project, including the thankless jobs
> of flushing the crazy backlog of pull requests I had allowed to
> accumulate after the India sprints, as well as doing massive bug
> triage. That work is never as fun as designing some new cool app, but
> it's a necessary part of sustaining a project in the long term. In no
> small part thanks to Thomas' efforts, we now have *zero* open pull
> requests, and we're down to *four* critical bugs for releasing 0.11.
> That's the kind of participation in a project that brings a member in
> and makes an enormous difference. And I think you will find that if
> you engage the project like this, you will actually learn more, and
> find it more fun in the long run, than only working on one specific
> idea, because you'll get to really participate in the entire effort.
>
> Keep in mind that if you are still interested in participating in the
> project, we'd be thrilled to have you continue here. The terminal
> work you've started is still very much needed, and working on it
> will help you to become a regular ipython contributor. With
> sufficient code merged in the project and a real record of project
> contributions and participation, you would have a much stronger case
> for applying next year, for example.
>
> I realize this isn't what you wanted to hear, but I hope you
> understand the reasons for this, and I'm happy to chat further about
> this, on- or off-list.
But I want to reply publicly because this is
> not something directed in any way at you personally, but rather a
> statement of how I think the project must handle students who want to
> return (as well as informing potentially interested ones in the
> future).
>
> All the best,
>
> f
>
> ps - thanks to Min and Brian for feedback on my reply before sending
> it; this is an important part of attempting to run a project well, so
> I wanted to make sure I said things in the clearest, but kindest way
> possible.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jason-sage at creativetrax.com Sat Apr 9 02:30:23 2011 From: jason-sage at creativetrax.com (Jason Grout) Date: Sat, 09 Apr 2011 01:30:23 -0500 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: <4D9FFCFF.9040800@creativetrax.com> On 4/8/11 4:43 AM, Thomas Kluyver wrote:
> On 8 April 2011 09:04, Jason Grout > wrote:
>
> I was thinking more of multiple output messages for a single input
> (assuming that you stored multiple outputs in the database). That
> is our current problem, since our use-case will always have an
> execution_count of 1.
>
>
> At present, if you get multiple outputs (e.g. by doing "12*3; 88*14"),
> they are stored in a JSON list in the database (if you have output
> logging enabled, which it isn't by default).
>
> I'm not quite sure what you mean by always having an execution count
> of 1. Is this the aleph prototype* or something similar?

That's exactly what it is: an extension of the aleph idea [1]. Ira Hanson and I are working on moving the communication over to use the ipython messaging protocol.

> The way I envisaged history working, if the namespace is preserved
> from command to command, the execution count increases. If it isn't,
> each run is kind of a new session.
The interface for retrieving > history works better with sessions with more than one command, but > there's nothing to stop you making sessions with a single cell in > each. Interesting. So execution count really is just a counter on executions where the namespace is preserved. What do you mean by "namespace is preserved"? How do you test for it? Thanks, Jason [1] The codebase is here: http://code.google.com/p/simple-python-db-compute/ (see our various clones of it as well). The design/todo list is here: http://wiki.sagemath.org/DrakeSageGroup From fperez.net at gmail.com Sat Apr 9 04:05:26 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 01:05:26 -0700 Subject: [IPython-dev] [ANN] IPython 0.10.2 is out Message-ID: Hi all, we've just released IPython 0.10.2, full release notes are below. Downloads in source and windows binary form are available in the usual location: http://ipython.scipy.org/dist/ as well as on github: http://github.com/ipython/ipython/archives/rel-0.10.2 and at the Python Package Index (which easy_install will use by default): http://pypi.python.org/pypi/ipython so at any time you should find a location with good download speeds. You can find the full documentation at: http://ipython.github.com/ipython-doc/rel-0.10.2/html/ http://ipython.github.com/ipython-doc/rel-0.10.2/ipython.pdf Enjoy! Fernando (on behalf of the whole IPython team) Release 0.10.2 ============== IPython 0.10.2 was released April 9, 2011. This is a minor bugfix release that preserves backward compatibility. At this point, all IPython development resources are focused on the 0.11 series that includes a complete architectural restructuring of the project as well as many new capabilities, so this is likely to be the last release of the 0.10.x series. 
We have tried to fix all major bugs in this series so that it remains a viable platform for those not yet ready to transition to the 0.11 and newer codebase (since that will require some porting effort, as a number of APIs have changed). Thus, we are not opening a 0.10.3 active development branch yet, but if the user community requires new patches and is willing to maintain/release such a branch, we'll be happy to host it on the IPython github repositories. Highlights of this release:

- The main one is the closing of github ticket #185, a major regression we had in 0.10.1 where pylab mode with GTK (or gthread) was not working correctly, hence plots were blocking with GTK. Since this is the default matplotlib backend on Unix systems, this was a major annoyance for many users. Many thanks to Paul Ivanov for helping resolve this issue.
- Fix IOError bug on Windows when used with -gthread.
- Work robustly if $HOME is missing from environment.
- Better POSIX support in ssh scripts (remove bash-specific idioms).
- Improved support for non-ascii characters in log files.
- Work correctly in environments where GTK can be imported but not started (such as a linux text console without X11).

For this release we merged 24 commits, contributed by the following people (please let us know if we omitted your name and we'll gladly fix this in the notes for the future):

* Fernando Perez
* MinRK
* Paul Ivanov
* Pieter Cristiaan de Groot
* TvrtkoM

From fperez.net at gmail.com Sat Apr 9 04:34:11 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 01:34:11 -0700 Subject: [IPython-dev] Finally 0.10.x is behind us.. Message-ID: Hi all, I really hope we don't need to revisit that code anymore. Hopefully now we can focus our available resources on the new code only, and get 0.11 ready for release before too long. I'm really excited to see all the new contributions (the zero pull request status only lasted a few hours :). Thanks to all those who've recently contributed!
Cheers, f From jason-sage at creativetrax.com Sat Apr 9 07:53:18 2011 From: jason-sage at creativetrax.com (Jason Grout) Date: Sat, 09 Apr 2011 06:53:18 -0500 Subject: [IPython-dev] variable information request Message-ID: <4DA048AE.60504@creativetrax.com> Here's a crazy idea that could lead to some nice interactively-updating components of an IPython frontend. Right now, in the new messaging protocol, it appears that I can send a computation request and then ask for the output to contain the values of several variables (the "user_variables" field of an execute_reply message). However, what if I want to check the value of a variable *during* a computation and get an immediate response? I might, for example, have a box that prints out the current value of a root approximation, or a slider that shows the current iteration number, and I want these to be updated fairly frequently while the computation is running. Here's one way to do it, I think: run a separate thread that just answers these queries. Since the GIL serializes access to variables, I think it's okay for the separate thread to retrieve the value of a variable and return it, and it seamlessly does this while the main thread is carrying on a computation. For example, the following code will print out the current iteration number every time enter is pressed:

--------------------------------------
from threading import Timer, Thread
import time

def printi():
    global i
    while True:
        raw_input('Press enter to see the iteration number')
        print i

t = Thread(target=printi)
t.daemon = True
t.start()

for i in range(20):
    time.sleep(.5)
---------------------------------------

I can see a problem if the value you are querying actually changes the object state in the middle of another computation using that object. But simple queries about an object's value should work fine, and maybe we could leave it up to the user not to mess up a currently-running computation by changing an object's state. What do people think?
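The same idea can be restated in Python 3 with a polling watcher instead of a blocking raw_input loop; here `progress` and `snapshots` are illustrative stand-ins for the kernel namespace and the frontend's view of it, not IPython API:

```python
import threading
import time

progress = {"i": 0}   # stand-in for the variable being watched
snapshots = []        # values the watcher observes mid-computation

def watch():
    # Plain reads of shared state are safe under the GIL, so the watcher
    # can sample the value without pausing the main thread's work.
    while progress["i"] < 19:
        snapshots.append(progress["i"])
        time.sleep(0.01)

t = threading.Thread(target=watch, daemon=True)
t.start()

for n in range(20):   # the long-running "computation"
    progress["i"] = n
    time.sleep(0.005)
t.join(timeout=1)

assert snapshots                          # the watcher saw intermediate values
assert all(0 <= s < 20 for s in snapshots)
```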
Thanks, Jason -- Jason Grout From takowl at gmail.com Sat Apr 9 08:21:15 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 9 Apr 2011 13:21:15 +0100 Subject: [IPython-dev] messaging protocol In-Reply-To: <4D9FFCFF.9040800@creativetrax.com> References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> <4D9FFCFF.9040800@creativetrax.com> Message-ID: On 9 April 2011 07:30, Jason Grout wrote:
> Interesting. So execution count really is just a counter on executions
> where the namespace is preserved. What do you mean by "namespace is
> preserved"? How do you test for it?

Well, if I do "a=12", "a", I will get 12 back if the namespace is preserved, but get a NameError if not. A quick test with aleph suggests that it's not, so each run is effectively a one-cell session.
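The distinction can be made concrete with plain `exec`/`eval` namespaces — a sketch of the semantics only, not of aleph's or IPython's actual machinery:

```python
# A preserved namespace: later "cells" see names bound by earlier ones.
session_ns = {}
exec("a = 12", session_ns)
assert eval("a", session_ns) == 12   # 'a' survives into the next cell

# A fresh namespace per run (the one-cell-session case): 'a' is gone.
try:
    eval("a", {})
    outcome = "ok"
except NameError:
    outcome = "NameError"
assert outcome == "NameError"
```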
Likewise, in IPython trunk, if you do %reset, it nukes your > namespace, gets a new history session, and sets the execution counter > back to 1. Ah, right. So we were communicating correctly. Yes, aleph and the single-cell server are doing exactly what you describe. No state is preserved between calls. That is simpler and lets us play with the design of the compute system and protocol before implementing a full-blown notebook. It's also a lower barrier of entry for the public. Thanks, Jason From fperez.net at gmail.com Sat Apr 9 16:51:03 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 13:51:03 -0700 Subject: [IPython-dev] On sqlite history performance improvements... Message-ID: Hey Thomas, the history fixes of swapping the context manager and write loop do make a huge difference, so I've pushed your fixes already (it's the right algorithmic order, so no question on this change). I rebased it to avoid a merge handle on a single commit. Thanks a lot for this fix. But I think we still should look into further improving the performance/usability of the sqlite history support. A few notes: - it's worth trying what happens with the writeout method in a thread. If the python sqlite module releases the gil for the disk i/o operation (which I dearly hope it does), then it will be a net win and will completely solve the slight but annoying pause the system currently has on every prompt when using a laptop with a slow disk on battery. - we also need to expose the cache parameter as a configurable, though this may need to wait for the config work to be completed, not sure right now if the history object is already accessible to the config file mechanisms. - and we certainly want to have a magic to set it at runtime without the get_ipython()....whatever=10 dance. Name? %history_buffer, %history_cache, %history_delay, ...? 
Cheers, f From takowl at gmail.com Sat Apr 9 17:09:45 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 9 Apr 2011 22:09:45 +0100 Subject: [IPython-dev] On sqlite history performance improvements... In-Reply-To: References: Message-ID: On 9 April 2011 21:51, Fernando Perez wrote: > Hey Thomas, > > the history fixes of swapping the context manager and write loop do > make a huge difference, so I've pushed your fixes already (it's the > right algorithmic order, so no question on this change). I rebased it > to avoid a merge handle on a single commit. Thanks a lot for this > fix. > OK, glad it helped. > > But I think we still should look into further improving the > performance/usability of the sqlite history support. A few notes: > > - it's worth trying what happens with the writeout method in a thread. > If the python sqlite module releases the gil for the disk i/o > operation (which I dearly hope it does), then it will be a net win and > will completely solve the slight but annoying pause the system > currently has on every prompt when using a laptop with a slow disk on > battery. > I'll make a branch for you to test. > - we also need to expose the cache parameter as a configurable, though > this may need to wait for the config work to be completed, not sure > right now if the history object is already accessible to the config > file mechanisms. > It should already be exposed: db_cache_size = Int(0, config=True) > - and we certainly want to have a magic to set it at runtime without > the get_ipython()....whatever=10 dance. Name? %history_buffer, > %history_cache, %history_delay, ...? > Perhaps there's scope for a more general magic to change those config options which can be changed while we're running. Something like: %ipconf history_db_cache 10 %ipconf history_db_log_output true Thomas -------------- next part -------------- An HTML attachment was scrubbed... 
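One way such an %ipconf magic could work is a small table mapping flat option names onto configurable attributes, coercing the string argument to the option's existing type; this is only a hypothetical sketch (none of these names are real IPython API):

```python
# Hypothetical sketch of an %ipconf-style magic: "name value" strings
# are dispatched onto attributes of registered configurable objects.
class HistoryManager:
    db_cache_size = 0
    db_log_output = False

OPTIONS = {  # flat option name -> (object key, attribute)
    "history_db_cache": ("hm", "db_cache_size"),
    "history_db_log_output": ("hm", "db_log_output"),
}

objects = {"hm": HistoryManager()}

def ipconf(line):
    name, raw = line.split(None, 1)
    obj_key, attr = OPTIONS[name]
    current = getattr(objects[obj_key], attr)
    if isinstance(current, bool):        # bool before int: bool is an int subclass
        value = raw.strip().lower() == "true"
    else:
        value = type(current)(raw)       # e.g. int("10") -> 10
    setattr(objects[obj_key], attr, value)

ipconf("history_db_cache 10")
ipconf("history_db_log_output true")
assert objects["hm"].db_cache_size == 10
assert objects["hm"].db_log_output is True
```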
URL: From fperez.net at gmail.com Sat Apr 9 17:17:53 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 14:17:53 -0700 Subject: [IPython-dev] On sqlite history performance improvements... In-Reply-To: References: Message-ID: On Sat, Apr 9, 2011 at 2:09 PM, Thomas Kluyver wrote: > I'll make a branch for you to test. Great, thanks! >> - we also need to expose the cache parameter as a configurable, though >> this may need to wait for the config work to be completed, not sure >> right now if the history object is already accessible to the config >> file mechanisms. > > It should already be exposed: > db_cache_size = Int(0, config=True) Yup, you're right. Putting this in the config file: c.HistoryManager.db_cache_size = 10 makes the default cache size be 10. > Perhaps there's scope for a more general magic to change those config > options which can be changed while we're running. Something like: > > %ipconf history_db_cache 10 > %ipconf history_db_log_output true Absolutely. Let's hold off on this though until we sort out the config work, since the implementation details of such a magic would likely depend on that. Cheers, f From fperez.net at gmail.com Sat Apr 9 17:43:20 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 14:43:20 -0700 Subject: [IPython-dev] On good commit messages Message-ID: Hi all, I was reviewing some pull requests, and in making some comments about the commits and searching for background info, came across this great post: http://who-t.blogspot.com/2009/12/on-commit-messages.html I'll be adding a link to it in our development guide, but I think it makes for a worthwhile 5 minute read. I don't really agree with the idea that "As a rule of thumb, given only the commit message, another developer should be able to implement the same patch in a reasonable amount of time.", I don't really think that's realistic. 
But otherwise it's a bunch of sensible advice, well worded and with the reasons behind the advice. Cheers, f From ellisonbg at gmail.com Sat Apr 9 18:10:47 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 9 Apr 2011 15:10:47 -0700 Subject: [IPython-dev] On sqlite history performance improvements... In-Reply-To: References: Message-ID: On Sat, Apr 9, 2011 at 1:51 PM, Fernando Perez wrote:
> Hey Thomas,
>
> the history fixes of swapping the context manager and write loop do
> make a huge difference, so I've pushed your fixes already (it's the
> right algorithmic order, so no question on this change). I rebased it
> to avoid a merge handle on a single commit. Thanks a lot for this
> fix.
>
> But I think we still should look into further improving the
> performance/usability of the sqlite history support. A few notes:
>
> - it's worth trying what happens with the writeout method in a thread.
> If the python sqlite module releases the gil for the disk i/o
> operation (which I dearly hope it does), then it will be a net win and
> will completely solve the slight but annoying pause the system
> currently has on every prompt when using a laptop with a slow disk on
> battery.
>
> - we also need to expose the cache parameter as a configurable, though
> this may need to wait for the config work to be completed, not sure
> right now if the history object is already accessible to the config
> file mechanisms.
>
> - and we certainly want to have a magic to set it at runtime without
> the get_ipython()....whatever=10 dance. Name? %history_buffer,
> %history_cache, %history_delay, ...?

I think we probably want to discuss a good model for this type of thing. Simply adding more magics to access IPython's runtime will only lead to a bloated main namespace. If we tied this type of thing into the config system we might be able to get all of this done with a single magic:

%config Foo.bar=10
%config InteractiveShell.autocall=False

I agree that the get_ipython() ....
dance is not a very pleasant model, but I think we can find something that will scale better in covering IPython's runtime. Cheers, Brian > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From fperez.net at gmail.com Sat Apr 9 18:34:22 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 15:34:22 -0700 Subject: [IPython-dev] On sqlite history performance improvements... In-Reply-To: References: Message-ID: On Sat, Apr 9, 2011 at 3:10 PM, Brian Granger wrote:
> I think we probably want to discuss a good model for this type of
> thing. Simply adding more magics to access IPython's runtime will
> only lead to a bloated main namespace. If we tied this type of thing
> into the config system we might be able to get all of this done with a
> single magic:
>
> %config Foo.bar=10
> %config InteractiveShell.autocall=False

Yup, I think that's precisely what Thomas was calling %ipconf... Let's get the config layer sorted out first, and then we can add high-level stuff for it. Cheers, f From takowl at gmail.com Sat Apr 9 18:39:10 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 9 Apr 2011 23:39:10 +0100 Subject: [IPython-dev] On sqlite history performance improvements... In-Reply-To: References: Message-ID: Fernando, try this branch: https://github.com/takluyver/ipython/tree/history-threaded There are some issues in the tests (I guess it's race conditions with history access). I must say, I don't much like this approach. Apart from the race conditions, we need a separate connection to the database for the thread, which then has to be passed as an argument to every function called from that thread... I'm sure it can be done, but it's a significant extra source of potential problems.
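The separate-connection requirement is baked into the stdlib sqlite3 module itself: by default a connection refuses to be used outside the thread that created it. A minimal demonstration (the file and table names are illustrative, not IPython's actual history schema):

```python
import os
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "history.sqlite")
main_conn = sqlite3.connect(path)
main_conn.execute("CREATE TABLE history (line TEXT)")
main_conn.commit()

cross_thread_error = []

def writer():
    try:
        # Wrong: reusing the main thread's connection from this thread.
        main_conn.execute("INSERT INTO history VALUES ('x = 1')")
    except sqlite3.ProgrammingError as e:
        cross_thread_error.append(e)
        # Right: a second connection owned by the writer thread itself.
        conn = sqlite3.connect(path)
        conn.execute("INSERT INTO history VALUES ('x = 1')")
        conn.commit()
        conn.close()

t = threading.Thread(target=writer)
t.start()
t.join()

assert cross_thread_error   # the shared handle was rejected across threads
assert main_conn.execute("SELECT COUNT(*) FROM history").fetchone()[0] == 1
```

(`check_same_thread=False` lifts the restriction, at the cost of doing your own locking.)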
Indeed, one of the motivations for my history refactor was to remove the need for a thread to save history... How slow is the system on which you're seeing noticeable delays? If we changed the defaults so that we're saving on every 5th or 10th command, rather than after every command, would the delay be acceptable? Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sat Apr 9 21:21:22 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 18:21:22 -0700 Subject: [IPython-dev] big github issues improvement!!! Message-ID: Hi everyone, check out e.g. https://github.com/ipython/ipython/issues literally in the last few minutes (I know b/c I've been opening issues pages all day, so I know a half hour ago it wasn't like this) they rolled out a new issues interface. I haven't explored it much yet, but already some of our biggest gripes are improved: - many labels (like we're using) don't obscure the issue titles. - milestones, assigned persons as separate fields. - multiple pages to see other issues - working search !!! yes! Finally, finally... I kept saying these guys were so good, and the old interface so bad, that something *had* to be coming. But the wait was getting long... This looks really, really great. In case anyone from gh sees this: THANK YOU!!! BTW, speaking of github, a post along the lines many of us in academia have been talking about: http://marciovm.com/i-want-a-github-of-science Cheers, f From fperez.net at gmail.com Sat Apr 9 21:23:36 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 18:23:36 -0700 Subject: [IPython-dev] big github issues improvement!!! In-Reply-To: References: Message-ID: On Sat, Apr 9, 2011 at 6:21 PM, Fernando Perez wrote: > Hi everyone, > > check out e.g. 
https://github.com/ipython/ipython/issues > > literally in the last few minutes (I know b/c I've been opening issues > pages all day, so I know a half hour ago it wasn't like this) they > rolled out a new issues interface. Their post just went up: https://github.com/blog/831-issues-2-0-the-next-generation f From fperez.net at gmail.com Sat Apr 9 22:14:20 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 9 Apr 2011 19:14:20 -0700 Subject: [IPython-dev] the view for 0.11, using the new issues features Message-ID: by the way, with the new issues page, we can get stable filtering of the issues list by labels, so we can easily see our active, high-priority 0.11 bugs: https://github.com/ipython/ipython/issues?labels=prio-critical%2Cstatus-active&sort=created&direction=asc&state=open&page=1&milestone=1 that url is a filtering view, so it will always provide a current view of whatever matches. I took the time to convert our manual milestone- and person-name labels into proper milestones and assigned people, so now we have fewer labels, and two actual milestones: https://github.com/ipython/ipython/issues/milestones Yay for the new system :) f From ellisonbg at gmail.com Sun Apr 10 01:59:32 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 9 Apr 2011 22:59:32 -0700 Subject: [IPython-dev] variable information request In-Reply-To: <4DA048AE.60504@creativetrax.com> References: <4DA048AE.60504@creativetrax.com> Message-ID: Jason, On Sat, Apr 9, 2011 at 4:53 AM, Jason Grout wrote:
> Here's a crazy idea that could lead to some nice interactively-updating
> components of an IPython frontend. Right now, in the new messaging
> protocol, it appears that I can send a computation request and then ask
> for the output to contain the values of several variables (the
> "user_variables" field of an execute_reply message). However, what if I
> want to check the value of a variable *during* a computation and get an
> immediate response?
I might, for example, have a box that prints out
> the current value of a root approximation, or a slider that
> shows the current iteration number, and I want these to be updated
> fairly frequently while the computation is running.

I think this is where the display system comes in. We have extended the model of the displayhook to allow that machinery to be triggered by users anywhere in their code by simply calling our top-level display functions: https://github.com/ipython/ipython/blob/master/IPython/core/display.py Calling these functions causes a JSON message to be sent to all frontends instantly (it doesn't wait for the code to finish). These messages can have extra metadata, and a given frontend could easily look at that metadata and decide how to display the JSON data.

> Here's one way to do it, I think: run a separate thread that just
> answers these queries. Since the GIL serializes access to variables, I
> think it's okay for the separate thread to retrieve the value of a
> variable and return it, and it seamlessly does this while the main
> thread is carrying on a computation. For example, the following code
> will print out the current iteration number every time enter is pressed:

Almost always, in Python, threads are not the answer. This type of service would go down the second that non-GIL-releasing extension code is run. This is one of the main reasons we have gone to pyzmq, as all of its networking stuff is handled in GIL-releasing C++ threads. The benefit of the existing display logic that uses pyzmq is that it will continue to work fine in these situations. Cheers, Brian

> --------------------------------------
> from threading import Timer, Thread
> import time
>
> def printi():
>     global i
>     while True:
>         raw_input('Press enter to see the iteration number')
>         print i
>
> t=Thread(target=printi)
> t.daemon=True
> t.start()
>
> for i in range(20):
time.sleep(.5)
> ---------------------------------------
>
> I can see a problem if the value you are querying actually changes the
> object state in the middle of another computation using that object. But
> simple queries about an object's value should work fine, and maybe we
> could leave it up to the user not to mess up a currently-running
> computation by changing an object's state.
>
> What do people think?
>
> Thanks,
>
> Jason
>
> --
> Jason Grout
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From ellisonbg at gmail.com Sun Apr 10 02:06:32 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 9 Apr 2011 23:06:32 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On Fri, Apr 8, 2011 at 2:05 PM, Robert Kern wrote:
> On 4/8/11 3:32 PM, Fernando Perez wrote:
>> On Fri, Apr 8, 2011 at 1:21 PM, Thomas Kluyver wrote:
>>>
>>> In fact, now that we're using AST instead of code blocks, we could actually
>>> do what you suggest. We could check the last node, and only run it
>>> interactively if it was a single expression. Whether or not that's what we
>>> want to do, I don't know: any views?
>>
>> That's a very good point. The fragile heuristics we had were
>> precisely because we lacked this information. But I really do like
>> this suggestion, because I think it provides the most intuitive
>> semantics. Things like:
>>
>> for i in range(10):
>>     plot(foo[i])
>>
>> won't produce 10 different Out[] outputs, and yet any last-block
>> expression, even if it contains some multiline string or other complex
>> formatting that makes it more than one *line of text*, will still be
>> executed interactively, yielding just one result.
>>
>> I'm very much +1 on this idea. Big benefit of your recent tackling
>> inputsplitter!! Awesome.
>
> I don't think we need to solve this in the splitter. You can do everything you
> need to do in the display trap. The logic is very simple. The display trap sets
> the displayhook before the code In[N] is executed. Every time the displayhook
> gets called during this execution, the display trap records the object and its
> formatted representation, overwriting whatever was there. Once the execution is
> done, *then* the formatted representation is given to the reply message (or
> printed if in the terminal frontend) and the history DB (if it is storing
> outputs), and the object is shoved into Out[N]. Then the display trap is cleared
> before In[N+1] is executed.

This is a good point that is worth investigating. The only hitch is that the formatted representation is not sent in the reply to the original code execution request. It is published over the PUB socket, which makes sure it gets sent to all frontends. This means that the display hook data can be sent out to all frontends *before* the execute reply message goes back. But we might still be able to get this to work. It is attractive because it would keep this logic out of the splitter, where it seems a bit out of place. Cheers, Brian

> This allows code like "x; y = x+1" to put x into Out[N] even though the last AST
> node is not an expression. It's not *entirely* clear to me that this is a real
> use case, but I think it would be less surprising behavior.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
> that is made terrible by our own mad attempt to interpret it as though it had
> an underlying truth."
>  -- Umberto Eco
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-- Brian E.
Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From ellisonbg at gmail.com Sun Apr 10 02:10:59 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 9 Apr 2011 23:10:59 -0700 Subject: [IPython-dev] the view for 0.11, using the new issues features In-Reply-To: References: Message-ID: Fernando, Great find! They read our minds. On Sat, Apr 9, 2011 at 7:14 PM, Fernando Perez wrote: > by the way, with the new issues page, we can get stable filtering of > the issues list by labels, so we can easily see our active, > high-priority 0.11 bugs: > > > https://github.com/ipython/ipython/issues?labels=prio-critical%2Cstatus-active&sort=created&direction=asc&state=open&page=1&milestone=1 > > that url is a filtering view, so it will always provide a current view > of whatever matches. > I took the time to convert our manual milestone- and person name > labels into proper milestones and assigned people, so now we have less > labels, and two actual milestones: > > https://github.com/ipython/ipython/issues/milestones Fantastic, thanks for updating everything. What a great improvement! > Yay for the new system :) Yes! Brian > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. 
Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From robert.kern at gmail.com Sun Apr 10 02:18:33 2011 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 10 Apr 2011 01:18:33 -0500 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On 2011-04-10 01:06 , Brian Granger wrote: > On Fri, Apr 8, 2011 at 2:05 PM, Robert Kern wrote: >> On 4/8/11 3:32 PM, Fernando Perez wrote: >>> On Fri, Apr 8, 2011 at 1:21 PM, Thomas Kluyver wrote: >>>> >>>> In fact, now that we're using AST instead of code blocks, we could actually >>>> do what you suggest. We could check the last node, and only run it >>>> interactively if it was a single expression. Whether or not that's what we >>>> want to do, I don't know: any views? >>> >>> That's a very good point. The fragile heuristics we had were >>> precisely because we lacked this information. But I really do like >>> this suggestion, because I think it provides the most intuitive >>> semantics. Things like: >>> >>> for i in range(10): >>> plot(foo[i]) >>> >>> won't produce 10 different Out[] outputs, and yet any last-block >>> expression, even if it contains some multline string or other complex >>> formatting that makes it be more than one *line of text* will still be >>> executed interactively, yielding just one result. >>> >>> I'm very much +1 on this idea. Big benefit of your recent tackling >>> inputsplitter!! Awesome. >> >> I don't think we need to solve this in the splitter. You can do everything you >> need to do in the display trap. The logic is very simple. The display trap sets >> the displayhook before the code In[N] is executed. Every time the displayhook >> gets called during this execution, the display trap records the object and its >> formatted representation, overwriting whatever was there. 
Once the execution is >> done, *then* the formatted representation is given to the reply message (or >> printed if in the terminal frontend) and the history DB (if it is storing >> outputs), and the object is shoved into Out[N]. Then the display trap is cleared >> before In[N+1] is executed. > > This is a good point that is worth investigating. The only hitch is > that the formatted representation is not sent in the reply to the > original code execution request. It is published over the PUB socket, > which makes sure that gets sent to all frontends. This means that the > display hook data can be sent out to all frontends *before* the > execute reply message goes back. But we might still be able to get > this to work. It is attractive because it would be this logic out of > the splitter, where it seems a bit out of place. Replace "reply message" with "displayhook PUB message" in what I wrote. You just need to explicitly tell the DisplayHook when to actually send the message/print the repr in the actual execution cycle. It should not do it on every __call__. That's all. That was the design I had in the ipwx prototype way back when. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Sun Apr 10 23:18:51 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 10 Apr 2011 20:18:51 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't Message-ID: Hi all, now that our codebase is starting to settle down into the shape the next release will look like, it would be great if we start getting more feedback. 
In particular, Evan Patterson has done a great job of fine-tuning the Qt console so that it's a really useful tool for everyday work, and while we've tried to be much more disciplined about testing things now, the reality of GUI apps is that some things are very difficult to test short of humans using them. So if you would like to start using it, and reporting what doesn't work either here or even better, on the bug tracker, that would be great. Evan today just merged a change that fixes what in my eyes was one of the major usability issues left: when you moved out of a cell you had already edited, you lost your unexecuted changes. This could be very annoying in practice, but now the console automatically remembers these changes. I think with this issue fixed, it makes for a great environment to experiment and work in. My favorite way to start it is with this alias: alias iqlab='ipython-qtconsole --paging vsplit --pylab' and sometimes I'll add "inline" if I want inline figures (I'm working on a patch to allow toggling of inline/floating figures at runtime, but it's not ready yet). So use it, pound on it, let us know how it goes. We'd like this to be really a production-ready tool when 0.11 goes out. A big thanks to Evan for the great work he's done and being so responsive, as well as Mark and others who have pitched in with the Qt code. And as always, a pull request is even better than a bug report! Cheers, f From ellisonbg at gmail.com Mon Apr 11 00:09:32 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 10 Apr 2011 21:09:32 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On Sat, Apr 9, 2011 at 11:18 PM, Robert Kern wrote: > On 2011-04-10 01:06 , Brian Granger wrote: >> On Fri, Apr 8, 2011 at 2:05 PM, Robert Kern ?wrote: >>> On 4/8/11 3:32 PM, Fernando Perez wrote: >>>> On Fri, Apr 8, 2011 at 1:21 PM, Thomas Kluyver ? 
?wrote: >>>>> >>>>> In fact, now that we're using AST instead of code blocks, we could actually >>>>> do what you suggest. We could check the last node, and only run it >>>>> interactively if it was a single expression. Whether or not that's what we >>>>> want to do, I don't know: any views? >>>> >>>> That's a very good point. ?The fragile heuristics we had were >>>> precisely because we lacked this information. ?But I really do like >>>> this suggestion, because I think it provides the most intuitive >>>> semantics. ?Things like: >>>> >>>> for i in range(10): >>>> ? ? plot(foo[i]) >>>> >>>> won't produce 10 different Out[] outputs, and yet any last-block >>>> expression, even if it contains some multline string or other complex >>>> formatting that makes it be more than one *line of text* will still be >>>> executed interactively, yielding just one result. >>>> >>>> I'm very much +1 on this idea. ?Big benefit of your recent tackling >>>> inputsplitter!! Awesome. >>> >>> I don't think we need to solve this in the splitter. You can do everything you >>> need to do in the display trap. The logic is very simple. The display trap sets >>> the displayhook before the code In[N] is executed. Every time the displayhook >>> gets called during this execution, the display trap records the object and its >>> formatted representation, overwriting whatever was there. Once the execution is >>> done, *then* the formatted representation is given to the reply message (or >>> printed if in the terminal frontend) and the history DB (if it is storing >>> outputs), and the object is shoved into Out[N]. Then the display trap is cleared >>> before In[N+1] is executed. >> >> This is a good point that is worth investigating. ?The only hitch is >> that the formatted representation is not sent in the reply to the >> original code execution request. ?It is published over the PUB socket, >> which makes sure that gets sent to all frontends. 
This means that the >> display hook data can be sent out to all frontends *before* the >> execute reply message goes back. But we might still be able to get >> this to work. It is attractive because it would move this logic out of >> the splitter, where it seems a bit out of place. > > Replace "reply message" with "displayhook PUB message" in what I wrote. You just > need to explicitly tell the DisplayHook when to actually send the message/print > the repr in the actual execution cycle. It should not do it on every __call__. > That's all. That was the design I had in the ipwx prototype way back when. I don't think it is that simple, as the display hook PUB message gets sent out immediately: a = 10 a # the PUB message happens here, before blocks are done being executed time.sleep(10) Thus, the display hook *can't* have the information it needs to make the decision about what to publish. Upon further thinking, I don't see how we can get around this. Brian > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E.
Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From robert.kern at gmail.com Mon Apr 11 00:27:30 2011 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 10 Apr 2011 23:27:30 -0500 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On 2011-04-10 23:09 , Brian Granger wrote: > On Sat, Apr 9, 2011 at 11:18 PM, Robert Kern wrote: >> On 2011-04-10 01:06 , Brian Granger wrote: >>> On Fri, Apr 8, 2011 at 2:05 PM, Robert Kern wrote: >>>> On 4/8/11 3:32 PM, Fernando Perez wrote: >>>>> On Fri, Apr 8, 2011 at 1:21 PM, Thomas Kluyver wrote: >>>>>> >>>>>> In fact, now that we're using AST instead of code blocks, we could actually >>>>>> do what you suggest. We could check the last node, and only run it >>>>>> interactively if it was a single expression. Whether or not that's what we >>>>>> want to do, I don't know: any views? >>>>> >>>>> That's a very good point. The fragile heuristics we had were >>>>> precisely because we lacked this information. But I really do like >>>>> this suggestion, because I think it provides the most intuitive >>>>> semantics. Things like: >>>>> >>>>> for i in range(10): >>>>> plot(foo[i]) >>>>> >>>>> won't produce 10 different Out[] outputs, and yet any last-block >>>>> expression, even if it contains some multline string or other complex >>>>> formatting that makes it be more than one *line of text* will still be >>>>> executed interactively, yielding just one result. >>>>> >>>>> I'm very much +1 on this idea. Big benefit of your recent tackling >>>>> inputsplitter!! Awesome. >>>> >>>> I don't think we need to solve this in the splitter. You can do everything you >>>> need to do in the display trap. The logic is very simple. The display trap sets >>>> the displayhook before the code In[N] is executed. 
Every time the displayhook >>>> gets called during this execution, the display trap records the object and its >>>> formatted representation, overwriting whatever was there. Once the execution is >>>> done, *then* the formatted representation is given to the reply message (or >>>> printed if in the terminal frontend) and the history DB (if it is storing >>>> outputs), and the object is shoved into Out[N]. Then the display trap is cleared >>>> before In[N+1] is executed. >>> >>> This is a good point that is worth investigating. The only hitch is >>> that the formatted representation is not sent in the reply to the >>> original code execution request. It is published over the PUB socket, >>> which makes sure that it gets sent to all frontends. This means that the >>> display hook data can be sent out to all frontends *before* the >>> execute reply message goes back. But we might still be able to get >>> this to work. It is attractive because it would move this logic out of >>> the splitter, where it seems a bit out of place. >> >> Replace "reply message" with "displayhook PUB message" in what I wrote. You just >> need to explicitly tell the DisplayHook when to actually send the message/print >> the repr in the actual execution cycle. It should not do it on every __call__. >> That's all. That was the design I had in the ipwx prototype way back when. > > I don't think it is that simple as the display hook PUB message gets > sent out immediately: > > a = 10 > > a # the PUB message happens here, before blocks are done being executed > > time.sleep(10) > > Thus, the display hook *can't* have the information it needs to make > the decision about what to publish. Upon further thinking, I don't > see how we can get around this. I'm not sure how much more clearly I can state this: just don't make the DisplayHook send the PUB message in its __call__(). Have the __call__() only collect the object and the formatted representations.
Add another method that actually formulates the PUB message and sends it out. Call that method explicitly after the cell has finished executing. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ellisonbg at gmail.com Mon Apr 11 01:28:58 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 10 Apr 2011 22:28:58 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: On Sun, Apr 10, 2011 at 9:27 PM, Robert Kern wrote: > On 2011-04-10 23:09 , Brian Granger wrote: >> On Sat, Apr 9, 2011 at 11:18 PM, Robert Kern wrote: >>> On 2011-04-10 01:06 , Brian Granger wrote: >>>> On Fri, Apr 8, 2011 at 2:05 PM, Robert Kern wrote: >>>>> On 4/8/11 3:32 PM, Fernando Perez wrote: >>>>>> On Fri, Apr 8, 2011 at 1:21 PM, Thomas Kluyver wrote: >>>>>>> >>>>>>> In fact, now that we're using AST instead of code blocks, we could actually >>>>>>> do what you suggest. We could check the last node, and only run it >>>>>>> interactively if it was a single expression. Whether or not that's what we >>>>>>> want to do, I don't know: any views? >>>>>> >>>>>> That's a very good point. The fragile heuristics we had were >>>>>> precisely because we lacked this information. But I really do like >>>>>> this suggestion, because I think it provides the most intuitive >>>>>> semantics. Things like: >>>>>> >>>>>> for i in range(10): >>>>>>     plot(foo[i]) >>>>>> >>>>>> won't produce 10 different Out[] outputs, and yet any last-block >>>>>> expression, even if it contains some multiline string or other complex >>>>>> formatting that makes it be more than one *line of text* will still be >>>>>> executed interactively, yielding just one result. >>>>>> >>>>>> I'm very much +1 on this idea.
?Big benefit of your recent tackling >>>>>> inputsplitter!! Awesome. >>>>> >>>>> I don't think we need to solve this in the splitter. You can do everything you >>>>> need to do in the display trap. The logic is very simple. The display trap sets >>>>> the displayhook before the code In[N] is executed. Every time the displayhook >>>>> gets called during this execution, the display trap records the object and its >>>>> formatted representation, overwriting whatever was there. Once the execution is >>>>> done, *then* the formatted representation is given to the reply message (or >>>>> printed if in the terminal frontend) and the history DB (if it is storing >>>>> outputs), and the object is shoved into Out[N]. Then the display trap is cleared >>>>> before In[N+1] is executed. >>>> >>>> This is a good point that is worth investigating. ?The only hitch is >>>> that the formatted representation is not sent in the reply to the >>>> original code execution request. ?It is published over the PUB socket, >>>> which makes sure that gets sent to all frontends. ?This means that the >>>> display hook data can be sent out to all frontends *before* the >>>> execute reply message goes back. ?But we might still be able to get >>>> this to work. ?It is attractive because it would be this logic out of >>>> the splitter, where it seems a bit out of place. >>> >>> Replace "reply message" with "displayhook PUB message" in what I wrote. You just >>> need to explicitly tell the DisplayHook when to actually send the message/print >>> the repr in the actual execution cycle. It should not do it on every __call__. >>> That's all. That was the design I had in the ipwx prototype way back when. >> >> I don't think it is that simple as the display hook PUB message gets >> send out immediately: >> >> a = 10 >> >> a ? ? 
# the PUB message happens here, before blocks are done being executed >> >> time.sleep(10) >> >> Thus, the display hook *can't* have the information it needs to make >> the decision about what to publish. Upon further thinking, I don't >> see how we can get around this. > > I'm not sure how much more clearly I can state this: just don't make the > DisplayHook send the PUB message in its __call__(). Have the __call__() only > collect the object and the formatted representations. Add another method that > actually formulates the PUB message and sends it out. Call that method explicitly > after the cell has finished executing. I do understand the model you are proposing. Implementing it would require that the sending of the PUB message be delayed until the cell has finished executing. It is this delay that our current model is incompatible with. Any traffic (stdout, stderr, display hook) that goes out over the PUB channel is sent *immediately*. This is required so that the user sees these things in real time as the code is executed, not after. Cheers, Brian > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E.
Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From ellisonbg at gmail.com Mon Apr 11 02:23:56 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 10 Apr 2011 23:23:56 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: On Sun, Apr 10, 2011 at 8:18 PM, Fernando Perez wrote: > Hi all, > > now that our codebase is starting to settle down into the shape the > next release will look like, it would be great if we start getting > more feedback. ?In particular, Evan Patterson has done a great job of > fine-tuning the Qt console so that it's a really useful tool for > everyday work, and while we've tried to be much more disciplined about > testing things now, the reality of GUI apps is that some things are > very difficult to test short of humans using them. > > So if you would like to start using it, and reporting what doesn't > work either here or even better, on the bug tracker, that would be > great. ?Evan today just merged a change that fixes what in my eyes was > one of the major usability issues left: when you moved out of a cell > you had already edited, you lost your unexecuted changes. ?This could > be very annoying in practice, but now the console automatically > remembers these changes. ?I think with this issue fixed, it makes for > a great environment to experiment and work in. ?My favorite way to > start it is with this alias: > > alias iqlab='ipython-qtconsole --paging vsplit --pylab' > > and sometimes I'll add "inline" if I want inline figures (I'm working > on a patch to allow toggling of inline/floating figures at runtime, > but it's not ready yet). > > So use it, pound on it, let us know how it goes. ?We'd like this to be > really a production-ready tool when 0.11 goes out. 
> > A big thanks to Evan for the great work he's done and being so > responsive, as well as Mark and others who have pitched in with the Qt > code. +1. Thanks Evan! Cheers, Brian > And as always, a pull request is even better than a bug report! > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From jason-sage at creativetrax.com Mon Apr 11 06:17:42 2011 From: jason-sage at creativetrax.com (Jason Grout) Date: Mon, 11 Apr 2011 05:17:42 -0500 Subject: [IPython-dev] messaging protocol In-Reply-To: References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> Message-ID: <4DA2D546.90105@creativetrax.com> On 4/11/11 12:28 AM, Brian Granger wrote: > Any traffic (stdout, stderr, display hook) that is > sent out over the PUB channel is send*immediately*. This is required > so that the users sees these things in real time as the code is > executed, not after. Remember that the use-case here is that only the last pyout output is displayed, so it seems perfectly reasonable for the displayhook to delay sending until after the cell is executed. This is not the use-case where we want interactive pyout output from every command. The one case where I see a possible problem is when you have something like: a=1 a print "done" Then I suppose the pyout output would either be the pyout of a, or the pyout "None" (I don't know offhand which it would be). If it was the pyout of a, then the pyout would be sent after the printed stdout output, which would seem weird. 
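For concreteness, the deferred-displayhook pattern being debated in this thread can be sketched in plain Python. The class and method names below (`DeferredDisplayHook`, `flush`) are illustrative only, not IPython's actual API: each top-level statement is compiled in 'single' mode so that bare expressions go through `sys.displayhook`, the hook merely records the latest value, and a separate flush step emits it after the whole cell has run. This reproduces exactly the ordering Jason describes, where the printed output appears before the deferred pyout.

```python
import ast
import sys

class DeferredDisplayHook:
    """Record the most recent expression value; emit it only when flushed."""
    def __init__(self):
        self.pending = None  # (object, formatted repr) of the latest value

    def __call__(self, obj):
        if obj is not None:
            self.pending = (obj, repr(obj))  # overwrite any earlier value

    def flush(self):
        # In the kernel, this is where the single PUB message would be sent.
        if self.pending is not None:
            print(self.pending[1])
            self.pending = None

cell = "a = 1\na\nprint('done')"
ns = {}
hook = DeferredDisplayHook()
old_hook, sys.displayhook = sys.displayhook, hook
try:
    for node in ast.parse(cell).body:
        # 'single' mode routes bare expression values through sys.displayhook
        exec(compile(ast.Interactive(body=[node]), "<cell>", "single"), ns)
finally:
    sys.displayhook = old_hook
hook.flush()  # 'done' was printed during execution; the repr of 1 only now
```

With a hook like this, the cell prints "done" first and the deferred repr of `a` afterwards, which is the "weird" ordering in question.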
Thanks, Jason From dave.hirschfeld at gmail.com Mon Apr 11 07:49:22 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Mon, 11 Apr 2011 11:49:22 +0000 (UTC) Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't References: Message-ID: Fernando Perez gmail.com> writes: > now that our codebase is starting to settle down into the shape the > next release will look like, it would be great if we start getting > more feedback. In particular, Evan Patterson has done a great job of > fine-tuning the Qt console so that it's a really useful tool for > everyday work, and while we've tried to be much more disciplined about > testing things now, the reality of GUI apps is that some things are > very difficult to test short of humans using them. > > So if you would like to start using it, and reporting what doesn't > work either here or even better, on the bug tracker, that would be > great. Evan today just merged a change that fixes what in my eyes was > one of the major usability issues left: when you moved out of a cell > you had already edited, you lost your unexecuted changes. This could > be very annoying in practice, but now the console automatically > remembers these changes. I think with this issue fixed, it makes for > a great environment to experiment and work in. My favorite way to > start it is with this alias: > > alias iqlab='ipython-qtconsole --paging vsplit --pylab' > > and sometimes I'll add "inline" if I want inline figures (I'm working > on a patch to allow toggling of inline/floating figures at runtime, > but it's not ready yet). > > So use it, pound on it, let us know how it goes. We'd like this to be > really a production-ready tool when 0.11 goes out. > > A big thanks to Evan for the great work he's done and being so > responsive, as well as Mark and others who have pitched in with the Qt > code. > > And as always, a pull request is even better than a bug report! 
> > Cheers, > > f > Thanks to everyone involved for all the hard work, I've been eagerly awaiting the 0.11 release but thought I would try to help out on the feedback front by jumping in early. I'm currently running Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)] on Win7 x64 from the Python (x,y) distribution. I'm sure I remembered a discussion about installing 0.11/pyzmq on list but I couldn't find it so I went to the documentation and found: http://ipython.github.com/ipython-doc/dev/install/install.html#installing-the-development-version ...which is obviously a bit out of date. Nevertheless I figured I would just install ZMQ/PyZMQ/IPython from source and see how I went. Compiling ZMQ was fine, however when trying the configure stage of pyzmq it couldn't find the libzmq.lib as it wasn't copied to the zeromq-2.1.4\lib directory. The solution was to simply copy the lib file from the zmq build directory to the lib directory where the dll was located: C:\dev\src\pyzmq>python setup.py configure --zmq=C:\dev\src\zeromq-2.1.4 running configure ****************************************** Configure: Autodetecting ZMQ settings... Custom ZMQ dir: C:\dev\src\zeromq-2.1.4 c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\dev\src\zeromq-2.1.4\include -Izmq\utils -Izmq\core -Izmq\devices /Tcdetect\vers.c /Fodetect\vers.obj vers.c c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\link.exe /nologo /INCREMENTAL:NO /LIBPATH:C:\dev\src\zeromq-2.1.4\lib libzmq.lib detect\vers.obj /OUT:detect\vers.exe /MANIFESTFILE:detect\vers.exe.manifest C:\dev\src\zeromq-2.1.4\lib\libzmq.lib : fatal error LNK1107: invalid or corrupt file: cannot read at 0x2C0 Fatal: Failed to compile ZMQ test program. 
Please check to make sure: * You have a C compiler installed * A development version of Python is installed (including header files) * A development version of ZeroMQ >= 2.1.0 is installed (including header files) * If ZMQ is not in a default location, supply the argument --zmq= ****************************************** C:\dev\src\pyzmq>ls C:/dev/src/zeromq-2.1.4/lib libzmq.dll C:\dev\src\pyzmq>ls C:/dev/src/zeromq-2.1.4/include zmq.h zmq.hpp zmq_utils.h C:\dev\src\pyzmq>ls C:/dev/src/zeromq-2.1.4/builds/msvc/Release/*.lib libzmq.lib C:\dev\src\pyzmq>python setup.py configure --zmq=C:\dev\src\zeromq-2.1.4 running configure ****************************************** Configure: Autodetecting ZMQ settings... Custom ZMQ dir: C:\dev\src\zeromq-2.1.4 c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\dev\src\zeromq-2.1.4\include -Izmq\utils -Izmq\core -Izmq\devices /Tcdetect\vers.c /Fodetect\vers.obj vers.c c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\link.exe /nologo /INCREMENTAL:NO /LIBPATH:C:\dev\src\zeromq-2.1.4\lib libzmq.lib detect\vers.obj /OUT:detect\vers.exe /MANIFESTFILE:detect\vers.exe.manifest C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\mt.exe -nologo -manifest detect\vers.exe.manifest -outputresource:detect\vers.exe;1 ZMQ version detected: 2.1.4 ****************************************** Unfortunately trying to import zmq failed as the libzmq dll wasn't copied to the site-packages directory when running python setup.py install. 
Copying the dll over fixed the problem: In [2]: import zmq --------------------------------------------------------------------------- WindowsError Traceback (most recent call last) C:\dev\src\ in () C:\dev\bin\Python26\lib\site-packages\zmq\__init__.py in () 28 import os, ctypes 29 here = os.path.dirname(__file__) ---> 30 ctypes.cdll.LoadLibrary(os.path.join(here, 'libzmq.dll')) 31 32 from zmq.utils import initthreads # initialize threads WindowsError: [Error 126] The specified module could not be found In [3]: debug > c:\dev\bin\python26\lib\ctypes\__init__.py(431)LoadLibrary() 430 def LoadLibrary(self, name): --> 431 return self._dlltype(name) 432 ipdb> name 'C:\\dev\\bin\\Python26\\lib\\site-packages\\zmq\\libzmq.dll' ipdb> up > c:\dev\bin\python26\lib\site-packages\zmq\__init__.py(30)() 29 here = os.path.dirname(__file__) ---> 30 ctypes.cdll.LoadLibrary(os.path.join(here, 'libzmq.dll')) 31 ipdb> print here None ipdb> print __file__ None In [4]: !cp C:\dev\src\zeromq-2.1.4\lib\libzmq.dll C:\dev\bin\Python26\Lib\site-packages\zmq In [5]: import zmq In [6]: Installing IPython from source went fine, again using python setup.py install. At this stage I was at a bit of a loss as to how to test it, my normal ipython shortcut seemed to still load up my old version 10.1. I found a file "ipython-qtconsole" in my Python26/Scripts directory but without a suffix or associated .bat file it wasn't directly executable. This was easily resolved by creating a bat file iqlab.bat with the contents: @"C:\dev\bin\Python26\python.exe" "C:\dev\bin\Python26\scripts\ipython-qtconsole" --paging vsplit --pylab inline %* The only other point to note was that my version of sip didn't have the setapi function requiring me to update my PyQt4 install. After that however I now have a working qtconsole - it looks great! Inline figures and sympy pretty-printing (using %load_ext sympy_printing) work beautifully. 
Just one question - in "normal" IPython the ability to define magics and pre-import functions into the namespace using the ipy_user_conf files is very handy - is it possible to define magics and imports for the new qtconsole? Let me know if there is anything specific I can do to help with the testing. Thanks, Dave From ellisonbg at gmail.com Mon Apr 11 12:34:23 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 11 Apr 2011 09:34:23 -0700 Subject: [IPython-dev] messaging protocol In-Reply-To: <4DA2D546.90105@creativetrax.com> References: <4D9E646D.1020504@creativetrax.com> <4D9EC18E.50903@creativetrax.com> <4DA2D546.90105@creativetrax.com> Message-ID: Jason, On Mon, Apr 11, 2011 at 3:17 AM, Jason Grout wrote: > On 4/11/11 12:28 AM, Brian Granger wrote: >> Any traffic (stdout, stderr, display hook) that is >> sent out over the PUB channel is sent *immediately*. This is required >> so that the user sees these things in real time as the code is >> executed, not after. > > Remember that the use-case here is that only the last pyout output is > displayed, so it seems perfectly reasonable for the displayhook to delay > sending until after the cell is executed. This is not the use-case > where we want interactive pyout output from every command. Yes, you are right. If we only trigger pyout/dhook on the last block, the code will most likely be done executing (basically no delay). > The one case where I see a possible problem is when you have something like: > > a=1 > a > print "done" > > Then I suppose the pyout output would either be the pyout of a, or the > pyout "None" (I don't know offhand which it would be). If it was the > pyout of a, then the pyout would be sent after the printed stdout > output, which would seem weird.
Fernando and I spent some time last night looking at various options and we think the best option is what Thomas has in this branch: https://github.com/takluyver/ipython/tree/single-output The idea is that dhook will only be displayed for: * The last block in a multiblock expression. * And only if that block is an ast.Expr. This means that the following will trigger the dhook: a # if a is defined... But the following won't: for i in range(10): i This will result in dhook being called either 0 or 1 times. To make this decision, we need the ast information, so it makes sense to handle this in the splitter. > Thanks, > > Jason > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From ellisonbg at gmail.com Mon Apr 11 12:45:22 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 11 Apr 2011 09:45:22 -0700 Subject: [IPython-dev] variable information request In-Reply-To: <4DA2D7C8.9060608@drake.edu> References: <4DA048AE.60504@creativetrax.com> <4DA2D7C8.9060608@drake.edu> Message-ID: Jason, On Mon, Apr 11, 2011 at 3:28 AM, Jason Grout wrote: > On 4/10/11 12:59 AM, Brian Granger wrote: >> >> Jason, >> >> On Sat, Apr 9, 2011 at 4:53 AM, Jason Grout >> ?wrote: >>> >>> Here's a crazy idea that could lead to some nice interactively-updating >>> components of an IPython frontend. ?Right now, in the new messaging >>> protocol, it appears that I can send a computation request and then ask >>> for the output to contain the values of several variables (the >>> "user_variables" field of a execute_reply message). ?However, what if I >>> want to check the value of a variable *during* a computation and get an >>> immediate response? 
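The rule Brian describes (display hook only when the last block is an `ast.Expr`) can be expressed directly with the stdlib `ast` module. The function below is an illustrative sketch of the decision, with a name of my choosing, not IPython's actual splitter code:

```python
import ast

def triggers_displayhook(source):
    """Return True only when the last top-level node of `source` is a
    bare expression (ast.Expr) -- the single case in which the display
    hook would fire under the proposed rule."""
    body = ast.parse(source).body
    return bool(body) and isinstance(body[-1], ast.Expr)
```

Under this check, `"a = 1\na"` triggers the hook, while both the `for` loop example and `"a = 11; b = 12"` end in non-Expr nodes and do not.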
I might, for example, have a box that prints out >>> the current value of a root approximation, or a slider that >>> contains the current iteration number, and I want these to be updated >>> fairly frequently while the computation is running. >> >> I think this is where the display system comes in. We have extended >> the model of the displayhook to allow that machinery to be triggered >> by users anywhere in their code by simply calling our top-level >> display functions: >> >> https://github.com/ipython/ipython/blob/master/IPython/core/display.py >> >> Calling these functions causes a JSON message to be sent to all >> frontends instantly (it doesn't wait for the code to finish). These >> messages can have extra metadata, and a given frontend could easily >> look at that metadata and decide how to display the JSON data. > > But that requires that I actually call the display function from within > code. That answers the case where I am actually writing all of the relevant > code and can anticipate in advance what information a user will want. > Another use-case is allowing the user to just query about whatever > variables they want, at any time, without needing to anticipate what > information needs to be sent ahead of time. Ahh, yes, you want a pull mechanism that allows users to request a variable on the fly, rather than the push approach I described that does require statements in the code. > > >> >>> Here's one way to do it, I think: Run a separate thread that just >>> answers these queries. Since the GIL handles access to variables, I >>> think it's okay for the separate thread to retrieve the value of a >>> variable and return that, and it seamlessly does this while the main >>> thread is carrying on a computation. For example, the following code >>> will print out the current iteration number every time enter is pressed: >> >> Almost always, in Python, threads are not the answer.
This type of >> service would go down the second that non-GIL-releasing extension code >> is run. This is one of the main reasons we have gone to pyzmq, as all >> of its networking stuff is handled in GIL-releasing C++ threads. >> The >> benefit of the existing display logic that uses pyzmq is that it will >> continue to work fine in these situations. > > > It seems that the service wouldn't go down so much as it would be > delayed, right? That seems rather unavoidable if we want to access a Python > variable without corruption. As soon as the GIL was released, the > variables would be queried and values sent. We face the same issue in doing > a display() inside the code---the display() has to run after the non-GIL-releasing > extension code is run too. So I don't see the advantage of using > a display() in the code we are trying to investigate and query, versus using > a separate thread to query values in the currently running code. I agree that for what you are wanting, display is not the right solution. In the threading approach, the user namespace would have to be protected with a lock. The main challenge is that we are extremely wary of growing the number of threads in IPython. Currently we only have one place where we use a thread in the core of IPython: for saving the user's history. This is done to hide the latency of the sqlite writes, and it appears that the GIL is released during this I/O. While I am not ready to add another thread to enable the feature you are asking about, I don't see any reason we couldn't make it possible to implement as some sort of IPython extension. It would also require a new pyzmq channel. Cheers, Brian > Thanks, > > Jason > > -- > Jason Grout > jason.grout at drake.edu > Math and Computer Science Department > Drake University > -- Brian E.
Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From dave.hirschfeld at gmail.com Mon Apr 11 16:24:43 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Mon, 11 Apr 2011 20:24:43 +0000 (UTC) Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't References: Message-ID: I did some testing of the parallel code, but ran into the same problem as: http://article.gmane.org/gmane.comp.python.ipython.user/5616 C:\dev\code>python C:\dev\bin\Python26\Scripts\ipcluster start -n 4 [IPClusterApp] Using existing cluster dir: C:\Users\dhirschfeld\.ipython\cluster_default [IPClusterApp] Cluster directory set to: C:\Users\dhirschfeld\.ipython\cluster_default [IPClusterApp] Starting ipcluster with [daemon=False] [IPClusterApp] Creating pid file: C:\Users\dhirschfeld\.ipython\cluster_default\pid\ipcluster.pid [IPClusterApp] Starting LocalControllerLauncher: ['C:\\dev\\bin\\Python26\\python.exe', '-u', u'C:\\dev\\bin\\Python26\\lib\\site-packages\\IPython\\parallel\\apps\\ipcontrollerapp.py', '--log-to-file', '--log-level', '20', '--cluster-dir', u'C:\\Users\\dhirschfeld\\.ipython\\cluster_default'] [IPClusterApp] Process 'C:\\dev\\bin\\Python26\\python.exe' started: 7096 [IPClusterApp] IPython cluster: started Assertion failed: Socket operation on non-socket (..\..\..\src\zmq.cpp:632) I sent an email to the Users list but it appears to have gotten lost. So I've reproduced it below: Starting the controller and engines separately appeared to work: c:\dev\code>python C:\dev\bin\Python26\Scripts\ipcontroller [IPControllerApp] Using existing cluster dir: C:\Users\dhirschfeld\.ipython\cluster_default [IPControllerApp] Cluster directory set to: C:\Users\dhirschfeld\.ipython\cluster_default [IPControllerApp] Hub listening on tcp://127.0.0.1:57543 for registration.
[IPControllerApp] Hub using DB backend: 'IPython.parallel.controller.dictdb.DictDB' [IPControllerApp] hub::created hub [IPControllerApp] task::using Python leastload Task scheduler [IPControllerApp] Heartmonitor started [IPControllerApp] Creating pid file: C:\Users\dhirschfeld\.ipython\cluster_default\pid\ipcontroller.pid tcp://127.0.0.1:57564 tcp://127.0.0.1:57565 tcp://127.0.0.1:57544 tcp://127.0.0.1:57557 Scheduler started... c:\dev\code>python C:\dev\bin\Python26\Scripts\ipengine [IPEngineApp] Using existing cluster dir: C:\Users\dhirschfeld\.ipython\cluster_default [IPEngineApp] Cluster directory set to: C:\Users\dhirschfeld\.ipython\cluster_default [IPEngineApp] registering [IPEngineApp] Completed registration with id 0 ... In IPython I can connect to the engines but any attempt to do any calculation results in each engine dying with the error "Assertion failed: Invalid argument (..\..\..\src\zmq.cpp:632)" in the console. from IPython.parallel import Client rc = Client() rc.ids Out[3]: [0, 1] dview = rc[:] parallel_result = dview.map_sync(lambda x: x**10, range(32)) --------------------------------------------------------------------------- CompositeError Traceback (most recent call last) C:\dev\bin\Python26\Scripts\ in () ----> 1 parallel_result = dview.map_sync(lambda x: x**10, range(32)) C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc in map_sync(self, f, *sequences, **kwargs) 336 raise TypeError("map_sync doesn't take a `block` keyword argument.") 337 kwargs['block'] = True --> 338 return self.map(f,*sequences,**kwargs) 339 340 def imap(self, f, *sequences, **kwargs): C:\dev\bin\Python26\Scripts\ in map(self, f, *sequences, **kwargs) C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc in spin_after(f, self, *args, **kwargs) 62 def spin_after(f, self, *args, **kwargs): 63 """call spin after the method.""" ---> 64 ret = f(self, *args, **kwargs) 65 self.spin() 66 return ret 
C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc in map(self, f, *sequences, **kwargs) 571 assert len(sequences) > 0, "must have some sequences to map onto!" 572 pf = ParallelFunction(self, f, block=block, **kwargs) --> 573 return pf.map(*sequences) 574 575 def execute(self, code, targets=None, block=None): C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\remotefunction.pyc in map(self, *sequences) 193 self._map = True 194 try: --> 195 ret = self.__call__(*sequences) 196 finally: 197 del self._map C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\remotefunction.pyc in __call__(self, *sequences) 179 if self.block: 180 try: --> 181 return r.get() 182 except KeyboardInterrupt: 183 return r C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\asyncresult.pyc in get(self, timeout) 96 return self._result 97 else: ---> 98 raise self._exception 99 else: 100 raise error.TimeoutError("Result not ready.") CompositeError: one or more exceptions from call to method: [Engine Exception]EngineError: Engine 0 died while running task 'c89ab757-1db6-4976-a7aa-86b859fe8f4f' [Engine Exception]EngineError: Engine 1 died while running task '436ffdf6-c082-45e0-a7f1-b09c10c74fe4' NB: pyzmq tests seem to pass except for the one below, which seems like it's getting tripped up over a deprecation warning. C:\dev\bin\Python26\Lib\site-packages\zmq\tests>nosetests --pdb-failures ................................S...............................> c:\dev\bin\python26\lib\site-packages\zmq\tests\__init__.py(104) assertRaisesErrno() -> got '%s'" % (zmq.ZMQError(errno), zmq.ZMQError(e.errno))) (Pdb) print e.message C:\dev\bin\Python26\Lib\site-packages\zmq\tests\__init__.py:1: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 (Pdb) exit F........
====================================================================== FAIL: test_create (zmq.tests.test_socket.TestSocket) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\dev\bin\Python26\Lib\site-packages\zmq\tests\test_socket.py", line 47, in test_create self.assertRaisesErrno(zmq.EPROTONOSUPPORT, s.bind, 'ftl://a') File "C:\dev\bin\Python26\Lib\site-packages\zmq\tests\__init__.py", line 104, in assertRaisesErrno got '%s'" % (zmq.ZMQError(errno), zmq.ZMQError(e.errno))) AssertionError: wrong error raised, expected 'Unknown error' got 'Protocol not supported' ---------------------------------------------------------------------- Ran 73 tests in 29.960s FAILED (SKIP=1, failures=1) HTH, Dave From benjaminrk at gmail.com Mon Apr 11 16:32:03 2011 From: benjaminrk at gmail.com (MinRK) Date: Mon, 11 Apr 2011 13:32:03 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: Thanks for the report. Can I ask how you installed zmq/pyzmq? What is the output of: zmq.zmq_version() zmq.pyzmq_version() ? By any chance, are you using EPD 7? I'll have to find a Windows machine and do some digging around. 
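Bug reports like Dave's are easier to act on when the version strings MinRK asks for are gathered defensively, so the report stays useful even if the import itself is broken. A hypothetical helper of mine, not an IPython or pyzmq API:

```python
def zmq_version_report():
    """Return the libzmq/pyzmq version strings, or the import error,
    as one line suitable for pasting into a bug report."""
    try:
        import zmq
    except ImportError as e:
        # Still worth reporting: says *why* pyzmq is unusable.
        return "pyzmq not importable: %s" % e
    return "libzmq %s, pyzmq %s" % (zmq.zmq_version(), zmq.pyzmq_version())
```

Either branch produces a one-line string, so the snippet can be pasted blind into a failing environment.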
-MinRK On Mon, Apr 11, 2011 at 13:24, Dave Hirschfeld wrote: > I did some testing of the parallel code, but ran into the same problem as: > > http://article.gmane.org/gmane.comp.python.ipython.user/5616 > > C:\dev\code>python C:\dev\bin\Python26\Scripts\ipcluster start -n 4 > [IPClusterApp] Using existing cluster dir: > C:\Users\dhirschfeld\.ipython\cluster_default > [IPClusterApp] Cluster directory set to: > C:\Users\dhirschfeld\.ipython\cluster_default > [IPClusterApp] Starting ipcluster with [daemon=False] > [IPClusterApp] Creating pid file: > C:\Users\dhirschfeld\.ipython\cluster_default\pid\ipcluster.pid > [IPClusterApp] Starting LocalControllerLauncher: ['C:\\dev\\bin\\Python26 > \\python.exe', '-u', u'C:\\dev\\bin\\Python26\\lib\\site-packages\\IPython > \\parallel\\apps\\ipcontrollerapp.py', '--log-to-file', '--log-level', > '20', > '--cluster-dir', u'C:\\Users\\dhirschfeld\\.ipython\\cluster_default'] > [IPClusterApp] Process 'C:\\dev\\bin\\Python26\\python.exe' started: 7096 > [IPClusterApp] IPython cluster: started > Assertion failed: Socket operation on non-socket (..\..\..\src\zmq.cpp:632) > > I sent an email to the Users list but it appears to have gotten lost. So > I've > reproduced it below: > > > Starting the controller and engines seperately appeared to work: > > c:\dev\code>python C:\dev\bin\Python26\Scripts\ipcontroller > [IPControllerApp] Using existing cluster dir: > C:\Users\dhirschfeld\.ipython\cluster_default > [IPControllerApp] Cluster directory set to: > C:\Users\dhirschfeld\.ipython\cluster_default > [IPControllerApp] Hub listening on tcp://127.0.0.1:57543 for registration. 
> [IPControllerApp] Hub using DB backend: > 'IPython.parallel.controller.dictdb.DictDB' > [IPControllerApp] hub::created hub > [IPControllerApp] task::using Python leastload Task scheduler > [IPControllerApp] Heartmonitor started > [IPControllerApp] Creating pid file: > C:\Users\dhirschfeld\.ipython\cluster_default\pid\ipcontroller.pid > tcp://127.0.0.1:57564 > tcp://127.0.0.1:57565 > tcp://127.0.0.1:57544 > tcp://127.0.0.1:57557 > Scheduler started... > > > > c:\dev\code>python C:\dev\bin\Python26\Scripts\ipengine > [IPEngineApp] Using existing cluster dir: > C:\Users\dhirschfeld\.ipython\cluster_default > [IPEngineApp] Cluster directory set to: > C:\Users\dhirschfeld\.ipython\cluster_default > [IPEngineApp] registering > [IPEngineApp] Completed registration with id 0 > > ... > > In IPython I can connect to the engines but any attempt to do any > calculation > results in each engine dying with the error > "Assertion failed: Invalid argument (..\..\..\src\zmq.cpp:632)" in the > console. 
> > > from IPython.parallel import Client > > rc = Client() > > rc.ids > Out[3]: [0, 1] > > dview = rc[:] > > parallel_result = dview.map_sync(lambda x: x**10, range(32)) > --------------------------------------------------------------------------- > CompositeError Traceback (most recent call last) > C:\dev\bin\Python26\Scripts\ in () > ----> 1 parallel_result = dview.map_sync(lambda x: x**10, range(32)) > > C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc > in map_sync(self, f, *sequences, **kwargs) > 336 raise TypeError("map_sync doesn't take a `block` keyword > argument.") > 337 kwargs['block'] = True > --> 338 return self.map(f,*sequences,**kwargs) > 339 > 340 def imap(self, f, *sequences, **kwargs): > > C:\dev\bin\Python26\Scripts\ in map(self, f, *sequences, **kwargs) > > C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc > in spin_after(f, self, *args, **kwargs) > 62 def spin_after(f, self, *args, **kwargs): > 63 """call spin after the method.""" > ---> 64 ret = f(self, *args, **kwargs) > 65 self.spin() > 66 return ret > > C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc > in map(self, f, *sequences, **kwargs) > 571 assert len(sequences) > 0, > "must have some sequences to map onto!" 
> 572 pf = ParallelFunction(self, f, block=block, **kwargs) > --> 573 return pf.map(*sequences) > 574 > 575 def execute(self, code, targets=None, block=None): > > > C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\remotefunction.pyc > in map(self, *sequences) > 193 self._map = True > 194 try: > --> 195 ret = self.__call__(*sequences) > 196 finally: > 197 del self._map > > > C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\remotefunction.pyc > in __call__(self, *sequences) > 179 if self.block: > 180 try: > --> 181 return r.get() > 182 except KeyboardInterrupt: > 183 return r > > > C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\asyncresult.pyc > in get(self, timeout) > 96 return self._result > 97 else: > ---> 98 raise self._exception > 99 else: > 100 raise error.TimeoutError("Result not ready.") > > CompositeError: one or more exceptions from call to method: > [Engine Exception]EngineError: Engine 0 died while running task > 'c89ab757-1db6-4976-a7aa-86b859fe8f4f' > [Engine Exception]EngineError: Engine 1 died while running task > '436ffdf6-c082-45e0-a7f1-b09c10c74fe4' > > > NB: pyzmq tests seem to pass except for the one below which seems like it's > getting tripped up ove a deprecation warning. > > C:\dev\bin\Python26\Lib\site-packages\zmq\tests>nosetests --pdb-failures > ................................S...............................> > c:\dev\bin\python26\lib\site-packages\zmq\tests\__init__.py(104) > assertRaisesErrno() -> got '%s'" % (zmq.ZMQError(errno), > zmq.ZMQError(e.errno))) > (Pdb) print e.message > C:\dev\bin\Python26\Lib\site-packages\zmq\tests\__init__.py:1: > DeprecationWarning: BaseException.message has been deprecated as of Python > 2.6 > (Pdb) exit > F........ 
> ====================================================================== > FAIL: test_create (zmq.tests.test_socket.TestSocket) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "C:\dev\bin\Python26\Lib\site-packages\zmq\tests\test_socket.py", > line > 47, in test_create > self.assertRaisesErrno(zmq.EPROTONOSUPPORT, s.bind, 'ftl://a') > File "C:\dev\bin\Python26\Lib\site-packages\zmq\tests\__init__.py", line > 104, > in assertRaisesErrno > got '%s'" % (zmq.ZMQError(errno), zmq.ZMQError(e.errno))) > AssertionError: wrong error raised, expected 'Unknown error' got 'Protocol > not > supported' > > ---------------------------------------------------------------------- > Ran 73 tests in 29.960s > > FAILED (SKIP=1, failures=1) > > HTH, > Dave > > > > > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Mon Apr 11 16:38:22 2011 From: benjaminrk at gmail.com (MinRK) Date: Mon, 11 Apr 2011 13:38:22 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: apologies for the last email, I missed your first message, and only saw the second. On Mon, Apr 11, 2011 at 13:32, MinRK wrote: > Thanks for the report. Can I ask how you installed zmq/pyzmq? > > What is the output of: > > zmq.zmq_version() > zmq.pyzmq_version() > > ? > > By any chance, are you using EPD 7? > > I'll have to find a Windows machine and do some digging around. 
> > -MinRK > > On Mon, Apr 11, 2011 at 13:24, Dave Hirschfeld wrote: > >> I did some testing of the parallel code, but ran into the same problem as: >> >> http://article.gmane.org/gmane.comp.python.ipython.user/5616 >> >> C:\dev\code>python C:\dev\bin\Python26\Scripts\ipcluster start -n 4 >> [IPClusterApp] Using existing cluster dir: >> C:\Users\dhirschfeld\.ipython\cluster_default >> [IPClusterApp] Cluster directory set to: >> C:\Users\dhirschfeld\.ipython\cluster_default >> [IPClusterApp] Starting ipcluster with [daemon=False] >> [IPClusterApp] Creating pid file: >> C:\Users\dhirschfeld\.ipython\cluster_default\pid\ipcluster.pid >> [IPClusterApp] Starting LocalControllerLauncher: ['C:\\dev\\bin\\Python26 >> \\python.exe', '-u', u'C:\\dev\\bin\\Python26\\lib\\site-packages\\IPython >> \\parallel\\apps\\ipcontrollerapp.py', '--log-to-file', '--log-level', >> '20', >> '--cluster-dir', u'C:\\Users\\dhirschfeld\\.ipython\\cluster_default'] >> [IPClusterApp] Process 'C:\\dev\\bin\\Python26\\python.exe' started: 7096 >> [IPClusterApp] IPython cluster: started >> Assertion failed: Socket operation on non-socket >> (..\..\..\src\zmq.cpp:632) >> >> I sent an email to the Users list but it appears to have gotten lost. So >> I've >> reproduced it below: >> >> >> Starting the controller and engines seperately appeared to work: >> >> c:\dev\code>python C:\dev\bin\Python26\Scripts\ipcontroller >> [IPControllerApp] Using existing cluster dir: >> C:\Users\dhirschfeld\.ipython\cluster_default >> [IPControllerApp] Cluster directory set to: >> C:\Users\dhirschfeld\.ipython\cluster_default >> [IPControllerApp] Hub listening on tcp://127.0.0.1:57543 for >> registration. 
>> [IPControllerApp] Hub using DB backend: >> 'IPython.parallel.controller.dictdb.DictDB' >> [IPControllerApp] hub::created hub >> [IPControllerApp] task::using Python leastload Task scheduler >> [IPControllerApp] Heartmonitor started >> [IPControllerApp] Creating pid file: >> C:\Users\dhirschfeld\.ipython\cluster_default\pid\ipcontroller.pid >> tcp://127.0.0.1:57564 >> tcp://127.0.0.1:57565 >> tcp://127.0.0.1:57544 >> tcp://127.0.0.1:57557 >> Scheduler started... >> >> >> >> c:\dev\code>python C:\dev\bin\Python26\Scripts\ipengine >> [IPEngineApp] Using existing cluster dir: >> C:\Users\dhirschfeld\.ipython\cluster_default >> [IPEngineApp] Cluster directory set to: >> C:\Users\dhirschfeld\.ipython\cluster_default >> [IPEngineApp] registering >> [IPEngineApp] Completed registration with id 0 >> >> ... >> >> In IPython I can connect to the engines but any attempt to do any >> calculation >> results in each engine dying with the error >> "Assertion failed: Invalid argument (..\..\..\src\zmq.cpp:632)" in the >> console. 
>> >> >> from IPython.parallel import Client >> >> rc = Client() >> >> rc.ids >> Out[3]: [0, 1] >> >> dview = rc[:] >> >> parallel_result = dview.map_sync(lambda x: x**10, range(32)) >> >> --------------------------------------------------------------------------- >> CompositeError Traceback (most recent call >> last) >> C:\dev\bin\Python26\Scripts\ in () >> ----> 1 parallel_result = dview.map_sync(lambda x: x**10, range(32)) >> >> C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc >> in map_sync(self, f, *sequences, **kwargs) >> 336 raise TypeError("map_sync doesn't take a `block` >> keyword >> argument.") >> 337 kwargs['block'] = True >> --> 338 return self.map(f,*sequences,**kwargs) >> 339 >> 340 def imap(self, f, *sequences, **kwargs): >> >> C:\dev\bin\Python26\Scripts\ in map(self, f, *sequences, **kwargs) >> >> C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc >> in spin_after(f, self, *args, **kwargs) >> 62 def spin_after(f, self, *args, **kwargs): >> 63 """call spin after the method.""" >> ---> 64 ret = f(self, *args, **kwargs) >> 65 self.spin() >> 66 return ret >> >> C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\view.pyc >> in map(self, f, *sequences, **kwargs) >> 571 assert len(sequences) > 0, >> "must have some sequences to map onto!" 
>> 572 pf = ParallelFunction(self, f, block=block, **kwargs) >> --> 573 return pf.map(*sequences) >> 574 >> 575 def execute(self, code, targets=None, block=None): >> >> >> C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\remotefunction.pyc >> in map(self, *sequences) >> 193 self._map = True >> 194 try: >> --> 195 ret = self.__call__(*sequences) >> 196 finally: >> 197 del self._map >> >> >> C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\remotefunction.pyc >> in __call__(self, *sequences) >> 179 if self.block: >> 180 try: >> --> 181 return r.get() >> 182 except KeyboardInterrupt: >> 183 return r >> >> >> C:\dev\bin\Python26\lib\site-packages\IPython\parallel\client\asyncresult.pyc >> in get(self, timeout) >> 96 return self._result >> 97 else: >> ---> 98 raise self._exception >> 99 else: >> 100 raise error.TimeoutError("Result not ready.") >> >> CompositeError: one or more exceptions from call to method: >> [Engine Exception]EngineError: Engine 0 died while running task >> 'c89ab757-1db6-4976-a7aa-86b859fe8f4f' >> [Engine Exception]EngineError: Engine 1 died while running task >> '436ffdf6-c082-45e0-a7f1-b09c10c74fe4' >> >> >> NB: pyzmq tests seem to pass except for the one below which seems like >> it's >> getting tripped up ove a deprecation warning. >> >> C:\dev\bin\Python26\Lib\site-packages\zmq\tests>nosetests --pdb-failures >> ................................S...............................> >> c:\dev\bin\python26\lib\site-packages\zmq\tests\__init__.py(104) >> assertRaisesErrno() -> got '%s'" % (zmq.ZMQError(errno), >> zmq.ZMQError(e.errno))) >> (Pdb) print e.message >> C:\dev\bin\Python26\Lib\site-packages\zmq\tests\__init__.py:1: >> DeprecationWarning: BaseException.message has been deprecated as of Python >> 2.6 >> (Pdb) exit >> F........ 
>> ====================================================================== >> FAIL: test_create (zmq.tests.test_socket.TestSocket) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "C:\dev\bin\Python26\Lib\site-packages\zmq\tests\test_socket.py", >> line >> 47, in test_create >> self.assertRaisesErrno(zmq.EPROTONOSUPPORT, s.bind, 'ftl://a') >> File "C:\dev\bin\Python26\Lib\site-packages\zmq\tests\__init__.py", line >> 104, >> in assertRaisesErrno >> got '%s'" % (zmq.ZMQError(errno), zmq.ZMQError(e.errno))) >> AssertionError: wrong error raised, expected 'Unknown error' got 'Protocol >> not >> supported' >> >> ---------------------------------------------------------------------- >> Ran 73 tests in 29.960s >> >> FAILED (SKIP=1, failures=1) >> >> HTH, >> Dave >> >> >> >> >> >> >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.hirschfeld at gmail.com Mon Apr 11 16:42:29 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Mon, 11 Apr 2011 20:42:29 +0000 (UTC) Subject: [IPython-dev] =?utf-8?q?Qt_console_making_great_strides=2C=09plea?= =?utf-8?q?se_use_it_and_let_us_know_what_works=2C_what_doesn=27t?= References: Message-ID: Dave Hirschfeld gmail.com> writes: > > I did some testing of the parallel code, but ran into the same problem as: > > http://article.gmane.org/gmane.comp.python.ipython.user/5616 > On Mon, Apr 11, 2011 at 13:32, MinRK wrote: > Thanks for the report. Can I ask how you installed zmq/pyzmq? 
> > What is the output of: > > zmq.zmq_version() > zmq.pyzmq_version() FWIW the results are: In [2]: import zmq In [3]: zmq.zmq_version() Out[3]: '2.1.4' In [4]: zmq.pyzmq_version() Out[4]: '2.1dev' from Python(x,y) 2.6.5.6 -Dave From benjaminrk at gmail.com Mon Apr 11 16:58:28 2011 From: benjaminrk at gmail.com (MinRK) Date: Mon, 11 Apr 2011 13:58:28 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: And what are the values of: zmq.EPROTONOSUPPORT and zmq.strerror(zmq.EPROTONOSUPPORT) ? The fact that the zmq error code doesn't match itself in your pyzmq test worries me, but maybe it's a Windows thing (though not on my Windows VMs). Under what circumstances are '.lib' files needed in addition to '.dll'? I've never needed it. I'm also surprised that pyzmq didn't copy libzmq.dll along with it, as it is in the package_data for setup. Can you post the complete output of a pyzmq build and install (after your libzmq.dll/lib fixes)? I cannot replicate these problems on my XP VM. On Mon, Apr 11, 2011 at 13:42, Dave Hirschfeld wrote: > > Dave Hirschfeld gmail.com> writes: > > > > > I did some testing of the parallel code, but ran into the same problem as: > > > > http://article.gmane.org/gmane.comp.python.ipython.user/5616 > > > > On Mon, Apr 11, 2011 at 13:32, MinRK wrote: > > Thanks for the report. ?Can I ask how you installed zmq/pyzmq? 
> > What is the output of: > > > > zmq.zmq_version() > > zmq.pyzmq_version() > > FWIW the results are: > > In [2]: import zmq > > In [3]: zmq.zmq_version() > Out[3]: '2.1.4' > > In [4]: zmq.pyzmq_version() > Out[4]: '2.1dev' > > from Python(x,y) 2.6.5.6 > > -Dave > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From tdimiduk at physics.harvard.edu Mon Apr 11 17:05:56 2011 From: tdimiduk at physics.harvard.edu (Tom Dimiduk) Date: Mon, 11 Apr 2011 17:05:56 -0400 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: <4DA36D34.4020501@physics.harvard.edu> ctrl-k does not appear to be yanking properly (I cannot paste again with ctrl-y), at least for me. I am new on the list, so apologies if this is a known limitation. Loving the new console though. Tom On 04/10/2011 11:18 PM, Fernando Perez wrote: > Hi all, > > now that our codebase is starting to settle down into the shape the > next release will look like, it would be great if we start getting > more feedback. In particular, Evan Patterson has done a great job of > fine-tuning the Qt console so that it's a really useful tool for > everyday work, and while we've tried to be much more disciplined about > testing things now, the reality of GUI apps is that some things are > very difficult to test short of humans using them. > > So if you would like to start using it, and reporting what doesn't > work either here or even better, on the bug tracker, that would be > great. Evan today just merged a change that fixes what in my eyes was > one of the major usability issues left: when you moved out of a cell > you had already edited, you lost your unexecuted changes. This could > be very annoying in practice, but now the console automatically > remembers these changes.
I think with this issue fixed, it makes for > a great environment to experiment and work in. My favorite way to > start it is with this alias: > > alias iqlab='ipython-qtconsole --paging vsplit --pylab' > > and sometimes I'll add "inline" if I want inline figures (I'm working > on a patch to allow toggling of inline/floating figures at runtime, > but it's not ready yet). > > So use it, pound on it, let us know how it goes. We'd like this to be > really a production-ready tool when 0.11 goes out. > > A big thanks to Evan for the great work he's done and being so > responsive, as well as Mark and others who have pitched in with the Qt > code. > > And as always, a pull request is even better than a bug report! > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From takowl at gmail.com Mon Apr 11 17:39:15 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Mon, 11 Apr 2011 22:39:15 +0100 Subject: [IPython-dev] Pylabtools test failing Message-ID: I'm getting this error message in the test suite, from the recently added test_pylabtools.py: https://gist.github.com/914402 Is this just that my version of matplotlib is too old to have the relevant function? I have 0.99.3 (as shipped with Ubuntu maverick). Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From efiring at hawaii.edu Mon Apr 11 17:42:44 2011 From: efiring at hawaii.edu (Eric Firing) Date: Mon, 11 Apr 2011 11:42:44 -1000 Subject: [IPython-dev] Pylabtools test failing In-Reply-To: References: Message-ID: <4DA375D4.5040608@hawaii.edu> On 04/11/2011 11:39 AM, Thomas Kluyver wrote: > I'm getting this error message in the test suite, from the recently > added test_pylabtools.py: https://gist.github.com/914402 > > Is this just that my version of matplotlib is too old to have the > relevant function? I have 0.99.3 (as shipped with Ubuntu maverick). 
Yes, that's too old. Eric > > Thomas From songofacandy at gmail.com Mon Apr 11 20:22:45 2011 From: songofacandy at gmail.com (INADA Naoki) Date: Tue, 12 Apr 2011 09:22:45 +0900 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: I can't use input method (IBus) on Ubuntu Natty. I'm using Natty's Python and PyQt packages. And I compiled most recent released version of zmq, pyzmq. I don't know why QTextEdit widgets in console_widget doesn't support input method. On Mon, Apr 11, 2011 at 12:18 PM, Fernando Perez wrote: > Hi all, > > now that our codebase is starting to settle down into the shape the > next release will look like, it would be great if we start getting > more feedback. In particular, Evan Patterson has done a great job of > fine-tuning the Qt console so that it's a really useful tool for > everyday work, and while we've tried to be much more disciplined about > testing things now, the reality of GUI apps is that some things are > very difficult to test short of humans using them. > > So if you would like to start using it, and reporting what doesn't > work either here or even better, on the bug tracker, that would be > great. Evan today just merged a change that fixes what in my eyes was > one of the major usability issues left: when you moved out of a cell > you had already edited, you lost your unexecuted changes. This could > be very annoying in practice, but now the console automatically > remembers these changes.
We'd like this to be > really a production-ready tool when 0.11 goes out. > > A big thanks to Evan for the great work he's done and being so > responsive, as well as Mark and others who have pitched in with the Qt > code. > > And as always, a pull request is even better than a bug report! > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- INADA Naoki From fperez.net at gmail.com Mon Apr 11 20:50:06 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 11 Apr 2011 17:50:06 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: <4DA36D34.4020501@physics.harvard.edu> References: <4DA36D34.4020501@physics.harvard.edu> Message-ID: On Mon, Apr 11, 2011 at 2:05 PM, Tom Dimiduk wrote: > ctrl-k does not appear to be yanking properly (I cannot paste again with > ctrl-y), https://github.com/ipython/ipython/issues/366 thanks for the report From fperez.net at gmail.com Mon Apr 11 20:52:06 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 11 Apr 2011 17:52:06 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: On Mon, Apr 11, 2011 at 5:22 PM, INADA Naoki wrote: > I can't use input method (IBus) on Ubuntu Natty. > > I'm using Natty's Python and PyQt packages. And I compiled most recent > released version of zmq, pyzmq. > I don't know why QTextEdit widgets in console_widget doesn't support > input method.
Thanks for the report: https://github.com/ipython/ipython/issues/367 Cheers, f From fperez.net at gmail.com Mon Apr 11 20:59:24 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 11 Apr 2011 17:59:24 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: Hi Dave, On Mon, Apr 11, 2011 at 4:49 AM, Dave Hirschfeld wrote: > Thanks to everyone involved for all the hard work, I've been eagerly awaiting > the 0.11 release but thought I would try to help out on the feedback front by > jumping in early. sorry for the troubles! I guess you've realized by now that none of the core devs are terribly experts in windows-specific issues. But we do want this to be easy to install and fully functional on windows at release time, so hopefully with a bit of help from users like you we'll get there. Things are being tracked now here: https://github.com/ipython/ipython/issues/365 https://github.com/ipython/ipython/issues/368 https://github.com/ipython/ipython/issues/369 thanks for your persistence! Cheers, f From fperez.net at gmail.com Mon Apr 11 21:06:20 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 11 Apr 2011 18:06:20 -0700 Subject: [IPython-dev] Pylabtools test failing In-Reply-To: References: Message-ID: On Mon, Apr 11, 2011 at 2:39 PM, Thomas Kluyver wrote: > Is this just that my version of matplotlib is too old to have the relevant > function? I have 0.99.3 (as shipped with Ubuntu maverick). > sorry about that, fixed the test to use the old api, pushed in 76f17ab. f From fperez.net at gmail.com Tue Apr 12 03:06:01 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 12 Apr 2011 00:06:01 -0700 Subject: [IPython-dev] Revival of %store (pspersistence)? 
In-Reply-To: <201011011456.32914.mark.voorhies@ucsf.edu> References: <201011011720.41814.hans_meine@gmx.net> <201011011456.32914.mark.voorhies@ucsf.edu> Message-ID: Hi Mark, On Mon, Nov 1, 2010 at 2:56 PM, Mark Voorhies wrote: > R saves session data in $PWD/.Rdata and restores it when R is re-invoked from $PWD. > The usual workflow is to have project-specific data directories and invoke R next to > the data that the session will operate on. R frontends (e.g., ESS for Emacs) support this > workflow by prompting for a working directory when invoked (and R prompts for saving > session data on exit). Would this be a useful way to make persistent data loading in > IPython "deliberate but automatic"? something along these lines would probably be great to have eventually (even if we don't have the bandwidth to implement it quite now). Would you care to open an issue with some of these thoughts and any other info you can think of on the design? That would also be a place to have a discussion of how to tie a session storage mechanism (even if it can't pickle/save everything) with our new, more powerful history handling that uses sqlite. Cheers, f From dave.hirschfeld at gmail.com Tue Apr 12 07:37:03 2011 From: dave.hirschfeld at gmail.com (David Hirschfeld) Date: Tue, 12 Apr 2011 12:37:03 +0100 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: On Mon, Apr 11, 2011 at 9:58 PM, MinRK wrote: > And what are the values of: > zmq.EPROTONOSUPPORT > and > zmq.strerror(zmq.EPROTONOSUPPORT) > > ? > > The fact that the zmq error code doesn't match itself in your pyzmq > test worries me, but maybe it's a Windows thing (though not on my > Windows VMs). > > Under what circumstances are '.lib' files needed in addition to > '.dll'? I've never needed it. > > I'm also surprised that pyzmq didn't copy libzmq.dll along with it, as > it is in the package_data for setup.
?Can you post the complete output > of a pyzmq build and install (after your libzmq.dll/lib fixes)? ?I > cannot replicate these problems on my XP VM. > > On Mon, Apr 11, 2011 at 13:42, Dave Hirschfeld > wrote: >> >> FWIW the results are: >> >> In [2]: import zmq >> >> In [3]: zmq.zmq_version() >> Out[3]: '2.1.4' >> >> In [4]: zmq.pyzmq_version() >> Out[4]: '2.1dev' >> >> from Python(x,y) 2.6.5.6 >> >> -Dave >> In [5]: zmq.EPROTONOSUPPORT Out[5]: 156384714 In [6]: zmq.strerror(zmq.EPROTONOSUPPORT) Out[6]: 'Unknown error' After building ZMQ 2.1.4 with VS2010 the contents of the lib and include directories are: C:\dev\src\pyzmq>ls C:/dev/src/zeromq-2.1.4/lib libzmq.dll C:\dev\src\pyzmq>ls C:/dev/src/zeromq-2.1.4/include zmq.h ?zmq.hpp ?zmq_utils.h i.e. no libzmq.lib in C:/dev/src/zeromq-2.1.4/lib. At this point running the configure stage of pyzmq gives: C:\dev\src\pyzmq>python setup.py configure --zmq=C:\dev\src\zeromq-2.1.4 running configure ****************************************** Configure: Autodetecting ZMQ settings... ? ?Custom ZMQ dir: ? ? ? C:\dev\src\zeromq-2.1.4 c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\dev\src\zeromq-2.1.4\include -Izmq\utils -Izmq\core -Izmq\devices /Tcdetect\vers.c /Fodetect\vers.obj vers.c c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\link.exe /nologo /INCREMENTAL:NO /LIBPATH:C:\dev\src\zeromq-2.1.4\lib libzmq.lib detect\vers.obj /OUT:detect\vers.exe /MANIFESTFILE:detect\vers.exe.manifest LINK : fatal error LNK1181: cannot open input file 'libzmq.lib' Fatal: ? ?Failed to compile ZMQ test program. ?Please check to make sure: ? ?* You have a C compiler installed ? ?* A development version of Python is installed (including header files) ? ?* A development version of ZeroMQ >= 2.1.0 is installed (including header ? ? ?files) ? ?* If ZMQ is not in a default location, supply the argument --zmq= ****************************************** -i.e. 
it's looking for libzmq.lib In case it helps I patched line 241 of setup.py to save the exception as exc and debugged it in IPython: > c:\dev\src\pyzmq\setup.py(255)run() ? ?254 ? ? ? ? ? ? print ("*"*42) --> 255 ? ? ? ? ? ? self.erase_tempdir() ? ?256 ? ? ? ? self.config = config ipdb> exc LinkError(DistutilsExecError('command \'"c:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\VC\\BIN\\link.exe"\' failed with exit status 1181',),) ipdb> settings {'libraries': ['libzmq'], 'library_dirs': ['C:/dev/src/zeromq-2.1.4\\lib'], 'extra_compile_args': [], 'include_dirs': ['C:/dev/src/zeromq-2.1.4\\include', ? ? ? ? ? ? ? ? ?'zmq\\utils', 'zmq\\core', 'zmq\\devices']} I'm no expert but I think .lib files are compile/link time dependencies and .dlls are only runtime dependencies so I don't think you can get around copying the libzmq.lib file to the lib (C:/dev/src/zeromq-2.1.4/lib in my case) directory. Anyway, if you do that the configure runs fine: C:\dev\src\pyzmq>python setup.py configure --zmq=C:\dev\src\zeromq-2.1.4 running configure ****************************************** Configure: Autodetecting ZMQ settings... ? ?Custom ZMQ dir: ? ? ? C:\dev\src\zeromq-2.1.4 c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\dev\src\zeromq-2.1.4\include -Izmq\utils -Izmq\core -Izmq\devices /Tcdetect\vers.c /Fodetect\vers.obj vers.c c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\link.exe /nologo /INCREMENTAL:NO /LIBPATH:C:\dev\src\zeromq-2.1.4\lib libzmq.lib detect\vers.obj /OUT:detect\vers.exe /MANIFESTFILE:detect\vers.exe.manifest C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\mt.exe -nologo -manifest detect\vers.exe.manifest -outputresource:detect\vers.exe;1 ? ?ZMQ version detected: 2.1.4 ****************************************** Running the install command: C:\dev\src\pyzmq> python setup.py install 2>&1 | tee pyzmq.build.log ...seemed to run fine too. 
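In other words, the configure step boils down to settings like the dict shown above, and the link step has nothing to consume without a libzmq.lib under the lib dir. A minimal sketch of how those settings derive from the --zmq prefix (illustrative only, this mirrors the printed settings dict and is not pyzmq's actual code):

```python
import os

def zmq_build_settings(zmq_prefix):
    """Sketch of the compiler settings derived from --zmq=<prefix>.

    On Windows, libzmq.lib is a link-time dependency and libzmq.dll a
    run-time one, so building pyzmq needs the .lib under <prefix>/lib
    even though only the .dll is needed to run.
    """
    return {
        "libraries": ["libzmq"],
        "library_dirs": [os.path.join(zmq_prefix, "lib")],
        "include_dirs": [os.path.join(zmq_prefix, "include")],
        "extra_compile_args": [],
    }

settings = zmq_build_settings("C:/dev/src/zeromq-2.1.4")
```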
Lines 45 & 159 of the attached log would suggest that the dll is being copied over: 45 copying zmq\libzmq.dll -> build\lib.win32-2.6\zmq 159 copying build\lib.win32-2.6\zmq\libzmq.dll ?-> ? ? C:\dev\bin\Python26\Lib\site-packages\zmq ...and indeed libzmq.dll is there this time so I think we can chalk that one up to user error, sorry. This still leaves us with the zmq error: Assertion failed: Socket operation on non-socket (c:\dev\src\zeromq-2.1.4\src\zmq.cpp:632) ...whenever we try to use the parallel code. I next patched line 29 of zmq/__init__.py to set `here` to point to the output of the Visual Studio ZMQ project: here = r'C:\dev\src\zeromq-2.1.4\lib' and ran the ipcluster script from the Visual Studio debugger. As expected the debugger stopped at line 632 of zmq.cpp in the zmq_poll function. Unfortunately that's the limit of my capabilities :( I've attached the output of the locals window at the point of the exception in case it helps. -Dave -------------- next part -------------- A non-text attachment was scrubbed... Name: pyzmq.build.log Type: application/octet-stream Size: 25538 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: locals.png Type: image/png Size: 67424 bytes Desc: not available URL: From dave.hirschfeld at gmail.com Thu Apr 14 13:09:43 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Thu, 14 Apr 2011 17:09:43 +0000 (UTC) Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't References: Message-ID: Fernando Perez gmail.com> writes: > > Hi all, > > now that our codebase is starting to settle down into the shape the > next release will look like, it would be great if we start getting > more feedback. 
In particular, Evan Patterson has done a great job of > fine-tuning the Qt console so that it's a really useful tool for > everyday work, and while we've tried to be much more disciplined about > testing things now, the reality of GUI apps is that some things are > very difficult to test short of humans using them. > > So if you would like to start using it, and reporting what doesn't > work either here or even better, on the bug tracker, that would be > great. > > Cheers, > > f > I saw issues #318 & #319, but I'm not sure this is the same thing. It appears the %debug doesn't work in the qt-console? On Win7 x64 using Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)] I get the following traceback: In [1]: def f(): ...: 1/0 ...: In [2]: f() --------------------------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) C:\dev\bin\Python26\Scripts\ in () ----> 1 f() C:\dev\bin\Python26\Scripts\ in f() 1 def f(): ----> 2 1/0 3 ZeroDivisionError: integer division or modulo by zero In [3]: debug --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) C:\dev\bin\Python26\Scripts\ in () ----> 1 get_ipython().magic(u"debug ") C:\dev\bin\Python26\lib\site-packages\IPython\core\interactiveshell.pyc in magic(self, arg_s) 1783 self._magic_locals = sys._getframe(1).f_locals 1784 with self.builtin_trap: -> 1785 result = fn(magic_args) 1786 # Ensure we're not keeping object references around: 1787 self._magic_locals = {} C:\dev\bin\Python26\lib\site-packages\IPython\core\magic.pyc in magic_debug(self, parameter_s) 1265 the %pdb magic for more details. 
1266 """ -> 1267 self.shell.debugger(force=True) 1268 1269 @testdec.skip_doctest C:\dev\bin\Python26\lib\site-packages\IPython\core\interactiveshell.pyc in debugger(self, force) 782 pm = lambda : self.InteractiveTB.debugger(force=True) 783 --> 784 with self.readline_no_record: 785 pm() 786 C:\dev\bin\Python26\lib\site-packages\IPython\core\interactiveshell.pyc in __enter__(self) 145 def __enter__(self): 146 if self._nested_level == 0: --> 147 self.orig_length = self.current_length() 148 self.readline_tail = self.get_readline_tail() 149 self._nested_level += 1 C:\dev\bin\Python26\lib\site-packages\IPython\core\interactiveshell.pyc in current_length(self) 166 167 def current_length(self): --> 168 return self.shell.readline.get_current_history_length() 169 170 def get_readline_tail(self, n=10): AttributeError: 'module' object has no attribute 'get_current_history_length' In [5]: Thanks, Dave From benjaminrk at gmail.com Thu Apr 14 14:06:20 2011 From: benjaminrk at gmail.com (MinRK) Date: Thu, 14 Apr 2011 11:06:20 -0700 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: On Thu, Apr 14, 2011 at 10:09, Dave Hirschfeld wrote: > Fernando Perez gmail.com> writes: > >> >> Hi all, >> >> now that our codebase is starting to settle down into the shape the >> next release will look like, it would be great if we start getting >> more feedback. ?In particular, Evan Patterson has done a great job of >> fine-tuning the Qt console so that it's a really useful tool for >> everyday work, and while we've tried to be much more disciplined about >> testing things now, the reality of GUI apps is that some things are >> very difficult to test short of humans using them. >> >> So if you would like to start using it, and reporting what doesn't >> work either here or even better, on the bug tracker, that would be >> great. 
>> >> Cheers, >> >> f >> > > I saw issues #318 & #319, but I'm not sure this is the same thing. It appears > the %debug doesn't work in the qt-console? On Win7 x64 using Python 2.6.5 > (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)] I get the > following traceback: > > In [1]: def f(): > ? ...: 1/0 > ? ...: > > In [2]: f() > --------------------------------------------------------------------------- > ZeroDivisionError Traceback (most recent call last) > C:\dev\bin\Python26\Scripts\ in () > ----> 1 f() > > C:\dev\bin\Python26\Scripts\ in f() > ? ? ?1 def f(): > ----> 2 1/0 > ? ? ?3 > > ZeroDivisionError: integer division or modulo by zero > > In [3]: debug > --------------------------------------------------------------------------- > AttributeError Traceback (most recent call last) > C:\dev\bin\Python26\Scripts\ in () > ----> 1 get_ipython().magic(u"debug ") > > > C:\dev\bin\Python26\lib\site-packages\IPython\core\interactiveshell.pyc in > magic(self, arg_s) > ? ?1783 self._magic_locals = sys._getframe(1).f_locals > ? ?1784 with self.builtin_trap: > -> ?1785 result = fn(magic_args) > ? ?1786 # Ensure we're not keeping object references around: > > ? ?1787 self._magic_locals = {} > > > C:\dev\bin\Python26\lib\site-packages\IPython\core\magic.pyc in > magic_debug(self, parameter_s) > ? ?1265 the %pdb magic for more details. > ? ?1266 """ > -> ?1267 self.shell.debugger(force=True) > ? ?1268 > ? ?1269 @testdec.skip_doctest > > > C:\dev\bin\Python26\lib\site-packages\IPython\core\interactiveshell.pyc in > debugger(self, force) > ? ?782 pm = lambda : self.InteractiveTB.debugger(force=True) > ? ?783 > --> 784 with self.readline_no_record: > ? ?785 pm() > ? ?786 > > > C:\dev\bin\Python26\lib\site-packages\IPython\core\interactiveshell.pyc in > __enter__(self) > ? ?145 def __enter__(self): > ? ?146 if self._nested_level == 0: > --> 147 self.orig_length = self.current_length() > ? ?148 self.readline_tail = self.get_readline_tail() > ? 
?149 self._nested_level += 1 > > > C:\dev\bin\Python26\lib\site-packages\IPython\core\interactiveshell.pyc in > current_length(self) > ? ?166 > ? ?167 def current_length(self): > --> 168 return self.shell.readline.get_current_history_length() > ? ?169 > ? ?170 def get_readline_tail(self, n=10): > > AttributeError: 'module' object has no attribute 'get_current_history_length' > > In [5]: > > Thanks, > Dave This is actually the pyreadline issue brought up in the discussion on #375. The history was rewritten pretty significantly, but resulted in falling out of sync with pyreadline on Windows. I believe Thomas is working on handling pyreadline's difference. -MinRK > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > From takowl at gmail.com Thu Apr 14 15:16:40 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 14 Apr 2011 20:16:40 +0100 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: On 14 April 2011 19:06, MinRK wrote: > This is actually the pyreadline issue brought up in the discussion on > #375. The history was rewritten pretty significantly, > but resulted in falling out of sync with pyreadline on Windows. I > believe Thomas is working on handling pyreadline's difference. > Can you (and anyone with Windows) test my pyreadline-errors branch: https://github.com/takluyver/ipython/tree/pyreadline-errors This catches the errors coming from the differences in pyreadline. For now, we're not doing anything, so commands you enter in a debug session will show up when you press Up afterwards. It's easy enough to change that to always reset readline, but that will mean a fairly expensive operation to transfer 1000 lines from the database to readline, even when it's not necessary. But that branch should stop the errors, at least. 
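The guard amounts to this pattern (a rough sketch with an illustrative stand-in module, not the actual code in the branch):

```python
class FakeReadline(object):
    """Stand-in for a readline module (e.g. pyreadline) missing some
    GNU readline APIs such as get_current_history_length()."""
    pass

def current_history_length(readline_mod, default=0):
    # pyreadline may not provide get_current_history_length(); fall back
    # quietly instead of raising AttributeError inside %debug.
    getter = getattr(readline_mod, "get_current_history_length", None)
    if getter is None:
        return default
    return getter()

print(current_history_length(FakeReadline()))  # falls back to 0
```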
Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorgen.stenarson at bostream.nu Thu Apr 14 15:36:01 2011 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Thu, 14 Apr 2011 21:36:01 +0200 Subject: [IPython-dev] Qt console making great strides, please use it and let us know what works, what doesn't In-Reply-To: References: Message-ID: <4DA74CA1.2010202@bostream.nu> Thomas Kluyver skrev 2011-04-14 21:16: > On 14 April 2011 19:06, MinRK > wrote: > > This is actually the pyreadline issue brought up in the discussion on > #375. The history was rewritten pretty significantly, > but resulted in falling out of sync with pyreadline on Windows. I > believe Thomas is working on handling pyreadline's difference. > > > Can you (and anyone with Windows) test my pyreadline-errors branch: > > https://github.com/takluyver/ipython/tree/pyreadline-errors > > This catches the errors coming from the differences in pyreadline. For > now, we're not doing anything, so commands you enter in a debug session > will show up when you press Up afterwards. It's easy enough to change > that to always reset readline, but that will mean a fairly expensive > operation to transfer 1000 lines from the database to readline, even > when it's not necessary. But that branch should stop the errors, at least. > Hi, I have just released a new pyreadline 1.7 which should make the apis compatible. 
/Jörgen From fperez.net at gmail.com Thu Apr 14 16:39:49 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 14 Apr 2011 13:39:49 -0700 Subject: [IPython-dev] Using ipcluster behind restrictive login setups Message-ID: Hi all, A note on a tool that may come in handy for those of you who wish to run ipcluster in environments with very restrictive setups for remote logins, such as those requiring multifactor authentication where the ssh tunneling for zmq would require multiple activations of the token (one for each zmq connection being tunneled): https://github.com/apenwarr/sshuttle We already have some reports of users in such scenarios using it successfully to simplify their entire login and control process, so I figured I'd mention it here until we have a chance to document it more permanently. regards, f From fperez.net at gmail.com Fri Apr 15 18:53:57 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 15 Apr 2011 15:53:57 -0700 Subject: [IPython-dev] Nasty bug in master: fully broken handling of interactive namespace Message-ID: https://github.com/ipython/ipython/issues/387 Unfortunately I need to run, I just post it here in case anyone outside of the core team (we all get the github email) wants to earn hero brownie points over the weekend... From fperez.net at gmail.com Fri Apr 15 19:04:43 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 15 Apr 2011 16:04:43 -0700 Subject: [IPython-dev] Nasty bug in master: fully broken handling of interactive namespace In-Reply-To: References: Message-ID: Hi all, On Fri, Apr 15, 2011 at 3:53 PM, Fernando Perez wrote: > https://github.com/ipython/ipython/issues/387 > > Unfortunately I need to run, I just post it here in case anyone > outside of the core team (we all get the github email) wants to earn > hero brownie points over the weekend... semi-false alarm: Min pointed out to me the bug is in a branch currently under review, not in master.
I hadn't realized that after doing the review, I hadn't checked back to master. We still need to sort out the problem, but fortunately we caught it in the branch (though I saw it accidentally while working interactively). f From erik.tollerud at gmail.com Tue Apr 19 03:49:43 2011 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Tue, 19 Apr 2011 00:49:43 -0700 Subject: [IPython-dev] Screen-like use-case implementation questions Message-ID: I've been thinking about this ever since the qt console appeared, but it seems like now that zmq has hit master, it might be a good time to actually look more into implementing this idea. As I see it, the IPython two-process model is an excellent way to replace a common use case that is currently (clunkily) only addressed via GNU Screen: I regularly log into a computer on which a long-running python process is operating, but on a connection that I can't keep open (typically sshing into a desktop while I am traveling). Screen has been a reasonable way to deal with this, as it doesn't care if my ssh connection drops - the terminal will just connect back up when I came back later. But of course it is not ideal as it is not very adaptable. An IPython kernel that I can disconnect from and reconnect to (like the current qtconsole) with the most appropriate frontend for the occasion is a great answer to this use case... A few things are preventing this right now though: First, there's no simple way to manage a kernel in the way needed for this use case - but this is probably just a matter of subclassing KernelManager to create a scheme to store simple names that can be used to map onto all the necessary ZMQ connections and the like. Crucially, however, there needs to be a way to connect to the kernel through a terminal-based frontend, so that I can easily ssh in without needing to go through the qt client, as that would be very slow using X forwarding. 
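The naming scheme could be as simple as dropping each kernel's ZMQ connection info into a per-name file that any frontend can look up later. A sketch of the idea (everything here is hypothetical: the registry layout, function names, and fields are illustrative, not an existing IPython API):

```python
import json
import os
import tempfile

# Hypothetical registry mapping a human-friendly kernel name to the ZMQ
# ports a frontend needs in order to reconnect.  Just a sketch.
REGISTRY_DIR = os.path.join(tempfile.gettempdir(), "kernel-registry-demo")

def register_kernel(name, connection_info):
    # One JSON file per named kernel.
    if not os.path.isdir(REGISTRY_DIR):
        os.makedirs(REGISTRY_DIR)
    with open(os.path.join(REGISTRY_DIR, name + ".json"), "w") as f:
        json.dump(connection_info, f)

def lookup_kernel(name):
    # A reconnecting frontend reads the ports back by name.
    with open(os.path.join(REGISTRY_DIR, name + ".json")) as f:
        return json.load(f)

register_kernel("longrun", {"ip": "127.0.0.1", "shell_port": 5555,
                            "iopub_port": 5556, "stdin_port": 5557})
info = lookup_kernel("longrun")
```

A KernelManager subclass could consult such a registry instead of asking the user for raw port numbers.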
But as I understand the TerminalInteractiveShell code, it seems to be implemented separately from the ZMQInteractiveShell. This says to me that there is no easy way to connect the existing terminal shell to a ZMQ kernel - is that the case? Or is there some way I'm not understanding to connect to a ZMQ shell directly from the TerminalInteractiveShell? (Also, a nice but not critical piece would be to use this in a system-shell like mode a la the pysh profile, as then there would be no need for screen at all! In the current master, this doesn't completely work because calling anything from IPython that requires stdin doesn't seem to work, as it doesn't seem to get passed from ipython to the subprocess. A note in the docs suggests this is intentional/unfixable, but I vauguely remember some discussion on the list a while back about passing stdin from frontends into the kernels - is that possible/implemented?) -- Erik Tollerud From takowl at gmail.com Tue Apr 19 05:33:23 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 19 Apr 2011 10:33:23 +0100 Subject: [IPython-dev] Screen-like use-case implementation questions In-Reply-To: References: Message-ID: On 19 April 2011 08:49, Erik Tollerud wrote: > as I understand the TerminalInteractiveShell code, > it seems to be implemented separately from the ZMQInteractiveShell. > This says to me that there is no easy way to connect the existing > terminal shell to a ZMQ kernel - is that the case? Or is there some > way I'm not understanding to connect to a ZMQ shell directly from the > TerminalInteractiveShell? > You're right, the TerminalInteractiveShell is separate, and it runs in a single process, without any communication layer. I believe Omar was working on a prototype of a two-process terminal client communicating over ZMQ. Omar, any word on what sort of shape that's in? Thomas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ludwig.schwardt at gmail.com Wed Apr 20 03:10:20 2011 From: ludwig.schwardt at gmail.com (Ludwig Schwardt) Date: Wed, 20 Apr 2011 09:10:20 +0200 Subject: [IPython-dev] let ipython return parent's class method docstring automatically Message-ID: Hi, >From what I can tell, the patch mentioned below never made it home... It would be a really nice feature to have! Ludwig On Wed, Jul 23, 2008 at 17:25 PM, Ondrej Certik wrote: >On Wed, Jul 23, 2008 at 11:59 PM, Fernando Perez wrote: >> On Wed, Jul 23, 2008 at 1:11 PM, Ondrej Certik wrote: >> >>> The parent class, in this case Basic, has a nice docstring. The thing >>> is, that the Basic class has the docstring, but the child classes >>> don't (obviously, because it'd be the same). >>> What is the best way to handle this? >>> >>> We can see two possibilities: >>> >>> 1) patch ipython to return parent's (or parent's parent's) docstring. >>> I checked that and the patch would be just a few lines of code in >>> OInspect.py >> >> I think this is a fairly generic problem, that of children who >> override parent methods but don't rewrite the docstrings. So I'd be >> happy with ipython making the user's life easier out of the box, >> though I think in this case we should say something like: >> >> Docstring : [extracted from parent class foo.bar.baz] >> blah... >> >> so users are aware that they're reading the parent's docstring, just >> in case there are small inconsistencies the developer forgot to >> document. >> >> How does this sound? > >I'll send a patch. > >Ondrej From fperez.net at gmail.com Thu Apr 21 13:56:30 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 21 Apr 2011 10:56:30 -0700 Subject: [IPython-dev] Off-list for a few days, sorry if I've created any bottlenecks... Message-ID: Hi folks, I just wanted to apologize for being off-list this week, I realize a bunch of good work is going on and I haven't provided any feedback. 
I'm deep in a massive grant-writing black hole that will only open up a little next week. Sorry for not giving an earlier warning. Please don't bottleneck on me: if anything looks good and others have reviewed it enough, do move forward. It's my problem if I'm not available :) It's also worth saying that we really welcome code reviews, discussion, etc, from *anyone*. The core devs are there to do the merging, but github makes it very easy for anyone to contribute to the process. And for a core dev, finding a good back-and-forth discussion on a branch review or bug, makes the final decision much easier, since it's possible that all the key issues have already been ironed out by then. So please do jump in, it's a great way to learn the code and help the project, so that the limited time of our small core team is even less of an issue for moving forward. Cheers, f From takowl at gmail.com Fri Apr 22 18:31:24 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 22 Apr 2011 23:31:24 +0100 Subject: [IPython-dev] Off-list for a few days, sorry if I've created any bottlenecks... In-Reply-To: References: Message-ID: I'm also going to be offline for a few days. I think my two pull requests (ipdir unicode, and pickling interactively defined objects) are working, but I look forward to reading any comments when I get back. Thomas On 21 April 2011 18:56, Fernando Perez wrote: > Hi folks, > > I just wanted to apologize for being off-list this week, I realize a > bunch of good work is going on and I haven't provided any feedback. > I'm deep in a massive grant-writing black hole that will only open up > a little next week. Sorry for not giving an earlier warning. > > Please don't bottleneck on me: if anything looks good and others have > reviewed it enough, do move forward. It's my problem if I'm not > available :) > > It's also worth saying that we really welcome code reviews, > discussion, etc, from *anyone*.
The core devs are there to do the merging, > but github makes it very easy for anyone to contribute to the process. > And for a core dev, finding a good back-and-forth discussion on a > branch review or bug, makes the final decision much easier, since it's > possible that all the key issues have already been ironed out by then. > So please do jump in, it's a great way to learn the code and help the > project, so that the limited time of our small core team is even less > of an issue for moving forward. > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Sat Apr 23 18:15:27 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sat, 23 Apr 2011 15:15:27 -0700 Subject: [IPython-dev] stdout when running the test suite Message-ID: Hi, I am running the test suite a lot today as I code, and I am getting a lot of things dumped to stdout or stderr from the test suite. Things like this: ...........................................>autocallable() >list("1", "2", "3") >list("1 2 3") >len(range(1,4)) >list("1", "2", "3") This is even without -vs enabled. Are other people seeing this? Does anyone have thoughts on where to start looking to get rid of this output? Cheers, Brian -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From takowl at gmail.com Sun Apr 24 13:20:30 2011 From: takowl at gmail.com (Thomas Kluyver) Date: Sun, 24 Apr 2011 18:20:30 +0100 Subject: [IPython-dev] stdout when running the test suite In-Reply-To: References: Message-ID: On 23 April 2011 23:15, Brian Granger wrote: > I am running the test suite a lot today as I code, and I am getting a lot > of things dumped to stdout or stderr from the test suite. I've seen those since I started running the test suite.
Specifically in the IPython.core tests. I'd assumed they were just some side effect that didn't concern us. I think IPython replaces sys.stdout, so I've seen code for the test suite that overrides that (in testing/globalipapp). I wonder if this interferes somehow with nose, which must also be redirecting stdout. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Sun Apr 24 16:26:00 2011 From: benjaminrk at gmail.com (MinRK) Date: Sun, 24 Apr 2011 13:26:00 -0700 Subject: [IPython-dev] stdout when running the test suite In-Reply-To: References: Message-ID: This has been true for a *long* time - at least January 2010. I did some checking, and it turns out that it's actually related to io.Term, so I pushed a fix to PR #397, since that's regarding the same code. The Term object created with the InteractiveShell latches onto sys.stdout before nose redirects it (which happens in every test), so anything printed there would still arrive at stdout. I've fixed this in the tests by creating simple StreamProxy objects that are soft links to sys.stdout/err, rather than hard ones, so the Capture plugin should still work with IPython output. -MinRK On Sun, Apr 24, 2011 at 10:20, Thomas Kluyver wrote: > On 23 April 2011 23:15, Brian Granger wrote: >> >> I am running the test suite a lot today as I code, and I am getting a lot >> of things dumped to stdout or stderr from the test suite. > > I've seen those since I started running the test suite. Specifically in the > IPython.core tests. I'd assumed they were just some side effect that didn't > concern us. > I think IPython replaces sys.stdout, so I've seen code for the test suite > that overrides that (in testing/globalipapp). I wonder if this interferes > somehow with nose, which must also be redirecting stdout.
> Thomas > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > From robert.kern at gmail.com Sun Apr 24 20:16:26 2011 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 24 Apr 2011 19:16:26 -0500 Subject: [IPython-dev] stdout when running the test suite In-Reply-To: References: Message-ID: On 4/24/11 3:26 PM, MinRK wrote: > This has been true for a *long* time - at least January 2010. I did > some checking, and it turns out that it's actually related to io.Term, > so I pushed a fix to PR #397, since that's regarding the same code. > > The Term object created with the InteractiveShell latches onto > sys.stdout before nose redirects it (which happens in every test), so > anything printed there would still arrive at stdout. I've fixed this > in the tests by creating simple StreamProxy objects that are soft > links to sys.stdout/err, rather than hard ones, so the Capture plugin > should still work with IPython output. For what it's worth, I've had to do a similar thing in a non-test situation. It might be worthwhile to expose such a thing in io itself. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From benjaminrk at gmail.com Sun Apr 24 22:22:48 2011 From: benjaminrk at gmail.com (MinRK) Date: Sun, 24 Apr 2011 19:22:48 -0700 Subject: [IPython-dev] stdout when running the test suite In-Reply-To: References: Message-ID: On Sun, Apr 24, 2011 at 17:16, Robert Kern wrote: > On 4/24/11 3:26 PM, MinRK wrote: >> This has been true for a *long* time - at least January 2010. I did >> some checking, and it turns out that it's actually related to io.Term, >> so I pushed a fix to PR #397, since that's regarding the same code.
>> >> The Term object created with the InteractiveShell latches onto >> sys.stdout before nose redirects it (which happens in every test), so >> anything printed there would still arrive at stdout. I've fixed this >> in the tests by creating simple StreamProxy objects that are soft >> links to sys.stdout/err, rather than hard ones, so the Capture plugin >> should still work with IPython output. > > For what it's worth, I've had to do a similar thing in a non-test situation. It > might be worthwhile to expose such a thing in io itself. Good to know. For that matter, I'm not sure why we have io.stdout/stderr (previously io.Term.cout/cerr) at all. Under what circumstances do we want streams going to our stdout/stderr to be handled *differently* from sys.stdout/stderr, or a logger? -MinRK > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > From ellisonbg at gmail.com Sun Apr 24 23:50:12 2011 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 24 Apr 2011 20:50:12 -0700 Subject: [IPython-dev] stdout when running the test suite In-Reply-To: References: Message-ID: On Sun, Apr 24, 2011 at 7:22 PM, MinRK wrote: > On Sun, Apr 24, 2011 at 17:16, Robert Kern wrote: >> On 4/24/11 3:26 PM, MinRK wrote: >>> This has been true for a *long* time - at least January 2010. I did >>> some checking, and it turns out that it's actually related to io.Term, >>> so I pushed a fix to PR #397, since that's regarding the same code. >>> >>> The Term object created with the InteractiveShell latches onto >>> sys.stdout before nose redirects it (which happens in every test), so >>> anything printed there would still arrive at stdout.
I've fixed this >>> in the tests by creating simple StreamProxy objects that are soft >>> links to sys.stdout/err, rather than hard ones, so the Capture plugin >>> should still work with IPython output. >> >> For what it's worth, I've had to do a similar thing in a non-test situation. It >> might be worthwhile to expose such a thing in io itself. > > Good to know. For that matter, I'm not sure why we have > io.stdout/stderr (previously io.Term.cout/cerr) at all. > Under what circumstances do we want streams going to our stdout/stderr > to be handled *differently* from sys.stdout/stderr, or a logger? I am not sure how Term came into being. The only reasons I can think of to separate sys.stdout/in/err from io.stdout/in/err are: 1) The versions in io handle colors on Windows. 2) The idea of separating user io (sys.stdout/in/err) from IPython. Now, 1) we are definitely using, and I am not sure we want to run all user io through that. While I like the idea of 2), we don't really take advantage of this separation currently. Although, I imagine a frontend could handle that output differently if it wanted to. In my mind, logging is a completely different issue in terms of function/content, although we still have to decide which streams to run logging on. Cheers, Brian > -MinRK > >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless enigma >> that is made terrible by our own mad attempt to interpret it as though it had >> an underlying truth." >> -- Umberto Eco >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E.
Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From zachary.pincus at yale.edu Fri Apr 29 13:03:19 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Fri, 29 Apr 2011 13:03:19 -0400 Subject: [IPython-dev] ipython in a background thread Message-ID: <840FA957-B8F0-481D-8BC1-7B26921D90B1@yale.edu> Hello all, I find that ipython, specifically ipython running in a good terminal (as opposed to embedded in a GUI window), provides a really great interface for driving hardware (in my case, a microscope and camera). Given this use, it's been important to also have a GUI thread running so I can throw acquired images rapidly up onto a GL canvas or something. For a while, I've used a hacked-up pyglet runloop that would work in a background thread, and set up a simple message-passing system. This got quite kluged quite fast, though, and it turns out this approach won't work at all with Cocoa GUIs on OS X, which seem to require running on thread-0 (a pity). I know people on this list have looked and thought about these issues a lot, so I'd be curious what seems like the best approach in general. (There used to be some code in IPython for doing this with various window toolkits, but I don't see it in the 0.10.2 codebase anymore...) - I could run the GUI from thread-0 and try message-passing to IPython on another thread. Does this work well at all, or is it a huge kluge to get all the readline etc. features working right? - I could run the GUI in a separate process entirely, which would force a much cleaner API, but I'm not sure if pumping images at video rate could be done cleanly. Maybe with shared memmapped arrays? (I've seen some recipes for this sort of thing on the numpy list lately.) - Any other possibilities or thoughts? Thanks a lot, Zach
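The first option above (GUI on thread-0, message-passing to a worker thread) can be sketched with nothing but the standard library. This is only an illustration of the hand-off pattern, not IPython or pyglet API; the names (`acquire`, `gui_poll`, the `frames` queue) are hypothetical stand-ins, and a real GUI would drain the queue from an idle or timer callback rather than a blocking loop:

```python
# Sketch of thread-0 <-> worker message passing via a queue.
# Hypothetical names; a real GUI toolkit would poll from an idle callback.
import queue
import threading

frames = queue.Queue()

def acquire(n):
    """Background thread: pretend to grab n camera frames and hand them off."""
    for i in range(n):
        frames.put(("frame", i))  # payload would be image data in practice
    frames.put(("done", None))    # sentinel so the consumer knows to stop

def gui_poll():
    """Main thread (thread-0): drain the queue until the sentinel arrives."""
    received = []
    while True:
        kind, payload = frames.get()  # blocks; an idle callback would use get_nowait()
        if kind == "done":
            break
        received.append(payload)
    return received

worker = threading.Thread(target=acquire, args=(5,))
worker.start()
result = gui_poll()
worker.join()
print(result)  # [0, 1, 2, 3, 4]
```

The key point for the Cocoa constraint is that only the worker side runs off the main thread; everything the toolkit touches stays on thread-0.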