From edreamleo at gmail.com Sat May 1 12:02:30 2010 From: edreamleo at gmail.com (Edward K. Ream) Date: Sat, 1 May 2010 11:02:30 -0500 Subject: [IPython-dev] notebook-like format for IPythonQt sessions. In-Reply-To: References: Message-ID: On Fri, Apr 30, 2010 at 4:54 PM, Fernando Perez wrote: > On Fri, Apr 30, 2010 at 6:58 AM, Edward K. Ream wrote: >> >> To repeat, I have no idea whether any part of Leo will be of use in >> the project under discussion. But it might be worthwhile to take a >> look at Leo, its Qt interface, its xml file format and its in-core >> data structures. The features I have summarized here have been under >> development for 15+ years. Leo is pure Python and a single code base >> works with Python 2.6 and Python 3.1. > > Thanks for the pointers. Since leo is mit licensed it's compatible > with ipython's license, so we can take ideas from there. Good. One more point to consider. Leo's bridge module makes it possible to use .leo files from platforms other than Leo itself. http://webpages.charter.net/edreamleo/leoBridge.html Edward From chipperb at nc.rr.com Tue May 4 12:27:10 2010 From: chipperb at nc.rr.com (Ralph Blach) Date: Tue, 04 May 2010 12:27:10 -0400 Subject: [IPython-dev] problem with accerciser and ipython Message-ID: <4BE04ADE.7010609@nc.rr.com> I have installed accerciser and ipython 0.10 on a Fedora 12 x86_64 machine: accerciser 1.9.3 ipython 0.10 I bring up Ekiga, go to the contacts table, and enter the following commands: table=acc.queryTable() In [37]: for i in range(table,nRows): What am I doing incorrectly here? I saw in a Google search that the latest had an exec problem. What can I do to fix this problem? Thanks, Chip From gokhansever at gmail.com Tue May 4 13:25:16 2010 From: gokhansever at gmail.com (Gökhan Sever) Date: Tue, 4 May 2010 12:25:16 -0500 Subject: [IPython-dev] problem with accerciser and ipython In-Reply-To: <4BE04ADE.7010609@nc.rr.com> References: <4BE04ADE.7010609@nc.rr.com> Message-ID: On Tue, May 4, 2010 at 11:27 AM, Ralph Blach wrote: > I have installed accerciser and ipython 0.10 on a Fedora 12 x86_64 machine: > accerciser 1.9.3 > ipython 0.10 > > I bring up Ekiga, go to the contacts table, and enter the following > commands: > > table=acc.queryTable() > In [37]: for i in range(table,nRows): > > What is in your table?
You might be calling range with erroneous > arguments. > > I[1]: mydict = {"a":1, "b":2, "c":3} > > I[3]: for i in mydict: > ...: print mydict.keys() > ...: > ...: > ['a', 'c', 'b'] > ['a', 'c', 'b'] > ['a', 'c', 'b'] > > I[4]: range(mydict) > --------------------------------------------------------------------------- > TypeError Traceback (most recent call > last) > TypeError: range() integer end argument expected, got dict. > > -- > G?khan From fperez.net at gmail.com Tue May 4 20:36:41 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 4 May 2010 17:36:41 -0700 Subject: [IPython-dev] Github transition, ready, go on Saturday? Message-ID: Hi all, it seems we have lined up everything we need to transition to github, so here's a last call for any code you may want to bring over. I plan on making a repo with all the 0.X branches plus trunk in, and anything else you point me to right now that can be reviewed. I asked for the same a few weeks ago and nobody suggested anything, so this is a last call. If you forget something it will be possible to later make a diff from bzr and apply the patch over, but large merges from bzr to git will probably be a pain, so now would be the time to bring up anything significant you may have forgotten to mention. I hope to make the switch on Saturday, barring any last-minute problems. Let me know if anyone objects. Matthew pointed this out: http://pypi.python.org/pypi/github-cli/0.2.9 which should make the bug upload even easier than what I was going to use, so I think we're set on the bugs front as well. Cheers, f From edreamleo at gmail.com Wed May 5 10:12:32 2010 From: edreamleo at gmail.com (Edward K. Ream) Date: Wed, 5 May 2010 09:12:32 -0500 Subject: [IPython-dev] Github transition, ready, go on Saturday? In-Reply-To: References: Message-ID: On Tue, May 4, 2010 at 7:36 PM, Fernando Perez wrote: > it seems we have lined up everything we need to transition to github, > so here's a last call for any code you may want to bring over. Re-reading the IPython-dev archives, I see that your initial post on this topic tacitly assumed that the problems couldn't be fixed. Alas, that suppressed my curiosity. I wish I had asked you straight away whether you had contacted the bzr folks. I've found them to be quite helpful. Did you contact them or did you just assume that the problems were intractable? Edward ------------------------------------------------------------------------------ Edward K. Ream email: edreamleo at gmail.com Leo: http://webpages.charter.net/edreamleo/front.html ------------------------------------------------------------------------------ From fperez.net at gmail.com Thu May 6 01:41:48 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 5 May 2010 22:41:48 -0700 Subject: [IPython-dev] Github transition, ready, go on Saturday? In-Reply-To: References: Message-ID: On Wed, May 5, 2010 at 7:12 AM, Edward K. Ream wrote: > > > ?I wish I had asked you straight away whether you had contacted the > bzr folks. ?I've found them to be quite helpful. ?Did you contact them > or did you just assume that the problems were intractable? They were very helpful when I contacted them with specific questions, actually, and for that I'm very thankful. The point is that github and git provide, *for us*, a fundamentally smoother workflow than bzr/lp. 
I wish bzr and lp the very best as projects, but I simply don't have the time to sort out the multiple issues we kept on running into with the tools, when git and github *just work*, out of the box, phenomenally well, without any further hassles. I'm not interested in becoming a full time bzr/lp designer/architect/contributor, I'm afraid. My time for ipython is very limited, and I can choose to spend it fixing bzr or getting work done using git. I've chosen the latter. Cheers, f From edreamleo at gmail.com Thu May 6 10:03:05 2010 From: edreamleo at gmail.com (Edward K. Ream) Date: Thu, 6 May 2010 09:03:05 -0500 Subject: [IPython-dev] Github transition, ready, go on Saturday? In-Reply-To: References: Message-ID: On Thu, May 6, 2010 at 12:41 AM, Fernando Perez wrote: > On Wed, May 5, 2010 at 7:12 AM, Edward K. Ream wrote: >> >> >> ?I wish I had asked you straight away whether you had contacted the >> bzr folks. ?I've found them to be quite helpful. ?Did you contact them >> or did you just assume that the problems were intractable? > > They were very helpful when I contacted them with specific questions, > actually, and for that I'm very thankful. > > The point is that github and git provide, *for us*, a fundamentally > smoother workflow than bzr/lp. ?I wish bzr and lp the very best as > projects, but I simply don't have the time to sort out the multiple > issues we kept on running into with the tools, when git and github > *just work*, out of the box, phenomenally well, without any further > hassles. ?I'm not interested in becoming a full time bzr/lp > designer/architect/contributor, I'm afraid. Neither am I :-) In your opinion, should projects such as Leo that rely on bzr be concerned about its stability? Edward ------------------------------------------------------------------------------ Edward K. Ream email: edreamleo at gmail.com Leo: http://webpages.charter.net/edreamleo/front.html ------------------------------------------------------------------------------ From fperez.net at gmail.com Thu May 6 14:24:24 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 6 May 2010 11:24:24 -0700 Subject: [IPython-dev] Github transition, ready, go on Saturday? In-Reply-To: References: Message-ID: On Thu, May 6, 2010 at 7:03 AM, Edward K. Ream wrote: > > Neither am I :-) ?In your opinion, should projects such as Leo that > rely on bzr be concerned about its stability? I don't think (neither did I ever imply) bzr is in any danger of disappearing: Canonical is solidly behind it, Launchpad is a large system with many projects on it, and the developers seem to be a friendly and hard working bunch. If it fits your workflow needs and you're happy with it, you should be fine. *we* weren't happy with it in ways that a simple switch to a different platform fixed, so we are switching, but I'm not trying to badmouth bzr or seed doubt about it. Switching to git was a decision we made as the most desirable for *our* project, but if bzr fits the needs of *your* project, stay there and continue enjoying its benefits. Cheers, f From matthew.brett at gmail.com Thu May 6 15:08:54 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 6 May 2010 12:08:54 -0700 Subject: [IPython-dev] Another pass at the git workflow docs Message-ID: Hi, I've had another go at the 'gitwash' workflow docs that Fernando pointed y'all to earlier. I've tried to include many of your suggestions. For the Ipython folks: your version will just be a search replace of 'nipy' with 'ipython'.... 
Any feedback would be very welcome. We'll put them up soon... http://github.com/matthew-brett/gitwash https://cirl.berkeley.edu/mb312/gitwash Thanks a lot, Matthew From fperez.net at gmail.com Mon May 10 05:46:34 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 10 May 2010 02:46:34 -0700 Subject: [IPython-dev] Move of IPython development to GitHub completed Message-ID: Hi all, [Cc'ing -user list for those of you who use our source tree, since the bzr one on Launchpad won't be updated further] as per our recent discussions and scheduled plans, the LP repos have all now been moved over to GitHub. I've created this repo: http://github.com/ipython/ipython For now I only have Brian and me added as 'collaborators', simply because I don't have the Github login from anyone else on ~ipython-dev. So those of you on the ~ipython-dev Launchpad team https://launchpad.net/~ipython-dev/+members who would like to continue managing the main repo, please just contact me with your GitHub id. Please note that since GitHub allows trivial collaboration between anyone on the site, without needing to create special 'teams', we won't be making a special '-contrib' repo like we had on Launchpad. That team and associated repos were simply to work around Launchpad's limitations, on github we don't need it. I have migrated all 113 open bugs from Launchpad to GitHub, including links to their parent on LP for reference, as well as their full commit history: http://github.com/ipython/ipython/issues Note that on LP we had 210 listed 'open', but that included many with 'fix committed', which I didn't migrate since once 0.10.1 and 0.11 are out they'll automatically close, and I didn't want to populate our GitHub repo with a ton of effectively dead bugs from day 1. But I was very careful not to lose any important information from old bug reports, the only problem may be some ugly formatting because GitHub uses the markdown syntax and thus renders funny some of the old comments. I'm sure we'll have to do a bit of adjustment to the new setup, but by and large I'm convinced it will make everyone's life easier in the long run. Please let us know of any problems you may encounter. Cheers, f From fperez.net at gmail.com Mon May 10 05:51:53 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 10 May 2010 02:51:53 -0700 Subject: [IPython-dev] Devs on github: fork the main repo Message-ID: Hi folks, As soon as I can I'll include Matthew's instructions: https://cirl.berkeley.edu/mb312/gitwash/ in our real development docs with explicit IPython references. But for now, please use this as our workflow description replacing 'nipy' with 'ipython' in your head. Importantly, for those of you who know you will develop for IPython, start by forking the main repo on http://github.com/ipython/ipython by clicking the 'fork' button, so you have your own copy of the repo to work from, generate pull requests, etc. Cheers, f From ellisonbg at gmail.com Mon May 10 14:52:49 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 10 May 2010 11:52:49 -0700 Subject: [IPython-dev] Devs on github: fork the main repo In-Reply-To: References: Message-ID: > As soon as I can I'll include Matthew's instructions: > > https://cirl.berkeley.edu/mb312/gitwash/ Thanks, this has turned out really nice! My only comment is that I tend to use the name "upstream" rather than mainline. > in our real development docs with explicit IPython references. 
?But > for now, please use this as our workflow description replacing 'nipy' > with 'ipython' in your head. ?Importantly, for those of you who know > you will develop for IPython, start by forking the main repo on > > http://github.com/ipython/ipython > > by clicking the 'fork' button, so you have your own copy of the repo > to work from, generate pull requests, etc. Done! Feels great to have this done!!! :-) Brian > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Mon May 10 14:57:13 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 10 May 2010 11:57:13 -0700 Subject: [IPython-dev] Move of IPython development to GitHub completed In-Reply-To: References: Message-ID: Fernando, Wow! > as per our recent discussions and scheduled plans, the LP repos have > all now been moved over to GitHub. ?I've created this repo: > > http://github.com/ipython/ipython Fantastic! Thanks so much for doing this. I definitely owe you a dinner/beer/coffee. > For now I only have Brian and me added as 'collaborators', simply > because I don't have the Github login from anyone else on > ~ipython-dev. ?So those of you on the ~ipython-dev Launchpad team > > https://launchpad.net/~ipython-dev/+members > > who would like to continue managing the main repo, please just contact > me with your GitHub id. ?Please note that since GitHub allows trivial > collaboration between anyone on the site, without needing to create > special 'teams', we won't be making a special '-contrib' repo like we > had on Launchpad. ?That team and associated repos were simply to work > around Launchpad's limitations, on github we don't need it. > > I have migrated all 113 open bugs from Launchpad to GitHub, including > links to their parent on LP for reference, as well as their full > commit history: > > http://github.com/ipython/ipython/issues > > Note that on LP we had 210 listed 'open', but that included many with > 'fix committed', which I didn't migrate since once 0.10.1 and 0.11 are > out they'll automatically close, and I didn't want to populate our > GitHub repo with a ton of effectively dead bugs from day 1. > > But I was very careful not to lose any important information from old > bug reports, the only problem may be some ugly formatting because > GitHub uses the markdown syntax and thus renders funny some of the old > comments. > > I'm sure we'll have to do a bit of adjustment to the new setup, but by > and large I'm convinced it will make everyone's life easier in the > long run. > > Please let us know of any problems you may encounter. Finally, I can sleep at night... Cheers, Brian > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. 
Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From matthew.brett at gmail.com Tue May 11 02:46:21 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 10 May 2010 23:46:21 -0700 Subject: [IPython-dev] Devs on github: fork the main repo In-Reply-To: References: Message-ID: Yo, On Mon, May 10, 2010 at 11:52 AM, Brian Granger wrote: >> As soon as I can I'll include Matthew's instructions: >> >> https://cirl.berkeley.edu/mb312/gitwash/ > > Thanks, this has turned out really nice! ?My only comment is that I > tend to use the name "upstream" rather than mainline. OK - upstream is fine by me, I've changed it. I put a perl -pi -e make targets 'make for-ipython' and 'make for-nipy' to toggle between ipython and nipy. I think they're ready for import to our respective trees if y'all agree... Have uploaded the docs as for ipython: https://cirl.berkeley.edu/mb312/gitwash See you, Matthew From dsdale24 at gmail.com Tue May 11 12:27:30 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Tue, 11 May 2010 12:27:30 -0400 Subject: [IPython-dev] Move of IPython development to GitHub completed In-Reply-To: References: Message-ID: On Mon, May 10, 2010 at 5:46 AM, Fernando Perez wrote: > as per our recent discussions and scheduled plans, the LP repos have > all now been moved over to GitHub. May I suggest: the documentation at http://ipython.scipy.org/doc/nightly/html/install/install.html#installing-the-development-version needs to be updated. Darren From fperez.net at gmail.com Tue May 11 12:30:51 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 11 May 2010 09:30:51 -0700 Subject: [IPython-dev] Devs on github: fork the main repo In-Reply-To: References: Message-ID: On Mon, May 10, 2010 at 11:46 PM, Matthew Brett wrote: > > OK - upstream is fine by me, I've changed it. > > I put a perl -pi -e make targets 'make for-ipython' and 'make > for-nipy' to toggle between ipython and nipy. ? I think they're ready > for import to our respective trees if y'all agree... > > Have uploaded the docs as for ipython: > > https://cirl.berkeley.edu/mb312/gitwash > Thanks a lot for updating this Matthew, it reads great. Cheers, f From fperez.net at gmail.com Tue May 11 12:31:33 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 11 May 2010 09:31:33 -0700 Subject: [IPython-dev] Move of IPython development to GitHub completed In-Reply-To: References: Message-ID: On Tue, May 11, 2010 at 9:27 AM, Darren Dale wrote: > > May I suggest: the documentation at > http://ipython.scipy.org/doc/nightly/html/install/install.html#installing-the-development-version > needs to be updated. Yup, high on my list, but swamped right now... Pull requests on github welcome ;) Cheers, f From fperez.net at gmail.com Wed May 12 02:44:15 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 11 May 2010 23:44:15 -0700 Subject: [IPython-dev] embedding ipython In-Reply-To: References: Message-ID: Hi Piotr, sorry for the long delay; the git/github transition swallowed most of my ipython time for the last few days. Thankfully we're over that now :) On Sun, Apr 25, 2010 at 11:29 AM, Piotr Zolnierczuk wrote: > I?have been using an embedded IPython shell (sorry I am still at 0.9.1) in?a > wxPython app for quite some time and I (and my users) like very much. > I cloned IPython.gui.wx.ipython_view to customize look-and-feel with the > rest of my app. 
> > My app has a bunch of "GUI" tabs that control physics experiment hardware > (neutron scattering at Oak Ridge National Lab) and one tab that is the > IPython shell that allows for "custom" scripts. The main app needs to "pass" > some objects into the shell, for example?an object that is responsible for > communication with the control hardware, > so it can be used in the interactive shell. I used user_ns dictionary for it > and it works for me. Yes, that's the intent of the user_ns dict. > Now I have a bunch of questions, so please bear with me: > > a) what is the difference between user_ns and user_global_ns? (in particular > in IPython.gui.wx.ipshell_nonblocking)? I'm not 100% sure about the wx one, but in general, those two exist because in Python, the exec statement's grammar: http://docs.python.org/reference/simple_stmts.html#exec supports *two* dictionaries to resolve names, one for local names and one for global ones. This lets you control those separately. I'm afraid I can't recall exact instances where using both is needed, but at least you know the purpose :) > b) the example (wxIPython) as well I my embedded shell do not fill _oh > dictionary and '_'?is always empty so users cannot "recall" the results of > the previous statement. The ipython shell works fine in that respect. Why > wxIPython does not? I think the wx shell code doesn't completely implement the entire ipython machinery. Laurent, the author of that code, may be able to help out better, though he's been fairly quiet as of late. > c) could "passing" of the objects ?be done via 'configuration' file? If yes, > how?. In the pre-0.11 api, you could try to populate the user namespace with objects loaded via this config file: http://ipython.scipy.org/doc/stable/html/config/customization.html#ipy-user-conf-py though I have to admit I'm not 100% sure if the wxip loads that at startup time. In the 0.11 series we completely reworked the configuration system into something fairly clean and rational where this can be done (and if it doesn't work, we'll consider it a bug). I'll announce plans for 0.11 shortly. > d) could an external app execute a script (function) via IPython shell? >From another process, not out of the box today. You'd need to wire some code inside your process that listens over a port/pipe and responds to commands. But writing such bit of code isn't particularly hard, it just depends on whether you need only local behavior (case in which a simple named pipe may be enough) or you need network support. > e) some users suggested?a capability where user "queues" scripts and then > some kernel executes them one-by-one. Anybody knows a good python solution > for that? Well, the ipython distributed computing apis do much of that indeed: http://ipython.scipy.org/doc/stable/html/parallel/index.html Cheers, f From satra at mit.edu Wed May 12 09:39:29 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Wed, 12 May 2010 09:39:29 -0400 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing Message-ID: hi brian and fernando, Issue 1 --------- i create an ipython parallel cluster using ipcluster. now shell 1: launch python, get taskclient, run tasks with block=False now shell 2: launch python, get taskclient, run tasks with block=False get my results using get_task_result call taskclient.clear() this clears all the tasks in shell 1. is this the intended mode of operation? alternatively, is there a way to clear a specific task based on its task id? 
basically i want to use the same pool of resources from multiple client connections. Issue 2 --------- a related second question is more of a design pattern issue. i have hierarchically complex DAGs such that each node of a DAG can be a DAG itself. running this DAG in parallel leads me to the following issue. let's assume i have 2 computational nodes on which i can send two nodes of the DAG (Task 1 and Task 2). now suppose these tasks themselves are DAGs and use the same mechanism for executing nodes. now when we push a task from within Task1, these go into a queue and will never run (essentially wait for the parent tasks to release the node). now i can flatten out all the DAGs and run them, but it would be neat if there was a pattern that would enable running these as concrete entities. cheers, satra -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr.zolnierczuk at gmail.com Wed May 12 11:48:44 2010 From: piotr.zolnierczuk at gmail.com (Piotr Zolnierczuk) Date: Wed, 12 May 2010 11:48:44 -0400 Subject: [IPython-dev] embedding ipython In-Reply-To: References: Message-ID: Fernando, Thanks for the reply. I understand the delay - I have been there. Is wxPython integration something that I could contribute perhaps? For example I would like to have a configurable IPython shell widget that one could just "drop" in into an app. The present example - as great as it is - has some hard-coded stuff (fonts, etc) that I would like to see being configurable. Also the user config that you mention is a great thing to improve. I will check the latest sources and see how can one make things better. Thanks again Piotr On Wed, May 12, 2010 at 2:44 AM, Fernando Perez wrote: > Hi Piotr, > > sorry for the long delay; the git/github transition swallowed most of > my ipython time for the last few days. Thankfully we're over that now > :) > > On Sun, Apr 25, 2010 at 11:29 AM, Piotr Zolnierczuk > wrote: > > I have been using an embedded IPython shell (sorry I am still at 0.9.1) > in a > > wxPython app for quite some time and I (and my users) like very much. > > I cloned IPython.gui.wx.ipython_view to customize look-and-feel with the > > rest of my app. > > > > My app has a bunch of "GUI" tabs that control physics experiment hardware > > (neutron scattering at Oak Ridge National Lab) and one tab that is the > > IPython shell that allows for "custom" scripts. The main app needs to > "pass" > > some objects into the shell, for example an object that is responsible > for > > communication with the control hardware, > > so it can be used in the interactive shell. I used user_ns dictionary for > it > > and it works for me. > > Yes, that's the intent of the user_ns dict. > > > Now I have a bunch of questions, so please bear with me: > > > > a) what is the difference between user_ns and user_global_ns (in > particular > > in IPython.gui.wx.ipshell_nonblocking)? > > I'm not 100% sure about the wx one, but in general, those two exist > because in Python, the exec statement's grammar: > > http://docs.python.org/reference/simple_stmts.html#exec > > supports *two* dictionaries to resolve names, one for local names and > one for global ones. This lets you control those separately. I'm > afraid I can't recall exact instances where using both is needed, but > at least you know the purpose :) > > > b) the example (wxIPython) as well I my embedded shell do not fill _oh > > dictionary and '_' is always empty so users cannot "recall" the results > of > > the previous statement. 
The ipython shell works fine in that respect. Why > > wxIPython does not? > > I think the wx shell code doesn't completely implement the entire > ipython machinery. Laurent, the author of that code, may be able to > help out better, though he's been fairly quiet as of late. > > > c) could "passing" of the objects be done via 'configuration' file? If > yes, > > how?. > > In the pre-0.11 api, you could try to populate the user namespace with > objects loaded via this config file: > > > http://ipython.scipy.org/doc/stable/html/config/customization.html#ipy-user-conf-py > > though I have to admit I'm not 100% sure if the wxip loads that at startup > time. > > In the 0.11 series we completely reworked the configuration system > into something fairly clean and rational where this can be done (and > if it doesn't work, we'll consider it a bug). I'll announce plans for > 0.11 shortly. > > > d) could an external app execute a script (function) via IPython shell? > > From another process, not out of the box today. You'd need to wire > some code inside your process that listens over a port/pipe and > responds to commands. But writing such bit of code isn't particularly > hard, it just depends on whether you need only local behavior (case in > which a simple named pipe may be enough) or you need network support. > > > e) some users suggested a capability where user "queues" scripts and then > > some kernel executes them one-by-one. Anybody knows a good python > solution > > for that? > > Well, the ipython distributed computing apis do much of that indeed: > > http://ipython.scipy.org/doc/stable/html/parallel/index.html > > Cheers, > > f > -- Piotr Adam Zolnierczuk e-mail: piotr at zolnierczuk.net www: http://www.zolnierczuk.net _____________________________________ written on recycled electrons -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Wed May 12 14:59:53 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Wed, 12 May 2010 11:59:53 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: Satra, On Wed, May 12, 2010 at 6:39 AM, Satrajit Ghosh wrote: > hi brian and fernando, > > Issue 1 > --------- > i create an ipython parallel cluster using ipcluster. > > now shell 1: > launch python, get taskclient, run tasks with block=False > > now shell 2: > launch python, get taskclient, run tasks with block=False > get my results using get_task_result > call taskclient.clear() > > this clears all the tasks in shell 1. > > is this the intended mode of operation? For now it is. The issue is that the controller stores all the tasks in memory (eventually it should store them on disk), so clear() is used to get rid of tasks you are done with so you can save memory. So yes, for now it is the desired behavior. > alternatively, is there a way to clear a specific task based on its task id? > basically i want to use the same pool of resources from multiple client > connections. We don't have a way of doing this currently. Basically, you should call clear() when you know all clients are done with a set of tasks and you want to free up that memory. > Issue 2 > --------- > a related second question is more of a design pattern issue. i have > hierarchically complex DAGs such that each node of a DAG can be a DAG > itself. running this DAG in parallel leads me to the following issue. > > let's assume i have 2 computational nodes on which i can send two nodes of > the DAG (Task 1 and Task 2). 
now suppose these tasks themselves are DAGs and > use the same mechanism for executing nodes. now when we push a task from > within Task1, these go into a queue and will never run (essentially wait for > the parent tasks to release the node). > > now i can flatten out all the DAGs and run them, but it would be neat if > there was a pattern that would enable running these as concrete entities. I have not really thought about this point before. But I guess that you can get into a deadlock situation if all the nodes are busy and the sub-DAGs can't be executed because their parents are using all the engines. Obviously if you have a larger number of engines, this problem can be avoided, but it would be nice if the scheduler itself could handle this. Some questions: * Could be re-write it so that the sub DAGs were top-level tasks rather than doing the recursive task-submit-a-task thing? We might have to look into ways of allowing tasks to be paused to that sub-tasks can run while the parent is paused. Interesting things to think about... Cheers, Brian > cheers, > > satra > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From muzgash.lists at gmail.com Thu May 13 01:06:45 2010 From: muzgash.lists at gmail.com (Gerardo Gutierrez) Date: Thu, 13 May 2010 00:06:45 -0500 Subject: [IPython-dev] IPython-ZMQ about client and server division. Message-ID: Hi everyone. This week, Omar and I were discussing about the fundamentals of this project (again) which apparently are not clear enough (at least for me). The thing here that I'm talking about is based on Omar's idea that every other frontend should connect to the client not the kernel, I'm referring to the *client* as a frontend because I think it should be written in the same manner as the other frontends, this is if the frontends were going to connect to the kernel, as I thought initially. This idea, I think, will lighten the tasks the other frontends will have, but I think also, that the idea of having to rely on two processes for a simple client/frontend (Qt,ncurses,etc) is not healthy. I'm not really sure if this questioning here is explained in the diagram that Omar has in the wiki, but it's very important for both of us to clarify this soon. I hope Omar reads this message soon so he can explain better his idea. Sorry if I'm wrong. Best regards. -- Gerardo Guti?rrez Guti?rrez Physics student Universidad de Antioquia Computational physics and astrophysics group (FACom ) Computational science and development branch(FACom-dev ) Usuario Linux #492295 -------------- next part -------------- An HTML attachment was scrubbed... URL: From andresete.chaos at gmail.com Thu May 13 01:18:08 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Thu, 13 May 2010 00:18:08 -0500 Subject: [IPython-dev] IPython-ZMQ about client and server division. In-Reply-To: References: Message-ID: 2010/5/13 Gerardo Gutierrez > Hi everyone. > > This week, Omar and I were discussing about the fundamentals of this > project (again) which apparently are not clear enough (at least for me). 
*The idea is that GUIs use the zmq client to connect to the kernel; ipython-zmq will have a console that will contain classes to facilitate the connection of any other frontend.* > The thing here that I'm talking about is based on Omar's idea that every > other frontend should connect to the client not the kernel, I'm referring to > the *client* as a frontend because I think it should be written in the same > manner as the other frontends, this is if the frontends were going to > connect to the kernel, as I thought initially. > > This idea, I think, will lighten the tasks the other frontends will have, > but I think also, that the idea of having to rely on two processes for a > simple client/frontend (Qt,ncurses,etc) is not healthy. > > I'm not really sure if this questioning here is explained in the diagram > that Omar has in the wiki, but it's very important for both of us to clarify > this soon. > > I hope Omar reads this message soon so he can explain better his idea. > > Sorry if I'm wrong. > > > > *Best regards.* > -- > Gerardo Gutiérrez Gutiérrez > Physics student > Universidad de Antioquia > Computational physics and astrophysics group (FACom > ) > Computational science and development branch (FACom-dev > ) > Usuario Linux #492295 > > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > From ellisonbg at gmail.com Thu May 13 01:26:29 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Wed, 12 May 2010 22:26:29 -0700 Subject: [IPython-dev] IPython-ZMQ about client and server division. In-Reply-To: References: Message-ID: Thanks for bringing this discussion to the list... On Wed, May 12, 2010 at 10:06 PM, Gerardo Gutierrez wrote: > Hi everyone. > > This week, Omar and I were discussing about the fundamentals of this project > (again) which apparently are not clear enough (at least for me). > The thing here that I'm talking about is based on Omar's idea that every > other frontend should connect to the client not the kernel, I'm referring to > the *client* as a frontend because I think it should be written in the same > manner as the other frontends, this is if the frontends were going to > connect to the kernel, as I thought initially. The frontends should definitely all connect directly to the kernel. There are a number of reasons for this, but the main one is that the frontends may all go away while the kernel remains. When a frontend tries to connect to the kernel, there may be no other frontends in existence. > This idea, I think, will lighten the tasks the other frontends will have, > but I think also, that the idea of having to rely on two processes for a > simple client/frontend (Qt,ncurses,etc) is not healthy. The benefits of a two process model, even for the simplest possible case, are huge: * Isolation - either part can die, but the other remains, so you can simply re-start the part that died and continue. * Easy to do a hard reset of the kernel without killing the frontend. This is a common feature request and you can't do it with 1 process because of how Python imports extension code. * Performance. Using two processes gives you the possibility of using 2 cores (1 for the kernel, 1 for the frontend) rather than just 1. * Automatically works over the network. * Separates the event loop issues of the frontend from those of the kernel.
You could use wx for plotting in the kernel, but from a Qt-based frontend. * You can easily do Control-C interrupts in a clean manner. * Your GUI won't die when you run extension code that holds the GIL. This is huge! Bottom line: the two process model is the best possible solution in almost every case. We will always have a 1 process version of IPython, but it will be very limited compared to the 2 process version eventually. Cheers, Brian > I'm not really sure if this questioning here is explained in the diagram > that Omar has in the wiki, but it's very important for both of us to clarify > this soon. > > I hope Omar reads this message soon so he can explain better his idea. > > Sorry if I'm wrong. > > > > Best regards. > -- > Gerardo Gutiérrez Gutiérrez > Physics student > Universidad de Antioquia > Computational physics and astrophysics group (FACom) > Computational science and development branch (FACom-dev) > Usuario Linux #492295 > > > > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From fperez.net at gmail.com Thu May 13 04:03:13 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 01:03:13 -0700 Subject: [IPython-dev] Cutting an 0.10.1 release and an 0.11 'tech preview'? Message-ID: Hi all, now that we've transitioned over to github, we might as well get back to the real point of it all, moving ipython forward :) In the bug cleanup I managed to apply a lot of little fixes for 0.10, and since that series is really in maintenance-only mode, I don't see a reason to delay the release much longer. I'd like to simply prepare an RC for it and leave it out for testing for a few days, if nobody complains we push it out. Any objections? For 0.11, we do have a lot of work still to do, but I worry that only a handful of people are using it, and without more users we won't see the full scope of the real problems. The state of 0.11 is not ideal from a 'spit and polish' perspective: http://github.com/ipython/ipython/issues/labels/milestone-0_11 on the other hand I'm starting to think we'll benefit much more from putting it out as is, labelling it a 'tech preview' for people who use ipython in other projects, embedding it, etc, to start updating to the new APIs, checking what we may have missed, etc. If we delay too long we'll just drag on and on... I use this version as my daily working IPython and have since after my winter blitz, and it's perfectly usable (along with a bunch of little nice things). There are some regressions, but I think there's a net win, and only by releasing and getting things moving again will we make progress on fixing the regressions... Thoughts? Cheers, f From fperez.net at gmail.com Thu May 13 04:22:01 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 01:22:01 -0700 Subject: [IPython-dev] IPython-ZMQ about client and server division. In-Reply-To: References: Message-ID: On Wed, May 12, 2010 at 10:26 PM, Brian Granger wrote: > Thanks for bringing this discussion to the list... Indeed, we'll do our best to help out here so that the discussions also get the input from others and are searchable in the archives, etc. > On Wed, May 12, 2010 at 10:06 PM, Gerardo Gutierrez > wrote: >> Hi everyone. >> >> This week, Omar and I were discussing about the fundamentals of this project >> (again) which apparently are not clear enough (at least for me).
>> The thing here that I'm talking about is based on Omar's idea that every >> other frontend should connect to the client not the kernel, I'm referring to >> the *client* as a frontend because I think it should be written in the same >> manner as the other frontends, this is if the frontends were going to >> connect to the kernel, as I thought initially. > > The frontends should definitely all connect directly to the kernel. > There are a number of reasons for this, but the main one is that the > frontends may all go away while the kernel remains. ?When a frontend > tries to connect to the kernel, there may be no other frontends in > existence. Does this make sense to you guys? Brian, should we move the message spec from the pyzmq toy example into IPython itself? Ondrej and Mateusz were today in town for a py4science meeting and we went together over the message protocol and demo. They've been building a very neat web frontend, and it would be good to start standardizing the protocol properly in IPython beyond the little zmq experiment. Are you OK with moving this over? Cheers, f From fperez.net at gmail.com Thu May 13 04:30:26 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 01:30:26 -0700 Subject: [IPython-dev] IPython-ZMQ about client and server division. In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 1:22 AM, Fernando Perez wrote: > > Brian, should we move the message spec from the pyzmq toy example into > IPython itself? ?Ondrej and Mateusz were today in town for a > py4science meeting and we went together over the message protocol and > demo. They've been building a very neat web frontend, and it would be > good to start standardizing the protocol properly in IPython beyond > the little zmq experiment. ?Are you OK with moving this over? > ps - by 'moving' I don't mean deleting it from the pyzmq repo, simply copying it into IPython's for future development, we can add a note in pyzmq indicating that this was meant as a prototype only, and that real development of these ideas into production code is done in ipython. If you're OK with that I can do the move of the docs so Omar and Gerardo start working off the main IPython sources only from now on (modulo updating pyzmq due to fixes/improvements that may come and turn out to be necessary for us). Cheers, f From fperez.net at gmail.com Thu May 13 04:31:48 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 01:31:48 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: On Wed, May 12, 2010 at 11:59 AM, Brian Granger wrote: > >> alternatively, is there a way to clear a specific task based on its task id? >> basically i want to use the same pool of resources from multiple client >> connections. > > We don't have a way of doing this currently. ?Basically, you should > call clear() when you know all clients are done with a set of tasks > and you want to free up that memory. Mmmh, I now realize that I think I misunderstood Satra earlier today. Satra, clear() only flushes *already completed* tasks. Do you want to cancel tasks, or clear completed ones but by id? 
Cheers, f From fperez.net at gmail.com Thu May 13 04:41:25 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 01:41:25 -0700 Subject: [IPython-dev] embedding ipython In-Reply-To: References: Message-ID: Hi Piotr, On Wed, May 12, 2010 at 8:48 AM, Piotr Zolnierczuk wrote: > > Is wxPython integration something that I could contribute perhaps? For > example I would like to have a configurable IPython shell widget that one > could just "drop" in into an app. The present example - as great as it is - > has some hard-coded stuff (fonts, etc) that I would like to see being > configurable. Also the user config that you mention is a great thing to > improve. > I will check the latest sources and see how can one make things better. Absolutely! As you may have seen, part of the reason we want to push 0.11 out even if it's a bit rough around the edges, is to start working on implementing interfaces that all share more code than up until now. With the SOC students working on a cleaner architecture and the new component machinery we have in place, there's a sensible foundation on which to build consistent frontends. Currently we have the 'small and light' ipythonx widget that Gael wrote and the more featureful wxipython that Laurent wrote, but these share very little. Having someone work on a Wx-based tool that gets the best of these and is developed on the updated apis we now have would be fantastic. Cheers, f From satra at mit.edu Thu May 13 07:43:33 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Thu, 13 May 2010 07:43:33 -0400 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: hi brian, For now it is. The issue is that the controller stores all the tasks > in memory (eventually it should store them on disk), so clear() is > used to get rid of tasks you are done with so you can save memory. So > yes, for now it is the desired behavior. > ah. i haven't actually performed a memory consumption analysis, but clearing things when possible seemed like a reasonable thing to do. again the key problem was running two or more parallel analysis scripts simultaneously. we have situations where the workflow execution graph contains close to 700 nodes. > > alternatively, is there a way to clear a specific task based on its task > id? > > basically i want to use the same pool of resources from multiple client > > connections. > > We don't have a way of doing this currently. Basically, you should > call clear() when you know all clients are done with a set of tasks > and you want to free up that memory. > any suggestions on how to figure out when all clients are done? > > > > now i can flatten out all the DAGs and run them, but it would be neat if > > there was a pattern that would enable running these as concrete entities. > > I have not really thought about this point before. But I guess that > you can get into a deadlock situation if all the nodes are busy and > the sub-DAGs can't be executed because their parents are using all the > engines. Obviously if you have a larger number of engines, this > problem can be avoided, but it would be nice if the scheduler itself > could handle this. Some questions: > > * Could be re-write it so that the sub DAGs were top-level tasks > rather than doing the recursive task-submit-a-task thing? > i think this is what i meant by flattening out the DAGs and that's what we are currently doing. 
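to make that concrete, the flattened scheme is roughly the sketch below. this is only a sketch built from the 0.10 calls mentioned in this thread (TaskClient, MapTask, run with block=False, get_task_result); the node objects, their .name/.run attributes and the exact shape of what get_task_result hands back are placeholders, not our actual code:

from IPython.kernel import client

tc = client.TaskClient()

# flattened DAG, grouped into levels: every node's parents live in an earlier level
# (node_a etc. are placeholder objects with a .name and a .run(inputs) method)
levels = [[node_a], [node_b, node_c], [node_d]]

results = {}
for level in levels:
    # each node in this level becomes its own top-level task
    submitted = []
    for node in level:
        task = client.MapTask(node.run, args=[results.copy()])
        submitted.append((node, tc.run(task, block=False)))
    # wait for the whole level before submitting the children
    for node, tid in submitted:
        results[node.name] = tc.get_task_result(tid, block=True)

the point is just that every node gets submitted as a top-level task once its parents are done, instead of tasks submitting tasks from inside the engines.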
> We might have to look into ways of allowing tasks to be paused to that > sub-tasks can run while the parent is paused. Interesting things to > think about... > that would be neat. it's like the node saying, put me to sleep and use my resources, when my child tasks are done wake me up. cheers, satra From satra at mit.edu Thu May 13 07:46:14 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Thu, 13 May 2010 07:46:14 -0400 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: hi fernando, > > > We don't have a way of doing this currently. Basically, you should > > call clear() when you know all clients are done with a set of tasks > > and you want to free up that memory. > > Mmmh, I now realize that I think I misunderstood Satra earlier today. > Satra, clear() only flushes *already completed* tasks. Do you want to > cancel tasks, or clear completed ones but by id? > clear completed ones but by id. the reason i can't call clear is that i used the id to get the result. so i would need to know that all clients don't have any pending results *and* while i'm doing the clearing no pending task has completed. this seems like a philosopher's chopstick nightmare if not handled properly through parallel semantics. cheers, satra From ellisonbg at gmail.com Thu May 13 13:22:05 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 13 May 2010 10:22:05 -0700 Subject: [IPython-dev] IPython-ZMQ about client and server division. In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 1:22 AM, Fernando Perez wrote: > On Wed, May 12, 2010 at 10:26 PM, Brian Granger wrote: >> Thanks for bringing this discussion to the list... > > Indeed, we'll do our best to help out here so that the discussions > also can get the input from others are searchable in the archives, > etc. > >> On Wed, May 12, 2010 at 10:06 PM, Gerardo Gutierrez >> wrote: >>> Hi everyone. >>> >>> This week, Omar and I were discussing about the fundamentals of this project >>> (again) which apparently are not clear enough (at least for me). >>> The thing here that I'm talking about is based on Omar's idea that every >>> other frontend should connect to the client not the kernel, I'm referring to >>> the *client* as a frontend because I think it should be written in the same >>> manner as the other frontends, this is if the frontends were going to >>> connect to the kernel, as I thought initially. >> >> The frontends should definitely all connect directly to the kernel. >> There are a number of reasons for this, but the main one is that the >> frontends may all go away while the kernel remains. When a frontend >> tries to connect to the kernel, there may be no other frontends in >> existence. > > Does this make sense to you guys? > > Brian, should we move the message spec from the pyzmq toy example into > IPython itself? Ondrej and Mateusz were today in town for a > py4science meeting and we went together over the message protocol and > demo. They've been building a very neat web frontend, and it would be > good to start standardizing the protocol properly in IPython beyond > the little zmq experiment. Are you OK with moving this over? Sure, I think that is fine. For now, let's create a new top-level subpackage for this stuff. Eventually, once it is stable, we can move it into an appropriate location.
Cheers, Brian > Cheers, > > f > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Thu May 13 13:31:26 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 13 May 2010 10:31:26 -0700 Subject: [IPython-dev] Cutting an 0.10.1 release and an 0.11 'tech preview'? In-Reply-To: References: Message-ID: Fernando, > now that we've transitioned over to github, we might as well get back > to the real point of it all, moving ipython forward :) Sounds great! > In the bug cleanup I managed to apply a lot of little fixes for 0.10, > and since that series is really in maintenance-only mode, I don't see > a reason to delay the release much longer. ?I'd like to simply prepare > an RC for it and leave it out for testing for a few days, if nobody > complains we push it out. ?Any objections? I think a 0.10 series release totally makes sense. > For 0.11, we do have a lot of work still to do, but I worry that only > a handful of people are using it, and without more users we won't see > both the full scope of the real problems. ?The state of 0.11 is not > ideal from a 'spit and polish' perspective: > > http://github.com/ipython/ipython/issues/labels/milestone-0_11 Wow, so nice to have all this on github.... I think we should start to divide these issues into different categories and prioritize: * Doc related things - prio-low. Personally I think that when the code base is changing a lot and man power is thin, we should wait until the codebase is stable before we spend a lot of time on the docs. Minimally, we should probably just put a huge warning on our docs that says "these are out of date right now...." * Anything related to the core interactive shell *code* - prio-critical * Anything related to parallel computing *code* - prio-medium - unless it is a fatal bug... * Other ideas... > on the other hand I'm starting to think we'll benefit much more from > putting it out as is, labelling it a 'tech preview' for people who use > ipython in other projects, embedding it, etc, to start updating to the > new APIs, checking what we may have missed, etc. ?If we delay too long > we'll just drag on and on... Yes, we need to get this out... > I use this version as my daily working IPython and have since after my > winter blitz, and it's perfectly usable (along with a bunch of little > nice things). ?There are some regressions, but I think there's a net > win, and only by releasing and getting things moving again will we > make progress on fixing the regressions... > > Thoughts? We should definitely push out a tech-preview/alpha soon. I will try to work on the issues to clarify what exactly we need to do... Cheers, Brian > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ondrej at certik.cz Thu May 13 15:04:24 2010 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 13 May 2010 12:04:24 -0700 Subject: [IPython-dev] api between kernel, web server and frontend Message-ID: Hi, we were in Berkeley with Mateusz and we discussed the API between the computational kernel, web server and the javascript frontend (and other frontends), so I wanted to share our thoughts. 
The main workhorse is the kernel, which has two connections: 1) a feed, which publishes all changes to a given session; anyone who knows the session UID can subscribe to it and observe the whole session (e.g. an ipython session) 2) a request/response channel, which is used for sending Python code for evaluation and some other minor things. The results are published in the feed. The API is done using JSON, and it is described here: http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst and a demo implementation of this is here: http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ This demo implementation is using 0MQ (http://www.zeromq.org/) for the transport layer (i.e. for sending the JSON messages). 0MQ is probably the best library for sending/receiving messages over the net, but a big disadvantage is that it probably can't be used on Google App Engine, and it creates a dependency. But it's just the transport layer; in principle it can be replaced with anything else. So the above is the main logic for handling computational sessions, with many people (frontends) doing simultaneous calculations in the same session and one kernel (which can, however, dispatch the actual little calculations to many cores, i.e. do it in parallel). Note that this kernel has no database, no user accounts, nothing. All it has is a session with a UID (so you need to create the session somehow) and a namespace for the running session; that's it. Now we need a web server that would use the above API to communicate with the kernel; this web server would have a database with user accounts, another database with user worksheets and cells, and it would expose all this over HTTP. The problem is that HTTP and the current web technologies place some limitations on what can be done. Mateusz and I talked about how to do that in the car to Reno, and we agreed that the best way is to use JSONRPC: the web server would define a Python class with some functionality, and the JavaScript frontend (running in the browser) would use methods of this class to do things (create worksheets, get users, create cells, evaluate cells, get the feed from the kernel, ...). The JS frontend would have to periodically call some method on the server to get the feed, as we don't know how to "subscribe" to it using AJAX. Maybe in a couple of years this will be simple using web sockets. I would be interested in any feedback and discussion. We now need to write a demo implementation of the above web server and a client that uses JSONRPC to do the above. The client can (should) be command-line based and it will only use JSONRPC for all the communication. It will then be quite trivial (in terms of the logic) for anyone to write a JavaScript client, as JSONRPC works nicely with JavaScript, or to adapt the codenode or Sage notebook frontends to use this JSONRPC. For testing purposes, I would really like to have a demo implementation that runs on Google App Engine (both the web server and the kernel). Then anyone can run a simple client that connects to it (and we will also have a simple JavaScript client, so you would just go to the demo page and it would work). For that to happen, we need to figure out some other transport layer than 0MQ, something that works on the app engine. This may also be the only option for people who don't want to, or can't, install 0MQ for some reason. Any ideas?
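To make the two connections a bit more concrete, the kernel side is schematically something like the sketch below. This is only a sketch: the socket types and message fields here are made up for illustration, the real ones are those in message_spec.rst and the pyzmq example:

import zmq

ctx = zmq.Context()

# 2) the request/response channel: frontends send code here and get a reply
rep = ctx.socket(zmq.REP)
rep.bind("tcp://*:5555")

# 1) the feed: anyone who knows the session UID can subscribe and watch everything
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")

session = "some-session-uid"    # made-up value, just for the sketch

while True:
    msg = rep.recv_json()                      # e.g. {"type": "execute", "code": "2+2"}
    pub.send_json({"session": session,         # the request itself goes out on the feed
                   "type": "input",
                   "code": msg.get("code", "")})
    # ... evaluate the code here, then publish whatever it produced ...
    pub.send_json({"session": session, "type": "output", "result": "4"})
    rep.send_json({"session": session, "status": "ok"})

A frontend would then hold one socket for its requests and a subscriber socket (filtered on the session UID) to follow the feed.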
Ondrej From ondrej at certik.cz Thu May 13 15:10:02 2010 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 13 May 2010 12:10:02 -0700 Subject: [IPython-dev] api between kernel, web server and frontend In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 12:04 PM, Ondrej Certik wrote: > Hi, > > we were in Berkeley with Mateusz and we discussed the API between the > computational kernel, web server and the javascript frontend (and > other frontends), so I wanted to share our thoughts. > > The main workhorse is a kernel, that has two connections: > > 1) feed, that publishes all changes to a given session, anyone (who > knows the session UID) can subscribe to it and observe the whole > session (e.g. ipython session) > 2) request/response channel, which is used for sending python code for > evaluation and some other minor things. The results are published in > the feed. > > The API is done using json, and it is described here: > > http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst > > and demo implementation of this is here: > > http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ > > This demo implementation is using 0MQ (http://www.zeromq.org/) for the > transport layer (i.e. for sending the json messages). 0MQ is probably > the best library for sending/receiving messages over the net, but a > big disadvantage is that it probably can't be used on the google app > engine and it creates a dependency. But it's just the transport layer, > in principle it can be replaced with anything else. > > > > So the above is the main logic for handling computational sessions, > with many people (frontends) doing simultaneous calculations in the > same session and one kernel (that however can dispatch the actual > little calculations to many cores, e.g. do it in parallel). Note that > this kernel has no database, no user accounts, nothing. All that it > has is a session with UID (so you need to create the session somehow, > but that's it) and it has a namespace for the running session, that's > it. > > Now we need a web server, that would use the above API to communicate > with the kernel, and this web server would have a database with user > accounts, it would also have a database with user worksheets and cells > and it would expose this over HTTP. The problem is that using HTTP and > the current web technologies pose some limitations currently to what > can be done. We talked with Mateusz how to do that in the car to Reno > and we agreed that the best way is to use a JSONRPC, so the web server > would define a Python class with some functionality, and then the > javascript frontend (running in the browser) would use methods of this > class to do things (create worksheets, get users, create cells, > evaluate cells, get the feed from the kernel, ...). The JS frontend > would have to periodically call some method on the server to get the > feed, as we don't know how to "subscribe" to it using AJAX. Maybe in > couple years, this would be simple using web sockets. > > I would be interested in any feedback and discussion. > > We now need to write a demo implementation of the above web server and > a client that uses JSONRPC to do the above. The client can (should) be > command line based and it will only use JSONRPC for all the > communication. It will then be quite trivial (trivial in terms of the > logic) to anyone to write a javascript client, as JSONRPC works nice > with javascript, or adapt codenode or the sage notebook frontends to > use this JSONRPC. 
> > > For testing purposes, I would really like to have a demo > implementation that runs on the google app engine (both the web server > and the kernel). Then anyone can run a simple client, that connects to > it (and we will also have a simple javascript client, so you would > just go to the demo page and it would work). For that to happen, we > need to figure out some other transport layer than 0MQ, something that > works on the app engine. This can also be the only option for people > that don't want/can't to install 0MQ for some reason. Any ideas? Well, one way to do that would be that the kernel and the web server would be in the same program and I would be passing messages to the kernel by simply calling a Python method, or by somehow emulating the 0MQ api in Python in the same process. That should work, to get it working both with 0MQ and on the app engine with minimal modifications. Also, I can imagine that ipython itself could use this emulation by default for people that just want the good old ipython, and only use 0MQ if you want to run it over the net. Ondrej From ellisonbg at gmail.com Thu May 13 17:25:01 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 13 May 2010 14:25:01 -0700 Subject: [IPython-dev] api between kernel, web server and frontend In-Reply-To: References: Message-ID: Ondrej, I wish I could have joined you at Berkeley... On Thu, May 13, 2010 at 12:04 PM, Ondrej Certik wrote: > Hi, > > we were in Berkeley with Mateusz and we discussed the API between the > computational kernel, web server and the javascript frontend (and > other frontends), so I wanted to share our thoughts. Great! > The main workhorse is a kernel, that has two connections: > > 1) feed, that publishes all changes to a given session, anyone (who > knows the session UID) can subscribe to it and observe the whole > session (e.g. ipython session) > 2) request/response channel, which is used for sending python code for > evaluation and some other minor things. The results are published in > the feed. > > The API is done using json, and it is described here: > > http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst > > and demo implementation of this is here: > > http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ > > This demo implementation is using 0MQ (http://www.zeromq.org/) for the > transport layer (i.e. for sending the json messages). 0MQ is probably > the best library for sending/receiving messages over the net, but a > big disadvantage is that it probably can't be used on the google app > engine and it creates a dependency. But it's just the transport layer, > in principle it can be replaced with anything else. I am a little weary of calling 0MQ a "transport layer" as it does many non-trivial things underneath the hood. The reason I am weary about this is that it suggests that we could simply swap out 0MQ for a different "transport layer" in other contexts. After struggling with these networking issues for years, I don't think there is anything else out there that will do what 0MQ does. The only thing that even comes close would be Twisted, but even it does really come close. Thus, I am not sure what you have in mind when you way "it can be replaced with anything else." What other things are you thinking? On GAE, can you even do raw sockets? As I understood, the only networking related things you can do on GAE is an outbound HTTP client request (query a non-Google HTTP server). 
> So the above is the main logic for handling computational sessions, > with many people (frontends) doing simultaneous calculations in the > same session and one kernel (that however can dispatch the actual > little calculations to many cores, e.g. do it in parallel). Note that > this kernel has no database, no user accounts, nothing. All that it > has is a session with UID (so you need to create the session somehow, > but that's it) and it has a namespace for the running session, that's > it. Yep. > Now we need a web server, that would use the above API to communicate > with the kernel, and this web server would have a database with user > accounts, it would also have a database with user worksheets and cells > and it would expose this over HTTP. The problem is that using HTTP and > the current web technologies pose some limitations currently to what > can be done. Definitely. A big issue that I see with GAE is that it doesn't allow long-lived requests. > We talked with Mateusz how to do that in the car to Reno > and we agreed that the best way is to use a JSONRPC, so the web server > would define a Python class with some functionality, and then the > javascript frontend (running in the browser) would use methods of this > class to do things (create worksheets, get users, create cells, > evaluate cells, get the feed from the kernel, ...). The JS frontend > would have to periodically call some method on the server to get the > feed, as we don't know how to "subscribe" to it using AJAX. Maybe in > couple years, this would be simple using web sockets. Yes, the JS frontend would need to poll probably. > I would be interested in any feedback and discussion. While I like the idea of being able to do things on GAE, I do think that its constaints are so severe that it probably doesn't make sense to implement these ideas in that context. We could use GAE to manage user accounts and the notebook sessions but I think the only way to use this architecture is to have the kernels run elsewhere and have GAE make HTTP calls to the kernels (through an HHTP-0MQ bridge). We have struggle for years with trying to implement this architecture and have gotten basically no where. Part of this is that we are very busy, but a huge part of the issue is technical. Without the unique capabilities that 0MQ provides, I am convinced we will end up hacking things together than only partially work and are much more complex than they need be. Of course, I think you can probably get something working on GAE (you already have), but I think the 0MQ based architecture is simple not compatible with GAE. But, I do think that we should think about how to implement these things outside of GAE using 0MQ for real. Some thoughts along those lines: The core bit of technology that we need is an HTTP/JSONRPC to 0MQ bridge. This would make it possible for any HTTP client to interact with a 0MQ based kernel: HTTP client <--HTTP--> Bridge <---OMQ----> Kernel The main thing that this bridge would do is to translate the more complex message based communications to the simple request/reply style of HTTP. A single bridge could even manage multiple kernels. But I think that the bridge should be developed initially without any knowledge of the notebook/user side of things. This bridge could be written using any python webapp framework and should be done in the "RESTful" manner (http://en.wikipedia.org/wiki/Representational_State_Transfer) to allow for JSONRPC calls to the bridge. 
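To make the bridge idea concrete, a minimal sketch could look like the following. This is not working IPython code: it assumes a kernel is already listening on tcp://127.0.0.1:5555 and speaking the JSON message format, it handles one HTTP request at a time, and it does no error handling.

import json
import zmq
from wsgiref.simple_server import make_server

context = zmq.Context()
kernel = context.socket(zmq.REQ)
kernel.connect('tcp://127.0.0.1:5555')   # address of the running kernel (assumed)

def bridge_app(environ, start_response):
    # read the JSON message POSTed by the HTTP/JSONRPC client
    length = int(environ.get('CONTENT_LENGTH') or 0)
    request = json.loads(environ['wsgi.input'].read(length))
    # forward it over 0MQ and wait for the kernel's reply
    kernel.send_json(request)
    reply = kernel.recv_json()
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [json.dumps(reply).encode('utf-8')]

if __name__ == '__main__':
    make_server('', 8000, bridge_app).serve_forever()

A real bridge would of course have to manage multiple kernels and sessions, poll the feed instead of blocking, and expose proper RESTful/JSONRPC endpoints, but the basic translation step is just this: HTTP request in, 0MQ message out, and back.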
There are some issues related to integrating the 0MQ client into a custom webapp, but these are solvable problems. > We now need to write a demo implementation of the above web server and > a client that uses JSONRPC to do the above. The client can (should) be > command line based and it will only use JSONRPC for all the > communication. It will then be quite trivial (trivial in terms of the > logic) to anyone to write a javascript client, as JSONRPC works nice > with javascript, or adapt codenode or the sage notebook frontends to > use this JSONRPC. Yes, I agree, once the bridge exists. > > For testing purposes, I would really like to have a demo > implementation that runs on the google app engine (both the web server > and the kernel). Then anyone can run a simple client, that connects to > it (and we will also have a simple javascript client, so you would > just go to the demo page and it would work). For that to happen, we > need to figure out some other transport layer than 0MQ, something that > works on the app engine. This can also be the only option for people > that don't want/can't to install 0MQ for some reason. Any ideas? While IPython will always ship with a non-OMQ version, for things like you are talking about, I don't see 0MQ as optional. I simple don't think there is another good transport layer (at least not one that is pure python and more lightweight that 0MQ). Cheers, Brian > Ondrej > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From fperez.net at gmail.com Thu May 13 18:00:07 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 15:00:07 -0700 Subject: [IPython-dev] api between kernel, web server and frontend In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 2:25 PM, Brian Granger wrote: > > While IPython will always ship with a non-OMQ version, for things like > you are talking about, I don't see 0MQ as optional. ?I simple don't > think there is another good transport layer (at least not one that is > pure python and more lightweight that 0MQ). Just to add context, I think Ondrej's view (at least that was my view when we worked on this) is that there could be non-0mq clients, such as a web-based one, accessing an ipython kernel, and that what we should agree on is the protocol description. For *ipython itself*, we're 100% committed to doing all communications over zmq, and for now we don't have the time to write the http server wrapper around a kernel, but if that were to exist, then others could connect to the kernel over http. I'm going to spend a bit of time now moving these docs over to ipython itself, so we have a place to start putting down the protocol description. Cheers, f From fperez.net at gmail.com Thu May 13 18:01:34 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 15:01:34 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 4:46 AM, Satrajit Ghosh wrote: > > clear completed ones but by id. the reason i can't call clear is that i used > the id to get the result. so i would need to know that all clients don't > have any pending results *and* while i'm doing the clearing no pending task > has completed. 
this seems like a philosopher's chopstick nightmare if not > handled properly through parallel semantics. Got it. Let me see if I can add some of this quickly, I looked at the code yesterday and it didn't seem hard. Do you think if I can add this, we're otherwise good as far as exposing this to your nipype users? Cheers, f From ellisonbg at gmail.com Thu May 13 18:06:21 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 13 May 2010 15:06:21 -0700 Subject: [IPython-dev] api between kernel, web server and frontend In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 3:00 PM, Fernando Perez wrote: > On Thu, May 13, 2010 at 2:25 PM, Brian Granger wrote: >> >> While IPython will always ship with a non-OMQ version, for things like >> you are talking about, I don't see 0MQ as optional. ?I simple don't >> think there is another good transport layer (at least not one that is >> pure python and more lightweight that 0MQ). > > Just to add context, I think Ondrej's view (at least that was my view > when we worked on this) is that there could be non-0mq clients, such > as a web-based one, accessing an ipython kernel, and that what we > should agree on is the protocol description. By protocol description, do you mean the precise format of the JSON messages? > For *ipython itself*, > we're 100% committed to doing all communications over zmq, and for now > we don't have the time to write the http server wrapper around a > kernel, but if that were to exist, then others could connect to the > kernel over http. > > I'm going to spend a bit of time now moving these docs over to ipython > itself, so we have a place to start putting down the protocol > description. OK, I think this makes sense. Cheers, Brian > Cheers, > > f > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ondrej at certik.cz Thu May 13 18:23:19 2010 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 13 May 2010 15:23:19 -0700 Subject: [IPython-dev] api between kernel, web server and frontend In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 3:06 PM, Brian Granger wrote: > On Thu, May 13, 2010 at 3:00 PM, Fernando Perez wrote: >> On Thu, May 13, 2010 at 2:25 PM, Brian Granger wrote: >>> >>> While IPython will always ship with a non-OMQ version, for things like >>> you are talking about, I don't see 0MQ as optional. ?I simple don't >>> think there is another good transport layer (at least not one that is >>> pure python and more lightweight that 0MQ). >> >> Just to add context, I think Ondrej's view (at least that was my view >> when we worked on this) is that there could be non-0mq clients, such >> as a web-based one, accessing an ipython kernel, and that what we >> should agree on is the protocol description. > > By protocol description, do you mean the precise format of the JSON messages? That's exactly correct. My understanding was that the protocol are the JSON messages, plus feed + request/reply connections. How this is actually done (0MQ, HTTP) are low level things. Yes, I would like to run this over HTTP on GAE, I don't think that the limitations are so strong. Ondrej From satra at mit.edu Thu May 13 18:30:22 2010 From: satra at mit.edu (Satrajit Ghosh) Date: Thu, 13 May 2010 18:30:22 -0400 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: hi fernando, Got it. 
Let me see if I can add some of this quickly, I looked at the > code yesterday and it didn't seem hard. Do you think if I can add > this, we're otherwise good as far as exposing this to your nipype > users? > yes. i ran some tests yesterday. we are currently fine without the fix. i removed all the taskclient.clear() lines! :) (so the fix will be nice to have.) cheers, satra -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Thu May 13 18:43:26 2010 From: benjaminrk at gmail.com (MinRK) Date: Thu, 13 May 2010 15:43:26 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: Fernando, That should indeed be quite easy...In fact, I just did it and pushed it to my git fork (http://github.com/minrk/ipython). -MinRK On Thu, May 13, 2010 at 15:01, Fernando Perez wrote: > On Thu, May 13, 2010 at 4:46 AM, Satrajit Ghosh wrote: >> >> clear completed ones but by id. the reason i can't call clear is that i used >> the id to get the result. so i would need to know that all clients don't >> have any pending results *and* while i'm doing the clearing no pending task >> has completed. this seems like a philosopher's chopstick nightmare if not >> handled properly through parallel semantics. > > Got it. ?Let me see if I can add some of this quickly, I looked at the > code yesterday and it didn't seem hard. ?Do you think if I can add > this, we're otherwise good as far as exposing this to your nipype > users? > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > From fperez.net at gmail.com Thu May 13 18:45:12 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 15:45:12 -0700 Subject: [IPython-dev] api between kernel, web server and frontend In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 3:06 PM, Brian Granger wrote: > > By protocol description, do you mean the precise format of the JSON messages? Yes, I meant the message_spec.rst file we quickly wrote up. I think we should pound that spec with more eyes/feedback until we have something we think is solid for a first 'real' prototype beyond our little zmq one. A good round of work on that protocol will be very useful. Cheers, f From benjaminrk at gmail.com Thu May 13 19:09:32 2010 From: benjaminrk at gmail.com (MinRK) Date: Thu, 13 May 2010 16:09:32 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: Issue 2: You can possibly avoid deadlocks by allowing the parent task to throw a dependency unmet error, so that it gets resubmitted and waits for the child. Since we are allowing tasks to submit tasks, let's assume that every engine has a MultiEngineClient (mec) and TaskClient (tc) connected to the controller. The current (in-progress) dependency implementation is to raise IPython.kernel.error.TaskRejectError if a dependency is not met. 
You can use this:

# init, run on all engines:
from IPython.kernel.error import TaskRejectError
taskIDs = {}  # a dict to live on all engines

# ParentTask:
if not taskIDs.has_key(child_signature):  # child hadn't been submitted
    tid = tc.run(child_task)
    mec.execute("taskIDs['%s'] = %s" % (child_signature, tid))
    raise TaskRejectError  # tell controller dependency is unmet
child_result = tc.get_result(taskIDs[child_signature], block=False)
if child_result is None:  # child isn't done
    raise TaskRejectError
# only get here if child is done
do_work(child_result)
...

Adding the mec.execute call is not the greatest solution, but it does ensure that all engines have the same value of taskIDs[sig] for all tasks submitted to workers after this call. I haven't actually run this, but something along this model should work. The dependency implementation is a work in progress. -MinRK On Thu, May 13, 2010 at 04:46, Satrajit Ghosh wrote: > hi fernando, > >> > >> > We don't have a way of doing this currently. Basically, you should >> > call clear() when you know all clients are done with a set of tasks >> > and you want to free up that memory. >> >> Mmmh, I now realize that I think I misunderstood Satra earlier today. >> Satra, clear() only flushes *already completed* tasks. Do you want to >> cancel tasks, or clear completed ones but by id? > > clear completed ones but by id. the reason i can't call clear is that i used > the id to get the result. so i would need to know that all clients don't > have any pending results *and* while i'm doing the clearing no pending task > has completed. this seems like a philosopher's chopstick nightmare if not > handled properly through parallel semantics. > > cheers, > > satra > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > From fperez.net at gmail.com Thu May 13 21:57:41 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 May 2010 18:57:41 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 3:43 PM, MinRK wrote: > > > That should indeed be quite easy...In fact, I just did it and pushed it to my > git fork (http://github.com/minrk/ipython). Fabulous, thanks! I'll merge this into 0.10.1 before cutting the RC, as well as master. See, we move to git, and you get going again ;) Cheers, f From benjaminrk at gmail.com Thu May 13 22:44:09 2010 From: benjaminrk at gmail.com (MinRK) Date: Thu, 13 May 2010 19:44:09 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: That, and I finished my course requirements for grad school yesterday. But definitely hooray for git. -MinRK On Thu, May 13, 2010 at 18:57, Fernando Perez wrote: > On Thu, May 13, 2010 at 3:43 PM, MinRK wrote: >> >> >> That should indeed be quite easy...In fact, I just did it and pushed it to my >> git fork (http://github.com/minrk/ipython). > > Fabulous, thanks! I'll merge this into 0.10.1 before cutting the RC, > as well as master.
> > See, we move to git, and you get going again ;) > > > Cheers, > > f > From fperez.net at gmail.com Fri May 14 03:09:52 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 14 May 2010 00:09:52 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: On Thu, May 13, 2010 at 7:44 PM, MinRK wrote: > That, and I finished my course requirements for grad school yesterday. Ah, details, details ;) Congratulations! Cheers, f From ellisonbg at gmail.com Fri May 14 11:04:14 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 14 May 2010 08:04:14 -0700 Subject: [IPython-dev] taskclient clear and hierarchical parallel processing In-Reply-To: References: Message-ID: Min, Congratulations, I know what that feels like and it is great! It will be great to catch up with you at SciPy. Thanks for looking at the clear(taskid) issue as well. I am working lots on PyZMQ and it will be great to talk more with you about how to work that into the parallel stuff. Cheers, Brian On Thu, May 13, 2010 at 7:44 PM, MinRK wrote: > That, and I finished my course requirements for grad school yesterday. > But definitely hooray for git. > > -MinRK > > On Thu, May 13, 2010 at 18:57, Fernando Perez wrote: >> On Thu, May 13, 2010 at 3:43 PM, MinRK wrote: >>> >>> >>> That should indeed be quite easy...In fact, I just did it and pushed it to my >>> git fork (http://github.com/minrk/ipython). >> >> Fabulous, thanks! ?I'll merge this into 0.10.1 before cutting the RC, >> as well as master. >> >> See, we move to git, and you get going again ;) >> >> >> Cheers, >> >> f >> > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From gael.varoquaux at normalesup.org Fri May 14 13:29:42 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 14 May 2010 19:29:42 +0200 Subject: [IPython-dev] User configuration in recent IPython Message-ID: <20100514172942.GB8607@phare.normalesup.org> Hey, I have just pulled the latest IPython, and I am a bit at loss. It seems that it is not finding my configuration, whereas version 0.9 did. What the prefered way to set configuration for IPython nowadays? Cheers, Ga?l From fperez.net at gmail.com Fri May 14 13:44:16 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 14 May 2010 10:44:16 -0700 Subject: [IPython-dev] User configuration in recent IPython In-Reply-To: <20100514172942.GB8607@phare.normalesup.org> References: <20100514172942.GB8607@phare.normalesup.org> Message-ID: Hey, On Fri, May 14, 2010 at 10:29 AM, Gael Varoquaux wrote: > Hey, > > I have just pulled the latest IPython, and I am a bit at loss. It seems > that it is not finding my configuration, whereas version 0.9 did. What > the prefered way to set configuration for IPython nowadays? 
sorry, I gotta run now but here's some info: http://ipython.scipy.org/doc/nightly/html/config/overview.html In short, my ipython_config.py reads:

import sys

c = get_config()

c.Global.exec_lines = [#'import sys, os, math',
                       #'ip = get_ipython()',
                       ]

if sys.version_info[:2] >= (2,6):
    c.Global.exec_files = ['extras.py']

if sys.platform=='win32':
    #c.InteractiveShell.colors = 'NoColor'
    pass

c.InteractiveShell.colors = 'LightBG'
c.InteractiveShell.colors = 'Linux'
c.InteractiveShell.confirm_exit = False
c.InteractiveShell.readline_omit__names = 2

c.AliasManager.user_aliases = [
    ('cl', 'clear'),
    ('clk', 'rm -f *~ .*~'),
    # color ls
    ('d', 'ls -F -o --color'),
    # ls symbolic links
    ('dl', 'ls -F -o --color %l | grep ^l'),
    # directories or links to directories, both as alias ldalias
    # and alias ldiralias in case users load the real alias ldalias linker
    ('dd', 'ls -F -o --color %l | grep /$'),
    # things which are executable
    ('dx', 'ls -F -o --color %l | grep ^-..x'),
    ]

### EOF

I hope this helps, f From gael.varoquaux at normalesup.org Fri May 14 13:45:24 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 14 May 2010 19:45:24 +0200 Subject: [IPython-dev] User configuration in recent IPython In-Reply-To: References: <20100514172942.GB8607@phare.normalesup.org> Message-ID: <20100514174524.GC8607@phare.normalesup.org> On Fri, May 14, 2010 at 10:44:16AM -0700, Fernando Perez wrote: > I hope this helps, Yes, that will do the trick. I managed to miss this information, probably because I missed the night docs. Thanks, Gaël From gael.varoquaux at normalesup.org Fri May 14 13:50:56 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 14 May 2010 19:50:56 +0200 Subject: [IPython-dev] User configuration in recent IPython In-Reply-To: <20100514174524.GC8607@phare.normalesup.org> References: <20100514172942.GB8607@phare.normalesup.org> <20100514174524.GC8607@phare.normalesup.org> Message-ID: <20100514175056.GA3599@phare.normalesup.org> On Fri, May 14, 2010 at 07:45:24PM +0200, Gael Varoquaux wrote: > Yes, that will do the trick. I managed to miss this information, probably > because I missed the night docs. Hey, That's a lot of advanced information. It will probably scare users. How about trying to copy the default file to the .ipython directory if there is none? This will probably be good to kick start users (I know the suggestion to do this is in the docs, but it is in the middle of many other things). My 2 cents, Gaël From ellisonbg at gmail.com Fri May 14 14:15:28 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 14 May 2010 11:15:28 -0700 Subject: [IPython-dev] User configuration in recent IPython In-Reply-To: <20100514175056.GA3599@phare.normalesup.org> References: <20100514172942.GB8607@phare.normalesup.org> <20100514174524.GC8607@phare.normalesup.org> <20100514175056.GA3599@phare.normalesup.org> Message-ID: You can use the %install_default_config magic to install the default config files. Cheers, Brian On Fri, May 14, 2010 at 10:50 AM, Gael Varoquaux wrote: > On Fri, May 14, 2010 at 07:45:24PM +0200, Gael Varoquaux wrote: >> Yes, that will do the trick. I managed to miss this information, probably >> because I missed the night docs. > > Hey, > > That's a lot of advanced information. It will probably scare users. How > about trying to copy the default file to the .ipython directory if there > is none? This will probably be good to kick start users (I know the > suggestion to do this is in the docs, but it is in the middle of many > other things).
> > My 2 cents, > > Ga?l > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From gael.varoquaux at normalesup.org Fri May 14 14:18:21 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 14 May 2010 20:18:21 +0200 Subject: [IPython-dev] User configuration in recent IPython In-Reply-To: References: <20100514172942.GB8607@phare.normalesup.org> <20100514174524.GC8607@phare.normalesup.org> <20100514175056.GA3599@phare.normalesup.org> Message-ID: <20100514181820.GB3599@phare.normalesup.org> On Fri, May 14, 2010 at 11:15:28AM -0700, Brian Granger wrote: > You can use the %install_default_config magic to install the default > config files. Right, but why not do this by default: the user has to find out all this and I bet only a fraction of users will put the effort. With the old system only a tiny fraction of users actually edited the default configuration file to change the colors, eventhough they were getting unreadable colors on white backgrounds. So adding one step is just going to loose more users. My 2 cents, Ga?l From ellisonbg at gmail.com Fri May 14 14:25:29 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 14 May 2010 11:25:29 -0700 Subject: [IPython-dev] User configuration in recent IPython In-Reply-To: <20100514181820.GB3599@phare.normalesup.org> References: <20100514172942.GB8607@phare.normalesup.org> <20100514174524.GC8607@phare.normalesup.org> <20100514175056.GA3599@phare.normalesup.org> <20100514181820.GB3599@phare.normalesup.org> Message-ID: On Fri, May 14, 2010 at 11:18 AM, Gael Varoquaux wrote: > On Fri, May 14, 2010 at 11:15:28AM -0700, Brian Granger wrote: >> You can use the %install_default_config magic to install the default >> config files. > > Right, but why not do this by default: the user has to find out all this > and I bet only a fraction of users will put the effort. The finding/resolution of config files in IPython is actually quite complicated already (this is an understatement!). Copying the config file automatically adds another layer of complexity that I don't think is wise to introduce at this point - especially because we are still fine tuning the new config system. But, in the long run, I think this is something we might want to revisit. > With the old system only a tiny fraction of users actually edited the > default configuration file to change the colors, eventhough they were > getting unreadable colors on white backgrounds. So adding one step is > just going to loose more users. Not sure I agree with that. Many of the tools we use (git, matplotlib) don't create a default config file. I agree it could be better documented though. Cheers, Brian > My 2 cents, > > Ga?l > -- Brian E. Granger, Ph.D. 
Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From gael.varoquaux at normalesup.org Fri May 14 14:31:22 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 14 May 2010 20:31:22 +0200 Subject: [IPython-dev] embedding ipython In-Reply-To: References: Message-ID: <20100514183122.GE19797@phare.normalesup.org> On Thu, May 13, 2010 at 01:41:25AM -0700, Fernando Perez wrote: > Currently we have the 'small and light' ipythonx widget that Gael > wrote and the more featureful wxipython that Laurent wrote, but these > share very little. Having someone work on a Wx-based tool that gets > the best of these and is developed on the updated apis we now have > would be fantastic. I have been following very lightly what is going on in the IPython world with regards to front end and interactive use. I must say that I am very excited by what I hear. It looks like you guys are finally opening the road to a solid architecture and well-designed GUI applications using IPython. If someone is going to set off to write a newer Wx frontend, I think that the best way to do it would be to take the current one, 'carve out' anything that is not purely wx-related and throw it away, to replace it with the same core than the currently-developped Qt work (and maybe eastablish a link with the web projects that Ondrej was mentioning). I had to go through 'interesting' hoops to get the right 'feeling' for an interactive frontend without any threads (for instance instant updates to the screen as the code was printing messages). The new developments will enable to avoid these workarounds. Some of the Wx code with timers and 'wx.Yield' can thus probably go away, although I don't know the new archicture and can't really invest time to give an educated opinion. The Wx-related code for keyboard events processing, cursor movement, and printing to screen is probably useful, on the other hand. So I would say: hack with no restriction, rip my old code to pieces, there is no point having legacy and hacky code lying around when we can do much better. I will take no offence in having this code fully re-written. Cheers, Ga?l From ondrej at certik.cz Sun May 16 12:24:33 2010 From: ondrej at certik.cz (Ondrej Certik) Date: Sun, 16 May 2010 09:24:33 -0700 Subject: [IPython-dev] pyzmq: kernel.py doesn't work Message-ID: Hi, this is what I get when tried the latest git zeromz+pyzmq: ondrej at crow:~/repos/pyzmq/examples/kernel(master)$ ./kernel.py Starting the kernel... On: tcp://127.0.0.1:5555 tcp://127.0.0.1:5556 Use Ctrl-\ (NOT Ctrl-C!) to terminate. 
{'content': {u'data': 'Traceback (most recent call last):\n', u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 0, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} {'content': {u'data': ' File "./kernel.py", line 257, in \n', u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 1, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} {'content': {u'data': ' main()\n', u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 2, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} {'content': {u'data': ' File "./kernel.py", line 253, in main\n', u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 3, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} {'content': {u'data': ' kernel.start()\n', u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 4, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} {'content': {u'data': ' File "./kernel.py", line 205, in start\n', u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 5, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} {'content': {u'data': ' ident, msg = self.reply_socket.recv_json(ident=True)\n', u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 6, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} {'content': {u'data': ' File "_zmq.pyx", line 692, in zmq._zmq.Socket.recv_json (zmq/_zmq.c:5145)\n', u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 7, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} {'content': {u'data': "TypeError: recv_json() got an unexpected keyword argument 'ident'\n", u'name': u'stderr'}, 'header': {'username': u'kernel', 'msg_id': 8, 'session': '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, 'msg_type': u'stream', 'parent_header': {}} and this is when I run the frontend: ondrej at crow:~/repos/pyzmq/examples/kernel(master)$ ./frontend.py Python 2.6.4 (r264:75706, Dec 7 2009, 18:43:55) [GCC 4.4.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. (Console) Py>>> 1+1 Traceback (most recent call last): File "./frontend.py", line 194, in main() File "./frontend.py", line 190, in main client.interact() File "./frontend.py", line 165, in interact self.console.interact() File "/usr/lib/python2.6/code.py", line 243, in interact more = self.push(line) File "/usr/lib/python2.6/code.py", line 265, in push more = self.runsource(source, self.filename) File "/usr/lib/python2.6/code.py", line 87, in runsource self.runcode(code) File "./frontend.py", line 136, in runcode 'execute_request', dict(code=src)) File "/home/ondrej/repos/pyzmq/examples/kernel/session.py", line 87, in send socket.send_json(msg, ident=ident) File "_zmq.pyx", line 676, in zmq._zmq.Socket.send_json (zmq/_zmq.c:4963) TypeError: send_json() got an unexpected keyword argument 'ident' I have installed pyzmq using: ./setup.py install --home=~/usr I bet the problem is with some ident kwarg, I guess it should be easy to fix it. 
Ondrej From ellisonbg at gmail.com Sun May 16 12:46:11 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 16 May 2010 09:46:11 -0700 Subject: [IPython-dev] pyzmq: kernel.py doesn't work In-Reply-To: References: Message-ID: Ondrej, Sorry about this, the underlying 0MQ APIs have changed in the last month to add some new features and I haven't updated the examples yet. I will try to finish that today. Cheers, Brian On Sun, May 16, 2010 at 9:24 AM, Ondrej Certik wrote: > Hi, > > this is what I get when tried the latest git zeromz+pyzmq: > > > ondrej at crow:~/repos/pyzmq/examples/kernel(master)$ ./kernel.py > Starting the kernel... > On: tcp://127.0.0.1:5555 tcp://127.0.0.1:5556 > Use Ctrl-\ (NOT Ctrl-C!) to terminate. > {'content': {u'data': 'Traceback (most recent call last):\n', u'name': > u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 0, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > {'content': {u'data': ' ?File "./kernel.py", line 257, in \n', > u'name': u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 1, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > {'content': {u'data': ' ? ?main()\n', u'name': u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 2, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > {'content': {u'data': ' ?File "./kernel.py", line 253, in main\n', > u'name': u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 3, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > {'content': {u'data': ' ? ?kernel.start()\n', u'name': u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 4, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > {'content': {u'data': ' ?File "./kernel.py", line 205, in start\n', > u'name': u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 5, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > {'content': {u'data': ' ? ?ident, msg = > self.reply_socket.recv_json(ident=True)\n', u'name': u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 6, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > {'content': {u'data': ' ?File "_zmq.pyx", line 692, in > zmq._zmq.Socket.recv_json (zmq/_zmq.c:5145)\n', u'name': u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 7, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > {'content': {u'data': "TypeError: recv_json() got an unexpected > keyword argument 'ident'\n", u'name': u'stderr'}, > ?'header': {'username': u'kernel', 'msg_id': 8, 'session': > '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, > ?'msg_type': u'stream', > ?'parent_header': {}} > > > > > > and this is when I run the frontend: > > > ondrej at crow:~/repos/pyzmq/examples/kernel(master)$ ./frontend.py > Python 2.6.4 (r264:75706, Dec ?7 2009, 18:43:55) > [GCC 4.4.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > (Console) > Py>>> 1+1 > Traceback (most recent call last): > ?File "./frontend.py", line 194, in > ? ?main() > ?File "./frontend.py", line 190, in main > ? ?client.interact() > ?File "./frontend.py", line 165, in interact > ? ?self.console.interact() > ?File "/usr/lib/python2.6/code.py", line 243, in interact > ? 
?more = self.push(line) > ?File "/usr/lib/python2.6/code.py", line 265, in push > ? ?more = self.runsource(source, self.filename) > ?File "/usr/lib/python2.6/code.py", line 87, in runsource > ? ?self.runcode(code) > ?File "./frontend.py", line 136, in runcode > ? ?'execute_request', dict(code=src)) > ?File "/home/ondrej/repos/pyzmq/examples/kernel/session.py", line 87, in send > ? ?socket.send_json(msg, ident=ident) > ?File "_zmq.pyx", line 676, in zmq._zmq.Socket.send_json (zmq/_zmq.c:4963) > TypeError: send_json() got an unexpected keyword argument 'ident' > > > I have installed pyzmq using: > > ./setup.py install --home=~/usr > > > I bet the problem is with some ident kwarg, I guess it should be easy to fix it. > > Ondrej > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Sun May 16 14:44:02 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 16 May 2010 11:44:02 -0700 Subject: [IPython-dev] pyzmq: kernel.py doesn't work In-Reply-To: References: Message-ID: Ondrej, I have fixed the kernel examples to work with the updated APIs: http://github.com/ellisonbg/pyzmq/commit/75f61fad9e0ce56f6da76a9ad516f3ba68eb4f44 WARNING: a lot of work has been done on 0MQ itself lately and PyZMQ requires teh latest master of 0MQ itself, so you may have to update your ZMQ build. Once the next version of 0MQ is released, I will probably start to release PyZMQ in synch with 0MQ. Cheers, Brian On Sun, May 16, 2010 at 9:46 AM, Brian Granger wrote: > Ondrej, > > Sorry about this, the underlying 0MQ APIs have changed in the last > month to add some new features and I haven't updated the examples yet. > ?I will try to finish that today. > > Cheers, > > Brian > > On Sun, May 16, 2010 at 9:24 AM, Ondrej Certik wrote: >> Hi, >> >> this is what I get when tried the latest git zeromz+pyzmq: >> >> >> ondrej at crow:~/repos/pyzmq/examples/kernel(master)$ ./kernel.py >> Starting the kernel... >> On: tcp://127.0.0.1:5555 tcp://127.0.0.1:5556 >> Use Ctrl-\ (NOT Ctrl-C!) to terminate. >> {'content': {u'data': 'Traceback (most recent call last):\n', u'name': >> u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 0, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> {'content': {u'data': ' ?File "./kernel.py", line 257, in \n', >> u'name': u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 1, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> {'content': {u'data': ' ? ?main()\n', u'name': u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 2, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> {'content': {u'data': ' ?File "./kernel.py", line 253, in main\n', >> u'name': u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 3, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> {'content': {u'data': ' ? 
?kernel.start()\n', u'name': u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 4, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> {'content': {u'data': ' ?File "./kernel.py", line 205, in start\n', >> u'name': u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 5, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> {'content': {u'data': ' ? ?ident, msg = >> self.reply_socket.recv_json(ident=True)\n', u'name': u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 6, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> {'content': {u'data': ' ?File "_zmq.pyx", line 692, in >> zmq._zmq.Socket.recv_json (zmq/_zmq.c:5145)\n', u'name': u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 7, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> {'content': {u'data': "TypeError: recv_json() got an unexpected >> keyword argument 'ident'\n", u'name': u'stderr'}, >> ?'header': {'username': u'kernel', 'msg_id': 8, 'session': >> '5c4579fd-ed9d-4aa1-891e-09f157be9488'}, >> ?'msg_type': u'stream', >> ?'parent_header': {}} >> >> >> >> >> >> and this is when I run the frontend: >> >> >> ondrej at crow:~/repos/pyzmq/examples/kernel(master)$ ./frontend.py >> Python 2.6.4 (r264:75706, Dec ?7 2009, 18:43:55) >> [GCC 4.4.1] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> (Console) >> Py>>> 1+1 >> Traceback (most recent call last): >> ?File "./frontend.py", line 194, in >> ? ?main() >> ?File "./frontend.py", line 190, in main >> ? ?client.interact() >> ?File "./frontend.py", line 165, in interact >> ? ?self.console.interact() >> ?File "/usr/lib/python2.6/code.py", line 243, in interact >> ? ?more = self.push(line) >> ?File "/usr/lib/python2.6/code.py", line 265, in push >> ? ?more = self.runsource(source, self.filename) >> ?File "/usr/lib/python2.6/code.py", line 87, in runsource >> ? ?self.runcode(code) >> ?File "./frontend.py", line 136, in runcode >> ? ?'execute_request', dict(code=src)) >> ?File "/home/ondrej/repos/pyzmq/examples/kernel/session.py", line 87, in send >> ? ?socket.send_json(msg, ident=ident) >> ?File "_zmq.pyx", line 676, in zmq._zmq.Socket.send_json (zmq/_zmq.c:4963) >> TypeError: send_json() got an unexpected keyword argument 'ident' >> >> >> I have installed pyzmq using: >> >> ./setup.py install --home=~/usr >> >> >> I bet the problem is with some ident kwarg, I guess it should be easy to fix it. >> >> Ondrej >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > -- Brian E. Granger, Ph.D. 
Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ondrej at certik.cz Sun May 16 15:11:26 2010 From: ondrej at certik.cz (Ondrej Certik) Date: Sun, 16 May 2010 12:11:26 -0700 Subject: [IPython-dev] pyzmq: kernel.py doesn't work In-Reply-To: References: Message-ID: On Sun, May 16, 2010 at 11:44 AM, Brian Granger wrote: > Ondrej, > > I have fixed the kernel examples to work with the updated APIs: > > http://github.com/ellisonbg/pyzmq/commit/75f61fad9e0ce56f6da76a9ad516f3ba68eb4f44 Indeed, it works, thanks! > > WARNING: ?a lot of work has been done on 0MQ itself lately and PyZMQ > requires teh latest master of 0MQ itself, so you may have to update > your ZMQ build. ?Once the next version of 0MQ is released, I will > probably start to release PyZMQ in synch with 0MQ. I use the latest ZMQ git and all works fine. Ondrej From ellisonbg at gmail.com Sun May 16 15:15:25 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 16 May 2010 12:15:25 -0700 Subject: [IPython-dev] pyzmq: kernel.py doesn't work In-Reply-To: References: Message-ID: Great! On Sun, May 16, 2010 at 12:11 PM, Ondrej Certik wrote: > On Sun, May 16, 2010 at 11:44 AM, Brian Granger wrote: >> Ondrej, >> >> I have fixed the kernel examples to work with the updated APIs: >> >> http://github.com/ellisonbg/pyzmq/commit/75f61fad9e0ce56f6da76a9ad516f3ba68eb4f44 > > Indeed, it works, thanks! > >> >> WARNING: ?a lot of work has been done on 0MQ itself lately and PyZMQ >> requires teh latest master of 0MQ itself, so you may have to update >> your ZMQ build. ?Once the next version of 0MQ is released, I will >> probably start to release PyZMQ in synch with 0MQ. > > I use the latest ZMQ git and all works fine. > > Ondrej > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From andresete.chaos at gmail.com Mon May 17 04:38:30 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Mon, 17 May 2010 03:38:30 -0500 Subject: [IPython-dev] ipzmq example don`t work Message-ID: hi all!! I am working with pyzmq, the idea is to wirte a module to ipython In the attached file on this email, I wrote an example or pyzmq but it dont work. the idea is write a zmq serve class an other zmq client class, with which you can send and recive message. you run this code this way $python ipzmq.py in a separate console $python ipzmqtest.py ipzmqtest.py send massge to ipzmq.py that it have a server estarted, but the message the message does not come thks!! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ipzmq.tgz Type: application/x-gzip Size: 1147 bytes Desc: not available URL: From david.n.mashburn at gmail.com Mon May 17 15:44:11 2010 From: david.n.mashburn at gmail.com (David Mashburn) Date: Mon, 17 May 2010 15:44:11 -0400 Subject: [IPython-dev] embedding ipython In-Reply-To: <20100514183122.GE19797@phare.normalesup.org> References: <20100514183122.GE19797@phare.normalesup.org> Message-ID: <4BF19C8B.7030903@gmail.com> Just wanted to put in my support as well! I love the idea of a common wx/qt code base for ipython! Just fyi, I am the current maintainer of the wx.py suite of tools (written by Patrick O'Brien: PyCrust and now the notebook version, PySlices). 
If you haven't looked at it lately, you might want to check it out: http://code.google.com/p/wxpysuite/ I have built in a number of concepts into the notebook interface: * Re-editable multi-command code blocks * Input and output cells (called "slices") that can be folded, created, deleted, split and merged. * An "almost python" save format for input and output that can be run as a regular python script if no magic features are used. I've been hacking on the project for several years now and use it all the time in my work in physics. That said, I really like ipython and would love to see some solid ipython GUI tools with threading support and other features that are too much for me to tackle by myself in my small project. So if I can help with ipython's gui, I'll be happy to! I've really enjoyed hacking PySlices together and learned some things about code blocks and the interactive interpreter. I'm sure some of the ways I have done things are good and others are less than ideal, but I thought you might like to see how someone else did it and at least share some ideas. Let me know where I can help! Thanks, -David P.S. I've also got some ideas about using unicode with python that are a little off the beaten path... you can see what I mean if you check out SymPySlices... :-) Gael Varoquaux wrote: > On Thu, May 13, 2010 at 01:41:25AM -0700, Fernando Perez wrote: > >> Currently we have the 'small and light' ipythonx widget that Gael >> wrote and the more featureful wxipython that Laurent wrote, but these >> share very little. Having someone work on a Wx-based tool that gets >> the best of these and is developed on the updated apis we now have >> would be fantastic. >> > > I have been following very lightly what is going on in the IPython world > with regards to front end and interactive use. I must say that I am very > excited by what I hear. It looks like you guys are finally opening the > road to a solid architecture and well-designed GUI applications using > IPython. > > If someone is going to set off to write a newer Wx frontend, I think that > the best way to do it would be to take the current one, 'carve out' > anything that is not purely wx-related and throw it away, to replace it > with the same core than the currently-developped Qt work (and maybe > eastablish a link with the web projects that Ondrej was mentioning). > > I had to go through 'interesting' hoops to get the right 'feeling' for an > interactive frontend without any threads (for instance instant updates to > the screen as the code was printing messages). The new developments > will enable to avoid these workarounds. Some of the Wx code with timers > and 'wx.Yield' can thus probably go away, although I don't know the new > archicture and can't really invest time to give an educated opinion. > > The Wx-related code for keyboard events processing, cursor movement, and > printing to screen is probably useful, on the other hand. > > So I would say: hack with no restriction, rip my old code to pieces, > there is no point having legacy and hacky code lying around when we can > do much better. I will take no offence in having this code fully > re-written. 
> > Cheers, > > Ga?l > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > From piotr.zolnierczuk at gmail.com Mon May 17 15:50:08 2010 From: piotr.zolnierczuk at gmail.com (Piotr Zolnierczuk) Date: Mon, 17 May 2010 15:50:08 -0400 Subject: [IPython-dev] embedding ipython In-Reply-To: <4BF19C8B.7030903@gmail.com> References: <20100514183122.GE19797@phare.normalesup.org> <4BF19C8B.7030903@gmail.com> Message-ID: Great. Can we have a little chat during SciPy conference? Piotr ____________________________________________________________ Piotr Adam Zolnierczuk e-mail: piotr.zolnierczuk at gmail.com ____________________________________________________________ written on recycled electrons, sent from my droid On May 17, 2010 3:44 PM, "David Mashburn" wrote: Just wanted to put in my support as well! I love the idea of a common wx/qt code base for ipython! Just fyi, I am the current maintainer of the wx.py suite of tools (written by Patrick O'Brien: PyCrust and now the notebook version, PySlices). If you haven't looked at it lately, you might want to check it out: http://code.google.com/p/wxpysuite/ I have built in a number of concepts into the notebook interface: * Re-editable multi-command code blocks * Input and output cells (called "slices") that can be folded, created, deleted, split and merged. * An "almost python" save format for input and output that can be run as a regular python script if no magic features are used. I've been hacking on the project for several years now and use it all the time in my work in physics. That said, I really like ipython and would love to see some solid ipython GUI tools with threading support and other features that are too much for me to tackle by myself in my small project. So if I can help with ipython's gui, I'll be happy to! I've really enjoyed hacking PySlices together and learned some things about code blocks and the interactive interpreter. I'm sure some of the ways I have done things are good and others are less than ideal, but I thought you might like to see how someone else did it and at least share some ideas. Let me know where I can help! Thanks, -David P.S. I've also got some ideas about using unicode with python that are a little off the beaten path... you can see what I mean if you check out SymPySlices... :-) Gael Varoquaux wrote: > > > > On Thu, May 13, 2010 at 01:41:25AM -0700, Fernando Perez wrote: > > > >> > >> Currently we have the ... > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.n.mashburn at gmail.com Mon May 17 15:58:46 2010 From: david.n.mashburn at gmail.com (David Mashburn) Date: Mon, 17 May 2010 15:58:46 -0400 Subject: [IPython-dev] embedding ipython In-Reply-To: References: <20100514183122.GE19797@phare.normalesup.org> <4BF19C8B.7030903@gmail.com> Message-ID: <4BF19FF6.1030803@gmail.com> Unfortunately I cannot attend Scipy this year... I'm happy to email, or if you want to do a phone conference during the conference time (or another time), we could probably do that, too! Thanks, -David Piotr Zolnierczuk wrote: > > Great. > Can we have a little chat during SciPy conference? 
> > Piotr > > ____________________________________________________________ > Piotr Adam Zolnierczuk > e-mail: piotr.zolnierczuk at gmail.com > ____________________________________________________________ > written on recycled electrons, sent from my droid > >> On May 17, 2010 3:44 PM, "David Mashburn" > > wrote: >> >> Just wanted to put in my support as well! I love the idea of a >> common wx/qt code base for ipython! >> >> Just fyi, I am the current maintainer of the wx.py suite of tools >> (written by Patrick O'Brien: PyCrust and now the notebook version, >> PySlices). If you haven't looked at it lately, you might want to >> check it out: >> http://code.google.com/p/wxpysuite/ >> >> I have built in a number of concepts into the notebook interface: >> >> * Re-editable multi-command code blocks >> * Input and output cells (called "slices") that can be folded, >> created, deleted, split and merged. >> * An "almost python" save format for input and output that can be run >> as a regular python script if no magic features are used. >> >> I've been hacking on the project for several years now and use it all >> the time in my work in physics. >> >> That said, I really like ipython and would love to see some solid >> ipython GUI tools with threading support and other features that are >> too much for me to tackle by myself in my small project. So if I can >> help with ipython's gui, I'll be happy to! >> >> I've really enjoyed hacking PySlices together and learned some things >> about code blocks and the interactive interpreter. I'm sure some of >> the ways I have done things are good and others are less than ideal, >> but I thought you might like to see how someone else did it and at >> least share some ideas. >> >> Let me know where I can help! >> >> Thanks, >> -David >> >> P.S. >> I've also got some ideas about using unicode with python that are a >> little off the beaten path... you can see what I mean if you check >> out SymPySlices... :-) >> >> >> Gael Varoquaux wrote: >> >> > >> > On Thu, May 13, 2010 at 01:41:25AM -0700, Fernando Perez wrote: >> > >> >> >> >> Currently we have the ... >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> >> From muzgash.lists at gmail.com Wed May 19 00:53:58 2010 From: muzgash.lists at gmail.com (Gerardo Gutierrez) Date: Tue, 18 May 2010 23:53:58 -0500 Subject: [IPython-dev] IPython-ZMQ about client and server division. In-Reply-To: References: Message-ID: 2010/5/13 Omar Andr?s Zapata Mesa > > > 2010/5/13 Gerardo Gutierrez > >> Hi everyone. >> >> >> This week, Omar and I were discussing about the fundamentals of this >> project (again) which apparently are not clear enough (at least for me). >> > > *The idea is that GUI use zmq-client to connect to the kernel, ipython-zmq > will have a console that will contain classes to facilitate the connection > of any other frontend.* > > >> The thing here that I'm talking about is based on Omar's idea that every >> other frontend should connect to the client not the kernel, I'm referring to >> the *client* as a frontend because I think it should be written in the same >> manner as the other frontends, this is if the frontends were going to >> connect to the kernel, as I thought initially. >> >> This idea, I think, will lighten the tasks the other frontends will have, >> but I think also, that the idea of having to rely on two processes for a >> simple client/frontend (Qt,ncurses,etc) is not healthy. 
>> >> I'm not really sure if this questioning here is explained in the diagram >> that Omar has in the wiki, but it's very important for both of us to clarify >> this soon. >> >> I hope Omar reads this message soon so he can explain better his idea. >> >> Sorry if I'm wrong. >> >> >> >> *Best regards.* >> -- >> Gerardo Guti?rrez Guti?rrez >> Physics student >> Universidad de Antioquia >> Computational physics and astrophysics group (FACom >> ) >> Computational science and development branch(FACom-dev >> ) >> Usuario Linux #492295 >> >> >> >> Ok, what I understand from this is that Omar wants to make the client as a set of classes that the other frontends can import or inherit it. > >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hans_meine at gmx.net Wed May 19 08:54:25 2010 From: hans_meine at gmx.net (Hans Meine) Date: Wed, 19 May 2010 14:54:25 +0200 Subject: [IPython-dev] Ctrl-C regression with current git master and -q4thread Message-ID: <201005191454.25748.hans_meine@gmx.net> Hi, I am just trying out the current ipython from github, and I noticed that I cannot clear the commandline using Ctrl-C anymore when using -q4thread. Even worse, the next command that I confirm using [enter] is getting a delayed KeyboardInterrupt. Have a nice day, Hans From JDM at MarchRay.net Wed May 19 12:12:07 2010 From: JDM at MarchRay.net (Jonathan March) Date: Wed, 19 May 2010 11:12:07 -0500 Subject: [IPython-dev] 0.11 regression in %who_ls - report issue and request code review Message-ID: http://github.com/ipython/ipython/issues/issue/119 %who_ls (hence also %who and %whos) mistakenly interpret any type filter as str. Broken by http://github.com/ipython/ipython/commit/c870360cc5e0b06054ab92a649f034f47824bf3b Fixed by http://github.com/jdmarch/ipython/commit/322eec22bf6fc65d34a6ebe7e4c4728c8a2cd4e8 Questions: 1. The fix just changes one line, but I have configured my editor to remove trailing white space, which it did on dozens of lines in this file. This cleanup makes it harder to view the diff on github (unlike local diff tools which can ignore whitespace differences). So is it better not to do such cleanup? 2. I don't want to create more work than necessary for dev team. For something tiny like this, should I skip the bug report and code review steps and go straight to pull request? Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorgen.stenarson at bostream.nu Wed May 19 12:50:26 2010 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Wed, 19 May 2010 18:50:26 +0200 Subject: [IPython-dev] git question Message-ID: <4BF416D2.2000400@bostream.nu> I'm trying to get to know git. I have made my on fork on github following the instructions in the gitwash document. But when I try to commit I get the following error message. What is the recommended way to fix this? Should I just set i18n.commitencoding to utf-8? Or should it be something else? I run on windows with portable-msysgit. C:\python\external\ipython>git commit -m "Testing" Warning: commit message does not conform to UTF-8. You may want to amend it after fixing the message, or set the config variable i18n.commitencoding to the encoding your project uses. 
[master 6a420af] Testing 1 files changed, 1 insertions(+), 0 deletions(-) create mode 100644 slask.py /J?rgen From jorgen.stenarson at bostream.nu Wed May 19 12:59:33 2010 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Wed, 19 May 2010 18:59:33 +0200 Subject: [IPython-dev] unicode error issue #25 Message-ID: <4BF418F5.5000206@bostream.nu> Hi I have been thinking about the problems reported in issue #25. As a start on getting tests for this I would like to instantiate an ipythonapp object and then have it execute a line of code and then capture or just view the output. Something like: app = ipythonapp() app.runline("print '???'") What is the recommended way to do something like this today? Is there perhaps already some test that does something similar that I can look at? /J?rgen From ellisonbg at gmail.com Wed May 19 13:50:45 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Wed, 19 May 2010 10:50:45 -0700 Subject: [IPython-dev] unicode error issue #25 In-Reply-To: <4BF418F5.5000206@bostream.nu> References: <4BF418F5.5000206@bostream.nu> Message-ID: Do you want to parse config files or command line options, or simple instantiate a minimal IPython object? Brian On Wed, May 19, 2010 at 9:59 AM, J?rgen Stenarson wrote: > Hi > > I have been thinking about the problems reported in issue #25. > > As a start on getting tests for this I would like to instantiate an > ipythonapp object and then have it execute a line of code and then > capture or just view the output. > > Something like: > > app = ipythonapp() > app.runline("print '???'") > > > What is the recommended way to do something like this today? Is there > perhaps already some test that does something similar that I can look at? > > /J?rgen > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Wed May 19 14:19:56 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Wed, 19 May 2010 11:19:56 -0700 Subject: [IPython-dev] Ctrl-C regression with current git master and -q4thread In-Reply-To: <201005191454.25748.hans_meine@gmx.net> References: <201005191454.25748.hans_meine@gmx.net> Message-ID: Simple answer: we have removed this feature Less simple answer: this feature was reclassified as a bug a fixed :) Complex answer: The new approach to GUI integration works like this. Anytime you stop typing, raw_input calls a hook that runs the events loop. When you start typing again, the hook stops running the event loop. Thus, when you stop typing (right before you type ctrl-C), the event loop starts. When you type ctrl-C then KeyboardInterrupt is raised in the middle of the event loop code. This code has two options: 1. Handle the KeyboardInterrupt by catching and using pass. This is what we do. 2. Let the KeyboardInterrupt propagate. The problem with this is that the code in raw_input that calls the hook that runs the event loop doesn't have logic for handling KeyboardInterrupt and things crash. The only reason this sort of worked before in IPython is that we ran the event loop in a second thread and attempted to propagate the ctrl-C signal across threads (it didn't really work, which is why it was unstable). 
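To illustrate option 1 above, here is a minimal, hypothetical sketch; it is stand-in code, not IPython's actual implementation, and pump_events_once() is a placeholder for running one iteration of a real Qt/wx event loop. The point is simply that the hook which runs while the interpreter waits for keystrokes catches KeyboardInterrupt itself:

import time

def pump_events_once():
    # Placeholder for processing one batch of pending GUI events
    # (in the real frontends this is Qt's or wx's event loop).
    time.sleep(0.01)

def inputhook():
    """Callback invoked repeatedly while waiting for keyboard input,
    in the spirit of PyOS_InputHook; returns 0 like the C-level hook."""
    try:
        pump_events_once()
    except KeyboardInterrupt:
        # Option 1: swallow the interrupt here, because the code in
        # raw_input that calls this hook has no logic for handling it.
        pass
    return 0

Option 2 would amount to dropping the try/except and letting KeyboardInterrupt propagate, which is the case described above as crashing the surrounding raw_input machinery.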
Cheers, Brian On Wed, May 19, 2010 at 5:54 AM, Hans Meine wrote: > Hi, > > I am just trying out the current ipython from github, and I noticed that I > cannot clear the commandline using Ctrl-C anymore when using -q4thread. > Even worse, the next command that I confirm using [enter] is getting a delayed > KeyboardInterrupt. > > Have a nice day, > ?Hans > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From hans_meine at gmx.net Thu May 20 03:26:59 2010 From: hans_meine at gmx.net (Hans Meine) Date: Thu, 20 May 2010 09:26:59 +0200 Subject: [IPython-dev] Ctrl-C regression with current git master and -q4thread In-Reply-To: References: <201005191454.25748.hans_meine@gmx.net> Message-ID: <201005200927.00270.hans_meine@gmx.net> Hi Brian! On Wednesday 19 May 2010 20:19:56 Brian Granger wrote: > Simple answer: we have removed this feature OK, but that's inacceptable if not temporary, isn't it? > Thus, when you stop typing (right before you type ctrl-C), the event > loop starts. When you type ctrl-C then KeyboardInterrupt is raised in > the middle of the event loop code. This code has two options: > > 1. Handle the KeyboardInterrupt by catching and using pass. This is > what we do. > 2. Let the KeyboardInterrupt propagate. The problem with this is > that the code in raw_input that calls the hook that runs the event > loop doesn't have logic for handling KeyboardInterrupt and things > crash. Looks like 2. needs to be fixed then, no? I just had a look, but it seems that the custom inputhook for Qt is buried inside PyQt itself? > The only reason this sort of worked before in IPython is that we ran > the event loop in a second thread and attempted to propagate the > ctrl-C signal across threads (it didn't really work, which is why it > was unstable). Let's not talk about the old code; we're all happy that this hack is no longer used.. ;-) (Yes, it was unreliable and yes, people who used it a log - like me - did suffer from that every now and then.) Have a nice day, Hans From ellisonbg at gmail.com Thu May 20 11:45:55 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 20 May 2010 08:45:55 -0700 Subject: [IPython-dev] Ctrl-C regression with current git master and -q4thread In-Reply-To: <201005200927.00270.hans_meine@gmx.net> References: <201005191454.25748.hans_meine@gmx.net> <201005200927.00270.hans_meine@gmx.net> Message-ID: On Thu, May 20, 2010 at 12:26 AM, Hans Meine wrote: > Hi Brian! > > On Wednesday 19 May 2010 20:19:56 Brian Granger wrote: >> Simple answer: ?we have removed this feature > > OK, but that's inacceptable if not temporary, isn't it? I agree it is undesirable, but I don't know if this is a solvable problem, so it may not be temporary. >> Thus, when you stop typing (right before you type ctrl-C), the event >> loop starts. ?When you type ctrl-C then KeyboardInterrupt is raised in >> the middle of the event loop code. ?This code has two options: >> >> 1. ?Handle the KeyboardInterrupt by catching and using pass. ?This is >> what we do. >> 2. ?Let the KeyboardInterrupt propagate. ?The problem with this is >> that the code in raw_input that calls the hook that runs the event >> loop doesn't have logic for handling KeyboardInterrupt and things >> crash. > > Looks like 2. needs to be fixed then, no? 
?I just had a look, but it seems > that the custom inputhook for Qt is buried inside PyQt itself? Yes, the custom inputhook for Qt is in PyQt, but we could implement our own version in pure Python using ctypes (like we did for wx). I completely agree that we should look at this further. Currently the behavior between Tk, Wx and Qt is different (Tk actually works!), which is also not good. But, it might be the case that the problems are in PyQt or Python itself. We really want more people to start using this stuff so we can find these bugs. I don't have time to look at this right now, but have created a ticket for this targeted at 0.11: http://github.com/ipython/ipython/issues/issue/122 >> The only reason this sort of worked before in IPython is that we ran >> the event loop in a second thread and attempted to propagate the >> ctrl-C signal across threads (it didn't really work, which is why it >> was unstable). > > Let's not talk about the old code; we're all happy that this hack is no longer > used.. ;-) ?(Yes, it was unreliable and yes, people who used it a log - like > me - did suffer from that every now and then.) I only mention this because some folks (not you) still talk about bringing the old stuff back. I don't see that as an option. Cheers, Brian > Have a nice day, > ?Hans > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From andresete.chaos at gmail.com Sat May 22 12:09:38 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Sat, 22 May 2010 11:09:38 -0500 Subject: [IPython-dev] pyzmq problems Message-ID: hi all I am working yet in zmq module to ipython, but I have the next problem using json. the code are in http://github.com/omazapa/ipython into the dir ipython/IPython/core/ I run my zmq server prototype *the output is* omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python ipzmq_server.py reply socket= tcp://127.0.0.1:5555 publisher socket = tcp://127.0.0.1:5556 Server started. in this moment I am waiting json`s message in reply socket. then I run my client prototype *the output is* omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python ipzmq_client.py request socket = tcp://127.0.0.1:5556 subscribe socket = tcp://127.0.0.1:5555 but server no recieve the message. *the output is* Traceback (most recent call last): File "ipzmq_server.py", line 112, in msg=server.recieve_reply() File "ipzmq_server.py", line 79, in recieve_reply msg=self._reply_socket.recv_json() File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json (zmq/_zmq.c:5242) File "/usr/lib/python2.6/json/__init__.py", line 307, in loads return _default_decoder.decode(s) File "/usr/lib/python2.6/json/decoder.py", line 319, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.6/json/decoder.py", line 338, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded have you some idea? maybe, do I need encode my message before send it? I have the last version of zeromq2 from official repo and pyzmq http://github.com/ellisonbg/pyzmq/, I am using python2.6 Brian said me that the problem is that I have outdated version of zeromq and pyzmq but I update zeromq and pyzmq and It is not working yet. thks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jorgen.stenarson at bostream.nu Mon May 24 12:56:32 2010 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Mon, 24 May 2010 18:56:32 +0200 Subject: [IPython-dev] unicode error issue #25 In-Reply-To: References: <4BF418F5.5000206@bostream.nu> <4BF433A4.50901@bostream.nu> Message-ID: <4BFAAFC0.4050803@bostream.nu> Brian Granger skrev 2010-05-19 21:16: > from IPython.core.iplib import InteractiveShell > ip = InteractiveShell() > ip.runlines('a=10') > I see some strangeness with this example when run at the commandline: >>> from IPython.core.iplib import InteractiveShell >>> ip = InteractiveShell() >>> ip.runlines(u'a=10') Traceback (most recent call last): File "", line 1, in NameError: name 'ip' is not defined If I run the example using python -i I can not access the ip object at the prompt. Is this expected? /J?rgen From ellisonbg at gmail.com Mon May 24 14:32:06 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 24 May 2010 11:32:06 -0700 Subject: [IPython-dev] unicode error issue #25 In-Reply-To: <4BFAAFC0.4050803@bostream.nu> References: <4BF418F5.5000206@bostream.nu> <4BF433A4.50901@bostream.nu> <4BFAAFC0.4050803@bostream.nu> Message-ID: J?rgen On Mon, May 24, 2010 at 9:56 AM, J?rgen Stenarson wrote: > Brian Granger skrev 2010-05-19 21:16: >> from IPython.core.iplib import InteractiveShell >> ip = InteractiveShell() >> ip.runlines('a=10') >> > > I see some strangeness with this example when run at the commandline: > > ?>>> from IPython.core.iplib import InteractiveShell > ?>>> ip = InteractiveShell() > ?>>> ip.runlines(u'a=10') > Traceback (most recent call last): > ? File "", line 1, in > NameError: name 'ip' is not defined This should work, and it once did. Could be a subtle bug related to how IPython plays with Python internals. Could you file a bug report? Brian > If I run the example using python -i I can not access the ip object at > the prompt. > > Is this expected? > > /J?rgen > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From ellisonbg at gmail.com Mon May 24 14:49:11 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Mon, 24 May 2010 11:49:11 -0700 Subject: [IPython-dev] pyzmq problems In-Reply-To: References: Message-ID: Omar, I am busy today but here are some ideas: * To get to know pyzmq better, I would open up 2-3 IPython sessions, import zmq on all of them and then start to create sockets and send messages between the different IPython sessions. This works really well and will give you a better idea of how the different socket types work, how the json stuff works, etc. This would be invaluable. * To simplify debugging, create a version of the code that has the absolute minimal code - no objects, config, etc. Just the raw zmq messaging stuff. I think if you do these 2 things, the error will be more obvious. Keep posting back to the list so I or Fernando can help with this though. Cheers, Brian 2010/5/22 Omar Andr?s Zapata Mesa : > hi all > I am working yet in zmq module to ipython, but I have the next problem using > json. > the code are in http://github.com/omazapa/ipython > into the dir? 
ipython/IPython/core/ > > I run my zmq server prototype > > the output is > > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python > ipzmq_server.py > reply socket= tcp://127.0.0.1:5555 > publisher socket = tcp://127.0.0.1:5556 > Server started. > > > in this moment I am waiting json`s message in reply socket. > > > then I run my client prototype > > the output is > > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python > ipzmq_client.py > request socket = tcp://127.0.0.1:5556 > subscribe socket = tcp://127.0.0.1:5555 > > > but server no recieve the message. > > the output is > > Traceback (most recent call last): > ? File "ipzmq_server.py", line 112, in > ??? msg=server.recieve_reply() > ? File "ipzmq_server.py", line 79, in recieve_reply > ??? msg=self._reply_socket.recv_json() > ? File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json (zmq/_zmq.c:5242) > ? File "/usr/lib/python2.6/json/__init__.py", line 307, in loads > ??? return _default_decoder.decode(s) > ? File "/usr/lib/python2.6/json/decoder.py", line 319, in decode > ??? obj, end = self.raw_decode(s, idx=_w(s, 0).end()) > ? File "/usr/lib/python2.6/json/decoder.py", line 338, in raw_decode > ??? raise ValueError("No JSON object could be decoded") > ValueError: No JSON object could be decoded > > > have you some idea? > maybe, do I need encode my message before send it? > I have the last version of zeromq2 from official repo and pyzmq > http://github.com/ellisonbg/pyzmq/, I am using python2.6 > > Brian? said me that the problem is that I have outdated version of zeromq > and pyzmq but I update zeromq and pyzmq and It is not working yet. > > > thks > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From jorgen.stenarson at bostream.nu Mon May 24 15:02:40 2010 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Mon, 24 May 2010 21:02:40 +0200 Subject: [IPython-dev] unicode error issue #25 In-Reply-To: References: <4BF418F5.5000206@bostream.nu> <4BF433A4.50901@bostream.nu> <4BFAAFC0.4050803@bostream.nu> Message-ID: <4BFACD50.6010306@bostream.nu> Brian Granger skrev 2010-05-24 20:32: > J?rgen > ... >> >> I see some strangeness with this example when run at the commandline: >> >> >>> from IPython.core.iplib import InteractiveShell >> >>> ip = InteractiveShell() >> >>> ip.runlines(u'a=10') >> Traceback (most recent call last): >> File "", line 1, in >> NameError: name 'ip' is not defined > > This should work, and it once did. Could be a subtle bug related to > how IPython plays with Python internals. Could you file a bug report? > Done #124 /J?rgen From andresete.chaos at gmail.com Mon May 24 18:37:07 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Mon, 24 May 2010 17:37:07 -0500 Subject: [IPython-dev] pyzmq problems In-Reply-To: References: Message-ID: I have now read the zmq doc from zmq`s website reference. I think we need to use for the kernel 3 ports for the communication system. Kernel description: http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py -> port 5555 have subscribe socket into kernel class to read publisher messages from frontend. 
self._subscribe_socket = self._Context.socket(zmq.SUB)
self._subscribe_socket.bind(self._subscribe_connection)
self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"")

Since, according to the reference, a SUB socket cannot send messages, we need to implement another socket, a publisher, to send messages to the frontend:

-> port 5556 has a socket which allows the kernel class to send messages to the frontend, so the subscribe and publisher sockets will communicate.

self._publisher_socket = self._Context.socket(zmq.PUB)
self._publisher_socket.bind(self._publisher__connection)

-> and port 5557 will be implemented for the request and publisher sockets, which are working very well.

Do you think this 3-socket model is a good idea? You can check it, because I've already implemented it and it's working fine.
http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py

O.

El 24 de mayo de 2010 13:49, Brian Granger escribió:
> Omar,
>
> I am busy today but here are some ideas:
>
> * To get to know pyzmq better, I would open up 2-3 IPython sessions,
> import zmq on all of them and then start to create sockets and send
> messages between the different IPython sessions. This works really
> well and will give you a better idea of how the different socket types
> work, how the json stuff works, etc. This would be invaluable.
>
> * To simplify debugging, create a version of the code that has the
> absolute minimal code - no objects, config, etc. Just the raw zmq
> messaging stuff.
>
> I think if you do these 2 things, the error will be more obvious.
> Keep posting back to the list so I or Fernando can help with this
> though.
>
> Cheers,
>
> Brian
>
> 2010/5/22 Omar Andrés Zapata Mesa :
> > hi all
> > I am working yet in zmq module to ipython, but I have the next problem using
> > json.
> > the code are in http://github.com/omazapa/ipython
> > into the dir ipython/IPython/core/
> >
> > I run my zmq server prototype
> >
> > the output is
> >
> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python
> > ipzmq_server.py
> > reply socket= tcp://127.0.0.1:5555
> > publisher socket = tcp://127.0.0.1:5556
> > Server started.
> >
> > in this moment I am waiting json`s message in reply socket.
> >
> > then I run my client prototype
> >
> > the output is
> >
> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python
> > ipzmq_client.py
> > request socket = tcp://127.0.0.1:5556
> > subscribe socket = tcp://127.0.0.1:5555
> >
> > but server no recieve the message.
> >
> > the output is
> >
> > Traceback (most recent call last):
> >   File "ipzmq_server.py", line 112, in <module>
> >     msg=server.recieve_reply()
> >   File "ipzmq_server.py", line 79, in recieve_reply
> >     msg=self._reply_socket.recv_json()
> >   File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json (zmq/_zmq.c:5242)
> >   File "/usr/lib/python2.6/json/__init__.py", line 307, in loads
> >     return _default_decoder.decode(s)
> >   File "/usr/lib/python2.6/json/decoder.py", line 319, in decode
> >     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
> >   File "/usr/lib/python2.6/json/decoder.py", line 338, in raw_decode
> >     raise ValueError("No JSON object could be decoded")
> > ValueError: No JSON object could be decoded
> >
> > have you some idea?
> > maybe, do I need encode my message before send it?
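To make Brian's "raw zmq messaging" suggestion concrete, here is a minimal, self-contained sketch; it is an illustration, not the code in the repository above, and the port number and message keys are simply reused from this thread. Both halves run in one process here for brevity; in real use they would be two separate sessions. As long as both ends use the _json socket methods, no manual encoding of the message is needed:

import zmq

ctx = zmq.Context()

# "Kernel" side: a REP socket bound to a port (5555, as in the thread).
rep = ctx.socket(zmq.REP)
rep.bind("tcp://127.0.0.1:5555")

# "Frontend" side: a REQ socket connected to the same endpoint.
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")

# send_json serializes the dict to JSON before sending ...
req.send_json({"msg_type": "execute_request", "code": "a = 10"})

# ... and recv_json parses it back into a dict on the other side, so the
# "No JSON object could be decoded" error does not appear as long as the
# sender used send_json (or sent a valid JSON string).
request = rep.recv_json()
rep.send_json({"msg_type": "execute_reply", "status": "ok"})

print(request)
print(req.recv_json())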
> > I have the last version of zeromq2 from official repo and pyzmq > > http://github.com/ellisonbg/pyzmq/, I am using python2.6 > > > > Brian said me that the problem is that I have outdated version of zeromq > > and pyzmq but I update zeromq and pyzmq and It is not working yet. > > > > > > thks > > > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > > > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Tue May 25 16:08:08 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Tue, 25 May 2010 13:08:08 -0700 Subject: [IPython-dev] pyzmq problems In-Reply-To: References: Message-ID: I guess I am not clear why the kernel needs to have the SUB socket. If the client needs to send a message to the kernel, can't it simply use the REQ/REP channel? But I do think the kernel needs the REP and PUB sockets. Brian 2010/5/24 Omar Andr?s Zapata Mesa : > I have now read the zmq doc from zmq`s website reference. > I think we need to use for the kernel 3 ports for the communication system. > Kernel description: > ?http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py > -> port 5555 have subscribe socket into kernel class to read publisher > messages from frontend. > > > > > > self._subscribe_socket = self._Context.socket(zmq.SUB) > > self._subscribe_socket.bind(self._subscribe_connection) > > self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"") > > > > > > > since the subscribe socket can not send messages "it was read in the > reference", we need to implement another socket called publisher to send > messages to frontend, then > > > -> port 5556 has a socket which allow kernel class to send messages to > frontend, then the subbscribe and publisher sockets will communicate. > > > > self._publisher_socket = self._Context.socket(zmq.PUB) > > self._publisher_socket.bind(self._publisher__connection) > > > -> and 5557 will be implemented to request and publisher sockets that are > working very well. > > do you think this 3-socket model is a good idea? You can check it because > I've already implemented it an and it's working fine. > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py > O. > El 24 de mayo de 2010 13:49, Brian Granger escribi?: >> >> Omar, >> >> I am busy today but here are some ideas: >> >> * To get to know pyzmq better, I would open up 2-3 IPython sessions, >> import zmq on all of them and then start to create sockets and send >> messages between the different IPython sessions. ?This works really >> well and will give you a better idea of how the different socket types >> work, how the json stuff works, etc. ?This would be invaluable. >> >> * To simplify debugging, create a version of the code that has the >> absolute minimal code - no objects, config, etc. ?Just the raw zmq >> messaging stuff. >> >> I think if you do these 2 things, the error will be more obvious. >> Keep posting back to the list so I or Fernando can help with this >> though. >> >> Cheers, >> >> Brian >> >> 2010/5/22 Omar Andr?s Zapata Mesa : >> > hi all >> > I am working yet in zmq module to ipython, but I have the next problem >> > using >> > json. >> > the code are in http://github.com/omazapa/ipython >> > into the dir? 
ipython/IPython/core/ >> > >> > I run my zmq server prototype >> > >> > the output is >> > >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python >> > ipzmq_server.py >> > reply socket= tcp://127.0.0.1:5555 >> > publisher socket = tcp://127.0.0.1:5556 >> > Server started. >> > >> > >> > in this moment I am waiting json`s message in reply socket. >> > >> > >> > then I run my client prototype >> > >> > the output is >> > >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python >> > ipzmq_client.py >> > request socket = tcp://127.0.0.1:5556 >> > subscribe socket = tcp://127.0.0.1:5555 >> > >> > >> > but server no recieve the message. >> > >> > the output is >> > >> > Traceback (most recent call last): >> > ? File "ipzmq_server.py", line 112, in >> > ??? msg=server.recieve_reply() >> > ? File "ipzmq_server.py", line 79, in recieve_reply >> > ??? msg=self._reply_socket.recv_json() >> > ? File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json >> > (zmq/_zmq.c:5242) >> > ? File "/usr/lib/python2.6/json/__init__.py", line 307, in loads >> > ??? return _default_decoder.decode(s) >> > ? File "/usr/lib/python2.6/json/decoder.py", line 319, in decode >> > ??? obj, end = self.raw_decode(s, idx=_w(s, 0).end()) >> > ? File "/usr/lib/python2.6/json/decoder.py", line 338, in raw_decode >> > ??? raise ValueError("No JSON object could be decoded") >> > ValueError: No JSON object could be decoded >> > >> > >> > have you some idea? >> > maybe, do I need encode my message before send it? >> > I have the last version of zeromq2 from official repo and pyzmq >> > http://github.com/ellisonbg/pyzmq/, I am using python2.6 >> > >> > Brian? said me that the problem is that I have outdated version of >> > zeromq >> > and pyzmq but I update zeromq and pyzmq and It is not working yet. >> > >> > >> > thks >> > >> > _______________________________________________ >> > IPython-dev mailing list >> > IPython-dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> > >> > >> >> >> >> -- >> Brian E. Granger, Ph.D. >> Assistant Professor of Physics >> Cal Poly State University, San Luis Obispo >> bgranger at calpoly.edu >> ellisonbg at gmail.com > > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From andresete.chaos at gmail.com Tue May 25 17:58:28 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Tue, 25 May 2010 16:58:28 -0500 Subject: [IPython-dev] pyzmq problems In-Reply-To: References: Message-ID: hi. the idea is to have 2 types of channels for different types of messages. As is specified in the next file: http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst Another thing to discuss is the different types of messages that I don't find correct or clear on the previous link. I mean, I think there are redundancies in that proposal such as the pyin and execute request types of messages as a variable assignation in pub/sub and req/rep. 
What I suggest is the following: *REQ/REP:* * Request: # msg_type = 'execute_request' content = { code : 'a = 10', } Reply: # msg_type = 'execute_reply' content = { 'status' : 'ok' OR 'error' OR 'abort' # data depends on status value 'message':'error_message' or 'output_message' } * * * *PUB/SUB:* * * *Complete:* # msg_type = 'complete_request' content = { text : 'a.f', # complete on this line : 'print a.f' # full line } # msg_type = 'complete_reply' content = { matches : ['a.foo', 'a.bar'] } *Control:* # msg_type = 'heartbeat' content = { } What do you think about this? Do you think that a sole req/rep channel is enough? Best O. El 25 de mayo de 2010 15:08, Brian Granger escribi?: > I guess I am not clear why the kernel needs to have the SUB socket. > If the client needs to send a message to the kernel, can't it simply > use the REQ/REP channel? But I do think the kernel needs the REP and > PUB sockets. > > Brian > > 2010/5/24 Omar Andr?s Zapata Mesa : > > I have now read the zmq doc from zmq`s website reference. > > I think we need to use for the kernel 3 ports for the communication > system. > > Kernel description: > > > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py > > -> port 5555 have subscribe socket into kernel class to read publisher > > messages from frontend. > > > > > > > > > > > > self._subscribe_socket = self._Context.socket(zmq.SUB) > > > > self._subscribe_socket.bind(self._subscribe_connection) > > > > self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"") > > > > > > > > > > > > > > since the subscribe socket can not send messages "it was read in the > > reference", we need to implement another socket called publisher to send > > messages to frontend, then > > > > > > -> port 5556 has a socket which allow kernel class to send messages to > > frontend, then the subbscribe and publisher sockets will communicate. > > > > > > > > self._publisher_socket = self._Context.socket(zmq.PUB) > > > > self._publisher_socket.bind(self._publisher__connection) > > > > > > -> and 5557 will be implemented to request and publisher sockets that are > > working very well. > > > > do you think this 3-socket model is a good idea? You can check it because > > I've already implemented it an and it's working fine. > > > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py > > O. > > El 24 de mayo de 2010 13:49, Brian Granger > escribi?: > >> > >> Omar, > >> > >> I am busy today but here are some ideas: > >> > >> * To get to know pyzmq better, I would open up 2-3 IPython sessions, > >> import zmq on all of them and then start to create sockets and send > >> messages between the different IPython sessions. This works really > >> well and will give you a better idea of how the different socket types > >> work, how the json stuff works, etc. This would be invaluable. > >> > >> * To simplify debugging, create a version of the code that has the > >> absolute minimal code - no objects, config, etc. Just the raw zmq > >> messaging stuff. > >> > >> I think if you do these 2 things, the error will be more obvious. > >> Keep posting back to the list so I or Fernando can help with this > >> though. > >> > >> Cheers, > >> > >> Brian > >> > >> 2010/5/22 Omar Andr?s Zapata Mesa : > >> > hi all > >> > I am working yet in zmq module to ipython, but I have the next problem > >> > using > >> > json. 
> >> > the code are in http://github.com/omazapa/ipython > >> > into the dir ipython/IPython/core/ > >> > > >> > I run my zmq server prototype > >> > > >> > the output is > >> > > >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python > >> > ipzmq_server.py > >> > reply socket= tcp://127.0.0.1:5555 > >> > publisher socket = tcp://127.0.0.1:5556 > >> > Server started. > >> > > >> > > >> > in this moment I am waiting json`s message in reply socket. > >> > > >> > > >> > then I run my client prototype > >> > > >> > the output is > >> > > >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python > >> > ipzmq_client.py > >> > request socket = tcp://127.0.0.1:5556 > >> > subscribe socket = tcp://127.0.0.1:5555 > >> > > >> > > >> > but server no recieve the message. > >> > > >> > the output is > >> > > >> > Traceback (most recent call last): > >> > File "ipzmq_server.py", line 112, in > >> > msg=server.recieve_reply() > >> > File "ipzmq_server.py", line 79, in recieve_reply > >> > msg=self._reply_socket.recv_json() > >> > File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json > >> > (zmq/_zmq.c:5242) > >> > File "/usr/lib/python2.6/json/__init__.py", line 307, in loads > >> > return _default_decoder.decode(s) > >> > File "/usr/lib/python2.6/json/decoder.py", line 319, in decode > >> > obj, end = self.raw_decode(s, idx=_w(s, 0).end()) > >> > File "/usr/lib/python2.6/json/decoder.py", line 338, in raw_decode > >> > raise ValueError("No JSON object could be decoded") > >> > ValueError: No JSON object could be decoded > >> > > >> > > >> > have you some idea? > >> > maybe, do I need encode my message before send it? > >> > I have the last version of zeromq2 from official repo and pyzmq > >> > http://github.com/ellisonbg/pyzmq/, I am using python2.6 > >> > > >> > Brian said me that the problem is that I have outdated version of > >> > zeromq > >> > and pyzmq but I update zeromq and pyzmq and It is not working yet. > >> > > >> > > >> > thks > >> > > >> > _______________________________________________ > >> > IPython-dev mailing list > >> > IPython-dev at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/ipython-dev > >> > > >> > > >> > >> > >> > >> -- > >> Brian E. Granger, Ph.D. > >> Assistant Professor of Physics > >> Cal Poly State University, San Luis Obispo > >> bgranger at calpoly.edu > >> ellisonbg at gmail.com > > > > > > > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andresete.chaos at gmail.com Tue May 25 20:40:06 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Tue, 25 May 2010 19:40:06 -0500 Subject: [IPython-dev] pyzmq problems In-Reply-To: References: Message-ID: Hi, then, are not pub/sub sockets needed? because I think that maybe with pub/sub you can to synchronize the frontends, sending messages that let know a kernel`s user_ns, heartbeat or tab_completion, because request and reply maybe have some longs processes in queue. O. El 25 de mayo de 2010 17:02, Brian Granger escribi?: > 2010/5/25 Omar Andr?s Zapata Mesa : > > hi. > > the idea is to have 2 types of channels for different types of messages. > > As is specified in the next file: > > > http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst > > This document is very out of date. 
We wrote it before writing the > prototype here: > > http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ > > But never updated the design doc. At this point, I would consider our > prototype as the design doc. I don't see a need for the client to > have a PUB socket that the kernel is SUB'd to. > > > Another thing to discuss is the different types of messages that I don't > > find correct or clear on the previous link. > > I mean, I think there are redundancies in that proposal such as the pyin > and > > execute request types of messages as a variable assignation in pub/sub > and > > req/rep. > > What I suggest is the following: > > REQ/REP: > > Request: > > # msg_type = 'execute_request' content = { > > code : 'a = 10', > > } > > Reply: > > # msg_type = 'execute_reply' content = { > > 'status' : 'ok' OR 'error' OR 'abort' # data depends on status value > > 'message':'error_message' or 'output_message' > > } > > PUB/SUB: > > Complete: > > # msg_type = 'complete_request' content = { > > text : 'a.f', # complete on this line : 'print a.f' # full line > > } > > # msg_type = 'complete_reply' content = { > > matches : ['a.foo', 'a.bar'] > > } > > Control: > > # msg_type = 'heartbeat' content = { > > } > > For now, please have a close look at the prototype in the link above. > > Cheers, > > Brian > > > What do you think about this? > > Do you think that a sole req/rep channel is enough? > > Best > > O. > > El 25 de mayo de 2010 15:08, Brian Granger > escribi?: > >> > >> I guess I am not clear why the kernel needs to have the SUB socket. > >> If the client needs to send a message to the kernel, can't it simply > >> use the REQ/REP channel? But I do think the kernel needs the REP and > >> PUB sockets. > >> > >> Brian > >> > >> 2010/5/24 Omar Andr?s Zapata Mesa : > >> > I have now read the zmq doc from zmq`s website reference. > >> > I think we need to use for the kernel 3 ports for the communication > >> > system. > >> > Kernel description: > >> > > >> > > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py > >> > -> port 5555 have subscribe socket into kernel class to read publisher > >> > messages from frontend. > >> > > >> > > >> > > >> > > >> > > >> > self._subscribe_socket = self._Context.socket(zmq.SUB) > >> > > >> > self._subscribe_socket.bind(self._subscribe_connection) > >> > > >> > self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"") > >> > > >> > > >> > > >> > > >> > > >> > > >> > since the subscribe socket can not send messages "it was read in the > >> > reference", we need to implement another socket called publisher to > send > >> > messages to frontend, then > >> > > >> > > >> > -> port 5556 has a socket which allow kernel class to send messages to > >> > frontend, then the subbscribe and publisher sockets will communicate. > >> > > >> > > >> > > >> > self._publisher_socket = self._Context.socket(zmq.PUB) > >> > > >> > self._publisher_socket.bind(self._publisher__connection) > >> > > >> > > >> > -> and 5557 will be implemented to request and publisher sockets that > >> > are > >> > working very well. > >> > > >> > do you think this 3-socket model is a good idea? You can check it > >> > because > >> > I've already implemented it an and it's working fine. > >> > > >> > > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py > >> > O. 
> >> > El 24 de mayo de 2010 13:49, Brian Granger > >> > escribi?: > >> >> > >> >> Omar, > >> >> > >> >> I am busy today but here are some ideas: > >> >> > >> >> * To get to know pyzmq better, I would open up 2-3 IPython sessions, > >> >> import zmq on all of them and then start to create sockets and send > >> >> messages between the different IPython sessions. This works really > >> >> well and will give you a better idea of how the different socket > types > >> >> work, how the json stuff works, etc. This would be invaluable. > >> >> > >> >> * To simplify debugging, create a version of the code that has the > >> >> absolute minimal code - no objects, config, etc. Just the raw zmq > >> >> messaging stuff. > >> >> > >> >> I think if you do these 2 things, the error will be more obvious. > >> >> Keep posting back to the list so I or Fernando can help with this > >> >> though. > >> >> > >> >> Cheers, > >> >> > >> >> Brian > >> >> > >> >> 2010/5/22 Omar Andr?s Zapata Mesa : > >> >> > hi all > >> >> > I am working yet in zmq module to ipython, but I have the next > >> >> > problem > >> >> > using > >> >> > json. > >> >> > the code are in http://github.com/omazapa/ipython > >> >> > into the dir ipython/IPython/core/ > >> >> > > >> >> > I run my zmq server prototype > >> >> > > >> >> > the output is > >> >> > > >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python > >> >> > ipzmq_server.py > >> >> > reply socket= tcp://127.0.0.1:5555 > >> >> > publisher socket = tcp://127.0.0.1:5556 > >> >> > Server started. > >> >> > > >> >> > > >> >> > in this moment I am waiting json`s message in reply socket. > >> >> > > >> >> > > >> >> > then I run my client prototype > >> >> > > >> >> > the output is > >> >> > > >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python > >> >> > ipzmq_client.py > >> >> > request socket = tcp://127.0.0.1:5556 > >> >> > subscribe socket = tcp://127.0.0.1:5555 > >> >> > > >> >> > > >> >> > but server no recieve the message. > >> >> > > >> >> > the output is > >> >> > > >> >> > Traceback (most recent call last): > >> >> > File "ipzmq_server.py", line 112, in > >> >> > msg=server.recieve_reply() > >> >> > File "ipzmq_server.py", line 79, in recieve_reply > >> >> > msg=self._reply_socket.recv_json() > >> >> > File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json > >> >> > (zmq/_zmq.c:5242) > >> >> > File "/usr/lib/python2.6/json/__init__.py", line 307, in loads > >> >> > return _default_decoder.decode(s) > >> >> > File "/usr/lib/python2.6/json/decoder.py", line 319, in decode > >> >> > obj, end = self.raw_decode(s, idx=_w(s, 0).end()) > >> >> > File "/usr/lib/python2.6/json/decoder.py", line 338, in > raw_decode > >> >> > raise ValueError("No JSON object could be decoded") > >> >> > ValueError: No JSON object could be decoded > >> >> > > >> >> > > >> >> > have you some idea? > >> >> > maybe, do I need encode my message before send it? > >> >> > I have the last version of zeromq2 from official repo and pyzmq > >> >> > http://github.com/ellisonbg/pyzmq/, I am using python2.6 > >> >> > > >> >> > Brian said me that the problem is that I have outdated version of > >> >> > zeromq > >> >> > and pyzmq but I update zeromq and pyzmq and It is not working yet. 
> >> >> > > >> >> > > >> >> > thks > >> >> > > >> >> > _______________________________________________ > >> >> > IPython-dev mailing list > >> >> > IPython-dev at scipy.org > >> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev > >> >> > > >> >> > > >> >> > >> >> > >> >> > >> >> -- > >> >> Brian E. Granger, Ph.D. > >> >> Assistant Professor of Physics > >> >> Cal Poly State University, San Luis Obispo > >> >> bgranger at calpoly.edu > >> >> ellisonbg at gmail.com > >> > > >> > > >> > >> > >> > >> -- > >> Brian E. Granger, Ph.D. > >> Assistant Professor of Physics > >> Cal Poly State University, San Luis Obispo > >> bgranger at calpoly.edu > >> ellisonbg at gmail.com > > > > > > > > -- > Brian E. Granger, Ph.D. > Assistant Professor of Physics > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu > ellisonbg at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Tue May 25 21:56:15 2010 From: ellisonbg at gmail.com (Brian Granger) Date: Tue, 25 May 2010 18:56:15 -0700 Subject: [IPython-dev] pyzmq problems In-Reply-To: References: Message-ID: 2010/5/25 Omar Andr?s Zapata Mesa : > Hi, > then, > are not?pub/sub sockets ?needed? We definitely need one PUB/SUB channel. This channel will have the PUB socket in the kernel and the SUB socket in the frontends. This will be used by the kernel to send stdout, stderr, pyin, pyout, etc to the frontends. But your examples also has a 2nd PUB/SUB channel going the other direction from the frontend to the kernel. What are you planning on sending over that channel? > because I think that maybe with pub/sub you can to synchronize the > frontends, sending messages that let know a kernel`s user_ns, heartbeat or > tab_completion, because request and reply maybe have some longs processes in > queue. Possibly, can you explain more of the these usage cases. So far, we have done tab completion over the REQ/REP channel. The heartbeat stuff would probably be a separate socket altogether. Brian > O. > El 25 de mayo de 2010 17:02, Brian Granger escribi?: >> >> 2010/5/25 Omar Andr?s Zapata Mesa : >> > hi. >> > the idea is to have 2 types of channels for different types of messages. >> > As is specified in the next file: >> > >> > http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst >> >> This document is very out of date. ?We wrote it before writing the >> prototype here: >> >> http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ >> >> But never updated the design doc. ?At this point, I would consider our >> prototype as the design doc. ?I don't see a need for the client to >> have a PUB socket that the kernel is SUB'd to. >> >> > Another thing to discuss is the different types of messages that I don't >> > find correct or clear on the previous link. >> > I mean, I think there are redundancies in that proposal such as the pyin >> > and >> > execute request types of messages as a variable assignation in pub/sub >> > and >> > req/rep. 
>> > What I suggest is the following: >> > REQ/REP: >> > Request: >> > # msg_type = 'execute_request' content = { >> > code : 'a = 10', >> > } >> > Reply: >> > # msg_type = 'execute_reply' content = { >> > 'status' : 'ok' OR 'error' OR 'abort' # data depends on status value >> > 'message':'error_message' or 'output_message' >> > } >> > PUB/SUB: >> > Complete: >> > # msg_type = 'complete_request' content = { >> > text : 'a.f', # complete on this line : 'print a.f' # full line >> > } >> > # msg_type = 'complete_reply' content = { >> > matches : ['a.foo', 'a.bar'] >> > } >> > Control: >> > # msg_type = 'heartbeat' content = { >> > } >> >> For now, please have a close look at the prototype in the link above. >> >> Cheers, >> >> Brian >> >> > What do you think about this? >> > Do you think that a sole req/rep channel is enough? >> > Best >> > O. >> > El 25 de mayo de 2010 15:08, Brian Granger >> > escribi?: >> >> >> >> I guess I am not clear why the kernel needs to have the SUB socket. >> >> If the client needs to send a message to the kernel, can't it simply >> >> use the REQ/REP channel? ?But I do think the kernel needs the REP and >> >> PUB sockets. >> >> >> >> Brian >> >> >> >> 2010/5/24 Omar Andr?s Zapata Mesa : >> >> > I have now read the zmq doc from zmq`s website reference. >> >> > I think we need to use for the kernel 3 ports for the communication >> >> > system. >> >> > Kernel description: >> >> > >> >> > >> >> > ?http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py >> >> > -> port 5555 have subscribe socket into kernel class to read >> >> > publisher >> >> > messages from frontend. >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > self._subscribe_socket = self._Context.socket(zmq.SUB) >> >> > >> >> > self._subscribe_socket.bind(self._subscribe_connection) >> >> > >> >> > self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"") >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > since the subscribe socket can not send messages "it was read in the >> >> > reference", we need to implement another socket called publisher to >> >> > send >> >> > messages to frontend, then >> >> > >> >> > >> >> > -> port 5556 has a socket which allow kernel class to send messages >> >> > to >> >> > frontend, then the subbscribe and publisher sockets will communicate. >> >> > >> >> > >> >> > >> >> > self._publisher_socket = self._Context.socket(zmq.PUB) >> >> > >> >> > self._publisher_socket.bind(self._publisher__connection) >> >> > >> >> > >> >> > -> and 5557 will be implemented to request and publisher sockets that >> >> > are >> >> > working very well. >> >> > >> >> > do you think this 3-socket model is a good idea? You can check it >> >> > because >> >> > I've already implemented it an and it's working fine. >> >> > >> >> > >> >> > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py >> >> > O. >> >> > El 24 de mayo de 2010 13:49, Brian Granger >> >> > escribi?: >> >> >> >> >> >> Omar, >> >> >> >> >> >> I am busy today but here are some ideas: >> >> >> >> >> >> * To get to know pyzmq better, I would open up 2-3 IPython sessions, >> >> >> import zmq on all of them and then start to create sockets and send >> >> >> messages between the different IPython sessions. ?This works really >> >> >> well and will give you a better idea of how the different socket >> >> >> types >> >> >> work, how the json stuff works, etc. ?This would be invaluable. 
>> >> >> >> >> >> * To simplify debugging, create a version of the code that has the >> >> >> absolute minimal code - no objects, config, etc. ?Just the raw zmq >> >> >> messaging stuff. >> >> >> >> >> >> I think if you do these 2 things, the error will be more obvious. >> >> >> Keep posting back to the list so I or Fernando can help with this >> >> >> though. >> >> >> >> >> >> Cheers, >> >> >> >> >> >> Brian >> >> >> >> >> >> 2010/5/22 Omar Andr?s Zapata Mesa : >> >> >> > hi all >> >> >> > I am working yet in zmq module to ipython, but I have the next >> >> >> > problem >> >> >> > using >> >> >> > json. >> >> >> > the code are in http://github.com/omazapa/ipython >> >> >> > into the dir? ipython/IPython/core/ >> >> >> > >> >> >> > I run my zmq server prototype >> >> >> > >> >> >> > the output is >> >> >> > >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python >> >> >> > ipzmq_server.py >> >> >> > reply socket= tcp://127.0.0.1:5555 >> >> >> > publisher socket = tcp://127.0.0.1:5556 >> >> >> > Server started. >> >> >> > >> >> >> > >> >> >> > in this moment I am waiting json`s message in reply socket. >> >> >> > >> >> >> > >> >> >> > then I run my client prototype >> >> >> > >> >> >> > the output is >> >> >> > >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ python >> >> >> > ipzmq_client.py >> >> >> > request socket = tcp://127.0.0.1:5556 >> >> >> > subscribe socket = tcp://127.0.0.1:5555 >> >> >> > >> >> >> > >> >> >> > but server no recieve the message. >> >> >> > >> >> >> > the output is >> >> >> > >> >> >> > Traceback (most recent call last): >> >> >> > ? File "ipzmq_server.py", line 112, in >> >> >> > ??? msg=server.recieve_reply() >> >> >> > ? File "ipzmq_server.py", line 79, in recieve_reply >> >> >> > ??? msg=self._reply_socket.recv_json() >> >> >> > ? File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json >> >> >> > (zmq/_zmq.c:5242) >> >> >> > ? File "/usr/lib/python2.6/json/__init__.py", line 307, in loads >> >> >> > ??? return _default_decoder.decode(s) >> >> >> > ? File "/usr/lib/python2.6/json/decoder.py", line 319, in decode >> >> >> > ??? obj, end = self.raw_decode(s, idx=_w(s, 0).end()) >> >> >> > ? File "/usr/lib/python2.6/json/decoder.py", line 338, in >> >> >> > raw_decode >> >> >> > ??? raise ValueError("No JSON object could be decoded") >> >> >> > ValueError: No JSON object could be decoded >> >> >> > >> >> >> > >> >> >> > have you some idea? >> >> >> > maybe, do I need encode my message before send it? >> >> >> > I have the last version of zeromq2 from official repo and pyzmq >> >> >> > http://github.com/ellisonbg/pyzmq/, I am using python2.6 >> >> >> > >> >> >> > Brian? said me that the problem is that I have outdated version of >> >> >> > zeromq >> >> >> > and pyzmq but I update zeromq and pyzmq and It is not working yet. >> >> >> > >> >> >> > >> >> >> > thks >> >> >> > >> >> >> > _______________________________________________ >> >> >> > IPython-dev mailing list >> >> >> > IPython-dev at scipy.org >> >> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> >> > >> >> >> > >> >> >> >> >> >> >> >> >> >> >> >> -- >> >> >> Brian E. Granger, Ph.D. >> >> >> Assistant Professor of Physics >> >> >> Cal Poly State University, San Luis Obispo >> >> >> bgranger at calpoly.edu >> >> >> ellisonbg at gmail.com >> >> > >> >> > >> >> >> >> >> >> >> >> -- >> >> Brian E. Granger, Ph.D. 
>> >> Assistant Professor of Physics >> >> Cal Poly State University, San Luis Obispo >> >> bgranger at calpoly.edu >> >> ellisonbg at gmail.com >> > >> > >> >> >> >> -- >> Brian E. Granger, Ph.D. >> Assistant Professor of Physics >> Cal Poly State University, San Luis Obispo >> bgranger at calpoly.edu >> ellisonbg at gmail.com > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Brian E. Granger, Ph.D. Assistant Professor of Physics Cal Poly State University, San Luis Obispo bgranger at calpoly.edu ellisonbg at gmail.com From andresete.chaos at gmail.com Tue May 25 23:14:39 2010 From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=) Date: Tue, 25 May 2010 22:14:39 -0500 Subject: [IPython-dev] pyzmq problems In-Reply-To: References: Message-ID: Ok I get it! but. can you send messages just on request socket from frontend? because I can not send messages with zmq.SUB socket, and I can not recv with PUB socket. that information appears in the zmq APIs. http://api.zeromq.org/zmq_socket.html and I try do this in pyzmq and the message is ZMQError: Operation not supported is it the idea? O. El 25 de mayo de 2010 20:56, Brian Granger escribi?: > 2010/5/25 Omar Andr?s Zapata Mesa : > > Hi, > > then, > > are not pub/sub sockets needed? > > We definitely need one PUB/SUB channel. This channel will have the > PUB socket in the kernel and the SUB socket in the frontends. This > will be used by the kernel to send stdout, stderr, pyin, pyout, etc to > the frontends. But your examples also has a 2nd PUB/SUB channel going > the other direction from the frontend to the kernel. What are you > planning on sending over that channel? > > > because I think that maybe with pub/sub you can to synchronize the > > frontends, sending messages that let know a kernel`s user_ns, heartbeat > or > > tab_completion, because request and reply maybe have some longs processes > in > > queue. > > Possibly, can you explain more of the these usage cases. So far, we > have done tab completion over the REQ/REP channel. The heartbeat > stuff would probably be a separate socket altogether. > > Brian > > > O. > > El 25 de mayo de 2010 17:02, Brian Granger > escribi?: > >> > >> 2010/5/25 Omar Andr?s Zapata Mesa : > >> > hi. > >> > the idea is to have 2 types of channels for different types of > messages. > >> > As is specified in the next file: > >> > > >> > > http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst > >> > >> This document is very out of date. We wrote it before writing the > >> prototype here: > >> > >> http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ > >> > >> But never updated the design doc. At this point, I would consider our > >> prototype as the design doc. I don't see a need for the client to > >> have a PUB socket that the kernel is SUB'd to. > >> > >> > Another thing to discuss is the different types of messages that I > don't > >> > find correct or clear on the previous link. > >> > I mean, I think there are redundancies in that proposal such as the > pyin > >> > and > >> > execute request types of messages as a variable assignation in pub/sub > >> > and > >> > req/rep. 
> >> > What I suggest is the following: > >> > REQ/REP: > >> > Request: > >> > # msg_type = 'execute_request' content = { > >> > code : 'a = 10', > >> > } > >> > Reply: > >> > # msg_type = 'execute_reply' content = { > >> > 'status' : 'ok' OR 'error' OR 'abort' # data depends on status value > >> > 'message':'error_message' or 'output_message' > >> > } > >> > PUB/SUB: > >> > Complete: > >> > # msg_type = 'complete_request' content = { > >> > text : 'a.f', # complete on this line : 'print a.f' # full line > >> > } > >> > # msg_type = 'complete_reply' content = { > >> > matches : ['a.foo', 'a.bar'] > >> > } > >> > Control: > >> > # msg_type = 'heartbeat' content = { > >> > } > >> > >> For now, please have a close look at the prototype in the link above. > >> > >> Cheers, > >> > >> Brian > >> > >> > What do you think about this? > >> > Do you think that a sole req/rep channel is enough? > >> > Best > >> > O. > >> > El 25 de mayo de 2010 15:08, Brian Granger > >> > escribi?: > >> >> > >> >> I guess I am not clear why the kernel needs to have the SUB socket. > >> >> If the client needs to send a message to the kernel, can't it simply > >> >> use the REQ/REP channel? But I do think the kernel needs the REP and > >> >> PUB sockets. > >> >> > >> >> Brian > >> >> > >> >> 2010/5/24 Omar Andr?s Zapata Mesa : > >> >> > I have now read the zmq doc from zmq`s website reference. > >> >> > I think we need to use for the kernel 3 ports for the communication > >> >> > system. > >> >> > Kernel description: > >> >> > > >> >> > > >> >> > > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py > >> >> > -> port 5555 have subscribe socket into kernel class to read > >> >> > publisher > >> >> > messages from frontend. > >> >> > > >> >> > > >> >> > > >> >> > > >> >> > > >> >> > self._subscribe_socket = self._Context.socket(zmq.SUB) > >> >> > > >> >> > self._subscribe_socket.bind(self._subscribe_connection) > >> >> > > >> >> > self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"") > >> >> > > >> >> > > >> >> > > >> >> > > >> >> > > >> >> > > >> >> > since the subscribe socket can not send messages "it was read in > the > >> >> > reference", we need to implement another socket called publisher to > >> >> > send > >> >> > messages to frontend, then > >> >> > > >> >> > > >> >> > -> port 5556 has a socket which allow kernel class to send messages > >> >> > to > >> >> > frontend, then the subbscribe and publisher sockets will > communicate. > >> >> > > >> >> > > >> >> > > >> >> > self._publisher_socket = self._Context.socket(zmq.PUB) > >> >> > > >> >> > self._publisher_socket.bind(self._publisher__connection) > >> >> > > >> >> > > >> >> > -> and 5557 will be implemented to request and publisher sockets > that > >> >> > are > >> >> > working very well. > >> >> > > >> >> > do you think this 3-socket model is a good idea? You can check it > >> >> > because > >> >> > I've already implemented it an and it's working fine. > >> >> > > >> >> > > >> >> > > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py > >> >> > O. > >> >> > El 24 de mayo de 2010 13:49, Brian Granger > >> >> > escribi?: > >> >> >> > >> >> >> Omar, > >> >> >> > >> >> >> I am busy today but here are some ideas: > >> >> >> > >> >> >> * To get to know pyzmq better, I would open up 2-3 IPython > sessions, > >> >> >> import zmq on all of them and then start to create sockets and > send > >> >> >> messages between the different IPython sessions. 
This works > really > >> >> >> well and will give you a better idea of how the different socket > >> >> >> types > >> >> >> work, how the json stuff works, etc. This would be invaluable. > >> >> >> > >> >> >> * To simplify debugging, create a version of the code that has the > >> >> >> absolute minimal code - no objects, config, etc. Just the raw zmq > >> >> >> messaging stuff. > >> >> >> > >> >> >> I think if you do these 2 things, the error will be more obvious. > >> >> >> Keep posting back to the list so I or Fernando can help with this > >> >> >> though. > >> >> >> > >> >> >> Cheers, > >> >> >> > >> >> >> Brian > >> >> >> > >> >> >> 2010/5/22 Omar Andr?s Zapata Mesa : > >> >> >> > hi all > >> >> >> > I am working yet in zmq module to ipython, but I have the next > >> >> >> > problem > >> >> >> > using > >> >> >> > json. > >> >> >> > the code are in http://github.com/omazapa/ipython > >> >> >> > into the dir ipython/IPython/core/ > >> >> >> > > >> >> >> > I run my zmq server prototype > >> >> >> > > >> >> >> > the output is > >> >> >> > > >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ > python > >> >> >> > ipzmq_server.py > >> >> >> > reply socket= tcp://127.0.0.1:5555 > >> >> >> > publisher socket = tcp://127.0.0.1:5556 > >> >> >> > Server started. > >> >> >> > > >> >> >> > > >> >> >> > in this moment I am waiting json`s message in reply socket. > >> >> >> > > >> >> >> > > >> >> >> > then I run my client prototype > >> >> >> > > >> >> >> > the output is > >> >> >> > > >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ > python > >> >> >> > ipzmq_client.py > >> >> >> > request socket = tcp://127.0.0.1:5556 > >> >> >> > subscribe socket = tcp://127.0.0.1:5555 > >> >> >> > > >> >> >> > > >> >> >> > but server no recieve the message. > >> >> >> > > >> >> >> > the output is > >> >> >> > > >> >> >> > Traceback (most recent call last): > >> >> >> > File "ipzmq_server.py", line 112, in > >> >> >> > msg=server.recieve_reply() > >> >> >> > File "ipzmq_server.py", line 79, in recieve_reply > >> >> >> > msg=self._reply_socket.recv_json() > >> >> >> > File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json > >> >> >> > (zmq/_zmq.c:5242) > >> >> >> > File "/usr/lib/python2.6/json/__init__.py", line 307, in loads > >> >> >> > return _default_decoder.decode(s) > >> >> >> > File "/usr/lib/python2.6/json/decoder.py", line 319, in decode > >> >> >> > obj, end = self.raw_decode(s, idx=_w(s, 0).end()) > >> >> >> > File "/usr/lib/python2.6/json/decoder.py", line 338, in > >> >> >> > raw_decode > >> >> >> > raise ValueError("No JSON object could be decoded") > >> >> >> > ValueError: No JSON object could be decoded > >> >> >> > > >> >> >> > > >> >> >> > have you some idea? > >> >> >> > maybe, do I need encode my message before send it? > >> >> >> > I have the last version of zeromq2 from official repo and pyzmq > >> >> >> > http://github.com/ellisonbg/pyzmq/, I am using python2.6 > >> >> >> > > >> >> >> > Brian said me that the problem is that I have outdated version > of > >> >> >> > zeromq > >> >> >> > and pyzmq but I update zeromq and pyzmq and It is not working > yet. > >> >> >> > > >> >> >> > > >> >> >> > thks > >> >> >> > > >> >> >> > _______________________________________________ > >> >> >> > IPython-dev mailing list > >> >> >> > IPython-dev at scipy.org > >> >> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev > >> >> >> > > >> >> >> > > >> >> >> > >> >> >> > >> >> >> > >> >> >> -- > >> >> >> Brian E. Granger, Ph.D. 
> >> >> >> Assistant Professor of Physics
> >> >> >> Cal Poly State University, San Luis Obispo
> >> >> >> bgranger at calpoly.edu
> >> >> >> ellisonbg at gmail.com
> >> >> >
> >> >> >
> >> >>
> >> >> --
> >> >> Brian E. Granger, Ph.D.
> >> >> Assistant Professor of Physics
> >> >> Cal Poly State University, San Luis Obispo
> >> >> bgranger at calpoly.edu
> >> >> ellisonbg at gmail.com
> >> >
> >> >
> >> > _______________________________________________
> >> > IPython-dev mailing list
> >> > IPython-dev at scipy.org
> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev
> >> >
> >> >
> >>
> >> --
> >> Brian E. Granger, Ph.D.
> >> Assistant Professor of Physics
> >> Cal Poly State University, San Luis Obispo
> >> bgranger at calpoly.edu
> >> ellisonbg at gmail.com
> >
> >
> > _______________________________________________
> > IPython-dev mailing list
> > IPython-dev at scipy.org
> > http://mail.scipy.org/mailman/listinfo/ipython-dev
> >
> >
> > --
> > Brian E. Granger, Ph.D.
> > Assistant Professor of Physics
> > Cal Poly State University, San Luis Obispo
> > bgranger at calpoly.edu
> > ellisonbg at gmail.com
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ellisonbg at gmail.com  Wed May 26 17:16:28 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 26 May 2010 14:16:28 -0700
Subject: [IPython-dev] pyzmq problems
In-Reply-To: 
References: 
Message-ID: 

2010/5/25 Omar Andrés Zapata Mesa :
> Ok, I get it.
> But can you send messages just on the request socket from the frontend?

Yes, but I think that is all we need.

> because I can not send messages with a zmq.SUB socket, and I can not recv
> with a PUB socket.
> That information appears in the zmq APIs.

The REQ/REP channel is for cases where the frontend needs to make a
request that the kernel will reply to immediately, and there is only 1
reply.

The PUB/SUB channel allows the kernel to send messages to the frontend
where there wasn't a corresponding request.  A perfect example is
stdout, which can happen at any moment, without a corresponding
request from the frontend.  In this sense, the PUB/SUB channel is for
out-of-band side effects.

Again, I would look carefully at the example in the pyzmq source tree.

Cheers,

Brian

> http://api.zeromq.org/zmq_socket.html
> and when I try to do this in pyzmq the message is ZMQError: Operation not
> supported
> is that the idea?
> O.
>
> On 25 May 2010 at 20:56, Brian Granger wrote:
>>
>> 2010/5/25 Omar Andrés Zapata Mesa :
>> > Hi,
>> > then,
>> > are pub/sub sockets not needed?
>>
>> We definitely need one PUB/SUB channel.  This channel will have the
>> PUB socket in the kernel and the SUB socket in the frontends.  This
>> will be used by the kernel to send stdout, stderr, pyin, pyout, etc. to
>> the frontends.  But your examples also have a 2nd PUB/SUB channel going
>> the other direction, from the frontend to the kernel.  What are you
>> planning on sending over that channel?
>>
>> > because I think that maybe with pub/sub you can synchronize the
>> > frontends, sending messages that share the kernel's user_ns, heartbeat
>> > or tab_completion, because request and reply may have some long
>> > processes in the queue.
>>
>> Possibly, can you explain more of these usage cases?  So far, we
>> have done tab completion over the REQ/REP channel.  The heartbeat
>> stuff would probably be a separate socket altogether.
>>
>> Brian
>>
>> > O.
>> > On 25 May 2010 at 17:02, Brian Granger
>> > wrote:
>> >>
>> >> 2010/5/25 Omar Andrés Zapata Mesa :
>> >> > hi.
>> >> > the idea is to have 2 types of channels for different types of
>> >> > messages.
>> >> > As is specified in the next file: >> >> > >> >> > >> >> > http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst >> >> >> >> This document is very out of date. ?We wrote it before writing the >> >> prototype here: >> >> >> >> http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ >> >> >> >> But never updated the design doc. ?At this point, I would consider our >> >> prototype as the design doc. ?I don't see a need for the client to >> >> have a PUB socket that the kernel is SUB'd to. >> >> >> >> > Another thing to discuss is the different types of messages that I >> >> > don't >> >> > find correct or clear on the previous link. >> >> > I mean, I think there are redundancies in that proposal such as the >> >> > pyin >> >> > and >> >> > execute request types of messages as a variable assignation in >> >> > pub/sub >> >> > and >> >> > req/rep. >> >> > What I suggest is the following: >> >> > REQ/REP: >> >> > Request: >> >> > # msg_type = 'execute_request' content = { >> >> > code : 'a = 10', >> >> > } >> >> > Reply: >> >> > # msg_type = 'execute_reply' content = { >> >> > 'status' : 'ok' OR 'error' OR 'abort' # data depends on status value >> >> > 'message':'error_message' or 'output_message' >> >> > } >> >> > PUB/SUB: >> >> > Complete: >> >> > # msg_type = 'complete_request' content = { >> >> > text : 'a.f', # complete on this line : 'print a.f' # full line >> >> > } >> >> > # msg_type = 'complete_reply' content = { >> >> > matches : ['a.foo', 'a.bar'] >> >> > } >> >> > Control: >> >> > # msg_type = 'heartbeat' content = { >> >> > } >> >> >> >> For now, please have a close look at the prototype in the link above. >> >> >> >> Cheers, >> >> >> >> Brian >> >> >> >> > What do you think about this? >> >> > Do you think that a sole req/rep channel is enough? >> >> > Best >> >> > O. >> >> > El 25 de mayo de 2010 15:08, Brian Granger >> >> > escribi?: >> >> >> >> >> >> I guess I am not clear why the kernel needs to have the SUB socket. >> >> >> If the client needs to send a message to the kernel, can't it simply >> >> >> use the REQ/REP channel? ?But I do think the kernel needs the REP >> >> >> and >> >> >> PUB sockets. >> >> >> >> >> >> Brian >> >> >> >> >> >> 2010/5/24 Omar Andr?s Zapata Mesa : >> >> >> > I have now read the zmq doc from zmq`s website reference. >> >> >> > I think we need to use for the kernel 3 ports for the >> >> >> > communication >> >> >> > system. >> >> >> > Kernel description: >> >> >> > >> >> >> > >> >> >> > >> >> >> > ?http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py >> >> >> > -> port 5555 have subscribe socket into kernel class to read >> >> >> > publisher >> >> >> > messages from frontend. >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > self._subscribe_socket = self._Context.socket(zmq.SUB) >> >> >> > >> >> >> > self._subscribe_socket.bind(self._subscribe_connection) >> >> >> > >> >> >> > self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"") >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > since the subscribe socket can not send messages "it was read in >> >> >> > the >> >> >> > reference", we need to implement another socket called publisher >> >> >> > to >> >> >> > send >> >> >> > messages to frontend, then >> >> >> > >> >> >> > >> >> >> > -> port 5556 has a socket which allow kernel class to send >> >> >> > messages >> >> >> > to >> >> >> > frontend, then the subbscribe and publisher sockets will >> >> >> > communicate. 
>> >> >> > >> >> >> > >> >> >> > >> >> >> > self._publisher_socket = self._Context.socket(zmq.PUB) >> >> >> > >> >> >> > self._publisher_socket.bind(self._publisher__connection) >> >> >> > >> >> >> > >> >> >> > -> and 5557 will be implemented to request and publisher sockets >> >> >> > that >> >> >> > are >> >> >> > working very well. >> >> >> > >> >> >> > do you think this 3-socket model is a good idea? You can check it >> >> >> > because >> >> >> > I've already implemented it an and it's working fine. >> >> >> > >> >> >> > >> >> >> > >> >> >> > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py >> >> >> > O. >> >> >> > El 24 de mayo de 2010 13:49, Brian Granger >> >> >> > escribi?: >> >> >> >> >> >> >> >> Omar, >> >> >> >> >> >> >> >> I am busy today but here are some ideas: >> >> >> >> >> >> >> >> * To get to know pyzmq better, I would open up 2-3 IPython >> >> >> >> sessions, >> >> >> >> import zmq on all of them and then start to create sockets and >> >> >> >> send >> >> >> >> messages between the different IPython sessions. ?This works >> >> >> >> really >> >> >> >> well and will give you a better idea of how the different socket >> >> >> >> types >> >> >> >> work, how the json stuff works, etc. ?This would be invaluable. >> >> >> >> >> >> >> >> * To simplify debugging, create a version of the code that has >> >> >> >> the >> >> >> >> absolute minimal code - no objects, config, etc. ?Just the raw >> >> >> >> zmq >> >> >> >> messaging stuff. >> >> >> >> >> >> >> >> I think if you do these 2 things, the error will be more obvious. >> >> >> >> Keep posting back to the list so I or Fernando can help with this >> >> >> >> though. >> >> >> >> >> >> >> >> Cheers, >> >> >> >> >> >> >> >> Brian >> >> >> >> >> >> >> >> 2010/5/22 Omar Andr?s Zapata Mesa : >> >> >> >> > hi all >> >> >> >> > I am working yet in zmq module to ipython, but I have the next >> >> >> >> > problem >> >> >> >> > using >> >> >> >> > json. >> >> >> >> > the code are in http://github.com/omazapa/ipython >> >> >> >> > into the dir? ipython/IPython/core/ >> >> >> >> > >> >> >> >> > I run my zmq server prototype >> >> >> >> > >> >> >> >> > the output is >> >> >> >> > >> >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ >> >> >> >> > python >> >> >> >> > ipzmq_server.py >> >> >> >> > reply socket= tcp://127.0.0.1:5555 >> >> >> >> > publisher socket = tcp://127.0.0.1:5556 >> >> >> >> > Server started. >> >> >> >> > >> >> >> >> > >> >> >> >> > in this moment I am waiting json`s message in reply socket. >> >> >> >> > >> >> >> >> > >> >> >> >> > then I run my client prototype >> >> >> >> > >> >> >> >> > the output is >> >> >> >> > >> >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ >> >> >> >> > python >> >> >> >> > ipzmq_client.py >> >> >> >> > request socket = tcp://127.0.0.1:5556 >> >> >> >> > subscribe socket = tcp://127.0.0.1:5555 >> >> >> >> > >> >> >> >> > >> >> >> >> > but server no recieve the message. >> >> >> >> > >> >> >> >> > the output is >> >> >> >> > >> >> >> >> > Traceback (most recent call last): >> >> >> >> > ? File "ipzmq_server.py", line 112, in >> >> >> >> > ??? msg=server.recieve_reply() >> >> >> >> > ? File "ipzmq_server.py", line 79, in recieve_reply >> >> >> >> > ??? msg=self._reply_socket.recv_json() >> >> >> >> > ? File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json >> >> >> >> > (zmq/_zmq.c:5242) >> >> >> >> > ? File "/usr/lib/python2.6/json/__init__.py", line 307, in >> >> >> >> > loads >> >> >> >> > ??? 
return _default_decoder.decode(s)
>> >> >> >> >   File "/usr/lib/python2.6/json/decoder.py", line 319, in decode
>> >> >> >> >     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>> >> >> >> >   File "/usr/lib/python2.6/json/decoder.py", line 338, in raw_decode
>> >> >> >> >     raise ValueError("No JSON object could be decoded")
>> >> >> >> > ValueError: No JSON object could be decoded
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > have you some idea?
>> >> >> >> > maybe, do I need encode my message before send it?
>> >> >> >> > I have the last version of zeromq2 from official repo and pyzmq
>> >> >> >> > http://github.com/ellisonbg/pyzmq/, I am using python2.6
>> >> >> >> >
>> >> >> >> > Brian said me that the problem is that I have outdated version of
>> >> >> >> > zeromq and pyzmq but I update zeromq and pyzmq and It is not
>> >> >> >> > working yet.
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > thks
>> >> >> >> >
>> >> >> >> > _______________________________________________
>> >> >> >> > IPython-dev mailing list
>> >> >> >> > IPython-dev at scipy.org
>> >> >> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev
>> >> >> >> >
>> >> >> >> >
>> >> >> >>
>> >> >> >> --
>> >> >> >> Brian E. Granger, Ph.D.
>> >> >> >> Assistant Professor of Physics
>> >> >> >> Cal Poly State University, San Luis Obispo
>> >> >> >> bgranger at calpoly.edu
>> >> >> >> ellisonbg at gmail.com
>> >> >> >
>> >> >> >
>> >> >>
>> >> >> --
>> >> >> Brian E. Granger, Ph.D.
>> >> >> Assistant Professor of Physics
>> >> >> Cal Poly State University, San Luis Obispo
>> >> >> bgranger at calpoly.edu
>> >> >> ellisonbg at gmail.com
>> >> >
>> >> >
>> >>
>> >> --
>> >> Brian E. Granger, Ph.D.
>> >> Assistant Professor of Physics
>> >> Cal Poly State University, San Luis Obispo
>> >> bgranger at calpoly.edu
>> >> ellisonbg at gmail.com
>> >
>> >
>> > _______________________________________________
>> > IPython-dev mailing list
>> > IPython-dev at scipy.org
>> > http://mail.scipy.org/mailman/listinfo/ipython-dev
>> >
>> >
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andresete.chaos at gmail.com  Thu May 27 22:02:52 2010
From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=)
Date: Thu, 27 May 2010 21:02:52 -0500
Subject: [IPython-dev] pyzmq problems
In-Reply-To: 
References: 
Message-ID: 

Hi,
I think that your design is ok, so I wrote the new implementation and pushed
the changes to the git repo, and I am doing a graphical design of all the
ipython-zmq parts.

The diagram is at http://gfif.udea.edu.co/ipzmq_connections.jpeg
Please take a look and suggest how I should test the heartbeat with the server.
Can I send the heartbeat as a simple request?
O.

On 26 May 2010 at 16:16, Brian Granger wrote:
> 2010/5/25 Omar Andrés Zapata Mesa :
> > Ok, I get it.
> > But can you send messages just on the request socket from the frontend?
>
> Yes, but I think that is all we need.
>
> > because I can not send messages with a zmq.SUB socket, and I can not recv
> > with a PUB socket.
> > that information appears in the zmq APIs. > > The REQ/REP channel is for cases where the frontend needs to make a > request that the kernel will reply to immediately and there is only 1 > reply. > > The PUB/SBU channel allows the kernel to send messages to the frontend > where there wasn't a corresponding reply. A perfect example is > stdout, which can happen at any moment, without a corresponding > request from the frontend. In this sense, the PUB/SUB channel is for > out of channel side effects. > > Again, I would look carefully at the example in the pyzmq source tree. > > Cheers, > > Brian > > http://api.zeromq.org/zmq_socket.html > > and I try do this in pyzmq and the message is ZMQError: Operation not > > supported > > is it the idea? > > O. > > > > El 25 de mayo de 2010 20:56, Brian Granger > escribi?: > >> > >> 2010/5/25 Omar Andr?s Zapata Mesa : > >> > Hi, > >> > then, > >> > are not pub/sub sockets needed? > >> > >> We definitely need one PUB/SUB channel. This channel will have the > >> PUB socket in the kernel and the SUB socket in the frontends. This > >> will be used by the kernel to send stdout, stderr, pyin, pyout, etc to > >> the frontends. But your examples also has a 2nd PUB/SUB channel going > >> the other direction from the frontend to the kernel. What are you > >> planning on sending over that channel? > >> > >> > because I think that maybe with pub/sub you can to synchronize the > >> > frontends, sending messages that let know a kernel`s user_ns, > heartbeat > >> > or > >> > tab_completion, because request and reply maybe have some longs > >> > processes in > >> > queue. > >> > >> Possibly, can you explain more of the these usage cases. So far, we > >> have done tab completion over the REQ/REP channel. The heartbeat > >> stuff would probably be a separate socket altogether. > >> > >> Brian > >> > >> > O. > >> > El 25 de mayo de 2010 17:02, Brian Granger > >> > escribi?: > >> >> > >> >> 2010/5/25 Omar Andr?s Zapata Mesa : > >> >> > hi. > >> >> > the idea is to have 2 types of channels for different types of > >> >> > messages. > >> >> > As is specified in the next file: > >> >> > > >> >> > > >> >> > > http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst > >> >> > >> >> This document is very out of date. We wrote it before writing the > >> >> prototype here: > >> >> > >> >> http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ > >> >> > >> >> But never updated the design doc. At this point, I would consider > our > >> >> prototype as the design doc. I don't see a need for the client to > >> >> have a PUB socket that the kernel is SUB'd to. > >> >> > >> >> > Another thing to discuss is the different types of messages that I > >> >> > don't > >> >> > find correct or clear on the previous link. > >> >> > I mean, I think there are redundancies in that proposal such as the > >> >> > pyin > >> >> > and > >> >> > execute request types of messages as a variable assignation in > >> >> > pub/sub > >> >> > and > >> >> > req/rep. 
> >> >> > What I suggest is the following: > >> >> > REQ/REP: > >> >> > Request: > >> >> > # msg_type = 'execute_request' content = { > >> >> > code : 'a = 10', > >> >> > } > >> >> > Reply: > >> >> > # msg_type = 'execute_reply' content = { > >> >> > 'status' : 'ok' OR 'error' OR 'abort' # data depends on status > value > >> >> > 'message':'error_message' or 'output_message' > >> >> > } > >> >> > PUB/SUB: > >> >> > Complete: > >> >> > # msg_type = 'complete_request' content = { > >> >> > text : 'a.f', # complete on this line : 'print a.f' # full line > >> >> > } > >> >> > # msg_type = 'complete_reply' content = { > >> >> > matches : ['a.foo', 'a.bar'] > >> >> > } > >> >> > Control: > >> >> > # msg_type = 'heartbeat' content = { > >> >> > } > >> >> > >> >> For now, please have a close look at the prototype in the link above. > >> >> > >> >> Cheers, > >> >> > >> >> Brian > >> >> > >> >> > What do you think about this? > >> >> > Do you think that a sole req/rep channel is enough? > >> >> > Best > >> >> > O. > >> >> > El 25 de mayo de 2010 15:08, Brian Granger > >> >> > escribi?: > >> >> >> > >> >> >> I guess I am not clear why the kernel needs to have the SUB > socket. > >> >> >> If the client needs to send a message to the kernel, can't it > simply > >> >> >> use the REQ/REP channel? But I do think the kernel needs the REP > >> >> >> and > >> >> >> PUB sockets. > >> >> >> > >> >> >> Brian > >> >> >> > >> >> >> 2010/5/24 Omar Andr?s Zapata Mesa : > >> >> >> > I have now read the zmq doc from zmq`s website reference. > >> >> >> > I think we need to use for the kernel 3 ports for the > >> >> >> > communication > >> >> >> > system. > >> >> >> > Kernel description: > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py > >> >> >> > -> port 5555 have subscribe socket into kernel class to read > >> >> >> > publisher > >> >> >> > messages from frontend. > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > self._subscribe_socket = self._Context.socket(zmq.SUB) > >> >> >> > > >> >> >> > self._subscribe_socket.bind(self._subscribe_connection) > >> >> >> > > >> >> >> > self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"") > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > since the subscribe socket can not send messages "it was read in > >> >> >> > the > >> >> >> > reference", we need to implement another socket called publisher > >> >> >> > to > >> >> >> > send > >> >> >> > messages to frontend, then > >> >> >> > > >> >> >> > > >> >> >> > -> port 5556 has a socket which allow kernel class to send > >> >> >> > messages > >> >> >> > to > >> >> >> > frontend, then the subbscribe and publisher sockets will > >> >> >> > communicate. > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > self._publisher_socket = self._Context.socket(zmq.PUB) > >> >> >> > > >> >> >> > self._publisher_socket.bind(self._publisher__connection) > >> >> >> > > >> >> >> > > >> >> >> > -> and 5557 will be implemented to request and publisher sockets > >> >> >> > that > >> >> >> > are > >> >> >> > working very well. > >> >> >> > > >> >> >> > do you think this 3-socket model is a good idea? You can check > it > >> >> >> > because > >> >> >> > I've already implemented it an and it's working fine. > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py > >> >> >> > O. 
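A rough sketch of the layout suggested above (the kernel holding only a REP
socket and a PUB socket, with no SUB socket on the kernel side) is given below.
It is only an illustration: the port numbers and message fields are placeholders,
not the actual prototype code from the pyzmq examples directory.

import zmq

context = zmq.Context()

# REP socket: answers each frontend request with exactly one reply.
reply_socket = context.socket(zmq.REP)
reply_socket.bind("tcp://127.0.0.1:5555")        # illustrative port

# PUB socket: broadcasts side effects (stdout, stderr, ...) to every frontend.
pub_socket = context.socket(zmq.PUB)
pub_socket.bind("tcp://127.0.0.1:5556")          # illustrative port

while True:
    # REQ/REP is strictly alternating: one recv, then one send, per request.
    request = reply_socket.recv_json()            # e.g. {'msg_type': 'execute_request', ...}
    # ... execute request['content']['code'] here ...
    pub_socket.send_json({'msg_type': 'stream',
                          'content': {'name': 'stdout', 'data': '...'}})
    reply_socket.send_json({'msg_type': 'execute_reply',
                            'content': {'status': 'ok'}})

A frontend would mirror this with a REQ socket connected to port 5555 and a SUB
socket connected to port 5556 (with the zmq.SUBSCRIBE option set to ""), so the
kernel never needs a SUB socket of its own.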
> >> >> >> > El 24 de mayo de 2010 13:49, Brian Granger > > >> >> >> > escribi?: > >> >> >> >> > >> >> >> >> Omar, > >> >> >> >> > >> >> >> >> I am busy today but here are some ideas: > >> >> >> >> > >> >> >> >> * To get to know pyzmq better, I would open up 2-3 IPython > >> >> >> >> sessions, > >> >> >> >> import zmq on all of them and then start to create sockets and > >> >> >> >> send > >> >> >> >> messages between the different IPython sessions. This works > >> >> >> >> really > >> >> >> >> well and will give you a better idea of how the different > socket > >> >> >> >> types > >> >> >> >> work, how the json stuff works, etc. This would be invaluable. > >> >> >> >> > >> >> >> >> * To simplify debugging, create a version of the code that has > >> >> >> >> the > >> >> >> >> absolute minimal code - no objects, config, etc. Just the raw > >> >> >> >> zmq > >> >> >> >> messaging stuff. > >> >> >> >> > >> >> >> >> I think if you do these 2 things, the error will be more > obvious. > >> >> >> >> Keep posting back to the list so I or Fernando can help with > this > >> >> >> >> though. > >> >> >> >> > >> >> >> >> Cheers, > >> >> >> >> > >> >> >> >> Brian > >> >> >> >> > >> >> >> >> 2010/5/22 Omar Andr?s Zapata Mesa : > >> >> >> >> > hi all > >> >> >> >> > I am working yet in zmq module to ipython, but I have the > next > >> >> >> >> > problem > >> >> >> >> > using > >> >> >> >> > json. > >> >> >> >> > the code are in http://github.com/omazapa/ipython > >> >> >> >> > into the dir ipython/IPython/core/ > >> >> >> >> > > >> >> >> >> > I run my zmq server prototype > >> >> >> >> > > >> >> >> >> > the output is > >> >> >> >> > > >> >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ > >> >> >> >> > python > >> >> >> >> > ipzmq_server.py > >> >> >> >> > reply socket= tcp://127.0.0.1:5555 > >> >> >> >> > publisher socket = tcp://127.0.0.1:5556 > >> >> >> >> > Server started. > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > in this moment I am waiting json`s message in reply socket. > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > then I run my client prototype > >> >> >> >> > > >> >> >> >> > the output is > >> >> >> >> > > >> >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ > >> >> >> >> > python > >> >> >> >> > ipzmq_client.py > >> >> >> >> > request socket = tcp://127.0.0.1:5556 > >> >> >> >> > subscribe socket = tcp://127.0.0.1:5555 > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > but server no recieve the message. > >> >> >> >> > > >> >> >> >> > the output is > >> >> >> >> > > >> >> >> >> > Traceback (most recent call last): > >> >> >> >> > File "ipzmq_server.py", line 112, in > >> >> >> >> > msg=server.recieve_reply() > >> >> >> >> > File "ipzmq_server.py", line 79, in recieve_reply > >> >> >> >> > msg=self._reply_socket.recv_json() > >> >> >> >> > File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json > >> >> >> >> > (zmq/_zmq.c:5242) > >> >> >> >> > File "/usr/lib/python2.6/json/__init__.py", line 307, in > >> >> >> >> > loads > >> >> >> >> > return _default_decoder.decode(s) > >> >> >> >> > File "/usr/lib/python2.6/json/decoder.py", line 319, in > >> >> >> >> > decode > >> >> >> >> > obj, end = self.raw_decode(s, idx=_w(s, 0).end()) > >> >> >> >> > File "/usr/lib/python2.6/json/decoder.py", line 338, in > >> >> >> >> > raw_decode > >> >> >> >> > raise ValueError("No JSON object could be decoded") > >> >> >> >> > ValueError: No JSON object could be decoded > >> >> >> >> > > >> >> >> >> > > >> >> >> >> > have you some idea? 
> >> >> >> > maybe, do I need encode my message before send it?
> >> >> >> > I have the last version of zeromq2 from official repo and pyzmq
> >> >> >> > http://github.com/ellisonbg/pyzmq/, I am using python2.6
> >> >> >> >
> >> >> >> > Brian said me that the problem is that I have outdated version of
> >> >> >> > zeromq and pyzmq but I update zeromq and pyzmq and It is not
> >> >> >> > working yet.
> >> >> >> >
> >> >> >> >
> >> >> >> > thks
> >> >> >> >
> >> >> >> > _______________________________________________
> >> >> >> > IPython-dev mailing list
> >> >> >> > IPython-dev at scipy.org
> >> >> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev
> >> >> >> >
> >> >> >> >
> >> >> >>
> >> >> >> --
> >> >> >> Brian E. Granger, Ph.D.
> >> >> >> Assistant Professor of Physics
> >> >> >> Cal Poly State University, San Luis Obispo
> >> >> >> bgranger at calpoly.edu
> >> >> >> ellisonbg at gmail.com
> >> >> >
> >> >> >
> >> >>
> >> >> --
> >> >> Brian E. Granger, Ph.D.
> >> >> Assistant Professor of Physics
> >> >> Cal Poly State University, San Luis Obispo
> >> >> bgranger at calpoly.edu
> >> >> ellisonbg at gmail.com
> >> >
> >> >
> >> > _______________________________________________
> >> > IPython-dev mailing list
> >> > IPython-dev at scipy.org
> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev
> >> >
> >> >
> >>
> >> --
> >> Brian E. Granger, Ph.D.
> >> Assistant Professor of Physics
> >> Cal Poly State University, San Luis Obispo
> >> bgranger at calpoly.edu
> >> ellisonbg at gmail.com
> >
> >
> > _______________________________________________
> > IPython-dev mailing list
> > IPython-dev at scipy.org
> > http://mail.scipy.org/mailman/listinfo/ipython-dev
> >
> >
> > --
> > Brian E. Granger, Ph.D.
> > Assistant Professor of Physics
> > Cal Poly State University, San Luis Obispo
> > bgranger at calpoly.edu
> > ellisonbg at gmail.com
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andresete.chaos at gmail.com  Thu May 27 22:59:41 2010
From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=)
Date: Thu, 27 May 2010 21:59:41 -0500
Subject: [IPython-dev] ipython-zmq messages
Message-ID: 

Hi all.
I have written an IpMessage class that holds the different pieces of information
shared between the frontend and the kernel, but I think we should discuss this
topic on the list, to collect more ideas and reach a better design.
Brian and Fernando have written an initial design; you can find it at
http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/
I have a similar design in mind, and I am working on a graphical version of it.
REQ/REP:

Request:
# msg_type = 'request'
content = {
    'code' : 'a = 10',
}

Reply:
# msg_type = 'reply'
content = {
    'status' : 'ok' OR 'error' OR 'abort',   # data depends on the status value
    'message' : 'error_message' or 'output_message'
}

PUB/SUB:

Complete:
for tab-completion we can update the user namespace in all frontends and do a
local search with readline
# msg_type = 'update_ns'
content = {
    'user_ns' : ns_dict   # ns_dict is a dictionary with the ns information of
                          # the kernel; the kernel can broadcast a message with
                          # this information.
}

# msg_type = 'stream'
content = {
    'name' : 'stdout',    # "name" can also be 'stderr'; this message can be
    'data' : 'blob',      # sent to show output while a process is running
}

Control:
# msg_type = 'heartbeat'
content = {
}

best!
O.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ellisonbg at gmail.com  Fri May 28 17:20:25 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 28 May 2010 14:20:25 -0700
Subject: [IPython-dev] pyzmq problems
In-Reply-To: 
References: 
Message-ID: 

2010/5/27 Omar Andrés Zapata Mesa :
> Hi,
> I think that your design is ok, so I wrote the new implementation and pushed
> the changes to the git repo, and I am doing a graphical design of all the
> ipython-zmq parts.
>
> The diagram is at http://gfif.udea.edu.co/ipzmq_connections.jpeg
> Please take a look and suggest how I should test the heartbeat with the server.
> Can I send the heartbeat as a simple request?
> O.

The heartbeat capability is not quite there yet and will require writing
some C++ code in pyzmq.  So let's wait on that part of the design.

Brian

> > On 26 May 2010 at 16:16, Brian Granger wrote:
>>
>> 2010/5/25 Omar Andrés Zapata Mesa :
>> > Ok, I get it.
>> > But can you send messages just on the request socket from the frontend?
>>
>> Yes, but I think that is all we need.
>>
>> > because I can not send messages with a zmq.SUB socket, and I can not recv
>> > with a PUB socket.
>> > That information appears in the zmq APIs.
>>
>> The REQ/REP channel is for cases where the frontend needs to make a
>> request that the kernel will reply to immediately, and there is only 1
>> reply.
>>
>> The PUB/SUB channel allows the kernel to send messages to the frontend
>> where there wasn't a corresponding request.  A perfect example is
>> stdout, which can happen at any moment, without a corresponding
>> request from the frontend.  In this sense, the PUB/SUB channel is for
>> out-of-band side effects.
>>
>> Again, I would look carefully at the example in the pyzmq source tree.
>>
>> Cheers,
>>
>> Brian
>> > http://api.zeromq.org/zmq_socket.html
>> > and when I try to do this in pyzmq the message is ZMQError: Operation not
>> > supported
>> > is that the idea?
>> > O.
>> >
>> > On 25 May 2010 at 20:56, Brian Granger wrote:
>> >>
>> >> 2010/5/25 Omar Andrés Zapata Mesa :
>> >> > Hi,
>> >> > then,
>> >> > are pub/sub sockets not needed?
>> >>
>> >> We definitely need one PUB/SUB channel.  This channel will have the
>> >> PUB socket in the kernel and the SUB socket in the frontends.  This
>> >> will be used by the kernel to send stdout, stderr, pyin, pyout, etc. to
>> >> the frontends.  But your examples also have a 2nd PUB/SUB channel going
>> >> the other direction, from the frontend to the kernel.  What are you
>> >> planning on sending over that channel?
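To make the draft message layout above concrete, here is a minimal sketch of the
frontend side of the two channels. The port numbers are arbitrary and the field
names just follow the draft spec in this thread, not a finished protocol. One
detail relevant to the earlier "No JSON object could be decoded" traceback:
send_json() on one end has to be paired with recv_json() on the other, since
recv_json() raises that ValueError when the bytes it receives are not valid JSON.

import zmq

context = zmq.Context()

# REQ socket: one request, one reply, in lock step with the kernel's REP socket.
request_socket = context.socket(zmq.REQ)
request_socket.connect("tcp://127.0.0.1:5555")   # illustrative port

# SUB socket: passively receives whatever the kernel publishes.
sub_socket = context.socket(zmq.SUB)
sub_socket.connect("tcp://127.0.0.1:5556")       # illustrative port
sub_socket.setsockopt(zmq.SUBSCRIBE, "")         # no topic filtering

# One request/reply cycle, JSON in both directions.
request_socket.send_json({'msg_type': 'request',
                          'content': {'code': 'a = 10'}})
reply = request_socket.recv_json()               # blocks until the kernel answers
print reply['content']['status']

# Side effects published by the kernel (stdout, stderr, ...) arrive here.
stream_msg = sub_socket.recv_json()
print stream_msg['content']['data']

Note that the SUB socket only ever receives; trying to send on it is what produces
the "ZMQError: Operation not supported" mentioned earlier in the thread.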
>> >> >> >> > because I think that maybe with pub/sub you can to synchronize the >> >> > frontends, sending messages that let know a kernel`s user_ns, >> >> > heartbeat >> >> > or >> >> > tab_completion, because request and reply maybe have some longs >> >> > processes in >> >> > queue. >> >> >> >> Possibly, can you explain more of the these usage cases. ?So far, we >> >> have done tab completion over the REQ/REP channel. ?The heartbeat >> >> stuff would probably be a separate socket altogether. >> >> >> >> Brian >> >> >> >> > O. >> >> > El 25 de mayo de 2010 17:02, Brian Granger >> >> > escribi?: >> >> >> >> >> >> 2010/5/25 Omar Andr?s Zapata Mesa : >> >> >> > hi. >> >> >> > the idea is to have 2 types of channels for different types of >> >> >> > messages. >> >> >> > As is specified in the next file: >> >> >> > >> >> >> > >> >> >> > >> >> >> > http://github.com/ellisonbg/pyzmq/blob/master/examples/kernel/message_spec.rst >> >> >> >> >> >> This document is very out of date. ?We wrote it before writing the >> >> >> prototype here: >> >> >> >> >> >> http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/ >> >> >> >> >> >> But never updated the design doc. ?At this point, I would consider >> >> >> our >> >> >> prototype as the design doc. ?I don't see a need for the client to >> >> >> have a PUB socket that the kernel is SUB'd to. >> >> >> >> >> >> > Another thing to discuss is the different types of messages that I >> >> >> > don't >> >> >> > find correct or clear on the previous link. >> >> >> > I mean, I think there are redundancies in that proposal such as >> >> >> > the >> >> >> > pyin >> >> >> > and >> >> >> > execute request types of messages as a variable assignation in >> >> >> > pub/sub >> >> >> > and >> >> >> > req/rep. >> >> >> > What I suggest is the following: >> >> >> > REQ/REP: >> >> >> > Request: >> >> >> > # msg_type = 'execute_request' content = { >> >> >> > code : 'a = 10', >> >> >> > } >> >> >> > Reply: >> >> >> > # msg_type = 'execute_reply' content = { >> >> >> > 'status' : 'ok' OR 'error' OR 'abort' # data depends on status >> >> >> > value >> >> >> > 'message':'error_message' or 'output_message' >> >> >> > } >> >> >> > PUB/SUB: >> >> >> > Complete: >> >> >> > # msg_type = 'complete_request' content = { >> >> >> > text : 'a.f', # complete on this line : 'print a.f' # full line >> >> >> > } >> >> >> > # msg_type = 'complete_reply' content = { >> >> >> > matches : ['a.foo', 'a.bar'] >> >> >> > } >> >> >> > Control: >> >> >> > # msg_type = 'heartbeat' content = { >> >> >> > } >> >> >> >> >> >> For now, please have a close look at the prototype in the link >> >> >> above. >> >> >> >> >> >> Cheers, >> >> >> >> >> >> Brian >> >> >> >> >> >> > What do you think about this? >> >> >> > Do you think that a sole req/rep channel is enough? >> >> >> > Best >> >> >> > O. >> >> >> > El 25 de mayo de 2010 15:08, Brian Granger >> >> >> > escribi?: >> >> >> >> >> >> >> >> I guess I am not clear why the kernel needs to have the SUB >> >> >> >> socket. >> >> >> >> If the client needs to send a message to the kernel, can't it >> >> >> >> simply >> >> >> >> use the REQ/REP channel? ?But I do think the kernel needs the REP >> >> >> >> and >> >> >> >> PUB sockets. >> >> >> >> >> >> >> >> Brian >> >> >> >> >> >> >> >> 2010/5/24 Omar Andr?s Zapata Mesa : >> >> >> >> > I have now read the zmq doc from zmq`s website reference. >> >> >> >> > I think we need to use for the kernel 3 ports for the >> >> >> >> > communication >> >> >> >> > system. 
>> >> >> >> > Kernel description: >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > ?http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_server.py >> >> >> >> > -> port 5555 have subscribe socket into kernel class to read >> >> >> >> > publisher >> >> >> >> > messages from frontend. >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > self._subscribe_socket = self._Context.socket(zmq.SUB) >> >> >> >> > >> >> >> >> > self._subscribe_socket.bind(self._subscribe_connection) >> >> >> >> > >> >> >> >> > self._subscribe_socket.setsockopt(zmq.SUBSCRIBE,"") >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > since the subscribe socket can not send messages "it was read >> >> >> >> > in >> >> >> >> > the >> >> >> >> > reference", we need to implement another socket called >> >> >> >> > publisher >> >> >> >> > to >> >> >> >> > send >> >> >> >> > messages to frontend, then >> >> >> >> > >> >> >> >> > >> >> >> >> > -> port 5556 has a socket which allow kernel class to send >> >> >> >> > messages >> >> >> >> > to >> >> >> >> > frontend, then the subbscribe and publisher sockets will >> >> >> >> > communicate. >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > self._publisher_socket = self._Context.socket(zmq.PUB) >> >> >> >> > >> >> >> >> > self._publisher_socket.bind(self._publisher__connection) >> >> >> >> > >> >> >> >> > >> >> >> >> > -> and 5557 will be implemented to request and publisher >> >> >> >> > sockets >> >> >> >> > that >> >> >> >> > are >> >> >> >> > working very well. >> >> >> >> > >> >> >> >> > do you think this 3-socket model is a good idea? You can check >> >> >> >> > it >> >> >> >> > because >> >> >> >> > I've already implemented it an and it's working fine. >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > http://github.com/omazapa/ipython/blob/master/IPython/core/ipzmq_client.py >> >> >> >> > O. >> >> >> >> > El 24 de mayo de 2010 13:49, Brian Granger >> >> >> >> > >> >> >> >> > escribi?: >> >> >> >> >> >> >> >> >> >> Omar, >> >> >> >> >> >> >> >> >> >> I am busy today but here are some ideas: >> >> >> >> >> >> >> >> >> >> * To get to know pyzmq better, I would open up 2-3 IPython >> >> >> >> >> sessions, >> >> >> >> >> import zmq on all of them and then start to create sockets and >> >> >> >> >> send >> >> >> >> >> messages between the different IPython sessions. ?This works >> >> >> >> >> really >> >> >> >> >> well and will give you a better idea of how the different >> >> >> >> >> socket >> >> >> >> >> types >> >> >> >> >> work, how the json stuff works, etc. ?This would be >> >> >> >> >> invaluable. >> >> >> >> >> >> >> >> >> >> * To simplify debugging, create a version of the code that has >> >> >> >> >> the >> >> >> >> >> absolute minimal code - no objects, config, etc. ?Just the raw >> >> >> >> >> zmq >> >> >> >> >> messaging stuff. >> >> >> >> >> >> >> >> >> >> I think if you do these 2 things, the error will be more >> >> >> >> >> obvious. >> >> >> >> >> Keep posting back to the list so I or Fernando can help with >> >> >> >> >> this >> >> >> >> >> though. >> >> >> >> >> >> >> >> >> >> Cheers, >> >> >> >> >> >> >> >> >> >> Brian >> >> >> >> >> >> >> >> >> >> 2010/5/22 Omar Andr?s Zapata Mesa : >> >> >> >> >> > hi all >> >> >> >> >> > I am working yet in zmq module to ipython, but I have the >> >> >> >> >> > next >> >> >> >> >> > problem >> >> >> >> >> > using >> >> >> >> >> > json. 
>> >> >> >> >> > the code are in http://github.com/omazapa/ipython >> >> >> >> >> > into the dir? ipython/IPython/core/ >> >> >> >> >> > >> >> >> >> >> > I run my zmq server prototype >> >> >> >> >> > >> >> >> >> >> > the output is >> >> >> >> >> > >> >> >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ >> >> >> >> >> > python >> >> >> >> >> > ipzmq_server.py >> >> >> >> >> > reply socket= tcp://127.0.0.1:5555 >> >> >> >> >> > publisher socket = tcp://127.0.0.1:5556 >> >> >> >> >> > Server started. >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > in this moment I am waiting json`s message in reply socket. >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > then I run my client prototype >> >> >> >> >> > >> >> >> >> >> > the output is >> >> >> >> >> > >> >> >> >> >> > omazapa at tuxhome:~/MyProjects/GSoC/tmp/ipython/IPython/core$ >> >> >> >> >> > python >> >> >> >> >> > ipzmq_client.py >> >> >> >> >> > request socket = tcp://127.0.0.1:5556 >> >> >> >> >> > subscribe socket = tcp://127.0.0.1:5555 >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > but server no recieve the message. >> >> >> >> >> > >> >> >> >> >> > the output is >> >> >> >> >> > >> >> >> >> >> > Traceback (most recent call last): >> >> >> >> >> > ? File "ipzmq_server.py", line 112, in >> >> >> >> >> > ??? msg=server.recieve_reply() >> >> >> >> >> > ? File "ipzmq_server.py", line 79, in recieve_reply >> >> >> >> >> > ??? msg=self._reply_socket.recv_json() >> >> >> >> >> > ? File "_zmq.pyx", line 709, in zmq._zmq.Socket.recv_json >> >> >> >> >> > (zmq/_zmq.c:5242) >> >> >> >> >> > ? File "/usr/lib/python2.6/json/__init__.py", line 307, in >> >> >> >> >> > loads >> >> >> >> >> > ??? return _default_decoder.decode(s) >> >> >> >> >> > ? File "/usr/lib/python2.6/json/decoder.py", line 319, in >> >> >> >> >> > decode >> >> >> >> >> > ??? obj, end = self.raw_decode(s, idx=_w(s, 0).end()) >> >> >> >> >> > ? File "/usr/lib/python2.6/json/decoder.py", line 338, in >> >> >> >> >> > raw_decode >> >> >> >> >> > ??? raise ValueError("No JSON object could be decoded") >> >> >> >> >> > ValueError: No JSON object could be decoded >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > have you some idea? >> >> >> >> >> > maybe, do I need encode my message before send it? >> >> >> >> >> > I have the last version of zeromq2 from official repo and >> >> >> >> >> > pyzmq >> >> >> >> >> > http://github.com/ellisonbg/pyzmq/, I am using python2.6 >> >> >> >> >> > >> >> >> >> >> > Brian? said me that the problem is that I have outdated >> >> >> >> >> > version >> >> >> >> >> > of >> >> >> >> >> > zeromq >> >> >> >> >> > and pyzmq but I update zeromq and pyzmq and It is not >> >> >> >> >> > working >> >> >> >> >> > yet. >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > thks >> >> >> >> >> > >> >> >> >> >> > _______________________________________________ >> >> >> >> >> > IPython-dev mailing list >> >> >> >> >> > IPython-dev at scipy.org >> >> >> >> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> -- >> >> >> >> >> Brian E. Granger, Ph.D. >> >> >> >> >> Assistant Professor of Physics >> >> >> >> >> Cal Poly State University, San Luis Obispo >> >> >> >> >> bgranger at calpoly.edu >> >> >> >> >> ellisonbg at gmail.com >> >> >> >> > >> >> >> >> > >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> -- >> >> >> >> Brian E. Granger, Ph.D. 
>> >> >> >> Assistant Professor of Physics
>> >> >> >> Cal Poly State University, San Luis Obispo
>> >> >> >> bgranger at calpoly.edu
>> >> >> >> ellisonbg at gmail.com
>> >> >> >
>> >> >> >
>> >> >>
>> >> >> --
>> >> >> Brian E. Granger, Ph.D.
>> >> >> Assistant Professor of Physics
>> >> >> Cal Poly State University, San Luis Obispo
>> >> >> bgranger at calpoly.edu
>> >> >> ellisonbg at gmail.com
>> >> >
>> >> >
>> >> > _______________________________________________
>> >> > IPython-dev mailing list
>> >> > IPython-dev at scipy.org
>> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev
>> >> >
>> >> >
>> >>
>> >> --
>> >> Brian E. Granger, Ph.D.
>> >> Assistant Professor of Physics
>> >> Cal Poly State University, San Luis Obispo
>> >> bgranger at calpoly.edu
>> >> ellisonbg at gmail.com
>> >
>> >
>> > _______________________________________________
>> > IPython-dev mailing list
>> > IPython-dev at scipy.org
>> > http://mail.scipy.org/mailman/listinfo/ipython-dev
>> >
>> >
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>
>

--
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From andresete.chaos at gmail.com  Mon May 31 14:46:39 2010
From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=)
Date: Mon, 31 May 2010 13:46:39 -0500
Subject: [IPython-dev] Help. pyzmq to ipython-zmq
Message-ID: 

Hi all.
How can I write a queue of requests in pyzmq?
This part is not documented on the zmq website:
http://api.zeromq.org/zmq_queue.html
Is it implemented in pyzmq now?
O.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
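For what it is worth, the C-level zmq_queue(3) device referenced above is just a
forwarder that sits between the REQ sockets of many clients and the REP socket of
a server, using an XREP socket on the client-facing side and an XREQ socket on the
server-facing side. The sketch below assumes a pyzmq build that exposes zmq.device
and the XREP/XREQ constants (whether the pyzmq version discussed in this thread
already ships that is exactly the open question here), and the port numbers are
arbitrary.

import zmq

context = zmq.Context()

# Socket facing the frontends' REQ sockets.
frontend = context.socket(zmq.XREP)
frontend.bind("tcp://127.0.0.1:5570")    # illustrative port

# Socket facing the kernel's REP socket(s).
backend = context.socket(zmq.XREQ)
backend.bind("tcp://127.0.0.1:5571")     # illustrative port

# Blocks forever, shuttling each request to the kernel and routing the
# reply back to whichever frontend sent it.
zmq.device(zmq.QUEUE, frontend, backend)

Each frontend would then connect its REQ socket to tcp://127.0.0.1:5570 and the
kernel would connect its REP socket to tcp://127.0.0.1:5571.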