From gvwilson at third-bit.com Thu Aug 1 09:41:55 2013 From: gvwilson at third-bit.com (Greg Wilson) Date: Thu, 01 Aug 2013 09:41:55 -0400 Subject: [IPython-dev] diffing and merging? Message-ID: <51FA65A3.6060001@third-bit.com> Hi everyone, Was there any discussion of a notebook diff and merge tool during last week's sprint? If so, I'd be grateful if someone could bring me up to speed --- now that I'm converting Software Carpentry lessons to notebooks, it has suddenly become my #1 need :-) Thanks, Greg From nborwankar at gmail.com Thu Aug 1 13:30:25 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Thu, 1 Aug 2013 10:30:25 -0700 Subject: [IPython-dev] prose, notebooks, and thrashing (and a re-direct to another list) In-Reply-To: <51F6868D.1050903@third-bit.com> References: <51F6868D.1050903@third-bit.com> Message-ID: Hi Greg, I've just released a very early beta of LearnDataScience ( http://nborwankar.github.io/LearnDataScience) and I had to struggle with a subset of these issues. I am not sure my experience is the relevant to you but here's what I did. The goal with my content is to teach developers data science so it's possibly complementary to what you folks are doing. My content for a topic was divided into three parts a) Overview - an exposition of the theory in very understandable, simple but technically accurate terms. Often diagrams here were statically rendered although I could have included code. I chose to omit code where it would be a distraction and included it in supporting libraries. b) Exploration - here's where the student starts getting their hand dirty. Data cleaning etc. c) Analysis - here's where they produce results. Run ML algorithms, compute precision etc. That's what I started with - 3 notebooks per algorithm. But I felt the need for a separate worksheet notebook which was just all the code segments without the verbiage so students could just get in there and mess things up if they wanted and not be concerned they would mess up their "textbook". I did not have any presentation metaphor - currently it's all teach yourself. Not sure if this was helpful, Nitin P.S. To admins - if user list is deprecated - where do usage questions go - stackoverflow? And what happens when users want to let the community know about interesting notebooks? dev list ? Thanks. ------------------------------------------------------------------ Nitin Borwankar nborwankar at gmail.com On Mon, Jul 29, 2013 at 8:13 AM, Greg Wilson wrote: > Hi, > I posted the message below to the Software Carpentry discussion list a > few minutes ago. I'd appreciate input from the IPython Notebook > community as well. > Thanks, > Greg > p.s. if anyone has a good way to manage discussion that spans two > independent mailing lists, I'm all ears... :-) > > > --------------------------------------------------------------------------------------------------- > > Hi, > > I'm trying to figure out how to satisfy four sets of needs for > instructional material using IPython Notebooks in Software Carpentry. I > feel like I'm thrashing, so I'd appreciate your help. 
> > My four use cases are: > > * instructor's guide: to help instructors prepare for a bootcamp > * lecture notes: what instructors use during a bootcamp > * workbook: what learners are given to use during a bootcamp > * textbook: what learners use on their own outside a bootcamp > > To make this more concrete, have a look at > http://swcarpentry.github.io/bc/lessons/guide-shell/tutorial.html, which > describes how I teach the shell: > > * > > http://swcarpentry.github.io/bc/lessons/guide-shell/tutorial.html#s:shell:instructors > is a high-level intro for instructors. Learners might find it useful, > but it's not really aimed at them. > > * Each topic opens with objectives (see > > http://swcarpentry.github.io/bc/lessons/guide-shell/tutorial.html#s:shell:filedir:objectives > for an example). This belongs in the instructor's guide ("Here's what > you're supposed to teach"), possibly the lecture notes ("Hi class, > here's what I'm about to teach"), probably not the workbook, and > definitely the textbook. > > * Each topic ends with key points and challenge exercises: see > > http://swcarpentry.github.io/bc/lessons/guide-shell/tutorial.html#s:shell:filedir:keypoints > , > and > > http://swcarpentry.github.io/bc/lessons/guide-shell/tutorial.html#s:shell:filedir:challenges > for examples. Again, both belong in the instructor's guide and the > textbook; they probably belong in the lecture notes, and probably also > in the workbook (as a starting point for learners' own notes, and so > that they know what to work on during practicals). > > Problem #1: how do we manage material that ought to appear in multiple > documents? Right now, each snippet is a separate HTML file that is > %include'd to assemble the entire document (see > > https://raw.github.com/swcarpentry/bc/gh-pages/lessons/guide-shell/tutorial.html > ). > That works OK for HTML pages, but not for notebooks: as far as I know, > there's no way to %include file X in notebook Y. > > Problem #2: what exactly ought to be in the lecture notes? For example, > compare > > http://swcarpentry.github.io/bc/lessons/guide-pyblocks/tutorial.html#s:pyblocks:logic:lesson > (the introduction to "if/else" in the incomplete instructor's guide to > basic Python programming using IPythonBlocks) to > > http://nbviewer.ipython.org/urls/raw.github.com/swcarpentry/bc/gh-pages/lessons/guide-pyblocks/pyblocks-logic.ipynb > (an IPython Notebook with the same handful of examples). The former is > much too verbose to put in front of learners in a classroom setting. I > feel the latter is too sparse, but when I add point-form notes and then > lecture over the result, it feels clumsy: > > * If the item I'm currently talking about is centered on the screen, > learners can see my next points on screen below it. This is distracting > --- so distracting that tools like PowerPoint and other slideshow tools > were invented in part to prevent it happening. If the notebook > supported progressive reveal of pre-made cells, that problem would lessen. > > * Even with that, though, one of the most compelling reasons to use the > notebook as a lecture tool is lost. Learners really like us typing code > in as we teach: it slows us down, which makes it more likely that we'll > walk through the code at a comprehensible pace, and it makes it a lot > easier for us to extemporize in response to their questions or > confusion. We lose both benefits if we start using "reveal next", > though we're still better off than if we use slides. 
> > Problem #3: what ought to be in the workbook (the notebook we give > students as a starting point)? Option A is "nothing": they should type > things in as the lecture proceeds. This forces them to learn hands-on > (which is good), but greatly increases the risk that they'll fall behind: > > * They're more likely to mis-type something than the instructor, so > their actual lines-per-second is (much) lower. > * They want to take notes, which slows them down even further. > > Option B is to give learners a notebook with notes and code already in > it, but that gets us back to them (a) being distracted by material they > haven't seen yet, but which is already on their screen, and (b) not > having to be hands-on while learning. On the other hand, these are > grownups, and there's only so much we can do with existing tools. > > So here's what I think I've come to: > > 1. For now, we create two documents: an instructor's guide that doubles > as a textbook (no harm in learners reading notes for instructors, and > possibly some benefit), and a notebook that interleaves point-form notes > with code examples that instructors will use for lecturing, and learners > will be given as a starting point. > > 2. We build a tool to extract point-form notes from an IPython Notebook > to create the "objectives.html" and "keypoints.html" sections that are > %include'd in #1 above. This will help keep the notebook and the guide > in step; whoever changes the notebook will still have to read the > section of the guide to make sure, but it's a step. > > 3. The code cells in our notebooks have markers for the instructors and > the learners to fill in. For example, instead of giving people: > > for char in "GATTACA": > if char == "A": > print "found an A" > > we have: > > for ________ in "GATTACA": > if ________: > print "found an A" > > Filling in the ________'s in the code snippets forces the instructor and > the learners to do some typing. As a bonus, if chosen judiciously they > will also encourage both to focus on the key features of the example. > > But there are problems. (You knew there would be...) > > * (medium-sized) In order to extract stuff from notebooks for use > elsewhere, we need a way to tag the cells that contain objectives, notes > for the key point summary, etc., so that our tool can find and > differentiate them. I *think* this can be done via a plugin: the > notebook has an extensible metadata field per cell, which we've already > exploited for another tool, so adding something that allows "right click > and select tag" should be straightforward. It would be nice if this info > was available as a class on the HTML element containing the material, so > that it could be styled via CSS, but that's a bonus. > > * (large) Literal ________'s in our code will make it invalid, so our > notebooks either won't have any output (because we didn't execute the > code) or will be littered with error messages (because we did). The > former's not bad, but if we do that, we need two version of each > notebook: one with the blanks, and one with them filled in. As always, > keeping variants of a single document in sync will be a pain. > > If the code in cells was stored as HTML, we could put something like > char == "A" in the code, then use Javascript > to display ________, erase it when the learner started typing, or fill > it in with the right answer when the learner just wanted to catch up or > see the answer. 
Problem is, the code *isn't* stored as HTML; it's stored > as JSON that looks like this: > > { > "cell_type": "code", > "collapsed": false, > "input": [ > "import math\n", > "print \"pi is\", math.pi\n", > "print \"square root of 5 is\", math.sqrt(5)" > ], > "language": "python", > "metadata": {}, > "outputs": [ > { > "output_type": "stream", > "stream": "stdout", > "text": [ > "pi is 3.14159265359\n", > "square root of 5 is 2.2360679775\n" > ] > } > ], > "prompt_number": 1 > } > > We could use character offsets to identify subsections of the code, then > intercept rendering to insert span elements, but that way lies madness... > > So as I said at the outset, I feel like I'm thrashing here, and would > welcome suggestions on how to proceed. > > Thanks, > Greg > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Thu Aug 1 13:34:57 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 1 Aug 2013 18:34:57 +0100 Subject: [IPython-dev] prose, notebooks, and thrashing (and a re-direct to another list) In-Reply-To: References: <51F6868D.1050903@third-bit.com> Message-ID: On 1 August 2013 18:30, Nitin Borwankar wrote: > P.S. To admins - if user list is deprecated - where do usage questions go > - stackoverflow? > And what happens when users want to let the community know about > interesting notebooks? dev list ? > We try to answer questions on SO, on the hipchat chat room, or on the mailing list. The dev list is kind of just becoming 'the list' - we decided that the separate user and dev lists were unnecessary. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdroe at stsci.edu Thu Aug 1 14:06:35 2013 From: mdroe at stsci.edu (Michael Droettboom) Date: Thu, 1 Aug 2013 14:06:35 -0400 Subject: [IPython-dev] ANN: matplotlib 1.3.0 released Message-ID: <51FAA3AB.6020506@stsci.edu> On behalf of a veritable army of super coders, I'm pleased to announce the release of matplotlib 1.3.0. Downloads Downloads are available here: http://matplotlib.org/downloads.html as well as through |pip|. Check with your distro for when matplotlib 1.3.0 will become packaged for your environment. (Note: Mac .dmg installers are still forthcoming due to some issues with the new installation approach.) Important known issues matplotlib no longer ships with its Python dependencies, including dateutil, pytz, pyparsing and six. When installing from source or |pip|, |pip| will install these for you automatically. When installing from packages (on Linux distributions, MacPorts, homebrew etc.) these dependencies should also be handled automatically. The Windows binary installers do not include or install these dependencies. You may need to remove any old matplotlib installations before installing 1.3.0 to ensure matplotlib has access to the latest versions of these dependencies. The following backends have been removed: QtAgg (Qt version 3.x only), FlktAgg and Emf. 
For a complete list of removed features, see http://matplotlib.org/api/api_changes.html#changes-in-1-3 What's new * xkcd-style sketch plotting * webagg backend for displaying and interacting with plots in a web browser * event plots * triangular grid interpolation * control of baselines in stackplot * many improvements to text and color handling For a complete list of what's new, see http://matplotlib.org/users/whats_new.html#new-in-matplotlib-1-3 Have fun, and enjoy matplotlib! Michael Droettboom -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdroe at stsci.edu Thu Aug 1 14:42:00 2013 From: mdroe at stsci.edu (Michael Droettboom) Date: Thu, 1 Aug 2013 14:42:00 -0400 Subject: [IPython-dev] MEP19: Continuous integration virtual meeting Message-ID: <51FAABF8.1080707@stsci.edu> (Apologies for cross-posting). matplotlib has a dire need to improve its continuous integration testing. I've drafted MEP19 and solicited comments, but there hasn't been a lot of feedback thus far. As an alternative to mailing list discussion, where this sort of upfront planning can sometimes be difficult, I'm considering holding a Google Hangout in the next few weeks on the subject. It's ok to participate even if you don't have the time to work on matplotlib -- I would also like feedback from advice from those that have configured similar systems for other projects. matplotlib's needs are somewhat more complex in terms of dependencies, cpu, ram and storage, so we're pushing things pretty far here. If there's enough people with an interest in participating in the discussion, I'll send around a Doodle poll to find a good time. Mike From fperez.net at gmail.com Thu Aug 1 14:46:56 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 1 Aug 2013 11:46:56 -0700 Subject: [IPython-dev] Our "single mailing list" policy, the future of the IPython-user list... Message-ID: Hi all, [ deliberately cross-posting to -dev and -user] over the last few months, you may have noticed comments we've made about how we are "slowly deprecating" the user list, and questions have come up regarding that. We're sorry for not having communicated better the intent of this, so let me recap briefly... # The What We are moving to having a single mailing list for IPython *where the core team participates*, rather than separate -dev and -user lists. Note that we are NOT closing or removing the user list, and if over time a self-sustaining community of users helping each other develops there, that's totally fine. But the core team is basically 'signing off' from the -user list. Some of us may occasionally see something there and reply to it, but we won't be monitoring it much further. We're not renaming the lists simply because that would force us to shut them down and make new ones, messing up our entire history and subscriber base. That's too much turmoil for little benefit. # The Why 1. We want people to think of themselves as members and stakeholders of a community, not as 'users' vs 'developers'. Every one of our developers should be a user (that's what makes IPython good, that we actually *use it*), and vice-versa, every user of IPython should be a potential developer when the itch comes up. So rather than separating into two distinct groups, we want to think of a single community where 'user' and 'developer' are simply *roles* that everyone can play interchangeably. 2. IPython is a project for programming and computing, so we expect most of its users to be technically savvy to some extent. 
Even if they are beginners, it's not a web browser or word processor. For Chrome, for example, the gap between a user and a developer is potentially enormous, whereas for us that gap is naturally much smaller. 3. We were noticing that, simply out of bandwidth limitations, there was a natural tendency for us to ignore the -user list. This just makes that official so core devs don't need to feel guilty about ignoring that list :) # Where to ask questions? - We've noticed that different people tend to prefer different modes of communication, and that not everyone likes mailing lists. So we've tried to also make it easy to engage IPython expertise on multiple channels: 1. The -dev list remains active, and the core team will do our best to help out there. If you like asking questions via mailing list, just hop on there. 2. If you want real-time quick help with a question, we have a help chat room now that's persistent: https://www.hipchat.com/ghSp7E1uY This is basically similar to IRC, but with the advantage that we can have permanent accounts for anyone who is willing to be logged in frequently (core devs and users who would like to help 'staff it'). For visitors the experience is no worse than IRC-over-the-web, and for persistent users we have email mentions, searchable history, etc, without having to manage a bunch of IRC bots. 3. If you prefer web fora, StackOverflow works well and several core devs post regularly there. Just tag your question with 'ipython': http://stackoverflow.com/questions/tagged/ipython 4. If you're a reddit user, there's an IPython subreddit too: http://www.reddit.com/r/IPython Hopefully this provides ample means for different kinds of usage patterns, while allowing the core team to manage our badly overcommitted bandwidth somewhat sensibly. # The future of the -user list As indicated above, it won't be closed. We'll simply have to see if a community of users builds around it or not. If it does, that's great! But we (the core dev team) won't be playing an active role there, besides managing any problems that may arise with spam/abuse. I hope this clarifies things. Having put this now in writing, I'd like to ask that other devs NOT say to users further that we're "deprecating" the user list, since that may be misinterpreted as closing it. Instead, if you see a question linger there that you're willing to answer, do so with CC to the dev list and post a link to this message for the explanation. Finally, I want to stress that while we do our best to engage our entire community as much as possible, feed from your ideas and questions and help with all reported problems, we also have to balance that with our job of moving IPython forward. So we really hope that as many of you will pitch in to help one another as possible, so that the core team can continue improving the project itself. And keep in mind that 'core devs' is a term *strictly* defined by the engagement of the community! All it takes is enough high-quality contributions from a new github user for us to nominate you to be a member of the core team. There is NO secret cabal of core IPython developers, only a community of engaged stakeholders, some of whom accept to shoulder a higher load :) Thanks for reading this rather long email. Don't hesitate to ask any questions (in this case, either list is fine :) f -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) 
fernando.perez-at-berkeley: contact me here for any direct mail From benjaminrk at gmail.com Thu Aug 1 15:09:32 2013 From: benjaminrk at gmail.com (MinRK) Date: Thu, 1 Aug 2013 12:09:32 -0700 Subject: [IPython-dev] diffing and merging? In-Reply-To: <51FA65A3.6060001@third-bit.com> References: <51FA65A3.6060001@third-bit.com> Message-ID: No, I am afraid we did not have time to get to diffing and merging notebooks during last week's planning meeting. -MinRK On Thu, Aug 1, 2013 at 6:41 AM, Greg Wilson wrote: > Hi everyone, > Was there any discussion of a notebook diff and merge tool during last > week's sprint? If so, I'd be grateful if someone could bring me up to > speed --- now that I'm converting Software Carpentry lessons to > notebooks, it has suddenly become my #1 need :-) > Thanks, > Greg > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Aug 1 15:16:39 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 1 Aug 2013 12:16:39 -0700 Subject: [IPython-dev] prose, notebooks, and thrashing (and a re-direct to another list) In-Reply-To: References: <51F6868D.1050903@third-bit.com> Message-ID: Hi Nitin, On Thu, Aug 1, 2013 at 10:30 AM, Nitin Borwankar wrote: > > I've just released a very early beta of LearnDataScience > (http://nborwankar.github.io/LearnDataScience) and I had to struggle with a > subset of these issues. I am not sure my experience is the relevant to you > but here's what I did. > The goal with my content is to teach developers data science so it's > possibly complementary to what you folks are doing. quick note: this looks great, but you should make the notebook titles on your main page and in the README.md be nbviewer links, just like for the Bayesian Methods book: http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/ That simple change will make the page far more useful immediately for users who may not know how to read notebooks. We have a simple script to auto-generate those links that you can copy and modify for your URL pattern in a minute: https://github.com/ipython/ipython/blob/master/tools/mknbindex.py Cheers, f -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From rgbkrk at gmail.com Thu Aug 1 15:30:49 2013 From: rgbkrk at gmail.com (Kyle Kelley) Date: Thu, 1 Aug 2013 14:30:49 -0500 Subject: [IPython-dev] Bookstore for IPython Notebooks Message-ID: Hey all, I put together a module called bookstore which saves IPython notebooks to OpenStack Swift clusters. It also has a notebook manager with simplified authentication for Rackspace specifically (subclass of SwiftNotebookManager). Bookstore only works against the current dev branch, but I'll clean up the installation documentation (and install itself) when IPython 1.0 releases. You can find the current version at: https://github.com/rgbkrk/bookstore I'm aiming to make a real release on PyPI to coincide with the official release of IPython 1.0. It will also get added as an option for the IPython Chef cookbook I've been building (https://github.com/rgbkrk/ipynb-cookbook). To be honest, I would be more than happy to accept pull requests that add in other 3rd party notebook managers there as well. 
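(For anyone who wants to try a third-party manager like this: the notebook server is pointed at an alternative manager class through its profile configuration. A rough sketch, with the bookstore import path guessed rather than taken from its documentation:

    # ipython_notebook_config.py in the profile used to start the notebook server
    c = get_config()

    # the dotted path below is an assumption -- check bookstore's README for
    # the real module/class name once it is released
    c.NotebookApp.notebook_manager_class = 'bookstore.SwiftNotebookManager'

Any credentials the manager needs would then be set as traits on that class in the same file.)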
This includes the currently included Azure notebook manager and could include others such as davidbrai's version for S3 (https://github.com/davidbrai/ipythonnb-s3). Let me know what you think! Thanks, Kyle Kelley -------------- next part -------------- An HTML attachment was scrubbed... URL: From nborwankar at gmail.com Fri Aug 2 00:48:54 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Thu, 1 Aug 2013 21:48:54 -0700 Subject: [IPython-dev] prose, notebooks, and thrashing (and a re-direct to another list) In-Reply-To: References: <51F6868D.1050903@third-bit.com> Message-ID: Fernando, Thanks very much, I've been very heads down for the past few months and out of touch with the "best practices" and tools emerging so rapidly around IPython. Interesting you mention Cam's book - I stole his CSS styling with his permission :-) Going and trying out the auto generate script shortly. Thanks for the tip and for your awesome project. Nitin ------------------------------------------------------------------ Nitin Borwankar nborwankar at gmail.com On Thu, Aug 1, 2013 at 12:16 PM, Fernando Perez wrote: > Hi Nitin, > > > On Thu, Aug 1, 2013 at 10:30 AM, Nitin Borwankar > wrote: > > > > I've just released a very early beta of LearnDataScience > > (http://nborwankar.github.io/LearnDataScience) and I had to struggle > with a > > subset of these issues. I am not sure my experience is the relevant to > you > > but here's what I did. > > The goal with my content is to teach developers data science so it's > > possibly complementary to what you folks are doing. > > quick note: this looks great, but you should make the notebook titles > on your main page and in the README.md be nbviewer links, just like > for the Bayesian Methods book: > > > http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/ > > That simple change will make the page far more useful immediately for > users who may not know how to read notebooks. > > We have a simple script to auto-generate those links that you can copy > and modify for your URL pattern in a minute: > > https://github.com/ipython/ipython/blob/master/tools/mknbindex.py > > Cheers, > > f > > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > fernando.perez-at-berkeley: contact me here for any direct mail > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Fri Aug 2 18:20:22 2013 From: benjaminrk at gmail.com (MinRK) Date: Fri, 2 Aug 2013 15:20:22 -0700 Subject: [IPython-dev] [ANN] IPython 1.0.0 Release Candidate Message-ID: After much fiddling and bugfixing, IPython 1.0.0 has its first release candidate: http://archive.ipython.org/testing/1.0.0 quickest way to test: pip install http://archive.ipython.org/testing/1.0.0/ipython-1.0.0-rc1.tar.gz If all goes smoothly, expect 1.0.0 final by the end of next week. -MinRK -------------- next part -------------- An HTML attachment was scrubbed... URL: From pi at berkeley.edu Fri Aug 2 19:09:26 2013 From: pi at berkeley.edu (Paul Ivanov) Date: Fri, 2 Aug 2013 16:09:26 -0700 Subject: [IPython-dev] [ANN] IPython 1.0.0 Release Candidate In-Reply-To: References: Message-ID: <20130802230926.GR6006@HbI-OTOH.berkeley.edu> Thanks for getting this out, Min! 
MinRK, on 2013-08-02 15:20, wrote: > quickest way to test: > pip install http://archive.ipython.org/testing/1.0.0/ipython-1.0.0-rc1.tar.gz and then run iptest Report any issues on github best, -- _ / \ A* \^ - ,./ _.`\\ / \ / ,--.S \/ \ / `"~,_ \ \ __o ? _ \<,_ /:\ --(_)/-(_)----.../ | \ --------------.......J Paul Ivanov http://pirsquared.org From benjaminrk at gmail.com Fri Aug 2 19:39:54 2013 From: benjaminrk at gmail.com (MinRK) Date: Fri, 2 Aug 2013 16:39:54 -0700 Subject: [IPython-dev] [IPython-User] [ANN] IPython 1.0.0 Release Candidate In-Reply-To: References: <20130802230926.GR6006@HbI-OTOH.berkeley.edu> Message-ID: To first order, pip and easy_install are the same, so yes, easy_install works just fine as well. On Fri, Aug 2, 2013 at 4:35 PM, Comer Duncan wrote: > Is there an easy_install option? I don't have pip installed anymore... > Or if I download the tarxz file how would I proceed to install? > > Comer > > > On Fri, Aug 2, 2013 at 7:09 PM, Paul Ivanov wrote: > >> Thanks for getting this out, Min! >> >> MinRK, on 2013-08-02 15:20, wrote: >> > quickest way to test: >> > pip install >> http://archive.ipython.org/testing/1.0.0/ipython-1.0.0-rc1.tar.gz >> >> and then run iptest >> >> Report any issues on github >> >> best, >> -- >> _ >> / \ >> A* \^ - >> ,./ _.`\\ / \ >> / ,--.S \/ \ >> / `"~,_ \ \ >> __o ? >> _ \<,_ /:\ >> --(_)/-(_)----.../ | \ >> --------------.......J >> Paul Ivanov >> http://pirsquared.org >> _______________________________________________ >> IPython-User mailing list >> IPython-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-user >> > > > _______________________________________________ > IPython-User mailing list > IPython-User at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pi at berkeley.edu Fri Aug 2 20:00:20 2013 From: pi at berkeley.edu (Paul Ivanov) Date: Fri, 2 Aug 2013 17:00:20 -0700 Subject: [IPython-dev] [IPython-User] [ANN] IPython 1.0.0 Release Candidate In-Reply-To: References: <20130802230926.GR6006@HbI-OTOH.berkeley.edu> Message-ID: <20130803000020.GS6006@HbI-OTOH.berkeley.edu> > On Fri, Aug 2, 2013 at 4:35 PM, Comer Duncan wrote: > > Is there an easy_install option? As Min said, you can just do: easy_install http://archive.ipython.org/testing/1.0.0/ipython-1.0.0-rc1.tar.gz > Or if I download the tarxz file how would I proceed to install? to install "by hand" just change into the directory with the decompressed files and do python setup.py install (you may need to either put sudo out front of that command, or specify the --user flag if you've got your path set up to include .local/bin) best, -- _ / \ A* \^ - ,./ _.`\\ / \ / ,--.S \/ \ / `"~,_ \ \ __o ? _ \<,_ /:\ --(_)/-(_)----.../ | \ --------------.......J Paul Ivanov http://pirsquared.org From pi at berkeley.edu Sun Aug 4 03:29:39 2013 From: pi at berkeley.edu (Paul Ivanov) Date: Sun, 4 Aug 2013 00:29:39 -0700 Subject: [IPython-dev] Marking PRs as [ not for 1.0 ] Message-ID: <20130804072939.GD6006@HbI-OTOH.berkeley.edu> Hey gang, I just went through and edited a bunch of open PRs so that the ones that aren't going into 1.0 are marked as such, with a "[ not for 1.0 ]" at the very beginning of the PR description. 
Since with the current GitHub interface we can't filter PRs, this at least makes it easier to see what needs to be acted upon for 1.0 in this view, since the first line of a PR is displayed there: https://github.com/ipython/ipython/pulls I had to "break" the TODO-style checkboxes by just triple quoting those sections, since they cause the PR summary to just read "(X of Y tasks remaining)" The ones I wasn't sure about: #3570 Handle raw html tags in markdown (nbconvert -> latex) #3631 xkcd mode for the IPython notebook best, -- _ / \ A* \^ - ,./ _.`\\ / \ / ,--.S \/ \ / `"~,_ \ \ __o ? _ \<,_ /:\ --(_)/-(_)----.../ | \ --------------.......J Paul Ivanov http://pirsquared.org From juanlu001 at gmail.com Sun Aug 4 06:13:21 2013 From: juanlu001 at gmail.com (Juan Luis Cano) Date: Sun, 04 Aug 2013 12:13:21 +0200 Subject: [IPython-dev] New video of IPython notebook Message-ID: <51FE2941.7010201@gmail.com> Hello everyone, First of all, congratulations to the team for all the hard work put in this 1.0 release: we're all very excited about it and I'm sure it's going to be the beginning of a big story. Keep rocking! :) I am the author of the video demo that Fernando kindly featured on the IPython notebook page[1] (thanks F!), which apparently has received quite a few views (~15000 in a bit more than 5 months). I was thinking of recording a new one with the 1.0 version (I'm already trying it and it works like a charm so far), so I would like to ask the team if there are any interesting features worth including or changes that should be made. For example, digging into the docs I just saw that a new %matplotlib magic was included, which IIUC is the preferred method to produce inline figures (instead of the %pylab mode). [1]: http://ipython.org/notebook Because of some time constraints I won't be able to have it by the time you make the release but anyway I would like to hear some feedback from the community :) Otherwise I will make a new video with a similar structure, in full English, maybe briefly show nbconvert and little more. And probably keep it shorter than 5 minutes and with some chiptune background music ;) Best regards, Juan Luis Cano From lists at hilboll.de Sun Aug 4 10:17:38 2013 From: lists at hilboll.de (Andreas Hilboll) Date: Sun, 04 Aug 2013 16:17:38 +0200 Subject: [IPython-dev] Using IPython Cluster with SGE -- help needed Message-ID: <51FE6282.7090104@hilboll.de> Hi, I would like to use IPython for calculations on our cluster. It's a total of 11 compute + 1 management nodes (all running Linux), and we're using SGE's qsub to submit jobs. The $HOME directory is shared via NFS between all the nodes. Even after reading the documentation, I'm unsure about how to get things running. I assume that I'll have to execute ``ipcluster -n 16`` on all compute nodes (they have 16 cores each). I'd have the ipython shell (notebook won't work due to firewall restrictions I cannot change) on the management node. But how does the management node know about the kernels which are running on the compute nodes and waiting for a job? And how can I tell the management node that it shall use qsub to submit the jobs to the individual kernels? As I think this is a common use case, I'd be willing to write up a nice tutorial about the setup, but I fear I need some help from you guys to get things running ... Cheers, -- Andreas.
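The usual shape of the answer, sketched here from memory of the IPython 1.0 parallel documentation (class and option names should be checked there), is to create a parallel profile, tell ipcluster to launch the controller and the engines through SGE, and start everything from the management node:

    # ~/.ipython/profile_sge/ipcluster_config.py, after running
    #   ipython profile create --parallel --profile=sge
    c = get_config()

    # submit both the controller and the engines via qsub; these launcher
    # classes live in IPython.parallel.apps.launcher, and recent versions
    # also accept the shorthand 'SGE'
    c.IPClusterStart.controller_launcher_class = 'SGEControllerLauncher'
    c.IPClusterEngines.engine_launcher_class = 'SGEEngineSetLauncher'

    # then, on the management node:
    #   ipcluster start --profile=sge -n 176    # 11 nodes x 16 cores
    # and connect from an IPython session with Client(profile='sge')

With that in place, qsub is only used to start long-running engines; work is sent to them from the client session, as the replies below describe.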
From matthieu.brucher at gmail.com Sun Aug 4 10:20:02 2013 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 4 Aug 2013 16:20:02 +0200 Subject: [IPython-dev] Using IPython Cluster with SGE -- help needed In-Reply-To: <51FE6282.7090104@hilboll.de> References: <51FE6282.7090104@hilboll.de> Message-ID: Hi, I guess we may want to start with the ipython documentation on this topic: http://ipython.org/ipython-doc/stable/parallel/parallel_process.html Cheers, 2013/8/4 Andreas Hilboll : > Hi, > > I would like to use IPython for calculations on our cluster. It's a > total of 11 compute + 1 management nodes (all running Linux), and we're > using SGE's qsub to submit jobs. The $HOME directory is shared via NFS > between all the nodes. > > Even after reading the documentation, I'm unsure about how to get things > running. I assume that I'll have to execute ``?pcluster -n 16`` on all > compute nodes (they have 16 cores each). I'd have the ipython shell > (notebook won't work due to firewall restrictions I cannot change) on > the management node. But how does the management node know about the > kernels which are running on the compute nodes and waiting for a job? > And how can I tell the management node that it shall use qsub to submit > the jobs to the individual kernels? > > As I think this is a common use case, I'd be willing to write up a nice > tutorial about the setup, but I fear I need some help from you guys to > get things running ... > > Cheers, > > -- Andreas. > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From lists at hilboll.de Sun Aug 4 10:29:48 2013 From: lists at hilboll.de (Andreas Hilboll) Date: Sun, 04 Aug 2013 16:29:48 +0200 Subject: [IPython-dev] Using IPython Cluster with SGE -- help needed In-Reply-To: References: <51FE6282.7090104@hilboll.de> Message-ID: <51FE655C.2070909@hilboll.de> Am 04.08.2013 16:20, schrieb Matthieu Brucher: > Hi, > > I guess we may want to start with the ipython documentation on this > topic: http://ipython.org/ipython-doc/stable/parallel/parallel_process.html > > Cheers, > > 2013/8/4 Andreas Hilboll : >> Hi, >> >> I would like to use IPython for calculations on our cluster. It's a >> total of 11 compute + 1 management nodes (all running Linux), and we're >> using SGE's qsub to submit jobs. The $HOME directory is shared via NFS >> between all the nodes. >> >> Even after reading the documentation, I'm unsure about how to get things >> running. I assume that I'll have to execute ``?pcluster -n 16`` on all >> compute nodes (they have 16 cores each). I'd have the ipython shell >> (notebook won't work due to firewall restrictions I cannot change) on >> the management node. But how does the management node know about the >> kernels which are running on the compute nodes and waiting for a job? >> And how can I tell the management node that it shall use qsub to submit >> the jobs to the individual kernels? >> >> As I think this is a common use case, I'd be willing to write up a nice >> tutorial about the setup, but I fear I need some help from you guys to >> get things running ... >> >> Cheers, >> >> -- Andreas. 
>> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev Thanks, Matthieu, not sure how I missed that :/ Is there a way to "re-use" the ipengines on the nodes, like using qsub to "inject" a job into a (idle) running ipengine? That way the sometimes longish startup times of the Python interpreter on shared network filesystems could be avoided. Something like 1. start idling ipengines on all cores 2. use qsub to inject jobs into these 3. "cleanup" the ipengine after the job is done, making it ready for another job. Cheers, -- Andreas. From dave.hirschfeld at gmail.com Sun Aug 4 12:06:09 2013 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Sun, 4 Aug 2013 16:06:09 +0000 (UTC) Subject: [IPython-dev] IPython Parallel KeyError Message-ID: > > I've updated to the latest IPython and I'm now seeing a KeyError when trying to instantiate a parallel Client as neither `key` nor `signature_scheme` are in the `extra_args` dict. Is this a bug? I'm sure this config has worked previously. In [4]: from IPython.parallel import Client rc = Client(profile='mygrid') --------------------------------------------------------------------------- KeyError Traceback (most recent call last) in () 1 from IPython.parallel import Client ---> 2 rc = Client(profile='mygrid') c:\dev\code\ipython\IPython\parallel\client\client.pyc in __init__(self, url_file, profile, profile_dir, ipython_dir, context, debug, sshserver, sshkey, password, paramiko, timeout, cluster_id, **extra_args) 463 extra_args['packer'] = cfg['pack'] 464 extra_args['unpacker'] = cfg['unpack'] -> 465 extra_args['key'] = cast_bytes(cfg['key']) 466 extra_args['signature_scheme'] = cfg['signature_scheme'] 467 KeyError: 'key' In [5]: %debug > c:\dev\code\ipython\ipython\parallel\client\client.py(465)__init__() 464 extra_args['unpacker'] = cfg['unpack'] -> 465 extra_args['key'] = cast_bytes(cfg['key']) 466 extra_args['signature_scheme'] = cfg['signature_scheme'] ipdb> extra_args.keys() ['packer', 'unpacker'] ipdb> exit In [7]: from IPython import sys_info print sys_info() {'codename': 'An Afternoon Hack', 'commit_hash': 'c2cabd0', 'commit_source': 'repository', 'default_encoding': 'cp1252', 'ipython_path': 'c:\\dev\\code\\ipython\\IPython', 'ipython_version': '1.0.0-dev', 'os_name': 'nt', 'platform': 'Windows-2008ServerR2-6.1.7601-SP1', 'sys_executable': 'C:\\dev\\bin\\Python27\\python.exe', 'sys_platform': 'win32', 'sys_version': '2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)]'} In case it helps the ipcontroller-client.json file is shown below (with values blanked out): { "control": XXXX, "task": XXXX, "notification": XXXX, "exec_key": "XXXX", "task_scheme": "leastload", "mux": XXXX, "iopub": XXXX, "ssh": "", "registration": XXXX, "interface": "XXXX", "pack": "json", "unpack": "json", "location": "XXXX" } From gregor.thalhammer at gmail.com Sun Aug 4 14:46:11 2013 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Sun, 4 Aug 2013 20:46:11 +0200 Subject: [IPython-dev] nbconvert to latex fails on notebooks with spaces in file name Message-ID: <9CE66EFF-4B07-4699-9575-D14CFF9C02FD@gmail.com> Dear all, first, thanks for the great work on ipython, I enjoy using the ipython notebook every day. With latest ipython (from git) I discovered a problem when converting a notebook with spaces in the file name and containing inline figures to latex and further to pdf. 
pdflatex chokes on the \includegraphics command if the image file name, which is derived from the notebook file name, contains spaces. As a possible fix I found that adding '\usepackage{grffile}' (after \usepackage{graphicx}) to the tex file solves this problem, at least when using pdflatex, see http://www.ctan.org/pkg/grffile Gregor From pi at berkeley.edu Sun Aug 4 16:25:22 2013 From: pi at berkeley.edu (Paul Ivanov) Date: Sun, 4 Aug 2013 13:25:22 -0700 Subject: [IPython-dev] nbconvert to latex fails on notebooks with spaces in file name In-Reply-To: <9CE66EFF-4B07-4699-9575-D14CFF9C02FD@gmail.com> References: <9CE66EFF-4B07-4699-9575-D14CFF9C02FD@gmail.com> Message-ID: <20130804202522.GG6006@HbI-OTOH.berkeley.edu> Hi Gregor, Thanks, this was a well-phrased bug report, and this issue's already in our bug tracker: https://github.com/ipython/ipython/issues/3774 There is already a pull request addressing this: https://github.com/ipython/ipython/issues/3898 best, -- _ / \ A* \^ - ,./ _.`\\ / \ / ,--.S \/ \ / `"~,_ \ \ __o ? _ \<,_ /:\ --(_)/-(_)----.../ | \ --------------.......J Paul Ivanov http://pirsquared.org From asmeurer at gmail.com Sun Aug 4 17:50:48 2013 From: asmeurer at gmail.com (Aaron Meurer) Date: Sun, 4 Aug 2013 15:50:48 -0600 Subject: [IPython-dev] Marking PRs as [ not for 1.0 ] In-Reply-To: <20130804072939.GD6006@HbI-OTOH.berkeley.edu> References: <20130804072939.GD6006@HbI-OTOH.berkeley.edu> Message-ID: On Sun, Aug 4, 2013 at 1:29 AM, Paul Ivanov wrote: > Hey gang, > > I just went through and edited a bunch of open PRs so that the ones that > aren't going into 1.0 are marked as such, with a "[ not for 1.0 ]" at > the very beginning of the PR description. > > Since with the current GitHub interface we can't filter PRs, this at > least makes it easier to see what needs to be acted upon for 1.0 in this > view, since the first line of a PR is displayed there: > https://github.com/ipython/ipython/pulls You can use milestones and filter them with the issues interface. Aaron Meurer > > I had to "break" the TODO-style checkboxes by just triple quoting those > sections, since they cause the PR summary to just read "(X of Y tasks > remaining)" > > The ones I wasn't sure about: > > #3570 Handle raw html tags in markdown (nbconvert -> latex) > > #3631 xkcd mode for the IPython notebook > > best, > -- > _ > / \ > A* \^ - > ,./ _.`\\ / \ > / ,--.S \/ \ > / `"~,_ \ \ > __o ? > _ \<,_ /:\ > --(_)/-(_)----.../ | \ > --------------.......J > Paul Ivanov > http://pirsquared.org > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From matthieu.brucher at gmail.com Sun Aug 4 18:17:13 2013 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 5 Aug 2013 00:17:13 +0200 Subject: [IPython-dev] Using IPython Cluster with SGE -- help needed In-Reply-To: <51FE655C.2070909@hilboll.de> References: <51FE6282.7090104@hilboll.de> <51FE655C.2070909@hilboll.de> Message-ID: Once the job is submitted, the client kernels are running. You can't use qsub anymore, it's ipython's job to feed work to the kernels. I think it already behaves that way.
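Concretely, once the engines are up, submission happens from the interactive session rather than through qsub. A minimal sketch, assuming a parallel profile named 'sge' whose controller is already running:

    from IPython.parallel import Client

    rc = Client(profile='sge')        # reads the controller's connection file
    print("%d engines registered" % len(rc.ids))

    lview = rc.load_balanced_view()   # schedules over whichever engines are free

    def square(x):
        return x * x

    ar = lview.map_async(square, range(100))  # engines stay resident between calls
    print(ar.get())                           # block until all results are back

Because the engines are reused for every subsequent call, the interpreter start-up cost on the shared filesystem is only paid once.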
Le 4 ao?t 2013 16:29, "Andreas Hilboll" a ?crit : > Am 04.08.2013 16:20, schrieb Matthieu Brucher: > > Hi, > > > > I guess we may want to start with the ipython documentation on this > > topic: > http://ipython.org/ipython-doc/stable/parallel/parallel_process.html > > > > Cheers, > > > > 2013/8/4 Andreas Hilboll : > >> Hi, > >> > >> I would like to use IPython for calculations on our cluster. It's a > >> total of 11 compute + 1 management nodes (all running Linux), and we're > >> using SGE's qsub to submit jobs. The $HOME directory is shared via NFS > >> between all the nodes. > >> > >> Even after reading the documentation, I'm unsure about how to get things > >> running. I assume that I'll have to execute ``?pcluster -n 16`` on all > >> compute nodes (they have 16 cores each). I'd have the ipython shell > >> (notebook won't work due to firewall restrictions I cannot change) on > >> the management node. But how does the management node know about the > >> kernels which are running on the compute nodes and waiting for a job? > >> And how can I tell the management node that it shall use qsub to submit > >> the jobs to the individual kernels? > >> > >> As I think this is a common use case, I'd be willing to write up a nice > >> tutorial about the setup, but I fear I need some help from you guys to > >> get things running ... > >> > >> Cheers, > >> > >> -- Andreas. > >> _______________________________________________ > >> IPython-dev mailing list > >> IPython-dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > Thanks, Matthieu, > > not sure how I missed that :/ Is there a way to "re-use" the ipengines > on the nodes, like using qsub to "inject" a job into a (idle) running > ipengine? That way the sometimes longish startup times of the Python > interpreter on shared network filesystems could be avoided. Something like > > 1. start idling ipengines on all cores > 2. use qsub to inject jobs into these > 3. "cleanup" the ipengine after the job is done, making it ready for > another job. > > Cheers, > > -- Andreas. > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Sun Aug 4 18:46:13 2013 From: benjaminrk at gmail.com (MinRK) Date: Sun, 4 Aug 2013 15:46:13 -0700 Subject: [IPython-dev] IPython Parallel KeyError In-Reply-To: References: Message-ID: Sounds like a version mismatch - make sure you don't have multiple versions of IPython installed. Current IPython will not write a connection file without the 'key' or 'signature_scheme' keys. On Sun, Aug 4, 2013 at 9:06 AM, Dave Hirschfeld wrote: > > > > > > I've updated to the latest IPython and I'm now seeing a KeyError when > trying > to instantiate a parallel Client as neither `key` nor `signature_scheme` > are > in the `extra_args` dict. > > Is this a bug? I'm sure this config has worked previously. 
> > > In [4]: from IPython.parallel import Client > rc = Client(profile='mygrid') > > --------------------------------------------------------------------------- > KeyError Traceback (most recent call last) > in () > 1 from IPython.parallel import Client > ---> 2 rc = Client(profile='mygrid') > > c:\dev\code\ipython\IPython\parallel\client\client.pyc in __init__(self, > url_file, profile, profile_dir, ipython_dir, context, debug, sshserver, > sshkey, password, paramiko, timeout, cluster_id, **extra_args) > 463 extra_args['packer'] = cfg['pack'] > 464 extra_args['unpacker'] = cfg['unpack'] > -> 465 extra_args['key'] = cast_bytes(cfg['key']) > 466 extra_args['signature_scheme'] = cfg['signature_scheme'] > 467 > > KeyError: 'key' > > > In [5]: %debug > > > c:\dev\code\ipython\ipython\parallel\client\client.py(465)__init__() > 464 extra_args['unpacker'] = cfg['unpack'] > -> 465 extra_args['key'] = cast_bytes(cfg['key']) > 466 extra_args['signature_scheme'] = cfg['signature_scheme'] > > ipdb> extra_args.keys() > ['packer', 'unpacker'] > ipdb> exit > > > In [7]: from IPython import sys_info > print sys_info() > > {'codename': 'An Afternoon Hack', > 'commit_hash': 'c2cabd0', > 'commit_source': 'repository', > 'default_encoding': 'cp1252', > 'ipython_path': 'c:\\dev\\code\\ipython\\IPython', > 'ipython_version': '1.0.0-dev', > 'os_name': 'nt', > 'platform': 'Windows-2008ServerR2-6.1.7601-SP1', > 'sys_executable': 'C:\\dev\\bin\\Python27\\python.exe', > 'sys_platform': 'win32', > 'sys_version': '2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit > (AMD64)]'} > > > In case it helps the ipcontroller-client.json file is shown below (with > values blanked out): > > { > "control": XXXX, > "task": XXXX, > "notification": XXXX, > "exec_key": "XXXX", > "task_scheme": "leastload", > "mux": XXXX, > "iopub": XXXX, > "ssh": "", > "registration": XXXX, > "interface": "XXXX", > "pack": "json", > "unpack": "json", > "location": "XXXX" > } > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From asmeurer at gmail.com Sun Aug 4 19:34:58 2013 From: asmeurer at gmail.com (Aaron Meurer) Date: Sun, 4 Aug 2013 17:34:58 -0600 Subject: [IPython-dev] Treating Python 3 as a first-class citizen Message-ID: I'm sending this email to the lists of three projects that I am involved with that have this issue. Please reply-all to all the lists you are on. Feel free to add in any other lists that have the same issue. As many of you may know, SymPy recently converted its codebase to a single code base for Python 2 and Python 3. This, along with some of the work I've done at Continuum with the conda package manager this summer, has gotten me thinking about how we treat Python 3 when we install our software. In particular, we tend to install entry points that have a 3 suffix. So for example, IPython installs ipython3. PuDB installs pudb3 (this one was entirely my contribution, but I didn't know any better at the time). SymPy is considering installing an isympy3. I think this is bad for anyone who wants to use Python 3, because if they install IPython (for example), they won't get "ipython", but rather "ipython3". This makes Python 3, and anything installed in it, a second class citizen, because the default "ipython" is always pointing to Python 2.
Now, the reason that this was done is that the typical installation uses a shared bin directory (generally /usr/bin/), so if you wanted to install both Python 2 and Python 3 versions of the software, the entry point in bin would be whatever was installed most recently. If you use something like conda environments or virtualenvs, this issue doesn't present itself, because you can only have one Python installed in an environment at a time. In that situation, it is really annoying to install IPython and not have "ipython" (it can also be confusing, because if you prepend the environment to your PATH, typing "ipython" will still point to some other Python 2 IPython installed somewhere else). It's also annoying if you want to use only Python 3. So I'm wondering what people think we should do about this. I definitely think that it should be possible to install "ipython", "pudb", and "isympy" entry points that point to Python 3. But should that be the default? Should we keep the 3 suffixed versions for backwards compatibility? Aaron Meurer From ondrej.certik at gmail.com Sun Aug 4 20:11:08 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Sun, 4 Aug 2013 18:11:08 -0600 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: On Sun, Aug 4, 2013 at 5:34 PM, Aaron Meurer wrote: > I'm sending this email to the lists of three projects that I am > involved with that have this issue. Please reply-all to all the lists > you are on. Feel free to add in any other lists that have the same > issue. > > As many of you may know, SymPy recently converted its codebase to a > single code base for Python 2 and Python 3. This, along with some of > the work I've done at Continuum with the conda package manager this > summer, has gotten me thinking about how we treat Python 3 when we > install our software. > > In particular, we tend to install entry points that have a 3 suffix. > So for example, IPython installs ipython3. PuDB installs pudb3 (this > one was entirely my contribution, but I didn't know any better at the > time). SymPy is considering installing an isympy3. > > I think this is bad for anyone who wants to use Python 3, because if > they install IPython (for example), they won't get "ipython", but > rather "ipython3". This makes Python 3, and anything installed in it, > a second class citizen, because the default "ipython" is always > pointing to Python 2. > > Now, the reason that this was done is that the typical installation > uses a shared bin directory (generally /usr/bin/), so if you wanted to > install both Python 2 and Python 3 versions of the software, the entry > point in bin would be whatever was installed most recently. > > If you use something like conda environments or virtualenvs, this > issue doesn't present itself, because you can only have one Python > installed in an environment at a time. In that situation, it is really > annoying to install IPython and not have "ipython" (it can also be > confusing, because if you prepend the environment to your PATH, typing > "ipython" will still point to some other Python 2 IPython installed > somewhere else). It's also annoying if you want to use only Python 3. 
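For concreteness, the kind of entry-point stanza under discussion looks roughly like the following (names are illustrative, not copied from IPython, PuDB or SymPy's actual setup.py; with setuptools console_scripts, the generated wrapper always runs under whichever interpreter executed the install):

    from setuptools import setup

    setup(
        name='exampletool',
        version='0.1',
        packages=['exampletool'],
        entry_points={
            'console_scripts': [
                # unsuffixed: bound to the installing interpreter, be it 2 or 3
                'exampletool = exampletool.main:main',
                # the "3"-suffixed variant whose role is being debated
                'exampletool3 = exampletool.main:main',
            ],
        },
    )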
The standard way that Python is installed in Debian/Ubuntu is that you have python3.2, python2.7, python2.6, python2.5, ..., and then you have "python", which is just a symlink, on my system it is: ondrej at hawk:~$ ll /usr/bin/python lrwxrwxrwx 1 root root 9 Jun 18 11:26 /usr/bin/python -> python2.7* As such, I think the "setup.py" install should simply install just one ipython (or isympy, pudb), which uses whatever Python it is installed into. I would treat Python 3.2 or 3.3 exactly like 2.7 or 2.5 ---- just another version of Python. In particular, if we agree to treat Python 3.2 or 3.3 just like 2.5 or 2.6, then if you use python 2.6 to call "setup.py", then isympy should really be using this very same Python 2.6, not any other 2.x. The reason is, that for ipython you install all the dependencies into Python 2.6, so you can't then call other Python. Distributions like Debian then simply install it couple times for each Python, or create the proper symlinks themselves. > So I'm wondering what people think we should do about this. I > definitely think that it should be possible to install "ipython", > "pudb", and "isympy" entry points that point to Python 3. But should > that be the default? Based on the above, if you use python 3.3 to run "setup.py", then ipython/pudb/isympy should use Python 3.3. If you use Python 2.7, it should use Python 2.7. And if you use Python 2.6, you should use Python 2.6. > Should we keep the 3 suffixed versions for > backwards compatibility? I wouldn't. Alternatively, we can create isympy2.6 and then isympy would link to this (and the same for other Python versions), but I would probably leave this for the Debian (and other) distribution to handle. Ondrej From pi at berkeley.edu Sun Aug 4 21:29:54 2013 From: pi at berkeley.edu (Paul Ivanov) Date: Sun, 4 Aug 2013 18:29:54 -0700 Subject: [IPython-dev] Marking PRs as [ not for 1.0 ] In-Reply-To: References: <20130804072939.GD6006@HbI-OTOH.berkeley.edu> Message-ID: <20130805012954.GA29763@HbI-OTOH.berkeley.edu> Aaron Meurer, on 2013-08-04 15:50, wrote: > You can use milestones and filter them with the issues interface. Yes, but there you have to see both issues and PRs. Also, I think with the way we're using milestones, only issues should have milestones attached to them, Min can correct me if I'm wrong about that. -- _ / \ A* \^ - ,./ _.`\\ / \ / ,--.S \/ \ / `"~,_ \ \ __o ? _ \<,_ /:\ --(_)/-(_)----.../ | \ --------------.......J Paul Ivanov http://pirsquared.org From rgbkrk at gmail.com Sun Aug 4 22:33:51 2013 From: rgbkrk at gmail.com (Kyle Kelley) Date: Sun, 4 Aug 2013 21:33:51 -0500 Subject: [IPython-dev] IPython Notebook Multiple Checkpoints Message-ID: Hey all, I added checkpoints to the notebook manager I've been working on: [image: Inline image 1] However, I'm having trouble with multiple checkpoints. Single checkpoints work beautifully but multiples are downright broken. It seems like my last checkpoint reigns supreme. Multiple checkpoints get loaded when opening up a previously saved notebook, but any new checkpoints write over the top of the displayed list (still saved properly on the backend though). What am I missing from the current implementation and documentation that I should be looking to? I copied the formatting of the returned structure needed from filenbmanager.py -- (list containing dictionaries with fields "checkpoint_id" and "last_modified").? The new multi dir sexiness in https://github.com/ipython/ipython/pull/3619 is doing the same thing. Has anyone tried out multiple checkpoints? 
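For reference, the structure the web frontend expects back from the checkpoint listing is just a list of dicts; a sketch of the idea, with the method signature approximated from the 1.0-era FileNotebookManager (it has been changing on master, so verify against the version you subclass):

    from datetime import datetime

    # inside a custom NotebookManager subclass
    def list_checkpoints(self, notebook_id):
        # one dict per stored checkpoint for this notebook
        return [
            {
                'checkpoint_id': 'checkpoint-001',               # any stable string id
                'last_modified': datetime(2013, 8, 4, 20, 55),   # a datetime, not a string
            },
        ]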
-- Kyle -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2013-08-04 at 8.55.27 PM.png Type: image/png Size: 23401 bytes Desc: not available URL: From benjaminrk at gmail.com Mon Aug 5 01:09:24 2013 From: benjaminrk at gmail.com (MinRK) Date: Sun, 4 Aug 2013 22:09:24 -0700 Subject: [IPython-dev] Marking PRs as [ not for 1.0 ] In-Reply-To: <20130805012954.GA29763@HbI-OTOH.berkeley.edu> References: <20130804072939.GD6006@HbI-OTOH.berkeley.edu> <20130805012954.GA29763@HbI-OTOH.berkeley.edu> Message-ID: Last release we started using milestones on PRs for exactly this - we don't do it during most of the release cycle though. On Sun, Aug 4, 2013 at 6:29 PM, Paul Ivanov wrote: > Aaron Meurer, on 2013-08-04 15:50, wrote: > > You can use milestones and filter them with the issues interface. > > Yes, but there you have to see both issues and PRs. > > Also, I think with the way we're using milestones, only issues > should have milestones attached to them, Min can correct me if > I'm wrong about that. > -- > _ > / \ > A* \^ - > ,./ _.`\\ / \ > / ,--.S \/ \ > / `"~,_ \ \ > __o ? > _ \<,_ /:\ > --(_)/-(_)----.../ | \ > --------------.......J > Paul Ivanov > http://pirsquared.org > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Mon Aug 5 01:12:42 2013 From: benjaminrk at gmail.com (MinRK) Date: Sun, 4 Aug 2013 22:12:42 -0700 Subject: [IPython-dev] IPython Notebook Multiple Checkpoints In-Reply-To: References: Message-ID: It's probably bugs in the javascript for some combination / subset of listing / selecting / updating checkpoints. If you point me to your repo I will look into it, since I haven't had a multi-checkpoint backend to test against. On Sun, Aug 4, 2013 at 7:33 PM, Kyle Kelley wrote: > Hey all, > > I added checkpoints to the notebook manager I've been working on: > > [image: Inline image 1] > > However, I'm having trouble with multiple checkpoints. Single checkpoints > work beautifully but multiples are downright broken. It seems like my last > checkpoint reigns supreme. Multiple checkpoints get loaded when opening up > a previously saved notebook, but any new checkpoints write over the top of > the displayed list (still saved properly on the backend though). > > What am I missing from the current implementation and documentation that I > should be looking to? > > I copied the formatting of the returned structure needed from > filenbmanager.py -- (list containing dictionaries with fields > "checkpoint_id" and "last_modified").? The new multi dir sexiness in > https://github.com/ipython/ipython/pull/3619 is doing the same thing. Has > anyone tried out multiple checkpoints? > > -- Kyle > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screen Shot 2013-08-04 at 8.55.27 PM.png Type: image/png Size: 23401 bytes Desc: not available URL: From bussonniermatthias at gmail.com Mon Aug 5 01:51:29 2013 From: bussonniermatthias at gmail.com (Matthias Bussonnier) Date: Mon, 5 Aug 2013 07:51:29 +0200 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: For what it is worth, I have already come across a question on Stack Overflow where "ipython" was launching Python 3 and "ipython2" was used to launch IPython with Python 2.x. I think it makes sense to have the non-suffixed version run with the system default Python. Though it might also make it harder for us to distinguish between people running the package on Python 2 and Python 3, and make bug reports more confusing. For IPython itself, the code base is not yet 2-and-3 compatible (we are moving towards it), so Python 3 is not yet a first-class citizen. But we are doing our best, and once this is done and we have more core devs running on Python 3, this will get fixed faster. (Should we say the same about Windows??) Short reply from my phone. -- Matt On 5 August 2013 at 02:11, Ondřej Čertík wrote: > On Sun, Aug 4, 2013 at 5:34 PM, Aaron Meurer wrote: >> I'm sending this email to the lists of three projects that I am >> involved with that have this issue. Please reply-all to all the lists >> you are on. Feel free to add in any other lists that have the same >> issue. >> >> As many of you may know, SymPy recently converted its codebase to a >> single code base for Python 2 and Python 3. This, along with some of >> the work I've done at Continuum with the conda package manager this >> summer, has gotten me thinking about how we treat Python 3 when we >> install our software. >> >> In particular, we tend to install entry points that have a 3 suffix. >> So for example, IPython installs ipython3. PuDB installs pudb3 (this >> one was entirely my contribution, but I didn't know any better at the >> time). SymPy is considering installing an isympy3. >> >> I think this is bad for anyone who wants to use Python 3, because if >> they install IPython (for example), they won't get "ipython", but >> rather "ipython3". This makes Python 3, and anything installed in it, >> a second class citizen, because the default "ipython" is always >> pointing to Python 2. >> >> Now, the reason that this was done is that the typical installation >> uses a shared bin directory (generally /usr/bin/), so if you wanted to >> install both Python 2 and Python 3 versions of the software, the entry >> point in bin would be whatever was installed most recently. >> >> If you use something like conda environments or virtualenvs, this >> issue doesn't present itself, because you can only have one Python >> installed in an environment at a time. In that situation, it is really >> annoying to install IPython and not have "ipython" (it can also be >> confusing, because if you prepend the environment to your PATH, typing >> "ipython" will still point to some other Python 2 IPython installed >> somewhere else). It's also annoying if you want to use only Python 3.
> > The standard way that Python is installed in Debian/Ubuntu is that > you have python3.2, python2.7, python2.6, python2.5, ..., and then you have > "python", which is just a symlink, on my system it is: > > ondrej at hawk:~$ ll /usr/bin/python > lrwxrwxrwx 1 root root 9 Jun 18 11:26 /usr/bin/python -> python2.7* > > > As such, I think the "setup.py" install should simply install just one > ipython (or isympy, pudb), > which uses whatever Python it is installed into. I would treat Python > 3.2 or 3.3 exactly like > 2.7 or 2.5 ---- just another version of Python. > In particular, if we agree to treat Python 3.2 or 3.3 just like 2.5 or > 2.6, then if you use python 2.6 > to call "setup.py", then isympy should really be using this very same > Python 2.6, not any other 2.x. > The reason is, that for ipython you install all the dependencies into > Python 2.6, so you can't then > call other Python. > > Distributions like Debian then simply install it couple times for each > Python, or create the proper > symlinks themselves. > >> So I'm wondering what people think we should do about this. I >> definitely think that it should be possible to install "ipython", >> "pudb", and "isympy" entry points that point to Python 3. But should >> that be the default? > > Based on the above, if you use python 3.3 to run "setup.py", then > ipython/pudb/isympy should > use Python 3.3. If you use Python 2.7, it should use Python 2.7. And > if you use Python 2.6, > you should use Python 2.6. > >> Should we keep the 3 suffixed versions for >> backwards compatibility? > > I wouldn't. Alternatively, we can create isympy2.6 and then isympy > would link to this (and the same > for other Python versions), but I would probably leave this for the > Debian (and other) distribution to handle. > > Ondrej > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From dave.hirschfeld at gmail.com Mon Aug 5 03:40:27 2013 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Mon, 5 Aug 2013 07:40:27 +0000 (UTC) Subject: [IPython-dev] IPython Parallel KeyError References: Message-ID: MinRK gmail.com> writes: > Sounds like a version mismatch - make sure you don't have multiple versions of IPython installed. ?Current IPython will not write a connection file without the 'key' or 'signature_scheme' keys. > > On Sun, Aug 4, 2013 at 9:06 AM, Dave Hirschfeld gmail.com> wrote:> > > > I've updated to the latest IPython and I'm now seeing a KeyError when trying > to instantiate a parallel Client as neither `key` nor `signature_scheme` are > in the `extra_args` dict. Pretty sure I've just got the one IPython. The connection file was written by an older version of IPython than what we're running now so it sounds like I'll just have to recreate them? Will give that a go... Thanks, Dave From benjaminrk at gmail.com Mon Aug 5 04:47:25 2013 From: benjaminrk at gmail.com (Min RK) Date: Mon, 5 Aug 2013 01:47:25 -0700 Subject: [IPython-dev] IPython Parallel KeyError In-Reply-To: References: Message-ID: <34DFA9BD-4494-4DCB-96B4-DCF05171AD0D@gmail.com> Ah yes, I bet it's a bug in the --reuse flag that's not writing the new info to the file. if that's the case, rove old connection files and it should be fine. -MinRK On Aug 5, 2013, at 0:40, Dave Hirschfeld wrote: > MinRK gmail.com> writes: > >> Sounds like a version mismatch - make sure you don't have multiple versions > of IPython installed. 
Current IPython will not write a connection file > without the 'key' or 'signature_scheme' keys. >> >> On Sun, Aug 4, 2013 at 9:06 AM, Dave Hirschfeld > gmail.com> wrote:> >> I've updated to the latest IPython and I'm now seeing a KeyError when trying >> to instantiate a parallel Client as neither `key` nor `signature_scheme` are >> in the `extra_args` dict. > > Pretty sure I've just got the one IPython. The connection file was written by > an older version of IPython than what we're running now so it sounds like I'll > just have to recreate them? Will give that a go... > > Thanks, > Dave > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev
From dave.hirschfeld at gmail.com Mon Aug 5 05:12:01 2013 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Mon, 5 Aug 2013 09:12:01 +0000 (UTC) Subject: [IPython-dev] IPython Parallel KeyError References: <34DFA9BD-4494-4DCB-96B4-DCF05171AD0D@gmail.com> Message-ID: Min RK gmail.com> writes: > > Ah yes, I bet it's a bug in the --reuse flag that's not writing the new info to the file. if that's the case, rove > old connection files and it should be fine. > > -MinRK > > On Aug 5, 2013, at 0:40, Dave Hirschfeld gmail.com> wrote: > > > MinRK gmail.com> writes: > > > >> Sounds like a version mismatch - make sure you don't have multiple versions > > of IPython installed. Current IPython will not write a connection file > > without the 'key' or 'signature_scheme' keys. > >> > >> On Sun, Aug 4, 2013 at 9:06 AM, Dave Hirschfeld > > gmail.com> wrote:> > >> I've updated to the latest IPython and I'm now seeing a KeyError when trying > >> to instantiate a parallel Client as neither `key` nor `signature_scheme` are > >> in the `extra_args` dict. > > > > Pretty sure I've just got the one IPython. The connection file was written by > > an older version of IPython than what we're running now so it sounds like I'll > > just have to recreate them? Will give that a go...
> > > > Thanks, > > Dave > > It's easy enough to fix (if you know what your're doing) - just change `exec_key` to `key` and specify the `signature_scheme` as `hmac-md5`. In case you think it's worthwhile having it Just Work with older parallel configs I've opened a PR (https://github.com/ipython/ipython/pull/3904) with a fix that seems to work for me. Even if the decision is to handle this in the code it probably deserves a note in the What's New. HTH, Dave From takowl at gmail.com Mon Aug 5 06:04:16 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Mon, 5 Aug 2013 11:04:16 +0100 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: On 5 August 2013 06:51, Matthias Bussonnier wrote: > For what it is worth, I've already came into question on stackoverflow > where ipython was launching python3 and ipython2 was use to launch > ipython+python2.x. I'm guessing that would be Arch? They have the 'python' symlink pointing to Python 3, and a separate python2 executable. Ondrej: > The standard way that Python is installed in Debian/Ubuntu is that > you have python3.2, python2.7, python2.6, python2.5, ..., and then you have > "python", which is just a symlink, on my system it is: And a 'python3' symlink. On Debian based systems, the plan is to keep 'python' pointing to the latest Python 2 version ~forever, and python3 is being treated as a wholly separate thing. > As such, I think the "setup.py" install should simply install just one > ipython (or isympy, pudb), I think it's valuable to be able to start ipython for Python 3 or Python 2 on the same system without having to specify paths or activate some kind of environment. Fixing the names to 'ipython' and 'ipython3' admittedly isn't ideal, but it's simple and mostly seems to work well. I think the biggest practical issue is that a Python 3 environment where you can start Python 3 with 'python' still gets 'ipython3' rather than 'ipython'. We could solve that by checking at installation whether sys.executable matches 'python(\d)', and copying the trailing digit. But that could also lead to confusion if you have a Python 3 environment with 'ipython' inside the environment and 'ipython3' outside it on the system (but still on PATH). Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at hilboll.de Mon Aug 5 09:45:08 2013 From: lists at hilboll.de (Andreas Hilboll) Date: Mon, 05 Aug 2013 15:45:08 +0200 Subject: [IPython-dev] Using IPython Cluster with SGE -- help needed In-Reply-To: References: <51FE6282.7090104@hilboll.de> Message-ID: <51FFAC64.5070908@hilboll.de> Am 04.08.2013 16:20, schrieb Matthieu Brucher: > Hi, > > I guess we may want to start with the ipython documentation on this > topic: http://ipython.org/ipython-doc/stable/parallel/parallel_process.html > > Cheers, > > 2013/8/4 Andreas Hilboll : >> Hi, >> >> I would like to use IPython for calculations on our cluster. It's a >> total of 11 compute + 1 management nodes (all running Linux), and we're >> using SGE's qsub to submit jobs. The $HOME directory is shared via NFS >> between all the nodes. >> >> Even after reading the documentation, I'm unsure about how to get things >> running. I assume that I'll have to execute ``?pcluster -n 16`` on all >> compute nodes (they have 16 cores each). I'd have the ipython shell >> (notebook won't work due to firewall restrictions I cannot change) on >> the management node. 
But how does the management node know about the >> kernels which are running on the compute nodes and waiting for a job? >> And how can I tell the management node that it shall use qsub to submit >> the jobs to the individual kernels? >> >> As I think this is a common use case, I'd be willing to write up a nice >> tutorial about the setup, but I fear I need some help from you guys to >> get things running ... >> >> Cheers, >> >> -- Andreas. >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > Okay, thanks to the good docs, I was able to start a cluster: (test_py27)hilboll at login:~> ipcluster start --profile=nexus_py2.7 -n 12 2013-08-05 15:26:04,264.264 [IPClusterStart] Using existing profile dir: u'/gpfs/hb/hilboll/.config/ipython/profile_nexus_py2.7' 2013-08-05 15:26:04.272 [IPClusterStart] Starting ipcluster with [daemon=False] 2013-08-05 15:26:04.273 [IPClusterStart] Creating pid file: /gpfs/hb/hilboll/.config/ipython/profile_nexus_py2.7/pid/ipcluster.pid 2013-08-05 15:26:04.273 [IPClusterStart] Starting Controller with SGEControllerLauncher 2013-08-05 15:26:04.289 [IPClusterStart] Job submitted with job id: '60' 2013-08-05 15:26:05.289 [IPClusterStart] Starting 12 Engines with SGEEngineSetLauncher 2013-08-05 15:26:05.306 [IPClusterStart] Job submitted with job id: '61' 2013-08-05 15:26:35.351 [IPClusterStart] Engines appear to have started successfully However, using qstat, I can only see one job in the queue, which is the controller: hilboll at login:~> qstat job-ID prior name user state submit/start at queue slots ja-task-ID ----------------------------------------------------------------------------------------------------------------- 60 0.57500 ipython hilboll r 08/05/2013 15:26:06 all.q at login.cluster 1 I used the following job template: c.SGEEngineSetLauncher.batch_template = '''#!/bin/bash #$ -N ipython #- Name optional! #$ -q all.q #- Nutze die Queue 'all.q'. #$ -S /bin/bash #- erforderlich ! #$ -V #- Verwendet Pfade wie in aktueller Shell #$ -j y #- merge STDOUT and STDERR #$ -o log_ipython_{n}.log source /hb/hilboll/local/anaconda/bin/activate test_py27 mpiexec -n {n} ipengine --profile-dir={profile_dir} ''' If I use a 'blank' ``ipengine --profile-dir={profile_dir}`` instead of the mpiexec call, I get exactly two jobs in the queue, one for the controller and one for the first engine. My naive understanding would be that exactly {n} jobs get submitted via the SGEEngineSetLauncher. Is my expectation wrong? In the logfile, I get this here, 12 times: 2013-08-05 15:26:09.038 [IPEngineApp] Registration timed out after 2.0 seconds Any help resolving this issue is greatly appreciated :) Cheers, -- Andreas. From matthieu.brucher at gmail.com Mon Aug 5 10:02:47 2013 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 5 Aug 2013 15:02:47 +0100 Subject: [IPython-dev] Using IPython Cluster with SGE -- help needed In-Reply-To: <51FFAC64.5070908@hilboll.de> References: <51FE6282.7090104@hilboll.de> <51FFAC64.5070908@hilboll.de> Message-ID: Hi, I don't know why the registration was not complete. Is your home folder the same on all nodes and on the login node? You won't see 12 jobs. You asked for 12 engines, and they will all be submitted in one job and the 12 engines will be started by mpiexec -n 12. This is the standard way of using batch schedulers. Ask for some cores, run an mpi application on these cores. 
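(For sites that prefer not to route the engines through mpiexec, one possible alternative is to ask SGE for a task array instead -- a rough sketch only, assuming the queue permits "-t" task arrays; {n} and {profile_dir} are the placeholders ipcluster already substitutes, and the activate line is taken from the template above.)

# In ipcluster_config.py -- a sketch, not the thread's actual configuration.
# "#$ -t 1-{n}" requests an array of {n} tasks; each task starts one ipengine,
# so the whole engine set is still a single qsub submission.
c.SGEEngineSetLauncher.batch_template = '''#!/bin/bash
#$ -N ipengine
#$ -q all.q
#$ -S /bin/bash
#$ -V
#$ -j y
#$ -t 1-{n}
#$ -o log_ipengine.$TASK_ID.log

source /hb/hilboll/local/anaconda/bin/activate test_py27
ipengine --profile-dir={profile_dir}
'''

Each array task then registers its own engine with the controller, so no MPI is involved.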
You can also try to submit additional engines now that the controller is up and running. Check that the configuration files are present and readable. Cheers, 2013/8/5 Andreas Hilboll : > Am 04.08.2013 16:20, schrieb Matthieu Brucher: >> Hi, >> >> I guess we may want to start with the ipython documentation on this >> topic: http://ipython.org/ipython-doc/stable/parallel/parallel_process.html >> >> Cheers, >> >> 2013/8/4 Andreas Hilboll : >>> Hi, >>> >>> I would like to use IPython for calculations on our cluster. It's a >>> total of 11 compute + 1 management nodes (all running Linux), and we're >>> using SGE's qsub to submit jobs. The $HOME directory is shared via NFS >>> between all the nodes. >>> >>> Even after reading the documentation, I'm unsure about how to get things >>> running. I assume that I'll have to execute ``?pcluster -n 16`` on all >>> compute nodes (they have 16 cores each). I'd have the ipython shell >>> (notebook won't work due to firewall restrictions I cannot change) on >>> the management node. But how does the management node know about the >>> kernels which are running on the compute nodes and waiting for a job? >>> And how can I tell the management node that it shall use qsub to submit >>> the jobs to the individual kernels? >>> >>> As I think this is a common use case, I'd be willing to write up a nice >>> tutorial about the setup, but I fear I need some help from you guys to >>> get things running ... >>> >>> Cheers, >>> >>> -- Andreas. >>> _______________________________________________ >>> IPython-dev mailing list >>> IPython-dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> >> > > Okay, thanks to the good docs, I was able to start a cluster: > > (test_py27)hilboll at login:~> ipcluster start --profile=nexus_py2.7 -n 12 > 2013-08-05 15:26:04,264.264 [IPClusterStart] Using existing profile dir: > u'/gpfs/hb/hilboll/.config/ipython/profile_nexus_py2.7' > 2013-08-05 15:26:04.272 [IPClusterStart] Starting ipcluster with > [daemon=False] > 2013-08-05 15:26:04.273 [IPClusterStart] Creating pid file: > /gpfs/hb/hilboll/.config/ipython/profile_nexus_py2.7/pid/ipcluster.pid > 2013-08-05 15:26:04.273 [IPClusterStart] Starting Controller with > SGEControllerLauncher > 2013-08-05 15:26:04.289 [IPClusterStart] Job submitted with job id: '60' > 2013-08-05 15:26:05.289 [IPClusterStart] Starting 12 Engines with > SGEEngineSetLauncher > 2013-08-05 15:26:05.306 [IPClusterStart] Job submitted with job id: '61' > 2013-08-05 15:26:35.351 [IPClusterStart] Engines appear to have started > successfully > > However, using qstat, I can only see one job in the queue, which is the > controller: > > hilboll at login:~> qstat > job-ID prior name user state submit/start at queue > slots ja-task-ID > ----------------------------------------------------------------------------------------------------------------- > 60 0.57500 ipython hilboll r 08/05/2013 15:26:06 > all.q at login.cluster 1 > > > I used the following job template: > > c.SGEEngineSetLauncher.batch_template = '''#!/bin/bash > #$ -N ipython #- Name optional! > #$ -q all.q #- Nutze die Queue 'all.q'. > #$ -S /bin/bash #- erforderlich ! 
> #$ -V #- Verwendet Pfade wie in aktueller Shell > #$ -j y #- merge STDOUT and STDERR > #$ -o log_ipython_{n}.log > > source /hb/hilboll/local/anaconda/bin/activate test_py27 > mpiexec -n {n} ipengine --profile-dir={profile_dir} > ''' > > If I use a 'blank' ``ipengine --profile-dir={profile_dir}`` instead of > the mpiexec call, I get exactly two jobs in the queue, one for the > controller and one for the first engine. > > My naive understanding would be that exactly {n} jobs get submitted via > the SGEEngineSetLauncher. Is my expectation wrong? > > In the logfile, I get this here, 12 times: > > 2013-08-05 15:26:09.038 [IPEngineApp] Registration timed out after 2.0 > seconds > > Any help resolving this issue is greatly appreciated :) > > Cheers, > > -- Andreas. -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From lists at hilboll.de Mon Aug 5 10:19:27 2013 From: lists at hilboll.de (Andreas Hilboll) Date: Mon, 05 Aug 2013 16:19:27 +0200 Subject: [IPython-dev] Using IPython Cluster with SGE -- help needed In-Reply-To: References: <51FE6282.7090104@hilboll.de> <51FFAC64.5070908@hilboll.de> Message-ID: <51FFB46F.3060309@hilboll.de> Thanks, Mathieu, answers inline: Am 05.08.2013 16:02, schrieb Matthieu Brucher: > Hi, > > I don't know why the registration was not complete. Is your home > folder the same on all nodes and on the login node? Yes, it is. Could this be some firewall issue? > You won't see 12 jobs. You asked for 12 engines, and they will all be > submitted in one job and the 12 engines will be started by mpiexec -n > 12. This is the standard way of using batch schedulers. Ask for some > cores, run an mpi application on these cores. Well, then I guess our IT department doesn't like "the standard way". We have a multi-node cluster, comprising 12 nodes, one 'management' and 11 'computing' nodes. And we don't have/use mpi usually. What I would need in order to use our multi-node cluster the way our sysadmins want us to, I'd need to submit a total of {n} ipengines via {n} calls to ``qsub``. Any idea how I can accomplish this? Thanks for your help! Andreas. > > You can also try to submit additional engines now that the controller > is up and running. Check that the configuration files are present and > readable. > > Cheers, > > > 2013/8/5 Andreas Hilboll : >> Am 04.08.2013 16:20, schrieb Matthieu Brucher: >>> Hi, >>> >>> I guess we may want to start with the ipython documentation on this >>> topic: http://ipython.org/ipython-doc/stable/parallel/parallel_process.html >>> >>> Cheers, >>> >>> 2013/8/4 Andreas Hilboll : >>>> Hi, >>>> >>>> I would like to use IPython for calculations on our cluster. It's a >>>> total of 11 compute + 1 management nodes (all running Linux), and we're >>>> using SGE's qsub to submit jobs. The $HOME directory is shared via NFS >>>> between all the nodes. >>>> >>>> Even after reading the documentation, I'm unsure about how to get things >>>> running. I assume that I'll have to execute ``?pcluster -n 16`` on all >>>> compute nodes (they have 16 cores each). I'd have the ipython shell >>>> (notebook won't work due to firewall restrictions I cannot change) on >>>> the management node. But how does the management node know about the >>>> kernels which are running on the compute nodes and waiting for a job? >>>> And how can I tell the management node that it shall use qsub to submit >>>> the jobs to the individual kernels? 
>>>> >>>> As I think this is a common use case, I'd be willing to write up a nice >>>> tutorial about the setup, but I fear I need some help from you guys to >>>> get things running ... >>>> >>>> Cheers, >>>> >>>> -- Andreas. >>>> _______________________________________________ >>>> IPython-dev mailing list >>>> IPython-dev at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/ipython-dev >>> >>> >>> >> >> Okay, thanks to the good docs, I was able to start a cluster: >> >> (test_py27)hilboll at login:~> ipcluster start --profile=nexus_py2.7 -n 12 >> 2013-08-05 15:26:04,264.264 [IPClusterStart] Using existing profile dir: >> u'/gpfs/hb/hilboll/.config/ipython/profile_nexus_py2.7' >> 2013-08-05 15:26:04.272 [IPClusterStart] Starting ipcluster with >> [daemon=False] >> 2013-08-05 15:26:04.273 [IPClusterStart] Creating pid file: >> /gpfs/hb/hilboll/.config/ipython/profile_nexus_py2.7/pid/ipcluster.pid >> 2013-08-05 15:26:04.273 [IPClusterStart] Starting Controller with >> SGEControllerLauncher >> 2013-08-05 15:26:04.289 [IPClusterStart] Job submitted with job id: '60' >> 2013-08-05 15:26:05.289 [IPClusterStart] Starting 12 Engines with >> SGEEngineSetLauncher >> 2013-08-05 15:26:05.306 [IPClusterStart] Job submitted with job id: '61' >> 2013-08-05 15:26:35.351 [IPClusterStart] Engines appear to have started >> successfully >> >> However, using qstat, I can only see one job in the queue, which is the >> controller: >> >> hilboll at login:~> qstat >> job-ID prior name user state submit/start at queue >> slots ja-task-ID >> ----------------------------------------------------------------------------------------------------------------- >> 60 0.57500 ipython hilboll r 08/05/2013 15:26:06 >> all.q at login.cluster 1 >> >> >> I used the following job template: >> >> c.SGEEngineSetLauncher.batch_template = '''#!/bin/bash >> #$ -N ipython #- Name optional! >> #$ -q all.q #- Nutze die Queue 'all.q'. >> #$ -S /bin/bash #- erforderlich ! >> #$ -V #- Verwendet Pfade wie in aktueller Shell >> #$ -j y #- merge STDOUT and STDERR >> #$ -o log_ipython_{n}.log >> >> source /hb/hilboll/local/anaconda/bin/activate test_py27 >> mpiexec -n {n} ipengine --profile-dir={profile_dir} >> ''' >> >> If I use a 'blank' ``ipengine --profile-dir={profile_dir}`` instead of >> the mpiexec call, I get exactly two jobs in the queue, one for the >> controller and one for the first engine. >> >> My naive understanding would be that exactly {n} jobs get submitted via >> the SGEEngineSetLauncher. Is my expectation wrong? >> >> In the logfile, I get this here, 12 times: >> >> 2013-08-05 15:26:09.038 [IPEngineApp] Registration timed out after 2.0 >> seconds >> >> Any help resolving this issue is greatly appreciated :) >> >> Cheers, >> >> -- Andreas. > > > -- -- Andreas. From rgbkrk at gmail.com Mon Aug 5 10:51:43 2013 From: rgbkrk at gmail.com (Kyle Kelley) Date: Mon, 5 Aug 2013 09:51:43 -0500 Subject: [IPython-dev] IPython Notebook Multiple Checkpoints In-Reply-To: References: Message-ID: Min, The repository is located at https://github.com/rgbkrk/bookstore. If you use the Rackspace backend, you'll need an account. IPython should have free resources courtesy Rackspace (via Jesse Noller) -- do let me know if you need any help. Hopefully I'll be able to help figure it out. If you have any tips for working with IPython's javascript I'm all years. I used Firefox's developer tools to figure out some issues with my original code, but haven't sorted this one out. 
-- Kyle On Mon, Aug 5, 2013 at 12:12 AM, MinRK wrote: > It's probably bugs in the javascript for some combination / subset of > listing / selecting / updating checkpoints. If you point me to your repo I > will look into it, since I haven't had a multi-checkpoint backend to test > against. > > > On Sun, Aug 4, 2013 at 7:33 PM, Kyle Kelley wrote: > >> Hey all, >> >> I added checkpoints to the notebook manager I've been working on: >> >> [image: Inline image 1] >> >> However, I'm having trouble with multiple checkpoints. Single checkpoints >> work beautifully but multiples are downright broken. It seems like my last >> checkpoint reigns supreme. Multiple checkpoints get loaded when opening up >> a previously saved notebook, but any new checkpoints write over the top of >> the displayed list (still saved properly on the backend though). >> >> What am I missing from the current implementation and documentation that >> I should be looking to? >> >> I copied the formatting of the returned structure needed from >> filenbmanager.py -- (list containing dictionaries with fields >> "checkpoint_id" and "last_modified").? The new multi dir sexiness in >> https://github.com/ipython/ipython/pull/3619 is doing the same thing. >> Has anyone tried out multiple checkpoints? >> >> -- Kyle >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2013-08-04 at 8.55.27 PM.png Type: image/png Size: 23401 bytes Desc: not available URL: From bussonniermatthias at gmail.com Mon Aug 5 11:42:28 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Mon, 5 Aug 2013 17:42:28 +0200 Subject: [IPython-dev] New video of IPython notebook In-Reply-To: <51FE2941.7010201@gmail.com> References: <51FE2941.7010201@gmail.com> Message-ID: <8E62845C-813A-4A74-B2E7-D642AEAED4E2@gmail.com> Hi there, Le 4 ao?t 2013 ? 12:13, Juan Luis Cano a ?crit : > Hello everyone, > > First of all, congratulations to the team for all the hard work put in > this 1.0 release: we're all very excited about it and I'm sure it's > going to be the beginning of a big story. Keep rocking! :) > > I am the author of the video demo that Fernando gently featured on the > IPython notebook page[1] (thanks F!), which apparently has received > quite a few views (~15000 in a bit more than 5 months). I was thinking > of recording a new one with the 1.0 version (I'm already triying it and > works like a charm so far), so I would like to ask the team if there are > any interesting features worth including or changes that should be made. > For example, digging into the docs I just saw that a new %matplotlib > magic was included, which IIUC is the preferred method to produce inline > figures (instead of the %pylab mode). > > [1]: http://ipython.org/notebook > > Because of some time constraints I won't be able to have it by the time > you make the release but anyway I would like to hear some feedback from > the community :) Otherwise I will make a new video with a similar > structure, in full English, maybe briefly show nbconvert and little > more. 
And probably keep in shorter than 5 minutes and with some chiptune > background music ;) I liked the video, I really think we should make more standalone video of what can be done with the notebook. Like really small tutorial/demos. If I had to do the video I wouldn't show how to start up a notebook from the command line, start the video in the dashboard. Would it be possible to record only the current window, and possibly without any clutter like tabs/ menubar? and much less wider than previous video, it would make it more readable I found that it was a little difficult to see was was beeing typed. Now, if you are more comfortable with english than I am, I would really appreciate a voice that describe things instead of the music, but that might be a little too much asked :-) As for the feature that need to be shown, I'm probably biased as I develop the notebook a lot, and think that the input of user will be better :-) If you plan on making short transcript on what you think showing in a shared ether pad or along that would be great. If you want to "cheat" a little to type long cell, I might be able to code you a patch that allow to create cell already full of code or mimic the fact that you are typing really fast :-) I should completely do that for my tutorial at euroscipy :-) -- Matthias > > Best regards, > > Juan Luis Cano > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From nborwankar at gmail.com Mon Aug 5 14:07:38 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Mon, 5 Aug 2013 11:07:38 -0700 Subject: [IPython-dev] prose, notebooks, and thrashing (and a re-direct to another list) In-Reply-To: References: <51F6868D.1050903@third-bit.com> Message-ID: The Learn Data Science notebook collection now has links to nbviewer in the README. So all the notebooks can now be browsed via the homepage at http://learnds.com. Please feel free to browse and provide feedback. Some notebooks take time to render as a non-default LAF has been used - taken from http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/ ------------------------------------------------------------------ Nitin Borwankar nborwankar at gmail.com On Thu, Aug 1, 2013 at 9:48 PM, Nitin Borwankar wrote: > Fernando, > > Thanks very much, I've been very heads down for the past few months and > out of touch with the "best practices" and tools emerging so rapidly around > IPython. > Interesting you mention Cam's book - I stole his CSS styling with his > permission :-) > Going and trying out the auto generate script shortly. > Thanks for the tip and for your awesome project. > > Nitin > > > ------------------------------------------------------------------ > Nitin Borwankar > nborwankar at gmail.com > > > On Thu, Aug 1, 2013 at 12:16 PM, Fernando Perez wrote: > >> Hi Nitin, >> >> >> On Thu, Aug 1, 2013 at 10:30 AM, Nitin Borwankar >> wrote: >> > >> > I've just released a very early beta of LearnDataScience >> > (http://nborwankar.github.io/LearnDataScience) and I had to struggle >> with a >> > subset of these issues. I am not sure my experience is the relevant to >> you >> > but here's what I did. >> > The goal with my content is to teach developers data science so it's >> > possibly complementary to what you folks are doing. 
>> >> quick note: this looks great, but you should make the notebook titles >> on your main page and in the README.md be nbviewer links, just like >> for the Bayesian Methods book: >> >> >> http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/ >> >> That simple change will make the page far more useful immediately for >> users who may not know how to read notebooks. >> >> We have a simple script to auto-generate those links that you can copy >> and modify for your URL pattern in a minute: >> >> https://github.com/ipython/ipython/blob/master/tools/mknbindex.py >> >> Cheers, >> >> f >> >> -- >> Fernando Perez (@fperez_org; http://fperez.org) >> fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) >> fernando.perez-at-berkeley: contact me here for any direct mail >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Mon Aug 5 14:32:03 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 5 Aug 2013 12:32:03 -0600 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: On Mon, Aug 5, 2013 at 4:04 AM, Thomas Kluyver wrote: > On 5 August 2013 06:51, Matthias Bussonnier > wrote: >> >> For what it is worth, I've already came into question on stackoverflow >> where ipython was launching python3 and ipython2 was use to launch >> ipython+python2.x. > > > I'm guessing that would be Arch? They have the 'python' symlink pointing to > Python 3, and a separate python2 executable. I just found this very relevant PEP: http://www.python.org/dev/peps/pep-0394/ > > Ondrej: > >> The standard way that Python is installed in Debian/Ubuntu is that >> you have python3.2, python2.7, python2.6, python2.5, ..., and then you >> have >> "python", which is just a symlink, on my system it is: > > And a 'python3' symlink. On Debian based systems, the plan is to keep > 'python' pointing to the latest Python 2 version ~forever, and python3 is > being treated as a wholly separate thing. I didn't know that. But it looks like you are right: https://wiki.ubuntu.com/Python/3 https://wiki.ubuntu.com/Python/FoundationsQPythonVersions Though the PEP above says that eventually "python" should point to python 3. > >> As such, I think the "setup.py" install should simply install just one >> ipython (or isympy, pudb), > > I think it's valuable to be able to start ipython for Python 3 or Python 2 > on the same system without having to specify paths or activate some kind of > environment. Fixing the names to 'ipython' and 'ipython3' admittedly isn't > ideal, but it's simple and mostly seems to work well. > > I think the biggest practical issue is that a Python 3 environment where you > can start Python 3 with 'python' still gets 'ipython3' rather than > 'ipython'. We could solve that by checking at installation whether > sys.executable matches 'python(\d)', and copying the trailing digit. But > that could also lead to confusion if you have a Python 3 environment with > 'ipython' inside the environment and 'ipython3' outside it on the system > (but still on PATH). What is confusing to me is what is fundamentally different in Python 3.2, as opposed to Python 2.5 or 2.6, when you have a single code base. E.g. 
we do not bother with creating ipython2.5 and ipython2.6, so that they can be run side by side, and people simply use virtualenv to run them side by side. So why cannot the same approach be used for Python 3.2? Ondrej From bussonniermatthias at gmail.com Mon Aug 5 14:32:34 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Mon, 5 Aug 2013 20:32:34 +0200 Subject: [IPython-dev] prose, notebooks, and thrashing (and a re-direct to another list) In-Reply-To: References: <51F6868D.1050903@third-bit.com> Message-ID: Le 5 ao?t 2013 ? 20:07, Nitin Borwankar a ?crit : > The Learn Data Science notebook collection now has links to nbviewer in the README. So all the notebooks can now be browsed via the homepage at http://learnds.com. > > Please feel free to browse and provide feedback. > Some notebooks take time to render as a non-default LAF has been used - taken from > http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/ FYI, we'll probably add the possibility to link external css on nbviewer. If @cdp and you are sure to share the same css, and want to have a specific name for it That would be great. -- M From takowl at gmail.com Mon Aug 5 15:40:09 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Mon, 5 Aug 2013 20:40:09 +0100 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: On 5 August 2013 19:32, Ond?ej ?ert?k wrote: > Though the PEP above says that eventually "python" should point to python > 3. > Debian developers are strongly against ever making that change. It's possible that they'll change their minds in a few years, but I wouldn't bank on it. What is confusing to me is what is fundamentally different in Python 3.2, > as opposed to Python 2.5 or 2.6, when you have a single code base. > E.g. we do not bother with creating ipython2.5 and ipython2.6, so that they > can be run side by side, and people simply use virtualenv to run them > side by side. > So why cannot the same approach be used for Python 3.2? > Many more systems have a 2.x and a 3.x installed together, and many more users will want to run a 2.x and a 3.x version in parallel than wanted to run, say, 2.5 once 2.6 was available. I think the Debian approach of treating Python 2 and 3 as two separate, albeit similar, platforms works nicely. (In fact I think IPython used to make versioned entry points like ipython2.6, but that's not important). Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhearne at usgs.gov Mon Aug 5 16:40:46 2013 From: mhearne at usgs.gov (Hearne, Mike) Date: Mon, 5 Aug 2013 14:40:46 -0600 Subject: [IPython-dev] Using nbconvert with older versions of ipython Message-ID: I am using Enthought Canopy, which is using IPython 0.13.1. This version does not appear to have nbconvert in it (or at least not anyplace I can find it), and the standalone project has been merged into the latest versions of IPython. What is the easiest way for me to get a copy of nbconvert? I can think of a couple of possible ways: 1) Upgrade IPython manually to version X.XX. What is the X.XX that will contain nbconvert? 2) Find an old copy of the stand-alone nbconvert code. ?? Thanks, Mike From benjaminrk at gmail.com Mon Aug 5 17:15:17 2013 From: benjaminrk at gmail.com (MinRK) Date: Mon, 5 Aug 2013 14:15:17 -0700 Subject: [IPython-dev] Using nbconvert with older versions of ipython In-Reply-To: References: Message-ID: nbconvert is new in IPython 1.0. 
The easiest way to get nbconvert is to get IPython 1.0. You can try the release candidate: http://archive.ipython.org/testing/1.0.0 If you don't want to clobber your canopy version, you can install it in a virtual-env. It is not advised to try to uses a snapshot of the standalone nbconvert project. -MinRK On Mon, Aug 5, 2013 at 1:40 PM, Hearne, Mike wrote: > I am using Enthought Canopy, which is using IPython 0.13.1. This > version does not appear to have nbconvert in it (or at least not > anyplace I can find it), and the standalone project has been merged > into the latest versions of IPython. > > What is the easiest way for me to get a copy of nbconvert? > > I can think of a couple of possible ways: > 1) Upgrade IPython manually to version X.XX. What is the X.XX that > will contain nbconvert? > 2) Find an old copy of the stand-alone nbconvert code. ?? > > Thanks, > > Mike > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From asmeurer at gmail.com Mon Aug 5 23:24:37 2013 From: asmeurer at gmail.com (Aaron Meurer) Date: Mon, 5 Aug 2013 21:24:37 -0600 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: On Mon, Aug 5, 2013 at 1:40 PM, Thomas Kluyver wrote: > On 5 August 2013 19:32, Ond?ej ?ert?k wrote: >> >> Though the PEP above says that eventually "python" should point to python >> 3. > > > Debian developers are strongly against ever making that change. It's > possible that they'll change their minds in a few years, but I wouldn't bank > on it. They will change their minds. I thought that Arch was stupid making "python" point to Python 3, but that was because at the time, no libraries were Python 3 compatible. Nowadays, all major libraries are compatible. In the future, Python 2 will be defunct, and it will be stupid to never call Python "python". > > >> What is confusing to me is what is fundamentally different in Python 3.2, >> as opposed to Python 2.5 or 2.6, when you have a single code base. >> E.g. we do not bother with creating ipython2.5 and ipython2.6, so that >> they >> can be run side by side, and people simply use virtualenv to run them >> side by side. >> So why cannot the same approach be used for Python 3.2? > > > Many more systems have a 2.x and a 3.x installed together, and many more > users will want to run a 2.x and a 3.x version in parallel than wanted to > run, say, 2.5 once 2.6 was available. I think the Debian approach of > treating Python 2 and 3 as two separate, albeit similar, platforms works > nicely. > > (In fact I think IPython used to make versioned entry points like > ipython2.6, but that's not important). > > Thomas Once you switch to a single codebase, you start to see things differently. Python 3 is just another version of Python. Supporting Python 2.6 - 3.3 is no different than supporting Python 2.4 - 2.7. The Python community made a big fuss about it, but I think that was a mistake, and it has kept people from transitioning. 
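(A tiny illustration of the single-code-base style being argued for here -- a hypothetical module, not code from SymPy or IPython -- where one file runs unchanged on Python 2.6 through 3.3.)

from __future__ import print_function
import sys

PY3 = sys.version_info[0] >= 3

if PY3:
    text_type = str
else:
    text_type = unicode  # only evaluated on Python 2

def describe(obj):
    # The same isinstance check works on both major versions.
    kind = "text" if isinstance(obj, text_type) else type(obj).__name__
    print("{0}: {1!r}".format(kind, obj))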
Aaron Meurer > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > From ondrej.certik at gmail.com Tue Aug 6 00:05:17 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 5 Aug 2013 22:05:17 -0600 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: On Mon, Aug 5, 2013 at 9:24 PM, Aaron Meurer wrote: > On Mon, Aug 5, 2013 at 1:40 PM, Thomas Kluyver wrote: >> On 5 August 2013 19:32, Ond?ej ?ert?k wrote: >>> >>> Though the PEP above says that eventually "python" should point to python >>> 3. >> >> >> Debian developers are strongly against ever making that change. It's >> possible that they'll change their minds in a few years, but I wouldn't bank >> on it. > > They will change their minds. I thought that Arch was stupid making > "python" point to Python 3, but that was because at the time, no > libraries were Python 3 compatible. Nowadays, all major libraries are > compatible. In the future, Python 2 will be defunct, and it will be > stupid to never call Python "python". I think so too. > >> >> >>> What is confusing to me is what is fundamentally different in Python 3.2, >>> as opposed to Python 2.5 or 2.6, when you have a single code base. >>> E.g. we do not bother with creating ipython2.5 and ipython2.6, so that >>> they >>> can be run side by side, and people simply use virtualenv to run them >>> side by side. >>> So why cannot the same approach be used for Python 3.2? >> >> >> Many more systems have a 2.x and a 3.x installed together, and many more >> users will want to run a 2.x and a 3.x version in parallel than wanted to >> run, say, 2.5 once 2.6 was available. I think the Debian approach of >> treating Python 2 and 3 as two separate, albeit similar, platforms works >> nicely. >> >> (In fact I think IPython used to make versioned entry points like >> ipython2.6, but that's not important). >> >> Thomas > > Once you switch to a single codebase, you start to see things > differently. Python 3 is just another version of Python. Supporting > Python 2.6 - 3.3 is no different than supporting Python 2.4 - 2.7. > > The Python community made a big fuss about it, but I think that was a > mistake, and it has kept people from transitioning. Exactly. I recently wrote this blog post: http://ondrejcertik.blogspot.com/2013/08/how-to-support-both-python-2-and-3.html and I got a lot of feedback --- and unless I missed any, everybody agreed that "of course, single code base is the way to go" and "of course, Python 3.x is just another version of Python, with single code base there is no difference in supporting 2.6-2.7 or 2.6-3.3", as Aaron said. This was a big surprise to me, that so many people see it this way, because the core devs, so far, have not "blessed" this approach (in fact they were discouraging this approach in the past). As far as Debian goes --- they made the decision to do python3 and python2 long time ago, when Guido very clearly said (see my blog post for reference), that Python 3 will be a new language. So creating python3 was the only logical conclusion. But now when it is clear, that with single code base Python 3 is not a new language, it really make sense to have just one "python" and point to whatever default version, be it 2.7 or 3.2 or whatever. Fernando, I would be very much interested what you think of this issue, as you have more touch with the Python core devs. 
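(To make the packaging question concrete: a hedged sketch of a setup.py that always installs an unsuffixed entry point bound to the installing interpreter, plus a version-suffixed alias for shared bin directories. The project and module names are hypothetical; this is not how IPython, SymPy, or PuDB actually ship.)

# Hypothetical setup.py fragment illustrating the scheme under discussion.
import sys
from setuptools import setup

suffix = "%d.%d" % sys.version_info[:2]  # e.g. "2.7" or "3.3"

setup(
    name="mytool",
    version="0.1",
    packages=["mytool"],
    entry_points={
        "console_scripts": [
            "mytool = mytool.main:main",             # always uses the installing Python
            "mytool%s = mytool.main:main" % suffix,  # optional suffixed alias, e.g. mytool3.3
        ],
    },
)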
Ondrej From brad.froehle at gmail.com Tue Aug 6 01:59:59 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Mon, 5 Aug 2013 22:59:59 -0700 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: Hi Ondrej, On Mon, Aug 5, 2013 at 9:05 PM, Ond?ej ?ert?k wrote: > On Mon, Aug 5, 2013 at 9:24 PM, Aaron Meurer wrote: >> On Mon, Aug 5, 2013 at 1:40 PM, Thomas Kluyver wrote: >>> On 5 August 2013 19:32, Ond?ej ?ert?k wrote: >>>> Though the PEP above says that eventually "python" should point to python >>>> 3. >>> >>> Debian developers are strongly against ever making that change. It's >>> possible that they'll change their minds in a few years, but I wouldn't bank >>> on it. >> >> They will change their minds. I thought that Arch was stupid making >> "python" point to Python 3, but that was because at the time, no >> libraries were Python 3 compatible. Nowadays, all major libraries are >> compatible. In the future, Python 2 will be defunct, and it will be >> stupid to never call Python "python". > > I think so too. I have to think that this will eventually be the case as well, but even very recent conversations [1] on the debian-python list suggest that many outright reject the notion that `python` would ever launch Python 3. >> Once you switch to a single codebase, you start to see things >> differently. Python 3 is just another version of Python. Supporting >> Python 2.6 - 3.3 is no different than supporting Python 2.4 - 2.7. >> >> The Python community made a big fuss about it, but I think that was a >> mistake, and it has kept people from transitioning. > > Exactly. I recently wrote this blog post: > > http://ondrejcertik.blogspot.com/2013/08/how-to-support-both-python-2-and-3.html > > and I got a lot of feedback --- and unless I missed any, everybody > agreed that "of course, single code base is the way to go" and "of > course, Python 3.x is just another version of Python, with single code > base there is no difference in supporting 2.6-2.7 or 2.6-3.3", as > Aaron said. I agree with the general sentiment that a single code base is preferable. In terms of the install process I'd expect that eventually we'll just install a `ipython` script which uses the same interpreter as during installation. Distributions which want a ipython3.3 script can clean up what we do in post-processing. As you mentioned, it's nearly impossible to develop using Python 3 in these projects which require 2to3 translation. I struggled with this a year ago when trying to develop IPython using Python 3. Among other things, I ended up writing a rather crazy 'Runtime 2to3' [2] meta path hook which could do a (cached) 2to3 conversion on the fly. We've discussed making IPython natively support 2.x and 3.x in the same code base [3,4], and while I don't see it on the roadmap [5], its probably in several developers' minds in the somewhat distant future. At the moment the lack of unicode literal support in Python 3.2 is pretty much a showstopper, but once we require Python >= 3.3 I don't see a technical reason why we couldn't have a unified code base. In the mean time, we should continue the judicious examination of issues like this one which improve Python 3 end user experience because Python 3 is the future and we are all going to have to migrate there eventually. 
[1]: http://lists.debian.org/debian-python/2013/07/msg00052.html [2]: https://github.com/bfroehle/rt2to3 [3]: https://github.com/ipython/ipython/wiki/IPEP-4:-Python-3-Compatibility [4]: https://github.com/ipython/ipython/issues/2440 [5]: https://github.com/ipython/ipython/wiki/Roadmap:-IPython Cheers, Brad
From bussonniermatthias at gmail.com Tue Aug 6 02:56:04 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Tue, 6 Aug 2013 08:56:04 +0200 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: <59E60EAC-AC96-4127-A29F-9B863969E272@gmail.com> Hey Brad, On 6 August 2013 at 07:59, Bradley M. Froehle wrote: > Hi Ondrej, > ... > > We've discussed making IPython natively support 2.x and 3.x in > the same code base [3,4], and while I don't see it on the roadmap > [5], its probably in several developers' minds in the somewhat > distant future. At the moment the lack of unicode literal support > in Python 3.2 is pretty much a showstopper, but once we require > Python >= 3.3 I don't see a technical reason why we couldn't have > a unified code base. Not-so-distant future. Yes, we spoke about it briefly. The new unpublished roadmap is here: https://hackpad.com/IPython-Summer-2013-Development-Meeting-D1UR23usGnA It should mainly be the job of Thomas, who should arrive in the US soon. - Drop 2.6 - Drop 3.2 - Drop 2to3 I think we will do it gradually along the way to 2.0 rather than in one big PR that makes everything 2 and 3 compatible, just as we don't want a single commit that makes everything PEP-8 compliant (it is painful for git blame). > > In the mean time, we should continue the judicious examination of > issues like this one which improve Python 3 end user experience > because Python 3 is the future and we are all going to have to > migrate there eventually. I disagree: > "I don't know what the language of the year 2000 will look like, but I know it will be called Fortran." -- Tony Hoare -- Matthias > > [1]: http://lists.debian.org/debian-python/2013/07/msg00052.html > [2]: https://github.com/bfroehle/rt2to3 > [3]: https://github.com/ipython/ipython/wiki/IPEP-4:-Python-3-Compatibility > [4]: https://github.com/ipython/ipython/issues/2440 > [5]: https://github.com/ipython/ipython/wiki/Roadmap:-IPython > > Cheers, > Brad > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev
From tarun.gaba7 at gmail.com Tue Aug 6 13:09:47 2013 From: tarun.gaba7 at gmail.com (TARUN GABA) Date: Tue, 6 Aug 2013 22:39:47 +0530 Subject: [IPython-dev] Regarding serving JS from installation directory. Message-ID: Hi, How can I serve JavaScript using the IPython display function from the module installation directory? The module structure is something like:

--- master
----------------js
----------------python_module/module.py
----------------docs
----------------setup.py

The IPython.display function is called from python_module/module.py. Since this module would be installed in site-packages, and since a user can run the IPython notebook from any of their directories, how can I achieve this? -------------- next part -------------- An HTML attachment was scrubbed...
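(One hedged way to approach this, sketched under the assumption that the js files are shipped as package data inside python_module -- e.g. via package_data={'python_module': ['js/*.js']} in setup.py -- and that the code runs in a notebook session where display output is shown; the file and function names are illustrative.)

# Inside python_module/module.py -- a sketch, not the package's actual code.
import os
from IPython.display import Javascript, display

def load_js(name="widget.js"):
    # Locate the JS file relative to the installed module, regardless of
    # which directory the notebook server was started from.
    pkg_dir = os.path.dirname(os.path.abspath(__file__))
    js_path = os.path.join(pkg_dir, "js", name)
    with open(js_path) as f:
        display(Javascript(f.read()))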
From jason-sage at creativetrax.com Tue Aug 6 14:31:17 2013 From: jason-sage at creativetrax.com (Jason Grout) Date: Tue, 06 Aug 2013 13:31:17 -0500 Subject: [IPython-dev] system_raw exit code Message-ID: <520140F5.9040902@creativetrax.com>

I'm working on getting Sage updated to 1.0 RC1, and ran into some problems with interpreting exit codes from the InteractiveShell's system_raw command. According to https://github.com/ipython/ipython/blob/master/IPython/core/interactiveshell.py#L2239, if (and only if) the exit code is nonzero, the signal part of the status code is overwritten with the exit code. If the exit code is 0, then the signal information in the low bits is allowed to persist as the 'exit code'.

We're trying to interpret the _exit_code variable using the various official os.WIFEXITED, os.WSTOPSIG, etc. functions. It works fine if the exit code was 0, but fails when the exit code is nonzero. Can we either not touch the exit code in system_raw (and rely on the user to use the os.W* functions to interpret it), or change it to the exit status all the time, perhaps with something like:

    if os.WIFEXITED(ec):
        self.user_ns['_exit_code'] = os.WEXITSTATUS(ec)
    else:
        self.user_ns['_exit_code'] = None

Perhaps, since the variable is called _exit_code instead of _exit_status, it makes more sense to not touch the exit code and delete the "if ec > 255: ec >>= 8" code.

Thanks, Jason

From moorepants at gmail.com Tue Aug 6 15:50:40 2013 From: moorepants at gmail.com (Jason Moore) Date: Tue, 6 Aug 2013 15:50:40 -0400 Subject: [IPython-dev] Regarding serving JS from installation directory. In-Reply-To: References: Message-ID:

Could you set up hidden symlinks in the user's working directory that point to the files that live outside of that directory?

Jason moorepants.info +01 530-601-9791

On Tue, Aug 6, 2013 at 1:09 PM, TARUN GABA wrote:
> Hi
>
> How can I serve Javascripts using IPython display function from the module
> installation directory.
> The module structure is something like ..
>
> --- master
> ----------------js
> ----------------python_module/module.py
> ----------------docs
> ----------------setup.py
>
> The IPython.display function is called from python_module/module.py. Also
> Since this module would be installed in site-packages, Since a user can run
> IPython notebook, in any of his directories.
> How can I achieve this?
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nborwankar at gmail.com Tue Aug 6 21:18:54 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Tue, 6 Aug 2013 18:18:54 -0700 Subject: [IPython-dev] UC Berkeley open house with the development team, 7/24, 4pm In-Reply-To: References: Message-ID:

Dang! I missed this. Is this a monthly event, or a quarterly one, or ....?

Nitin

Nitin Borwankar nborwankar at gmail.com

On Jul 18, 2013 2:34 PM, "Fernando Perez" wrote:
> Hi all,
>
> [ while this is a local Berkeley event, I know this list reaches
> campus folks as well who may not otherwise hear about it].
>
> As part of our ongoing work, we're having an all-hands development
> meeting during the week of July 22-26, including overseas members of
> our team and partners in related projects from industry and academia
> (including the Julia language team).
> > While the bulk of that meeting will focus on technical issues, on > Wednesday the 24th, at 4pm, we'll hold an 'open house' event. Come > for snacks, meet the team, ask questions about IPython, discuss ideas > and suggestions, or demo your own work that uses IPython. We are as > eager to learn from what the community is doing with IPython as we are > to answer your own questions. > > Event: IPython open house > Date: Wednesday, July 24th, 4pm. > Location: Tolman Hall 5101. > > More details: > https://github.com/ipython/ipython/wiki/Dev:-Meeting,-July-2013 > > Please forward this to interested colleagues on campus. > > Cheers, > > f > > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > fernando.perez-at-berkeley: contact me here for any direct mail > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue Aug 6 22:11:47 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 6 Aug 2013 19:11:47 -0700 Subject: [IPython-dev] UC Berkeley open house with the development team, 7/24, 4pm In-Reply-To: References: Message-ID: No, it was just part of our dev meeting. So that means it's a biannual event, sorry :) On Tue, Aug 6, 2013 at 6:18 PM, Nitin Borwankar wrote: > Dang! I missed this. Is this a monthly event or a quarterly or ....? > Nitin > > Nitin Borwankar nborwankar at gmail.com > > On Jul 18, 2013 2:34 PM, "Fernando Perez" wrote: >> >> Hi all, >> >> [ while this is a local Berkeley event, I know this list reaches >> campus folks as well who may not otherwise hear about it]. >> >> As part of our ongoing work, we're having an all-hands development >> meeting during the week of July 22-26, including overseas members of >> our team and partners in related projects from industry and academia >> (including the Julia language team). >> >> While the bulk of that meeting will focus on technical issues, on >> Wednesday the 24th, at 4pm, we'll hold an 'open house' event. Come >> for snacks, meet the team, ask questions about IPython, discuss ideas >> and suggestions, or demo your own work that uses IPython. We are as >> eager to learn from what the community is doing with IPython as we are >> to answer your own questions. >> >> Event: IPython open house >> Date: Wednesday, July 24th, 4pm. >> Location: Tolman Hall 5101. >> >> More details: >> https://github.com/ipython/ipython/wiki/Dev:-Meeting,-July-2013 >> >> Please forward this to interested colleagues on campus. >> >> Cheers, >> >> f >> >> -- >> Fernando Perez (@fperez_org; http://fperez.org) >> fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) >> fernando.perez-at-berkeley: contact me here for any direct mail >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) 
fernando.perez-at-berkeley: contact me here for any direct mail From fperez.net at gmail.com Tue Aug 6 23:46:27 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 6 Aug 2013 20:46:27 -0700 Subject: [IPython-dev] IJulia: IPython+Julia Message-ID: Hi all, this is a repost from g+, but I'm really happy about it, and I think a number of you here might be interested... >From the inimitable Julia team, the happy marriage of IPython and Julia brings IJulia: a fully native Julia kernel for the IPython architecture, capable of using all of our clients up to and including the notebook: http://nbviewer.ipython.org/url/jdj.mit.edu/~stevenj/IJulia%2520Preview.ipynb This all began back in March, with Jeff Bezanson and I sitting at a hotel coffee shop during the SIAM CSE'13 conference, hacking away at IPython/Julia integration instead of going to talks. After one more visit to Harvard a month later, and an incredibly intense week during our dev meeting here in Berkeley where +Steven G. Johnson and Stefan Karpinski joined Jeff, they've now pulled off the complete implementation. As described by Steven in his post announcing this: https://groups.google.com/d/msg/julia-users/3wM7RqJJ6R8/YNSiBBZcIvEJ it's still not for the faint of heart. Furthermore, once IPython 1.0 is out (coming in two days!), we'll need to make some changes to our messaging protocol to clean up some hacks they had to make. So this should still be considered 'tech preview' material for a few more months, but hopefully before long it will be production ready. My hat's off to the Julia team for this. Working with them has been an absolute pleasure, and their technical competence is just jaw-dropping. I look forward to lots of fun in the coming years in high-level technical computing by having both Python and Julia in my toolbox. Cheers, f -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From damianavila at gmail.com Wed Aug 7 01:23:48 2013 From: damianavila at gmail.com (=?ISO-8859-1?Q?Dami=E1n_Avila?=) Date: Wed, 07 Aug 2013 02:23:48 -0300 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. Message-ID: <5201D9E4.3070004@gmail.com> I began to play with javascript and jquery to make a "live" version of reveal slideshow... And I found myself with some problem trying to do some things... Essentially, all the javascript libraries available to do slideshows (reveal, deck, impress, flowtime, bespoke, jimpress, scrolldeck, etc...) needs a custom markup to make the slides... esentially adding html elements (ie.
<section> or <div>) or classes or IDs... So, I can modify the notebook cells on the fly to add the classes or IDs (I have done some of these things), but if I want to group some cells together in one slide, I need to "wrap" them inside a container (ie, a <section> or a <div>), and that is where the problem arises... because when I try to wrap a cell on the fly in the notebook, the cell is no longer responsive to the execution of code (and cannot be focused either)... Ah... if I "wrapInner", the cell is executed without problems, but I need to wrap outer, I mean, add html elements enclosing the current cell...

Note: beware if you try the following inside the notebook, because if you wrap a cell, it will be deleted after closing and reopening the notebook...

An example:

```
%%javascript
var cells = IPython.notebook.get_cells();
for (var i in cells) {
    var cell = cells[i];
    if (cell.metadata.slideshow.slide_type == 'fragment') {
        $('.cell:nth(' + i + ')').wrap('<div></div>');
    }
}
```

With the current state, I mean without grouping cells, I can build a live slideshow... but each cell would be a slide, and I feel that sometimes it is better to combine text and code cells in one slide (group cells)... Probably I am missing some basic conceptual ideas behind js and jquery... or IPython internals... if that is the case, please, if some of you can give me some pointers, it would be great. Sorry for the long post... Cheers. Damián.

From bussonniermatthias at gmail.com Wed Aug 7 03:40:15 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Wed, 7 Aug 2013 09:40:15 +0200 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <5201D9E4.3070004@gmail.com> References: <5201D9E4.3070004@gmail.com> Message-ID: <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com>

On 7 August 2013, at 07:23, Damián Avila wrote:
>
> Ah... if I "wrapInner", the cell is executed without problems, but I
> need to wrap outer, I mean add html elements enclosing the current cell...

Cell objects are attached to a DOM element; if you wrap outer, you "change" this element, so the binding no longer exists.

It's like the slight difference between (teleporting someone) and (cloning him and burning the original immediately): from an outsider's perspective it's the same.

You could probably re-attach cell.element to the wrapping tag, I think it is doable. But CodeMirror might also have some assumptions about the DOM.

I don't quite remember how cells are inserted and everything, but when wrapping I would try to do cell.to_json, create the sections, create n <div>s, then
cell.from_json on each of theses div. -- Matthias From nborwankar at gmail.com Wed Aug 7 13:54:32 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Wed, 7 Aug 2013 10:54:32 -0700 Subject: [IPython-dev] xkcd plots on Matplotlib gallery ?? Message-ID: I was about to show some enterprise folks at an F100 company how IPy NB is awesome for data science visualization - I went to matplotlib gallery and saw to my horror that *all* gallery examples have been replaced with xkcd plot styling. Or... was the site hacked? That was another thought. This makes it pretty much impossible to show to exactly those constituencies that will make IPy NB massively mainstream dues to its agile visualization power in data science. Well its possible to show it - but there goes the chance of being taken seriously as it is quite cartoonish. I enjoy it but does it have to be the default styling - it really detracts from the clean visualization that matplotlib brings. On the other hand, if it was hacked - apologies and sympathies. Nitin ------------------------------------------------------------------ Nitin Borwankar nborwankar at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsw at fnal.gov Wed Aug 7 13:57:34 2013 From: jsw at fnal.gov (Jon Wilson) Date: Wed, 7 Aug 2013 12:57:34 -0500 Subject: [IPython-dev] xkcd plots on Matplotlib gallery ?? In-Reply-To: References: Message-ID: <52028A8E.1080506@fnal.gov> It all looks normal to me... On 08/07/2013 12:54 PM, Nitin Borwankar wrote: > I was about to show some enterprise folks at an F100 company how IPy > NB is awesome for data science visualization - I went to matplotlib > gallery and saw to my horror that *all* gallery examples have been > replaced with xkcd plot styling. > > Or... was the site hacked? That was another thought. > > This makes it pretty much impossible to show to exactly those > constituencies that will make IPy NB massively mainstream dues to its > agile visualization power in data science. Well its possible to show > it - but there goes the chance of being taken seriously as it is quite > cartoonish. I enjoy it but does it have to be the default styling - > it really detracts from the clean visualization that matplotlib brings. > > On the other hand, if it was hacked - apologies and sympathies. > > Nitin > > > ------------------------------------------------------------------ > Nitin Borwankar > nborwankar at gmail.com > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiffyclub at gmail.com Wed Aug 7 13:58:09 2013 From: jiffyclub at gmail.com (Matt Davis) Date: Wed, 7 Aug 2013 10:58:09 -0700 Subject: [IPython-dev] xkcd plots on Matplotlib gallery ?? In-Reply-To: References: Message-ID: <23CED915-E2F0-471F-A62D-BEB48E60D093@gmail.com> Are you talking about http://matplotlib.org/gallery.html? That looks normal to me. - Matt On Aug 7, 2013, at 10:54 AM, Nitin Borwankar wrote: > I was about to show some enterprise folks at an F100 company how IPy NB is awesome for data science visualization - I went to matplotlib gallery and saw to my horror that *all* gallery examples have been replaced with xkcd plot styling. > > Or... was the site hacked? That was another thought. 
> > This makes it pretty much impossible to show to exactly those constituencies that will make IPy NB massively mainstream dues to its agile visualization power in data science. Well its possible to show it - but there goes the chance of being taken seriously as it is quite cartoonish. I enjoy it but does it have to be the default styling - it really detracts from the clean visualization that matplotlib brings. > > On the other hand, if it was hacked - apologies and sympathies. > > Nitin > > > ------------------------------------------------------------------ > Nitin Borwankar > nborwankar at gmail.com > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiffyclub at gmail.com Wed Aug 7 13:59:23 2013 From: jiffyclub at gmail.com (Matt Davis) Date: Wed, 7 Aug 2013 10:59:23 -0700 Subject: [IPython-dev] xkcd plots on Matplotlib gallery ?? In-Reply-To: References: Message-ID: You may have accidentally stumbled onto http://matplotlib.org/xkcd/gallery.html. - Matt On Aug 7, 2013, at 10:54 AM, Nitin Borwankar wrote: > I was about to show some enterprise folks at an F100 company how IPy NB is awesome for data science visualization - I went to matplotlib gallery and saw to my horror that *all* gallery examples have been replaced with xkcd plot styling. > > Or... was the site hacked? That was another thought. > > This makes it pretty much impossible to show to exactly those constituencies that will make IPy NB massively mainstream dues to its agile visualization power in data science. Well its possible to show it - but there goes the chance of being taken seriously as it is quite cartoonish. I enjoy it but does it have to be the default styling - it really detracts from the clean visualization that matplotlib brings. > > On the other hand, if it was hacked - apologies and sympathies. > > Nitin > > > ------------------------------------------------------------------ > Nitin Borwankar > nborwankar at gmail.com > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.reid at nist.gov Wed Aug 7 14:01:49 2013 From: andrew.reid at nist.gov (Andrew Reid) Date: Wed, 7 Aug 2013 14:01:49 -0400 Subject: [IPython-dev] xkcd plots on Matplotlib gallery ?? In-Reply-To: References: Message-ID: <20130807180148.GD5488@poppins.nist.gov> On Wed, Aug 07, 2013 at 10:54:32AM -0700, Nitin Borwankar wrote: > I was about to show some enterprise folks at an F100 company how IPy NB is > awesome for data science visualization - I went to matplotlib gallery and saw > to my horror that *all* gallery examples have been replaced with xkcd plot > styling. > > Or... was the site hacked? That was another thought. It seems to be working normally for me -- There's an "xkcd" sub-directory, http://matplotlib.org/xkcd/examples/index.html, where all the internal links go to xkcd-ified images, so it's perhaps non-obvious how to get out of it. The normal site, at http://matplotlib.org/examples/index.html, seems to be working OK for me right now. Maybe you just bookmarked the wrong one? -- A. -- Dr. Andrew C. E. 
Reid Physical Scientist, Computer Operations Administrator Center for Theoretical and Computational Materials Science National Institute of Standards and Technology, Mail Stop 8555 Gaithersburg MD 20899 USA andrew.reid at nist.gov From ccordoba12 at gmail.com Wed Aug 7 14:40:46 2013 From: ccordoba12 at gmail.com (=?UTF-8?B?Q2FybG9zIEPDs3Jkb2Jh?=) Date: Wed, 07 Aug 2013 13:40:46 -0500 Subject: [IPython-dev] Heartbeat channel paused by default in the qt frontend? Message-ID: <520294AE.3010805@gmail.com> Hi, I was trying to understand why some things stopped working after I updated Spyder to work with 1.0, and after a couple of hours of digging I noticed that the heartbeat channel is paused by default. This prevents us to detect and inform our users when our kernels have (accidentally) died. The fix is easy (just unpause it at client creation time) but I'm wondering if this is this the expected behavior or it's a bug (so you guys can fix it before 1.0 final). Cheers, Carlos From benjaminrk at gmail.com Wed Aug 7 15:36:04 2013 From: benjaminrk at gmail.com (MinRK) Date: Wed, 7 Aug 2013 12:36:04 -0700 Subject: [IPython-dev] Heartbeat channel paused by default in the qt frontend? In-Reply-To: <520294AE.3010805@gmail.com> References: <520294AE.3010805@gmail.com> Message-ID: Kernels are automatically restarted now, and owned kernels no longer use the heartbeat channel, it should only be used for *external* kernels (i.e. ones not started by the QtConsole). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccordoba12 at gmail.com Wed Aug 7 16:07:45 2013 From: ccordoba12 at gmail.com (=?UTF-8?B?Q2FybG9zIEPDs3Jkb2Jh?=) Date: Wed, 07 Aug 2013 15:07:45 -0500 Subject: [IPython-dev] Heartbeat channel paused by default in the qt frontend? In-Reply-To: References: <520294AE.3010805@gmail.com> Message-ID: <5202A911.9020900@gmail.com> Ok, got it. I was thinking that was the case (but better be sure :-). Our kernels have to be external ones because we need to connect them to our other plugins (which is currently not possible using regular kernels). El 07/08/13 14:36, MinRK escribi?: > Kernels are automatically restarted now, and owned kernels no longer > use the heartbeat channel, it should only be used for *external* > kernels (i.e. ones not started by the QtConsole). > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From juanlu001 at gmail.com Wed Aug 7 17:04:05 2013 From: juanlu001 at gmail.com (Juan Luis Cano) Date: Wed, 07 Aug 2013 23:04:05 +0200 Subject: [IPython-dev] New video of IPython notebook In-Reply-To: <8E62845C-813A-4A74-B2E7-D642AEAED4E2@gmail.com> References: <51FE2941.7010201@gmail.com> <8E62845C-813A-4A74-B2E7-D642AEAED4E2@gmail.com> Message-ID: <5202B645.5090003@gmail.com> On 08/05/2013 05:42 PM, Matthias BUSSONNIER wrote: > Hi there, > > > Le 4 ao?t 2013 ? 12:13, Juan Luis Cano a ?crit : > >> Hello everyone, >> >> First of all, congratulations to the team for all the hard work put in >> this 1.0 release: we're all very excited about it and I'm sure it's >> going to be the beginning of a big story. Keep rocking! :) >> >> I am the author of the video demo that Fernando gently featured on the >> IPython notebook page[1] (thanks F!), which apparently has received >> quite a few views (~15000 in a bit more than 5 months). 
I was thinking >> of recording a new one with the 1.0 version (I'm already triying it and >> works like a charm so far), so I would like to ask the team if there are >> any interesting features worth including or changes that should be made. >> For example, digging into the docs I just saw that a new %matplotlib >> magic was included, which IIUC is the preferred method to produce inline >> figures (instead of the %pylab mode). >> >> [1]: http://ipython.org/notebook >> >> Because of some time constraints I won't be able to have it by the time >> you make the release but anyway I would like to hear some feedback from >> the community :) Otherwise I will make a new video with a similar >> structure, in full English, maybe briefly show nbconvert and little >> more. And probably keep in shorter than 5 minutes and with some chiptune >> background music ;) > I liked the video, I really think we should make more standalone video of what can be done with the notebook. > Like really small tutorial/demos. > > If I had to do the video I wouldn't show how to start up a notebook from the command line, start the video in the dashboard. > Would it be possible to record only the current window, and possibly without any clutter like tabs/ menubar? > and much less wider than previous video, it would make it more readable I found that it was a little difficult to see > was was beeing typed. > > Now, if you are more comfortable with english than I am, I would really appreciate a voice that describe > things instead of the music, but that might be a little too much asked :-) > > As for the feature that need to be shown, I'm probably biased as I develop the notebook a lot, > and think that the input of user will be better :-) > > If you plan on making short transcript on what you think showing in a shared ether pad or along that would be great. > > If you want to "cheat" a little to type long cell, I might be able to code you a patch that allow to create cell already full of code or mimic > the fact that you are typing really fast :-) I should completely do that for my tutorial at euroscipy :-) Thanks a lot for the feedback! I will definitely keep it in mind when recording the next one. With the exception of the thing about recording some explanations: I hate my recorded voice so much, I just cannot do it :) I will try next week to create that ether pad. Thank you for offering the fast typing patch but I prefer keeping it as-is :) Too bad I won't be able to attend EuroSciPy, but if any of you is willing to come to PyCon Spain in November (or submit a proposal) you will be more than welcome! :D Cheers Juan Luis From benjaminrk at gmail.com Wed Aug 7 19:08:28 2013 From: benjaminrk at gmail.com (MinRK) Date: Wed, 7 Aug 2013 16:08:28 -0700 Subject: [IPython-dev] matplotlib webagg benchmarks In-Reply-To: <51F8300A.5080303@stsci.edu> References: <51F8300A.5080303@stsci.edu> Message-ID: On Tue, Jul 30, 2013 at 2:28 PM, Michael Droettboom wrote: > As promised in last week's Google Hangout to the IPython developers > meeting -- I have some concrete timings and numbers on the matplotlib > WebAgg backend in a couple of different scenarios. > > First, let me apologize -- the way I was timing binary websockets vs. > text websockets previously was wrong. The actual impact of it is much > smaller than I had originally estimated -- so the discussion about > whether to include binary websockets in IPython may have been all for > naught. > Part of our message spec includes binary blobs trailing after the JSONable message dicts. 
Currently this is used by `data_pub` and `apply` messages, but it could theoretically be extended to display data for streaming output, such as video or audio. Right now, we have no way of propagating that part of the message spec up to notebook frontends, because we do not yet have any binary messages that the notebook frontend can understand. In these cases, a switch to binary websocket may still make sense, even without a performance argument. > > For benchmarking, I used two different plots. One is the classic > "simple_plot.py" sine wave, which tests sort of the "easy case" where > very little of the image is updated in each frame, and the other was > "animation/dynamic_image.py" in which most of the plot is updated in > each frame. > > I tested both scenarios with client and server on my local machine, and > through an ssh tunnel that goes over wifi, the public university > network, to my home's 15/5 MBps cable connection 28 miles away and back. > > For (A), the average frame weighs in at around 20kb. For (B), it's > around 90kb. For base64, multiply by those numbers by 4 / 3. > > On my local machine, I can push through about 18 fps, so a bandwidth of > 2.8MBps (were it sustained, which it rarely is). On the tunnel, I > fluctuate between 7 and 10 fps, which is quite usable, and quite near > the practical upper limit on the bandwidth of that connection. > > However, the problematic thing for the remote connection is the > latency. Locally, I average a fairly steady 250ms to roundtrip from a > mouse event to an updated frame. Remotely, it fluctuates randomly > between 400ms (still usable) and 3000ms. Some more careful dynamic > scaling of events can probably make that easier to use, perhaps. I know > games often use UDP and handle robustness to packet loss in a different > way as a way to remove some of the latency of TCP. I have no idea if > such a thing would be possible over a web socket, of course. > > I could not measure any statistically significant change in framerate or > latency between a binary websocket and a non-binary one. However, there > is a 10% increase in CPU time on both the python side and the browser. > It so happens that I wasn't saturating my CPU, so it had no net impact. > Likewise, I am not saturating my bandwidth, so the additional size > doesn't matter in this case. But I suspect if either one of those > resources is starved, the additional 10% cpu time and 25% bandwidth > increase may matter. > Thanks for these numbers - I suspect the potential penalty for an extra hop between the kernel and the notebook will not be significant in any case where the kernel is local to the server and the client is remote. -MinRK > > Mike > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Wed Aug 7 19:55:15 2013 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 8 Aug 2013 08:55:15 +0900 Subject: [IPython-dev] matplotlib webagg benchmarks In-Reply-To: <51F8300A.5080303@stsci.edu> References: <51F8300A.5080303@stsci.edu> Message-ID: FYI, My wsaccel may make websocket faster. On Wed, Jul 31, 2013 at 6:28 AM, Michael Droettboom wrote: > As promised in last week's Google Hangout to the IPython developers > meeting -- I have some concrete timings and numbers on the matplotlib > WebAgg backend in a couple of different scenarios. 
> > First, let me apologize -- the way I was timing binary websockets vs. > text websockets previously was wrong. The actual impact of it is much > smaller than I had originally estimated -- so the discussion about > whether to include binary websockets in IPython may have been all for > naught. > > For benchmarking, I used two different plots. One is the classic > "simple_plot.py" sine wave, which tests sort of the "easy case" where > very little of the image is updated in each frame, and the other was > "animation/dynamic_image.py" in which most of the plot is updated in > each frame. > > I tested both scenarios with client and server on my local machine, and > through an ssh tunnel that goes over wifi, the public university > network, to my home's 15/5 MBps cable connection 28 miles away and back. > > For (A), the average frame weighs in at around 20kb. For (B), it's > around 90kb. For base64, multiply by those numbers by 4 / 3. > > On my local machine, I can push through about 18 fps, so a bandwidth of > 2.8MBps (were it sustained, which it rarely is). On the tunnel, I > fluctuate between 7 and 10 fps, which is quite usable, and quite near > the practical upper limit on the bandwidth of that connection. > > However, the problematic thing for the remote connection is the > latency. Locally, I average a fairly steady 250ms to roundtrip from a > mouse event to an updated frame. Remotely, it fluctuates randomly > between 400ms (still usable) and 3000ms. Some more careful dynamic > scaling of events can probably make that easier to use, perhaps. I know > games often use UDP and handle robustness to packet loss in a different > way as a way to remove some of the latency of TCP. I have no idea if > such a thing would be possible over a web socket, of course. > > I could not measure any statistically significant change in framerate or > latency between a binary websocket and a non-binary one. However, there > is a 10% increase in CPU time on both the python side and the browser. > It so happens that I wasn't saturating my CPU, so it had no net impact. > Likewise, I am not saturating my bandwidth, so the additional size > doesn't matter in this case. But I suspect if either one of those > resources is starved, the additional 10% cpu time and 25% bandwidth > increase may matter. > > Mike > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- INADA Naoki -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Aug 7 19:59:48 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 7 Aug 2013 16:59:48 -0700 Subject: [IPython-dev] matplotlib webagg benchmarks In-Reply-To: <51F8300A.5080303@stsci.edu> References: <51F8300A.5080303@stsci.edu> Message-ID: Hi Mike, thanks a lot for providing these numbers... I think for now, the plan we hatched at the dev meeting continues to look reasonable (integrate interactive webagg support into the %matplotlib magic so it would be seamless to users on localhost or very open networks). But the fact that the overhead is lower than we'd thought lends weight to the idea of giving mpl an easier way to integrate this functionality into our existing design before we cross the binary ws bridge... One way or another, exciting times! 
Cheers, f From damianavila at gmail.com Thu Aug 8 02:01:29 2013 From: damianavila at gmail.com (=?ISO-8859-1?Q?Dami=E1n_Avila?=) Date: Thu, 08 Aug 2013 03:01:29 -0300 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> Message-ID: <52033439.30406@gmail.com> El 07/08/13 04:40, Matthias BUSSONNIER escribi?: > Le 7 ao?t 2013 ? 07:23, Dami?n Avila a ?crit : > >> Ah... if I "wrapinner", the cell is executed without problems, but I >> need to wrap outer, I mean add html elements enclosing the current cell... > Cell object are attached to a DOM element, if yo wrap outer, you "change" this element > so the binding do not exist anymore. > > it's like this slight difference between (teleporting someone), and (cloning him and burn the original immediately) > from an outsider perspective it's the same. > > You could probably re-attach cell.element to the wrapped tag, I think it is doable. But code mirror might also > have some assumption on the DOM. > > I don't quite remember how are cell inserted and everything, but I would try when wrapping to do > cell.to_json, > create the sections > create n
> <div>s, then
> cell.from_json on each of these div.

Hi Matthias, thanks for your feedback... I was experimenting in the notebook with no successful results. Let me paste a little code to explain more deeply...

Essentially, I followed your guidelines... I select a cell marked as a fragment and pass it to JSON. I create a new div prepended to 'div#notebook-container'. Then I create a new code cell (I took the "kernel" from the current cells, is there a better way to do it?) and pass back from JSON into this new "ccell". Finally, I append "ccell.element" to the new div... In this way, I get a copy of the cell, but it is not executable... If I do the same, but instead of appending "ccell.element" to the new div I append it to 'div#notebook-container', the cell is executable again... so I think I am losing the DOM again... (ah... in both cases I get the cell, but the input form is minimized and you have to click on it to expand it and write content... but this is not the main problem... just a detail for now...)

```
%%javascript
var cells = IPython.notebook.get_cells();
var jcell;
for (var i in cells) {
    var cell = cells[i];
    if (cell.metadata.slideshow.slide_type == 'fragment') {
        jcell = cell.toJSON();
    }
}
$('div#notebook-container').prepend('<div id="slide"></div>');
var kernel = cells[0].kernel;
var ccell = new IPython.CodeCell(kernel);
ccell.fromJSON(jcell);
$("div#slide").append(ccell.element);
```

Any insights?

Cheers. Damián.

From bussonniermatthias at gmail.com Thu Aug 8 02:26:22 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Thu, 8 Aug 2013 08:26:22 +0200 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <52033439.30406@gmail.com> References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> <52033439.30406@gmail.com> Message-ID: <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com>

On 8 August 2013, at 08:01, Damián Avila wrote:

> In this way, I get a copy of the cell, but it is not executable...

Yes it is; Shift-Enter is just not bound to it. Shift-Enter is handled by IPython.notebook and is bound to execute the selected cell.

As the notebook object does not know about your cell, you are just sending the execute request to the wrong cell.

Adding this:

window.ccell = cell;

and then ccell.execute() in the JS console works.

-- M
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From damianavila at gmail.com Thu Aug 8 02:53:21 2013 From: damianavila at gmail.com (=?ISO-8859-1?Q?Dami=E1n_Avila?=) Date: Thu, 08 Aug 2013 03:53:21 -0300 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> <52033439.30406@gmail.com> <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> Message-ID: <52034061.6040400@gmail.com>

On 08/08/13 03:26, Matthias BUSSONNIER wrote:
>
> On 8 August 2013, at 08:01, Damián Avila wrote:
>
>> In this way, I get a copy of the cell, but it is not executable...
>
> Yes it is; Shift-Enter is just not bound to it.
> Shift-Enter is handled by IPython.notebook and is bound to execute the
> selected cell.
>
> As the notebook object does not know about your cell, you are just sending the
> execute request to the wrong cell.
>
> Adding this:
>
> window.ccell = cell;
>
> and then ccell.execute() in the JS console works.
>
> --
> M
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev

Thanks! I will test it tomorrow... going to bed now... thanks!

One step closer to "live" reveal ;-)

Cheers. Damián.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gmbecker at ucdavis.edu Thu Aug 8 05:17:45 2013 From: gmbecker at ucdavis.edu (Gabriel Becker) Date: Thu, 8 Aug 2013 02:17:45 -0700 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <52034061.6040400@gmail.com> References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> <52033439.30406@gmail.com> <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> <52034061.6040400@gmail.com> Message-ID:

Damian,

I happen to be familiar with this issue for reasons that dovetail with your desired use case (though they aren't identical). More on that near the end of this email.

The issue you're running into is that the javascript in the IPython notebook assumes that all cells are direct children of the container element representing the entire notebook.

You can see the assumption here (from IPython/html/static/notebook/notebook.js)

/** * Get all cell elements in the notebook.
* * @method get_cell_elements * @return {jQuery} A selector of all cell elements */ Notebook.prototype.get_cell_elements = function () { return this.container.children("div.cell"); }; Note the use of the children() method. That is what is killing you. All of the indexing, etc in notebook.js is based off of what is returned from get_cell_elements. Matthias is of course correct that the cell still exists, and all the cell-level machinery will still work (thus ccell.execute() working fine). The issue is that NONE of the notebook-level machinery is going to work. This means many of the things you probably think of as core behaviors of the notebook (selecting the next cell when a cell is executed, deleting or moving cells, running the entire notebook from start to finish, the notebook understanding that that cell is selected, etc) will fail with respect to that particular cell, because as far as the notebook-level js is concerned, the cell *isn't in the notebook at all*. I have a research-stage fork of ipython at https://github.com/gmbecker/ipython which allows cells to be contained within other cells. This required me to change the indexing strategy, which is why I'm familiar with this bit of the codebase. Nested cells would remove the need for the extra container div, because the grouping would be happening within the cells themselves. You would assumedly be able to just attach the css/js slide machinery to the parent grouping cells themselves. There was a very lengthy discussion about the concept of these nesting type cells, their benefits and their drawbacks, and whether they should be pursued here. The long and short (AFAIK) of it is that the IPython core team is not yet convinced that the idea is mature enough to pursue. Furthermore, the fact that it requires modification of a core assumption of the notebook machinery makes such pursuit unlikely in at least the short and medium terms, if ever. The team is, of course, also very busy doing all sorts of other awesome stuff as detailed on their roadmap. Anyway, all that doesn't really help you now. Here is something that might: If custom js/extensions are able to clobber core machinery on the IPython object then replacing IPython.Notebook.prototype.get_cell_elements with /** ** Version of get_cell_elements that will see cell divs at any depth in the HTML tree, allowing container divs, etc to be used without breaking notebook machinery. ** You'll need to make sure the cells are getting detected in the right order, but I think they will **/ Notebook.prototype.get_cell_elements = function () { return this.container.*find*("div.cell"); }; Will get your cell noticed again. Or, if extensions get loaded after the notebook object exists, you might have to modify the actual notebook instead of its prototype. That is stored in IPython.notebook if I'm not mistaken. Figuring out how to get the notebook to store (in ipynb form), remember, and restore the fact that you grouped your cells into slides is possible in principle using the metadata facilities already in place. Because the metadata is at the individual cell level, however, prepare for some "fun" hackrobatics implementing the ability to track the groupings in a non-fragile way (e.g. able to handle regrouping or inserting new slides). If the core machinery is protected from modification by js in the extensions somehow I think it would take a *lot* of wheel reinvention on your part or intervention from the core devs to get the functionality you want. HTH. ~G ... 
Someday I will write a message to this list that isn't a novel. But not today. On Wed, Aug 7, 2013 at 11:53 PM, Dami?n Avila wrote: > El 08/08/13 03:26, Matthias BUSSONNIER escribi?: > > > Le 8 ao?t 2013 ? 08:01, Dami?n Avila a ?crit : > > In this way, I get the a copy of the cell but it is not executable? > > > > Yes it is, Shift-Enter is just not bound to it. > Shift-Enter is handled by IPython.notebook and is bound to execute > selected cell > > As notebook object does nt know of your cell, you are just sending the > execute request to the wrong cell. > > adding this : > > window.ccell = cell; > > and then ccell.execute() in JSconsole works. > > -- > M > > > > _______________________________________________ > IPython-dev mailing listIPython-dev at scipy.orghttp://mail.scipy.org/mailman/listinfo/ipython-dev > > > Thanks! I will test it tomorrow... going to bed now... thanks! > > One step closer to "live" reveal ;-) > > Cheers. > > Dami?n. > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Gabriel Becker Graduate Student Statistics Department University of California, Davis -------------- next part -------------- An HTML attachment was scrubbed... URL: From hans_meine at gmx.net Thu Aug 8 08:52:36 2013 From: hans_meine at gmx.net (Hans Meine) Date: Thu, 08 Aug 2013 14:52:36 +0200 Subject: [IPython-dev] IPyTables - simple table construction for IPython In-Reply-To: References: Message-ID: <1529378.gENb797ell@hmeine-pc> Am Mittwoch, 6. M?rz 2013, 12:16:34 schrieb Thomas Kluyver: > Demo: http://nbviewer.ipython.org/5098827 > Module: https://gist.github.com/takluyver/5098835 > > This is a prototype that I've thrown together quickly: feedback is very > welcome. Obvious extensions include: > > - Expose more style attributes for customisation (so far it's just text and > background colour) > - Add a plain text repr(), so that tables are useable in the terminal I could help with the latter; at least I have code for pretty-printing tables in python in a terminal already. (Can I clone a gist? IIRC yes, but I never worked with gists so far.) What about sorting by columns? (I.e. spicing up the HTML version with appropriate JS magic?) Best regards Hans From bussonniermatthias at gmail.com Thu Aug 8 09:03:44 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Thu, 8 Aug 2013 15:03:44 +0200 Subject: [IPython-dev] IPyTables - simple table construction for IPython In-Reply-To: <1529378.gENb797ell@hmeine-pc> References: <1529378.gENb797ell@hmeine-pc> Message-ID: <2D2A3D1C-26B8-465F-B743-23D8931A58E6@gmail.com> Le 8 ao?t 2013 ? 14:52, Hans Meine a ?crit : > Am Mittwoch, 6. M?rz 2013, 12:16:34 schrieb Thomas Kluyver: >> Demo: http://nbviewer.ipython.org/5098827 >> Module: https://gist.github.com/takluyver/5098835 >> >> This is a prototype that I've thrown together quickly: feedback is very >> welcome. Obvious extensions include: >> >> - Expose more style attributes for customisation (so far it's just text and >> background colour) >> - Add a plain text repr(), so that tables are useable in the terminal > > I could help with the latter; at least I have code for pretty-printing tables > in python in a terminal already. (Can I clone a gist? IIRC yes, but I never > worked with gists so far.) Yes, side left panel : clone this gist : git clone https://gist.github.com/5098835.git (I would suggest forking it first) > > What about sorting by columns? (I.e. 
spicing up the HTML version with > appropriate JS magic?) IMHO : let's wait for Js-plugin so that you "just" have to publish this table data instead of hacking something that display javascript and HTML. It would also have the advantage that you could have a plugin that does it in emacs or nbconvert. -- Matthias > > Best regards > Hans > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From markbak at gmail.com Thu Aug 8 11:33:10 2013 From: markbak at gmail.com (Mark Bakker) Date: Thu, 8 Aug 2013 17:33:10 +0200 Subject: [IPython-dev] tiny equations in Notebook on Windows Message-ID: Dear List, I created some notebooks for a class I am teaching on my Mac. The students, however, are mostly having Windows machines. It turns out the equations used in the Notebook markup cells are displayed really tiny on Windows. Any (easy) solution to this problem? They look beautiful on my Mac. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bussonniermatthias at gmail.com Thu Aug 8 11:48:45 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Thu, 8 Aug 2013 17:48:45 +0200 Subject: [IPython-dev] tiny equations in Notebook on Windows In-Reply-To: References: Message-ID: <30281487-C88C-491D-A3EF-09764B3E0E31@gmail.com> Le 8 ao?t 2013 ? 17:33, Mark Bakker a ?crit : > Dear List, > > I created some notebooks for a class I am teaching on my Mac. > The students, however, are mostly having Windows machines. > > It turns out the equations used in the Notebook markup cells are displayed really tiny on Windows. Right click on a Math equation > Math Settings > Scale All Math ? (enter value) But need to be done on each notebooks by each student. I think it is more a problem of browser than OS... -- Matt > Any (easy) solution to this problem? They look beautiful on my Mac. > > Thanks, > > Mark > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From damianavila at gmail.com Thu Aug 8 12:12:23 2013 From: damianavila at gmail.com (=?ISO-8859-1?Q?Dami=E1n_Avila?=) Date: Thu, 08 Aug 2013 13:12:23 -0300 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> <52033439.30406@gmail.com> <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> Message-ID: <5203C367.5010203@gmail.com> El 08/08/13 03:26, Matthias BUSSONNIER escribi?: > > Le 8 ao?t 2013 ? 08:01, Dami?n Avila a ?crit : > >> In this way, I get the a copy of the cell but it is not executable... > > > Yes it is, Shift-Enter is just not bound to it. > Shift-Enter is handled by IPython.notebook and is bound to execute > selected cell > > As notebook object does nt know of your cell, you are just sending the > execute request to the wrong cell. > > adding this : > > window.ccell = cell; > > and then ccell.execute() in JSconsole works. > > -- > M > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev I do not understand what `window.ccell = cell;` is doing... so I do not know where to add it... ;-) Dami?n. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bussonniermatthias at gmail.com Thu Aug 8 12:28:55 2013 From: bussonniermatthias at gmail.com (Matthias Bussonnier) Date: Thu, 8 Aug 2013 18:28:55 +0200 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <5203C367.5010203@gmail.com> References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> <52033439.30406@gmail.com> <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> <5203C367.5010203@gmail.com> Message-ID: <4C90697A-48D4-4836-AAD7-14496642D567@gmail.com> Window is the global object. You can do window.ccell = ccell pretty much everywhere ccell in defined. Then you can just access ccell from te JavaScript console Envoy? de mon iPhone Le 8 ao?t 2013 ? 18:12, Dami?n Avila a ?crit : > El 08/08/13 03:26, Matthias BUSSONNIER escribi?: >> >> Le 8 ao?t 2013 ? 08:01, Dami?n Avila a ?crit : >> >>> In this way, I get the a copy of the cell but it is not executable? >> >> >> Yes it is, Shift-Enter is just not bound to it. >> Shift-Enter is handled by IPython.notebook and is bound to execute selected cell >> >> As notebook object does nt know of your cell, you are just sending the execute request to the wrong cell. >> >> adding this : >> >> window.ccell = cell; >> >> and then ccell.execute() in JSconsole works. >> >> -- >> M >> >> >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > I do not understand what `window.ccell = cell;` is doing... so I do not know where to add it... ;-) > > Dami?n. > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From abie at uw.edu Thu Aug 8 12:34:49 2013 From: abie at uw.edu (Abraham D. Flaxman) Date: Thu, 8 Aug 2013 16:34:49 +0000 Subject: [IPython-dev] IPyTables - simple table construction for IPython In-Reply-To: <1529378.gENb797ell@hmeine-pc> References: <1529378.gENb797ell@hmeine-pc> Message-ID: This is quite cool! You might want to develop it with an eye towards inclusion in pandas, a very nice table data package that already has some html representation of notebooks when using ipython. Perhaps this is already familiar to you, but if not, see, for example, http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/01%20-%20Lesson.ipynb -----Original Message----- From: ipython-dev-bounces at scipy.org [mailto:ipython-dev-bounces at scipy.org] On Behalf Of Hans Meine Sent: Thursday, August 08, 2013 5:53 AM To: IPython developers list Subject: Re: [IPython-dev] IPyTables - simple table construction for IPython Am Mittwoch, 6. M?rz 2013, 12:16:34 schrieb Thomas Kluyver: > Demo: http://nbviewer.ipython.org/5098827 > Module: https://gist.github.com/takluyver/5098835 > > This is a prototype that I've thrown together quickly: feedback is > very welcome. Obvious extensions include: > > - Expose more style attributes for customisation (so far it's just > text and background colour) > - Add a plain text repr(), so that tables are useable in the terminal I could help with the latter; at least I have code for pretty-printing tables in python in a terminal already. (Can I clone a gist? IIRC yes, but I never worked with gists so far.) What about sorting by columns? (I.e. spicing up the HTML version with appropriate JS magic?) 
Best regards Hans _______________________________________________ IPython-dev mailing list IPython-dev at scipy.org http://mail.scipy.org/mailman/listinfo/ipython-dev From damianavila at gmail.com Thu Aug 8 12:52:09 2013 From: damianavila at gmail.com (=?ISO-8859-1?Q?Dami=E1n_Avila?=) Date: Thu, 08 Aug 2013 13:52:09 -0300 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <4C90697A-48D4-4836-AAD7-14496642D567@gmail.com> References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> <52033439.30406@gmail.com> <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> <5203C367.5010203@gmail.com> <4C90697A-48D4-4836-AAD7-14496642D567@gmail.com> Message-ID: <5203CCB9.7000209@gmail.com> El 08/08/13 13:28, Matthias Bussonnier escribi?: > Window is the global object. You can do window.ccell = ccell pretty > much everywhere ccell in defined. Then you can just access ccell from > te JavaScript console > > Envoy? de mon iPhone > > Le 8 ao?t 2013 ? 18:12, Dami?n Avila > a ?crit : > >> El 08/08/13 03:26, Matthias BUSSONNIER escribi?: >>> >>> Le 8 ao?t 2013 ? 08:01, Dami?n Avila a ?crit : >>> >>>> In this way, I get the a copy of the cell but it is not executable... >>> >>> >>> Yes it is, Shift-Enter is just not bound to it. >>> Shift-Enter is handled by IPython.notebook and is bound to execute >>> selected cell >>> >>> As notebook object does nt know of your cell, you are just sending >>> the execute request to the wrong cell. >>> >>> adding this : >>> >>> window.ccell = cell; >>> >>> and then ccell.execute() in JSconsole works. >>> >>> -- >>> M >>> >>> >>> >>> _______________________________________________ >>> IPython-dev mailing list >>> IPython-dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> I do not understand what `window.ccell = cell;` is doing... so I do >> not know where to add it... ;-) >> >> Dami?n. >> >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev Ah... window.ccell = ccell not window.ccell = cell ;-) I did not understand it because the missing "c"... sorry, I would have to have noticed... Thanks. Dami?n. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bussonniermatthias at gmail.com Thu Aug 8 12:58:55 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Thu, 8 Aug 2013 18:58:55 +0200 Subject: [IPython-dev] Playing with cells in the notebook... and some problems. In-Reply-To: <5203CCB9.7000209@gmail.com> References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> <52033439.30406@gmail.com> <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> <5203C367.5010203@gmail.com> <4C90697A-48D4-4836-AAD7-14496642D567@gmail.com> <5203CCB9.7000209@gmail.com> Message-ID: <41D36786-6C0B-445E-B6D0-B0FD1A74FCA8@gmail.com> Le 8 ao?t 2013 ? 18:52, Dami?n Avila a ?crit : > > Ah... window.ccell = ccell not window.ccell = cell ;-) > I did not understand it because the missing "c"... sorry, I would have to have noticed... F!?&g autocorrection. -- Matt From damianavila at gmail.com Thu Aug 8 13:07:31 2013 From: damianavila at gmail.com (=?ISO-8859-1?Q?Dami=E1n_Avila?=) Date: Thu, 08 Aug 2013 14:07:31 -0300 Subject: [IPython-dev] Playing with cells in the notebook... 
and some problems. In-Reply-To: References: <5201D9E4.3070004@gmail.com> <9E273B98-1749-4C2F-A086-F4310598B4AE@gmail.com> <52033439.30406@gmail.com> <2591250C-DD48-4ED4-9CB6-083A8009B2FB@gmail.com> <52034061.6040400@gmail.com> Message-ID: <5203D053.8080006@gmail.com> El 08/08/13 06:17, Gabriel Becker escribi?: > Damian, > > I happen to be familiar with this issue for reasons that dovetail with > your desired usecase (though they aren't identical). More on that near > the end of this email. > > The issue you're running into is that the javascript in the IPython > notebook assumes that all cells are direct children of the container > element representing the entire notebook. > > You can see the assumption here (from > IPython/html/static/notebook/notebook.js) > > /** > * Get all cell elements in the notebook. > * > * @method get_cell_elements > * @return {jQuery} A selector of all cell elements > */ > Notebook.prototype.get_cell_elements = function () { > return this.container.children("div.cell"); > }; > > Note the use of the children() method. That is what is killing you. > All of the indexing, etc in notebook.js is based off of what is > returned from get_cell_elements. > > Matthias is of course correct that the cell still exists, and all the > cell-level machinery will still work (thus ccell.execute() working fine). > > The issue is that NONE of the notebook-level machinery is going to > work. This means many of the things you probably think of as core > behaviors of the notebook (selecting the next cell when a cell is > executed, deleting or moving cells, running the entire notebook from > start to finish, the notebook understanding that that cell is > selected, etc) will fail with respect to that particular cell, because > as far as the notebook-level js is concerned, the cell /isn't in the > notebook at all/. > > I have a research-stage fork of ipython at > https://github.com/gmbecker/ipython which allows cells to be contained > within other cells. This required me to change the indexing strategy, > which is why I'm familiar with this bit of the codebase. > > Nested cells would remove the need for the extra container div, > because the grouping would be happening within the cells themselves. > You would assumedly be able to just attach the css/js slide machinery > to the parent grouping cells themselves. > > There was a very lengthy discussion about the concept of these nesting > type cells, their benefits and their drawbacks, and whether they > should be pursued here > . > The long and short (AFAIK) of it is that the IPython core team is not > yet convinced that the idea is mature enough to pursue. Furthermore, > the fact that it requires modification of a core assumption of the > notebook machinery makes such pursuit unlikely in at least the short > and medium terms, if ever. > > The team is, of course, also very busy doing all sorts of other > awesome stuff as detailed on their roadmap. > > Anyway, all that doesn't really help you now. Here is something that > might: > > If custom js/extensions are able to clobber core machinery on the > IPython object then replacing > IPython.Notebook.prototype.get_cell_elements with > > /** > ** Version of get_cell_elements that will see cell divs at any depth > in the HTML tree, allowing container divs, etc to be used without > breaking notebook machinery. 
> ** You'll need to make sure the cells are getting detected in the > right order, but I think they will > **/ > Notebook.prototype.get_cell_elements = function () { > return this.container.*find*("div.cell"); > }; > > Will get your cell noticed again. Or, if extensions get loaded after > the notebook object exists, you might have to modify the actual > notebook instead of its prototype. That is stored in IPython.notebook > if I'm not mistaken. > > Figuring out how to get the notebook to store (in ipynb form), > remember, and restore the fact that you grouped your cells into slides > is possible in principle using the metadata facilities already in > place. Because the metadata is at the individual cell level, however, > prepare for some "fun" hackrobatics implementing the ability to track > the groupings in a non-fragile way (e.g. able to handle regrouping or > inserting new slides). > > If the core machinery is protected from modification by js in the > extensions somehow I think it would take a /lot/ of wheel reinvention > on your part or intervention from the core devs to get the > functionality you want. > > HTH. > ~G > > ... Someday I will write a message to this list that isn't a novel. > But not today. > > > > On Wed, Aug 7, 2013 at 11:53 PM, Dami?n Avila > wrote: > > El 08/08/13 03:26, Matthias BUSSONNIER escribi?: >> >> Le 8 ao?t 2013 ? 08:01, Dami?n Avila a ?crit : >> >>> In this way, I get the a copy of the cell but it is not >>> executable... >> >> >> Yes it is, Shift-Enter is just not bound to it. >> Shift-Enter is handled by IPython.notebook and is bound to >> execute selected cell >> >> As notebook object does nt know of your cell, you are just >> sending the execute request to the wrong cell. >> >> adding this : >> >> window.ccell = cell; >> >> and then ccell.execute() in JSconsole works. >> >> -- >> M >> >> >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > Thanks! I will test it tomorrow... going to bed now... thanks! > > One step closer to "live" reveal ;-) > > Cheers. > > Dami?n. > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > -- > Gabriel Becker > Graduate Student > Statistics Department > University of California, Davis > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev Gabriel, thank for your tip... It was very very helpful... Now, I can run the cells wrapped inside new divs... OK, time to deal with metadata and grouping of the cells... Seriously, you help and the detail description is very appreciated... BTW, I will probably go deeper in your fork to take some ideas... Cheers. Dami?n. -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Thu Aug 8 15:08:43 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 8 Aug 2013 12:08:43 -0700 Subject: [IPython-dev] IPyTables - simple table construction for IPython In-Reply-To: <1529378.gENb797ell@hmeine-pc> References: <1529378.gENb797ell@hmeine-pc> Message-ID: On 8 August 2013 05:52, Hans Meine wrote: > What about sorting by columns? (I.e. spicing up the HTML version with > appropriate JS magic?)
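The rich table display being discussed here hangs off IPython's `_repr_html_` display hook: any object that defines a `_repr_html_` method is rendered as HTML when it is the result of a notebook cell. A minimal sketch of that idea, with illustrative names only (this is not the actual IPyTables or ipy_table API, and sortable columns would still need the kind of JavaScript Hans asks about):

# Minimal sketch of an object that renders itself as an HTML table in the
# notebook via the _repr_html_ hook. Names are illustrative, not IPyTables API.
class SimpleTable(object):
    def __init__(self, header, rows):
        self.header = header
        self.rows = rows

    def _repr_html_(self):
        # IPython's display machinery calls _repr_html_, when present,
        # to obtain an HTML representation of the object.
        parts = ["<table>", "<tr>"]
        parts += ["<th>{0}</th>".format(h) for h in self.header]
        parts.append("</tr>")
        for row in self.rows:
            cells = "".join("<td>{0}</td>".format(c) for c in row)
            parts.append("<tr>" + cells + "</tr>")
        parts.append("</table>")
        return "".join(parts)

# Evaluating such an object as the last expression of a cell shows the table:
# SimpleTable(["name", "value"], [["alpha", 1], ["beta", 2]])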
> We should have a look at how that works with security - IIRC, the plan is that notebooks will be able to run arbitrary Javascript on execution, but not on loading. But even just getting a good HTML and terminal table there is a pretty big win, I thin. Matthias has explained about forking and cloning gists. At some point, we should probably put this into a proper git repository somewhere. We should also rename it to something more distinct from ipy_table. Maybe something like tableview? Abraham: > This is quite cool! You might want to develop it with an eye towards inclusion in pandas, a very nice table data package that already has some html representation of notebooks when using ipython. Thanks Abraham. I think pandas is already ahead on this front. The aim of IPyTables (or whatever we rename it to) is to have a simple high-level interface for building visual tables in hand-written notebook code. If we end up improving the basic table machinery, we could pull out the low level stuff to share with pandas, but that's probably something for another day. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Aug 8 21:35:56 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Aug 2013 18:35:56 -0700 Subject: [IPython-dev] [ANN] IPython 1.0 is finally released, nearly 12 years in the making! Message-ID: Hi all, I am incredibly thrilled, on behalf of the amazing IPython Dev Team, to announce the official release of IPython 1.0 today, an effort nearly 12 years in the making. The previous version (0.13) was released on June 30, 2012, and in this development cycle we had: ~12 months of work. ~700 pull requests merged. ~600 issues closed (non-pull requests). contributions from ~150 authors. ~4000 commits. # A little context What does "1.0" mean for IPython? Obviously IPython has been a staple of the scientific Python community for years, and we've made every effort to make it a robust and production ready tool for a long time, so what exactly do we mean by tagging this particular release as 1.0? Basically, we feel that the core design of IPython, and the scope of the project, is where we want it to be. What we have today is what we consider a reasonably complete, design- and scope-wise, IPython 1.0: an architecture for interactive computing, that can drive kernels in a number of ways using a well-defined protocol, and rich and powerful clients that let users control those kernels effectively. Our different clients serve different needs, with the old workhorse of the terminal still being very useful, but much of our current development energy going into the Notebook, obviously. The Notebook enables interactive exploration to become Literate Computing, bridging the gaps from individual work to collaboration and publication, all with an open file format that is a direct record of the underlying communication protocol. There are obviously plenty of open issues (many of them very important) that need fixing, and large and ambitious new lines of development for the years to come. But the work of the last four years, since the summer of 2009 when Brian Granger was able to devote a summer (thanks to funding from the NiPy project - nipy.org) to refactoring the old IPython core code, finally opened up or infrastructure for real innovation. 
By disentangling what was a useful but impenetrable codebase, it became possible for us to start building a flexible, modern system for interactive computing that abstracted the old REPL model into a generic protocol that kernels could use to talk to clients. This led at first to the creation of the Qt console, and then to the Notebook and out-of-process terminal client. It also allowed us to (finally!) unify our parallel computing machinery with the rest of the interactive system, which Min Ragan-Kelley pulled off in a development tour de force that involved rewriting in a few weeks a huge and complex Twisted-based system. We are very happy with how the Notebook work has turned out, and it seems the entire community agrees with us, as the uptake has been phenomenal. Back from the very first "IPython 0.0.1" that I started in 2001: https://gist.github.com/fperez/1579699 there were already hints of tools like Mathematica: it was my everyday workhorse as a theoretical physicist and I found its Notebook environment invaluable. But as a grad student trying out "just an afternoon hack" (IPython was my very first Python program as I was learning the language), I didn't have the resources, skills or vision to attempt building an entire notebook system, and to be honest the tools of the day would have made that enterprise a miserable one. But those ideas were always driving our efforts, and as IPython started becoming a project with a team, we made multiple attempts to get a good Notebook built around IPython. Those interested can read an old blog post of mine with the history (http://blog.fperez.org/2012/01/ipython-notebook-historical.html). The short story is that in 2011, on our sixth attempt, Brian was again able to devote a focused summer into using our client-server architecture and, with the stack of the modern web (Javascript, CSS, websockets, Tornado, ...), finally build a robust system for Literate Computing across programming languages. Today, thanks to the generous support and vision of Josh Greenberg at the Alfred P. Sloan Foundation, we are working very hard on building the notebook infrastructure, and this release contains major advances on that front. We have high hopes for what we'll do next; as a glimpse of the future that this enables, now there is a native Julia kernel that speaks to our clients, notebook included: https://github.com/JuliaLang/IJulia.jl. # Team I can't stress enough how impressed I am with the work people are doing in IPython, and what a privilege it is to work with colleagues like these. Brian Granger and Min Ragan-Kelley joined IPython around 2005, initially working on the parallel machinery, but since ~ 2009 they have become the heart of the project. Today Min is our top committer and knows our codebase better than anyone else, and I can't imagine better partners for an effort like this. And from regulars in our core team like Thomas Kluyver, Matthias Bussonnier, Brad Froehle and Paul Ivanov to newcomers like Jonathan Frederic and Zach Sailer, in addition to the many more whose names are in our logs, we have a crazy amount of energy being poured into IPython. I hope we'll continue to harness it productively! The full list of contributors to this release can be seen here: http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html # Release highlights * nbconvert: this is the major piece of new functionality in this cycle, and was an explicit part of our roadmap (https://github.com/ipython/ipython/wiki/Roadmap:-IPython). 
nbconvert is now an IPython subcommand to convert notebooks into other formats such as HTML or LaTeX, but more importantly, it's a very flexible system that lets you write custom templates to generate new output with arbitrary control over the formatting and transformations that are applied to the input. We want to stress that despite the fact that a huge amount of work went into nbconvert, this should be considered a *tech preview* release. We've come to realize how complex this problem is, and while we'll make every effort to keep the high-level command-line syntax and APIs as stable as possible, it is quite likely that the internals will continue to evolve, possibly in backwards-incompatible ways. So if you start building services and libraries that make heavy use of the nbconvert internals, please be prepared for some turmoil in the months to come, and ping us on the dev list with questions or concerns. * Notebook improvements: there has been a ton of polish work in the notebook at many levels, though the file format remains unchanged from 0.13, so you shouldn't have any problems sharing notebooks with colleagues still using 0.13. - Autosave: probably the most oft-requested feature, the notebook server now autosaves your files! You can still hit Ctrl-S to force a manual save (which also creates a special 'checkpoint' you can come back to). - The notebook supports raw_input(), and thus also %debug. This was probably the main deficiency of the notebook as a client compared to the terminal/qtconsole, and it has been finally fixed. - Add %%html, %%svg, %%javascript, and %%latex cell magics for writing raw output in notebook cells. - Fix an issue parsing LaTeX in markdown cells, which required users to type \\\, instead of \\. - Images support width and height metadata, and thereby 2x scaling (retina support). - %%file has been renamed to %%writefile; %%file is deprecated. * The input transformation code has been updated and rationalized. This is a somewhat specialized part of IPython, but of importance to projects that build upon it for custom environments, like Sympy and Sage. Our full release notes are here: http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/version1.0.html and the gory details are here: http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html # Installation Installation links and instructions are at: http://ipython.org/install.html And IPython is also on PyPI: http://pypi.python.org/pypi/ipython # Requirements IPython 1.0 requires Python ≥ 2.6.5 or ≥ 3.2.1. It does not support Python 3.0, 3.1, or 2.5. # Acknowledgments Last but not least, we'd like to acknowledge the generous support of those who make it possible for us to spend our time working on IPython. In particular, the Alfred P. Sloan Foundation today lets us have a solid team working full-time on the project, and without the support of Enthought Inc at multiple points in our history, we wouldn't be where we are today. The full list of our support is here: http://ipython.org/index.html#support Thanks to everyone! Please enjoy IPython 1.0, and report all bugs as usual! Fernando, on behalf of the IPython Dev Team. -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!)
fernando.perez-at-berkeley: contact me here for any direct mail From jsw at fnal.gov Thu Aug 8 21:42:27 2013 From: jsw at fnal.gov (Jon Wilson) Date: Thu, 8 Aug 2013 20:42:27 -0500 Subject: [IPython-dev] [ANN] IPython 1.0 is finally released, nearly 12 years in the making! In-Reply-To: References: Message-ID: <52044903.60002@fnal.gov> Hurrah! Very well done. Please allow me to extend my gratitude for such an excellent tool. I have been evangelizing my colleagues in hep-ex, and gradually some of them are adopting IPython in their daily work. Soon, IPython will take over the world! Regards, Jon Wilson On 08/08/2013 08:35 PM, Fernando Perez wrote: > Hi all, > > I am incredibly thrilled, on behalf of the amazing IPython Dev Team, > to announce the official release of IPython 1.0 today, an effort > nearly 12 years in the making. The previous version (0.13) was > released on June 30, 2012, and in this development cycle we had: > > ~12 months of work. > ~700 pull requests merged. > ~600 issues closed (non-pull requests). > contributions from ~150 authors. > ~4000 commits. > > > # A little context > > What does "1.0" mean for IPython? Obviously IPython has been a staple > of the scientific Python community for years, and we've made every > effort to make it a robust and production ready tool for a long time, > so what exactly do we mean by tagging this particular release as 1.0? > Basically, we feel that the core design of IPython, and the scope of > the project, is where we want it to be. > > What we have today is what we consider a reasonably complete, design- > and scope-wise, IPython 1.0: an architecture for interactive > computing, that can drive kernels in a number of ways using a > well-defined protocol, and rich and powerful clients that let users > control those kernels effectively. Our different clients serve > different needs, with the old workhorse of the terminal still being > very useful, but much of our current development energy going into the > Notebook, obviously. The Notebook enables interactive exploration to > become Literate Computing, bridging the gaps from individual work to > collaboration and publication, all with an open file format that is a > direct record of the underlying communication protocol. > > There are obviously plenty of open issues (many of them very > important) that need fixing, and large and ambitious new lines of > development for the years to come. But the work of the last four > years, since the summer of 2009 when Brian Granger was able to devote > a summer (thanks to funding from the NiPy project - nipy.org) to > refactoring the old IPython core code, finally opened up or > infrastructure for real innovation. By disentangling what was a useful > but impenetrable codebase, it became possible for us to start building > a flexible, modern system for interactive computing that abstracted > the old REPL model into a generic protocol that kernels could use to > talk to clients. This led at first to the creation of the Qt console, > and then to the Notebook and out-of-process terminal client. It also > allowed us to (finally!) unify our parallel computing machinery with > the rest of the interactive system, which Min Ragan-Kelley pulled off > in a development tour de force that involved rewriting in a few weeks > a huge and complex Twisted-based system. > > We are very happy with how the Notebook work has turned out, and it > seems the entire community agrees with us, as the uptake has been > phenomenal. 
Back from the very first "IPython 0.0.1" that I started > in 2001: > > https://gist.github.com/fperez/1579699 > > there were already hints of tools like Mathematica: it was my everyday > workhorse as a theoretical physicist and I found its Notebook > environment invaluable. But as a grad student trying out "just an > afternoon hack" (IPython was my very first Python program as I was > learning the language), I didn't have the resources, skills or vision > to attempt building an entire notebook system, and to be honest the > tools of the day would have made that enterprise a miserable one. But > those ideas were always driving our efforts, and as IPython started > becoming a project with a team, we made multiple attempts to get a > good Notebook built around IPython. Those interested can read an old > blog post of mine with the history > (http://blog.fperez.org/2012/01/ipython-notebook-historical.html). > The short story is that in 2011, on our sixth attempt, Brian was again > able to devote a focused summer into using our client-server > architecture and, with the stack of the modern web (Javascript, CSS, > websockets, Tornado, ...), finally build a robust system for Literate > Computing across programming languages. > > Today, thanks to the generous support and vision of Josh Greenberg at > the Alfred P. Sloan Foundation, we are working very hard on building > the notebook infrastructure, and this release contains major advances > on that front. We have high hopes for what we'll do next; as a > glimpse of the future that this enables, now there is a native Julia > kernel that speaks to our clients, notebook included: > https://github.com/JuliaLang/IJulia.jl. > > > # Team > > I can't stress enough how impressed I am with the work people are > doing in IPython, and what a privilege it is to work with colleagues > like these. Brian Granger and Min Ragan-Kelley joined IPython around > 2005, initially working on the parallel machinery, but since ~ 2009 > they have become the heart of the project. Today Min is our top > committer and knows our codebase better than anyone else, and I can't > imagine better partners for an effort like this. > > And from regulars in our core team like Thomas Kluyver, Matthias > Bussonnier, Brad Froehle and Paul Ivanov to newcomers like Jonathan > Frederic and Zach Sailer, in addition to the many more whose names are > in our logs, we have a crazy amount of energy being poured into > IPython. I hope we'll continue to harness it productively! > > The full list of contributors to this release can be seen here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html > > > # Release highlights > > * nbconvert: this is the major piece of new functionality in this > cycle, and was an explicit part of our roadmap > (https://github.com/ipython/ipython/wiki/Roadmap:-IPython). nbconvert > is now an IPython subcommand to convert notebooks into other formats > such as HTML or LaTeX, but more importantly, it's a very flexible > system that lets you write custom templates to generate new output > with arbitrary control over the formatting and transformations that > are applied to the input. > > We want to stress that despite the fact that a huge amount of work > went into nbconvert, this should be considered a *tech preview* > release. 
We've come to realize how complex this problem is, and while > we'll make every effort to keep the high-level command-line syntax and > APIs as stable as possible, it is quite likely that the internals will > continue to evolve, possibly in backwards-incompatible ways. So if > you start building services and libraries that make heavy use of the > nbconvert internals, please be prepared for some turmoil in the months > to come, and ping us on the dev list with questions or concerns. > > * Notebook improvements: there has been a ton of polish work in the > notebook at many levels, though the file format remains unchanged from > 0.13, so you shouldn't have any problems sharing notebooks with > colleagues still using 0.13. > > - Autosave: probably the most oft-requested feature, the notebook > server now autosaves your files! You can still hit Ctrl-S to force a > manual save (which also creates a special 'checkpoint' you can come > back to). > > - The notebook supports raw_input(), and thus also %debug. This was > probably the main deficiency of the notebook as a client compared to > the terminal/qtconsole, and it has been finally fixed. > > - Add %%html, %%svg, %%javascript, and %%latex cell magics for > writing raw output in notebook cells. > - Fix an issue parsing LaTeX in markdown cells, which required users > to type \\\, instead of \\. > -Images support width and height metadata, and thereby 2x scaling > (retina support). > - %%file has been renamed %%writefile (%%file) is deprecated. > > * The input transofrmation code has been updated and rationalized. > This is a somewhat specialized part of IPython, but of importance to > projects that build upon it for custom environments, like Sympy and > Sage. > > Our full release notes are here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/version1.0.html > > and the gory details are here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html > > > # Installation > > Installation links and instructions are at: http://ipython.org/install.html > And IPython is also on PyPI: http://pypi.python.org/pypi/ipython > > > # Requirements > > IPython 1.0 requires Python ? 2.6.5 or ? 3.2.1. It does not support > Python 3.0, 3.1, or 2.5. > > > # Acknowledgments > > Last but not least, we'd like to acknowledge the generous support of > those who make it possible for us to spend our time working on > IPython. In particular, the Alfred P. Sloan Foundation today lets us > have a solid team working full-time on the project, and without the > support of Enthought Inc at multiple points in our history, we > wouldn't be where we are today. > > The full list of our support is here: > > http://ipython.org/index.html#support > > > Thanks to everyone! Please enjoy IPython 1.0, and report all bugs as usual! > > Fernando, on behalf of the IPython Dev Team. > From satra at mit.edu Thu Aug 8 21:43:02 2013 From: satra at mit.edu (Satrajit Ghosh) Date: Thu, 8 Aug 2013 21:43:02 -0400 Subject: [IPython-dev] [ANN] IPython 1.0 is finally released, nearly 12 years in the making! In-Reply-To: References: Message-ID: hi fernando, this is wonderful to see and congratulations to the entire ipython team! keep up the great work. cheers, satra On Thu, Aug 8, 2013 at 9:35 PM, Fernando Perez wrote: > Hi all, > > I am incredibly thrilled, on behalf of the amazing IPython Dev Team, > to announce the official release of IPython 1.0 today, an effort > nearly 12 years in the making. 
The previous version (0.13) was > released on June 30, 2012, and in this development cycle we had: > > ~12 months of work. > ~700 pull requests merged. > ~600 issues closed (non-pull requests). > contributions from ~150 authors. > ~4000 commits. > > > # A little context > > What does "1.0" mean for IPython? Obviously IPython has been a staple > of the scientific Python community for years, and we've made every > effort to make it a robust and production ready tool for a long time, > so what exactly do we mean by tagging this particular release as 1.0? > Basically, we feel that the core design of IPython, and the scope of > the project, is where we want it to be. > > What we have today is what we consider a reasonably complete, design- > and scope-wise, IPython 1.0: an architecture for interactive > computing, that can drive kernels in a number of ways using a > well-defined protocol, and rich and powerful clients that let users > control those kernels effectively. Our different clients serve > different needs, with the old workhorse of the terminal still being > very useful, but much of our current development energy going into the > Notebook, obviously. The Notebook enables interactive exploration to > become Literate Computing, bridging the gaps from individual work to > collaboration and publication, all with an open file format that is a > direct record of the underlying communication protocol. > > There are obviously plenty of open issues (many of them very > important) that need fixing, and large and ambitious new lines of > development for the years to come. But the work of the last four > years, since the summer of 2009 when Brian Granger was able to devote > a summer (thanks to funding from the NiPy project - nipy.org) to > refactoring the old IPython core code, finally opened up or > infrastructure for real innovation. By disentangling what was a useful > but impenetrable codebase, it became possible for us to start building > a flexible, modern system for interactive computing that abstracted > the old REPL model into a generic protocol that kernels could use to > talk to clients. This led at first to the creation of the Qt console, > and then to the Notebook and out-of-process terminal client. It also > allowed us to (finally!) unify our parallel computing machinery with > the rest of the interactive system, which Min Ragan-Kelley pulled off > in a development tour de force that involved rewriting in a few weeks > a huge and complex Twisted-based system. > > We are very happy with how the Notebook work has turned out, and it > seems the entire community agrees with us, as the uptake has been > phenomenal. Back from the very first "IPython 0.0.1" that I started > in 2001: > > https://gist.github.com/fperez/1579699 > > there were already hints of tools like Mathematica: it was my everyday > workhorse as a theoretical physicist and I found its Notebook > environment invaluable. But as a grad student trying out "just an > afternoon hack" (IPython was my very first Python program as I was > learning the language), I didn't have the resources, skills or vision > to attempt building an entire notebook system, and to be honest the > tools of the day would have made that enterprise a miserable one. But > those ideas were always driving our efforts, and as IPython started > becoming a project with a team, we made multiple attempts to get a > good Notebook built around IPython. 
Those interested can read an old > blog post of mine with the history > (http://blog.fperez.org/2012/01/ipython-notebook-historical.html). > The short story is that in 2011, on our sixth attempt, Brian was again > able to devote a focused summer into using our client-server > architecture and, with the stack of the modern web (Javascript, CSS, > websockets, Tornado, ...), finally build a robust system for Literate > Computing across programming languages. > > Today, thanks to the generous support and vision of Josh Greenberg at > the Alfred P. Sloan Foundation, we are working very hard on building > the notebook infrastructure, and this release contains major advances > on that front. We have high hopes for what we'll do next; as a > glimpse of the future that this enables, now there is a native Julia > kernel that speaks to our clients, notebook included: > https://github.com/JuliaLang/IJulia.jl. > > > # Team > > I can't stress enough how impressed I am with the work people are > doing in IPython, and what a privilege it is to work with colleagues > like these. Brian Granger and Min Ragan-Kelley joined IPython around > 2005, initially working on the parallel machinery, but since ~ 2009 > they have become the heart of the project. Today Min is our top > committer and knows our codebase better than anyone else, and I can't > imagine better partners for an effort like this. > > And from regulars in our core team like Thomas Kluyver, Matthias > Bussonnier, Brad Froehle and Paul Ivanov to newcomers like Jonathan > Frederic and Zach Sailer, in addition to the many more whose names are > in our logs, we have a crazy amount of energy being poured into > IPython. I hope we'll continue to harness it productively! > > The full list of contributors to this release can be seen here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html > > > # Release highlights > > * nbconvert: this is the major piece of new functionality in this > cycle, and was an explicit part of our roadmap > (https://github.com/ipython/ipython/wiki/Roadmap:-IPython). nbconvert > is now an IPython subcommand to convert notebooks into other formats > such as HTML or LaTeX, but more importantly, it's a very flexible > system that lets you write custom templates to generate new output > with arbitrary control over the formatting and transformations that > are applied to the input. > > We want to stress that despite the fact that a huge amount of work > went into nbconvert, this should be considered a *tech preview* > release. We've come to realize how complex this problem is, and while > we'll make every effort to keep the high-level command-line syntax and > APIs as stable as possible, it is quite likely that the internals will > continue to evolve, possibly in backwards-incompatible ways. So if > you start building services and libraries that make heavy use of the > nbconvert internals, please be prepared for some turmoil in the months > to come, and ping us on the dev list with questions or concerns. > > * Notebook improvements: there has been a ton of polish work in the > notebook at many levels, though the file format remains unchanged from > 0.13, so you shouldn't have any problems sharing notebooks with > colleagues still using 0.13. > > - Autosave: probably the most oft-requested feature, the notebook > server now autosaves your files! You can still hit Ctrl-S to force a > manual save (which also creates a special 'checkpoint' you can come > back to). 
> > - The notebook supports raw_input(), and thus also %debug. This was > probably the main deficiency of the notebook as a client compared to > the terminal/qtconsole, and it has been finally fixed. > > - Add %%html, %%svg, %%javascript, and %%latex cell magics for > writing raw output in notebook cells. > - Fix an issue parsing LaTeX in markdown cells, which required users > to type \\\, instead of \\. > -Images support width and height metadata, and thereby 2x scaling > (retina support). > - %%file has been renamed %%writefile (%%file) is deprecated. > > * The input transofrmation code has been updated and rationalized. > This is a somewhat specialized part of IPython, but of importance to > projects that build upon it for custom environments, like Sympy and > Sage. > > Our full release notes are here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/version1.0.html > > and the gory details are here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html > > > # Installation > > Installation links and instructions are at: > http://ipython.org/install.html > And IPython is also on PyPI: http://pypi.python.org/pypi/ipython > > > # Requirements > > IPython 1.0 requires Python ? 2.6.5 or ? 3.2.1. It does not support > Python 3.0, 3.1, or 2.5. > > > # Acknowledgments > > Last but not least, we'd like to acknowledge the generous support of > those who make it possible for us to spend our time working on > IPython. In particular, the Alfred P. Sloan Foundation today lets us > have a solid team working full-time on the project, and without the > support of Enthought Inc at multiple points in our history, we > wouldn't be where we are today. > > The full list of our support is here: > > http://ipython.org/index.html#support > > > Thanks to everyone! Please enjoy IPython 1.0, and report all bugs as usual! > > Fernando, on behalf of the IPython Dev Team. > > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > fernando.perez-at-berkeley: contact me here for any direct mail > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > fernando.perez-at-berkeley: contact me here for any direct mail > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From asmeurer at gmail.com Thu Aug 8 23:05:18 2013 From: asmeurer at gmail.com (Aaron Meurer) Date: Thu, 8 Aug 2013 21:05:18 -0600 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: <59E60EAC-AC96-4127-A29F-9B863969E272@gmail.com> References: <59E60EAC-AC96-4127-A29F-9B863969E272@gmail.com> Message-ID: So I noticed that 1.0.0 still installs ipython3 only. Did I convince you guys to change the behavior? Fernando, what is your opinion? Aaron Meurer On Tue, Aug 6, 2013 at 12:56 AM, Matthias BUSSONNIER wrote: > Hey Brad, > > Le 6 ao?t 2013 ? 07:59, Bradley M. Froehle a ?crit : > >> Hi Ondrej, >> ... >> >> We've discussed making IPython natively support 2.x and 3.x in >> the same code base [3,4], and while I don't see it on the roadmap >> [5], its probably in several developers' minds in the somewhat >> distant future. 
At the moment the lack of unicode literal support >> in Python 3.2 is pretty much a showstopper, but once we require >> Python >= 3.3 I don't see a technical reason why we couldn't have >> a unified code base. > > No so distant future. > Yes, we quickly spoke about it. The new unpublished roadmap is here : > https://hackpad.com/IPython-Summer-2013-Development-Meeting-D1UR23usGnA > It should mainly be the job of Thomas that should arrives soon in the US. > - Drop 2.6 > - Drop 3.2 > - Drop 2to3 > > I think we will more do it along the way to 2.0 than in a pig PR that make all 2 and 3 compatible, > like we don't want a single commit that makes everything pep-8 compliant (or it is painful for git-blame) > >> >> In the mean time, we should continue the judicious examination of >> issues like this one which improve Python 3 end user experience >> because Python 3 is the future and we are all going to have to >> migrate there eventually. > > I Disagree: >> "I don't know what the language of the year 2000 will look like, but I know it will be called Fortran." -- Tony Hoare > > -- > Matthias > > >> >> [1]: http://lists.debian.org/debian-python/2013/07/msg00052.html >> [2]: https://github.com/bfroehle/rt2to3 >> [3]: https://github.com/ipython/ipython/wiki/IPEP-4:-Python-3-Compatibility >> [4]: https://github.com/ipython/ipython/issues/2440 >> [5]: https://github.com/ipython/ipython/wiki/Roadmap:-IPython >> >> Cheers, >> Brad >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From fperez.net at gmail.com Fri Aug 9 01:51:39 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 8 Aug 2013 22:51:39 -0700 Subject: [IPython-dev] [ANN] IPython 1.0 is finally released, nearly 12 years in the making! In-Reply-To: <52044903.60002@fnal.gov> References: <52044903.60002@fnal.gov> Message-ID: On Thu, Aug 8, 2013 at 6:42 PM, Jon Wilson wrote: > Hurrah! Very well done. Please allow me to extend my gratitude for > such an excellent tool. I have been evangelizing my colleagues in > hep-ex, and gradually some of them are adopting IPython in their daily Thanks for the kind words. Given that IPython was born as an excuse to procrastinate off my "real work" on hep-lat, that is particularly nice to hear! > work. Soon, IPython will take over the world! World domination, proceeding according to plan ;) Best, f From gager at ilsb.tuwien.ac.at Fri Aug 9 04:52:31 2013 From: gager at ilsb.tuwien.ac.at (Jakob Gager) Date: Fri, 09 Aug 2013 10:52:31 +0200 Subject: [IPython-dev] Custom template with nbconvert Message-ID: <5204ADCF.8050408@ilsb.tuwien.ac.at> Hi, I tried to create a custom nbconvert template to remove the input cells from a notebook when converting the html. My naive approach was to create a file called my_html.tpl which contains only {%- extends 'fullhtml.tpl' -%} {% block input %} {%- endblock input %} now I tried to use it as ipython nbconvert --to html --template my_html.tpl file1.ipynb. However, there seems to be some issue with the registration of the jinja filters as I get an error like File "/usr/local/lib/python2.7/dist-packages/IPython/nbconvert/exporters/../templates/basichtml.tpl", line 56, in {{ cell.source | markdown| rm_fake}} TemplateAssertionError: no filter named 'rm_fake' Any hints? Thanks! 
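A template-free way to reach the same goal (hiding code input in the HTML output) is to blank the inputs in the notebook JSON before converting. A rough sketch, assuming the nbformat 3 layout that IPython 1.0 writes (a "worksheets" list whose code cells carry their source in an "input" field); the helper name is made up for illustration:

import json

def strip_input(src_path, dst_path):
    # Write a copy of a notebook with all code-cell inputs emptied,
    # keeping the outputs so the converted HTML still shows results.
    with open(src_path) as f:
        nb = json.load(f)
    for worksheet in nb.get("worksheets", []):
        for cell in worksheet.get("cells", []):
            if cell.get("cell_type") == "code":
                cell["input"] = []
    with open(dst_path, "w") as f:
        json.dump(nb, f, indent=1)

# strip_input("file1.ipynb", "file1_noinput.ipynb")
# then convert as usual: ipython nbconvert --to html file1_noinput.ipynb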
Jakob From bussonniermatthias at gmail.com Fri Aug 9 05:06:44 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Fri, 9 Aug 2013 11:06:44 +0200 Subject: [IPython-dev] Custom template with nbconvert In-Reply-To: <5204ADCF.8050408@ilsb.tuwien.ac.at> References: <5204ADCF.8050408@ilsb.tuwien.ac.at> Message-ID: Le 9 ao?t 2013 ? 10:52, Jakob Gager a ?crit : > Hi, > > I tried to create a custom nbconvert template to remove the input cells from a notebook when converting > the html. > My naive approach was to create a file called my_html.tpl which contains only > {%- extends 'fullhtml.tpl' -%} > {% block input %} > {%- endblock input %} > now I tried to use it as ipython nbconvert --to html --template my_html.tpl file1.ipynb. > > However, there seems to be some issue with the registration of the jinja filters as I get an error like > > File "/usr/local/lib/python2.7/dist-packages/IPython/nbconvert/exporters/../templates/basichtml.tpl", Are you on 1.0 ? This file shoudl not exist in 1.0. (and you should inherit html_full.tpl now) -- M > line 56, in > {{ cell.source | markdown| rm_fake}} > TemplateAssertionError: no filter named 'rm_fake' > > Any hints? > > Thanks! > Jakob > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From gager at ilsb.tuwien.ac.at Fri Aug 9 05:52:58 2013 From: gager at ilsb.tuwien.ac.at (Jakob Gager) Date: Fri, 09 Aug 2013 11:52:58 +0200 Subject: [IPython-dev] Custom template with nbconvert In-Reply-To: References: <5204ADCF.8050408@ilsb.tuwien.ac.at> Message-ID: <5204BBFA.3070509@ilsb.tuwien.ac.at> Thanks Matthias, I figured out that I had a very unclean installation. Removed everything and reinstalled 1.0 and now it works as expected! Jakob On 08/09/2013 11:06 AM, Matthias BUSSONNIER wrote: > > Le 9 ao?t 2013 ? 10:52, Jakob Gager a ?crit : > >> Hi, >> >> I tried to create a custom nbconvert template to remove the input cells from a notebook when converting >> the html. >> My naive approach was to create a file called my_html.tpl which contains only >> {%- extends 'fullhtml.tpl' -%} >> {% block input %} >> {%- endblock input %} >> now I tried to use it as ipython nbconvert --to html --template my_html.tpl file1.ipynb. >> >> However, there seems to be some issue with the registration of the jinja filters as I get an error like >> >> File "/usr/local/lib/python2.7/dist-packages/IPython/nbconvert/exporters/../templates/basichtml.tpl", > > Are you on 1.0 ? This file shoudl not exist in 1.0. > > (and you should inherit html_full.tpl now) > From markbak at gmail.com Fri Aug 9 09:43:27 2013 From: markbak at gmail.com (Mark Bakker) Date: Fri, 9 Aug 2013 15:43:27 +0200 Subject: [IPython-dev] IPython-dev Digest, Vol 115, Issue 18 In-Reply-To: References: Message-ID: Thanks, Matthias. That works. Any chance I can set this with a (html?) command in the notebook? Unfortunately, the tiny equations only show up when running the notebook in Canopy, and only on Windows, but that is what I am planning to use in the class I am teaching. Thanks again, Mark > > Message: 3 > Date: Thu, 8 Aug 2013 17:48:45 +0200 > From: Matthias BUSSONNIER > Subject: Re: [IPython-dev] tiny equations in Notebook on Windows > To: IPython developers list > Message-ID: <30281487-C88C-491D-A3EF-09764B3E0E31 at gmail.com> > Content-Type: text/plain; charset=windows-1252 > > > Le 8 ao?t 2013 ? 
17:33, Mark Bakker a ?crit : > > > Dear List, > > > > I created some notebooks for a class I am teaching on my Mac. > > The students, however, are mostly having Windows machines. > > > > It turns out the equations used in the Notebook markup cells are > displayed really tiny on Windows. > > Right click on a Math equation > Math Settings > Scale All Math ? (enter > value) > > But need to be done on each notebooks by each student. > > I think it is more a problem of browser than OS... > > -- > Matt > > > Any (easy) solution to this problem? They look beautiful on my Mac. > > > > Thanks, > > > > Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From markbak at gmail.com Fri Aug 9 09:47:35 2013 From: markbak at gmail.com (Mark Bakker) Date: Fri, 9 Aug 2013 15:47:35 +0200 Subject: [IPython-dev] tiny equations in Notebook on Windows Message-ID: (Sorry, forgot to change the subject on the previous post) Thanks, Matthias. That works. Any chance I can set this with a (html?) command in the notebook? Unfortunately, the tiny equations only show up when running the notebook in Canopy, and only on Windows, but that is what I am planning to use in the class I am teaching. Thanks again, Mark > > Message: 3 > Date: Thu, 8 Aug 2013 17:48:45 +0200 > From: Matthias BUSSONNIER > Subject: Re: [IPython-dev] tiny equations in Notebook on Windows > To: IPython developers list > Message-ID: <30281487-C88C-491D-A3EF-09764B3E0E31 at gmail.com> > Content-Type: text/plain; charset=windows-1252 > > > Le 8 ao?t 2013 ? 17:33, Mark Bakker a ?crit : > > > Dear List, > > > > I created some notebooks for a class I am teaching on my Mac. > > The students, however, are mostly having Windows machines. > > > > It turns out the equations used in the Notebook markup cells are > displayed really tiny on Windows. > > Right click on a Math equation > Math Settings > Scale All Math ? (enter > value) > > But need to be done on each notebooks by each student. > > I think it is more a problem of browser than OS... > > -- > Matt > > > Any (easy) solution to this problem? They look beautiful on my Mac. > > > > Thanks, > > > > Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bussonniermatthias at gmail.com Fri Aug 9 10:54:59 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Fri, 9 Aug 2013 16:54:59 +0200 Subject: [IPython-dev] tiny equations in Notebook on Windows In-Reply-To: References: Message-ID: Le 9 ao?t 2013 ? 15:47, Mark Bakker a ?crit : > (Sorry, forgot to change the subject on the previous post) > Thanks, Matthias. That works. > Any chance I can set this with a (html?) command in the notebook? Sadly it does not seem to be that trivial to do. > Unfortunately, the tiny equations only show up when running the notebook in Canopy, and only on Windows, but that is what I am planning to use in > the class I am teaching. But apparently, using the scale-math option will be remembered[1] on each browser so you shouldn't have to do it too often. -- Matt [1]: http://www.mathjax.org/help/zoom/ > Thanks again, > Mark > > > Message: 3 > Date: Thu, 8 Aug 2013 17:48:45 +0200 > From: Matthias BUSSONNIER > Subject: Re: [IPython-dev] tiny equations in Notebook on Windows > To: IPython developers list > Message-ID: <30281487-C88C-491D-A3EF-09764B3E0E31 at gmail.com> > Content-Type: text/plain; charset=windows-1252 > > > Le 8 ao?t 2013 ? 17:33, Mark Bakker a ?crit : > > > Dear List, > > > > I created some notebooks for a class I am teaching on my Mac. 
> > The students, however, are mostly having Windows machines. > > > > It turns out the equations used in the Notebook markup cells are displayed really tiny on Windows. > > Right click on a Math equation > Math Settings > Scale All Math ? (enter value) > > But need to be done on each notebooks by each student. > > I think it is more a problem of browser than OS... > > -- > Matt > > > Any (easy) solution to this problem? They look beautiful on my Mac. > > > > Thanks, > > > > Mark > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From markbak at gmail.com Fri Aug 9 11:03:45 2013 From: markbak at gmail.com (Mark Bakker) Date: Fri, 9 Aug 2013 17:03:45 +0200 Subject: [IPython-dev] tiny equations in Notebook on Windows Message-ID: Sadly enough, Canopy doesn't remember the scale-math option )-: Yet. I always have high expectations for the great guys at Enthought. Mark > Date: Fri, 9 Aug 2013 16:54:59 +0200 > From: Matthias BUSSONNIER > Subject: Re: [IPython-dev] tiny equations in Notebook on Windows > To: IPython developers list > Message-ID: > Content-Type: text/plain; charset=iso-8859-1 > > > Le 9 ao?t 2013 ? 15:47, Mark Bakker a ?crit : > > > (Sorry, forgot to change the subject on the previous post) > > Thanks, Matthias. That works. > > Any chance I can set this with a (html?) command in the notebook? > > Sadly it does not seem to be that trivial to do. > > > Unfortunately, the tiny equations only show up when running the notebook > in Canopy, and only on Windows, but that is what I am planning to use in > > the class I am teaching. > > But apparently, using the scale-math option will be remembered[1] on each > browser so you shouldn't have to do it too often. > -- > Matt > > [1]: http://www.mathjax.org/help/zoom/ > > > > Thanks again, > > Mark > > > > > > Message: 3 > > Date: Thu, 8 Aug 2013 17:48:45 +0200 > > From: Matthias BUSSONNIER > > Subject: Re: [IPython-dev] tiny equations in Notebook on Windows > > To: IPython developers list > > Message-ID: <30281487-C88C-491D-A3EF-09764B3E0E31 at gmail.com> > > Content-Type: text/plain; charset=windows-1252 > > > > > > Le 8 ao?t 2013 ? 17:33, Mark Bakker a ?crit : > > > > > Dear List, > > > > > > I created some notebooks for a class I am teaching on my Mac. > > > The students, however, are mostly having Windows machines. > > > > > > It turns out the equations used in the Notebook markup cells are > displayed really tiny on Windows. > > > > Right click on a Math equation > Math Settings > Scale All Math ? (enter > value) > > > > But need to be done on each notebooks by each student. > > > > I think it is more a problem of browser than OS... > > > > -- > > Matt > > > > > Any (easy) solution to this problem? They look beautiful on my Mac. > > > > > > Thanks, > > > > > > Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbraun.name at gmail.com Fri Aug 9 11:07:35 2013 From: vbraun.name at gmail.com (Volker Braun) Date: Fri, 9 Aug 2013 08:07:35 -0700 (PDT) Subject: [IPython-dev] Raising a SyntaxError in InputTransformer Message-ID: <1376060855345-5027773.post@n6.nabble.com> I posted recently on this subject but didn't get any reply. Maybe I wasn't clear enough, so let me try again: Its great that IPython now has a nice framework for transforming input before passing it off to Python. But that also means that there must be a way to hand syntax errors back to the user. 
One could just not apply the input transformation, but then that would result in a very confusing error message. The Python interpreter obviously does not understand the un-transformed input, that is the whole point of the input transformation. In particular, Sage supports Magma-syntax to create new rings: sage: R. = QQ[] Now if there are unbalanced brackets ("QQ[}") then we would like to tell the user that, but right now any exceptions raised by the input transformer will not be caught and terminate IPython. I think it would be easy to patch IPython to catch SyntaxErrors from the input transformers, but then I was hoping that the developers would have already made some sort of plan for how to notify the user of syntax errors. Or maybe it is not desirable for some reason? Volker PS: I just tested this with IPython-1.0.0, didn't work with older versions either. -- View this message in context: http://python.6.x6.nabble.com/Raising-a-SyntaxError-in-InputTransformer-tp5027773.html Sent from the IPython - Development mailing list archive at Nabble.com. From takowl at gmail.com Fri Aug 9 12:36:25 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 9 Aug 2013 09:36:25 -0700 Subject: [IPython-dev] Raising a SyntaxError in InputTransformer In-Reply-To: <1376060855345-5027773.post@n6.nabble.com> References: <1376060855345-5027773.post@n6.nabble.com> Message-ID: On 9 August 2013 08:07, Volker Braun wrote: > Now if there are unbalanced brackets ("QQ[}") then we would like to tell > the > user that, but right now any exceptions raised by the input transformer > will > not be caught and terminate IPython. I think it would be easy to patch > IPython to catch SyntaxErrors from the input transformers, but then I was > hoping that the developers would have already made some sort of plan for > how > to notify the user of syntax errors. Or maybe it is not desirable for some > reason? > I don't think we really thought about it. For IPython, our extra syntax is limited enough that if something's invalid, we just want a standard Python SyntaxError. I'm don't object to adding the ability to raise exceptions, if it can be done neatly without impacting our use cases. But I also partly feel that if Sage is extending the syntax that much, it should probably be thinking about writing a proper parser. Our input transformation machinery is really intended to support some extensions to Python syntax, not the definition of a whole Python-like programming language. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Fri Aug 9 13:07:05 2013 From: benjaminrk at gmail.com (MinRK) Date: Fri, 9 Aug 2013 10:07:05 -0700 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: <59E60EAC-AC96-4127-A29F-9B863969E272@gmail.com> Message-ID: On Thu, Aug 8, 2013 at 8:05 PM, Aaron Meurer wrote: > So I noticed that 1.0.0 still installs ipython3 only. Did I convince > you guys to change the behavior? Fernando, what is your opinion? > I think it will happen, but this conversation started much too close to release for anything to change in 1.0. I doubt we would do it before the single codebase switch, and that won't happen before dropping support for 3.2 (probably either this winter or next summer). > > Aaron Meurer > > On Tue, Aug 6, 2013 at 12:56 AM, Matthias BUSSONNIER > wrote: > > Hey Brad, > > > > Le 6 ao?t 2013 ? 07:59, Bradley M. Froehle a ?crit : > > > >> Hi Ondrej, > >> ... 
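The check Volker wants can at least be expressed at the transformer level: inspect the logical line and raise SyntaxError as soon as a closing bracket cannot match, rather than letting the mangled text reach the Python compiler. A minimal sketch follows; whether the raised SyntaxError is reported cleanly or simply kills the session is exactly the open question in this thread, and the StatelessInputTransformer.wrap decorator and logical_line_transforms list are quoted from memory of the 1.x API, so treat them as assumptions to verify:

from IPython.core.inputtransformer import StatelessInputTransformer

PAIRS = {')': '(', ']': '[', '}': '{'}

@StatelessInputTransformer.wrap
def check_brackets(line):
    # Raise SyntaxError for a closing bracket that cannot match; unclosed
    # opening brackets are left alone, since they may be continued on the
    # next line, which the input splitter handles.
    # (A real check would also have to skip brackets inside string literals.)
    stack = []
    for ch in line:
        if ch in '([{':
            stack.append(ch)
        elif ch in PAIRS:
            if not stack or stack.pop() != PAIRS[ch]:
                raise SyntaxError("unbalanced bracket %r in %r" % (ch, line))
    return line

# Registration would look roughly like (version-dependent, an assumption):
# get_ipython().input_splitter.logical_line_transforms.append(check_brackets())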
> >> > >> We've discussed making IPython natively support 2.x and 3.x in > >> the same code base [3,4], and while I don't see it on the roadmap > >> [5], it's probably in several developers' minds in the somewhat > >> distant future. At the moment the lack of unicode literal support > >> in Python 3.2 is pretty much a showstopper, but once we require > >> Python >= 3.3 I don't see a technical reason why we couldn't have > >> a unified code base. > > > > Not so distant future. > > Yes, we quickly spoke about it. The new unpublished roadmap is here: > > https://hackpad.com/IPython-Summer-2013-Development-Meeting-D1UR23usGnA > > It should mainly be the job of Thomas, who should arrive soon in the US. > > - Drop 2.6 > > - Drop 3.2 > > - Drop 2to3 > > > > I think we will do it more along the way to 2.0 than in a big PR that > makes all of 2 and 3 compatible, > > just as we don't want a single commit that makes everything pep-8 compliant > (that would be painful for git-blame) > > > >> > >> In the meantime, we should continue the judicious examination of > >> issues like this one which improve the Python 3 end user experience, > >> because Python 3 is the future and we are all going to have to > >> migrate there eventually. > > > > I disagree: > >> "I don't know what the language of the year 2000 will look like, but I > know it will be called Fortran." -- Tony Hoare > > > > -- > > Matthias > > > > > >> > >> [1]: http://lists.debian.org/debian-python/2013/07/msg00052.html > >> [2]: https://github.com/bfroehle/rt2to3 > >> [3]: https://github.com/ipython/ipython/wiki/IPEP-4:-Python-3-Compatibility > >> [4]: https://github.com/ipython/ipython/issues/2440 > >> [5]: https://github.com/ipython/ipython/wiki/Roadmap:-IPython > >> > >> Cheers, > >> Brad > >> _______________________________________________ > >> IPython-dev mailing list > >> IPython-dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From fperez.net at gmail.com Fri Aug 9 16:25:28 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 9 Aug 2013 13:25:28 -0700 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: Hi Ondrej, On Mon, Aug 5, 2013 at 9:05 PM, Ondřej Čertík wrote: > > Fernando, I would be very much interested in what you think of this > issue, as you have more contact with the Python core devs. I don't really have any particular insights on this one. But I am concerned about the fact that, even with a unified 2/3 codebase (which I agree is the right approach and which we'll move towards in IPython), we're going to be living in this funky 2+3 (5?) ghetto for a long time to come. It's true that we can now write single-codebase code, especially cleanly if we're willing to drop both 2.6 and 3.2. But that means not using any of the new and sometimes appealing features of the language, like 'yield from', function type annotations, or new features in the library like concurrent.futures.
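A small sketch to make that constraint concrete (illustrative code, not taken from IPython): the Python 3.3-only 'yield from' spelling has to stay in a comment, and the portable 2/3 version is what a shared codebase keeps writing instead.

# Python 3.3+ only: generator delegation with 'yield from'.
#
#     def chain_all(iterables):
#         for it in iterables:
#             yield from it

# The 2/3-compatible spelling a single codebase is stuck with:
def chain_all(iterables):
    for it in iterables:
        for item in it:
            yield item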
This means that 2.x support is going to be a drag into people really embracing python 3 for a long time to come, since I'm sure that many projects will be reluctant to completely drop 2.x support altogether. If you ask me, the strategy to encourage py3 adoption should have been: 1. Provide the 2to3 tool meant mostly as a one-time use tool to help projects make the initial transition, assuming that it would require manual cleanup afterwards. 2. Provide a 3to2 tool meant to run at installation time automatically when setup.py detects it's getting run via python2. This could do whatever ugliness is necessary to emit python2-compatible code, possibly along shimming in a backports module along the lines of six. If this had been done, we'd all move happily along to python3, knowing that (perhaps with a bit of care to ensure we don't fall outside the scope of what 3to2 could do) we could support our 2.x user base without trouble. I know that such a hypothetical 3to2 isn't an easy job, but honestly given the years of slowing down of Python development the 3 transition has costed everyone, and the amount of grief and pleading for projects to move forward, I think it would have been a very worthwhile investment. I'm willing to bet it would massively speed up 3 adoption, and re-invigorate Python development. But hey, that's just my opinion. I'm not in the core team and I'm not the one who is going to do the work, so I have little right to complain. They have their priorities and I'm very grateful for the fact that on many fronts they do a spectacular job, so unless I'm going to roll up my sleeves and do something about this, I should probably shut up :) Cheers, f -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From JDM at MarchRay.net Fri Aug 9 17:17:41 2013 From: JDM at MarchRay.net (Jonathan March) Date: Fri, 9 Aug 2013 16:17:41 -0500 Subject: [IPython-dev] tiny equations in Notebook on Windows In-Reply-To: References: Message-ID: Thanks for the implicit suggestion, Mark (and vote of confidence). Ticketed for triage. -- Jonathan (wearing Enthought hat) On Fri, Aug 9, 2013 at 10:03 AM, Mark Bakker wrote: > Sadly enough, Canopy doesn't remember the scale-math option )-: > Yet. > I always have high expectations for the great guys at Enthought. > Mark > > >> Date: Fri, 9 Aug 2013 16:54:59 +0200 >> >> From: Matthias BUSSONNIER >> Subject: Re: [IPython-dev] tiny equations in Notebook on Windows >> To: IPython developers list >> Message-ID: >> Content-Type: text/plain; charset=iso-8859-1 >> >> >> Le 9 ao?t 2013 ? 15:47, Mark Bakker a ?crit : >> >> >> > (Sorry, forgot to change the subject on the previous post) >> > Thanks, Matthias. That works. >> > Any chance I can set this with a (html?) command in the notebook? >> >> Sadly it does not seem to be that trivial to do. >> >> > Unfortunately, the tiny equations only show up when running the >> notebook in Canopy, and only on Windows, but that is what I am planning to >> use in >> > the class I am teaching. >> >> But apparently, using the scale-math option will be remembered[1] on each >> browser so you shouldn't have to do it too often. 
>> -- >> Matt >> >> [1]: http://www.mathjax.org/help/zoom/ >> >> >> > Thanks again, >> > Mark >> > >> > >> > Message: 3 >> > Date: Thu, 8 Aug 2013 17:48:45 +0200 >> > From: Matthias BUSSONNIER >> > Subject: Re: [IPython-dev] tiny equations in Notebook on Windows >> > To: IPython developers list >> > Message-ID: <30281487-C88C-491D-A3EF-09764B3E0E31 at gmail.com> >> > Content-Type: text/plain; charset=windows-1252 >> > >> > >> > Le 8 ao?t 2013 ? 17:33, Mark Bakker a ?crit : >> > >> > > Dear List, >> > > >> > > I created some notebooks for a class I am teaching on my Mac. >> > > The students, however, are mostly having Windows machines. >> > > >> > > It turns out the equations used in the Notebook markup cells are >> displayed really tiny on Windows. >> > >> > Right click on a Math equation > Math Settings > Scale All Math ? >> (enter value) >> > >> > But need to be done on each notebooks by each student. >> > >> > I think it is more a problem of browser than OS... >> > >> > -- >> > Matt >> > >> > > Any (easy) solution to this problem? They look beautiful on my Mac. >> > > >> > > Thanks, >> > > >> > > Mark >> >> > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From asmeurer at gmail.com Fri Aug 9 17:35:11 2013 From: asmeurer at gmail.com (Aaron Meurer) Date: Fri, 9 Aug 2013 15:35:11 -0600 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: On Fri, Aug 9, 2013 at 2:25 PM, Fernando Perez wrote: > Hi Ondrej, > > On Mon, Aug 5, 2013 at 9:05 PM, Ond?ej ?ert?k wrote: >> >> Fernando, I would be very much interested what you think of this >> issue, as you have more touch with the Python core devs. > > I don't really have any particular insights on this one. But I am > concerned about the fact that, even with a unified 2/3 codebase (which > I agree is the right approach and which we'll move towards in > IPython), we're going to be living in this funky 2+3 (5 ?) ghetto for > a long time to come. It's true that we can now write single-codebase > codes, esp. cleanly if we're willing to drop both 2.6 and 3.2. But > that means not using any of the new and sometimes appealing features > of the language, like 'yield from', function type annotations, or new > features in the library like concurrent.futures. That's nothing new, though. You couldn't use with statements or ternary statements if you wanted to support 2.4, you couldn't use new-style string formatting if you wanted to support 2.5, and you can't use set literals or dictionary comprehensions if you want to support 2.6. > > This means that 2.x support is going to be a drag into people really > embracing python 3 for a long time to come, since I'm sure that many > projects will be reluctant to completely drop 2.x support altogether. > > If you ask me, the strategy to encourage py3 adoption should have been: > > 1. Provide the 2to3 tool meant mostly as a one-time use tool to help > projects make the initial transition, assuming that it would require > manual cleanup afterwards. > > 2. Provide a 3to2 tool meant to run at installation time automatically > when setup.py detects it's getting run via python2. This could do > whatever ugliness is necessary to emit python2-compatible code, > possibly along shimming in a backports module along the lines of six. 
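A minimal sketch of the same point (illustrative snippets, nothing project-specific): each minimum supported version pins you to the older spelling.

# Keeping Python 2.6 support means writing:
squares = dict((n, n * n) for n in range(5))   # no dict comprehensions
primes = set([2, 3, 5, 7])                     # no set literals
label = "{0} items".format(3)                  # no auto-numbered "{}" fields

# Once 2.7+/3.x is the floor, the shorter forms work everywhere:
squares = {n: n * n for n in range(5)}
primes = {2, 3, 5, 7}
label = "{} items".format(3)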
> > If this had been done, we'd all move happily along to python3, knowing > that (perhaps with a bit of care to ensure we don't fall outside the > scope of what 3to2 could do) we could support our 2.x user base > without trouble. > > I know that such a hypothetical 3to2 isn't an easy job, but honestly > given the years of slowing down of Python development the 3 transition > has costed everyone, and the amount of grief and pleading for projects > to move forward, I think it would have been a very worthwhile > investment. I'm willing to bet it would massively speed up 3 > adoption, and re-invigorate Python development. I think 2to3 was a mistake. It gave credence to the idea that Python 3 is a different language from Python 2. What they should have done was just included something like six with the standard library (or at least had official documentation on all the ways to get around things, especially the tricky ones like unicode and metaclasses). That would have made it clear that supporting 2.x-3.y is no different from supporting 2.x-2.y, which is what writers of big Python libraries had already been doing for ages. That, and I think the print function was a mistake, because it has (and still is) scared everyone away from Python 3. Aaron Meurer > > But hey, that's just my opinion. I'm not in the core team and I'm not > the one who is going to do the work, so I have little right to > complain. They have their priorities and I'm very grateful for the > fact that on many fronts they do a spectacular job, so unless I'm > going to roll up my sleeves and do something about this, I should > probably shut up :) > > Cheers, > > f > > > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > fernando.perez-at-berkeley: contact me here for any direct mail > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From fperez.net at gmail.com Fri Aug 9 17:43:43 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 9 Aug 2013 14:43:43 -0700 Subject: [IPython-dev] [sympy] Treating Python 3 as a first-class citizen In-Reply-To: References: Message-ID: On Fri, Aug 9, 2013 at 2:35 PM, Aaron Meurer wrote: > On Fri, Aug 9, 2013 at 2:25 PM, Fernando Perez wrote: >> Hi Ondrej, >> >> On Mon, Aug 5, 2013 at 9:05 PM, Ond?ej ?ert?k wrote: >>> >>> Fernando, I would be very much interested what you think of this >>> issue, as you have more touch with the Python core devs. >> >> I don't really have any particular insights on this one. But I am >> concerned about the fact that, even with a unified 2/3 codebase (which >> I agree is the right approach and which we'll move towards in >> IPython), we're going to be living in this funky 2+3 (5 ?) ghetto for >> a long time to come. It's true that we can now write single-codebase >> codes, esp. cleanly if we're willing to drop both 2.6 and 3.2. But >> that means not using any of the new and sometimes appealing features >> of the language, like 'yield from', function type annotations, or new >> features in the library like concurrent.futures. > > That's nothing new, though. You couldn't use with statements or > ternary statements if you wanted to support 2.4, you couldn't use > new-style string formatting if you wanted to support 2.5, and you > can't use set literals or dictionary comprehensions if you want to > support 2.6. 
That's true, but since the gap (perceived or otherwise) between 2 and 3 is larger than the one between 2.x and 2.y, I think the impact is much more significant in this case. Hence why I think that mitigation measures would be warranted here, beyond what would be considered necessary for a 2.x -> 2.x+1. Cheers, f -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From asmeurer at gmail.com Fri Aug 9 23:10:35 2013 From: asmeurer at gmail.com (Aaron Meurer) Date: Fri, 9 Aug 2013 21:10:35 -0600 Subject: [IPython-dev] [ANN] IPython 1.0 is finally released, nearly 12 years in the making! In-Reply-To: References: Message-ID: I noticed that git says "2.0.0-dev". Are you being overly optimistic, or did you revert your policy of only supporting one version at a time? Aaron Meurer On Thu, Aug 8, 2013 at 7:35 PM, Fernando Perez wrote: > Hi all, > > I am incredibly thrilled, on behalf of the amazing IPython Dev Team, > to announce the official release of IPython 1.0 today, an effort > nearly 12 years in the making. The previous version (0.13) was > released on June 30, 2012, and in this development cycle we had: > > ~12 months of work. > ~700 pull requests merged. > ~600 issues closed (non-pull requests). > contributions from ~150 authors. > ~4000 commits. > > > # A little context > > What does "1.0" mean for IPython? Obviously IPython has been a staple > of the scientific Python community for years, and we've made every > effort to make it a robust and production ready tool for a long time, > so what exactly do we mean by tagging this particular release as 1.0? > Basically, we feel that the core design of IPython, and the scope of > the project, is where we want it to be. > > What we have today is what we consider a reasonably complete, design- > and scope-wise, IPython 1.0: an architecture for interactive > computing, that can drive kernels in a number of ways using a > well-defined protocol, and rich and powerful clients that let users > control those kernels effectively. Our different clients serve > different needs, with the old workhorse of the terminal still being > very useful, but much of our current development energy going into the > Notebook, obviously. The Notebook enables interactive exploration to > become Literate Computing, bridging the gaps from individual work to > collaboration and publication, all with an open file format that is a > direct record of the underlying communication protocol. > > There are obviously plenty of open issues (many of them very > important) that need fixing, and large and ambitious new lines of > development for the years to come. But the work of the last four > years, since the summer of 2009 when Brian Granger was able to devote > a summer (thanks to funding from the NiPy project - nipy.org) to > refactoring the old IPython core code, finally opened up or > infrastructure for real innovation. By disentangling what was a useful > but impenetrable codebase, it became possible for us to start building > a flexible, modern system for interactive computing that abstracted > the old REPL model into a generic protocol that kernels could use to > talk to clients. This led at first to the creation of the Qt console, > and then to the Notebook and out-of-process terminal client. It also > allowed us to (finally!) 
unify our parallel computing machinery with > the rest of the interactive system, which Min Ragan-Kelley pulled off > in a development tour de force that involved rewriting in a few weeks > a huge and complex Twisted-based system. > > We are very happy with how the Notebook work has turned out, and it > seems the entire community agrees with us, as the uptake has been > phenomenal. Back from the very first "IPython 0.0.1" that I started > in 2001: > > https://gist.github.com/fperez/1579699 > > there were already hints of tools like Mathematica: it was my everyday > workhorse as a theoretical physicist and I found its Notebook > environment invaluable. But as a grad student trying out "just an > afternoon hack" (IPython was my very first Python program as I was > learning the language), I didn't have the resources, skills or vision > to attempt building an entire notebook system, and to be honest the > tools of the day would have made that enterprise a miserable one. But > those ideas were always driving our efforts, and as IPython started > becoming a project with a team, we made multiple attempts to get a > good Notebook built around IPython. Those interested can read an old > blog post of mine with the history > (http://blog.fperez.org/2012/01/ipython-notebook-historical.html). > The short story is that in 2011, on our sixth attempt, Brian was again > able to devote a focused summer into using our client-server > architecture and, with the stack of the modern web (Javascript, CSS, > websockets, Tornado, ...), finally build a robust system for Literate > Computing across programming languages. > > Today, thanks to the generous support and vision of Josh Greenberg at > the Alfred P. Sloan Foundation, we are working very hard on building > the notebook infrastructure, and this release contains major advances > on that front. We have high hopes for what we'll do next; as a > glimpse of the future that this enables, now there is a native Julia > kernel that speaks to our clients, notebook included: > https://github.com/JuliaLang/IJulia.jl. > > > # Team > > I can't stress enough how impressed I am with the work people are > doing in IPython, and what a privilege it is to work with colleagues > like these. Brian Granger and Min Ragan-Kelley joined IPython around > 2005, initially working on the parallel machinery, but since ~ 2009 > they have become the heart of the project. Today Min is our top > committer and knows our codebase better than anyone else, and I can't > imagine better partners for an effort like this. > > And from regulars in our core team like Thomas Kluyver, Matthias > Bussonnier, Brad Froehle and Paul Ivanov to newcomers like Jonathan > Frederic and Zach Sailer, in addition to the many more whose names are > in our logs, we have a crazy amount of energy being poured into > IPython. I hope we'll continue to harness it productively! > > The full list of contributors to this release can be seen here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html > > > # Release highlights > > * nbconvert: this is the major piece of new functionality in this > cycle, and was an explicit part of our roadmap > (https://github.com/ipython/ipython/wiki/Roadmap:-IPython). 
nbconvert > is now an IPython subcommand to convert notebooks into other formats > such as HTML or LaTeX, but more importantly, it's a very flexible > system that lets you write custom templates to generate new output > with arbitrary control over the formatting and transformations that > are applied to the input. > > We want to stress that despite the fact that a huge amount of work > went into nbconvert, this should be considered a *tech preview* > release. We've come to realize how complex this problem is, and while > we'll make every effort to keep the high-level command-line syntax and > APIs as stable as possible, it is quite likely that the internals will > continue to evolve, possibly in backwards-incompatible ways. So if > you start building services and libraries that make heavy use of the > nbconvert internals, please be prepared for some turmoil in the months > to come, and ping us on the dev list with questions or concerns. > > * Notebook improvements: there has been a ton of polish work in the > notebook at many levels, though the file format remains unchanged from > 0.13, so you shouldn't have any problems sharing notebooks with > colleagues still using 0.13. > > - Autosave: probably the most oft-requested feature, the notebook > server now autosaves your files! You can still hit Ctrl-S to force a > manual save (which also creates a special 'checkpoint' you can come > back to). > > - The notebook supports raw_input(), and thus also %debug. This was > probably the main deficiency of the notebook as a client compared to > the terminal/qtconsole, and it has been finally fixed. > > - Add %%html, %%svg, %%javascript, and %%latex cell magics for > writing raw output in notebook cells. > - Fix an issue parsing LaTeX in markdown cells, which required users > to type \\\, instead of \\. > -Images support width and height metadata, and thereby 2x scaling > (retina support). > - %%file has been renamed %%writefile (%%file) is deprecated. > > * The input transofrmation code has been updated and rationalized. > This is a somewhat specialized part of IPython, but of importance to > projects that build upon it for custom environments, like Sympy and > Sage. > > Our full release notes are here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/version1.0.html > > and the gory details are here: > > http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html > > > # Installation > > Installation links and instructions are at: http://ipython.org/install.html > And IPython is also on PyPI: http://pypi.python.org/pypi/ipython > > > # Requirements > > IPython 1.0 requires Python ? 2.6.5 or ? 3.2.1. It does not support > Python 3.0, 3.1, or 2.5. > > > # Acknowledgments > > Last but not least, we'd like to acknowledge the generous support of > those who make it possible for us to spend our time working on > IPython. In particular, the Alfred P. Sloan Foundation today lets us > have a solid team working full-time on the project, and without the > support of Enthought Inc at multiple points in our history, we > wouldn't be where we are today. > > The full list of our support is here: > > http://ipython.org/index.html#support > > > Thanks to everyone! Please enjoy IPython 1.0, and report all bugs as usual! > > Fernando, on behalf of the IPython Dev Team. > > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) 
> fernando.perez-at-berkeley: contact me here for any direct mail > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > fernando.perez-at-berkeley: contact me here for any direct mail > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From benjaminrk at gmail.com Sat Aug 10 00:03:02 2013 From: benjaminrk at gmail.com (MinRK) Date: Fri, 9 Aug 2013 21:03:02 -0700 Subject: [IPython-dev] [ANN] IPython 1.0 is finally released, nearly 12 years in the making! In-Reply-To: References: Message-ID: On Fri, Aug 9, 2013 at 8:10 PM, Aaron Meurer wrote: > I noticed that git says "2.0.0-dev". Are you being overly optimistic, > or did you revert your policy of only supporting one version at a > time? > I'm not sure what you mean. We are certainly planning to make backward-incompatible changes to be released in the Fall, which suggests a new major version. We are trying out a roughly six month major release cycle (2.0 in Winter, 3.0 next Summer). We will backport fixes to 1.0 as long as it is tenable (1.x branch already has a few fixes), which may not be past Winter. -MinRK > > Aaron Meurer > > On Thu, Aug 8, 2013 at 7:35 PM, Fernando Perez > wrote: > > Hi all, > > > > I am incredibly thrilled, on behalf of the amazing IPython Dev Team, > > to announce the official release of IPython 1.0 today, an effort > > nearly 12 years in the making. The previous version (0.13) was > > released on June 30, 2012, and in this development cycle we had: > > > > ~12 months of work. > > ~700 pull requests merged. > > ~600 issues closed (non-pull requests). > > contributions from ~150 authors. > > ~4000 commits. > > > > > > # A little context > > > > What does "1.0" mean for IPython? Obviously IPython has been a staple > > of the scientific Python community for years, and we've made every > > effort to make it a robust and production ready tool for a long time, > > so what exactly do we mean by tagging this particular release as 1.0? > > Basically, we feel that the core design of IPython, and the scope of > > the project, is where we want it to be. > > > > What we have today is what we consider a reasonably complete, design- > > and scope-wise, IPython 1.0: an architecture for interactive > > computing, that can drive kernels in a number of ways using a > > well-defined protocol, and rich and powerful clients that let users > > control those kernels effectively. Our different clients serve > > different needs, with the old workhorse of the terminal still being > > very useful, but much of our current development energy going into the > > Notebook, obviously. The Notebook enables interactive exploration to > > become Literate Computing, bridging the gaps from individual work to > > collaboration and publication, all with an open file format that is a > > direct record of the underlying communication protocol. > > > > There are obviously plenty of open issues (many of them very > > important) that need fixing, and large and ambitious new lines of > > development for the years to come. But the work of the last four > > years, since the summer of 2009 when Brian Granger was able to devote > > a summer (thanks to funding from the NiPy project - nipy.org) to > > refactoring the old IPython core code, finally opened up or > > infrastructure for real innovation. 
> > [snip: remainder of the quoted 1.0 announcement, identical to the copy quoted in full earlier in this thread] > > In particular, the Alfred P.
Sloan Foundation today lets us > > have a solid team working full-time on the project, and without the > > support of Enthought Inc at multiple points in our history, we > > wouldn't be where we are today. > > > > The full list of our support is here: > > > > http://ipython.org/index.html#support > > > > > > Thanks to everyone! Please enjoy IPython 1.0, and report all bugs as > usual! > > > > Fernando, on behalf of the IPython Dev Team. > > > > -- > > Fernando Perez (@fperez_org; http://fperez.org) > > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > > fernando.perez-at-berkeley: contact me here for any direct mail > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From damianavila at gmail.com Sat Aug 10 01:00:47 2013 From: damianavila at gmail.com (Damián Avila) Date: Sat, 10 Aug 2013 02:00:47 -0300 Subject: [IPython-dev] It is alive... Message-ID: <5205C8FF.30005@gmail.com> First of all, I want to congratulate all the IPython developers on achieving the release of 1.0! OK... when I first developed the reveal exporter in the old nbconvert, my ultimate goal was to have a live version of the slideshow... just like the notebook but in a "presentation" mode... At that time, I did not know enough about js... and now... neither... but I gave it a try, learned a little, asked for some help on the list some days ago, and now I have come up with this prototype... What I want to show you is a proof of concept of a "live" implementation of a Reveal.js slideshow... You can see a short demo here: http://www.youtube.com/watch?v=bCb2HJy-yc0 Why develop a live version of Reveal if we have the live slidemode from Matthias? 1) Because it is fun! 2) Because I learned a lot in the process... 3) Because, having the Reveal (slides) exporter in the main IPython, it would be great for the user to have the **same** slideshow in a live mode to interact with the audience, and a static version to easily distribute (or use when there is no need for an interactive talk). I will probably reopen the IPEP about slideshows to revive the discussion and to help me refine the code, because it works... but I think it is too hackish, like the first implementation of the reveal exporter in the old nbconvert, and there are lots of things to refine (and decide) before we can get it inside IPython... It would be great if we could arrive at something solid for the next 2.0 release... ;-) (I am not sure if the live slideshow implementation is on the current Roadmap... it was at least in some discussions... I hope it is, because I think it is not only a nice but also a very useful feature to have). Saludos. Damián. From fperez.net at gmail.com Sat Aug 10 01:05:39 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 9 Aug 2013 22:05:39 -0700 Subject: [IPython-dev] [ANN] IPython 1.0 is finally released, nearly 12 years in the making!
In-Reply-To: References: Message-ID: And just to add context to Min's comment, that was precisely the plan we outlined in our roadmap last spring: https://github.com/ipython/ipython/wiki/Roadmap:-IPython But Aaron, don't worry, we're not abandoning maintenance of 1.x. A branch is already open for bug fix backports: https://github.com/ipython/ipython/tree/1.x Cheers, f On Fri, Aug 9, 2013 at 9:03 PM, MinRK wrote: > On Fri, Aug 9, 2013 at 8:10 PM, Aaron Meurer wrote: >> >> I noticed that git says "2.0.0-dev". Are you being overly optimistic, >> or did you revert your policy of only supporting one version at a >> time? > > > I'm not sure what you mean. We are certainly planning to make > backward-incompatible changes to be released in the Fall, which suggests a > new major version. We are trying out a roughly six month major release > cycle (2.0 in Winter, 3.0 next Summer). We will backport fixes to 1.0 as > long as it is tenable (1.x branch already has a few fixes), which may not be > past Winter. > > -MinRK > >> >> >> Aaron Meurer >> >> On Thu, Aug 8, 2013 at 7:35 PM, Fernando Perez >> wrote: >> > Hi all, >> > >> > I am incredibly thrilled, on behalf of the amazing IPython Dev Team, >> > to announce the official release of IPython 1.0 today, an effort >> > nearly 12 years in the making. The previous version (0.13) was >> > released on June 30, 2012, and in this development cycle we had: >> > >> > ~12 months of work. >> > ~700 pull requests merged. >> > ~600 issues closed (non-pull requests). >> > contributions from ~150 authors. >> > ~4000 commits. >> > >> > >> > # A little context >> > >> > What does "1.0" mean for IPython? Obviously IPython has been a staple >> > of the scientific Python community for years, and we've made every >> > effort to make it a robust and production ready tool for a long time, >> > so what exactly do we mean by tagging this particular release as 1.0? >> > Basically, we feel that the core design of IPython, and the scope of >> > the project, is where we want it to be. >> > >> > What we have today is what we consider a reasonably complete, design- >> > and scope-wise, IPython 1.0: an architecture for interactive >> > computing, that can drive kernels in a number of ways using a >> > well-defined protocol, and rich and powerful clients that let users >> > control those kernels effectively. Our different clients serve >> > different needs, with the old workhorse of the terminal still being >> > very useful, but much of our current development energy going into the >> > Notebook, obviously. The Notebook enables interactive exploration to >> > become Literate Computing, bridging the gaps from individual work to >> > collaboration and publication, all with an open file format that is a >> > direct record of the underlying communication protocol. >> > >> > There are obviously plenty of open issues (many of them very >> > important) that need fixing, and large and ambitious new lines of >> > development for the years to come. But the work of the last four >> > years, since the summer of 2009 when Brian Granger was able to devote >> > a summer (thanks to funding from the NiPy project - nipy.org) to >> > refactoring the old IPython core code, finally opened up or >> > infrastructure for real innovation. 
>> > [snip: remainder of the quoted 1.0 announcement, already quoted in full earlier in this thread] >> > In particular, the Alfred P.
Sloan Foundation today lets us >> > have a solid team working full-time on the project, and without the >> > support of Enthought Inc at multiple points in our history, we >> > wouldn't be where we are today. >> > >> > The full list of our support is here: >> > >> > http://ipython.org/index.html#support >> > >> > >> > Thanks to everyone! Please enjoy IPython 1.0, and report all bugs as >> > usual! >> > >> > Fernando, on behalf of the IPython Dev Team. >> > >> > -- >> > Fernando Perez (@fperez_org; http://fperez.org) >> > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) >> > fernando.perez-at-berkeley: contact me here for any direct mail >> > -- >> > Fernando Perez (@fperez_org; http://fperez.org) >> > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) >> > fernando.perez-at-berkeley: contact me here for any direct mail >> > _______________________________________________ >> > IPython-dev mailing list >> > IPython-dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From damianavila at gmail.com Sat Aug 10 01:45:25 2013 From: damianavila at gmail.com (=?UTF-8?B?RGFtacOhbiBBdmlsYQ==?=) Date: Sat, 10 Aug 2013 02:45:25 -0300 Subject: [IPython-dev] It is alive... In-Reply-To: References: Message-ID: <5205D375.4080509@gmail.com> A lot of typos in the previous message... sorry for that, it is too late here... ;-) Anyway, you will get the main message ;-) Saludos. Dami?n. From pi at berkeley.edu Sat Aug 10 05:17:07 2013 From: pi at berkeley.edu (Paul Ivanov) Date: Sat, 10 Aug 2013 02:17:07 -0700 Subject: [IPython-dev] It is alive... In-Reply-To: <5205C8FF.30005@gmail.com> References: <5205C8FF.30005@gmail.com> Message-ID: <20130810091707.GC21911@HbI-OTOH.berkeley.edu> Dami?n Avila, on 2013-08-10 02:00, wrote: > You can see a short demo here: http://www.youtube.com/watch?v=bCb2HJy-yc0 so... ...angry. ;) as I said on twitter, looks great, Dami?n, great job! -- _ / \ A* \^ - ,./ _.`\\ / \ / ,--.S \/ \ / `"~,_ \ \ __o ? _ \<,_ /:\ --(_)/-(_)----.../ | \ --------------.......J Paul Ivanov http://pirsquared.org From fperez.net at gmail.com Sat Aug 10 15:25:26 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 10 Aug 2013 12:25:26 -0700 Subject: [IPython-dev] It is alive... In-Reply-To: <20130810091707.GC21911@HbI-OTOH.berkeley.edu> References: <5205C8FF.30005@gmail.com> <20130810091707.GC21911@HbI-OTOH.berkeley.edu> Message-ID: On Sat, Aug 10, 2013 at 2:17 AM, Paul Ivanov wrote: > Dami?n Avila, on 2013-08-10 02:00, wrote: >> You can see a short demo here: http://www.youtube.com/watch?v=bCb2HJy-yc0 > > so... > > ...angry. > > ;) > > as I said on twitter, looks great, Dami?n, great job! Indeed, that looks absolutely great!! Excellent job, Dami?n. I'll need to try it out soon. Cheers, f From damianavila at gmail.com Sun Aug 11 15:01:51 2013 From: damianavila at gmail.com (=?ISO-8859-1?Q?Dami=E1n_Avila?=) Date: Sun, 11 Aug 2013 16:01:51 -0300 Subject: [IPython-dev] It is alive... 
In-Reply-To: References: <5205C8FF.30005@gmail.com> <20130810091707.GC21911@HbI-OTOH.berkeley.edu> Message-ID: <5207DF9F.1060303@gmail.com> El 10/08/13 16:25, Fernando Perez escribi?: > On Sat, Aug 10, 2013 at 2:17 AM, Paul Ivanov wrote: >> Dami?n Avila, on 2013-08-10 02:00, wrote: >>> You can see a short demo here: http://www.youtube.com/watch?v=bCb2HJy-yc0 >> so... >> >> ...angry. >> >> ;) >> >> as I said on twitter, looks great, Dami?n, great job! > Indeed, that looks absolutely great!! Excellent job, Dami?n. I'll > need to try it out soon. > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev For now, the code lives inside a %%javascript cell in the notebook (easy to test things... in fact I developed this prototype from a empty cell in the notebook ["IPython-Driven Development"], hehe). I have to fix some things, and later encapsulate the code in a custom.js/custom.css extension... In that way, you and others can test it, and give me feedback to, later, let me make a strong PR ;-) I will let you know when it is available as extension... Saludos. Dami?n. From jrjohansson at gmail.com Sun Aug 11 21:38:25 2013 From: jrjohansson at gmail.com (jrjohansson at gmail.com) Date: Mon, 12 Aug 2013 10:38:25 +0900 Subject: [IPython-dev] Problem with IPython.parallel in 1.0 Message-ID: Hi The following simple example of using DirectView from IPython.parallel works fine in 0.13.2 but fails in 1.0. Has there been any API changes in 1.0 that could affect this example? Any help on how to modify the example to make it work with 1.0 would be highly appreciated! http://nbviewer.ipython.org/6207703 Cheers Rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Mon Aug 12 13:46:27 2013 From: benjaminrk at gmail.com (Min RK) Date: Mon, 12 Aug 2013 10:46:27 -0700 Subject: [IPython-dev] Problem with IPython.parallel in 1.0 In-Reply-To: References: Message-ID: <528194BF-EAC5-4F1B-A8DA-712A0ED5B3CB@gmail.com> Not an API change, just a bug. Will look into it. -MinRK On Aug 11, 2013, at 18:38, "jrjohansson at gmail.com" wrote: > Hi > > The following simple example of using DirectView from IPython.parallel works fine in 0.13.2 but fails in 1.0. Has there been any API changes in 1.0 that could affect this example? Any help on how to modify the example to make it work with 1.0 would be highly appreciated! > > http://nbviewer.ipython.org/6207703 > > Cheers > Rob > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.m.bray at gmail.com Mon Aug 12 15:36:50 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 12 Aug 2013 15:36:50 -0400 Subject: [IPython-dev] xkcd plots on Matplotlib gallery ?? In-Reply-To: References: Message-ID: On Wed, Aug 7, 2013 at 1:59 PM, Matt Davis wrote: > You may have accidentally stumbled onto > http://matplotlib.org/xkcd/gallery.html. > > - Matt Looking at all those, it occurs to me that I think for the xkcd renderer there should be a little less random wobble in lines the closer they are to straight lines. Looking at some of Randall Munroe's actual plots he's not really *that* bad at drawing straight lines. I'm too busy today but at some point I might make a PR for this.... 
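One hypothetical way to implement that idea (a sketch, not matplotlib's actual xkcd path filter): jitter a polyline, but scale the noise by how far the points already deviate from the straight chord between the endpoints, so nearly straight lines stay nearly straight.

import numpy as np

def xkcd_wobble(x, y, scale=1.0, seed=0):
    # Illustrative helper: hand-drawn-style jitter whose amplitude shrinks
    # for segments that are already close to a straight line.
    rng = np.random.RandomState(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # How far does the curve stray from the straight chord between its endpoints?
    chord_x = np.linspace(x[0], x[-1], len(x))
    chord_y = np.linspace(y[0], y[-1], len(y))
    deviation = np.hypot(x - chord_x, y - chord_y).max()
    span = max(np.ptp(x), np.ptp(y), 1e-12)
    # A perfectly straight segment (deviation ~ 0) gets almost no wobble.
    amplitude = scale * min(1.0, deviation / (0.05 * span))
    return (x + rng.normal(0.0, amplitude, x.shape),
            y + rng.normal(0.0, amplitude, y.shape))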
Erik > On Aug 7, 2013, at 10:54 AM, Nitin Borwankar wrote: > > I was about to show some enterprise folks at an F100 company how IPy NB is > awesome for data science visualization - I went to matplotlib gallery and > saw to my horror that *all* gallery examples have been replaced with xkcd > plot styling. > > Or... was the site hacked? That was another thought. > > This makes it pretty much impossible to show to exactly those constituencies > that will make IPy NB massively mainstream dues to its agile visualization > power in data science. Well its possible to show it - but there goes the > chance of being taken seriously as it is quite cartoonish. I enjoy it but > does it have to be the default styling - it really detracts from the clean > visualization that matplotlib brings. > > On the other hand, if it was hacked - apologies and sympathies. > > Nitin > > > ------------------------------------------------------------------ > Nitin Borwankar > nborwankar at gmail.com > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > From darcamo at gmail.com Mon Aug 12 16:50:23 2013 From: darcamo at gmail.com (Darlan Cavalcante Moreira) Date: Mon, 12 Aug 2013 17:50:23 -0300 Subject: [IPython-dev] Parallel: Report progress from the engines Message-ID: <87vc3a4os0.fsf@gmail.com> Hi list, I'm using the fantastic parallel computing machinery from IPython, which works really well. However, the simulations I'm performing take a few hours to finish and I'd like to see the progress from each engine. The number of tasks is small (usually from five to eight) such that only knowing how many tasks have finished is not very useful for me. I need to track the progress of each individual task. To give an example, with the multiprocessing module I could create a Queue that I can pass as argument to the task that will run in the other process. My tasks are basically a for loop running a few lines of code in each iteration. Therefore, the task can put the iteration number in the Queue indicating its progress, which is then read (every 5 seconds) from another process that finally prints the progress in the terminal. However, I have no idea how I can do this with the IPython parallel machinery. Is there any way for the engines to send data back (to the controller?) that I can use to track their progress? -- Darlan Cavalcante From sychan at lbl.gov Mon Aug 12 19:10:33 2013 From: sychan at lbl.gov (Stephen Chan) Date: Mon, 12 Aug 2013 16:10:33 -0700 Subject: [IPython-dev] Reading in passwords in IPython 1.0 Message-ID: Hi, MinRK had an extension module that implemented raw_input() and getpass() for the pre-1.0 IPython Notebook interface. In 1.0, raw_input() works nicely in a browser, but getpass.getpass() still goes to the stdin in the terminal window where the notebook was started. Is there a built in way to get a non-echoing password prompt in the browser interface, or is the nbinput.nbgetpass() function still the only game in town? 
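For background, the non-echoing part is the sticking point: getpass() only needs ordinary input plus echo control on a real TTY, which the notebook's browser-side stdin does not provide. A rough stdlib sketch of that echo trick (POSIX termios; it works in a terminal session, not through the notebook):

import sys
import termios

def prompt_no_echo(prompt="Password: "):
    # Sketch of the echo suppression getpass() performs: requires sys.stdin
    # to be attached to a real terminal.
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    new = list(old)
    new[3] = new[3] & ~termios.ECHO      # index 3 holds the local mode flags
    sys.stdout.write(prompt)
    sys.stdout.flush()
    try:
        termios.tcsetattr(fd, termios.TCSADRAIN, new)
        value = sys.stdin.readline().rstrip("\n")
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
        sys.stdout.write("\n")
    return value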
Steve From fperez.net at gmail.com Mon Aug 12 19:25:07 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 12 Aug 2013 16:25:07 -0700 Subject: [IPython-dev] Reading in passwords in IPython 1.0 In-Reply-To: References: Message-ID: getpass() relies on fairly low-level control of a TTY, so it won't quite work out of the box. The ideal solution would be for us to soup-up our STDIN replacement to the point where it behaves like a tty at least for the purposes of echo control, which is the only thing that getpass() needs to do. This has been an open issue for a while: https://github.com/ipython/ipython/issues/854 Now that we have proper stdin support, it's not as hard to finish fixing it as when the original issue was filed. From fperez.net at gmail.com Mon Aug 12 19:34:29 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 12 Aug 2013 16:34:29 -0700 Subject: [IPython-dev] Parallel: Report progress from the engines In-Reply-To: <87vc3a4os0.fsf@gmail.com> References: <87vc3a4os0.fsf@gmail.com> Message-ID: Hi, these two notebooks provide examples of monitoring a parallel run (in this case an MPI one, but you can adapt this to your use case). The first uses engine-driven data publication, the second uses client-side polling in a thread; each approach has its pros and cons: http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/master/examples/parallel/InteractiveMPI-publish-data.ipynb http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/master/examples/parallel/InteractiveMPI.ipynb And here is a bit more info on the structure of async results that can be used for timing/monitoring: http://ipython.org/ipython-doc/rel-1.0.0/parallel/asyncresult.html#timing I seem to recall Min had some more examples of this, but can't seem to find them right now. Cheers, f On Mon, Aug 12, 2013 at 1:50 PM, Darlan Cavalcante Moreira wrote: > > Hi list, > > I'm using the fantastic parallel computing machinery from IPython, which > works really well. However, the simulations I'm performing take a few > hours to finish and I'd like to see the progress from each engine. The > number of tasks is small (usually from five to eight) such that only > knowing how many tasks have finished is not very useful for me. I need > to track the progress of each individual task. > > To give an example, with the multiprocessing module I could create a > Queue that I can pass as argument to the task that will run in the other > process. My tasks are basically a for loop running a few lines of code > in each iteration. Therefore, the task can put the iteration number in > the Queue indicating its progress, which is then read (every 5 seconds) > from another process that finally prints the progress in the terminal. > > However, I have no idea how I can do this with the IPython parallel > machinery. Is there any way for the engines to send data back (to the > controller?) that I can use to track their progress? > > > -- > Darlan Cavalcante > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) 
fernando.perez-at-berkeley: contact me here for any direct mail From benjaminrk at gmail.com Tue Aug 13 15:08:26 2013 From: benjaminrk at gmail.com (MinRK) Date: Tue, 13 Aug 2013 12:08:26 -0700 Subject: [IPython-dev] Parallel: Report progress from the engines In-Reply-To: References: <87vc3a4os0.fsf@gmail.com> Message-ID: You can also monitor the progress with simple print statements by viewing ar.stdout On Mon, Aug 12, 2013 at 4:34 PM, Fernando Perez wrote: > Hi, > > these two notebooks provide examples of monitoring a parallel run (in > this case an MPI one, but you can adapt this to your use case). The > first uses engine-driven data publication, the second uses client-side > polling in a thread; each approach has its pros and cons: > > > http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/master/examples/parallel/InteractiveMPI-publish-data.ipynb > > > http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/master/examples/parallel/InteractiveMPI.ipynb > > And here is a bit more info on the structure of async results that can > be used for timing/monitoring: > > http://ipython.org/ipython-doc/rel-1.0.0/parallel/asyncresult.html#timing > > I seem to recall Min had some more examples of this, but can't seem to > find them right now. > > Cheers, > > f > > On Mon, Aug 12, 2013 at 1:50 PM, Darlan Cavalcante Moreira > wrote: > > > > Hi list, > > > > I'm using the fantastic parallel computing machinery from IPython, which > > works really well. However, the simulations I'm performing take a few > > hours to finish and I'd like to see the progress from each engine. The > > number of tasks is small (usually from five to eight) such that only > > knowing how many tasks have finished is not very useful for me. I need > > to track the progress of each individual task. > > > > To give an example, with the multiprocessing module I could create a > > Queue that I can pass as argument to the task that will run in the other > > process. My tasks are basically a for loop running a few lines of code > > in each iteration. Therefore, the task can put the iteration number in > > the Queue indicating its progress, which is then read (every 5 seconds) > > from another process that finally prints the progress in the terminal. > > > > However, I have no idea how I can do this with the IPython parallel > > machinery. Is there any way for the engines to send data back (to the > > controller?) that I can use to track their progress? > > > > > > -- > > Darlan Cavalcante > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > fernando.perez-at-berkeley: contact me here for any direct mail > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From darcamo at gmail.com Tue Aug 13 23:07:20 2013 From: darcamo at gmail.com (Darlan Cavalcante Moreira) Date: Wed, 14 Aug 2013 00:07:20 -0300 Subject: [IPython-dev] Parallel: Report progress from the engines In-Reply-To: References: <87vc3a4os0.fsf@gmail.com> Message-ID: <87haetkm1j.fsf@gmail.com> benjaminrk at gmail.com writes: Thank you Fernando and MinRK. I haven't used MPI before. 
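To make the two suggestions concrete, here is a rough, untested sketch of the pattern from those examples, assuming IPython 1.0, where data sent with publish_data shows up on the AsyncResult as ar.data and engine prints as ar.stdout:

    import time
    from IPython import parallel

    def long_task(n_iter):
        # runs on the engines
        from IPython.kernel.zmq.datapub import publish_data
        for i in range(n_iter):
            # ... one iteration of the real simulation ...
            publish_data({'progress': (i + 1.0) / n_iter})
            print "finished iteration %i" % (i + 1)   # visible via ar.stdout
        return 'done'

    rc = parallel.Client()
    view = rc[:]
    ar = view.apply_async(long_task, 100)
    while not ar.ready():
        time.sleep(5)
        print ar.data     # one dict per engine, e.g. [{'progress': 0.4}, ...]
        print ar.stdout   # accumulated prints from each engine
    print ar.get()
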
I'll need some time to dig through these examples, but it's a good start. I tried to run the notebooks, but I had a problem with the line below from IPython.kernel.zmq.datapub import publish_data which gives me an import error, since IPython.kernel is not used anymore. I could not find out where this function was moved to. Googling for publish_data I found the notebook below http://nbviewer.ipython.org/urls/raw.github.com/ellisonbg/ipython/8194b678147abb2d435a07714a9b3a0fdd99ca73/docs/examples/notebooks/publish_data.ipynb which confirms this function is exactly what I was looking for (as long as I can get the value of the published variable while the task is still running). Is it still in IPython? For now MinRK suggestion seems like a good workaround. I'll have to change my code to use asynchronous tasks instead of sync tasks as I do now, but that is easy enough. -- Darlan > You can also monitor the progress with simple print statements by viewing > ar.stdout > > > On Mon, Aug 12, 2013 at 4:34 PM, Fernando Perez wrote: > >> Hi, >> >> these two notebooks provide examples of monitoring a parallel run (in >> this case an MPI one, but you can adapt this to your use case). The >> first uses engine-driven data publication, the second uses client-side >> polling in a thread; each approach has its pros and cons: >> >> >> http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/master/examples/parallel/InteractiveMPI-publish-data.ipynb >> >> >> http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/master/examples/parallel/InteractiveMPI.ipynb >> >> And here is a bit more info on the structure of async results that can >> be used for timing/monitoring: >> >> http://ipython.org/ipython-doc/rel-1.0.0/parallel/asyncresult.html#timing >> >> I seem to recall Min had some more examples of this, but can't seem to >> find them right now. >> >> Cheers, >> >> f >> >> On Mon, Aug 12, 2013 at 1:50 PM, Darlan Cavalcante Moreira >> wrote: >> > >> > Hi list, >> > >> > I'm using the fantastic parallel computing machinery from IPython, which >> > works really well. However, the simulations I'm performing take a few >> > hours to finish and I'd like to see the progress from each engine. The >> > number of tasks is small (usually from five to eight) such that only >> > knowing how many tasks have finished is not very useful for me. I need >> > to track the progress of each individual task. >> > >> > To give an example, with the multiprocessing module I could create a >> > Queue that I can pass as argument to the task that will run in the other >> > process. My tasks are basically a for loop running a few lines of code >> > in each iteration. Therefore, the task can put the iteration number in >> > the Queue indicating its progress, which is then read (every 5 seconds) >> > from another process that finally prints the progress in the terminal. >> > >> > However, I have no idea how I can do this with the IPython parallel >> > machinery. Is there any way for the engines to send data back (to the >> > controller?) that I can use to track their progress? >> > >> > >> > -- >> > Darlan Cavalcante >> > _______________________________________________ >> > IPython-dev mailing list >> > IPython-dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> >> >> -- >> Fernando Perez (@fperez_org; http://fperez.org) >> fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) 
>> fernando.perez-at-berkeley: contact me here for any direct mail >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -- Darlan Cavalcante Moreira darcamo at gmail.com From fperez.net at gmail.com Tue Aug 13 23:16:35 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 13 Aug 2013 20:16:35 -0700 Subject: [IPython-dev] Parallel: Report progress from the engines In-Reply-To: <87haetkm1j.fsf@gmail.com> References: <87vc3a4os0.fsf@gmail.com> <87haetkm1j.fsf@gmail.com> Message-ID: Hi Darlan, On Tue, Aug 13, 2013 at 8:07 PM, Darlan Cavalcante Moreira wrote: > > benjaminrk at gmail.com writes: > > Thank you Fernando and MinRK. > > I haven't used MPI before. I'll need some time to dig through these > examples, but it's a good start. > > I tried to run the notebooks, but I had a problem with the line below > > from IPython.kernel.zmq.datapub import publish_data > > which gives me an import error, since IPython.kernel is not used > anymore. I could not find out where this function was moved to. Googling > for publish_data I found the notebook below > http://nbviewer.ipython.org/urls/raw.github.com/ellisonbg/ipython/8194b678147abb2d435a07714a9b3a0fdd99ca73/docs/examples/notebooks/publish_data.ipynb > which confirms this function is exactly what I was looking for (as long > as I can get the value of the published variable while the task is still > running). Is it still in IPython? Yes, in fact the above is the correct location for IPython 1.0. If you are using IPython 0.13.x, you'll need to upgrade to 1.0, as the above functionality was not present in the 0.13.x series. Cheers, f From fperez.net at gmail.com Tue Aug 13 23:17:14 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 13 Aug 2013 20:17:14 -0700 Subject: [IPython-dev] Parallel: Report progress from the engines In-Reply-To: References: <87vc3a4os0.fsf@gmail.com> <87haetkm1j.fsf@gmail.com> Message-ID: ps - note that you don't need MPI at all, in case that wasn't clear. Those examples happen to be MPI ones, but the publish_data API has nothing to do with MPI. On Tue, Aug 13, 2013 at 8:16 PM, Fernando Perez wrote: > Hi Darlan, > > On Tue, Aug 13, 2013 at 8:07 PM, Darlan Cavalcante Moreira > wrote: >> >> benjaminrk at gmail.com writes: >> >> Thank you Fernando and MinRK. >> >> I haven't used MPI before. I'll need some time to dig through these >> examples, but it's a good start. >> >> I tried to run the notebooks, but I had a problem with the line below >> >> from IPython.kernel.zmq.datapub import publish_data >> >> which gives me an import error, since IPython.kernel is not used >> anymore. I could not find out where this function was moved to. Googling >> for publish_data I found the notebook below >> http://nbviewer.ipython.org/urls/raw.github.com/ellisonbg/ipython/8194b678147abb2d435a07714a9b3a0fdd99ca73/docs/examples/notebooks/publish_data.ipynb >> which confirms this function is exactly what I was looking for (as long >> as I can get the value of the published variable while the task is still >> running). Is it still in IPython? > > Yes, in fact the above is the correct location for IPython 1.0. If > you are using IPython 0.13.x, you'll need to upgrade to 1.0, as the > above functionality was not present in the 0.13.x series. 
> > Cheers, > > f -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From johanbeke at hotmail.com Thu Aug 15 15:13:09 2013 From: johanbeke at hotmail.com (Johan Beke) Date: Thu, 15 Aug 2013 19:13:09 +0000 Subject: [IPython-dev] py2tex Message-ID: Hello All, As asked by Matthias B. on the Ipython-user mailing list, I sent hereby a copy of my e-mail sent to the user list: I wrote a little update of the py2tex extension: https://gist.github.com/4032651 (Not yet tested with latest version of IPython because I dit not update yet, but I believe it should work) Changes: - unum.units are always set in straight font + variables always in italic font (some bug in previous version) - assignment of a variable with units is detected by %%tex previous version e.g. a=1.0*m/s would output a=1.0*m/s = 1.0 m/s output now: a=1.0 m/s (I changed this because lines cells with each line %%tex or %%texnr where not handy) Possible future changes: - support for function calls like someclass.somefunction(arguments ...) I'm using this extension at my office with success and must say it is an easy replacement for mathcad. Doing unit aware calculations, with nice representation. Johan PS: installation link on the extension index page was not yet changed to this latest version. PS2: comments, idea's and bug reports are welcome. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Aug 15 15:52:21 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 15 Aug 2013 12:52:21 -0700 Subject: [IPython-dev] py2tex In-Reply-To: References: Message-ID: Great, very nice! I suggest you make it into a little named repo, along with an example notebook that illustrates its use. You should never discount the effect an immediate visual demo has on appeal for new users. Best f On Thu, Aug 15, 2013 at 12:13 PM, Johan Beke wrote: > Hello All, > > As asked by Matthias B. on the Ipython-user mailing list, I sent hereby a > copy of my e-mail sent to the user list: > > I wrote a little update of the py2tex extension: > https://gist.github.com/4032651 > (Not yet tested with latest version of IPython because I dit not update yet, > but I believe it should work) > > Changes: > - unum.units are always set in straight font + variables always in italic > font (some bug in previous version) > - assignment of a variable with units is detected by %%tex > previous version e.g. a=1.0*m/s would output a=1.0*m/s = 1.0 m/s > output now: a=1.0 m/s > (I changed this because lines cells with each line %%tex or %%texnr where > not handy) > > Possible future changes: > - support for function calls like someclass.somefunction(arguments ...) > > I'm using this extension at my office with success and must say it is an > easy replacement for mathcad. Doing unit aware calculations, with nice > representation. > > Johan > > PS: installation link on the extension index page was not yet changed to > this latest version. > PS2: comments, idea's and bug reports are welcome. > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) 
fernando.perez-at-berkeley: contact me here for any direct mail From takowl at gmail.com Thu Aug 15 18:45:36 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Thu, 15 Aug 2013 15:45:36 -0700 Subject: [IPython-dev] Dropping support for Python 2.6 and 3.2 Message-ID: Hi all, I'll shortly be merging pull request #4002, which drops support for Python 2.6 and Python 3.2 from IPython master. In the unlikely event that you're running IPython from master but still using one of those Python versions, now is the time to switch to our 1.x branch. The changes in that pull request will immediately not work on Python 2.6. Before we release 2.0 around the end of this year, we'll move to a single codebase which runs on Python 2.7 and 3.3 without running 2to3. At that point, it will also stop working on Python 3.2. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyrille.rossant at gmail.com Fri Aug 16 05:50:21 2013 From: cyrille.rossant at gmail.com (Cyrille Rossant) Date: Fri, 16 Aug 2013 10:50:21 +0100 Subject: [IPython-dev] Python and Javascript Message-ID: Hi, I'm involved in the development of a new OpenGL-based visualization library in Python (https://github.com/vispy/vispy). We consider adding a web backend in the future, and possibly integrating it within the IPython notebook. I was wondering if you had planned, as part of the development of IPython 2.0, to design a kind of "standardized" way to let Python and Javascript communicate (protocol messaging? API? etc.). In other words, how much IPython-specific will your protocols and interfaces be? I imagine it would be great to let non IPython-related projects develop Python-Javascript interfaces that are fully compatible with IPython. It would make it very easy to integrate these projects in the IPython notebook, and it would prevent every project from reinventing the wheel every time. Best, Cyrille From ellisonbg at gmail.com Fri Aug 16 14:11:54 2013 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 16 Aug 2013 11:11:54 -0700 Subject: [IPython-dev] Python and Javascript In-Reply-To: References: Message-ID: Cyrille, > I'm involved in the development of a new OpenGL-based visualization > library in Python (https://github.com/vispy/vispy). We consider adding > a web backend in the future, and possibly integrating it within the > IPython notebook. Very cool! > I was wondering if you had planned, as part of the development of > IPython 2.0, to design a kind of "standardized" way to let Python and > Javascript communicate (protocol messaging? API? etc.). In other > words, how much IPython-specific will your protocols and interfaces > be? Yes and no. Yes. I think our vision is that the IPython message spec would be that universal way of communicating from JavaScript to Python. We are trying to design it in a way that is independent of the backend language and can work in a variety of settings - including non-Python kernels. We are still working on the design of all the working pieces for this part of the spec though. Once we are further along we will solicit feedback on the ipython-dev list. No. We don't have any plans other than our message spec to standardize this type of thing. It is just beyond the scope of the project. Also, I think that if this was attempted, the result would be extremely similar to our message spec - at least if it has the same functionality - there is only so many ways of shoving this data over JSON+WebSockets. 
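For readers who have not seen the spec, every message on those channels shares the same four-part JSON layout; the abridged, hypothetical execute_request below (field values made up, several content keys omitted) shows the general shape -- the messaging docs remain the authoritative reference:

    # abridged sketch of one protocol message, not the full spec
    execute_request = {
        'header': {
            'msg_id':   'a1b2c3d4',          # made-up values
            'session':  'f00dcafe',
            'username': 'cyrille',
            'msg_type': 'execute_request',
        },
        'parent_header': {},   # header of the message being replied to, if any
        'metadata': {},
        'content': {
            'code': 'print(1 + 1)',
            'silent': False,
        },
    }
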
> I imagine it would be great to let non IPython-related projects > develop Python-Javascript interfaces that are fully compatible with > IPython. It would make it very easy to integrate these projects in the > IPython notebook, and it would prevent every project from reinventing > the wheel every time. I agree that projects shoudn't have to re-invent the wheel. If it turns out that our message spec isn't sufficient, to prevent this from happening, we will change it. That will be one of the most important tests of the design. Cheers, Brian > Best, > Cyrille > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From lsinger at caltech.edu Fri Aug 16 15:03:10 2013 From: lsinger at caltech.edu (Leo Singer) Date: Fri, 16 Aug 2013 12:03:10 -0700 Subject: [IPython-dev] Running notebook server and kernels as different users Message-ID: Hello, I am helping to set up a notebook server as part of the open data initiative for LIGO (http://ligo.org/), a physics experiment on which I am working as a graduate student. The idea is to have a runnable, on-demand tutorial to show users how to retrieve and manipulate our experiment's data. I have a question about security. We'd like to activate SSL, but since the notebook server and the Python kernels run as the same users, I am concerned that users would have the ability to read the server's private key and then compromise it. Almost as bad, users could send a kill signal to the notebook server. Is there a way to have the notebook server start as one user and then run the kernels as another user, to protect the notebook server itself from such attacks? Thanks, Leo Singer Graduate Student @ LIGO-Caltech From benjaminrk at gmail.com Fri Aug 16 15:14:37 2013 From: benjaminrk at gmail.com (MinRK) Date: Fri, 16 Aug 2013 12:14:37 -0700 Subject: [IPython-dev] Running notebook server and kernels as different users In-Reply-To: References: Message-ID: On Fri, Aug 16, 2013 at 12:03 PM, Leo Singer wrote: Hello, > > I am helping to set up a notebook server as part of the open data > initiative for LIGO (http://ligo.org/), a physics experiment on which I > am working as a graduate student. The idea is to have a runnable, on-demand > tutorial to show users how to retrieve and manipulate our experiment's data. > > I have a question about security. We'd like to activate SSL, but since the > notebook server and the Python kernels run as the same users, I am > concerned that users would have the ability to read the server's private > key and then compromise it. Almost as bad, users could send a kill signal > to the notebook server. > > Is there a way to have the notebook server start as one user and then run > the kernels as another user, to protect the notebook server itself from > such attacks? > This is not yet supported by IPython, but you could implement it with a custom KernelManager, though I would not actually recommend doing that. At this point, the notebook is a fundamentally single-user application, where the notebook server and kernel are the same user on the same machine. There are tools like ipydra that spin up a * server* for each user, which is likely the simplest way to go for now. 
-MinRK > Thanks, > Leo Singer > Graduate Student @ LIGO-Caltech > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Fri Aug 16 15:35:02 2013 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 16 Aug 2013 12:35:02 -0700 Subject: [IPython-dev] Running notebook server and kernels as different users In-Reply-To: References: Message-ID: There is also Jiffylab: https://github.com/ptone/jiffylab On Fri, Aug 16, 2013 at 12:14 PM, MinRK wrote: > On Fri, Aug 16, 2013 at 12:03 PM, Leo Singer wrote: >> >> Hello, >> >> I am helping to set up a notebook server as part of the open data >> initiative for LIGO (http://ligo.org/), a physics experiment on which I am >> working as a graduate student. The idea is to have a runnable, on-demand >> tutorial to show users how to retrieve and manipulate our experiment's data. >> >> I have a question about security. We'd like to activate SSL, but since the >> notebook server and the Python kernels run as the same users, I am concerned >> that users would have the ability to read the server's private key and then >> compromise it. Almost as bad, users could send a kill signal to the notebook >> server. >> >> Is there a way to have the notebook server start as one user and then run >> the kernels as another user, to protect the notebook server itself from such >> attacks? > > This is not yet supported by IPython, but you could implement it with a > custom KernelManager, though I would not actually recommend doing that. At > this point, the notebook is a fundamentally single-user application, where > the notebook server and kernel are the same user on the same machine. There > are tools like ipydra that spin up a server for each user, which is likely > the simplest way to go for now. > > -MinRK >> >> >> Thanks, >> Leo Singer >> Graduate Student @ LIGO-Caltech >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From jason-sage at creativetrax.com Fri Aug 16 15:36:36 2013 From: jason-sage at creativetrax.com (Jason Grout) Date: Fri, 16 Aug 2013 14:36:36 -0500 Subject: [IPython-dev] Running notebook server and kernels as different users In-Reply-To: References: Message-ID: <520E7F44.4080006@creativetrax.com> On 8/16/13 2:03 PM, Leo Singer wrote: > Hello, > > I am helping to set up a notebook server as part of the open data > initiative for LIGO (http://ligo.org/), a physics experiment on which > I am working as a graduate student. The idea is to have a runnable, > on-demand tutorial to show users how to retrieve and manipulate our > experiment's data. > > I have a question about security. We'd like to activate SSL, but > since the notebook server and the Python kernels run as the same > users, I am concerned that users would have the ability to read the > server's private key and then compromise it. Almost as bad, users > could send a kill signal to the notebook server. 
> > Is there a way to have the notebook server start as one user and then > run the kernels as another user, to protect the notebook server > itself from such attacks? > We do this with the Sage Cell Server [1], which uses the IPython infrastructure (but not most of the IPython notebook code). We wrote TrustedKernelManager and UntrustedKernelManager classes that basically forward all kernel manager requests between the trusted (web server) account and the untrusted (restricted worker) account over an ssh or zeromq link. Feel free to look at the source code, if you like: https://github.com/sagemath/sagecell But you can do the SSL part without compromising security. Just run HAProxy or nginx as a reverse proxy in front of the notebook, and have your SSL terminate at HAProxy or nginx. Another very common thing is to set up stunnel in front of your internal unencrypted server. In each of these situations, the SSL will be decrypted by a reverse proxy running on your computer, and then connections will be forwarded (unencrypted, but only locally) on the local computer to the IPython notebook. Thanks, Jason [1] https://sagecell.sagemath.org From fperez.net at gmail.com Fri Aug 16 16:57:13 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 16 Aug 2013 13:57:13 -0700 Subject: [IPython-dev] Python and Javascript In-Reply-To: References: Message-ID: Hi Cyrille, just to add to what Brian said, here at Berkeley we're just getting started with a new grant to use the IPython/JS machinery for interactive visualization of neuroscience data (with emphasis on MRI, mainly functional and diffusion). We'll be working closely with a team at Stanford's imaging center on this over the next few months, and we'll post things here as they evolve. As we start having concrete implementation discussions, some of that will probably bubble up to our weekly public 'lab meetings', and we'd be happy to have feedback from other teams tackling these same questions. As Brian indicated, our view is to fix our protocols as needed until they can be used by a reasonably broad constituency, including non-python backends. Cheers, f From cyrille.rossant at gmail.com Sat Aug 17 17:30:17 2013 From: cyrille.rossant at gmail.com (Cyrille Rossant) Date: Sat, 17 Aug 2013 23:30:17 +0200 Subject: [IPython-dev] Python and Javascript In-Reply-To: References: Message-ID: > just to add to what Brian said, here at Berkeley we're just getting > started with a new grant to use the IPython/JS machinery for > interactive visualization of neuroscience data (with emphasis on MRI, > mainly functional and diffusion). We'll be working closely with a > team at Stanford's imaging center on this over the next few months, > and we'll post things here as they evolve. That sounds really exciting! I'll follow that project closely, as it is likely to be of great interest to Vispy. Cheers, Cyrille From jbzdak at gmail.com Sun Aug 18 18:42:52 2013 From: jbzdak at gmail.com (=?ISO-8859-2?Q?mgr_in=BF=2E_Jacek_Bzdak?=) Date: Mon, 19 Aug 2013 00:42:52 +0200 Subject: [IPython-dev] Ways to attach javascript files to ipython notebooks Message-ID: Hi, I'll be trying to use Ipython Notebook in conjunction with google maps to calculate some spatial distributions (in python) and display (in javascript) results on a map. I am looking for a way to attach static files to ipython notebook. I only found this question on StackOverflow: http://stackoverflow.com/questions/16852885/ipython-adding-javascript-scripts-to-ipython-notebook, which points me here. 
*What I have tried: * I've heard about ipython plugins, but I guess it is not what I want, because I don't want (yet) to create any generic solution. All my problems would also be solved if I could modify window object inside Javascript objects (like that): from IPython.display import display,Javascript Javascript("""bar = 5;""") Javascript("""alert(window.bar)""") In this case I would just load javascript files by hand from python code. *What I would like to do: * - Ability to attach javascript files from local filesystem (may be relative to ipython notebook location, but I'm ok with absolute files. I'm totally ok with loading these files globally, without any package management system, but I may use proper package management if needed.(this is required). - I also would have some way to manipulate global javascript context (to change output displayed on the map). (this is required). - Have ability to reload these files (when I'm making changes in js during developement). By reload I mean execute contents of these files again. (this would be nice to have) - Ability to generate some javascript on the fly (I might need it, but if it is hard nevermind). If I overlook it in Ipython Notebook Manual please point me to appropriate chapter. jb:) -------------- next part -------------- An HTML attachment was scrubbed... URL: From bussonniermatthias at gmail.com Mon Aug 19 03:57:17 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Mon, 19 Aug 2013 09:57:17 +0200 Subject: [IPython-dev] Ways to attach javascript files to ipython notebooks In-Reply-To: References: Message-ID: <4BD010F7-295F-48C8-B0DB-3CA8BA20D350@gmail.com> > > from IPython.display import display,Javascript > Javascript("""bar = 5;""") > Javascript("""alert(window.bar)""") > both should work. Maybe try explicit Javascript("""window.bar = 5;""") > In this case I would just load javascript files by hand from python code. > > What I would like to do: > Ability to attach javascript files from local filesystem (may be relative to ipython notebook location, but I'm ok with absolute files. I'm totally ok with loading these files globally, without any package management system, but I may use proper package management if needed.(this is required). use $.getScript(url) with absolute or (relative-prepended-with-/file prefix) > I also would have some way to manipulate global javascript context (to change output displayed on the map). (this is required). you will need to assign an id to the a div and use this as handle. > Have ability to reload these files (when I'm making changes in js during developement). By reload I mean execute contents of these files again. (this would be nice to have) Not much we can do for that, we are limited by browser compatibility. IE reload the page and re-execute, unless your script can be executed many times. > Ability to generate some javascript on the fly (I might need it, but if it is hard nevermind). This I don't see what you mean. -- Matthias -------------- next part -------------- An HTML attachment was scrubbed... URL: From wagnerfl at student.ethz.ch Mon Aug 19 09:34:48 2013 From: wagnerfl at student.ethz.ch (Florian M. Wagner) Date: Mon, 19 Aug 2013 15:34:48 +0200 Subject: [IPython-dev] ipcluster (LSF) timing (check if all engines are running) Message-ID: <52121EF8.2070305@student.ethz.ch> Hey all, I am using IPython.parallel on a large cluster, where controller and engines are launched via LSF. 
My current workflow is as follows: #!/bin/bash python pre_processing.py ipcluster start --profile=cluster --n=128 > ipcluster.log 2>&1 sleep 120 python main_computation.py python post_processing.py I am not entirely happy with this, since the 2 minutes are not always enough depending on the load of the cluster. I believe that there is a much more elegant way to launch the cluster and check if all the eninges are running, before proceeding with the main computation. I would highly appreciate any help. Best regards Florian -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Mon Aug 19 10:28:38 2013 From: benjaminrk at gmail.com (MinRK) Date: Mon, 19 Aug 2013 07:28:38 -0700 Subject: [IPython-dev] ipcluster (LSF) timing (check if all engines are running) In-Reply-To: <52121EF8.2070305@student.ethz.ch> References: <52121EF8.2070305@student.ethz.ch> Message-ID: Something like this should work: from IPython import parallel def wait_for_cluster(engines=1, **kwargs): """Wait for an IPython cluster to startup and register a minimum number of engines""" # wait for the controller to come up while True: try: client = parallel.Client(**kwargs) except IOError: print "No ipcontroller-client.json, waiting..." time.sleep(10) except TimeoutError: print "No controller, waiting..." time.sleep(10) if not engines: return # wait for engines to register print "waiting for %i engines" % engines, running = len(client) sys.stdout.write('.' * running) while running < engines: time.sleep(1) previous = running running = len(client) sys.stdout.write('.' * (running - previous)) On Mon, Aug 19, 2013 at 6:34 AM, Florian M. Wagner wrote: > Hey all, > > I am using IPython.parallel on a large cluster, where controller and > engines are launched via LSF. My current workflow is as follows: > > #!/bin/bash > python pre_processing.py > ipcluster start --profile=cluster --n=128 > ipcluster.log 2>&1 > sleep 120 > python main_computation.py > python post_processing.py > > > I am not entirely happy with this, since the 2 minutes are not always > enough depending on the load of the cluster. I believe that there is a much > more elegant way to launch the cluster and check if all the eninges are > running, before proceeding with the main computation. I would highly > appreciate any help. > > Best regards > Florian > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stan.west at nrl.navy.mil Mon Aug 19 17:38:08 2013 From: stan.west at nrl.navy.mil (Stan West) Date: Mon, 19 Aug 2013 17:38:08 -0400 Subject: [IPython-dev] Fix for MathJax installer bug on Windows and more Message-ID: <52129040.5080908@nrl.navy.mil> Greetings. While using IPython.external.mathjax to install a downloaded ZIP file on Windows, I encountered the exception "zipfile.BadZipfile: File is not a zip file". The first of the attached patches resolves that by opening the file in binary mode. (I'm submitting patches instead of a pull request because I'm not on GitHub yet.) While I was in the neighborhood, I made some other changes, offered in the remainder of the patches. The most significant changes are edits to the installation instructions for MathJax and to various strings in the mathjax module. I hope that you find these helpful. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- >From 1dbc75a51d1aba4eb5cefb460e0c8d9ff8b4115c Mon Sep 17 00:00:00 2001 From: Stan West Date: Mon, 19 Aug 2013 14:03:56 -0400 Subject: [PATCH] Open MathJax archive in binary mode. Resolves exception "zipfile.BadZipfile: File is not a zip file" on Windows. --- IPython/external/mathjax.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/IPython/external/mathjax.py b/IPython/external/mathjax.py index 2e499fe..0c40113 100644 --- a/IPython/external/mathjax.py +++ b/IPython/external/mathjax.py @@ -251,7 +251,7 @@ def main() : else : extractor = extract_tar # do it - install_mathjax(file=open(fname, "r"), replace=replace, extractor=extractor, dest=dest ) + install_mathjax(file=open(fname, "rb"), replace=replace, extractor=extractor, dest=dest ) else: install_mathjax(replace=replace, dest=dest) -- 1.8.3.msysgit.0 -------------- next part -------------- >From 255b834ef89d9a4125aa45a1cffe6dab48208b59 Mon Sep 17 00:00:00 2001 From: Stan West Date: Mon, 19 Aug 2013 14:09:53 -0400 Subject: [PATCH] Close MathJax archive when done installing. --- IPython/external/mathjax.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/IPython/external/mathjax.py b/IPython/external/mathjax.py index 0c40113..40456c9 100644 --- a/IPython/external/mathjax.py +++ b/IPython/external/mathjax.py @@ -251,7 +251,9 @@ def main() : else : extractor = extract_tar # do it - install_mathjax(file=open(fname, "rb"), replace=replace, extractor=extractor, dest=dest ) + with open(fname, "rb") as fobj: + install_mathjax(file=fobj, replace=replace, extractor=extractor, + dest=dest) else: install_mathjax(replace=replace, dest=dest) -- 1.8.3.msysgit.0 -------------- next part -------------- >From 7be03fdc4c62f76d845ba2d22441b2bb5a7e3bc1 Mon Sep 17 00:00:00 2001 From: Stan West Date: Mon, 19 Aug 2013 14:22:00 -0400 Subject: [PATCH] Remove superfluous print statement in mathjax.extract_zip. No longer prints initial extraction directory, consistent with extract_tar. --- IPython/external/mathjax.py | 1 - 1 file changed, 1 deletion(-) diff --git a/IPython/external/mathjax.py b/IPython/external/mathjax.py index 40456c9..d203ffe 100644 --- a/IPython/external/mathjax.py +++ b/IPython/external/mathjax.py @@ -130,7 +130,6 @@ def extract_zip( fd, dest ) : # it will be mathjax-MathJax-, rename to just mathjax d = os.path.join(parent, topdir) - print d os.rename(os.path.join(parent, topdir), dest) ## -- 1.8.3.msysgit.0 -------------- next part -------------- >From 12daf6f1ef1bc4498aad0c0620eb488ebdd95619 Mon Sep 17 00:00:00 2001 From: Stan West Date: Mon, 19 Aug 2013 14:53:42 -0400 Subject: [PATCH] Suppress undocumented "test" argument from MathJax installer help. --- IPython/external/mathjax.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/IPython/external/mathjax.py b/IPython/external/mathjax.py index d203ffe..d24196c 100644 --- a/IPython/external/mathjax.py +++ b/IPython/external/mathjax.py @@ -216,7 +216,8 @@ def main() : parser.add_argument( '-t', '--test', - action='store_true') + action='store_true', + help=argparse.SUPPRESS) parser.add_argument('tarball', help="the local tar/zip-ball containing mathjax", nargs='?', -- 1.8.3.msysgit.0 -------------- next part -------------- >From 812e27b73e183dac2f49aac1f59c7528d616fe1e Mon Sep 17 00:00:00 2001 From: Stan West Date: Mon, 19 Aug 2013 15:12:51 -0400 Subject: [PATCH] Edit strings in MathJax installer. Edit argument help, print statements, docs, and exceptions. 
--- IPython/external/mathjax.py | 61 ++++++++++++++++++++++----------------------- 1 file changed, 30 insertions(+), 31 deletions(-) diff --git a/IPython/external/mathjax.py b/IPython/external/mathjax.py index d24196c..a976918 100644 --- a/IPython/external/mathjax.py +++ b/IPython/external/mathjax.py @@ -1,6 +1,5 @@ #!/usr/bin/python -"""Utility function for installing MathJax javascript library into -the notebook's 'static' directory, for offline use. +"""Utility for installing the MathJax JavaScript library for offline use. Authors: @@ -77,7 +76,7 @@ def check_perms(dest, replace=False): existing_path = filter(os.path.exists, subpaths) last_writable = existing_path[-1] if not os.access(last_writable, os.W_OK): - raise IOError("Need have write access to %s" % parent) + raise IOError('Need to have write access to "%s"' % parent) not_existing = [ path for path in subpaths if path not in existing_path] # subfolder we will create, will obviously be writable # should we still considere checking separately that @@ -88,12 +87,12 @@ def check_perms(dest, replace=False): if os.path.exists(dest): if replace: if not os.access(dest, os.W_OK): - raise IOError("Need have write access to %s" % dest) - print "removing previous MathJax install" + raise IOError('Need to have write access to "%s"' % dest) + print "Removing previous MathJax installation." shutil.rmtree(dest) return True else: - print "offline MathJax apparently already installed" + print "Offline MathJax is apparently already installed." return False else : return True @@ -135,28 +134,27 @@ def extract_zip( fd, dest ) : ## def install_mathjax(tag='v2.0', dest=default_dest, replace=False, file=None, extractor=extract_tar ): - """Download and/or install MathJax for offline use. - - This will install mathjax to the 'static' dir in the IPython notebook - package, so it will fail if the caller does not have write access - to that location. + """Install MathJax for offline use, optionally downloading it. + This will fail if the caller does not have write access to the destination. MathJax is a ~15MB download, and ~150MB installed. Parameters ---------- - replace : bool [False] - Whether to remove and replace an existing install. - dest : str [path to default profile] - Where to locally install mathjax tag : str ['v2.0'] - Which tag to download. Default is 'v2.0', the current stable release, + Which tag to download. Default is 'v2.0', the current stable release, but alternatives include 'v1.1a' and 'master'. - file : file like object [ defualt to content of https://github.com/mathjax/MathJax/tarball/#{tag}] - File handle from which to untar/unzip/... mathjax + dest : str [path to default profile] + Where to locally install MathJax. + replace : bool [False] + Whether to remove and replace an existing install. + file : file-like object [None] + File handle from which to extract MathJax. If None, MathJax will be + downloaded from https://github.com/mathjax/MathJax/tarball/#{tag}. extractor : function - Method tu use to untar/unzip/... `file` + Method to use to untar/unzip/... `file`. The default handles tar + files. """ if not check_perms(dest, replace) : return @@ -164,25 +162,25 @@ def install_mathjax(tag='v2.0', dest=default_dest, replace=False, file=None, ext if file is None : # download mathjax mathjax_url = "https://github.com/mathjax/MathJax/tarball/%s" % tag - print "Downloading mathjax source from %s" % mathjax_url + print "Downloading MathJax source from %s." 
% mathjax_url response = urllib2.urlopen(mathjax_url) file = response.fp - print "Extracting to %s" % dest + print 'Extracting to "%s".' % dest extractor( file, dest ) ## def test_func( remove, dest) : - """See if mathjax appears to be installed correctly""" + """See if MathJax appears to be installed correctly.""" status = 0 if not os.path.isdir( dest ) : - print "%s directory not found" % dest + print 'Directory "%s" not found.' % dest status = 1 if not os.path.exists( dest + "/MathJax.js" ) : - print "MathJax.js not present in %s" % dest + print 'MathJax.js not present in "%s".' % dest status = 1 - print "ok" + print "OK." if remove and os.path.exists(dest): shutil.rmtree( dest ) return status @@ -195,31 +193,32 @@ def main() : # What directory is mathjax in? parser = argparse.ArgumentParser( - description="""Install mathjax from internet or local archive""", - ) + description=('Install MathJax from the Internet or a local ' + 'archive.')) parser.add_argument( '-i', '--install-dir', default=default_dest, - help='installation directory (by default : %s)' % (default_dest)) + help='installation directory (default: %s)' % (default_dest)) parser.add_argument( '-d', '--dest', action='store_true', - help='print where is current mathjax would be installed and exit') + help='print where MathJax would be installed and exit') parser.add_argument( '-r', '--replace', action='store_true', - help='Wether to replace current mathjax if already exist') + help='replace an existing MathJax in the installation directory') parser.add_argument( '-t', '--test', action='store_true', help=argparse.SUPPRESS) parser.add_argument('tarball', - help="the local tar/zip-ball containing mathjax", + help=('the local tar/zip-ball containing MathJax; omit to ' + 'download MathJax'), nargs='?', metavar='tarball') -- 1.8.3.msysgit.0 -------------- next part -------------- >From b760b78e2a13ded8318212d5563dd3775f5da225 Mon Sep 17 00:00:00 2001 From: Stan West Date: Mon, 19 Aug 2013 17:17:57 -0400 Subject: [PATCH] Edit installation instructions for MathJax. Correct the command for finding the desired installation directory. Provide information about system-wide installation, as that information is not in the module. --- docs/source/install/install.rst | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/docs/source/install/install.rst b/docs/source/install/install.rst index a25e305..66403b7 100644 --- a/docs/source/install/install.rst +++ b/docs/source/install/install.rst @@ -381,15 +381,17 @@ If you need tighter configuration control, you can download your own copy of MathJax from http://www.mathjax.org/download/ - use the MathJax-2.0 link. When you have the file stored locally, install it with:: - python -m IPython.external.mathjax /path/to/source/mathjax-MathJax-v2.0-20-g07669ac.zip + python -m IPython.external.mathjax /path/to/source/mathjax-MathJax-v2.0-20-g07669ac.zip For unusual needs, IPython can tell you what directory it wants to find MathJax in:: - python -m IPython.external.mathjax -d /some/other/mathjax + python -m IPython.external.mathjax -d + +By default, MathJax will be installed in your IPython default profile +directory, but you can make a system-wide installation by placing the MathJax +source into the IPython package in :file:`.../IPython/html/static/mathjax` +(so that, for example, :file:`MathJax.js` lies in that directory). 
-By default Mathjax will be installed in your ipython profile directory, but you -can make system wide install, please refer to the documentation and helper function -of :mod:`IPython.external.mathjax` Browser Compatibility --------------------- -- 1.8.3.msysgit.0 From pi at berkeley.edu Mon Aug 19 19:52:24 2013 From: pi at berkeley.edu (Paul Ivanov) Date: Mon, 19 Aug 2013 16:52:24 -0700 Subject: [IPython-dev] documenting what's new in the repo Message-ID: <20130819235224.GC908@HbI-OTOH.berkeley.edu> Hey gang, We've been getting better about documenting the What's New changeset as we go along lately (instead of doing it from memory around release time), but in doing so, have discovered that it's actually hard to do this without getting merge conflicts any time you have multiple PR which start off with the same ancestor, with both editing the what's new file in the same place. Thomas Kluyver and I tried a few things: adding identical marker lines around a commit, adding only to the top of a file with a bunch of blank line context, but none of those worked. We then asked Matthew Brett for ideas, and he suggested we just have a whatsnew.d/ directory to hold files with descriptions of the changesets. We just talked this over during lunch with Fernando, it seems fairly clean and will work, we'll just have to write a little script when we get a bunch of such files in the whatsnew.d directory, which will collapse them all together, add them to the what's new *file*, and delete the contents of whatsnew.d Given that we already have a whatsnew directory, which is where development.rst current lives, I decided to name the new directory pr, so docs/source/whatsnew/pr is the place where new PRs should add a file which documents new features, as well as backwards incompatible ones. For backwards incompatible changes, use the incompat- prefix for a filename. In a self-referential strange loop, I've opened a PR which documents this proposed scheme while using it. See here: https://github.com/ipython/ipython/pull/4070/ best, -- _ / \ A* \^ - ,./ _.`\\ / \ / ,--.S \/ \ / `"~,_ \ \ __o ? _ \<,_ /:\ --(_)/-(_)----.../ | \ --------------.......J Paul Ivanov http://pirsquared.org From wagnerfl at student.ethz.ch Tue Aug 20 09:20:55 2013 From: wagnerfl at student.ethz.ch (Florian M. Wagner) Date: Tue, 20 Aug 2013 15:20:55 +0200 Subject: [IPython-dev] ipcluster (LSF) timing (check if all engines are running) In-Reply-To: References: <52121EF8.2070305@student.ethz.ch> Message-ID: <52136D37.9050400@student.ethz.ch> Hey MIN, thanks for the example. The first while statement waits for the json file as expected, but when I start the cluster and it finds it, a zeromq error occurs: Too many open files (signaler.cpp:330) Do you have an idea? Am 19.08.2013 16:28, schrieb MinRK: > Something like this should work: > > from IPython import parallel > > def wait_for_cluster(engines=1, **kwargs): > """Wait for an IPython cluster to startup and register a minimum > number of engines""" > # wait for the controller to come up > while True: > try: > client = parallel.Client(**kwargs) > except IOError: > print "No ipcontroller-client.json, waiting..." > time.sleep(10) > except TimeoutError: > print "No controller, waiting..." > time.sleep(10) > if not engines: > return > # wait for engines to register > print "waiting for %i engines" % engines, > running = len(client) > sys.stdout.write('.' * running) > while running < engines: > time.sleep(1) > previous = running > running = len(client) > sys.stdout.write('.' 
* (running - previous)) > > > > On Mon, Aug 19, 2013 at 6:34 AM, Florian M. Wagner > > wrote: > > Hey all, > > I am using IPython.parallel on a large cluster, where controller > and engines are launched via LSF. My current workflow is as follows: > > #!/bin/bash > python pre_processing.py > ipcluster start --profile=cluster --n=128 > ipcluster.log 2>&1 > sleep 120 > python main_computation.py > python post_processing.py > > > I am not entirely happy with this, since the 2 minutes are not > always enough depending on the load of the cluster. I believe that > there is a much more elegant way to launch the cluster and check > if all the eninges are running, before proceeding with the main > computation. I would highly appreciate any help. > > Best regards > Florian > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jakevdp at cs.washington.edu Tue Aug 20 13:06:52 2013 From: jakevdp at cs.washington.edu (Jacob Vanderplas) Date: Tue, 20 Aug 2013 10:06:52 -0700 Subject: [IPython-dev] preprocessors and post_processors Message-ID: Hi, I was looking through the recent changes, and noticed that in the renaming of transformers, there are now submodules called IPython.nbconvert.preprocessors and IPython.nbconvert.post_processors It's just a silly aesthetic thing, but it strikes me that either both submodules should have an underscore, or neither. Thanks, Jake -------------- next part -------------- An HTML attachment was scrubbed... URL: From ribonucleico at gmail.com Tue Aug 20 13:48:06 2013 From: ribonucleico at gmail.com (Josh Wasserstein) Date: Tue, 20 Aug 2013 13:48:06 -0400 Subject: [IPython-dev] Guppy (Heapy) for variables in %who? Message-ID: As you may already know, Heapy provides nice memory statistics of the object heap. Here are a couple of links discussing it: * http://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended * http://guppy-pe.sourceforge.net/#Heapy IPython already has a magic for printing interactive variables: *who*, similar to MATLAB's *who*: http://www.mathworks.com/help/matlab/ref/who.html except that the IPython version does not show the size of objects in memory. This is where Guppy comes into play, since it can provide memory statistics of the object heap. I was wondering if there have been any efforts or discussions in integrating Heapy (or any other memory profiler) into IPython. Thanks! Josh -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Tue Aug 20 14:27:29 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 20 Aug 2013 11:27:29 -0700 Subject: [IPython-dev] Guppy (Heapy) for variables in %who? In-Reply-To: References: Message-ID: Hi Josh, On 20 August 2013 10:48, Josh Wasserstein wrote: > As you may already know, Heapy provides nice memory statistics of the > object heap. 
Here are a couple of links discussing it: > * > http://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended > * http://guppy-pe.sourceforge.net/#Heapy > > IPython already has a magic for printing interactive variables: *who*, > similar to MATLAB's *who*: > http://www.mathworks.com/help/matlab/ref/who.html except that the IPython > version does not show the size of objects in memory. This is where Guppy > comes into play, since it can provide memory statistics of the object heap. > > I was wondering if there have been any efforts or discussions in > integrating Heapy (or any other memory profiler) into IPython. > I don't recall any discussion about Heapy. The memory_profiler module has IPython integration: https://pypi.python.org/pypi/memory_profiler/0.27#ipython-integration If you're interested, the best thing is to write an IPython extension to do what you want: http://ipython.org/ipython-doc/stable/config/extensions/index.html#writing-extensions Best wishes, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From ribonucleico at gmail.com Tue Aug 20 15:11:08 2013 From: ribonucleico at gmail.com (Josh Wasserstein) Date: Tue, 20 Aug 2013 15:11:08 -0400 Subject: [IPython-dev] Guppy (Heapy) for variables in %who? In-Reply-To: References: Message-ID: Thanks Thomas. On this topic, how do I profile a script from IPython with %mprun? Do I necessarily need to import it as a module to profile it? Josh On Tue, Aug 20, 2013 at 2:27 PM, Thomas Kluyver wrote: > Hi Josh, > > On 20 August 2013 10:48, Josh Wasserstein wrote: > >> As you may already know, Heapy provides nice memory statistics of the >> object heap. Here are a couple of links discussing it: >> * >> http://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended >> * http://guppy-pe.sourceforge.net/#Heapy >> >> IPython already has a magic for printing interactive variables: *who*, >> similar to MATLAB's *who*: >> http://www.mathworks.com/help/matlab/ref/who.html except that the >> IPython version does not show the size of objects in memory. This is where >> Guppy comes into play, since it can provide memory statistics of the object >> heap. >> >> I was wondering if there have been any efforts or discussions in >> integrating Heapy (or any other memory profiler) into IPython. >> > > I don't recall any discussion about Heapy. The memory_profiler module has > IPython integration: > > https://pypi.python.org/pypi/memory_profiler/0.27#ipython-integration > > If you're interested, the best thing is to write an IPython extension to > do what you want: > > > http://ipython.org/ipython-doc/stable/config/extensions/index.html#writing-extensions > > Best wishes, > Thomas > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Tue Aug 20 15:32:03 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 20 Aug 2013 12:32:03 -0700 Subject: [IPython-dev] Guppy (Heapy) for variables in %who? In-Reply-To: References: Message-ID: On 20 August 2013 12:11, Josh Wasserstein wrote: > Thanks Thomas. On this topic, how do I profile a script from IPython with > %mprun? Do I necessarily need to import it as a module to profile it? It looks like it's designed for functions, not scripts, so you would need to import a module. I don't know that much about it, though. 
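For the record, the memory_profiler integration mentioned above is used roughly as below; mymodule and process are hypothetical stand-ins, since %mprun wants the profiled function to live in an importable file rather than in a cell:

    %load_ext memory_profiler

    from mymodule import process    # hypothetical module on the Python path
    data = range(10 ** 6)

    # line-by-line memory report for process() while running the statement
    %mprun -f process process(data)

    # peak memory used by a single statement (no import required)
    %memit [x ** 2 for x in data]
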
Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Tue Aug 20 17:51:34 2013 From: benjaminrk at gmail.com (MinRK) Date: Tue, 20 Aug 2013 23:51:34 +0200 Subject: [IPython-dev] preprocessors and post_processors In-Reply-To: References: Message-ID: yes, the post_processors should remove the underscore - underscores in package names is generally frowned upon. On Tue, Aug 20, 2013 at 7:06 PM, Jacob Vanderplas wrote: > Hi, > I was looking through the recent changes, and noticed that in the renaming > of transformers, there are now submodules called > > IPython.nbconvert.preprocessors > > and > > IPython.nbconvert.post_processors > > It's just a silly aesthetic thing, but it strikes me that either both > submodules should have an underscore, or neither. > Thanks, > Jake > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Tue Aug 20 19:24:25 2013 From: ellisonbg at gmail.com (Brian Granger) Date: Tue, 20 Aug 2013 16:24:25 -0700 Subject: [IPython-dev] preprocessors and post_processors In-Reply-To: References: Message-ID: We should definitely be consistent and I agree with removing the underscore. On Tue, Aug 20, 2013 at 2:51 PM, MinRK wrote: > yes, the post_processors should remove the underscore - underscores in > package names is generally frowned upon. > > > On Tue, Aug 20, 2013 at 7:06 PM, Jacob Vanderplas > wrote: >> >> Hi, >> I was looking through the recent changes, and noticed that in the renaming >> of transformers, there are now submodules called >> >> IPython.nbconvert.preprocessors >> >> and >> >> IPython.nbconvert.post_processors >> >> It's just a silly aesthetic thing, but it strikes me that either both >> submodules should have an underscore, or neither. >> Thanks, >> Jake >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From benjaminrk at gmail.com Tue Aug 20 19:44:29 2013 From: benjaminrk at gmail.com (MinRK) Date: Wed, 21 Aug 2013 01:44:29 +0200 Subject: [IPython-dev] ipcluster (LSF) timing (check if all engines are running) In-Reply-To: <52136D37.9050400@student.ethz.ch> References: <52121EF8.2070305@student.ethz.ch> <52136D37.9050400@student.ethz.ch> Message-ID: That's a bug that sockets aren't properly cleaned up if the Client never finishes getting connected. Should be fixed by [PR #4074]( https://github.com/ipython/ipython/pull/4074). On Tue, Aug 20, 2013 at 3:20 PM, Florian M. Wagner wrote: > Hey MIN, > > thanks for the example. The first while statement waits for the json file > as expected, but when I start the cluster and it finds it, a zeromq error > occurs: Too many open files (signaler.cpp:330) > Do you have an idea? 
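Until that cleanup fix is in a release, one stopgap is to raise the process's file-descriptor limit (up to the hard limit) and cap the number of connection attempts, so half-connected clients can't exhaust the limit. A rough sketch, reusing the profile name from the scripts above:

import time
import resource
from IPython import parallel

# allow more open sockets for this process, up to the hard limit
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

client = None
for attempt in range(60):                    # give up after ~10 minutes
    try:
        client = parallel.Client(profile='cluster')
        break
    except Exception:                        # connection file or controller not ready yet
        time.sleep(10)
if client is None:
    raise RuntimeError("controller never became reachable")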
> > Am 19.08.2013 16:28, schrieb MinRK: > > Something like this should work: > > from IPython import parallel > > def wait_for_cluster(engines=1, **kwargs): > """Wait for an IPython cluster to startup and register a minimum > number of engines""" > # wait for the controller to come up > while True: > try: > client = parallel.Client(**kwargs) > except IOError: > print "No ipcontroller-client.json, waiting..." > time.sleep(10) > except TimeoutError: > print "No controller, waiting..." > time.sleep(10) > if not engines: > return > # wait for engines to register > print "waiting for %i engines" % engines, > running = len(client) > sys.stdout.write('.' * running) > while running < engines: > time.sleep(1) > previous = running > running = len(client) > sys.stdout.write('.' * (running - previous)) > > > > On Mon, Aug 19, 2013 at 6:34 AM, Florian M. Wagner < > wagnerfl at student.ethz.ch> wrote: > >> Hey all, >> >> I am using IPython.parallel on a large cluster, where controller and >> engines are launched via LSF. My current workflow is as follows: >> >> #!/bin/bash >> python pre_processing.py >> ipcluster start --profile=cluster --n=128 > ipcluster.log 2>&1 >> sleep 120 >> python main_computation.py >> python post_processing.py >> >> >> I am not entirely happy with this, since the 2 minutes are not always >> enough depending on the load of the cluster. I believe that there is a much >> more elegant way to launch the cluster and check if all the eninges are >> running, before proceeding with the main computation. I would highly >> appreciate any help. >> >> Best regards >> Florian >> >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> > > > _______________________________________________ > IPython-dev mailing listIPython-dev at scipy.orghttp://mail.scipy.org/mailman/listinfo/ipython-dev > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sychan at lbl.gov Tue Aug 20 20:29:00 2013 From: sychan at lbl.gov (Stephen Chan) Date: Tue, 20 Aug 2013 17:29:00 -0700 Subject: [IPython-dev] Front end ''status_started.Kernel" event and websocket status Message-ID: I've been doing some front end work on the notebook and started using the status_started.Kernel event to push authentication tokens from the browser to the ipython kernel for use in 'backend' code. The browser code initializes before IPython.notebook.kernel is initialized, so I bound handler to status_started.Kernel to wait for the kernel to come up before calling IPython.notebook.kernel.execute() The problem is that when I call IPython.note.kernel.execute() I am getting a DOM error that the web socket is not in readyState 1. It looks like the websocket constuctors in Kernel.prototype.start_channels() are non-blocking and return before the shell_channel is ready for a send(). This is happening on the latest Chrome (haven't tried it on FF or Safari yet). I'm kludging around this currently by checking the readyState on the shell_channel, and if it isn't ready, using setTimeout for 500ms before calling IPython.notebook.kernel.execute() There's an event for websocket closed, but not one for websocket opening. 
Would it make sense to either add another event for the websocket, or else only trigger the status_started.Kernel event once all the websockets are up? Is there another way to deal with this? Thanks, Steve From benjaminrk at gmail.com Wed Aug 21 03:47:14 2013 From: benjaminrk at gmail.com (MinRK) Date: Wed, 21 Aug 2013 09:47:14 +0200 Subject: [IPython-dev] Front end ''status_started.Kernel" event and websocket status In-Reply-To: References: Message-ID: See Pull Request #4079 . On Wed, Aug 21, 2013 at 2:29 AM, Stephen Chan wrote: I've been doing some front end work on the notebook and started > using the status_started.Kernel event to push authentication tokens > from the browser to the ipython kernel for use in 'backend' code. > > The browser code initializes before IPython.notebook.kernel is > initialized, so I bound handler to status_started.Kernel to wait for > the kernel to come up before calling IPython.notebook.kernel.execute() > The problem is that when I call IPython.note.kernel.execute() I am > getting a DOM error that the web socket is not in readyState 1. It > looks like the websocket constuctors in > Kernel.prototype.start_channels() are non-blocking and return before > the shell_channel is ready for a send(). This is happening on the > latest Chrome (haven't tried it on FF or Safari yet). > > I'm kludging around this currently by checking the readyState on > the shell_channel, and if it isn't ready, using setTimeout for 500ms > before calling IPython.notebook.kernel.execute() > > There's an event for websocket closed, but not one for websocket > opening. Would it make sense to either add another event for the > websocket, or else only trigger the status_started.Kernel event once > all the websockets are up? Is there another way to deal with this? > > Thanks, > Steve > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wagnerfl at student.ethz.ch Fri Aug 23 11:24:39 2013 From: wagnerfl at student.ethz.ch (Florian M. Wagner) Date: Fri, 23 Aug 2013 17:24:39 +0200 Subject: [IPython-dev] ipcluster(LSF): some engines do not start Message-ID: <52177EB7.9090100@student.ethz.ch> Dear all, I am starting an IPython Cluster using LSF. Sometimes all engines startup as expected, other times a few engines are missing. So it might depend on the clusters load? The ipengine error logs yield: 2013-08-23 16:58:27.855 [IPEngineApp] Using existing profile dir: u'/cluster/home02/erdw/wagnerfl/.ipython/profile_cluster' 2013-08-23 16:58:27.881 [IPEngineApp] ERROR | Couldn't start the Engine Traceback (most recent call last): File "/cluster/home02/erdw/wagnerfl/.local/lib64/python2.7/site-packages/IPython/parallel/apps/ipengineapp.py", line 342, in init_engine connection_info=self.connection_info, AttributeError: 'IPEngineApp' object has no attribute 'connection_info' Other engines are interrupted after missing four heartbeats (3010 ms). My changes from the default settings are: c.IPClusterStart.delay = 5.0 c.IPClusterStart.early_shutdown = 90 c.IPEngineApp.wait_for_url_file = 60 c.EngineFactory.timeout = 20 Anything else you would recommend to tweak? Thank you! Florian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mmckerns at caltech.edu Fri Aug 23 15:19:27 2013 From: mmckerns at caltech.edu (Michael McKerns) Date: Fri, 23 Aug 2013 15:19:27 -0400 (EDT) Subject: [IPython-dev] saving state of ipython session to a pickled file Message-ID: <60042.62.50.247.60.1377285567.squirrel@webmail.caltech.edu> I just pushed an update that *should* fix this bug: https://bugs.launchpad.net/ipython/+bug/488953 detailed by this question: http://stackoverflow.com/questions/18381348/dill-dump-session-with-ipython The updates to dill have been posted to github at https://github.com/uqfoundation, and the on the issue tracker http://trac.mystic.cacr.caltech.edu/project/pathos/ticket/131. Until I post a new tarball, dill+ipython users should use the version off of the github link. There might be other "magic" things that ipython does that I'm not catching... but this change should make sure the default ones that ipython uses are fine. --- Mike McKerns California Institute of Technology http://www.its.caltech.edu/~mmckerns From takowl at gmail.com Fri Aug 23 15:31:58 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 23 Aug 2013 12:31:58 -0700 Subject: [IPython-dev] saving state of ipython session to a pickled file In-Reply-To: <60042.62.50.247.60.1377285567.squirrel@webmail.caltech.edu> References: <60042.62.50.247.60.1377285567.squirrel@webmail.caltech.edu> Message-ID: Thanks Mike. I've been keeping one eye on dill for precisely this sort of thing, so it's good to know that it's basically working. On 23 August 2013 12:19, Michael McKerns wrote: > I just pushed an update that *should* fix this bug: > https://bugs.launchpad.net/ipython/+bug/488953 > Since the move to Github, that's now this bug: https://github.com/ipython/ipython/issues/112 Best wishes, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Fri Aug 23 15:39:21 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 23 Aug 2013 12:39:21 -0700 Subject: [IPython-dev] saving state of ipython session to a pickled file In-Reply-To: <60042.62.50.247.60.1377285567.squirrel@webmail.caltech.edu> References: <60042.62.50.247.60.1377285567.squirrel@webmail.caltech.edu> Message-ID: The technical details: 'exit' and 'quit' are singletons in that we only create the one instance of ExitAutocall (the two names refer to the same object). The class itself doesn't do anything to prevent there being further instances of it, but there's no obvious reason why you would make any more. It's also possible to reassign 'exit' or 'quit', but it's non-trivial, because of the way we rewrite the syntax. Thomas On 23 August 2013 12:19, Michael McKerns wrote: > I just pushed an update that *should* fix this bug: > https://bugs.launchpad.net/ipython/+bug/488953 > > detailed by this question: > http://stackoverflow.com/questions/18381348/dill-dump-session-with-ipython > > The updates to dill have been posted to github at > https://github.com/uqfoundation, > and the on the issue tracker > http://trac.mystic.cacr.caltech.edu/project/pathos/ticket/131. > Until I post a new tarball, dill+ipython users should use the version off > of the github link. > > There might be other "magic" things that ipython does that I'm not > catching... > but this change should make sure the default ones that ipython uses are > fine. 
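For anyone finding this thread later: the session-saving calls in question are dill.dump_session and dill.load_session. A minimal sketch of the round trip from inside IPython (the filename is arbitrary):

import dill

a = 42
double = lambda x: 2 * x           # interactively defined objects live in __main__
dill.dump_session('session.pkl')   # pickles __main__.__dict__ (IPython's exit/quit are popped first, per the fix above)

# ... later, in a fresh IPython session ...
import dill
dill.load_session('session.pkl')
double(a)                          # -> 84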
> > --- > > Mike McKerns > California Institute of Technology > http://www.its.caltech.edu/~mmckerns > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmckerns at caltech.edu Fri Aug 23 15:47:41 2013 From: mmckerns at caltech.edu (Michael McKerns) Date: Fri, 23 Aug 2013 15:47:41 -0400 (EDT) Subject: [IPython-dev] saving state of ipython session to a pickled file In-Reply-To: References: <60042.62.50.247.60.1377285567.squirrel@webmail.caltech.edu> Message-ID: <60101.62.50.247.60.1377287261.squirrel@webmail.caltech.edu> Thomas, Yep. My fix was to treat them like singletons, and then pop them from the copy of __main__.__dict__ that I serialize. So as long as that assumption is not broken, that should take care of it. I didn't see anything else in some limited testing... and could run a few commands, and save the session... and start up again and have full history and any interactively built functions and stuff within the limits of what dill does in normal python. Guys, let me know if you see anything else... > The technical details: 'exit' and 'quit' are singletons in that we only > create the one instance of ExitAutocall (the two names refer to the same > object). The class itself doesn't do anything to prevent there being > further instances of it, but there's no obvious reason why you would make > any more. It's also possible to reassign 'exit' or 'quit', but it's > non-trivial, because of the way we rewrite the syntax. > > Thomas > > > On 23 August 2013 12:19, Michael McKerns wrote: > >> I just pushed an update that *should* fix this bug: >> https://bugs.launchpad.net/ipython/+bug/488953 >> >> detailed by this question: >> http://stackoverflow.com/questions/18381348/dill-dump-session-with-ipython >> >> The updates to dill have been posted to github at >> https://github.com/uqfoundation, >> and the on the issue tracker >> http://trac.mystic.cacr.caltech.edu/project/pathos/ticket/131. >> Until I post a new tarball, dill+ipython users should use the version >> off >> of the github link. >> >> There might be other "magic" things that ipython does that I'm not >> catching... >> but this change should make sure the default ones that ipython uses are >> fine. >> >> --- >> >> Mike McKerns >> California Institute of Technology >> http://www.its.caltech.edu/~mmckerns >> >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > --- Mike McKerns California Institute of Technology TEL: (626)395-5773 or (626)590-8470 http://www.its.caltech.edu/~mmckerns mmckerns at caltech.edu From rgbkrk at gmail.com Fri Aug 23 18:45:47 2013 From: rgbkrk at gmail.com (Kyle Kelley) Date: Fri, 23 Aug 2013 17:45:47 -0500 Subject: [IPython-dev] IPython notebook+nginx proxy+ssl+websockets Message-ID: Hey all, Has anyone gotten nginx proxying to work when using ssl? I started off with MinRK's nginx config (https://twitter.com/minrk/status/329376092420993024), adding on to it like so: server { listen 80; rewrite ^ https://$host$request_uri? 
permanent; } server { listen 443; ssl on; ssl_certificate /etc/nginx/ssl/cert.pem; ssl_certificate_key /etc/nginx/ssl/cert.key%>; error_log /var/log/nginx/error.log; location ^~ /static/ { alias /home/ipynb/ipyvirt/lib/python2.7/site-packages/IPython/html/static/; } location / { proxy_pass http://localhost:9999; #proxy_set_header X-Real-IP $remote_addr; #proxy_set_header Host $http_host; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-NginX-Proxy true; # WebSocket support proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_read_timeout 86400; } } Most of the proxying works just fine, but websockets fail. I could host this using the certfile and keyfile setting in IPython (and not use nginx), but was hoping to proxy from some high port by a non-privileged user to port 443 (with a redirect from 80). -- Kyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Fri Aug 23 20:50:38 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 23 Aug 2013 17:50:38 -0700 Subject: [IPython-dev] IPython user survey 2 - picking questions Message-ID: It's nearly two years since we started the first IPython user survey ( http://ipython.org/usersurvey2011.html ), and now that 1.0 is out, I'd like to repeat it, and also ask a few questions that we didn't ask the first time round. So far the questions that I've got are: - What country do you live in? - Which version of IPython do you use? (multiple choice) - Which IPython components do you use? (multiple choice) - On what platforms do you use IPython? (multiple choice) - On which versions of Python do you use IPython? (multiple choice) - Do you use any projects that integrate or extend IPython? - How did you install IPython? (multiple choice) - In what role do you use IPython? (academia/industry/education/hobby) - Briefly describe the main projects for which you use IPython: - How would you like to see IPython improved? I'm keen to keep it short so that many people are willing to complete it, but we can add a few more questions if there's something we'd particularly like to know about our users. Any suggestions? I plan to launch the survey some time next week. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Aug 23 21:11:59 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 23 Aug 2013 18:11:59 -0700 Subject: [IPython-dev] IPython user survey 2 - picking questions In-Reply-To: References: Message-ID: These look good, and I'd favor making only small changes (if any) so it's easier to make comparisons across surveys to see the evolution of things. Having one of these every couple of years is great, thanks for staying on top of it! Cheers, f On Fri, Aug 23, 2013 at 5:50 PM, Thomas Kluyver wrote: > It's nearly two years since we started the first IPython user survey > (http://ipython.org/usersurvey2011.html ), and now that 1.0 is out, I'd like > to repeat it, and also ask a few questions that we didn't ask the first time > round. > > So far the questions that I've got are: > - What country do you live in? > - Which version of IPython do you use? (multiple choice) > - Which IPython components do you use? (multiple choice) > - On what platforms do you use IPython? (multiple choice) > - On which versions of Python do you use IPython? (multiple choice) > - Do you use any projects that integrate or extend IPython? 
> - How did you install IPython? (multiple choice) > - In what role do you use IPython? (academia/industry/education/hobby) > - Briefly describe the main projects for which you use IPython: > - How would you like to see IPython improved? > > I'm keen to keep it short so that many people are willing to complete it, > but we can add a few more questions if there's something we'd particularly > like to know about our users. Any suggestions? > > I plan to launch the survey some time next week. > > Thanks, > Thomas > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From takowl at gmail.com Fri Aug 23 21:51:39 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 23 Aug 2013 18:51:39 -0700 Subject: [IPython-dev] IPython user survey 2 - picking questions In-Reply-To: References: Message-ID: On 23 August 2013 18:11, Fernando Perez wrote: > These look good, and I'd favor making only small changes (if any) so > it's easier to make comparisons across surveys to see the evolution of > things. Having one of these every couple of years is great, thanks > for staying on top of it! > For reference, the questions last time were: - What country do you live in? - On what platforms do you use IPython? (multiple choice) - What parts of IPython do you use? - How do you use IPython? - How would you like IPython to improve in the future? So we've already added a few. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Aug 23 22:04:46 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 23 Aug 2013 19:04:46 -0700 Subject: [IPython-dev] IPython user survey 2 - picking questions In-Reply-To: References: Message-ID: Ah, got it. I do like the new list, don't see anything to add to it. On Fri, Aug 23, 2013 at 6:51 PM, Thomas Kluyver wrote: > On 23 August 2013 18:11, Fernando Perez wrote: >> >> These look good, and I'd favor making only small changes (if any) so >> it's easier to make comparisons across surveys to see the evolution of >> things. Having one of these every couple of years is great, thanks >> for staying on top of it! > > > For reference, the questions last time were: > - What country do you live in? > - On what platforms do you use IPython? (multiple choice) > - What parts of IPython do you use? > - How do you use IPython? > - How would you like IPython to improve in the future? > > So we've already added a few. > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From benjaminrk at gmail.com Sat Aug 24 03:03:44 2013 From: benjaminrk at gmail.com (MinRK) Date: Sat, 24 Aug 2013 09:03:44 +0200 Subject: [IPython-dev] ipcluster(LSF): some engines do not start In-Reply-To: <52177EB7.9090100@student.ethz.ch> References: <52177EB7.9090100@student.ethz.ch> Message-ID: This is a bug, but it is a consequence of the connection file not existing when the engine starts. It suggests your engines are starting too early. On Fri, Aug 23, 2013 at 5:24 PM, Florian M. 
Wagner wrote: > Dear all, > > I am starting an IPython Cluster using LSF. Sometimes all engines startup > as expected, other times a few engines are missing. So it might depend on > the clusters load? The ipengine error logs yield: > > 2013-08-23 16:58:27.855 [IPEngineApp] Using existing profile dir: > u'/cluster/home02/erdw/wagnerfl/.ipython/profile_cluster' > 2013-08-23 16:58:27.881 [IPEngineApp] ERROR | Couldn't start the Engine > Traceback (most recent call last): > File > "/cluster/home02/erdw/wagnerfl/.local/lib64/python2.7/site-packages/IPython/parallel/apps/ipengineapp.py", > line 342, in init_engine > connection_info=self.connection_info, > AttributeError: 'IPEngineApp' object has no attribute 'connection_info' > > Other engines are interrupted after missing four heartbeats (3010 ms). > > My changes from the default settings are: > > c.IPClusterStart.delay = 5.0 > c.IPClusterStart.early_shutdown = 90 > c.IPEngineApp.wait_for_url_file = 60 > c.EngineFactory.timeout = 20 > > Anything else you would recommend to tweak? > > Thank you! > Florian > > > > > > > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From asmeurer at gmail.com Sat Aug 24 03:14:36 2013 From: asmeurer at gmail.com (Aaron Meurer) Date: Sat, 24 Aug 2013 01:14:36 -0600 Subject: [IPython-dev] IPython user survey 2 - picking questions In-Reply-To: References: Message-ID: I would add a "additional comments" freeform question at the end incase people want to add something that doesn't exactly fit one of the given questions. Aaron Meurer On Fri, Aug 23, 2013 at 8:04 PM, Fernando Perez wrote: > Ah, got it. I do like the new list, don't see anything to add to it. > > On Fri, Aug 23, 2013 at 6:51 PM, Thomas Kluyver wrote: >> On 23 August 2013 18:11, Fernando Perez wrote: >>> >>> These look good, and I'd favor making only small changes (if any) so >>> it's easier to make comparisons across surveys to see the evolution of >>> things. Having one of these every couple of years is great, thanks >>> for staying on top of it! >> >> >> For reference, the questions last time were: >> - What country do you live in? >> - On what platforms do you use IPython? (multiple choice) >> - What parts of IPython do you use? >> - How do you use IPython? >> - How would you like IPython to improve in the future? >> >> So we've already added a few. >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > > > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) 
> fernando.perez-at-berkeley: contact me here for any direct mail > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From gvwilson at third-bit.com Sat Aug 24 06:44:10 2013 From: gvwilson at third-bit.com (Greg Wilson) Date: Sat, 24 Aug 2013 06:44:10 -0400 Subject: [IPython-dev] IPython user survey 2 - picking questions In-Reply-To: References: Message-ID: <52188E7A.7050308@third-bit.com> On 2013-08-23 8:50 PM, Thomas Kluyver wrote: > So far the questions that I've got are: > [snip] > I'm keen to keep it short so that many people are willing to complete > it, but we can add a few more questions if there's something we'd > particularly like to know about our users. Any suggestions? - How did you find out about IPython? - Where do you go for help with IPython (StackOverflow, this list, colleagues, the docs, ...) Thanks Greg From benjaminrk at gmail.com Sun Aug 25 13:00:02 2013 From: benjaminrk at gmail.com (MinRK) Date: Sun, 25 Aug 2013 19:00:02 +0200 Subject: [IPython-dev] IPython notebook+nginx proxy+ssl+websockets In-Reply-To: References: Message-ID: nginx support for websocket proxying is a relatively recent addition ([docs](http://nginx.org/en/docs/http/websocket.html) suggests 1.3.13. Is it possible your version doesn't have this support? -MinRK On Sat, Aug 24, 2013 at 12:45 AM, Kyle Kelley wrote: > Hey all, > > Has anyone gotten nginx proxying to work when using ssl? I started off > with MinRK's nginx config ( > https://twitter.com/minrk/status/329376092420993024), adding on to it > like so: > > server { > listen 80; > rewrite ^ https://$host$request_uri? permanent; > } > > server { > listen 443; > ssl on; > ssl_certificate /etc/nginx/ssl/cert.pem; > ssl_certificate_key /etc/nginx/ssl/cert.key%>; > > error_log /var/log/nginx/error.log; > > location ^~ /static/ { > alias > /home/ipynb/ipyvirt/lib/python2.7/site-packages/IPython/html/static/; > } > > location / { > proxy_pass http://localhost:9999; > > #proxy_set_header X-Real-IP $remote_addr; > #proxy_set_header Host $http_host; > #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > #proxy_set_header X-NginX-Proxy true; > > # WebSocket support > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection "upgrade"; > proxy_read_timeout 86400; > > } > } > > Most of the proxying works just fine, but websockets fail. I could host > this using the certfile and keyfile setting in IPython (and not use nginx), > but was hoping to proxy from some high port by a non-privileged user to > port 443 (with a redirect from 80). > > -- Kyle > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rgbkrk at gmail.com Sun Aug 25 13:17:25 2013 From: rgbkrk at gmail.com (Kyle Kelley) Date: Sun, 25 Aug 2013 12:17:25 -0500 Subject: [IPython-dev] IPython notebook+nginx proxy+ssl+websockets In-Reply-To: References: Message-ID: Interesting. I'm actually installing nginx using the nginx cookbook ( https://github.com/opscode-cookbooks/nginx). The version on the box is rather old, and apparently ignoring my config. vagrant at ipynb-cookbook-berkshelf:~$ nginx -v nginx version: nginx/1.1.19 I'll definitely get a newer version installed pronto! Thanks Min! 
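For reference, the "certfile and keyfile setting in IPython" mentioned earlier in the thread lives on NotebookApp. A rough sketch of the relevant lines in ipython_notebook_config.py, if you let the notebook server terminate SSL itself instead of nginx (paths reused from the config above):

c = get_config()

c.NotebookApp.certfile = u'/etc/nginx/ssl/cert.pem'
c.NotebookApp.keyfile  = u'/etc/nginx/ssl/cert.key'
c.NotebookApp.ip = '*'              # listen on all interfaces
c.NotebookApp.port = 443            # binding 443 needs privileges; a high port plus a redirect also works
c.NotebookApp.open_browser = False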
-- Kyle On Sun, Aug 25, 2013 at 12:00 PM, MinRK wrote: > nginx support for websocket proxying is a relatively recent addition > ([docs](http://nginx.org/en/docs/http/websocket.html) suggests 1.3.13. > Is it possible your version doesn't have this support? > > -MinRK > > > On Sat, Aug 24, 2013 at 12:45 AM, Kyle Kelley wrote: > >> Hey all, >> >> Has anyone gotten nginx proxying to work when using ssl? I started off >> with MinRK's nginx config ( >> https://twitter.com/minrk/status/329376092420993024), adding on to it >> like so: >> >> server { >> listen 80; >> rewrite ^ https://$host$request_uri? permanent; >> } >> >> server { >> listen 443; >> ssl on; >> ssl_certificate /etc/nginx/ssl/cert.pem; >> ssl_certificate_key /etc/nginx/ssl/cert.key%>; >> >> error_log /var/log/nginx/error.log; >> >> location ^~ /static/ { >> alias >> /home/ipynb/ipyvirt/lib/python2.7/site-packages/IPython/html/static/; >> } >> >> location / { >> proxy_pass http://localhost:9999; >> >> #proxy_set_header X-Real-IP $remote_addr; >> #proxy_set_header Host $http_host; >> #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> >> #proxy_set_header X-NginX-Proxy true; >> >> # WebSocket support >> proxy_http_version 1.1; >> proxy_set_header Upgrade $http_upgrade; >> proxy_set_header Connection "upgrade"; >> proxy_read_timeout 86400; >> >> } >> } >> >> Most of the proxying works just fine, but websockets fail. I could host >> this using the certfile and keyfile setting in IPython (and not use nginx), >> but was hoping to proxy from some high port by a non-privileged user to >> port 443 (with a redirect from 80). >> >> -- Kyle >> >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronena at gmail.com Sun Aug 25 15:54:04 2013 From: ronena at gmail.com (Ronen Abravanel) Date: Sun, 25 Aug 2013 22:54:04 +0300 Subject: [IPython-dev] Right to left markdown in IPython (1.0) Message-ID: Hello all, I'm planning a python class, and my hope is to present it as "live reavel"[1] interactive presentation On of my problems is that I want some of the text and explanations to be in Hebrew, e.g., be written from right to left, which markdown dose not support. One option is to write html with dir=rtl inside the markdown block, but then I have to write all-html, which is incontinent. another option is to use this cool bidiweb[2] script. bidiweb can operate in two modes: 1. Process html (as returned from the markdown processor) and add dir=rtl when needed. 2. Post process the page (searching for specific class and add right-to-left tags when needed) In IPython 0.13, option seems cool, as pagedown, the markdown implementation used, supported post processing hook. In IPython 1.0, I did the following: custom.js: "using strict"; requirejs.config({ shim: { 'bidiweb.style': ['bidiweb'] } }); $([IPython.events]).on('app_initialized.NotebookApp', function(){ require(['custom/bidiweb'],function(style){ bidiweb.style('.rendered_html *'); }) }); +custom.css with .rtl and .ltr definitions. This work only partially: It's add RTL support to the markdown blocked that exists when the page is loaded, but when I add new markdown or edit existing one, the RTL is killed. 
Is there any way to call bidiweb.style whenever a markdown blocked is update? or any good way to use mode 1 without changing IPython's code? Thanks, Ronen Abravanel [1] http://www.youtube.com/watch?v=bCb2HJy-yc0 I hoped it will be usable and published by the end of October. otherwise, I'll just present a notebook. [2] https://github.com/hasenj/bidiweb -------------- next part -------------- An HTML attachment was scrubbed... URL: From rgbkrk at gmail.com Sun Aug 25 16:00:26 2013 From: rgbkrk at gmail.com (Kyle Kelley) Date: Sun, 25 Aug 2013 15:00:26 -0500 Subject: [IPython-dev] IPython notebook+nginx proxy+ssl+websockets In-Reply-To: References: Message-ID: Yay! Got it working (amidst toddler distractions). Had to install from source and didn't realize the primary cookbook sets it up as a service but doesn't put it on the path. Totally working now, but having to muck with MathJax now (can't load from CDN, would like to install locally). :-/ On Sun, Aug 25, 2013 at 12:17 PM, Kyle Kelley wrote: > Interesting. I'm actually installing nginx using the nginx cookbook ( > https://github.com/opscode-cookbooks/nginx). The version on the box is > rather old, and apparently ignoring my config. > > vagrant at ipynb-cookbook-berkshelf:~$ nginx -v > nginx version: nginx/1.1.19 > > I'll definitely get a newer version installed pronto! Thanks Min! > > -- Kyle > > > > On Sun, Aug 25, 2013 at 12:00 PM, MinRK wrote: > >> nginx support for websocket proxying is a relatively recent addition >> ([docs](http://nginx.org/en/docs/http/websocket.html) suggests 1.3.13. >> Is it possible your version doesn't have this support? >> >> -MinRK >> >> >> On Sat, Aug 24, 2013 at 12:45 AM, Kyle Kelley wrote: >> >>> Hey all, >>> >>> Has anyone gotten nginx proxying to work when using ssl? I started off >>> with MinRK's nginx config ( >>> https://twitter.com/minrk/status/329376092420993024), adding on to it >>> like so: >>> >>> server { >>> listen 80; >>> rewrite ^ https://$host$request_uri? permanent; >>> } >>> >>> server { >>> listen 443; >>> ssl on; >>> ssl_certificate /etc/nginx/ssl/cert.pem; >>> ssl_certificate_key /etc/nginx/ssl/cert.key%>; >>> >>> error_log /var/log/nginx/error.log; >>> >>> location ^~ /static/ { >>> alias >>> /home/ipynb/ipyvirt/lib/python2.7/site-packages/IPython/html/static/; >>> } >>> >>> location / { >>> proxy_pass http://localhost:9999; >>> >>> #proxy_set_header X-Real-IP $remote_addr; >>> #proxy_set_header Host $http_host; >>> #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >>> >>> #proxy_set_header X-NginX-Proxy true; >>> >>> # WebSocket support >>> proxy_http_version 1.1; >>> proxy_set_header Upgrade $http_upgrade; >>> proxy_set_header Connection "upgrade"; >>> proxy_read_timeout 86400; >>> >>> } >>> } >>> >>> Most of the proxying works just fine, but websockets fail. I could host >>> this using the certfile and keyfile setting in IPython (and not use nginx), >>> but was hoping to proxy from some high port by a non-privileged user to >>> port 443 (with a redirect from 80). >>> >>> -- Kyle >>> >>> >>> _______________________________________________ >>> IPython-dev mailing list >>> IPython-dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/ipython-dev >>> >>> >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rgbkrk at gmail.com Sun Aug 25 16:05:12 2013 From: rgbkrk at gmail.com (Kyle Kelley) Date: Sun, 25 Aug 2013 15:05:12 -0500 Subject: [IPython-dev] Right to left markdown in IPython (1.0) In-Reply-To: References: Message-ID: Oh. This is important for all the RTL languages. Great ideas. It looks like @amitkot added this as an issue on GitHub: https://github.com/ipython/ipython/issues/3278. Wonder if there's a way to detect it when rendering it. On Sun, Aug 25, 2013 at 2:54 PM, Ronen Abravanel wrote: > Hello all, > > I'm planning a python class, and my hope is to present it as "live > reavel"[1] interactive presentation > > On of my problems is that I want some of the text and explanations to be > in Hebrew, e.g., be written from right to left, which markdown dose not > support. > > One option is to write html with dir=rtl inside the markdown block, but > then I have to write all-html, which is incontinent. > > another option is to use this cool bidiweb[2] script. > bidiweb can operate in two modes: > 1. Process html (as returned from the markdown processor) and add dir=rtl > when needed. > 2. Post process the page (searching for specific class and add > right-to-left tags when needed) > > In IPython 0.13, option seems cool, as pagedown, the markdown > implementation used, supported post processing hook. > > In IPython 1.0, I did the following: > custom.js: > > "using strict"; > > requirejs.config({ > shim: { > 'bidiweb.style': ['bidiweb'] > } > }); > > $([IPython.events]).on('app_initialized.NotebookApp', function(){ > require(['custom/bidiweb'],function(style){ > bidiweb.style('.rendered_html *'); > }) > > }); > > > +custom.css with .rtl and .ltr definitions. > > This work only partially: It's add RTL support to the markdown blocked > that exists when the page is loaded, but when I add new markdown or edit > existing one, the RTL is killed. > Is there any way to call bidiweb.style whenever a markdown blocked is > update? or any good way to use mode 1 without changing IPython's code? > > > Thanks, > Ronen Abravanel > > > > [1] http://www.youtube.com/watch?v=bCb2HJy-yc0 I hoped it will be usable > and published by the end of October. otherwise, I'll just present a > notebook. > [2] https://github.com/hasenj/bidiweb > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronena at gmail.com Sun Aug 25 16:11:44 2013 From: ronena at gmail.com (Ronen Abravanel) Date: Sun, 25 Aug 2013 23:11:44 +0300 Subject: [IPython-dev] Right to left markdown in IPython (1.0) In-Reply-To: References: Message-ID: The issue itself is general, but all the response are about editing, which seems for me less important then display. My 'extension' tries to solve the display part. Markdown dose not define any behavior for RTL text, and all I found online is some hacks (like the one I used). Real solution will be to extend markdown (marked?) both for implicit Right-to-left (like bidiweb is doing) and explicit (invent some 'set RTL' mark?). But that's seems for me event outside the scope of IPython. On Sun, Aug 25, 2013 at 11:05 PM, Kyle Kelley wrote: > Oh. This is important for all the RTL languages. Great ideas. > > It looks like @amitkot added this as an issue on GitHub: > https://github.com/ipython/ipython/issues/3278. > > Wonder if there's a way to detect it when rendering it. 
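On detecting direction at render time: a common heuristic (and roughly what bidiweb-style tools rely on) is to look at the first character with a strong bidi class. A sketch of that check in Python; the client-side version would be a few lines of JavaScript doing the same thing:

import unicodedata

def is_rtl(text):
    """True if the first strongly-directional character reads right-to-left."""
    for ch in text:
        direction = unicodedata.bidirectional(ch)
        if direction in ('R', 'AL'):    # Hebrew, Arabic, ...
            return True
        if direction == 'L':
            return False
    return False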
> > > On Sun, Aug 25, 2013 at 2:54 PM, Ronen Abravanel wrote: > >> Hello all, >> >> I'm planning a python class, and my hope is to present it as "live >> reavel"[1] interactive presentation >> >> On of my problems is that I want some of the text and explanations to be >> in Hebrew, e.g., be written from right to left, which markdown dose not >> support. >> >> One option is to write html with dir=rtl inside the markdown block, but >> then I have to write all-html, which is incontinent. >> >> another option is to use this cool bidiweb[2] script. >> bidiweb can operate in two modes: >> 1. Process html (as returned from the markdown processor) and add dir=rtl >> when needed. >> 2. Post process the page (searching for specific class and add >> right-to-left tags when needed) >> >> In IPython 0.13, option seems cool, as pagedown, the markdown >> implementation used, supported post processing hook. >> >> In IPython 1.0, I did the following: >> custom.js: >> >> "using strict"; >> >> requirejs.config({ >> shim: { >> 'bidiweb.style': ['bidiweb'] >> } >> }); >> >> $([IPython.events]).on('app_initialized.NotebookApp', function(){ >> require(['custom/bidiweb'],function(style){ >> bidiweb.style('.rendered_html *'); >> }) >> >> }); >> >> >> +custom.css with .rtl and .ltr definitions. >> >> This work only partially: It's add RTL support to the markdown blocked >> that exists when the page is loaded, but when I add new markdown or edit >> existing one, the RTL is killed. >> Is there any way to call bidiweb.style whenever a markdown blocked is >> update? or any good way to use mode 1 without changing IPython's code? >> >> >> Thanks, >> Ronen Abravanel >> >> >> >> [1] http://www.youtube.com/watch?v=bCb2HJy-yc0 I hoped it will be usable >> and published by the end of October. otherwise, I'll just present a >> notebook. >> [2] https://github.com/hasenj/bidiweb >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bussonniermatthias at gmail.com Mon Aug 26 03:57:57 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Mon, 26 Aug 2013 09:57:57 +0200 Subject: [IPython-dev] Right to left markdown in IPython (1.0) In-Reply-To: References: Message-ID: Le 25 ao?t 2013 ? 22:11, Ronen Abravanel a ?crit : > The issue itself is general, but all the response are about editing, which seems for me less important then display. My 'extension' tries to solve the display part. > > Markdown dose not define any behavior for RTL text, and all I found online is some hacks (like the one I used). Real solution will be to extend markdown (marked?) both for implicit Right-to-left (like bidiweb is doing) and explicit (invent some 'set RTL' mark?). But that's seems for me event outside the scope of IPython. > Yes extending marked is outside of scope of IPython. Sadly we don't trigger even on markdown rendering. You can open an issue on github about that. 
the current rendering code is the following : MarkdownCell.prototype.render = function () { if (this.rendered === false) { var text = this.get_text(); var math = null; if (text === "") { text = this.placeholder; } var text_and_math = IPython.mathjaxutils.remove_math(text); text = text_and_math[0]; math = text_and_math[1]; var html = marked.parser(marked.lexer(text)); html = $(IPython.mathjaxutils.replace_math(html, math)); // links in markdown cells should open in new tabs html.find("a[href]").not('[href^="#"]').attr("target", "_blank"); try { this.set_rendered(html); } catch (e) { console.log("Error running Javascript in Markdown:"); console.log(e); this.set_rendered($("
").addClass("js-error").html( "Error rendering Markdown!
" + e.toString()) ); } this.element.find('div.text_cell_input').hide(); this.element.find("div.text_cell_render").show(); this.typeset() this.rendered = true; } }; I can give you the **bad** advice to monkey patch MarkdownCell.prototype.render to be the following + what you need to handle RTL. It is **only** a temporary solution that **will** break later. In custom js that would look ilke : IPython.MarkdownCell.prototype.render = function () { if (this.rendered === false) { ? same as above this.typeset() // triger even and/or bidiweb.style('.rendered_html *'); // or anly the `html` DOM above for speed. // if you are greedy. this.rendered = true; } }; -- M From ronena at gmail.com Mon Aug 26 08:59:07 2013 From: ronena at gmail.com (Ronen Abravanel) Date: Mon, 26 Aug 2013 15:59:07 +0300 Subject: [IPython-dev] Right to left markdown in IPython (1.0) In-Reply-To: References: Message-ID: Works, thanks! But only combining with my previews hack ("my" code to alter the preloaded page and Matthias' to handle new-generated content. On Mon, Aug 26, 2013 at 10:57 AM, Matthias BUSSONNIER < bussonniermatthias at gmail.com> wrote: > > Le 25 ao?t 2013 ? 22:11, Ronen Abravanel a ?crit : > > > The issue itself is general, but all the response are about editing, > which seems for me less important then display. My 'extension' tries to > solve the display part. > > > > Markdown dose not define any behavior for RTL text, and all I found > online is some hacks (like the one I used). Real solution will be to extend > markdown (marked?) both for implicit Right-to-left (like bidiweb is doing) > and explicit (invent some 'set RTL' mark?). But that's seems for me event > outside the scope of IPython. > > > > > Yes extending marked is outside of scope of IPython. > > Sadly we don't trigger even on markdown rendering. You can open an issue > on github about that. > > the current rendering code is the following : > > > MarkdownCell.prototype.render = function () { > if (this.rendered === false) { > var text = this.get_text(); > var math = null; > if (text === "") { text = this.placeholder; } > var text_and_math = IPython.mathjaxutils.remove_math(text); > text = text_and_math[0]; > math = text_and_math[1]; > var html = marked.parser(marked.lexer(text)); > html = $(IPython.mathjaxutils.replace_math(html, math)); > // links in markdown cells should open in new tabs > html.find("a[href]").not('[href^="#"]').attr("target", > "_blank"); > try { > this.set_rendered(html); > } catch (e) { > console.log("Error running Javascript in Markdown:"); > console.log(e); > this.set_rendered($("
").addClass("js-error").html( > "Error rendering Markdown!
" + e.toString()) > ); > } > this.element.find('div.text_cell_input').hide(); > this.element.find("div.text_cell_render").show(); > this.typeset() > this.rendered = true; > } > }; > > I can give you the **bad** advice to monkey patch > MarkdownCell.prototype.render > > to be the following + what you need to handle RTL. > It is **only** a temporary solution that **will** break later. > > > In custom js that would look ilke : > > IPython.MarkdownCell.prototype.render = function () { > if (this.rendered === false) { > ? same as above > this.typeset() > > // triger even and/or bidiweb.style('.rendered_html *'); > // or anly the `html` DOM above for speed. > // if you are greedy. > this.rendered = true; > } > }; > > > -- > M > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronena at gmail.com Mon Aug 26 09:37:25 2013 From: ronena at gmail.com (Ronen Abravanel) Date: Mon, 26 Aug 2013 16:37:25 +0300 Subject: [IPython-dev] Right to left markdown in IPython (1.0) In-Reply-To: References: Message-ID: And now on github: https://github.com/ronenabr/ipython_bidi_markdown On Mon, Aug 26, 2013 at 3:59 PM, Ronen Abravanel wrote: > Works, thanks! > > But only combining with my previews hack ("my" code to alter the preloaded > page and Matthias' to handle new-generated content. > > > > > On Mon, Aug 26, 2013 at 10:57 AM, Matthias BUSSONNIER < > bussonniermatthias at gmail.com> wrote: > >> >> Le 25 ao?t 2013 ? 22:11, Ronen Abravanel a ?crit : >> >> > The issue itself is general, but all the response are about editing, >> which seems for me less important then display. My 'extension' tries to >> solve the display part. >> > >> > Markdown dose not define any behavior for RTL text, and all I found >> online is some hacks (like the one I used). Real solution will be to extend >> markdown (marked?) both for implicit Right-to-left (like bidiweb is doing) >> and explicit (invent some 'set RTL' mark?). But that's seems for me event >> outside the scope of IPython. >> > >> >> >> Yes extending marked is outside of scope of IPython. >> >> Sadly we don't trigger even on markdown rendering. You can open an issue >> on github about that. >> >> the current rendering code is the following : >> >> >> MarkdownCell.prototype.render = function () { >> if (this.rendered === false) { >> var text = this.get_text(); >> var math = null; >> if (text === "") { text = this.placeholder; } >> var text_and_math = IPython.mathjaxutils.remove_math(text); >> text = text_and_math[0]; >> math = text_and_math[1]; >> var html = marked.parser(marked.lexer(text)); >> html = $(IPython.mathjaxutils.replace_math(html, math)); >> // links in markdown cells should open in new tabs >> html.find("a[href]").not('[href^="#"]').attr("target", >> "_blank"); >> try { >> this.set_rendered(html); >> } catch (e) { >> console.log("Error running Javascript in Markdown:"); >> console.log(e); >> this.set_rendered($("
").addClass("js-error").html( >> "Error rendering Markdown!
" + e.toString()) >> ); >> } >> this.element.find('div.text_cell_input').hide(); >> this.element.find("div.text_cell_render").show(); >> this.typeset() >> this.rendered = true; >> } >> }; >> >> I can give you the **bad** advice to monkey patch >> MarkdownCell.prototype.render >> >> to be the following + what you need to handle RTL. >> It is **only** a temporary solution that **will** break later. >> >> >> In custom js that would look ilke : >> >> IPython.MarkdownCell.prototype.render = function () { >> if (this.rendered === false) { >> ? same as above >> this.typeset() >> >> // triger even and/or bidiweb.style('.rendered_html *'); >> // or anly the `html` DOM above for speed. >> // if you are greedy. >> this.rendered = true; >> } >> }; >> >> >> -- >> M >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Mon Aug 26 15:43:17 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Mon, 26 Aug 2013 12:43:17 -0700 Subject: [IPython-dev] IPython User Survey 2 Message-ID: Please take a few minutes to fill in the second IPython user survey: https://docs.google.com/spreadsheet/viewform?formkey=dHF2WmlKdTZTRlZVRGFGTDgtUXFBVUE6MQ#gid=0 All questions are optional, so you can give as much or as little information as you want. Note that we will publish the responses, so don't put anything you want kept private. All responses are anonymous. We've made the user survey so that we can get a better idea of who is using IPython and how they're working with it. We ran the first user survey two years ago (http://ipython.org/usersurvey2011.html ), just before we launched the first version of the notebook. We're interested in any changes in our user base since then, and we're also asking some more specific questions that we didn't think of last time. Please complete this survey whether or not you took part in the last one - we want the most complete sample possible. If you know of people using IPython who don't subscribe to the mailing list, please pass this on to them. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyrille.rossant at gmail.com Mon Aug 26 15:50:48 2013 From: cyrille.rossant at gmail.com (Cyrille Rossant) Date: Mon, 26 Aug 2013 21:50:48 +0200 Subject: [IPython-dev] Python and Javascript In-Reply-To: References: Message-ID: Hi, Just an update about our investigation of a possible web backend for Vispy, notably for an integration in the IPython notebook. We've made some progress on this question during EuroSciPy. I also made a proof of concept during the sprint (see https://groups.google.com/forum/#!topic/vispy/16SO-JLVVII, BTW those interested in following this work can subscribe to this mailing list). Our conclusion is that there are multiple ways of making a web backend for an OpenGL-based Python visualization toolkit like Vispy. GL rendering can happen server-side (VNC-like approach, similar to the Matplotlib web backend), client-side (GL commands are streamed to the browser through WebSocket), or a standalone HTML page could be generated, containing all the visualization info and data. I think all approaches are complementary and would be possible in the IPython notebook. My experiment streams GL commands from Python to JS (using WebGL) through WebSocket and a basic JSON messaging protocol. 
NumPy arrays can be transferred with base64. This is likely to be fast in most situations, where the data are only transferred upon initialization. In other cases, with datasets containing millions of points and that need to be transferred often, it may be too slow (I think the bottleneck is base64 serialization/deserialization, and there's also the transfer through WebSocket). We're now investigating a possible binary protocol to avoid serialization of the data buffer. In addition, there might be a way of sharing a same data buffer in memory between a Python process and the web browser, although I've no idea how. I'll keep you in the loop (unless this is not the right place for that, in which case I'm sorry about the noise!). Cyrille From takowl at gmail.com Mon Aug 26 15:56:19 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Mon, 26 Aug 2013 12:56:19 -0700 Subject: [IPython-dev] Python and Javascript In-Reply-To: References: Message-ID: On 26 August 2013 12:50, Cyrille Rossant wrote: > (I think the > bottleneck is base64 serialization/deserialization, and there's also > the transfer through WebSocket). > I think someone mentioned that the IPython messaging protocol does allow the possibility of sending JSON metadata followed by a chunk of binary data, which would avoid the base64 cost. I don't know much about it, but if I'm not making that up, others will be able to give you more details. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjaminrk at gmail.com Tue Aug 27 00:25:21 2013 From: benjaminrk at gmail.com (MinRK) Date: Mon, 26 Aug 2013 21:25:21 -0700 Subject: [IPython-dev] Python and Javascript In-Reply-To: References: Message-ID: On Mon, Aug 26, 2013 at 12:56 PM, Thomas Kluyver wrote: > On 26 August 2013 12:50, Cyrille Rossant wrote: > >> (I think the >> bottleneck is base64 serialization/deserialization, and there's also >> the transfer through WebSocket). >> > > I think someone mentioned that the IPython messaging protocol does allow > the possibility of sending JSON metadata followed by a chunk of binary > data, which would avoid the base64 cost. I don't know much about it, but if > I'm not making that up, others will be able to give you more details. > It is true, but it is only a part of the message spec used in Python-Python messages (data-pub and apply). At the dev meeting, we did discuss the possibility of switching to binary websockets, which opens the door to this part of the message spec making it all the way to javascript and becoming a part of the widget spec. -MinRK > > > Thomas > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bussonniermatthias at gmail.com Tue Aug 27 04:34:13 2013 From: bussonniermatthias at gmail.com (Matthias Bussonnier) Date: Tue, 27 Aug 2013 10:34:13 +0200 Subject: [IPython-dev] Python and Javascript In-Reply-To: References: Message-ID: I don't think you will be able to share memory unless you code your own browser. Usually for security reason, js process are sandboxed as much as possible. And the case where webpage and server are on the same machine is so rare that I doubt this is even an envisaged feature. Envoy? de mon iPhone Le 26 ao?t 2013 ? 
21:50, Cyrille Rossant a ?crit : > Hi, > > Just an update about our investigation of a possible web backend for > Vispy, notably for an integration in the IPython notebook. We've made > some progress on this question during EuroSciPy. I also made a proof > of concept during the sprint (see > https://groups.google.com/forum/#!topic/vispy/16SO-JLVVII, BTW those > interested in following this work can subscribe to this mailing list). > > Our conclusion is that there are multiple ways of making a web backend > for an OpenGL-based Python visualization toolkit like Vispy. GL > rendering can happen server-side (VNC-like approach, similar to the > Matplotlib web backend), client-side (GL commands are streamed to the > browser through WebSocket), or a standalone HTML page could be > generated, containing all the visualization info and data. I think all > approaches are complementary and would be possible in the IPython > notebook. > > My experiment streams GL commands from Python to JS (using WebGL) > through WebSocket and a basic JSON messaging protocol. NumPy arrays > can be transferred with base64. This is likely to be fast in most > situations, where the data are only transferred upon initialization. > In other cases, with datasets containing millions of points and that > need to be transferred often, it may be too slow (I think the > bottleneck is base64 serialization/deserialization, and there's also > the transfer through WebSocket). > > We're now investigating a possible binary protocol to avoid > serialization of the data buffer. In addition, there might be a way of > sharing a same data buffer in memory between a Python process and the > web browser, although I've no idea how. I'll keep you in the loop > (unless this is not the right place for that, in which case I'm sorry > about the noise!). > > Cyrille > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From jchendy at gmail.com Tue Aug 27 11:53:01 2013 From: jchendy at gmail.com (Jeff Hendy) Date: Tue, 27 Aug 2013 11:53:01 -0400 Subject: [IPython-dev] IPython User Survey 2 In-Reply-To: References: Message-ID: Hi Thomas, Do you have any estimate for when the responses will be published? Cheers, Jeff On Mon, Aug 26, 2013 at 3:43 PM, Thomas Kluyver wrote: > Please take a few minutes to fill in the second IPython user survey: > > > https://docs.google.com/spreadsheet/viewform?formkey=dHF2WmlKdTZTRlZVRGFGTDgtUXFBVUE6MQ#gid=0 > > All questions are optional, so you can give as much or as little > information as you want. Note that we will publish the responses, so don't > put anything you want kept private. All responses are anonymous. > > We've made the user survey so that we can get a better idea of who is > using IPython and how they're working with it. We ran the first user survey > two years ago (http://ipython.org/usersurvey2011.html ), just before we > launched the first version of the notebook. We're interested in any changes > in our user base since then, and we're also asking some more specific > questions that we didn't think of last time. Please complete this survey > whether or not you took part in the last one - we want the most complete > sample possible. > > If you know of people using IPython who don't subscribe to the mailing > list, please pass this on to them. 
> > Thanks, > Thomas > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Tue Aug 27 12:31:17 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 27 Aug 2013 09:31:17 -0700 Subject: [IPython-dev] IPython User Survey 2 In-Reply-To: References: Message-ID: Hi Jeff, On 27 August 2013 08:53, Jeff Hendy wrote: > Do you have any estimate for when the responses will be published? The survey will remain open for some time - perhaps a couple of weeks, depending on how quickly the rate of responses drops. After that I'll write up a summary of the results. But if you want to see the raw responses as they come in, have a look here: https://docs.google.com/spreadsheet/ccc?key=0AqIElKUDQl8tdHF2WmlKdTZTRlZVRGFGTDgtUXFBVUE&usp=sharing Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.m.bray at gmail.com Tue Aug 27 13:01:02 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 27 Aug 2013 13:01:02 -0400 Subject: [IPython-dev] Guppy (Heapy) for variables in %who? In-Reply-To: References: Message-ID: On Tue, Aug 20, 2013 at 2:27 PM, Thomas Kluyver wrote: > Hi Josh, > > On 20 August 2013 10:48, Josh Wasserstein wrote: >> >> As you may already know, Heapy provides nice memory statistics of the >> object heap. Here are a couple of links discussing it: >> * >> http://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended >> * http://guppy-pe.sourceforge.net/#Heapy >> >> IPython already has a magic for printing interactive variables: who, >> similar to MATLAB's who: http://www.mathworks.com/help/matlab/ref/who.html >> except that the IPython version does not show the size of objects in memory. >> This is where Guppy comes into play, since it can provide memory statistics >> of the object heap. >> >> I was wondering if there have been any efforts or discussions in >> integrating Heapy (or any other memory profiler) into IPython. > > > I don't recall any discussion about Heapy. The memory_profiler module has > IPython integration: > > https://pypi.python.org/pypi/memory_profiler/0.27#ipython-integration > > If you're interested, the best thing is to write an IPython extension to do > what you want: > > http://ipython.org/ipython-doc/stable/config/extensions/index.html#writing-extensions I would love to see a Heapy extension for IPython--Josh, are you planning to work on something like this? I use Heapy a lot to test for reference cycles and memory leaks. I'm actually about to embark on restructuring some code that is very prone to such things, so having an easy way to test for this in IPython would be fantastically useful and I might work on it myself if you're not going to. Erik From damianavila at gmail.com Tue Aug 27 17:48:54 2013 From: damianavila at gmail.com (=?ISO-8859-1?Q?Dami=E1n_Avila?=) Date: Tue, 27 Aug 2013 18:48:54 -0300 Subject: [IPython-dev] LIVE reveal extension available (highly experimental but usable)... Message-ID: <521D1EC6.7070004@gmail.com> Hi people, As you know, I have been working in a LIVE version of reveal.js- based slideshow for the notebook... (for all of you that do not know about what I am talking about, see this video I posted a few days: http://www.youtube.com/watch?v=Pc-1FS0l2vg). 
As you also know, I am learning JS and the internals of the notebook's JS architecture, so I came up with very ugly but usable code ;-) The repo lives in the IPython-contrib community at this address: https://github.com/ipython-contrib/live_reveal I encourage you to test it and report any bug or comment to me (there will be a lot of them... bugs, I mean). I repeat myself, it is a mediocre implementation, but it works! And you can test it now ;-) (just read the readme in the repo before trying). I am also open to code reviews... you will probably have to deal with a lot of code only meaningful to me ;-), but I am available to explain what I tried to do... and it would be great to get your help to make this thing better (and a great opportunity for me to learn more...). My goal is to make this good enough to integrate into the notebook... I hope we can achieve that ;-) As far as I know, there are only a few implementations of "executable" slideshows (only for js, html and css)... and they are far, far from the potential that the notebook gives us, so I am very excited about this little thing (it is a conceptually new and interesting way to show and demo your info to the audience). Damián. PD: I am proposing this thing (and other slideshow-related ideas) for a talk at PyCon 2014, let's see if I get lucky again ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.taylor at stanford.edu Wed Aug 28 12:37:32 2013 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed, 28 Aug 2013 09:37:32 -0700 Subject: [IPython-dev] nbviewer bug report Message-ID: I'm not sure this is the right address for bug reports (the HTTP 400 error on nbviewer.ipython.org doesn't have a link when requesting bug reports). I have two notebooks that seemed to render fine earlier but today I'm getting bad request messages. They open fine with "ipython notebook" .. http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/Covariance%20with%20means.ipynb http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/howlonglasso.ipynb Maybe it is the server. Should I be able to tell? Thanks, Jonathan -- Jonathan Taylor Dept. of Statistics Sequoia Hall, 137 390 Serra Mall Stanford, CA 94305 Tel: 650.723.9230 Fax: 650.725.8977 Web: http://www-stat.stanford.edu/~jtaylo -------------- next part -------------- An HTML attachment was scrubbed... URL: From bussonniermatthias at gmail.com Wed Aug 28 12:44:16 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Wed, 28 Aug 2013 18:44:16 +0200 Subject: [IPython-dev] nbviewer bug report In-Reply-To: References: Message-ID: <0AB32671-ECF9-4913-BC86-E9FD7637337A@gmail.com> The 400 was probably cached for 10 min or so, they seem fine now. -- M On 28 Aug 2013, at 18:37, Jonathan Taylor wrote: > I'm not sure this is the right address for bug reports (the HTTP 400 error on nbviewer.ipython.org doesn't have a link when requesting bug reports). > > I have two notebooks that seemed to render fine earlier but today I'm getting bad request messages. They open fine with "ipython notebook" .. > > http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/Covariance%20with%20means.ipynb > http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/howlonglasso.ipynb > > Maybe it is the server. Should I be able to tell? 
> > Thanks, > > Jonathan From benjaminrk at gmail.com Wed Aug 28 12:45:42 2013 From: benjaminrk at gmail.com (MinRK) Date: Wed, 28 Aug 2013 09:45:42 -0700 Subject: [IPython-dev] nbviewer bug report In-Reply-To: <0AB32671-ECF9-4913-BC86-E9FD7637337A@gmail.com> References: <0AB32671-ECF9-4913-BC86-E9FD7637337A@gmail.com> Message-ID: Should we be caching errors? On Wed, Aug 28, 2013 at 9:44 AM, Matthias BUSSONNIER < bussonniermatthias at gmail.com> wrote: > The 400 was probably cached for 10 min or so, > they seem fine now. > > -- > M > Le 28 ao?t 2013 ? 18:37, Jonathan Taylor a ?crit : > > > I'm not sure this is the right address for bug reports (the HTTP 400 > error on nbviewer.ipython.org doesn't have a link when requesting bug > reports). > > > > I have two notebooks that seemed to render fine earlier but today I'm > getting bad request messages. They open fine with "ipython notebook" .. > > > > > http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/Covariance%20with%20means.ipynb > > > http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/howlonglasso.ipynb > > > > Maybe it is the server. Should I be able to tell? > > > > Thanks, > > > > Jonathan > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bussonniermatthias at gmail.com Wed Aug 28 12:50:52 2013 From: bussonniermatthias at gmail.com (Matthias BUSSONNIER) Date: Wed, 28 Aug 2013 18:50:52 +0200 Subject: [IPython-dev] nbviewer bug report In-Reply-To: References: <0AB32671-ECF9-4913-BC86-E9FD7637337A@gmail.com> Message-ID: <6001D72B-15EA-4459-B155-7B8B126F2D24@gmail.com> Le 28 ao?t 2013 ? 18:45, MinRK a ?crit : > Should we be caching errors? Yes/no depends on the error. If the remote server throw a 400 or 500 or timeout, it is pretty useless to hammer it again on every page reload. It might be under DDOS, or disconnected. In the other end people might have pushed new version to fix bad notebook. But it's always difficult to know. -- M > > > On Wed, Aug 28, 2013 at 9:44 AM, Matthias BUSSONNIER wrote: > The 400 was probably cached for 10 min or so, > they seem fine now. > > -- > M > Le 28 ao?t 2013 ? 18:37, Jonathan Taylor a ?crit : > > > I'm not sure this is the right address for bug reports (the HTTP 400 error on nbviewer.ipython.org doesn't have a link when requesting bug reports). > > > > I have two notebooks that seemed to render fine earlier but today I'm getting bad request messages. They open fine with "ipython notebook" .. > > > > http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/Covariance%20with%20means.ipynb > > http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/howlonglasso.ipynb > > > > Maybe it is the server. Should I be able to tell? 
> > > > Thanks, > > > > Jonathan > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev From jonathan.taylor at stanford.edu Wed Aug 28 13:37:45 2013 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed, 28 Aug 2013 10:37:45 -0700 Subject: [IPython-dev] nbviewer bug report In-Reply-To: <6001D72B-15EA-4459-B155-7B8B126F2D24@gmail.com> References: <0AB32671-ECF9-4913-BC86-E9FD7637337A@gmail.com> <6001D72B-15EA-4459-B155-7B8B126F2D24@gmail.com> Message-ID: Thanks -- wasn't sure what the issue was. I can't tell from the answers above if the 400 was what the remote server first returned and that was cached by nbviewer or whether the nbviewer server retrieved the notebook and could not render it. Also, it might be helpful to include an email link on the page requesting a bug report (or at least the email address). I know it, but not every person viewing a notebook will. On Wed, Aug 28, 2013 at 9:50 AM, Matthias BUSSONNIER < bussonniermatthias at gmail.com> wrote: > > Le 28 ao?t 2013 ? 18:45, MinRK a ?crit : > > > Should we be caching errors? > > Yes/no depends on the error. > > If the remote server throw a 400 or 500 or timeout, it is pretty useless > to hammer it again on every page reload. > It might be under DDOS, or disconnected. > In the other end people might have pushed new version to fix bad notebook. > But it's always difficult to know. > -- > M > > > > > > > > > On Wed, Aug 28, 2013 at 9:44 AM, Matthias BUSSONNIER < > bussonniermatthias at gmail.com> wrote: > > The 400 was probably cached for 10 min or so, > > they seem fine now. > > > > -- > > M > > Le 28 ao?t 2013 ? 18:37, Jonathan Taylor a ?crit : > > > > > I'm not sure this is the right address for bug reports (the HTTP 400 > error on nbviewer.ipython.org doesn't have a link when requesting bug > reports). > > > > > > I have two notebooks that seemed to render fine earlier but today I'm > getting bad request messages. They open fine with "ipython notebook" .. > > > > > > > http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/Covariance%20with%20means.ipynb > > > > http://nbviewer.ipython.org/url/stat.stanford.edu/~jtaylo/notebooks/howlonglasso.ipynb > > > > > > Maybe it is the server. Should I be able to tell? > > > > > > Thanks, > > > > > > Jonathan > > > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Jonathan Taylor Dept. of Statistics Sequoia Hall, 137 390 Serra Mall Stanford, CA 94305 Tel: 650.723.9230 Fax: 650.725.8977 Web: http://www-stat.stanford.edu/~jtaylo -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmbecker at ucdavis.edu Wed Aug 28 17:31:26 2013 From: gmbecker at ucdavis.edu (Gabriel Becker) Date: Wed, 28 Aug 2013 14:31:26 -0700 Subject: [IPython-dev] LIVE reveal extension available (highly experimental but usable)... 
In-Reply-To: <521D1EC6.7070004@gmail.com> References: <521D1EC6.7070004@gmail.com> Message-ID: This is very cool! I look forward to digging into this a bit more when I can find some time! ~G On Tue, Aug 27, 2013 at 2:48 PM, Dami?n Avila wrote: > Hi people, > > As you know, I have been working in a LIVE version of reveal.js- based > slideshow for the notebook... (for all of you that do not know about what I > am talking about, see this video I posted a few days: > http://www.youtube.com/watch?v=Pc-1FS0l2vg). > > As you also know, I am learning JS and the internal of the JS architecture > from the notebook, so I come up with a very ugly but usable code ;-) > > The repo lives in the IPython-contrib community at this adress: > https://github.com/ipython-contrib/live_reveal > > I encourage you to test it and report me any bug or comment (there will be > a lot of the... bugs I mean). > > I repeat myself, it is a mediocre implementation, but it works! And you > can test it now ;-) (just read the readme in the repo before trying). > > I am also open to code reviews... you will probably have to deal with a > lot of code only meaningful to me ;-), but I am available to explain you > what I tried to do... and it would be great to get your help to make this > thing better (and a great opportunity for me to learn more...). > > My goal is to make this enough great to integrate to the notebook... I > hope we can achieve that ;-) > > As far as I know, there are only a few implementations of "executable" > slideshows (only for js, html and css)... and far far away of the > potentiality that the notebook gives us, so I am very excited about this > little thing (it is a new conceptually and interesting way to show and demo > your info to the audience). > > Dami?n. > > PD: I am proposing this thing (and other slideshow-related ideas) for a > talk to PyCon 2014, let see if I get some lucky again ;-) > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -- Gabriel Becker Graduate Student Statistics Department University of California, Davis -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyrille.rossant at gmail.com Wed Aug 28 18:31:02 2013 From: cyrille.rossant at gmail.com (Cyrille Rossant) Date: Thu, 29 Aug 2013 00:31:02 +0200 Subject: [IPython-dev] Using multiple %%cython cell magics in the Notebook Message-ID: I'd like to use a function defined in a %%cython cell magic in another %%cython cell magic. I could try to find the module's name of the first cell magic (which is randomly generated) and import it in the second, but that's not really convenient when dealing with many cells. I tried %%cython_pyximport (which accepts a module name as argument) but it does not work with "cimport numpy as np" (ImportError: Building module mymodule0 failed: ["CompileError: command 'cl.exe' failed with exit status 2\n"], I'm on Windows 8 64 bits, and this command does work with %%cython). And the documentation suggests to use %%cython anyway. AFAIK %%cython does not accept an argument with the module name, which could be used in subsequent modules for import. I think it would be straightforward to add such option, I could do it if you think that's a good idea. What would you recommend? 
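For concreteness, what I have in mind is roughly the following usage, where -n/--name is the hypothetical option I am proposing (not something %%cython accepts today, as far as I know):

    %%cython -n fastmath
    def square(double x):
        return x * x

and then, in a later cell:

    %%cython
    # 'fastmath' is the module name chosen above, so it can be imported here
    from fastmath import square

    def cube(double x):
        return x * square(x)

That would avoid digging out the randomly generated module name by hand.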
Best, Cyrille From fperez.net at gmail.com Wed Aug 28 18:42:27 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Aug 2013 15:42:27 -0700 Subject: [IPython-dev] Using multiple %%cython cell magics in the Notebook In-Reply-To: References: Message-ID: Makes total sense, waiting for your PR :) On Wed, Aug 28, 2013 at 3:31 PM, Cyrille Rossant wrote: > I'd like to use a function defined in a %%cython cell magic in another > %%cython cell magic. I could try to find the module's name of the > first cell magic (which is randomly generated) and import it in the > second, but that's not really convenient when dealing with many cells. > > I tried %%cython_pyximport (which accepts a module name as argument) > but it does not work with "cimport numpy as np" (ImportError: Building > module mymodule0 failed: ["CompileError: command 'cl.exe' failed with > exit status 2\n"], I'm on Windows 8 64 bits, and this command does > work with %%cython). And the documentation suggests to use %%cython > anyway. > > AFAIK %%cython does not accept an argument with the module name, which > could be used in subsequent modules for import. I think it would be > straightforward to add such option, I could do it if you think that's > a good idea. > > What would you recommend? > > Best, > Cyrille > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail From nborwankar at gmail.com Thu Aug 29 14:28:12 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Thu, 29 Aug 2013 11:28:12 -0700 Subject: [IPython-dev] pyspark and IPython Message-ID: I'm at AmpCamp3 at UCB and see that there would be huge benefits to integrating pyspark with IPython and IPyNB. Questions: a) has this been attempted/done? if so pointers pl. b) does this overlap the IPyNB parallel computing effort in conflicting/competing ways? c) if this has not been done yet - does anyone have a sense of how much effort this might be? (I've done a small hack integrating postgres psql into ipynb so I'm not terrified by that level of deep digging, but are there any show stopper gotchas?) Thanks much, Nitin ------------------------------------------------------------------ Nitin Borwankar nborwankar at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Thu Aug 29 16:35:26 2013 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 29 Aug 2013 13:35:26 -0700 Subject: [IPython-dev] pyspark and IPython In-Reply-To: References: Message-ID: >From a quick glance, it looks like both pyspark and IPython use similar parallel computing models in terms of the process model. You might think that would help them to integrate, but in this case I think it will get in the way of integration. Without learning more about the low-level details of their architecture it is really difficult to know if it is possible or not. But I think the bigger question is what would the motivation for integration be? Both IPython and spark provide self-contained parallel computing capabilties - what usage cases are there for using both at the same time? I think the biggest potential show stopper is that pyspark is not designed in any way to be interactive as far as I can tell. 
Pyspark jobs basically run in batch mode, which is going to make it really tough to fit into IPython's interactive model. Worth looking more into though.. Cheers, Brian On Thu, Aug 29, 2013 at 11:28 AM, Nitin Borwankar wrote: > I'm at AmpCamp3 at UCB and see that there would be huge benefits to > integrating pyspark with IPython and IPyNB. > > Questions: > > a) has this been attempted/done? if so pointers pl. > > b) does this overlap the IPyNB parallel computing effort in > conflicting/competing ways? > > c) if this has not been done yet - does anyone have a sense of how much > effort this might be? (I've done a small hack integrating postgres psql into > ipynb so I'm not terrified by that level of deep digging, but are there any > show stopper gotchas?) > > Thanks much, > > Nitin > ------------------------------------------------------------------ > Nitin Borwankar > nborwankar at gmail.com > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From nborwankar at gmail.com Thu Aug 29 17:41:07 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Thu, 29 Aug 2013 14:41:07 -0700 Subject: [IPython-dev] pyspark and IPython In-Reply-To: References: Message-ID: Hi Brian, The advantage IMHO is that pyspark and the larger UCB AMP effort are a huge open source effort for distributed parallel computing that improves upon the Hadoop model. Spark the underlying layer + Shark the Hive compatible query language adds performance gains of 10x - 100x. The effort has 20+ companies contributing code including Yahoo and 70+ contributors. AMP has a 10M$ grant from NSF. So a) it's not going away soon b) it may be hard to compete with it without that level of resources c) they do have a Python shell (have not used it yet) and they appear committed to have Python as a first class language in their effort. d) lets see if we can find ways to integrate with it. I think integration at the level of the interactive interface might make sense. Just my 2c but I think this effort may leapfrog pure Hadoop over the next 2-3 years. Nitin. ------------------------------------------------------------------ Nitin Borwankar nborwankar at gmail.com On Thu, Aug 29, 2013 at 1:35 PM, Brian Granger wrote: > >From a quick glance, it looks like both pyspark and IPython use > similar parallel computing models in terms of the process model. You > might think that would help them to integrate, but in this case I > think it will get in the way of integration. Without learning more > about the low-level details of their architecture it is really > difficult to know if it is possible or not. But I think the bigger > question is what would the motivation for integration be? Both > IPython and spark provide self-contained parallel computing > capabilties - what usage cases are there for using both at the same > time? I think the biggest potential show stopper is that pyspark is > not designed in any way to be interactive as far as I can tell. > Pyspark jobs basically run in batch mode, which is going to make it > really tough to fit into IPython's interactive model. Worth looking > more into though.. > > Cheers, > > Brian > > On Thu, Aug 29, 2013 at 11:28 AM, Nitin Borwankar > wrote: > > I'm at AmpCamp3 at UCB and see that there would be huge benefits to > > integrating pyspark with IPython and IPyNB. 
> > > > Questions: > > > > a) has this been attempted/done? if so pointers pl. > > > > b) does this overlap the IPyNB parallel computing effort in > > conflicting/competing ways? > > > > c) if this has not been done yet - does anyone have a sense of how much > > effort this might be? (I've done a small hack integrating postgres psql > into > > ipynb so I'm not terrified by that level of deep digging, but are there > any > > show stopper gotchas?) > > > > Thanks much, > > > > Nitin > > ------------------------------------------------------------------ > > Nitin Borwankar > > nborwankar at gmail.com > > > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > > -- > Brian E. Granger > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu and ellisonbg at gmail.com > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Thu Aug 29 17:58:45 2013 From: ellisonbg at gmail.com (Brian Granger) Date: Thu, 29 Aug 2013 14:58:45 -0700 Subject: [IPython-dev] pyspark and IPython In-Reply-To: References: Message-ID: Sorry I wasn't clear in my question. I am very aware of how amazing Spark and Shark are. I do think you are right that they are looking very promising right now. What I don't see is what IPython can offer in working with them. Given their architecture, I don't see how for example you could run spark jobs from the IPython Notebook interactively. Is that the type of thing you are thinking about? Or are you more thinking about direct integration of spark and IPython.parallel. I am more wondering what the benefit of IPython+Spark integration would be. I know that Fernando and Min have talked with some of the AMP lab people and I would love to see what can be done. I would probably be best to sit down and talk further with the spark/shark devs at some point. But if you can learn more about their architecture and investigate the possibilities and report back, that would be fantastic. On Thu, Aug 29, 2013 at 2:41 PM, Nitin Borwankar wrote: > Hi Brian, > > The advantage IMHO is that pyspark and the larger UCB AMP effort are a huge > open source effort for distributed parallel computing that improves upon the > Hadoop model. Spark the underlying layer + Shark the Hive compatible query > language adds performance gains of 10x - 100x. The effort has 20+ companies > contributing code including Yahoo and 70+ contributors. AMP has a 10M$ grant > from NSF. So > a) it's not going away soon > b) it may be hard to compete with it without that level of resources > c) they do have a Python shell (have not used it yet) and they appear > committed to have Python as a first class language in their effort. > d) lets see if we can find ways to integrate with it. > > I think integration at the level of the interactive interface might make > sense. > > Just my 2c but I think this effort may leapfrog pure Hadoop over the next > 2-3 years. > > > Nitin. > > > > > ------------------------------------------------------------------ > Nitin Borwankar > nborwankar at gmail.com > > > On Thu, Aug 29, 2013 at 1:35 PM, Brian Granger wrote: >> >> >From a quick glance, it looks like both pyspark and IPython use >> similar parallel computing models in terms of the process model. 
You >> might think that would help them to integrate, but in this case I >> think it will get in the way of integration. Without learning more >> about the low-level details of their architecture it is really >> difficult to know if it is possible or not. But I think the bigger >> question is what would the motivation for integration be? Both >> IPython and spark provide self-contained parallel computing >> capabilties - what usage cases are there for using both at the same >> time? I think the biggest potential show stopper is that pyspark is >> not designed in any way to be interactive as far as I can tell. >> Pyspark jobs basically run in batch mode, which is going to make it >> really tough to fit into IPython's interactive model. Worth looking >> more into though.. >> >> Cheers, >> >> Brian >> >> On Thu, Aug 29, 2013 at 11:28 AM, Nitin Borwankar >> wrote: >> > I'm at AmpCamp3 at UCB and see that there would be huge benefits to >> > integrating pyspark with IPython and IPyNB. >> > >> > Questions: >> > >> > a) has this been attempted/done? if so pointers pl. >> > >> > b) does this overlap the IPyNB parallel computing effort in >> > conflicting/competing ways? >> > >> > c) if this has not been done yet - does anyone have a sense of how much >> > effort this might be? (I've done a small hack integrating postgres psql >> > into >> > ipynb so I'm not terrified by that level of deep digging, but are there >> > any >> > show stopper gotchas?) >> > >> > Thanks much, >> > >> > Nitin >> > ------------------------------------------------------------------ >> > Nitin Borwankar >> > nborwankar at gmail.com >> > >> > _______________________________________________ >> > IPython-dev mailing list >> > IPython-dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> > >> >> >> >> -- >> Brian E. Granger >> Cal Poly State University, San Luis Obispo >> bgranger at calpoly.edu and ellisonbg at gmail.com >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From nborwankar at gmail.com Thu Aug 29 18:15:29 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Thu, 29 Aug 2013 15:15:29 -0700 Subject: [IPython-dev] pyspark and IPython In-Reply-To: References: Message-ID: Hi Brian, Yes, ok I wasn't clear either. Meta thing - IPython and NB have spoilt me - I want IPy as a cmd line for everything - and be able to launch all cmdline programs from IPy and IPyNB. So that's the meta goal. Every new cmdline I encounter I try to see if !ls works if not it is not a good enough cmdline any more and I try to see if there is a cell magic for that cmdline :-) ! In the context of Spark/Shark and family, they are early efforts and I want to be able to play with the many moving parts in there fast and furiously, without being limited by the earliness of their interface. So if I can plug these into IPy then all the better. I am not sure if there's any value in integrating the two parallel computing models as they seem to serve different audiences. 
The IPy parallel computing model seems closer to what the scientific community needs and at first sight the Spark/Shark model seems to serve the more business oriented data demographic. Where there is an intersection is IMHO, in the ML area - we will learn about that tomorrow in the conference. In any case in the spirit of "IPy over everything" I'd like to hope I can do some integration. Also Fernando is here too and we chatted at lunch but pretty much about everything else except the AMP stuff. I think it makes more sense to wait till the end of day tomorrow to report on the content. Nitin P.S. In an IETF meetings decades ago Vint Cerf wore a t-shirt that said "IP over everything", so "IPy over everything" is my homage to that t-shirt. ------------------------------------------------------------------ Nitin Borwankar nborwankar at gmail.com On Thu, Aug 29, 2013 at 2:58 PM, Brian Granger wrote: > Sorry I wasn't clear in my question. I am very aware of how amazing > Spark and Shark are. I do think you are right that they are looking > very promising right now. What I don't see is what IPython can offer > in working with them. Given their architecture, I don't see how for > example you could run spark jobs from the IPython Notebook > interactively. Is that the type of thing you are thinking about? Or > are you more thinking about direct integration of spark and > IPython.parallel. I am more wondering what the benefit of > IPython+Spark integration would be. I know that Fernando and Min have > talked with some of the AMP lab people and I would love to see what > can be done. I would probably be best to sit down and talk further > with the spark/shark devs at some point. But if you can learn more > about their architecture and investigate the possibilities and report > back, that would be fantastic. > > On Thu, Aug 29, 2013 at 2:41 PM, Nitin Borwankar > wrote: > > Hi Brian, > > > > The advantage IMHO is that pyspark and the larger UCB AMP effort are a > huge > > open source effort for distributed parallel computing that improves upon > the > > Hadoop model. Spark the underlying layer + Shark the Hive compatible > query > > language adds performance gains of 10x - 100x. The effort has 20+ > companies > > contributing code including Yahoo and 70+ contributors. AMP has a 10M$ > grant > > from NSF. So > > a) it's not going away soon > > b) it may be hard to compete with it without that level of resources > > c) they do have a Python shell (have not used it yet) and they appear > > committed to have Python as a first class language in their effort. > > d) lets see if we can find ways to integrate with it. > > > > I think integration at the level of the interactive interface might make > > sense. > > > > Just my 2c but I think this effort may leapfrog pure Hadoop over the next > > 2-3 years. > > > > > > Nitin. > > > > > > > > > > ------------------------------------------------------------------ > > Nitin Borwankar > > nborwankar at gmail.com > > > > > > On Thu, Aug 29, 2013 at 1:35 PM, Brian Granger > wrote: > >> > >> >From a quick glance, it looks like both pyspark and IPython use > >> similar parallel computing models in terms of the process model. You > >> might think that would help them to integrate, but in this case I > >> think it will get in the way of integration. Without learning more > >> about the low-level details of their architecture it is really > >> difficult to know if it is possible or not. But I think the bigger > >> question is what would the motivation for integration be? 
Both > >> IPython and spark provide self-contained parallel computing > >> capabilties - what usage cases are there for using both at the same > >> time? I think the biggest potential show stopper is that pyspark is > >> not designed in any way to be interactive as far as I can tell. > >> Pyspark jobs basically run in batch mode, which is going to make it > >> really tough to fit into IPython's interactive model. Worth looking > >> more into though.. > >> > >> Cheers, > >> > >> Brian > >> > >> On Thu, Aug 29, 2013 at 11:28 AM, Nitin Borwankar > > >> wrote: > >> > I'm at AmpCamp3 at UCB and see that there would be huge benefits to > >> > integrating pyspark with IPython and IPyNB. > >> > > >> > Questions: > >> > > >> > a) has this been attempted/done? if so pointers pl. > >> > > >> > b) does this overlap the IPyNB parallel computing effort in > >> > conflicting/competing ways? > >> > > >> > c) if this has not been done yet - does anyone have a sense of how > much > >> > effort this might be? (I've done a small hack integrating postgres > psql > >> > into > >> > ipynb so I'm not terrified by that level of deep digging, but are > there > >> > any > >> > show stopper gotchas?) > >> > > >> > Thanks much, > >> > > >> > Nitin > >> > ------------------------------------------------------------------ > >> > Nitin Borwankar > >> > nborwankar at gmail.com > >> > > >> > _______________________________________________ > >> > IPython-dev mailing list > >> > IPython-dev at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/ipython-dev > >> > > >> > >> > >> > >> -- > >> Brian E. Granger > >> Cal Poly State University, San Luis Obispo > >> bgranger at calpoly.edu and ellisonbg at gmail.com > >> _______________________________________________ > >> IPython-dev mailing list > >> IPython-dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > > > > _______________________________________________ > > IPython-dev mailing list > > IPython-dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/ipython-dev > > > > > > -- > Brian E. Granger > Cal Poly State University, San Luis Obispo > bgranger at calpoly.edu and ellisonbg at gmail.com > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nborwankar at gmail.com Thu Aug 29 18:49:28 2013 From: nborwankar at gmail.com (Nitin Borwankar) Date: Thu, 29 Aug 2013 15:49:28 -0700 Subject: [IPython-dev] pyspark and IPython In-Reply-To: References: Message-ID: Brian, This is awesome - Fernando did the "integration" I was looking for all the way to the NB integration in the last 15 mins or so! Bottom line a) pyspark *is* a basic Python cmd line already that calls out to the Java and Scala parts b) setting an env var IPYTHON=1 gets you IPython inst of Python in a) c) our fearless leader just did the IPython NB part. Advantages of having all the people in one room! He asked them to open up port 8888 on the cluster master. Nitin ------------------------------------------------------------------ Nitin Borwankar nborwankar at gmail.com On Thu, Aug 29, 2013 at 3:15 PM, Nitin Borwankar wrote: > Hi Brian, > > Yes, ok I wasn't clear either. Meta thing - IPython and NB have spoilt me > - I want IPy as a cmd line for everything - and be able to launch all > cmdline programs from IPy and IPyNB. So that's the meta goal. 
Every new > cmdline I encounter I try to see if !ls works if not it is not a good > enough cmdline any more and I try to see if there is a cell magic for that > cmdline :-) ! > > In the context of Spark/Shark and family, they are early efforts and I > want to be able to play with the many moving parts in there fast and > furiously, without being limited by the earliness of their interface. So > if I can plug these into IPy then all the better. > > I am not sure if there's any value in integrating the two parallel > computing models as they seem to serve different audiences. The IPy > parallel computing model seems closer to what the scientific community > needs and at first sight the Spark/Shark model seems to serve the more > business oriented data demographic. > > Where there is an intersection is IMHO, in the ML area - we will learn > about that tomorrow in the conference. In any case in the spirit of "IPy > over everything" I'd like to hope I can do some integration. > > Also Fernando is here too and we chatted at lunch but pretty much about > everything else except the AMP stuff. I think it makes more sense to wait > till the end of day tomorrow to report on the content. > > Nitin > > P.S. In an IETF meetings decades ago Vint Cerf wore a t-shirt that said > "IP over everything", so "IPy over everything" is my homage to that t-shirt. > > > > > ------------------------------------------------------------------ > Nitin Borwankar > nborwankar at gmail.com > > > On Thu, Aug 29, 2013 at 2:58 PM, Brian Granger wrote: > >> Sorry I wasn't clear in my question. I am very aware of how amazing >> Spark and Shark are. I do think you are right that they are looking >> very promising right now. What I don't see is what IPython can offer >> in working with them. Given their architecture, I don't see how for >> example you could run spark jobs from the IPython Notebook >> interactively. Is that the type of thing you are thinking about? Or >> are you more thinking about direct integration of spark and >> IPython.parallel. I am more wondering what the benefit of >> IPython+Spark integration would be. I know that Fernando and Min have >> talked with some of the AMP lab people and I would love to see what >> can be done. I would probably be best to sit down and talk further >> with the spark/shark devs at some point. But if you can learn more >> about their architecture and investigate the possibilities and report >> back, that would be fantastic. >> >> On Thu, Aug 29, 2013 at 2:41 PM, Nitin Borwankar >> wrote: >> > Hi Brian, >> > >> > The advantage IMHO is that pyspark and the larger UCB AMP effort are a >> huge >> > open source effort for distributed parallel computing that improves >> upon the >> > Hadoop model. Spark the underlying layer + Shark the Hive compatible >> query >> > language adds performance gains of 10x - 100x. The effort has 20+ >> companies >> > contributing code including Yahoo and 70+ contributors. AMP has a 10M$ >> grant >> > from NSF. So >> > a) it's not going away soon >> > b) it may be hard to compete with it without that level of resources >> > c) they do have a Python shell (have not used it yet) and they appear >> > committed to have Python as a first class language in their effort. >> > d) lets see if we can find ways to integrate with it. >> > >> > I think integration at the level of the interactive interface might make >> > sense. >> > >> > Just my 2c but I think this effort may leapfrog pure Hadoop over the >> next >> > 2-3 years. >> > >> > >> > Nitin. 
>> > >> > >> > >> > >> > ------------------------------------------------------------------ >> > Nitin Borwankar >> > nborwankar at gmail.com >> > >> > >> > On Thu, Aug 29, 2013 at 1:35 PM, Brian Granger >> wrote: >> >> >> >> >From a quick glance, it looks like both pyspark and IPython use >> >> similar parallel computing models in terms of the process model. You >> >> might think that would help them to integrate, but in this case I >> >> think it will get in the way of integration. Without learning more >> >> about the low-level details of their architecture it is really >> >> difficult to know if it is possible or not. But I think the bigger >> >> question is what would the motivation for integration be? Both >> >> IPython and spark provide self-contained parallel computing >> >> capabilties - what usage cases are there for using both at the same >> >> time? I think the biggest potential show stopper is that pyspark is >> >> not designed in any way to be interactive as far as I can tell. >> >> Pyspark jobs basically run in batch mode, which is going to make it >> >> really tough to fit into IPython's interactive model. Worth looking >> >> more into though.. >> >> >> >> Cheers, >> >> >> >> Brian >> >> >> >> On Thu, Aug 29, 2013 at 11:28 AM, Nitin Borwankar < >> nborwankar at gmail.com> >> >> wrote: >> >> > I'm at AmpCamp3 at UCB and see that there would be huge benefits to >> >> > integrating pyspark with IPython and IPyNB. >> >> > >> >> > Questions: >> >> > >> >> > a) has this been attempted/done? if so pointers pl. >> >> > >> >> > b) does this overlap the IPyNB parallel computing effort in >> >> > conflicting/competing ways? >> >> > >> >> > c) if this has not been done yet - does anyone have a sense of how >> much >> >> > effort this might be? (I've done a small hack integrating postgres >> psql >> >> > into >> >> > ipynb so I'm not terrified by that level of deep digging, but are >> there >> >> > any >> >> > show stopper gotchas?) >> >> > >> >> > Thanks much, >> >> > >> >> > Nitin >> >> > ------------------------------------------------------------------ >> >> > Nitin Borwankar >> >> > nborwankar at gmail.com >> >> > >> >> > _______________________________________________ >> >> > IPython-dev mailing list >> >> > IPython-dev at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> > >> >> >> >> >> >> >> >> -- >> >> Brian E. Granger >> >> Cal Poly State University, San Luis Obispo >> >> bgranger at calpoly.edu and ellisonbg at gmail.com >> >> _______________________________________________ >> >> IPython-dev mailing list >> >> IPython-dev at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > >> > >> > >> > _______________________________________________ >> > IPython-dev mailing list >> > IPython-dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/ipython-dev >> > >> >> >> >> -- >> Brian E. Granger >> Cal Poly State University, San Luis Obispo >> bgranger at calpoly.edu and ellisonbg at gmail.com >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Fri Aug 30 12:11:38 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 30 Aug 2013 09:11:38 -0700 Subject: [IPython-dev] pyspark and IPython In-Reply-To: References: Message-ID: Hey guys, On Thu, Aug 29, 2013 at 2:58 PM, Brian Granger wrote: > Sorry I wasn't clear in my question. I am very aware of how amazing > Spark and Shark are. I do think you are right that they are looking > very promising right now. What I don't see is what IPython can offer > in working with them. Given their architecture, I don't see how for > example you could run spark jobs from the IPython Notebook > interactively. Is that the type of thing you are thinking about? Or Sorry, I'm at ampcamp again and can't reply in as much detail as I'd like, but the pyspark architecture indeed can be used interactively from the notebook, and in fact it works much, much better than their default shell. Here's my quick port of the pyspark tutorial: http://nbviewer.ipython.org/6384491/Data%20Exploration%20Using%20Spark.ipynb which I ran yesterday on an AMP cluster that had been configured according to my little tutorial: http://nbviewer.ipython.org/6384491/IPythonNotebookPySparkHowTo.ipynb I'm already talking to the AMPLab folks on how to make this integration work seamlessly out of the box with their AMIs, it should be absolutely trivial to do once we have a couple of hours to spend on it. Once we ship 1.1 (so the super() bug is fixed and we don't have to go patching things manually), I'll sit down with them and finish this up. The deeper question of ipython.parallel/spark integration/competition/complementarity is much harder to answer, and I'm not really sure what the answer is yet, to be honest. A good part of the reason I'm here is precisely to think about that. Cheers, f From ellisonbg at gmail.com Fri Aug 30 13:44:56 2013 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 30 Aug 2013 10:44:56 -0700 Subject: [IPython-dev] pyspark and IPython In-Reply-To: References: Message-ID: > Sorry, I'm at ampcamp again and can't reply in as much detail as I'd > like, but the pyspark architecture indeed can be used interactively > from the notebook, and in fact it works much, much better than their > default shell. Here's my quick port of the pyspark tutorial: > > http://nbviewer.ipython.org/6384491/Data%20Exploration%20Using%20Spark.ipynb > > which I ran yesterday on an AMP cluster that had been configured > according to my little tutorial: > > http://nbviewer.ipython.org/6384491/IPythonNotebookPySparkHowTo.ipynb This is absolutely fantastic - I am glad it was this easy to get going. Also glad that pyspark was written in a way that it can be used interactively. > > I'm already talking to the AMPLab folks on how to make this > integration work seamlessly out of the box with their AMIs, it should > be absolutely trivial to do once we have a couple of hours to spend on > it. Yep, that is so cool. This is where open source just rocks - anyone can plug the notebook into their own systems with ease and no license/$ hassle. > Once we ship 1.1 (so the super() bug is fixed and we don't have to go > patching things manually), I'll sit down with them and finish this up. > > The deeper question of ipython.parallel/spark > integration/competition/complementarity is much harder to answer, and > I'm not really sure what the answer is yet, to be honest. A good part > of the reason I'm here is precisely to think about that. Great, keep us posted. 
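For anyone who wants to try this before the polished integration lands, my reading of Fernando's how-to and Nitin's notes is that the pattern boils down to something like the sketch below (treat the details as illustrative; the how-to above is the authoritative version):

    # Start the driver through Spark's pyspark script with IPYTHON=1 so you
    # get IPython (and, following Fernando's how-to, the notebook) instead of
    # the plain Python shell. pyspark then provides a ready-made SparkContext
    # as `sc` in the interactive namespace.

    rdd = sc.parallelize(range(1000))                 # distribute a toy dataset
    evens = rdd.filter(lambda x: x % 2 == 0).count()  # run a small job interactively
    print(evens)

The nice part is that nothing IPython-specific seems to be needed on the Spark side - it is just an interactive Python process driving the cluster.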
Cheers, Brian > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From takowl at gmail.com Fri Aug 30 16:38:55 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 30 Aug 2013 13:38:55 -0700 Subject: [IPython-dev] Dropping our parametric tests system Message-ID: I propose that we get rid of our parametric test system, in favour of either writing regular tests, or using nose's test generators. The parametric tests have failed to run on at least three occasions: - Old versions of nose, which we ended up monkeypatching - Again on newer versions of nose with Python 2, which Min's PR #4148 and my #4150 fix using monkeypatches - On Python 3, it only runs to the first yield, and presumably has done ever since we first had a Python 3 port (also addressed in #4150) So it fails a lot, and it fails quietly, without any warnings or error messages, which is exactly what you don't want for a test suite. You just realise one day that lots of tests aren't actually being run. Finally, the fixes that we've had to introduce are conceptually horrible, and the ParametricTestCase code is fairly complex itself. The advantage of parametric tests over conventional tests is mainly that the testing output reports a greater number of tests being run, even though it's not actually exercising any more code. The advantage over nose test generators is that the code is executed in the real context in which you'd want to debug it. I don't think that either benefit, and especially the benefit over plain tests, is worth the pain this is causing us. Min suggested that I write an IPEP for this, but in my opinion this isn't a big enough change to require an IPEP. I'm happy to write one if people feel I need to, though. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From franz.bergesund at gmail.com Fri Aug 30 16:41:19 2013 From: franz.bergesund at gmail.com (Francesco Montesano) Date: Fri, 30 Aug 2013 22:41:19 +0200 Subject: [IPython-dev] [Notebook] Strange rendering svg Message-ID: Dear List, I'm running Ipython 1.0.0 with python2.7 and python3.3 under two kubuntu 13.04 machines. In both I've installed matplotlib 1.3.0. While making a notebook with %pylab --no-import-all inline %config InlineBackend.figure_format = 'svg' I've noticed a couple of "problems" (I don't know up to which point they are bugs. The following http://nbviewer.ipython.org/6391748 https://gist.github.com/montefra/6391748 shows the problems. Problem 1): Why is the figure is smaller if I use png as inline backend instead of svg? Problem 2): *The important one* In my figure I want to draw a vertical dashed line with ax.axvline (from matplotlib). The line appears *always* if I use the png backend, but the svg acts weird. i) Chromium (28.0.1500.71): If I run the notebook in the vertical lines does not appear in the first plot (in some case I've seen it appearing after executing an other plotting command in a cell). If I close and reopen the notebook (with and without shutting it down) I see all the lines. 
When I open the nbviewer I don't see the vertical lines in the first two plots ii) Firefox (23.0): All the vertical lines appears as they should (dashed) iii) Reconq (2.3.2): if I run it the first plot does not have the vertical line, the other two svg have a *solid* vertical line, and the png shows up as it should. If I close and reopen the notebook (with and without shutting it down) the first remains without the vline, and the other two svg plots have a *solid* vline. From the nbviewer link I don't see the first two vertical lines, and the last one is solid. Does anyone have the same problems? How could I fix it (exept using png and/or firefox)? Cheers, Fra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg at gmail.com Fri Aug 30 18:18:43 2013 From: ellisonbg at gmail.com (Brian Granger) Date: Fri, 30 Aug 2013 15:18:43 -0700 Subject: [IPython-dev] Dropping our parametric tests system In-Reply-To: References: Message-ID: I am fine with this plan. On Fri, Aug 30, 2013 at 1:38 PM, Thomas Kluyver wrote: > I propose that we get rid of our parametric test system, in favour of either > writing regular tests, or using nose's test generators. > > The parametric tests have failed to run on at least three occasions: > - Old versions of nose, which we ended up monkeypatching > - Again on newer versions of nose with Python 2, which Min's PR #4148 and my > #4150 fix using monkeypatches > - On Python 3, it only runs to the first yield, and presumably has done ever > since we first had a Python 3 port (also addressed in #4150) > > So it fails a lot, and it fails quietly, without any warnings or error > messages, which is exactly what you don't want for a test suite. You just > realise one day that lots of tests aren't actually being run. Finally, the > fixes that we've had to introduce are conceptually horrible, and the > ParametricTestCase code is fairly complex itself. > > The advantage of parametric tests over conventional tests is mainly that the > testing output reports a greater number of tests being run, even though it's > not actually exercising any more code. The advantage over nose test > generators is that the code is executed in the real context in which you'd > want to debug it. I don't think that either benefit, and especially the > benefit over plain tests, is worth the pain this is causing us. > > Min suggested that I write an IPEP for this, but in my opinion this isn't a > big enough change to require an IPEP. I'm happy to write one if people feel > I need to, though. > > Thomas > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com From pi at berkeley.edu Fri Aug 30 18:32:00 2013 From: pi at berkeley.edu (Paul Ivanov) Date: Fri, 30 Aug 2013 15:32:00 -0700 Subject: [IPython-dev] Dropping our parametric tests system In-Reply-To: References: Message-ID: <20130830223200.GN14065@HbI-OTOH.berkeley.edu> > On Fri, Aug 30, 2013 at 1:38 PM, Thomas Kluyver wrote: > > Min suggested that I write an IPEP for this, but in my opinion this isn't a > > big enough change to require an IPEP. I'm happy to write one if people feel > > I need to, though. 
I agree that we shouldn't need an IPEP - I don't think *anyone* is counting on our test suite to be a particular way - it is not a part of the codebase that we document (outside of API docs) or advertise to outside consumers. I'm glad we'll finally stop using such tricky (and fallible) machinery, though I admit I'll miss the extra dots... Though with the potential change to our testing infrastructure in the future, as discussed at this week's Lab Meeting On Air [1], we might not have the dots in the future anyway. best, -- _ / \ A* \^ - ,./ _.`\\ / \ / ,--.S \/ \ / `"~,_ \ \ __o ? _ \<,_ /:\ --(_)/-(_)----.../ | \ --------------.......J Paul Ivanov http://pirsquared.org From benjaminrk at gmail.com Fri Aug 30 20:24:57 2013 From: benjaminrk at gmail.com (MinRK) Date: Fri, 30 Aug 2013 17:24:57 -0700 Subject: [IPython-dev] Dropping our parametric tests system In-Reply-To: <20130830223200.GN14065@HbI-OTOH.berkeley.edu> References: <20130830223200.GN14065@HbI-OTOH.berkeley.edu> Message-ID: Just chiming in that I will be sad to see them go, but I have been convinced that their cost outweighs their benefit. -MinRK On Fri, Aug 30, 2013 at 3:32 PM, Paul Ivanov wrote: > > On Fri, Aug 30, 2013 at 1:38 PM, Thomas Kluyver > wrote: > > > Min suggested that I write an IPEP for this, but in my opinion this > isn't a > > > big enough change to require an IPEP. I'm happy to write one if people > feel > > > I need to, though. > > I agree that we shouldn't need an IPEP - I don't think *anyone* > is counting on our test suite to be a particular way - it is not > a part of the codebase that we document (outside of API docs) or > advertise to outside consumers. > > I'm glad we'll finally stop using such tricky (and fallible) > machinery, though I admit I'll miss the extra dots... Though with > the potential change to our testing infrastructure in the future, > as discussed at this week's Lab Meeting On Air [1], we might not > have the dots in the future anyway. > > best, > -- > _ > / \ > A* \^ - > ,./ _.`\\ / \ > / ,--.S \/ \ > / `"~,_ \ \ > __o ? > _ \<,_ /:\ > --(_)/-(_)----.../ | \ > --------------.......J > Paul Ivanov > http://pirsquared.org > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Aug 30 20:31:38 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 30 Aug 2013 17:31:38 -0700 Subject: [IPython-dev] Dropping our parametric tests system In-Reply-To: <20130830223200.GN14065@HbI-OTOH.berkeley.edu> References: <20130830223200.GN14065@HbI-OTOH.berkeley.edu> Message-ID: Hi all, On Fri, Aug 30, 2013 at 3:32 PM, Paul Ivanov wrote: > I agree that we shouldn't need an IPEP - I don't think *anyone* > is counting on our test suite to be a particular way - it is not > a part of the codebase that we document (outside of API docs) or > advertise to outside consumers. > > I'm glad we'll finally stop using such tricky (and fallible) > machinery, though I admit I'll miss the extra dots... Though with > the potential change to our testing infrastructure in the future, > as discussed at this week's Lab Meeting On Air [1], we might not > have the dots in the future anyway. Just for the record, we just had a chat here also with Min, who was perhaps the last happy user of the system.
I also don't think an IPEP is necessary, and the main reason to get rid of this is ultimately its brittleness. As much as I thought it was a good idea at the time to do this (because it does have its benefits), over time I've had it break on me at the worst possible time: when you are debugging, the last thing you want to be fighting is your test suite. Paraphrasing JWZ: you have a test problem and you think "aha! I'll write some parametric tests". Now you have two problems. When your test suite breaks while you're trying to hunt down a bug, you get stopped dead in your tracks and are forced to switch gears from the problem you were solving first to debugging the testing machinery itself. And in this particular case, that means debugging the horrifically convoluted intersection of our own parametric extensions, nose and unittest. I have done it and trust me, you don't want to. Min mentioned that part of his attachment to the system was that it seemed like such a natural and simple thing to do that it *should* be easy and robust. I totally agree with that sentiment, but sadly the internal architecture of unittest is so absurdly nasty (right up there with the beauty of distutils) that it makes what should be a very simple job a very difficult one. As an indicator of the fact that this is *not* trivial, when I first wrote that code I also filed an issue on Python: http://bugs.python.org/issue7897 and it has spawned two more: http://bugs.python.org/issue12600 http://bugs.python.org/issue16997 spanning over 3 1/2 years of work and discussion. And the current resolution they are offering for 3.4 doesn't actually satisfy all the problems raised... In any case, while I don't think we need an IPEP for this, as Min pointed out having the discussion in public is good to record the decision and our rationale for it. Cheers, f From jiffyclub at gmail.com Fri Aug 30 20:40:05 2013 From: jiffyclub at gmail.com (Matt Davis) Date: Fri, 30 Aug 2013 17:40:05 -0700 Subject: [IPython-dev] Dropping our parametric tests system In-Reply-To: References: <20130830223200.GN14065@HbI-OTOH.berkeley.edu> Message-ID: Hi, I'm not at all familiar with IPython's tests so this may be of no value whatever to you, but I find pytest's parametrization features pretty simple and easy to use: http://pytest.org/latest/parametrize.html. Something like that may allow you to keep the benefits of parametrized tests (less code duplication) without the pain of your custom setup. - Matt (who intensely dislikes unittest) On Fri, Aug 30, 2013 at 5:31 PM, Fernando Perez wrote: > Hi all, > > On Fri, Aug 30, 2013 at 3:32 PM, Paul Ivanov wrote: > > > I agree that we shouldn't need an IPEP - I don't think *anyone* > > is counting on our test suite to be a particular way - it is not > > a part of the codebase that we document (outside of API docs) or > > advertise to outside consumers. > > > > I'm glad we'll finally stop using such tricky (and fallible) > > machinery, though I admit I'll miss the extra dots... Though with > > the potential change to our testing infrastructure in the future, > > as discussed at this week's Lab Meeting On Air [1], we might not > > have the dots in the future anyway. > > Just for the record, we just had a chat here also with Min, who was > perhaps the last happy user of the system. > > I also don't think an IPEP is necessary, and the main reason to get > rid of this is ultimately its brittleness.
As much as I thought it > was a good idea at the time to do this (because it does have its > benefits), over time I've had it break on me at the worst possible > time: when you are debugging, the last thing you want to be fighting > is your test suite. Paraphrasing JWZ: > > you have a test problem and you think "aha! I'll write some parametric > tests". Now you have two problems. > > When your test suite breaks while you're trying to hunt down a bug, > you get stopped dead in your tracks and are forced to switch gears > from the problem you were solving first to debugging the testing > machinery itself. And in this particular case, that means debugging > the horrifically convoluted intersection of our own parametric > extensions, nose and unittest. I have done it and trust me, you don't > want to. > > Min mentioned that part of his attachment to the system was that it > seemed like such a natural and simple thing to do that it *should* be > easy and robust. I totally agree with that sentiment, but sadly the > internal architecture of unittest is so absurdly nasty (right up there > with the beauty of distutils) that it makes what should be a very > simple job a very difficult one. As an indicator of the fact that this > is *not* trivial, when I first wrote that code I also filed an issue > on Python: > > http://bugs.python.org/issue7897 > > and it has spawned two more: > > http://bugs.python.org/issue12600 > http://bugs.python.org/issue16997 > > spanning over 3 1/2 years of work and discussion. And the current > resolution they are offering for 3.4 doesn't actually satisfy all the > problems raised... > > In any case, while I don't think we need an IPEP for this, as Min > pointed out having the discussion in public is good to record the > decision and our rationale for it. > > Cheers, > > f > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Fri Aug 30 21:32:29 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Fri, 30 Aug 2013 18:32:29 -0700 Subject: [IPython-dev] Dropping our parametric tests system In-Reply-To: References: <20130830223200.GN14065@HbI-OTOH.berkeley.edu> Message-ID: On 30 August 2013 17:40, Matt Davis wrote: > I'm not at all familiar with IPython's tests so this may be of no value > whatever to you, but I find pytest's parametrization features pretty simple > and easy to use: http://pytest.org/latest/parametrize.html. Something > like that may allow you to keep the benefits of parametrized tests (less > code duplication) without the pain of your custom setup. > I think part of the issue is that we don't use the term 'parametric tests' properly. The logical meaning is "a test which accepts parameters, and can be run multiple times with different parameters". That's simple to implement, and both py.test and nose have ways of doing that. py.test's API is interesting, so thanks for pointing that out. In IPython's case, a parametric test means "a test which is a generator, and we count each time it yields as a separate result". The only benefit of this is psychological: you can quickly increase the number of tests by writing tests with yields sprinkled in, without any other changes to the test function, and without testing any more code. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL:
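To make the distinction concrete, here is a minimal sketch of the two styles discussed in this thread: a nose-style yield/generator test and a py.test parametrized test using pytest.mark.parametrize (the feature behind the link Matt posted). The names add, cases and check_add are purely illustrative and are not taken from IPython's test suite; the generator form is meant for the nose runner, the decorated form for py.test.

    import pytest

    def add(a, b):
        return a + b

    cases = [(1, 1, 2), (2, 3, 5), (-1, 1, 0)]

    # nose-style test generator: nose collects each yielded tuple and runs
    # check_add(a, b, expected) as a separate, separately reported test.
    def test_add_generator():
        for a, b, expected in cases:
            yield check_add, a, b, expected

    def check_add(a, b, expected):
        assert add(a, b) == expected

    # py.test equivalent: the decorator calls test_add once per parameter
    # set, again reporting each combination as its own test.
    @pytest.mark.parametrize("a,b,expected", cases)
    def test_add(a, b, expected):
        assert add(a, b) == expected

In both forms the parameters are explicit and every combination is a real extra test run (the "accepts parameters, can be run multiple times" sense of the term), rather than a count of how many times a single generator test happened to yield.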
From stefan at sun.ac.za Sat Aug 31 00:23:19 2013 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Sat, 31 Aug 2013 06:23:19 +0200 Subject: [IPython-dev] [Notebook] Strange rendering svg In-Reply-To: References: Message-ID: Hi Francesco On Fri, Aug 30, 2013 at 10:41 PM, Francesco Montesano wrote: > I'm running IPython 1.0.0 with python2.7 and python3.3 under two Kubuntu > 13.04 machines. In both I've installed matplotlib 1.3.0. > > While making a notebook with > > %pylab --no-import-all inline > %config InlineBackend.figure_format = 'svg' > > I've noticed a couple of "problems" (I don't know to what extent they are > bugs). > > The following > http://nbviewer.ipython.org/6391748 > https://gist.github.com/montefra/6391748 > show the problems. I wouldn't be surprised if these were related to: https://github.com/ipython/ipython/issues/1866 Stéfan From jiffyclub at gmail.com Sat Aug 31 12:36:53 2013 From: jiffyclub at gmail.com (Matt Davis) Date: Sat, 31 Aug 2013 09:36:53 -0700 Subject: [IPython-dev] Curly braces stripped from magic input Message-ID: Hi all, Software Carpentry is working on a magic for the Notebook for learning regex. It'll take a pattern and some strings and highlight matches. We've run into a problem because curly braces seem to be stripped from magic input before it gets to the function. See an example at http://nbviewer.ipython.org/6399267. It looks like the input is being sent through .format or some Jinja formatting because sending in {{ }} results in { }. Advice appreciated. Thanks, Matt & Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Sat Aug 31 13:03:00 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 31 Aug 2013 10:03:00 -0700 Subject: [IPython-dev] Curly braces stripped from magic input In-Reply-To: References: Message-ID: Yes, we run a derivative of format which allows you to embed Python variables in magic or system calls like this: myvar='foo' %magic {myvar} On 31 August 2013 09:36, Matt Davis wrote: > Hi all, > > Software Carpentry is working on a magic for the Notebook for learning > regex. It'll take a pattern and some strings and highlight matches. We've > run into a problem because curly braces seem to be stripped from magic > input before it gets to the function. See an example at > http://nbviewer.ipython.org/6399267. > > It looks like the input is being sent through .format or some Jinja > formatting because sending in {{ }} results in { }. > > Advice appreciated. > > Thanks, > Matt & Greg > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiffyclub at gmail.com Sat Aug 31 13:07:41 2013 From: jiffyclub at gmail.com (Matt Davis) Date: Sat, 31 Aug 2013 10:07:41 -0700 Subject: [IPython-dev] Curly braces stripped from magic input In-Reply-To: References: Message-ID: Is there any way around this, or do we just have to tell our students to use \d{{3,4}} instead of valid regex? Thanks, Matt On Sat, Aug 31, 2013 at 10:03 AM, Thomas Kluyver wrote: > Yes, we run a derivative of format which allows you to embed Python > variables in magic or system calls like this: > > myvar='foo' > %magic {myvar} > > > On 31 August 2013 09:36, Matt Davis wrote: >> Hi all, >> >> Software Carpentry is working on a magic for the Notebook for learning >> regex.
It'll take a pattern and some strings and highlight matches. We've >> run into a problem because curly braces seem to be stripped from magic >> input before it gets to the function. See an example at >> http://nbviewer.ipython.org/6399267. >> >> It looks like the input is being sent through .format or some Jinja >> formatting because sending in {{ }} results in { }. >> >> Advice appreciated. >> >> Thanks, >> Matt & Greg >> >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/ipython-dev >> >> > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/ipython-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takowl at gmail.com Sat Aug 31 14:21:18 2013 From: takowl at gmail.com (Thomas Kluyver) Date: Sat, 31 Aug 2013 11:21:18 -0700 Subject: [IPython-dev] Curly braces stripped from magic input In-Reply-To: References: Message-ID: On 31 August 2013 10:07, Matt Davis wrote: > Is there any way around this, or do we just have to tell our students to > use \d{{3,4}} instead of valid regex? No, I don't think we currently have any way to disable this. You may prefer to declare regexes in regular Python code rather than within IPython magic commands. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From gvwilson at third-bit.com Sat Aug 31 14:35:24 2013 From: gvwilson at third-bit.com (Greg Wilson) Date: Sat, 31 Aug 2013 14:35:24 -0400 Subject: [IPython-dev] Curly braces stripped from magic input In-Reply-To: References: Message-ID: <5222376C.7070609@third-bit.com> On 2013-08-31 2:21 PM, Thomas Kluyver wrote: > On 31 August 2013 10:07, Matt Davis > wrote: > > Is there any way around this, or do we just have to tell our > students to use \d{{3,4}} instead of valid regex? > > > No, I don't think we currently have any way to disable this. You may > prefer to declare regexes in regular Python code rather than within > IPython magic commands. Hi Thomas, The point of this tool is to allow us to teach regular expressions *without* having to embed them in Python (or anything else) --- we've found they're easier for newcomers to digest if they can wrap their heads around \d+\s+\d+\b first, and then worry about raw strings, re.whatever, match objects, and so on. What we have now lets us do this: %%regex a+b xyz aaabxx xabbbx xyzab xabxabx which is about as simple as it can get. We can tell them to double the {}'s if we have to, but it seems weird not to have access to the original (unchomped) string... Thanks, Greg -------------- next part -------------- An HTML attachment was scrubbed... URL:
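The brace stripping that Matt and Greg describe is consistent with str.format-style substitution, which fits Thomas's "derivative of format" explanation. Below is a minimal sketch in plain Python of why {3,4} disappears while {{3,4}} survives; the variable myvar and the example pattern are made up for illustration, and this only mimics the escaping rule, not IPython's actual expansion code (which evidently drops unresolvable braces silently rather than raising an error).

    myvar = 'foo'

    # {name} is treated as a substitution field; this is the variable
    # embedding Thomas describes for magics and system calls.
    print('pattern {myvar}'.format(myvar=myvar))   # -> pattern foo

    # A regex quantifier such as {3,4} therefore looks like a field too.
    try:
        print(r'\d{3,4}'.format())
    except (KeyError, IndexError) as err:
        # plain str.format tries to look up a field named '3,4'
        print('format could not resolve the braces:', err)

    # Doubling the braces escapes them, which is why students have to
    # write \d{{3,4}} to get \d{3,4} through to the %%regex magic.
    print(r'\d{{3,4}}'.format())                    # -> \d{3,4}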