From davidmenhur at gmail.com  Wed Sep  2 04:36:29 2015
From: davidmenhur at gmail.com (Daπid)
Date: Wed, 2 Sep 2015 10:36:29 +0200
Subject: [SciPy-Dev] Missing releases at Scipy.org news
Message-ID: 

The releases of Scipy 0.16 and Numpy 1.9.2 are missing from the front page: https://www.scipy.org/#news

That has led people to believe 0.16 was not out, and thus that they couldn't depend on it. If updating the page is difficult, can we put a big fat button linking to PyPI, which will always hold the latest release?

/David.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johann.cohentanugi at gmail.com  Fri Sep 11 06:48:08 2015
From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi)
Date: Fri, 11 Sep 2015 12:48:08 +0200
Subject: [SciPy-Dev] issue pickling an interp1d object
Message-ID: <55F2B168.8040707@gmail.com>

Dear Scipy-ers,
I sent the email below to the user list yesterday, but perhaps this one is a better medium. I forgot to mention that the problem is likely related to the fact that I am using the interpolators within parallel snippets of code, using the multiprocessing module. It seems that interp1d is not safe in the context of such usage, contrary (seemingly) to InterpolatedUnivariateSpline. I do not know whether this is a bug or a feature, and I have an obvious workaround: saving the arrays rather than the interpolator objects themselves.
best,
Johann

-------------------
Dear Scipy-ers,
I am using scipy (0.15.1) to interpolate a fairly complicated double integral for several parameters, for later use in yet a third integral. What I pickle is thus a dict of interpolators. When I am using InterpolatedUnivariateSpline my code runs smoothly and dumps a pickled file.
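The array-saving workaround mentioned above can be sketched as follows. This is a minimal illustration, not Johann's actual code: the sample data and the "cubic" kind are invented, and an in-memory buffer stands in for the file.

```python
import io
import pickle

import numpy as np
from scipy.interpolate import interp1d

x = np.linspace(0.0, 10.0, 101)
y = np.sin(x)

# Pickle the plain arrays (plus any constructor arguments) instead of
# the interp1d object itself -- ndarrays pickle without trouble.
buf = io.BytesIO()
pickle.dump({"x": x, "y": y, "kind": "cubic"}, buf)

# Later, e.g. in a multiprocessing worker, rebuild the interpolator
# from the unpickled arrays.
buf.seek(0)
data = pickle.load(buf)
f = interp1d(data["x"], data["y"], kind=data["kind"])
```

Rebuilding from arrays also sidesteps any version skew: the reconstructed interpolator is created by whatever scipy the worker has, rather than being deserialized from another scipy's internals.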
But when I use interp1d (with default protocol 0), I crash:

Traceback (most recent call last):
    pickle.dump( interpolators, f )
  File "/usr/lib/python2.7/pickle.py", line 1370, in dump
    Pickler(file, protocol).dump(obj)
  File "/usr/lib/python2.7/pickle.py", line 224, in dump
    self.save(obj)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib/python2.7/pickle.py", line 663, in _batch_setitems
    save(v)
  File "/usr/lib/python2.7/pickle.py", line 306, in save
    rv = reduce(self.proto)
  File "/usr/lib/python2.7/copy_reg.py", line 77, in _reduce_ex
    raise TypeError("a class that defines __slots__ without ")
TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled

When I set the protocol to -1, I get a different crash:

    pickle.dump( interpolators, f, protocol=-1 )
  File "/usr/lib/python2.7/pickle.py", line 1370, in dump
    Pickler(file, protocol).dump(obj)
  File "/usr/lib/python2.7/pickle.py", line 224, in dump
    self.save(obj)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib/python2.7/pickle.py", line 681, in _batch_setitems
    save(v)
  File "/usr/lib/python2.7/pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "/usr/lib/python2.7/pickle.py", line 419, in save_reduce
    save(state)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 548, in save_tuple
    save(element)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib/python2.7/pickle.py", line 681, in _batch_setitems
    save(v)
  File "/usr/lib/python2.7/pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "/usr/lib/python2.7/pickle.py", line 396, in save_reduce
    save(cls)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 748, in save_global
    (obj, module, name))
pickle.PicklingError: Can't pickle : it's not found as __builtin__.instancemethod

Does that ring a bell to anyone, before I start simplifying my code to provide this list with a test case?
Thanks a lot in advance,
Johann

From travis at continuum.io  Sat Sep 12 17:51:51 2015
From: travis at continuum.io (Travis Oliphant)
Date: Sat, 12 Sep 2015 16:51:51 -0500
Subject: [SciPy-Dev] Looking for a developer who will work with me for at least 6 months to fix NumPy's dtype system.
Message-ID: 

Hi all,

Apologies for cross-posting, but I need to get the word out and twitter doesn't provide enough explanation.

I've been working on a second edition of my "Guide to NumPy" book. It's been a time-pressured activity, but it's helped me put more meat around my ideas for how to fix NumPy's dtype system -- which I've been contemplating off and on for 8 years. I'm pretty sure I know exactly how to do it --- in a way that fits more cleanly into Python. It will take 3-6 months and will have residual efforts needed that will last another 6 months --- making more types available with NumPy, improving calculations, etc.

This work will be done completely in public view and allow for public comment. It will not solve *all* of NumPy's problems, but it will put NumPy's dtype system on the footing it should, in retrospect, have been put on in the first place (if I had known then what I know now). It won't be a grandiose rewrite. It will be a pretty surgical fix to a few key places in the code. However, it will break the ABI and require recompilation of NumPy extensions (and so would need to be called NumPy 2.0).
This is unavoidable, but I don't see any problem with breaking the ABI today given how easy it is to get distributions of Python these days from a variety of sources (including using conda --- but not only using conda). For those that remember what happened in Python dev land, the changes will be similar to when Guido changed Python 1.5.2 to Python 2.0.

I can mentor and work closely with someone who will work on this, and we will invite full participation and feedback from whoever in the community also wants to participate --- but I can't do it myself full time (and it needs someone full time+). Fortunately, I can pay someone to do it if they are willing to commit at least 6 months (it is not required to work at Continuum for this, but you can have a job at Continuum if you want one).

I'm only looking for people who have enough experience with C or preferably the Python C-API. You also have to *want* to work on this. You need to be willing to work with me on the project directly and work to have a mind-meld with my ideas, which will undoubtedly give rise to additional perspectives and ideas for later work for you.

When I wrote NumPy 1.0, I put in 80+ hour weeks for about 6 months or more and then 60+ hour weeks for another year. I was pretty obsessed with it. This won't need quite that effort, but it will need something like it. Being able to move to Austin is a plus but not required. I can sponsor a visa for the right candidate as well (though it's not guaranteed you will get one with immigration policies being what they are).

This is a labor of love for so many of us, and my desire to help the dtype situation in NumPy comes from the same space that my desire to work on NumPy in the first place came from. I will be interviewing people to work on this, as not everyone who may want to will really be qualified to do it --- especially with so many people writing Cython these days instead of good-ole C-API code :-)

Feel free to spread the news to anyone you can.
I won't say more until I've found someone to work with me on this --- because I won't have the time to follow-up with any questions or comments. Even if I can't find someone I will publish the ideas --- but that also takes time and effort that is in short supply for me right now. If there is someone willing to fund this work, please let me know as well -- that could free up more of my time. Best, -Travis -- *Travis Oliphant* *Co-founder and CEO* @teoliphant 512-222-5440 http://www.continuum.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From carsonc at gmail.com Sun Sep 13 17:09:17 2015 From: carsonc at gmail.com (Cantwell Carson) Date: Sun, 13 Sep 2015 17:09:17 -0400 Subject: [SciPy-Dev] Fwd: Pre-release of reentrant Qhull on github In-Reply-To: <6.2.5.6.2.20150831224901.02c7f6d8@shore.net> References: <6.2.5.6.2.20150831224901.02c7f6d8@shore.net> Message-ID: Can we update the Qhull on Scipy with 2015? The vertex_id and ridge_id fields have increased from 24 bits to 32 bits, which solves an issue with data sets larger than 2^24 points. There are other issues that appear to be addressed, but this is the one that I was excited about. Cantwell ---------- Forwarded message ---------- From: Brad Barber Date: Mon, Aug 31, 2015 at 11:32 PM Subject: Pre-release of reentrant Qhull on github To: Alejandro.LindeCerezo at dlr.de, axhiao at gmail.com, bakgenc at tamu.edu, carsonc at gmail.com, Chaim.Dryzun at sarine.com, colonel at monmouth.com, david.c.sterratt at ed.ac.uk, drum at gwu.edu, eric.perim at duke.edu, keith.briggs at bt.com, L1101801 at qub.ac.uk, matteo.ragaglia at polimi.it, michael.guyonnet at gmail.com, noe.alvarado at upc.edu, pshafer at lbl.gov, robert.rambo at diamond.ac.uk, schwehr at gmail.com, shrikant.mehre at gmail.com, tomilovanatoliy at gmail.com, zachrg at gmail.com [Sent to recent contacts regarding Qhull] Hi all, The pre-release of reentrant Qhull 2015.0.2 is available on github. 
git clone git@github.com:qhull/qhull.git

It features a new reentrant library, qhull_r, which replaces the current qhull library. The old libraries (w/ and w/o qh_QHpointer) remain available.

qhull_r does not use global variables. Instead, a qhT pointer is the first argument to each procedure. This approach was pioneered by Pete Klosterman in 2010.

The C++ interface is redone using qhull_r. It should be easier to use.

Comments, errors, suggestions are welcome.

--Brad

P.S.: [Kurt] It does not include your edits. They're next.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From travis at continuum.io  Sun Sep 13 18:51:12 2015
From: travis at continuum.io (Travis Oliphant)
Date: Sun, 13 Sep 2015 17:51:12 -0500
Subject: [SciPy-Dev] The process I intend to follow for any proposed changes to NumPy
Message-ID: 

Hey all,

I just wanted to clarify that I am very excited about a few ideas I have --- but I don't have time myself to engage in the community process to get these changes into NumPy. However, those are real processes --- I've been coaching a few people in those processes for the past several years already.

So, rather than do nothing, what I'm looking to do is to work with a few people who I can share my ideas with, get excited about the ideas, and who will then work with the community to get them implemented. That's what I was announcing and talking about yesterday --- looking for interested people who want to work on NumPy *with* the NumPy community.

In my enthusiasm, I realize that some may have misunderstood my intention. There is no 'imminent' fork, nor am I planning on doing some crazy amount of work that I then try to force on other developers of NumPy. What I'm planning to do is find people to train on the NumPy code base (people who would increase the diversity of the developers would be ideal -- but hard to accomplish).
I plan to train them on NumPy based on my experience, and on what I think should be done --- and then have *them* work through the community process and engage with others to get consensus (hopefully not losing too much in translation in the process --- but instead getting even better). During that process I will engage as a member of the community and help write NEPs and other documents and help clarify where it makes sense as I can. I will be filtering for people that actually want to see NumPy get better. Until I identify the people and work with them, it will be hard to tell how this will best work. So, stay tuned. If all goes well, what you should see in a few weeks time are specific proposals, a branch or two, and the beginnings of some pull requests. If you don't see that, then I will not have found the right people to help me, and we will all continue to go back to searching. While I'm expecting the best, in the worst case, we get additional people who know the NumPy code base and can help squash bugs as well as implement changes that are desired. Three things are needed if you want to participate in this: 1) A willingness to work with the open source community, 2) a deep knowledge of C and in-particular CPython's brand of C, and 3) a willingness to engage with me, do a mind-meld and dump around the NumPy code base, and then improve on what is in my head with the rest of the community. Thanks, -Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From christoph at grothesque.org Thu Sep 24 12:49:12 2015 From: christoph at grothesque.org (Christoph Groth) Date: Thu, 24 Sep 2015 18:49:12 +0200 Subject: [SciPy-Dev] vectorized scipy.integrate.quad Message-ID: <87bncrg3uf.fsf@grothesque.org> Dear SciPy experts, We would like to speed-up some numerical integrations that we do by parallelizing them with MPI (our integrands take a very long time to evaluate). 
Looking at the implementation of QAGSE (which performs the work for scipy.integrate.quad) I see that there is potential for 21-fold vectorization in the first pass and subsequently for 42-fold vectorization of the evaluation of the integrand. That's quite good!

Once a vectorized scipy.integrate.quad is available, any parallelization technique can be used in the integrand. But even without parallelization, a vectorized integrand could already provide significant speedups.

On the Python level, I think that the interface to the new function could remain the same as that of the existing quad routine, with the only difference that the first argument of the integrand will be not a number but a 1d-array of numbers.

Would such a vectorized quad be a welcome addition to SciPy? I imagine proceeding as follows:

(1) Download the relevant parts of QUADPACK and convert them to C using f2c.

(2) Vectorize the C code.

(3) Add Python bindings.

(4) Write a test that verifies that the vectorized routine evaluates the same points and gives the same results as the sequential one.

Cheers
Christoph

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 810 bytes
Desc: not available
URL: 

From kasturi.surya at gmail.com  Fri Sep 25 09:03:16 2015
From: kasturi.surya at gmail.com (Surya)
Date: Fri, 25 Sep 2015 18:33:16 +0530
Subject: [SciPy-Dev] Mailing list working?
Message-ID: 

Hi, I've sent emails last week but they weren't delivered.

Is there any problem?

Thanks
Surya

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From samkit993 at gmail.com  Fri Sep 25 09:13:27 2015
From: samkit993 at gmail.com (Samkit Shah)
Date: Fri, 25 Sep 2015 18:43:27 +0530
Subject: [SciPy-Dev] Mailing list working?
In-Reply-To: 
References: 
Message-ID: 

And I want to unsubscribe from the list temporarily, but I am not able to do so. Any pointers?
On Sep 25, 2015 6:33 PM, "Surya" wrote:
> Hi, I've sent emails last week but they weren't delivered.
>
> Is there any problem?
>
> Thanks
> Surya
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kasturi.surya at gmail.com  Fri Sep 25 09:24:52 2015
From: kasturi.surya at gmail.com (Surya)
Date: Fri, 25 Sep 2015 18:54:52 +0530
Subject: [SciPy-Dev] The new scipy central site is live
Message-ID: 

Hello everyone,

I'm excited to announce that the new SciPy Central website is live at central.scipy.org.

The URLs from the old domain scipy-central.org, hosted by Kevin, will be redirected to the new one.

Major improvements on front-end
-------------------------------

1. New UI using Twitter Bootstrap 2.3
2. Comments system for submissions
3. Preview during submission is simplified
4. RSS/Atom feeds for submissions, comments
5. ACE Editor to show code snippets, and highlight syntax

Possible future improvements
----------------------------

1. OpenID integration for login. I'm thinking of Google Identity Toolkit now
2. Up/down voting of submissions. It's great if someone wants to improve this
3. Cookbooks integration + IPython Notebook support. Any ideas on how to integrate IPython Notebooks would be great. Possibly, we may want to rethink different submission types.

Thanks to Clever Cloud for kindly providing hosting services to our open source project at no cost.

Help + Contributions by [1]

thanks,
Surya

[1] In alphabetic order: Allen (Enthought), Julien (Clever Cloud), Kevin, Pauli, Ralf, Sebastian

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Nicolas.Rougier at inria.fr  Fri Sep 25 09:41:59 2015
From: Nicolas.Rougier at inria.fr (Nicolas P.
Rougier) Date: Fri, 25 Sep 2015 15:41:59 +0200 Subject: [SciPy-Dev] The new scipy central site is live In-Reply-To: References: Message-ID: Thanks for the nice website. I tried to submit a link but got a "A severe server error: ooops 500." Nicolas > On 25 Sep 2015, at 15:24, Surya wrote: > > Hello everyone, > > I'm excited to announce the new SciPy Central website is live at central.scipy.org. > > The URLs from the old domain scipy-central.org hosted by Kevin will be redirected to the latest. > > Major improvements on front-end > ------------------------------- > > 1. New UI using Twitter Bootstrap 2.3 > 2. Comments system for submissions > 3. Preview during submission is simplified > 4. RSS/Atom feeds for submissions, comments > 5. ACE Editor to show code snippets, and highlight syntax > > Possible future improvements > ---------------------------- > > 1. OpenID integration to login. I'm thinking of Google Identity Toolkit now > 2. Up/down voting submissions. Its great if someone wants to improve this > 3. Cookbooks integration + iPyNb support. Any ideas on how to integrate iPython Notebooks would be great. Possibly, we may want to rethink different submission types. > > Thanks to Clever Cloud for kindly providing hosting services to our open source project at no cost. > > Help + Contributions by [1] > > thanks, > Surya > > [1] In alphabetic order: Allen (Enthought), Julien (Clever Cloud), Kevin, Pauli, Ralf, Sebastian > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-dev From njs at pobox.com Fri Sep 25 17:09:34 2015 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 25 Sep 2015 14:09:34 -0700 Subject: [SciPy-Dev] Mailing list working? In-Reply-To: References: Message-ID: Well, I got your message, so I guess the list is working again :-). It was down for quite some time though, probably as part of some infrastructure reorganization. 
This thread on github I think has all the information that's public right now: https://github.com/numpy/numpy/issues/6325

-n

On Sep 25, 2015 6:03 AM, "Surya" wrote:
> Hi, I've sent emails last week but they weren't delivered.
>
> Is there any problem?
>
> Thanks
> Surya
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-dev
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From superscript92 at yahoo.com  Fri Sep 25 17:17:13 2015
From: superscript92 at yahoo.com (Jamie Tsao)
Date: Fri, 25 Sep 2015 14:17:13 -0700
Subject: [SciPy-Dev] Mailing list working?
In-Reply-To: 
Message-ID: <1443215833.87220.YahooMailAndroidMobile@web122906.mail.ne1.yahoo.com>

To unsubscribe temporarily:

1) go to the mailing list main page (http://projects.scipy.org/mailman/listinfo/scipy-dev)
2) sign in below to see the subscribers list
3) click on your own email address in the list
4) at the top of the subscription options in the middle of the page, for "mail delivery" choose Disabled.

This will prevent you from getting mail from the list for a while. Does this not work?

Sent from Yahoo Mail on Android

From: "Samkit Shah"
Date: Fri, Sep 25, 2015 at 6:13 AM
Subject: Re: [SciPy-Dev] Mailing list working?

And I want to unsubscribe from the list temporarily, but I am not able to do so. Any pointers?

On Sep 25, 2015 6:33 PM, "Surya" wrote:
Hi, I've sent emails last week but they weren't delivered.
Is there any problem?
Thanks
Surya

_______________________________________________
SciPy-Dev mailing list
SciPy-Dev at scipy.org
https://mail.scipy.org/mailman/listinfo/scipy-dev

_______________________________________________
SciPy-Dev mailing list
SciPy-Dev at scipy.org
https://mail.scipy.org/mailman/listinfo/scipy-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From charlesnwoods at gmail.com Sun Sep 27 10:15:07 2015 From: charlesnwoods at gmail.com (Nathan Woods) Date: Sun, 27 Sep 2015 08:15:07 -0600 Subject: [SciPy-Dev] vectorized scipy.integrate.quad In-Reply-To: <87bncrg3uf.fsf@grothesque.org> References: <87bncrg3uf.fsf@grothesque.org> Message-ID: I think that there has been some cautious optimism about breaking away from the original QUADPACK code, which is really starting to get creaky, so this might be an interesting idea. Unless I'm mistaken, though, integration of a python function seems unlikely to take advantage of a vectorized C integrator, so the speed ups would be limited to those calling quad with ctypes functions. Also, I think breaking the API would be a bad idea, but you could, for instance, allow quad to accept both numbers and sequences, and then wrap the number in an array before sending it on to the integrator. This would preserve the API, while allowing the desired vectorization. The biggest hurdle, I think, will be testing, which will have to be very thorough in order to warrant replacing the battle-tested QUADPACK code. Nathan > On Sep 24, 2015, at 10:49 AM, Christoph Groth wrote: > > Dear SciPy experts, > > We would like to speed-up some numerical integrations that we do by > parallelizing them with MPI (our integrands take a very long time to > evaluate). > > Looking at the implementation of QAGSE (that is > performing the work for scipy.integrate.quad) I see that there is > potential for 21-fold vectorization in the first pass and subsequently for > 42-fold vectorization of the evaluation of the integrand. That?s quite > good! > > Once a vectorized scipy.integrand.quad is available, any parallelization > technique can be used in the integrand can use. But even without > parallelization a vectorized integrand could already provide significant > speedups. 
> > On the Python level, I think that the interface to the new function > could remain the same as that of the existing quad routine, with the > only difference that the first argument of the integrand will be not a > number, but a 1d-array of numbers. > > Would such a vectorized quad be a welcome addition to SciPy? I imagine > proceeding as follows: > > (1) Download the relevant parts of QUADPACK and convert them to C using > f2c. > > (2) Vectorize the C code. > > (3) Add Python bindings. > > (4) Write a test that verifies that the vectorized routine evaluates the same > points and gives the same results as the sequential one. > > Cheers > Christoph > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-dev From charlesr.harris at gmail.com Sun Sep 27 14:24:33 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 Sep 2015 12:24:33 -0600 Subject: [SciPy-Dev] Numpy 1.10.0rc2 coming Monday, Sep 28. Message-ID: Hi All, Just a heads up. If you have encountered any unreported errors with rc1, please let us know. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From archibald at astron.nl Sun Sep 27 19:05:44 2015 From: archibald at astron.nl (Anne Archibald) Date: Sun, 27 Sep 2015 23:05:44 +0000 Subject: [SciPy-Dev] vectorized scipy.integrate.quad In-Reply-To: <87bncrg3uf.fsf@grothesque.org> References: <87bncrg3uf.fsf@grothesque.org> Message-ID: On Thu, Sep 24, 2015 at 8:40 PM Christoph Groth wrote: > We would like to speed-up some numerical integrations that we do by > parallelizing them with MPI (our integrands take a very long time to > evaluate). > > Looking at the implementation of QAGSE (that is > performing the work for scipy.integrate.quad) I see that there is > potential for 21-fold vectorization in the first pass and subsequently for > 42-fold vectorization of the evaluation of the integrand. That?s quite > good! 
>
> Once a vectorized scipy.integrate.quad is available, any parallelization
> technique can be used in the integrand. But even without
> parallelization a vectorized integrand could already provide significant
> speedups.
>

Just to make sure I understand what you're asking for: You have a function to integrate that takes a really long time to evaluate. At the moment scipy.integrate.quad calls it once, waits for it to evaluate, then calls it again, and so forth. But the algorithm in scipy.integrate.quad could just as easily fire off all 21 of the individual calls and wait until they are all done. It's a concurrent algorithm - it doesn't care about the order of evaluation of the sub-parts - but the current implementation does not express that concurrency, and so parallel implementations are not possible. This is actually a problem that happens a lot: our languages have not traditionally had a way to express concurrency, so places where parallelism should be easy are hard to exploit.

I'll point out that there is actually a lot more concurrency in this algorithm than it seems: it's (I think) basically a recursive algorithm, where you evaluate the integral and its truncation error on an interval; if the truncation error is too high, you subdivide the interval and repeat the process on both sides. These two sub-evaluations are also concurrent - it doesn't matter which is done first.

I have been thinking about how to handle this sort of concurrency. The only easy way to express concurrency now is with a vectorized call: if you call sin(x) on a vector x, you are saying that you don't care what order the sines are computed in. So that's what you're asking for, and it wouldn't be too hard to get working in this situation (though re-porting QUADPACK is going to be a fairly major task). I have recently done some experimentation on a similar situation: I have a very expensive function to minimize, and gradient calculation is another example of concurrency.
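The recursive scheme described above can be made concrete with a minimal serial adaptive Simpson integrator. This is a sketch only, not the QUADPACK algorithm: it uses the classic factor-15 error estimate, and the function and tolerance names are illustrative. The two recursive calls are the points where a concurrent implementation would hand work to a pool.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-9):
    """Recursive adaptive Simpson's rule.  The two recursive calls in
    `recurse` are independent subproblems: in a concurrent version each
    could be handed to a worker (thread, process, or MPI rank)."""
    def simpson(fa, fm, fb, a, b):
        # Simpson's rule on [a, b] using endpoint and midpoint values.
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        # Halving the step shrinks the Simpson error by roughly 16x,
        # so |left + right - whole| / 15 estimates the truncation error.
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        # Concurrency point: these two calls do not depend on each other.
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol) +
                recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

value = adaptive_simpson(math.sin, 0.0, math.pi)  # exact result is 2
```

Each call to `recurse` needs only its own endpoint values and tolerance budget, so the recursion tree can be evaluated in any order - which is exactly the concurrency a process pool can exploit.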
My solution uses either coroutines or threads to handle algorithm-level concurrency (like subdividing the interval, here) and concurrent.futures.ProcessPoolExecutor to handle function evaluation. There are surely other approaches as well, but I do think we need to think about how to express concurrency in a way that makes parallelization easy. Making something as robust as QUADPACK is probably going to be hard. But the basic algorithm is not too complicated - unfortunately scipy only has Gaussian quadrature coefficients, not Clenshaw-Curtis - so for a basic adaptive implementation it would not be too difficult to write one in python in a way that makes the concurrency explicit. Anne -------------- next part -------------- An HTML attachment was scrubbed... URL: From archibald at astron.nl Sun Sep 27 20:35:31 2015 From: archibald at astron.nl (Anne Archibald) Date: Mon, 28 Sep 2015 00:35:31 +0000 Subject: [SciPy-Dev] vectorized scipy.integrate.quad In-Reply-To: References: <87bncrg3uf.fsf@grothesque.org> Message-ID: On Mon, Sep 28, 2015 at 1:05 AM Anne Archibald wrote: > On Thu, Sep 24, 2015 at 8:40 PM Christoph Groth > wrote: > >> We would like to speed-up some numerical integrations that we do by >> parallelizing them with MPI (our integrands take a very long time to >> evaluate). >> >> Looking at the implementation of QAGSE (that is >> performing the work for scipy.integrate.quad) I see that there is >> potential for 21-fold vectorization in the first pass and subsequently for >> 42-fold vectorization of the evaluation of the integrand. That?s quite >> good! >> > > Making something as robust as QUADPACK is probably going to be hard. But > the basic algorithm is not too complicated - unfortunately scipy only has > Gaussian quadrature coefficients, not Clenshaw-Curtis - so for a basic > adaptive implementation it would not be too difficult to write one in > python in a way that makes the concurrency explicit. 
>

As a concrete demonstration of what I mean, I attach an example of a very simple recursive Simpson's rule adaptive integrator that hands off all calculation to a process pool and that is capable of using a large number of cores (I got a 47-fold speedup using 64 "cores"). It unfortunately requires python 3.4 because I used coroutines.

Anne

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: async_integrate.py
Type: text/x-python
Size: 1745 bytes
Desc: not available
URL: 

From christoph at grothesque.org  Mon Sep 28 06:28:14 2015
From: christoph at grothesque.org (Christoph Groth)
Date: Mon, 28 Sep 2015 12:28:14 +0200
Subject: [SciPy-Dev] vectorized scipy.integrate.quad
References: <87bncrg3uf.fsf@grothesque.org>
Message-ID: <871tdihm81.fsf@grothesque.org>

Nathan Woods wrote:

> Unless I'm mistaken, though, integration of a python function seems
> unlikely to take advantage of a vectorized C integrator, so the speed
> ups would be limited to those calling quad with ctypes functions.

What I proposed is making it possible that the integrand gets called with a sequence of x-values instead of a single x-value at a time. It would then be up to the user-specified integrand function to perform the computation. I can think of at least three ways that can provide speed-ups:

• vectorized evaluation using vectorized numpy operations (it should be faster to call np.sin(x) once with x being an array of 21 elements than to call math.sin(x) 21 times).

• concurrent.futures

• MPI

Is there a problem that I fail to see?

> Also, I think breaking the API would be a bad idea, but you could, for
> instance, allow quad to accept both numbers and sequences, and then
> wrap the number in an array before sending it on to the
> integrator. This would preserve the API, while allowing the desired
> vectorization.

Of course breaking the API is a bad idea.
That's why I have in mind keeping scipy.integrate.quad as it is, and adding a sibling that could be called scipy.integrate.vectorized_quad. If the algorithm of vectorized_quad remains the same (as planned), quad could eventually be implemented as a thin wrapper around vectorized_quad. This would make it possible to remove the unmodified QUADPACK from scipy.

I am sorry, but I do not understand your proposal for extending the API in a backwards-compatible way without introducing a new function. Could you please explain it in more detail?

Christoph

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 810 bytes
Desc: not available
URL: 

From kasturi.surya at gmail.com  Mon Sep 28 12:22:36 2015
From: kasturi.surya at gmail.com (Surya)
Date: Mon, 28 Sep 2015 21:52:36 +0530
Subject: [SciPy-Dev] The new scipy central site is live
In-Reply-To: 
References: 
Message-ID: 

I noticed the error - mainly caused by corrupt search indexes. Everyone should now be able to submit freely.

Cheers
Surya

On Fri, Sep 25, 2015 at 7:11 PM, Nicolas P. Rougier <Nicolas.Rougier at inria.fr> wrote:
>
> Thanks for the nice website.
> I tried to submit a link but got a "A severe server error: ooops 500."
>
> Nicolas
>
>
> > On 25 Sep 2015, at 15:24, Surya wrote:
> >
> > Hello everyone,
> >
> > I'm excited to announce the new SciPy Central website is live at central.scipy.org.
> >
> > The URLs from the old domain scipy-central.org hosted by Kevin will be redirected to the latest.
> >
> > Major improvements on front-end
> > -------------------------------
> >
> > 1. New UI using Twitter Bootstrap 2.3
> > 2. Comments system for submissions
> > 3. Preview during submission is simplified
> > 4. RSS/Atom feeds for submissions, comments
> > 5. ACE Editor to show code snippets, and highlight syntax
> >
> > Possible future improvements
> > ----------------------------
> >
> > 1. OpenID integration to login.
> > I'm thinking of Google Identity Toolkit now.
> > 2. Up/down voting submissions. It's great if someone wants to improve this.
> > 3. Cookbooks integration + iPyNb support. Any ideas on how to integrate IPython Notebooks would be great. Possibly, we may want to rethink different submission types.
> >
> > Thanks to Clever Cloud for kindly providing hosting services to our open source project at no cost.
> >
> > Help + Contributions by [1]
> >
> > thanks,
> > Surya
> >
> > [1] In alphabetic order: Allen (Enthought), Julien (Clever Cloud), Kevin, Pauli, Ralf, Sebastian
> >
> > _______________________________________________
> > SciPy-Dev mailing list
> > SciPy-Dev at scipy.org
> > https://mail.scipy.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gael.varoquaux at normalesup.org Mon Sep 28 17:15:36 2015
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 28 Sep 2015 23:15:36 +0200
Subject: [SciPy-Dev] New version of "scipy lecture notes"
Message-ID: <20150928211536.GI2445658@phare.normalesup.org>

Dear Pythonistas,

We have just released a new version of the "scipy lecture notes":
http://www.scipy-lectures.org/

These are a consistent set of materials to learn the core aspects of the scientific Python ecosystem, from beginner to expert. They are written and maintained by a set of volunteers and published under a CC-BY license.

Highlights of the latest version include:

* a chapter giving an introduction to statistics in Python
* a new layout with emphasis on readability, including on small devices
* full doctesting for Python 2 and 3 compatibility

We hope that you will find these notes useful, for you, your colleagues, or your students.
Gaël

From charlesnwoods at gmail.com Mon Sep 28 20:22:13 2015
From: charlesnwoods at gmail.com (Nathan Woods)
Date: Mon, 28 Sep 2015 18:22:13 -0600
Subject: [SciPy-Dev] vectorized scipy.integrate.quad
In-Reply-To: <871tdihm81.fsf@grothesque.org>
References: <87bncrg3uf.fsf@grothesque.org> <871tdihm81.fsf@grothesque.org>
Message-ID: <36412B38-004A-4EB2-8190-085F38E1232D@gmail.com>

The API part is simple. Something like,

try:
    func([xes])
except TypeError:
    scalar_func = func  # bind under a separate name so the lambda does not call itself
    func = lambda xlist: [scalar_func(x) for x in xlist]

A much smarter way would use a class instead of a lambda, but hopefully this conveys the idea.

> On Sep 28, 2015, at 4:28 AM, Christoph Groth wrote:
>
> Nathan Woods wrote:
>
>> Unless I'm mistaken, though, integration of a python function seems
>> unlikely to take advantage of a vectorized C integrator, so the speed
>> ups would be limited to those calling quad with ctypes functions.
>
> What I proposed is making it possible for the integrand to be called
> with a sequence of x-values instead of a single x-value at a time. It
> would then be up to the user-specified integrand function to perform the
> computation. I can think of at least three ways that can provide speed-ups:
>
> • vectorized evaluation using vectorized numpy operations (it should be faster
> to call np.sin(x) once with x being an array of 21 elements than to
> call math.sin(x) 21 times).
>
> • concurrent.futures
>
> • MPI
>
> Is there a problem that I fail to see?
>
>> Also, I think breaking the API would be a bad idea, but you could, for
>> instance, allow quad to accept both numbers and sequences, and then
>> wrap the number in an array before sending it on to the
>> integrator. This would preserve the API, while allowing the desired
>> vectorization.
>
> Of course breaking the API is a bad idea. That's why I have in mind
> keeping scipy.integrate.quad as it is, and adding a sibling that could
> be called scipy.integrate.vectorized_quad.
> If the algorithm of vectorized_quad remains the same (as planned), quad
> could eventually be implemented as a thin wrapper around vectorized_quad.
> This would allow removing the unmodified QUADPACK from scipy.
>
> I am sorry, I do not understand your proposal for extending the API in a
> backwards-compatible way without introducing a new function. Could you
> please explain it in more detail?
>
> Christoph

From charlesnwoods at gmail.com Mon Sep 28 22:16:12 2015
From: charlesnwoods at gmail.com (Nathan Woods)
Date: Mon, 28 Sep 2015 20:16:12 -0600
Subject: [SciPy-Dev] vectorized scipy.integrate.quad
In-Reply-To: <871tdihm81.fsf@grothesque.org>
References: <87bncrg3uf.fsf@grothesque.org> <871tdihm81.fsf@grothesque.org>
Message-ID: <2AC99458-94D3-4BD0-B0EF-36696099810B@gmail.com>

Here's an idea of what the class might look like. The idea is, you try to use the function one way, and then you fall back to the other way if the first fails.

class VectorIntegrand:
    def __init__(self, func):
        self.func = func

    def __call__(self, numpy_array_of_x_values):
        out = np.empty(...)
        for ind, value in enumerate(numpy_array_of_x_values):
            out[ind] = self.func(value)
        return out

A caution, though. You'll need to make sure that the vectorized quad function is compatible with the recursive usage in nquad, as well as just the scalar quad. I don't see any problems right off, but it will have to be checked.

N

> On Sep 28, 2015, at 4:28 AM, Christoph Groth wrote:
>
> Nathan Woods wrote:
>
>> Unless I'm mistaken, though, integration of a python function seems
>> unlikely to take advantage of a vectorized C integrator, so the speed
>> ups would be limited to those calling quad with ctypes functions.
>
> What I proposed is making it possible for the integrand to be called
> with a sequence of x-values instead of a single x-value at a time.
> It would then be up to the user-specified integrand function to perform the
> computation. I can think of at least three ways that can provide speed-ups:
>
> • vectorized evaluation using vectorized numpy operations (it should be faster
> to call np.sin(x) once with x being an array of 21 elements than to
> call math.sin(x) 21 times).
>
> • concurrent.futures
>
> • MPI
>
> Is there a problem that I fail to see?
>
>> Also, I think breaking the API would be a bad idea, but you could, for
>> instance, allow quad to accept both numbers and sequences, and then
>> wrap the number in an array before sending it on to the
>> integrator. This would preserve the API, while allowing the desired
>> vectorization.
>
> Of course breaking the API is a bad idea. That's why I have in mind
> keeping scipy.integrate.quad as it is, and adding a sibling that could
> be called scipy.integrate.vectorized_quad. If the algorithm of
> vectorized_quad remains the same (as planned), quad could eventually be
> implemented as a thin wrapper around vectorized_quad. This would allow
> removing the unmodified QUADPACK from scipy.
>
> I am sorry, I do not understand your proposal for extending the API in a
> backwards-compatible way without introducing a new function. Could you
> please explain it in more detail?
>
> Christoph

From charlesr.harris at gmail.com Tue Sep 29 00:38:42 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 28 Sep 2015 22:38:42 -0600
Subject: [SciPy-Dev] Numpy 1.10.0rc2 released
Message-ID:

Hi all,

I'm pleased to announce the availability of Numpy 1.10.0rc2. Sources and 32 bit binary packages for Windows may be found at Sourceforge. There have been a few fixes since rc1. If there are no more problems I hope to release the final in a week or so.
Cheers,
Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cimrman3 at ntc.zcu.cz Wed Sep 30 03:32:26 2015
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Wed, 30 Sep 2015 09:32:26 +0200
Subject: [SciPy-Dev] Fwd: ANN: SfePy 2015.3
In-Reply-To: <560288BA.1040305@ntc.zcu.cz>
References: <560288BA.1040305@ntc.zcu.cz>
Message-ID: <560B900A.9080905@ntc.zcu.cz>

FYI: resending due to mailing list problems last week, apologies if you already got this.

-------- Forwarded Message --------

I am pleased to announce release 2015.3 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems of coupled partial differential equations by the finite element method or by isogeometric analysis (preliminary support). It is distributed under the new BSD license.

Home page: http://sfepy.org
Mailing list: http://groups.google.com/group/sfepy-devel
Git (source) repository, issue tracker, wiki: http://github.com/sfepy

Highlights of this release
--------------------------

- preliminary support for parallel computing
- unified evaluation of basis functions (= isogeometric analysis fields can be evaluated at arbitrary points)
- (mostly) fixed finding of reference element coordinates of physical points
- several new or improved examples

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical).
Best regards,
Robert Cimrman on behalf of the SfePy development team

---

Contributors to this release in alphabetical order:

Robert Cimrman
Vladimir Lukes

From pav at iki.fi Wed Sep 30 12:27:03 2015
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 30 Sep 2015 16:27:03 +0000 (UTC)
Subject: [SciPy-Dev] vectorized scipy.integrate.quad
References: <87bncrg3uf.fsf@grothesque.org>
Message-ID:

Christoph Groth <christoph at grothesque.org> writes:
[clip]
> On the Python level, I think that the interface to the new function
> could remain the same as that of the existing quad routine, with the
> only difference that the first argument of the integrand will be not a
> number, but a 1d-array of numbers.
>
> Would such a vectorized quad be a welcome addition to SciPy? I imagine
> proceeding as follows:

This sounds like a very reasonable addition to me. Some suggestions:

- The other possible API would be to add a vectorized=False keyword
  argument, controlling whether vectorized evaluations are done or not.
  I think I would prefer this option, as it would result in less
  duplication.

- I would somewhat prefer working with the original QUADPACK F77 code,
  rather than f2c'ing it and then modifying it, unless it is necessary
  to replace everything. As far as I can see, this might be possible to
  do by just replacing the DQK* routines.

- While at it, it would be useful to make it possible to pass a void*
  pointer to the callable function, so that we could drop the messy
  code dealing with enabling reentrancy.

--
Pauli Virtanen
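The vectorized=False keyword idea from the first suggestion can be sketched with a toy fixed-grid integrator. Everything here is illustrative, not scipy API: simpson_quad stands in for the proposed routine, and in the real thing the array evaluation would happen inside the QUADPACK DQK* rules rather than on one fixed grid.

```python
import math

import numpy as np

def simpson_quad(func, a, b, n=201, vectorized=False):
    """Toy composite-Simpson integrator illustrating the proposed API:
    the core always evaluates the integrand on a 1d array of points;
    scalar integrands are wrapped and fed one point at a time."""
    if not vectorized:
        scalar = func  # separate name so the wrapper doesn't call itself
        func = lambda xs: np.array([scalar(x) for x in xs])
    xs = np.linspace(a, b, n)  # n must be odd for Simpson's rule
    ys = func(xs)
    h = (b - a) / (n - 1)
    # Composite Simpson weights: 1, 4, 2, 4, ..., 2, 4, 1
    return h / 3.0 * (ys[0] + ys[-1]
                      + 4.0 * ys[1:-1:2].sum() + 2.0 * ys[2:-1:2].sum())

# Scalar integrand, evaluated point by point behind the scenes:
v1 = simpson_quad(lambda x: math.sin(x), 0.0, math.pi)
# Vectorized integrand, evaluated in a single call:
v2 = simpson_quad(np.sin, 0.0, np.pi, vectorized=True)
```

Both calls return the same quadrature value (≈ 2 for this integrand), which is the "less duplication" point: one core algorithm, with the scalar path reduced to a thin wrapping step.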