From matthias at matthias-k.org Mon Dec 2 11:40:02 2013 From: matthias at matthias-k.org (Matthias =?ISO-8859-1?Q?K=FCmmerer?=) Date: Mon, 02 Dec 2013 17:40:02 +0100 Subject: [SciPy-Dev] Callback for SLSQP minimization Message-ID: <8613320.Y9L2dd3tlR@klio> Hi, in scipy.optimize, there is at the moment no way to add a callback function when minimizing using the SLSQP method. I was wondering whether there is some deeper reason for that. If there are no reasons against it, I would like to implement a callback for this method. Guessing from the code, it should be quite easy. I thought the callback should be called after each function evaluation in _minimize_slsqp, that is, around [1]. By the way: is it better to discuss this kind of question on the mailing list, or should I open an issue on GitHub for such things? Best, Matthias [1] https://github.com/scipy/scipy/blob/v0.13.0/scipy/optimize/slsqp.py#L353 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part. URL: From denis at laxalde.org Mon Dec 2 14:55:03 2013 From: denis at laxalde.org (Denis Laxalde) Date: Mon, 2 Dec 2013 20:55:03 +0100 Subject: [SciPy-Dev] Callback for SLSQP minimization In-Reply-To: <8613320.Y9L2dd3tlR@klio> References: <8613320.Y9L2dd3tlR@klio> Message-ID: Hi, 2013/12/2 Matthias Kümmerer: > in scipy.optimize, there is at the moment no way to add a callback > function when minimizing using the SLSQP method. I was wondering whether there > is some deeper reason for that. If there are no reasons against it, I would > like to implement a callback for this method. Guessing from the code, it > should be quite easy. I thought the callback should be called after each > function evaluation in _minimize_slsqp, that is, around [1]. The deep reason is that nobody has tackled the problem so far, though it does not look too difficult. 
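For reference, the callback convention that the other minimize() methods follow (the callback is invoked with the current parameter vector once per iteration) can be sketched in plain Python. This toy gradient-descent loop is only an illustration of that convention, not scipy's SLSQP code:

```python
# Sketch of the minimize()-style callback convention: the callback
# receives the current parameter vector once per iteration.  Plain
# Python for illustration only -- not the scipy implementation.

def toy_minimize(fun, grad, x0, callback=None, lr=0.1, maxiter=200, tol=1e-10):
    """Minimal gradient-descent loop with a callback hook."""
    x = list(x0)
    for _ in range(maxiter):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
        if callback is not None:
            callback(x)          # called once per iteration with current x
        if sum(gi * gi for gi in g) < tol:
            break
    return x

# Collect the iterates of a 1-d quadratic minimization.
history = []
xmin = toy_minimize(lambda x: (x[0] - 3.0) ** 2,
                    lambda x: [2.0 * (x[0] - 3.0)],
                    [0.0], callback=history.append)
```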
Feel free to open an issue, or better a pull request, on github. Cheers, Denis From jjstickel at gmail.com Mon Dec 2 15:38:21 2013 From: jjstickel at gmail.com (Jonathan Stickel) Date: Mon, 02 Dec 2013 13:38:21 -0700 Subject: [SciPy-Dev] Savitzky-Golay and other smoothing filters In-Reply-To: References: Message-ID: <529CEFBD.9090309@gmail.com> On 11/27/13 11:00 , scipy-dev-request at scipy.org wrote: > It took a while, but eventually a Savitzky-Golay filter made it into > scipy:https://github.com/scipy/scipy/pull/470 Smoothing filters continue to be of interest to scipy users. I wrote a regularization-based smoothing method a few years ago: https://github.com/jjstickel/scikit-datasmooth/ https://pypi.python.org/pypi/scikits.datasmooth/0.61 and suggested then that several smoothing methods be co-located inside or outside scipy but did not get enough traction to make this happen. Any further interest in this now? It seems that a subbranch of scipy.signal would be logical (e.g. scipy.signal.smoothing). Or perhaps a single function interface to several smoothing methods (spline, S-G, wavelet, regularization, others). I have limited time, but I would be willing to work on a pull request or such with some coaching (it would be my first). Thanks, Jonathan From ralf.gommers at gmail.com Mon Dec 2 16:25:55 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 2 Dec 2013 22:25:55 +0100 Subject: [SciPy-Dev] 0.13.2 release plan Message-ID: Hi all, I plan to do a 0.13.2 release this week, due to a memory leak in griddata with the Cython version used to release 0.13.1: https://github.com/scipy/scipy/issues/2388. 0.13.x also contains a fix for another ndimage.label regression. Is there anything else that should go into 0.13.2? Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Tue Dec 3 05:02:32 2013 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 3 Dec 2013 10:02:32 +0000 (UTC) Subject: [SciPy-Dev] 0.13.2 release plan References: Message-ID: Ralf Gommers gmail.com> writes: > Is there anything else that should go into 0.13.2? https://github.com/scipy/scipy/issues/3108 -- Pauli Virtanen From djpine at gmail.com Tue Dec 3 05:19:06 2013 From: djpine at gmail.com (David J Pine) Date: Tue, 3 Dec 2013 11:19:06 +0100 Subject: [SciPy-Dev] adding linear fitting routine Message-ID: I would like to get some feedback and generate some discussion about a least squares fitting routine I submitted last Friday [please see adding linear fitting routine (29 Nov 2013)]. I know that everybody is very busy, but it would be helpful to get some feedback and, I hope, eventually to get this routine added to one of the basic numpy/scipy libraries. David Pine -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias at matthias-k.org Wed Dec 4 05:08:49 2013 From: matthias at matthias-k.org (Matthias =?ISO-8859-1?Q?K=FCmmerer?=) Date: Wed, 04 Dec 2013 11:08:49 +0100 Subject: [SciPy-Dev] Callback for SLSQP minimization In-Reply-To: References: <8613320.Y9L2dd3tlR@klio> Message-ID: <2675422.RM4oUaX9av@klio> Hi, On Monday, December 02, 2013 20:55:03 Denis Laxalde wrote: > > in scipy.optimize, there is at the moment no possibility to add a callback > > function when minimizing using the SLSQP method. I was wondering whether > > there is some deeper reason for that. If there are no reasons against > > that, I would like to implement a callback for this method. Guessing from > > the code, it should be quite easy. I thought, the callback should be > > called after each function evaluation in _minimize_slsqp, that is around > > [1]. > > The deep reason is that nobody tackled the problem so far ,though It > does not look too difficult. Feel free to open an issue, or better a > pull request, on github. 
I implemented the callback and opened a pull request at [1]. I hope everything is okay; this is my first contribution to scipy. [1] https://github.com/scipy/scipy/pull/3120 Best, Matthias -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part. URL: From thomas.haslwanter at fh-linz.at Wed Dec 4 05:52:09 2013 From: thomas.haslwanter at fh-linz.at (Thomas Haslwanter) Date: Wed, 4 Dec 2013 10:52:09 +0000 (UTC) Subject: [SciPy-Dev] adding linear fitting routine References: Message-ID: I would quite like a routine for weighted least squares in scipy. In fact, the lack of a module to calculate the confidence intervals for least-squares fits (for the mean values and for the predictions) has made me write my own routine (fit_line in thLib on PyPI). From thomas.haslwanter at fh-linz.at Wed Dec 4 05:59:26 2013 From: thomas.haslwanter at fh-linz.at (Thomas Haslwanter) Date: Wed, 4 Dec 2013 10:59:26 +0000 (UTC) Subject: [SciPy-Dev] Savitzky-Golay and other smoothing filters References: <529CEFBD.9090309@gmail.com> Message-ID: Hi Jonathan, adding your smoothing module to scipy would be good. As for the location, I am not sure where the Savitzky-Golay filter is located. The same module would be good for your contribution; alternatively, it could perhaps be placed in filter. I am not sure if a "smoothing" module is justified for two or so functions. From thomas.haslwanter at fh-linz.at Wed Dec 4 06:03:03 2013 From: thomas.haslwanter at fh-linz.at (Thomas Haslwanter) Date: Wed, 4 Dec 2013 11:03:03 +0000 (UTC) Subject: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter) References: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at> Message-ID: Hey Warren, Great that you got it done! 
I am not yet that familiar with finding my way around the modules: where in scipy is your savgol-filter now located? From warren.weckesser at gmail.com Wed Dec 4 06:24:32 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 4 Dec 2013 06:24:32 -0500 Subject: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter) In-Reply-To: References: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at> Message-ID: On 12/4/13, Thomas Haslwanter wrote: > Hey Warren, > Great that you got it done! > I am not yet that familiar with finding my way around the modules: where in > scipy is your savgol-filter now located? The functions are in the `signal` module, `scipy.signal.savgol_coeffs` and `scipy.signal.savgol_filter`. The code is here: https://github.com/scipy/scipy/blob/master/scipy/signal/_savitzky_golay.py The pull request was merged after the release of 0.13, so these will be in the 0.14 release of scipy. Warren > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From davidmenhur at gmail.com Wed Dec 4 08:20:14 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Wed, 4 Dec 2013 14:20:14 +0100 Subject: [SciPy-Dev] adding linear fitting routine In-Reply-To: References: Message-ID: On 3 December 2013 11:19, David J Pine wrote: > I would like to get some feedback and generate some discussion about a > least squares fitting routine I submitted last Friday > On the wishlist level, I would like to see a complete model fitting, considering errors in both axis and correlation, and the option for a robust fitting system. See details, for example, here: http://arxiv.org/abs/1008.4686 I haven't really needed it myself, so I haven't taken the time to implement it yet. /David. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jjstickel at gmail.com Wed Dec 4 12:01:43 2013 From: jjstickel at gmail.com (Jonathan Stickel) Date: Wed, 04 Dec 2013 10:01:43 -0700 Subject: [SciPy-Dev] Savitzky-Golay and other smoothing filters In-Reply-To: References: <529CEFBD.9090309@gmail.com> Message-ID: <529F5FF7.6000806@gmail.com> Thomas Thank you for the email. It looks like the new savitzky-golay functions went into scipy.signal. Already in scipy.signal are other types of smoothing capabilities, including splines, Fourier, and Wavelet, but these are not grouped together nor are all the user interfaces obvious for smoothing use. I have come across other smoothing methods and discussions of methods outside of scipy (but written with python/scipy), including "lowess" in statsmodels: http://statsmodels.sourceforge.net/devel/generated/statsmodels.nonparametric.smoothers_lowess.lowess.html a discussion of a faster lowess method here: http://slendrmeans.wordpress.com/2012/05/14/how-do-you-speed-up-40000-weighted-least-squares-calculations-skip-36000-of-them/ my regularization method: https://github.com/jjstickel/scikit-datasmooth/ and discussion of smoothing by regularization in N-dimensions: http://pav.iki.fi/blog/2010-09-19/nd-smoothing.html I think it would be great to group all of these together in scipy. To do this will take a champion, though, and I personally can't afford the time. Without one, perhaps I can work to put my regularization-smoothing method into scipy.signal alongside savitzky-golay. Regards, Jonathan On 12/4/13 03:59 , Thomas Haslwanter wrote: > Hi Jonathan, > adding your smoothing module to scipy would be good. As for the location, I > am not sure where the savitzky-golay filter is located. The same module > would be good for your contribution, alternatively it could perhaps be > placed in filter. I am not sure if a "smoothing" module is justified for two > or so functions. 
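The regularization approach mentioned above amounts to solving a penalized least-squares problem, min ||y - yhat||^2 + lam * ||D2 yhat||^2. A minimal numpy sketch with a second-difference roughness penalty (the smoothing parameter and the penalty order here are illustrative choices, not scikit-datasmooth's interface):

```python
import numpy as np

def smooth_regularized(y, lam=1.0):
    """Smooth equally spaced data by Tikhonov regularization:
    minimize ||y - yhat||^2 + lam * ||D2 @ yhat||^2, where D2 is the
    second-difference operator.  Solves the normal equations directly."""
    n = len(y)
    # Second-difference matrix, shape (n-2, n): rows are [1, -2, 1].
    D2 = np.diff(np.eye(n), n=2, axis=0)
    A = np.eye(n) + lam * (D2.T @ D2)
    return np.linalg.solve(A, np.asarray(y, dtype=float))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)
yhat = smooth_regularized(y, lam=5.0)
```

By construction the minimizer always has a smaller roughness ||D2 yhat||^2 than the raw data, which makes a handy sanity check.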
> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Wed Dec 4 12:13:22 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 4 Dec 2013 12:13:22 -0500 Subject: [SciPy-Dev] Savitzky-Golay and other smoothing filters In-Reply-To: <529F5FF7.6000806@gmail.com> References: <529CEFBD.9090309@gmail.com> <529F5FF7.6000806@gmail.com> Message-ID: On Wed, Dec 4, 2013 at 12:01 PM, Jonathan Stickel wrote: > Thomas > > Thank you for the email. It looks like the new savitzky-golay functions > went into scipy.signal. Already in scipy.signal are other types of > smoothing capabilities, including splines, Fourier, and Wavelet, but > these are not grouped together nor are all the user interfaces obvious > for smoothing use. As far as I remember: the difference between the functions in scipy.signal and others like the splines in scipy.interpolate is that scipy.signal assumes, for most functions, an equally spaced grid, for example given by discrete times. In this case the calculations are much faster than for smoothers that handle arbitrary x values. Similarly, ndimage works for the most part on regular n-d grids. Most other functions that can be used as general smoothers are currently still in scipy.interpolate. (I don't know whether the wavelets in scipy assume an equally spaced grid or not.) 
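As a concrete instance of that equal-spacing assumption: the classical 5-point quadratic Savitzky-Golay kernel is a fixed convolution that is only valid on a uniform grid. A plain-Python sketch using the textbook coefficients (not scipy's savgol_filter implementation):

```python
# Classical 5-point quadratic Savitzky-Golay smoothing.  The fixed
# weights (-3, 12, 17, 12, -3)/35 are only valid because the samples
# are assumed equally spaced -- the point made above.

def savgol5(y):
    w = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(y)                      # the two endpoints on each side
    for i in range(2, len(y) - 2):     # are left unsmoothed here
        out[i] = sum(wj * y[i - 2 + j] for j, wj in enumerate(w)) / 35.0
    return out

# A defining property of Savitzky-Golay: a polynomial of degree <= 2
# passes through the filter unchanged (up to rounding).
quad = [0.5 * t * t - t + 2.0 for t in range(9)]
smoothed = savgol5(quad)
```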
Josef > > I have come across other smoothing methods and discussions of methods > outside of scipy (but written with python/scipy), including "lowess" in > statsmodels: > > http://statsmodels.sourceforge.net/devel/generated/statsmodels.nonparametric.smoothers_lowess.lowess.html > > a discussion of a faster lowess method here: > > http://slendrmeans.wordpress.com/2012/05/14/how-do-you-speed-up-40000-weighted-least-squares-calculations-skip-36000-of-them/ > > my regularization method: > > https://github.com/jjstickel/scikit-datasmooth/ > > and discussion of smoothing by regularization in N-dimensions: > > http://pav.iki.fi/blog/2010-09-19/nd-smoothing.html > > I think it would be great to group all of these together in scipy. To do > this will take a champion, though, and I personally can't afford the > time. Without one, perhaps I can work to put my regularization-smoothing > method into scipy.signal alongside savitzky-golay. > > Regards, > Jonathan > > > > On 12/4/13 03:59 , Thomas Haslwanter wrote: >> Hi Jonathan, >> adding your smoothing module to scipy would be good. As for the location, I >> am not sure where the savitzky-golay filter is located. The same module >> would be good for your contribution, alternatively it could perhaps be >> placed in filter. I am not sure if a "smoothing" module is justified for two >> or so functions. 
>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From pav at iki.fi Wed Dec 4 14:20:20 2013 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 04 Dec 2013 21:20:20 +0200 Subject: [SciPy-Dev] adding linear fitting routine In-Reply-To: References: Message-ID: 04.12.2013 15:20, Da?id kirjoitti: [clip] > On the wishlist level, I would like to see a complete model > fitting, considering errors in both axis and correlation, and the > option for a robust fitting system. See details, for example, > here: > > http://arxiv.org/abs/1008.4686 > > I haven't really needed it myself, so I haven't taken the time to > implement it yet. When discussing fitting, it's good to keep also scipy.odr in mind. -- Pauli Virtanen From djpine at gmail.com Wed Dec 4 14:50:34 2013 From: djpine at gmail.com (David J Pine) Date: Wed, 4 Dec 2013 20:50:34 +0100 Subject: [SciPy-Dev] adding linear fitting routine In-Reply-To: References: Message-ID: Matt & Josef, Actually, there is something nice about returning a dictionary, since the output is self-labeling in that case. I could be convinced. David On Wed, Dec 4, 2013 at 8:20 PM, Pauli Virtanen wrote: > 04.12.2013 15:20, Da?id kirjoitti: > [clip] > > On the wishlist level, I would like to see a complete model > > fitting, considering errors in both axis and correlation, and the > > option for a robust fitting system. See details, for example, > > here: > > > > http://arxiv.org/abs/1008.4686 > > > > I haven't really needed it myself, so I haven't taken the time to > > implement it yet. > > When discussing fitting, it's good to keep also scipy.odr in mind. 
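For reference, the core of the weighted straight-line fit being discussed in this thread reduces to the textbook closed-form sums. A plain-Python sketch of those formulas (this is not the actual linfit submission, just the standard weighted least-squares algebra for y = a + b*x):

```python
import math

def wlinfit(x, y, sigma):
    """Weighted least-squares fit of y = a + b*x with weights 1/sigma^2.
    Returns (a, b, sigma_a, sigma_b) using the textbook closed form."""
    w = [1.0 / s ** 2 for s in sigma]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / delta   # intercept
    b = (S * Sxy - Sx * Sy) / delta     # slope
    return a, b, math.sqrt(Sxx / delta), math.sqrt(S / delta)

# Exact data on y = 1 + 2x is recovered exactly, whatever the weights.
a, b, sa, sb = wlinfit([0, 1, 2, 3], [1, 3, 5, 7], [0.1, 0.2, 0.1, 0.3])
```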
> > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Dec 4 15:06:36 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 4 Dec 2013 15:06:36 -0500 Subject: [SciPy-Dev] adding linear fitting routine In-Reply-To: References: Message-ID: On Wed, Dec 4, 2013 at 2:50 PM, David J Pine wrote: > Matt & Josef, > > Actually, there is something nice about returning a dictionary, since the > output is self-labeling in that case. I could be convinced. In this case there is still the choice of always returning everything in a dictionary, or to return minimal only, and the full result as dictionary as option `return_all`. stats.linregress can only delegate if all results can be obtained or easily calculated. In this case if some results are missing, then it's more efficient to just to repeat start from scratch instead of almost calculating the same thing again. Josef > > David > > > On Wed, Dec 4, 2013 at 8:20 PM, Pauli Virtanen wrote: >> >> 04.12.2013 15:20, Da?id kirjoitti: >> [clip] >> > On the wishlist level, I would like to see a complete model >> > fitting, considering errors in both axis and correlation, and the >> > option for a robust fitting system. See details, for example, >> > here: >> > >> > http://arxiv.org/abs/1008.4686 >> > >> > I haven't really needed it myself, so I haven't taken the time to >> > implement it yet. >> >> When discussing fitting, it's good to keep also scipy.odr in mind. 
>> >> -- >> Pauli Virtanen >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From daniele at grinta.net Wed Dec 4 15:59:44 2013 From: daniele at grinta.net (Daniele Nicolodi) Date: Wed, 04 Dec 2013 21:59:44 +0100 Subject: [SciPy-Dev] adding linear fitting routine In-Reply-To: References: Message-ID: <529F97C0.80807@grinta.net> On 04/12/2013 20:50, David J Pine wrote: > Matt & Josef, > > Actually, there is something nice about returning a dictionary, since > the output is self-labeling in that case. I could be convinced. A named tuple would be much preferable over a dictionary IMHO. It is a much leaner structure and it still support unpacking if someone is inclined to do that. The Python standard library returns named tuples in a few places as well. Cheers, Daniele From josef.pktd at gmail.com Wed Dec 4 19:32:43 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 4 Dec 2013 19:32:43 -0500 Subject: [SciPy-Dev] search in docs Message-ID: Is it intentional that the documentation doesn't have a search field ? http://docs.scipy.org/doc/scipy/reference/index.html although, Google seems to have indexed it very well, at least searching by function names. Josef From newville at cars.uchicago.edu Thu Dec 5 22:35:13 2013 From: newville at cars.uchicago.edu (Matt Newville) Date: Thu, 5 Dec 2013 21:35:13 -0600 Subject: [SciPy-Dev] SciPy-Dev Digest, Vol 122, Issue 4 In-Reply-To: References: Message-ID: Hi Daniele, > A named tuple would be much preferable over a dictionary IMHO. It is a > much leaner structure and it still support unpacking if someone is > inclined to do that. The Python standard library returns named tuples > in a few places as well. 
> > Cheers, > Daniele > Perhaps you could enlighten me on this? I don't claim to know one way or the other whether a named tuple is "a much leaner structure" than a dict or a Bunch or an instance of an empty class. Is this an important consideration for how to structure a collection of return values of high-level scipy functions? A named tuple can be accessed either by index or by field name. Any future changes to such a return argument *must* be made by appending to the existing values, and would probably cause a lot of grief anyway (for those relying on there being exactly 8 return values, say, it would break code). Since none of the other approaches even have an order, it would seem that the point of advocating for a named tuple would be that order is important. If that were the case, I would think everyone you asked would immediately come up with the same answers to these questions: a) should the set of return values be ordered? b) what is the right order? c) are you confident this order is not only right but will always be right? If you were writing divmod(), I think you could get such answers (and probably just use a tuple). I would have a harder time answering these for minimize(), curve_fit(), or linfit(). The discussion here about linfit() was not so much working out the right order for outputs, but about what the right outputs were. David expressed the idea of just sticking with two outputs at least partly (I think) because those two (best_values, covariance) had an obvious order and everything else could be derived from them, and it was hard to decide what a right order for any possibly useful derived values might be. To me, that sort of suggests that an ordered output is not the best choice. I'm happy to be shown why named tuples would be the way to go, but I'm not seeing it right now. I hope you can help me out. 
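For concreteness, the two return styles under discussion can be sketched side by side (the names `FitResult`, `params`, and `covariance` are purely illustrative, not a proposed API):

```python
from collections import namedtuple

# Named-tuple style: ordered, unpackable, with attribute access; the
# set of fields is frozen once defined, so extending it later changes
# the tuple's length and can break positional unpacking.
FitResult = namedtuple('FitResult', ['params', 'covariance'])

def fit_nt():
    return FitResult(params=(1.0, 2.0), covariance=((0.1, 0.0), (0.0, 0.2)))

# Dict style: self-labeling and order-free; new keys can be added later
# without breaking callers that look things up by name.
def fit_dict():
    return {'params': (1.0, 2.0), 'covariance': ((0.1, 0.0), (0.0, 0.2))}

res = fit_nt()
p, cov = res              # positional unpacking still works...
q = res.params            # ...and so does access by field name
d = fit_dict()
d['chi2'] = 0.0           # a dict can grow without breaking existing code
```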
Cheers, --Matt Newville From pav at iki.fi Fri Dec 6 06:58:19 2013 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 06 Dec 2013 13:58:19 +0200 Subject: [SciPy-Dev] adding linear fitting routine In-Reply-To: <529F97C0.80807@grinta.net> References: <529F97C0.80807@grinta.net> Message-ID: 04.12.2013 22:59, Daniele Nicolodi kirjoitti: > On 04/12/2013 20:50, David J Pine wrote: >> Matt & Josef, >> >> Actually, there is something nice about returning a dictionary, >> since the output is self-labeling in that case. I could be >> convinced. > > A named tuple would be much preferable over a dictionary IMHO. It > is a much leaner structure and it still support unpacking if > someone is inclined to do that. The Python standard library > returns named tuples in a few places as well. I'd just recommend a custom class for the return value. If you want some additional leanness, define `__slots__`. This probably has most of the advantages of a named tuple, but without the disadvantage of being ordered like a tuple. It also avoids using a dictionary for data storage. -- Pauli Virtanen From suryak at ieee.org Fri Dec 6 08:29:38 2013 From: suryak at ieee.org (Surya Kasturi) Date: Fri, 6 Dec 2013 18:59:38 +0530 Subject: [SciPy-Dev] search in docs In-Reply-To: References: Message-ID: If we are not expecting complex situations, I think we could try to implement it with JS, without any server; I guess the docs are static files. On Thu, Dec 5, 2013 at 6:02 AM, wrote: > Is it intentional that the documentation doesn't have a search field ? > > http://docs.scipy.org/doc/scipy/reference/index.html > > although, Google seems to have indexed it very well, at least > searching by function names. > > Josef > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Fri Dec 6 08:48:09 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 6 Dec 2013 08:48:09 -0500 Subject: [SciPy-Dev] search in docs In-Reply-To: References: Message-ID: On Fri, Dec 6, 2013 at 8:29 AM, Surya Kasturi wrote: > If we are not expecting complex situations.. I think we try to implement > with JS without any server; > I guess docs docs are static files.. For the docs: Is there a problem with the sphinx built in search field ? except that it's usually slow > > > On Thu, Dec 5, 2013 at 6:02 AM, wrote: >> >> Is it intentional that the documentation doesn't have a search field ? >> >> http://docs.scipy.org/doc/scipy/reference/index.html >> >> although, Google seems to have indexed it very well, at least >> searching by function names. someone reported the same for numpy on the numpy mailing list. Josef >> >> Josef >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From ralf.gommers at gmail.com Sun Dec 8 05:06:55 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 8 Dec 2013 11:06:55 +0100 Subject: [SciPy-Dev] ANN: Scipy 0.13.2 release Message-ID: Hi, I'm happy to announce the availability of the scipy 0.13.2 release. This is a bugfix only release; it contains fixes for ndimage and optimize, and most importantly was compiled with Cython 0.19.2 to fix memory leaks in code using Cython fused types. Source tarballs, binaries and release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.13.2/ Cheers, Ralf ========================== SciPy 0.13.2 Release Notes ========================== SciPy 0.13.2 is a bug-fix release with no new features compared to 0.13.1. 
Issues fixed ------------ - 3096: require Cython 0.19, earlier versions have memory leaks in fused types - 3079: ``ndimage.label`` fix swapped 64-bitness test - 3108: ``optimize.fmin_slsqp`` constraint violation -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at stevesimmons.com Thu Dec 12 19:17:12 2013 From: mail at stevesimmons.com (Stephen Simmons) Date: Fri, 13 Dec 2013 00:17:12 +0000 Subject: [SciPy-Dev] Compiling scipy 0.13.2 with Python 3.4 rc 1 Message-ID: <52AA5208.9030403@stevesimmons.com> Hi all, In case anyone wants to try scipy with the new Python 3.4, there is a problem with importlib in rc1 that results in import errors while building scipy. Basically f2py can't do its imports properly and some autogenerated code comes out with missing symbols. It can be hard to see this amidst the stream of build messages and warnings. They show up clearly though when starting python3.4 and running scipy.test(). First time through I got 174 failing tests. If you don't want to wait for Python 3.4 rc2, you can download the fixed version of importlib.__init__.py from here (updated on 12 Dec): http://hg.python.org/cpython/file/c1a7ba57b4ff/Lib/importlib/__init__.py After doing this, I removed numpy and scipy (sudo pip3.4 uninstall numpy etc), recompiled numpy and scipy, and got no test failures (using Linux Mint 14). Regards Stephen From cournape at gmail.com Fri Dec 13 05:02:05 2013 From: cournape at gmail.com (David Cournapeau) Date: Fri, 13 Dec 2013 10:02:05 +0000 Subject: [SciPy-Dev] ANN: Scipy 0.13.2 release In-Reply-To: References: Message-ID: Hi Ralf, Thanks a lot for the quick fix release. I can confirm it builds and tests correctly on windows, rhel5 and osx (both 32 and 64 bits). cheers, David On Sun, Dec 8, 2013 at 10:06 AM, Ralf Gommers wrote: > Hi, > > I'm happy to announce the availability of the scipy 0.13.2 release. 
This > is a bugfix only release; it contains fixes for ndimage and optimize, and > most importantly was compiled with Cython 0.19.2 to fix memory leaks in > code using Cython fused types. > > Source tarballs, binaries and release notes can be found at > http://sourceforge.net/projects/scipy/files/scipy/0.13.2/ > > Cheers, > Ralf > > > ========================== > SciPy 0.13.2 Release Notes > ========================== > > SciPy 0.13.2 is a bug-fix release with no new features compared to 0.13.1. > > > Issues fixed > ------------ > > - 3096: require Cython 0.19, earlier versions have memory leaks in fused > types > - 3079: ``ndimage.label`` fix swapped 64-bitness test > - 3108: ``optimize.fmin_slsqp`` constraint violation > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From suryak at ieee.org Fri Dec 13 05:58:56 2013 From: suryak at ieee.org (Surya Kasturi) Date: Fri, 13 Dec 2013 16:28:56 +0530 Subject: [SciPy-Dev] A command line interface for submissions in scipy central Message-ID: The idea is to build an interface that lets you submit snippets to SciPy Central from the command line. For example, it could look like: $ spc submit ./code/filter.py -- snippet It's a new idea, so your opinions and suggestions are all welcome. If there are enough +1s, I will proceed with building it. 
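The proposed interface could be prototyped with argparse from the standard library. Everything below (the `spc` program name, the `submit` subcommand, the `--snippet` flag spelling) is the proposal above plus illustrative guesses, not an existing tool:

```python
import argparse

def build_parser():
    """Prototype of the proposed `spc` command-line front end."""
    parser = argparse.ArgumentParser(
        prog='spc', description='Submit code to SciPy Central')
    sub = parser.add_subparsers(dest='command', required=True)
    submit = sub.add_parser('submit', help='submit a file')
    submit.add_argument('path', help='file to upload')
    submit.add_argument('--snippet', action='store_true',
                        help='submit as a snippet (vs. a package or link)')
    return parser

# Parse the example invocation from the proposal (as a --snippet flag).
args = build_parser().parse_args(['submit', './code/filter.py', '--snippet'])
```

The actual upload step (authentication, HTTP POST to the SciPy Central server) would hang off the parsed arguments.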
Thanks Surya From ralf.gommers at gmail.com Fri Dec 13 17:38:27 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 13 Dec 2013 23:38:27 +0100 Subject: [SciPy-Dev] Compiling scipy 0.13.2 with Python 3.4 rc 1 In-Reply-To: <52AA5208.9030403@stevesimmons.com> References: <52AA5208.9030403@stevesimmons.com> Message-ID: On Fri, Dec 13, 2013 at 1:17 AM, Stephen Simmons wrote: > Hi all, > > In case anyone wants to try scipy with the new Python 3.4, there is a > problem with importlib in rc1 that results in import errors while > building scipy. Basically f2py can't do its imports properly and some > autogenerated code comes out with missing symbols. It can be hard to see > this amidst the stream of build messages and warnings. They show up > clearly though when starting python3.4 and running scipy.test(). First > time through I got 174 failing tests. > > If you don't want to wait for Python 3.4 rc2, you can download the fixed > version of importlib.__init__.py from here (updated on 12 Dec): > http://hg.python.org/cpython/file/c1a7ba57b4ff/Lib/importlib/__init__.py > > After doing this, I removed numpy and scipy (sudo pip3.4 uninstall numpy > etc), recompiled numpy and scipy, and got no test failures (using Linux > Mint 14). > Thanks for the report Stephen, quite helpful. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaime.frio at gmail.com Thu Dec 19 01:55:26 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Wed, 18 Dec 2013 22:55:26 -0800 Subject: [SciPy-Dev] ENH: add distance gufuncs to `scipy.spatial` Message-ID: Hi, I have just submitted a quite bulky pull request ( https://github.com/scipy/scipy/pull/3163), that deserves a lengthier explanation... It basically adds a new submodule, umath_distance, to scipy.spatial. This module reproduces some of the functionality in the distance submodule, using gufuncs, and thus adding all the broadcasting sweetness that comes with them. 
Some examples ============= >>> import numpy as np >>> import scipy.spatial.umath_distance as udist >>> a = np.random.rand(3, 10) >>> b = np.random.rand(4, 10) Compute the distance between corresponding vectors in two multidimensional arrays: >>> udist.euclidean(a, b[:3]) array([ 1.25080976, 1.00788142, 0.89158743]) Compute all distances between to sets of vectors, i.e. what distance.cdist does, using broadcasting... >>> udist.euclidean(a[:, np.newaxis, :], b) array([[ 1.25080976, 1.20893157, 1.63674343, 1.62432255], [ 1.36665537, 1.00788142, 1.10137183, 1.29196888], [ 1.24924713, 1.27379914, 0.89158743, 1.317478 ]]) ...or a convenience argument in the Python wrapper: >>> udist.euclidean(a, b, cdist=True) array([[ 1.25080976, 1.20893157, 1.63674343, 1.62432255], [ 1.36665537, 1.00788142, 1.10137183, 1.29196888], [ 1.24924713, 1.27379914, 0.89158743, 1.317478 ]]) If a single input is given, compute all pairwise distances, i.e. what distance.pdist does: >>> udist.euclidean(a) array([ 1.59655565, 1.73136646, 0.99252024]) And check the result is correct with a cdist-style call: >>> udist.euclidean(a, a, cdist=True) array([[ 0. , 1.59655565, 1.73136646], [ 1.59655565, 0. , 0.99252024], [ 1.73136646, 0.99252024, 0. ]]) What's included =============== So far only the distance functions of only two inputs are included, which are all in scipy.spatial.distance except mahalanobis, minkowski, seuclidean and wminkowski. If you people like this gufunc approach, I will happily expand it to include those as well. The code so far is very modular, both at the C and Python level, and makes it relatively easy to add a new distance function from a C function doing a single distance calculation between two vectors. I haven't figured out all the details, but I think it would be possible to move the gufunc kernels to a header file, and make it easy(er) to add a new distance gufunc, perhaps even from Cython, in a similar way to how it can be done with PyUFunc_dd_d and the like. 
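The cdist-style calls above follow numpy's ordinary broadcasting rule; a pure-numpy reference implementation of the same pairwise Euclidean computation (handy for cross-checking the gufunc results, though it materialises the full (n, m, d) intermediate) looks like:

```python
import numpy as np

def euclidean_cdist(a, b):
    """All pairwise Euclidean distances via broadcasting: inserting a
    new axis makes (n, 1, d) against (m, d) broadcast to (n, m, d)."""
    diff = a[:, np.newaxis, :] - b[np.newaxis, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

a = np.random.rand(3, 10)
b = np.random.rand(4, 10)
dists = euclidean_cdist(a, b)   # shape (3, 4)
```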
I have also spent a fair amount of time checking the definitions of the implemented distances against original sources, and I think I have found some errors there, both in the definitions (Sokal-Michener is wrong) and in the way pathological edge cases are handled. The C code is heavily commented on the subject, starting around line 154.

Performance
===========

For the existing cdist and pdist functions, replacing inline functions with function pointers does take a toll on the performance of the gufunc approach, which gets worse the smaller the dimension of the vectors being compared, i.e. the less work each function call does:

>>> import timeit
>>> import scipy.spatial.distance as dist
>>> a = np.random.rand(1000, 3)
>>> b = np.random.rand(2000, 3)
>>> min(timeit.repeat("dist.cdist(a, b, metric='euclidean')", 'from __main__ import dist, a, b', number=1, repeat=100))
0.017471377426346635
>>> min(timeit.repeat("udist.euclidean(a, b, cdist=True)", 'from __main__ import udist, a, b', number=1, repeat=100))
0.023448978561347644
>>> a = np.random.rand(1000, 100)
>>> b = np.random.rand(2000, 100)
>>> min(timeit.repeat("dist.cdist(a, b, metric='euclidean')", 'from __main__ import dist, a, b', number=1, repeat=100))
0.20550328478645952
>>> min(timeit.repeat("udist.euclidean(a, b, cdist=True)", 'from __main__ import udist, a, b', number=1, repeat=100))
0.22972938023184497

On the other hand, the C implementations in distance of most of the boolean functions are grossly inefficient, so there are significant speed-ups there despite the extra overhead:

>>> a = np.random.rand(1000, 100) < 0.5
>>> b = np.random.rand(2000, 100) < 0.5
>>> min(timeit.repeat("dist.cdist(a, b, metric='sokalsneath')", 'from __main__ import dist, a, b', number=1, repeat=100))
1.43636283024901
>>> min(timeit.repeat("udist.sokalsneath(a, b, cdist=True)", 'from __main__ import udist, a, b', number=1, repeat=100))
0.3878355139542009

--
(\__/)
( O.o)
( > <)
Este es Conejo.
Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amicitas at gmail.com Mon Dec 23 02:33:47 2013 From: amicitas at gmail.com (mir amicitas) Date: Mon, 23 Dec 2013 07:33:47 +0000 Subject: [SciPy-Dev] Add a "Wrapping Code From Other Languages" section to topical-software Message-ID: Dear Scipy-dev, I would like to add a new section to the topical-software page for wrapping code in Matlab, IDL, etc. Some of these tools are currently listed in Miscellaneous, while they would naturally fall under "Running Code Written In Other Languages". As part of this I will add a new link to the package pyidlrpc (https://bitbucket.org/amicitas/pyidlrpc/) which can be used to wrap IDL code. My specific suggestion is to add a section under: "Running Code Written In Other Languages" With the title: "Wrapping/Executing Code in Other Languages" Under this heading I will add the MATLAB and IDL execution/wrapping libraries. Please reply with any comments. Otherwise I will make a pull request with the above changes. Sincerely, Novimir Pablant -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Dec 23 03:20:10 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 23 Dec 2013 09:20:10 +0100 Subject: [SciPy-Dev] Add a "Wrapping Code From Other Languages" section to topical-software In-Reply-To: References: Message-ID: On Mon, Dec 23, 2013 at 8:33 AM, mir amicitas wrote: > Dear Scipy-dev, > > I would like to add a new section to the topical-software page for > wrapping code in Matlab, IDL etc. Some of these tools are currently listed > in Miscellaneous, while they would naturally fall under "Running Code > Written In Other Languages". As part of this I will add a new link to the > package pyidlrpc (https://bitbucket.org/amicitas/pyidlrpc/) which can be > used to wrap IDL code.
> > My specific suggestion is to add a section under: > "Running Code Written In Other Languages" > > With the title: > "Wrapping/Executing Code in Other Languages" > > Under this heading I will add the MATLAB and IDL execution/wrapping > libraries. > Hi Novimir, that topic could indeed use some attention. Your proposed title is almost the same as http://scipy.org/topical-software.html#running-code-written-in-other-languages though. Would it make sense to add a separate subsection under it for Matlab/IDL/R, collecting your new link, some Matlab links that are in other places, and RPy, which is under "basic science"? > Please reply with any comments. Otherwise I will make a pull request with > the above changes. > A PR would be very welcome. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From awebster at falsecolour.com Tue Dec 31 07:43:28 2013 From: awebster at falsecolour.com (awebster at falsecolour.com) Date: Tue, 31 Dec 2013 12:44:28 +0001 Subject: [SciPy-Dev] conv2 and normxcorr2 Message-ID: <52c2bbf2.c57bcc0a.61c6.4ceb@mx.google.com> I noticed a couple of popular matlab functions - conv2 and normxcorr2 - were not present in the scipy.signal package. I would like to submit them for addition. Can anyone point me to instructions on how to write such a thing? Below are examples.

import numpy as np
from numpy import fft

# 2D convolution using the convolution's FFT property
def conv2(a, b):
    ma, na = a.shape
    mb, nb = b.shape
    # zero-pad both inputs to the full output size before transforming
    s = (ma + mb - 1, na + nb - 1)
    return fft.ifft2(fft.fft2(a, s) * fft.fft2(b, s)).real

# compute a normalized 2D cross correlation using convolutions
# this will give the same output as matlab, albeit in row-major order
def normxcorr2(b, a):
    c = conv2(a, np.flipud(np.fliplr(b)))
    a = conv2(a**2, np.ones(b.shape))
    b = np.sum(b.flatten()**2)
    c = c / np.sqrt(a * b)
    return c

-- Aaron Webster -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralf.gommers at gmail.com Tue Dec 31 08:42:59 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 31 Dec 2013 14:42:59 +0100 Subject: [SciPy-Dev] conv2 and normxcorr2 In-Reply-To: <52c2bbf2.c57bcc0a.61c6.4ceb@mx.google.com> References: <52c2bbf2.c57bcc0a.61c6.4ceb@mx.google.com> Message-ID: On Tue, Dec 31, 2013 at 1:43 PM, awebster at falsecolour.com < awebster at falsecolour.com> wrote: > I noticed a couple of popular matlab functions - conv2 and normxcorr2 were > not present in the scipy.signal packages. I would like to submit them for > addition. Can anyone point me on instructions on how to write such a > thing? Below are examples. > Hi Aaron, isn't conv2 the same as signal.convolve2d? And can what normxcorr2 does be done with signal.correlate2d? For instructions on how to contribute, see http://docs.scipy.org/doc/scipy-dev/reference/hacking.html Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From awebster at falsecolour.com Tue Dec 31 09:49:10 2013 From: awebster at falsecolour.com (Aaron Webster) Date: Tue, 31 Dec 2013 14:50:10 +0001 Subject: [SciPy-Dev] conv2 and normxcorr2 In-Reply-To: References: <52c2bbf2.c57bcc0a.61c6.4ceb@mx.google.com> < CABL7CQixhENvsvZMjPZDHc2Qh80caWREAfs7mnP2XHfnNCVPWg@mail.gmail.com> Message-ID: <52c2d968.0937cc0a.334f.5101@mx.google.com> On Tue, Dec 31, 2013 at 2:42 PM, Ralf Gommers wrote: > > > > On Tue, Dec 31, 2013 at 1:43 PM, awebster at falsecolour.com > wrote: >> I noticed a couple of popular matlab functions - conv2 and >> normxcorr2 were not present in the scipy.signal packages. I would >> like to submit them for addition. Can anyone point me on >> instructions on how to write such a thing? Below are examples. >> > > Hi Aaron, isn't conv2 the same as signal.convolve2d? And can what > normxcorr2 does be done with signal.correlate2d? 
> I did a quick test and it seems that you are correct: signal.convolve2d appears to generate basically the same output as conv2, and following normxcorr2 can be done with signal.correlate2d. However, I noticed while doing this that both signal.convolve2d and signal.correlate2d are *extremely* slow. For example, on my computer with a random 100x100 matrix signal.correlate2d takes 4.73 seconds while normxcorr2 takes 0.253 seconds. The results are similar for signal.convolve2d and conv2. As a practical matter, would it make the most sense to fix signal.correlate2d and signal.convolve2d, or implement new functions? -- Aaron Webster From luke.pfister at gmail.com Tue Dec 31 10:07:51 2013 From: luke.pfister at gmail.com (Luke Pfister) Date: Tue, 31 Dec 2013 09:07:51 -0600 Subject: [SciPy-Dev] conv2 and normxcorr2 In-Reply-To: <52c2d968.0937cc0a.334f.5101@mx.google.com> References: <52c2bbf2.c57bcc0a.61c6.4ceb@mx.google.com> <52c2d968.0937cc0a.334f.5101@mx.google.com> Message-ID: I *believe* that Matlab is calling either the Intel MKL or Intel IPP convolution routines, which is why they are so much faster. I ran into a situation where I needed to perform many, many small 2D convolutions, and wound up writing a Cython wrapper to call the IPP convolution. I seem to remember getting speedups of ~200x when convolving an 8x8 kernel with a 512x512 image. I'm not familiar with how the Scipy convolution functions are implemented under the hood. Do they use efficient algorithms for small convolution sizes (ie, overlap-add, overlap-save)? -- Luke On Tue, Dec 31, 2013 at 8:49 AM, Aaron Webster wrote: > On Tue, Dec 31, 2013 at 2:42 PM, Ralf Gommers > wrote: >> >> >> >> On Tue, Dec 31, 2013 at 1:43 PM, awebster at falsecolour.com >> wrote: >>> I noticed a couple of popular matlab functions - conv2 and >>> normxcorr2 were not present in the scipy.signal packages. I would >>> like to submit them for addition. Can anyone point me on >>> instructions on how to write such a thing?
Below are examples. >>> >> >> Hi Aaron, isn't conv2 the same as signal.convolve2d? And can what >> normxcorr2 does be done with signal.correlate2d? >> > I did a quick test and it seems that you are correct: signal.convolve2d > appears to generate basically the same output as conv2, and following > normxcorr2 can be done with signal.correlate2d. However, I noticed > while doing this that both signal.convolve2d and signal.correlate2d are > *extremely* slow. For example, on my computer with a random 100x100 > matrix signal.correlate2d takes 4.73 seconds while normxcorr2 take > 0.253 seconds. The results are similar for signal.convolve2d and conv2. > > As a practical matter, would it make most sense to fix > signal.correlate2d and signal.convolve2d, or implement new functions? > > -- > Aaron Webster > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From ralf.gommers at gmail.com Tue Dec 31 10:50:32 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 31 Dec 2013 16:50:32 +0100 Subject: [SciPy-Dev] conv2 and normxcorr2 In-Reply-To: References: <52c2bbf2.c57bcc0a.61c6.4ceb@mx.google.com> <52c2d968.0937cc0a.334f.5101@mx.google.com> Message-ID: On Tue, Dec 31, 2013 at 4:07 PM, Luke Pfister wrote: > I *believe* that Matlab is calling either the Intel MKL or Intel IPP > convolution routines, which is why they are so much faster. > > I ran into a situation where I needed to perform many, many small 2D > convolutions, and wound up writing a Cython wrapper to call the IPP > convolution. I seem to remember getting speedups of ~200x when > convolving an 8x8 kernel with a 512x512 image. > > I'm not familiar with how the Scipy convolution functions are > implemented under the hood. Do they use efficient algorithms for > small convolution sizes (ie, overlap-add, overlap-save)? 
> It looks like the implementation is very straightforward and could benefit from some optimization: Convolve2d: https://github.com/scipy/scipy/blob/master/scipy/signal/sigtoolsmodule.c#L1006 https://github.com/scipy/scipy/blob/master/scipy/signal/firfilter.c#L84 And correlate2d just calls convolve2d: https://github.com/scipy/scipy/blob/master/scipy/signal/signaltools.py#L503 > > -- > Luke > > On Tue, Dec 31, 2013 at 8:49 AM, Aaron Webster > wrote: > > On Tue, Dec 31, 2013 at 2:42 PM, Ralf Gommers > > wrote: > >> > >> > >> > >> On Tue, Dec 31, 2013 at 1:43 PM, awebster at falsecolour.com > >> wrote: > >>> I noticed a couple of popular matlab functions - conv2 and > >>> normxcorr2 were not present in the scipy.signal packages. I would > >>> like to submit them for addition. Can anyone point me on > >>> instructions on how to write such a thing? Below are examples. > >>> > >> > >> Hi Aaron, isn't conv2 the same as signal.convolve2d? And can what > >> normxcorr2 does be done with signal.correlate2d? > >> > > I did a quick test and it seems that you are correct: signal.convolve2d > > appears to generate basically the same output as conv2, and following > > normxcorr2 can be done with signal.correlate2d. However, I noticed > > while doing this that both signal.convolve2d and signal.correlate2d are > > *extremely* slow. For example, on my computer with a random 100x100 > > matrix signal.correlate2d takes 4.73 seconds while normxcorr2 take > > 0.253 seconds. The results are similar for signal.convolve2d and conv2. > > > > As a practical matter, would it make most sense to fix > > signal.correlate2d and signal.convolve2d, or implement new functions? > Speeding up the existing function would be preferable. firfilter.c already contains a suggestion on how to do that. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jtaylor.debian at googlemail.com Tue Dec 31 11:57:18 2013 From: jtaylor.debian at googlemail.com (Julian Taylor) Date: Tue, 31 Dec 2013 17:57:18 +0100 Subject: [SciPy-Dev] ANN: NumPy 1.7.2 release Message-ID: <52C2F76E.9010509@googlemail.com> Hello, I'm happy to announce the release of NumPy 1.7.2. This is a bugfix-only release supporting Python 2.4 - 2.7 and 3.1 - 3.3. More than 42 issues were fixed; the most important ones are listed in the release notes: https://github.com/numpy/numpy/blob/v1.7.2/doc/release/1.7.2-notes.rst Compared to the last release candidate, four additional minor issues have been fixed and compatibility with Python 3.4b1 improved. Source tarballs, installers and release notes can be found at https://sourceforge.net/projects/numpy/files/NumPy/1.7.2 Cheers, Julian Taylor