From cell at michaelclerx.com Wed Oct 1 05:01:42 2014 From: cell at michaelclerx.com (Michael Clerx) Date: Wed, 01 Oct 2014 11:01:42 +0200 Subject: [SciPy-Dev] Numpy's handling of conditions In-Reply-To: <53E00D36.3080500@googlemail.com> References: <53E003B7.3090100@googlemail.com> <53E00562.5070602@googlemail.com> <53E00D36.3080500@googlemail.com> Message-ID: <542BC2F6.2000100@michaelclerx.com> Hi Everyone, I'm a bit confused about the way numpy treats conditions and I'm hoping you can help. Starting from this array: x = np.array(x) >>> x array([1, 2, 3, 2, 1]) >>> x == 1 array([ True, False, False, False, True], dtype=bool) works as expected, but the following is horribly wrong: >>> x == 1 + x == 3 Traceback (most recent call last): File "", line 1, in ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() I can live with that, but why do brackets solve this? >>> (x == 1) + (x == 3) array([ True, False, True, False, True], dtype=bool) Which is the same result as >>> (x == 1) | (x == 3) array([ True, False, True, False, True], dtype=bool) It gets a bit weirder: >>> (x == 1) + (x < 3) array([ True, True, False, True, True], dtype=bool) but >>> True + True 2 Is this the way it's supposed to work? Since it's quite different from python's scalar types, perhaps the documentation should mention it more clearly (for example, why isn't "or" implemented as a piecewise or). cheers, Michael From robert.kern at gmail.com Wed Oct 1 05:23:12 2014 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 1 Oct 2014 10:23:12 +0100 Subject: [SciPy-Dev] Numpy's handling of conditions In-Reply-To: <542BC2F6.2000100@michaelclerx.com> References: <53E003B7.3090100@googlemail.com> <53E00562.5070602@googlemail.com> <53E00D36.3080500@googlemail.com> <542BC2F6.2000100@michaelclerx.com> Message-ID: On Wed, Oct 1, 2014 at 10:01 AM, Michael Clerx wrote: > Hi Everyone, > > I'm a bit confused about the way numpy treats conditions and I'm hoping > you can help. > > Starting from this array: > > x = np.array(x) > >>> x > array([1, 2, 3, 2, 1]) > >>> x == 1 > array([ True, False, False, False, True], dtype=bool) > > works as expected, but the following is horribly wrong: > > >>> x == 1 + x == 3 > Traceback (most recent call last): > File "", line 1, in > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() Python parses this expression as the following, due to operator precedence: >>> x == (1 + x) == 3 The repeated == == has a special meaning in Python. It gets translated to this: >>> (x == (1+x)) and (x == 3) The `and` is the problem here because it tries to evaluate each part as a bool, which is forbidden for arrays (hence the ValueError). > I can live with that, but why do brackets solve this? > > >>> (x == 1) + (x == 3) > array([ True, False, True, False, True], dtype=bool) > > Which is the same result as > > >>> (x == 1) | (x == 3) > array([ True, False, True, False, True], dtype=bool) > > It gets a bit weirder: > > >>> (x == 1) + (x < 3) > array([ True, True, False, True, True], dtype=bool) > > but > > >>> True + True > 2 > > Is this the way it's supposed to work? Since it's quite different from > python's scalar types, perhaps the documentation should mention it more > clearly Yes. For most of the binary ufuncs, especially the ones that implement the standard arithmetic operators +-*/, we try to make sure that if both arguments are the same dtype, the output is also the same dtype and do not promote. 
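For reference, a minimal transcript of the recommended spellings (assuming numpy is imported as np and x is the array from the original post); the elementwise operators &, | and ~ are the array counterparts of the keywords and, or and not:

>>> import numpy as np
>>> x = np.array([1, 2, 3, 2, 1])
>>> (x == 1) | (x == 3)              # elementwise "or", stays boolean
array([ True, False, True, False, True], dtype=bool)
>>> np.logical_or(x == 1, x == 3)    # the same thing as a ufunc
array([ True, False, True, False, True], dtype=bool)
>>> ((x == 1) + (x == 3)).dtype      # bool + bool does not promote to int
dtype('bool')
>>> (x == 1).sum()                   # reductions do promote, so counting True values works
2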
I believe that there was some discussion about this in the past year or so, and it turns out that this is the desired semantics for addition over boolean matrices when you start implementing linear algebra. The Python `bool` object has other concerns, namely backwards compatibility with older versions of Python that just used `True = 1; False = 0`. > (for example, why isn't "or" implemented as a piecewise or). Do you mean "or" the Python keyword? This is documented. http://docs.scipy.org/doc/numpy/reference/ufuncs.html#comparison-functions -- Robert Kern From cell at michaelclerx.com Wed Oct 1 05:43:47 2014 From: cell at michaelclerx.com (Michael Clerx) Date: Wed, 01 Oct 2014 11:43:47 +0200 Subject: [SciPy-Dev] Numpy's handling of conditions In-Reply-To: References: <53E003B7.3090100@googlemail.com> <53E00562.5070602@googlemail.com> <53E00D36.3080500@googlemail.com> <542BC2F6.2000100@michaelclerx.com> Message-ID: <542BCCD3.7080000@michaelclerx.com> Thanks! I'd been looking at the documentation here: http://docs.scipy.org/doc/numpy/reference/routines.logic.html I'd never have thought to look at the ufunc page. Perhaps the docs are written slightly too much from an experienced numpy'er point of view. For example, on the logic page np.any and all are listed first. I can see how they're the most important functions, but the first I'd look for are and or & not. Is there a strict protocol for editing the docs? On 10/01/2014 11:23 AM, Robert Kern wrote: > On Wed, Oct 1, 2014 at 10:01 AM, Michael Clerx wrote: >> Hi Everyone, >> >> I'm a bit confused about the way numpy treats conditions and I'm hoping >> you can help. >> >> Starting from this array: >> >> x = np.array(x) >> >>> x >> array([1, 2, 3, 2, 1]) >> >>> x == 1 >> array([ True, False, False, False, True], dtype=bool) >> >> works as expected, but the following is horribly wrong: >> >> >>> x == 1 + x == 3 >> Traceback (most recent call last): >> File "", line 1, in >> ValueError: The truth value of an array with more than one element is >> ambiguous. Use a.any() or a.all() > Python parses this expression as the following, due to operator precedence: > >>>> x == (1 + x) == 3 > The repeated == == has a special meaning in Python. It gets translated to this: > >>>> (x == (1+x)) and (x == 3) > The `and` is the problem here because it tries to evaluate each part > as a bool, which is forbidden for arrays (hence the ValueError). > >> I can live with that, but why do brackets solve this? >> >> >>> (x == 1) + (x == 3) >> array([ True, False, True, False, True], dtype=bool) >> >> Which is the same result as >> >> >>> (x == 1) | (x == 3) >> array([ True, False, True, False, True], dtype=bool) >> >> It gets a bit weirder: >> >> >>> (x == 1) + (x < 3) >> array([ True, True, False, True, True], dtype=bool) >> >> but >> >> >>> True + True >> 2 >> >> Is this the way it's supposed to work? Since it's quite different from >> python's scalar types, perhaps the documentation should mention it more >> clearly > Yes. For most of the binary ufuncs, especially the ones that implement > the standard arithmetic operators +-*/, we try to make sure that if > both arguments are the same dtype, the output is also the same dtype > and do not promote. I believe that there was some discussion about > this in the past year or so, and it turns out that this is the desired > semantics for addition over boolean matrices when you start > implementing linear algebra. 
The Python `bool` object has other > concerns, namely backwards compatibility with older versions of Python > that just used `True = 1; False = 0`. > >> (for example, why isn't "or" implemented as a piecewise or). > Do you mean "or" the Python keyword? This is documented. > > http://docs.scipy.org/doc/numpy/reference/ufuncs.html#comparison-functions > From charlesr.harris at gmail.com Wed Oct 1 08:33:03 2014 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 1 Oct 2014 06:33:03 -0600 Subject: [SciPy-Dev] Numpy's handling of conditions In-Reply-To: <542BCCD3.7080000@michaelclerx.com> References: <53E003B7.3090100@googlemail.com> <53E00562.5070602@googlemail.com> <53E00D36.3080500@googlemail.com> <542BC2F6.2000100@michaelclerx.com> <542BCCD3.7080000@michaelclerx.com> Message-ID: On Wed, Oct 1, 2014 at 3:43 AM, Michael Clerx wrote: > Thanks! > > I'd been looking at the documentation here: > > http://docs.scipy.org/doc/numpy/reference/routines.logic.html > > I'd never have thought to look at the ufunc page. Perhaps the docs are > written slightly too much from an experienced numpy'er point of view. > For example, on the logic page np.any and all are listed first. I can > see how they're the most important functions, but the first I'd look for > are and or & not. > > Is there a strict protocol for editing the docs? > Just make an ordinary pull request. See doc/source/dev/gitwash/development_workflow.rst Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From chadfulton at gmail.com Wed Oct 1 14:12:08 2014 From: chadfulton at gmail.com (Chad Fulton) Date: Wed, 1 Oct 2014 11:12:08 -0700 Subject: [SciPy-Dev] Expose Lapack for Cython In-Reply-To: References: <20140911152440.GN19676@phare.normalesup.org> Message-ID: On Thu, Sep 25, 2014 at 6:53 PM, Sturla Molden wrote: > On 26/09/14 03:47, Sturla Molden wrote: > > On 25/09/14 23:22, Ian Henriksen wrote: > > > >> I've opened this pull request > >> that implements this. There > >> are still several things I'd like to finish before it gets merged, but > >> the basic functionality is there. Feedback is welcome. > > > It would be great to have this in Scipy - we have done the same thing, but on an ad hoc basis, in Statsmodels. It would be much preferred to have it centralized. Thanks for the work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Wed Oct 1 15:39:07 2014 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Wed, 1 Oct 2014 21:39:07 +0200 Subject: [SciPy-Dev] AMPGO - Take 3 Message-ID: So, after receiving a bunch of requests for the AMPGO global optimization code and the associated benchmarks, I decided to put what I have on the public domain. I don't have that much time to work on it myself *but* I would still appreciate bug reports and feature requests. The AMPGO algorithm code and the 184 ND test functions - plus 20 1D test functions - live here: https://code.google.com/p/ampgo/ The algorithm itself can be tuned to use various local optimizers from the SciPy stack - and also from the OpenOpt stack, although the overall performances of the OpenOpt algorithms ranges between poor and abysmal - with the exception of the BOBYQA algorithm, and I'd say it's high time whoever is developing SciPy should include it as numerical optimization option. 
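As a rough illustration of what "tuned to use various local optimizers from the SciPy stack" can look like in practice - this is not AMPGO itself, just a sketch in which the multistart driver and the six-hump camel test function are made up for the example:

import numpy as np
from scipy.optimize import minimize

def six_hump_camel(x):
    # Standard 2D test function with several local minima; global minimum is about -1.0316.
    x1, x2 = x
    return (4 - 2.1*x1**2 + x1**4/3.0)*x1**2 + x1*x2 + (-4 + 4*x2**2)*x2**2

bounds = [(-3.0, 3.0), (-2.0, 2.0)]

def multistart(func, bounds, local_method, n_starts=50, seed=0):
    # Crude multistart: run the chosen local solver from random points and keep the best result.
    rng = np.random.RandomState(seed)
    best = None
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        res = minimize(func, x0, method=local_method, bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

for method in ('L-BFGS-B', 'TNC', 'SLSQP'):   # any bound-constrained local solver can be dropped in
    res = multistart(six_hump_camel, bounds, method)
    print("%s  f=%.6f  x=%s" % (method, res.fun, res.x))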
The AMPGO code as it stands is *not* suitable for inclusion into SciPy: the docstrings are almost non-existent, the return values are probably not compatible with the SciPy standards and in any case I am using SciPy 0.12.b1 still. Some time ago a kind soul provided me with the analytic definition of the gradient of the Tunnelling function inside AMPGO but I never managed to code it. This will give a great boost to AMPGO for problems with readily available gradient information. My main page of results and conclusions about global optimization algorithms available in Python is here: http://infinity77.net/global_optimization/ All the conclusions and results and ranking criteria are based on the *number of function evaluations*: everything else- like CPU time and internal algorithm performances, has been scrapped in the most brutal and merciless way. Most of us deal with real life problems where a single function evaluation (a simulation) may take hours to complete, so a few milliseconds gain in the internal algorithm processing feels like a joke. That,of course, has not stopped any of the "big" Global Optimization Projects (see COCONUT and funny friends), nor any of the reported benchmarks for "famous" solvers (BARON, I'm looking at you) from using ridiculous CPU-time metrics. Comments and suggestions are most welcome - especially if they help improving the overall performance of the AMPGO code (Tunnelling function gradient is a good start) or fixing bugs in the test functions - I'm sure there are a few lurking by, it's not that easy to stay focused while coding so much numerical optimization bruhaha. -- Andrea. "Imagination Is The Only Weapon In The War Against Reality." http://www.infinity77.net # ------------------------------------------------------------- # def ask_mailing_list_support(email): if mention_platform_and_version() and include_sample_app(): send_message(email) else: install_malware() erase_hard_drives() # ------------------------------------------------------------- # -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Wed Oct 1 18:23:30 2014 From: andyfaff at gmail.com (Andrew Nelson) Date: Thu, 2 Oct 2014 08:23:30 +1000 Subject: [SciPy-Dev] AMPGO - Take 3 In-Reply-To: References: Message-ID: On 2 October 2014 05:39, Andrea Gavana wrote: > So, after receiving a bunch of requests for the AMPGO global optimization > code and the associated benchmarks, I decided to put what I have on the > public domain. I don't have that much time to work on it myself *but* I > would still appreciate bug reports and feature requests. > > > Comments and suggestions are most welcome - especially if they help > improving the overall performance of the AMPGO code (Tunnelling function > gradient is a good start) or fixing bugs in the test functions - I'm sure > there are a few lurking by, it's not that easy to stay focused while coding > so much numerical optimization bruhaha. > -- > Andrea. > Andrea, thank you for making all that available, that's certainly a large amount of work, wow. For my part I am particularly interested in the benchmark suite and would like to see that eventually included in scipy as a way of benchmarking scipy.optimize. I have a small amount of spare time to work on this and would like to take this on. Firstly, what do people think about inclusion of such a large set of benchmarking functions to the code base for scipy.optimize? We have a small amount, but could do with a lot more. 
Secondly, Andrea, would you mind if I worked towards this end goal? cheers, Andrew. -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Thu Oct 2 13:28:21 2014 From: toddrjen at gmail.com (Todd) Date: Thu, 2 Oct 2014 19:28:21 +0200 Subject: [SciPy-Dev] Circular statistics revisted Message-ID: Recently I added a vector strength function to scipy. I was interested in also adding some tests of circular statistics, which scipy currently lacks, and noticed there is already a circmean, circvar, and circstd functions. Matlab and R both have circular statistics toolboxes. I don't think this is something that would qualify for an entirely different namespace, but what about moving "vector_strength" to scipy.stats and having a separate section for circular statistics in the scipy.stats documentation? Then we might be able to add some more circular statistics functions there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Oct 3 03:16:30 2014 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 3 Oct 2014 08:16:30 +0100 Subject: [SciPy-Dev] Circular statistics revisted In-Reply-To: References: Message-ID: On Thu, Oct 2, 2014 at 6:28 PM, Todd wrote: > Recently I added a vector strength function to scipy. I was interested in > also adding some tests of circular statistics, which scipy currently lacks, > and noticed there is already a circmean, circvar, and circstd functions. > Matlab and R both have circular statistics toolboxes. > > I don't think this is something that would qualify for an entirely different > namespace, but what about moving "vector_strength" to scipy.stats and > having a separate section for circular statistics in the scipy.stats > documentation? Then we might be able to add some more circular statistics > functions there. I think scipy.signal.vectorstrength is fine where it is. It can be mentioned in the circular stats docs, though. Adding a new section to the docs for the circular stats is a fine idea. -- Robert Kern From sturla.molden at gmail.com Fri Oct 3 06:08:49 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Fri, 3 Oct 2014 10:08:49 +0000 (UTC) Subject: [SciPy-Dev] Circular statistics revisted References: Message-ID: <664182662434023605.146636sturla.molden-gmail.com@news.gmane.org> Todd wrote: > Recently I added a vector strength function to scipy. I was interested in > also adding some tests of circular statistics, which scipy currently lacks, Notice that scipy.stats.rayleigh is already taken, in case you want to add the Rayleigh test. Sturla From toddrjen at gmail.com Fri Oct 3 11:04:00 2014 From: toddrjen at gmail.com (Todd) Date: Fri, 3 Oct 2014 17:04:00 +0200 Subject: [SciPy-Dev] Circular statistics revisted In-Reply-To: <664182662434023605.146636sturla.molden-gmail.com@news.gmane.org> References: <664182662434023605.146636sturla.molden-gmail.com@news.gmane.org> Message-ID: On Oct 3, 2014 12:09 PM, "Sturla Molden" wrote: > > Todd wrote: > > Recently I added a vector strength function to scipy. I was interested in > > also adding some tests of circular statistics, which scipy currently lacks, > > Notice that scipy.stats.rayleigh is already taken, in case you want to add > the Rayleigh test. Thanks, I noticed. I was planning on something like scipy.stats.rayleigh_test but that would be open to discussion. -------------- next part -------------- An HTML attachment was scrubbed... 
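For anyone following along, a rough sketch of the statistic under discussion, the Rayleigh test of uniformity for angular data; the function name and signature are illustrative only (not a proposed API), and the p-value approximation is the one given in Zar's Biostatistical Analysis:

import numpy as np

def rayleigh_test(angles):
    # Rayleigh test of uniformity for a sample of angles in radians (sketch only).
    angles = np.asarray(angles, dtype=float)
    n = angles.size
    R = np.abs(np.exp(1j * angles).sum())   # length of the resultant of the unit vectors
    z = R**2 / n
    # Approximate p-value (Zar); exp(-z) is the cruder large-n approximation.
    pval = np.exp(np.sqrt(1 + 4*n + 4*(n**2 - R**2)) - (1 + 2*n))
    return z, min(pval, 1.0)

rng = np.random.RandomState(0)
print(rayleigh_test(rng.vonmises(0.0, 2.0, size=100)))      # clustered angles: tiny p-value
print(rayleigh_test(rng.uniform(-np.pi, np.pi, size=100)))  # uniform angles: typically a large p-value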
URL: From andrea.gavana at gmail.com Sat Oct 4 12:06:38 2014 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Sat, 4 Oct 2014 18:06:38 +0200 Subject: [SciPy-Dev] AMPGO - Take 3 In-Reply-To: References: Message-ID: Andrew, On 2 October 2014 00:23, Andrew Nelson wrote: > On 2 October 2014 05:39, Andrea Gavana wrote: > >> So, after receiving a bunch of requests for the AMPGO global optimization >> code and the associated benchmarks, I decided to put what I have on the >> public domain. I don't have that much time to work on it myself *but* I >> would still appreciate bug reports and feature requests. >> >> >> Comments and suggestions are most welcome - especially if they help >> improving the overall performance of the AMPGO code (Tunnelling function >> gradient is a good start) or fixing bugs in the test functions - I'm sure >> there are a few lurking by, it's not that easy to stay focused while coding >> so much numerical optimization bruhaha. >> -- >> Andrea. >> > > > Andrea, > thank you for making all that available, that's certainly a large amount > of work, wow. > For my part I am particularly interested in the benchmark suite and would > like to see that eventually included in scipy as a way of benchmarking > scipy.optimize. I have a small amount of spare time to work on this and > would like to take this on. > Firstly, what do people think about inclusion of such a large set of > benchmarking functions to the code base for scipy.optimize? We have a small > amount, but could do with a lot more. > > Secondly, Andrea, would you mind if I worked towards this end goal? > I don't have any problem with it, and I'd be happy if the benchmark suite ended up into scipy. I'd be even happier if AMPGO could make it as well... Andrea. "Imagination Is The Only Weapon In The War Against Reality." http://www.infinity77.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From clarkfitzg at gmail.com Sat Oct 4 13:16:39 2014 From: clarkfitzg at gmail.com (Clark Fitzgerald) Date: Sat, 4 Oct 2014 10:16:39 -0700 (PDT) Subject: [SciPy-Dev] Bootstrap confidence limits code In-Reply-To: References: Message-ID: Discovered this thread after searching for statistical bootstrap in Python. It would be nice to have the capability to see a general bootstrap either in statsmodels or scipy.stats. Are people interested in this? Here's an object oriented approach that I took for it- https://github.com/clarkfitzg/stat-bootstrap/blob/master/bootstrap.py Regards, Clark Fitzgerald On Wednesday, August 8, 2012 11:57:20 AM UTC-7, jseabold wrote: > > On Wed, Aug 8, 2012 at 2:38 PM, Constantine Evans > wrote: > >> Hello everyone, >> >> > Hi, > > >> A few years ago I implemented a scikit for bootstrap confidence limits >> (https://github.com/cgevans/scikits-bootstrap). I didn?t think much >> about it after that until recently, when I realized that some people >> are actually using it, and that there?s apparently been some talk >> about implementing this functionality in either scipy.stats or >> statsmodels (I should thank Randal Olson for discussing this and >> bringing it to my attention). >> >> As such I?ve rewritten most of the code, and written up some >> docstrings. The current code can do confidence intervals with basic >> percentile interval, bias-corrected accelerated, and approximate >> bootstrap confidence methods, and can also provide bootstrap and >> jackknife indexes. 
Most of it is implemented from the descriptions in >> Efron and Tibshirani?s Introduction to the Bootstrap, but the ABC code >> at the moment is a port from the modified-BSD-licensed bootstrap >> package for R (not the boot package) as I?m not entirely confident in >> my understanding of the method. >> >> And so, I have a few questions for everyone: >> >> * Is there any interest in including this sort of code in either >> scipy.stats or statsmodels? If so, where do people think would be the >> better place? The code is relatively small; at the moment it is less >> than 200 lines, with docstrings probably making up 100 of those lines. >> > > I think it would be great to have this in statsmodels. I filed an > enhancement ticket about it this morning (also brought to my attention by > Randy's blog post). > > https://github.com/statsmodels/statsmodels/issues/420 > > >> * Also, if so, what would need to be changed, added, and improved >> beyond what is mentioned in the Contributing to Scipy part of the >> reference guide? I?m never a fan of my own code, and imagine quite a >> bit would need to be fixed; I know tests will need to be added too. >> >> > We can discuss further on the statsmodels mailing list (cc'd) unless > someone feels strongly that this should go into scipy. I'm not sure about > API yet so that it can be general and used across all the models in > statsmodels. It's one of the reasons I've put off incorporating code like > this for so long. > > >> In addition, I have a few questions about what would be better >> practice for the API, and I haven?t really found a guide on best >> practices for Scipy: >> > >> * When I started writing the code, I wrote a single function ci for >> confidence intervals, with a method argument to choose the method. >> This is easy for users, especially so that they don?t have to look >> through documentation to realize that BCA is the most generally useful >> method (at least from everything I?ve read) and that there really >> isn?t any reason to use many of the simpler methods. However, ABC >> takes different paramenters, and needs a statistic function that takes >> weights, which makes this single-function organization trickier. At >> the moment, I have a separate function for ABC. Would it be better to >> split up all the methods to their own functions? >> > > I think this might be preferable. > > >> * ABC requires a statistic function that takes weights. I?ve noticed >> that things like np.average takes a weights= argument. Would it be >> better to require input of a stat(data,weights) function, or input of >> a stat(data,weights=) with weights as a named argument? The latter >> would be nice in terms of allowing the same function to be used for >> all methods, but would make it impossible to use a lambda for the >> function. Is there some other method of doing this entirely? >> * Are there any missing features that anyone thinks should be added? >> >> I apologize if much of this is answered elsewhere, I just haven?t >> found any of it; I also apologize if this is far too long-winded and >> confusing! >> >> Regards, >> Constantine Evans >> _______________________________________________ >> SciPy-Dev mailing list >> SciP... at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
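To make the discussion concrete, a minimal sketch of the simplest variant mentioned above, a percentile-interval bootstrap (not BCa or ABC, and not taken from any of the packages linked in this thread); the function name and defaults are only for the example:

import numpy as np

def bootstrap_ci(data, statistic=np.mean, n_resamples=10000, alpha=0.05, seed=None):
    # Basic percentile-interval bootstrap confidence interval (sketch only).
    data = np.asarray(data)
    rng = np.random.RandomState(seed)
    n = data.shape[0]
    idx = rng.randint(0, n, size=(n_resamples, n))        # resample indices with replacement
    stats = np.array([statistic(data[i]) for i in idx])
    lo, hi = np.percentile(stats, [100*alpha/2.0, 100*(1 - alpha/2.0)])
    return lo, hi

# 95% interval for the mean of a skewed sample.
sample = np.random.RandomState(0).exponential(size=200)
print(bootstrap_ci(sample, np.mean, seed=1))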
URL: From andyfaff at gmail.com Sat Oct 4 23:54:09 2014 From: andyfaff at gmail.com (Andrew Nelson) Date: Sun, 5 Oct 2014 14:54:09 +1100 Subject: [SciPy-Dev] Global optimisation test functions Message-ID: Dear team, Andrea Gavana has kindly made the code to his benchmark suite and the AMPGO solver available. I am working on getting the benchmark suite suitable for inclusion into Scipy (pull request to follow). It's essentially going to be a vastly expanded version of scipy/optimize/benchmarks/test_functions.py. The benchmark suite needs to be accompanied by tests, but I'm not sure where to put them, presumably in a tests directory below the benchmarks directory. However, the benchmark directory is not a scipy module, so how would I import go_benchmark_functions from the testing code? cheers, Andrew -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Sun Oct 5 05:09:17 2014 From: toddrjen at gmail.com (Todd) Date: Sun, 5 Oct 2014 11:09:17 +0200 Subject: [SciPy-Dev] Global optimisation test functions In-Reply-To: References: Message-ID: On Oct 5, 2014 5:54 AM, "Andrew Nelson" wrote: > > Dear team, > Andrea Gavana has kindly made the code to his benchmark suite and the AMPGO solver available. I am working on getting the benchmark suite suitable for inclusion into Scipy (pull request to follow). It's essentially going to be a vastly expanded version of scipy/optimize/benchmarks/test_functions.py. The benchmark suite needs to be accompanied by tests, but I'm not sure where to put them, presumably in a tests directory below the benchmarks directory. > However, the benchmark directory is not a scipy module, so how would I import go_benchmark_functions from the testing code? > Could the benchmark modules automatically rest themselves when they are run? Since it it's code that is normally only run in testing, making sure the benchmarks work when they are run might make sense. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clarkfitzg at gmail.com Mon Oct 6 21:50:59 2014 From: clarkfitzg at gmail.com (Clark Fitzgerald) Date: Mon, 6 Oct 2014 18:50:59 -0700 Subject: [SciPy-Dev] consistent namedtuple output in scipy.stats Message-ID: Hello, We've had some good discussion on Github PR 3991 regarding namedtuple output. Warren raised the important issue of maintaining consistent names for the same stats across the module, ie, 'pvalue' versus 'p_value', or 'standard_errror'. I went through most of the scipy.stats module and wrote down some suggestions for what the attributes of the returned namedtuples could be. My notes can be found below. Feedback welcome for these names. Thanks, Clark The pattern returned for each element of bayes_mvs is important because it is an estimate with confidence interval. It might be even be worth exposing this one publicly. 
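A small sketch of the pattern being proposed (the type names and fields below are illustrative, not a settled API); the key point is that a namedtuple keeps plain tuple unpacking working, so existing code does not break:

from collections import namedtuple

# Hypothetical result types -- the names are only suggestions.
TtestResult = namedtuple('TtestResult', ['statistic', 'pvalue'])
Estimate = namedtuple('Estimate', ['estimate', 'minmax'])

res = TtestResult(statistic=2.31, pvalue=0.027)
print(res.pvalue)          # new-style attribute access
stat, p = res              # old-style tuple unpacking still works

mean_est = Estimate(estimate=1.0, minmax=(0.8, 1.2))   # the bayes_mvs pattern described above
print(mean_est.minmax)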
bayes_mvs: mean, variance, std_dev (center, (lower, upper)) -> estimate, minmax describe: n, minmax, mean, variance, skewness, exc_kurtosis mode: mode, count sigmaclip: clipped, lower, upper linregress: slope, intercept, rvalue, pvalue, stderr anderson: stat, critical_vals, sig_level Note `shapiro` doesn't follow the pattern below normaltest: stat, pvalue skewtest: stat, pvalue kurtosistest: stat, pvalue f_oneway: stat, pvalue ttest_1samp: stat, pvalue ttest_ind: stat, pvalue ttest_rel: stat, pvalue kstest: stat, pvalue ks_2samp: stat, pvalue power_divergence: stat, pvalue mannwhitneyu: stat, pvalue ranksums: stat, pvalue wilcoxon: stat, pvalue kruskal: stat, pvalue friedmanchisquare: stat, pvalue ansari: stat, pvalue bartlett: stat, pvalue levene: stat, pvalue spearmanr: correlation, pvalue pointbiserialr: correlation, pvalue kendalltau: correlation, pvalue cumfreq: cumcount, lowerlimit, binsize, extrapoints histogram: count, lowerlimit, binsize, extrapoints relfreq: frequency, lowerlimit, binsize, extrapoints binned_statistic: stat, bin_edges, binnumber binned_statistic_2d: stat, x_edge, y_edge, binnumber binned_statistic_2d: stat, bin_edges, binnumber -------------- next part -------------- An HTML attachment was scrubbed... URL: From daw at clemson.edu Tue Oct 7 11:59:49 2014 From: daw at clemson.edu (Murray Daw) Date: Tue, 7 Oct 2014 15:59:49 +0000 (UTC) Subject: [SciPy-Dev] =?utf-8?q?Linalg=3A_function_for_diagonalize_hermitia?= =?utf-8?q?n_commuting=09matrices_simultaneously?= References: <6F943707-EA79-4D66-8526-3F641289F732@me.com> Message-ID: Lars-Hendrik Frahm googlemail.com> writes: > > Dear SciPy Developers, > > I like to contribute a function for simultaneously diagonalize a set of > hermitian commuting matrices. > Especially for matrices with degenerate eigenvalues. > > Is there need for something like that in SciPy ? I think it could be > helpful for someone else and I didn't find > it in recent SciPy versions or on the mailing list. > > Hope to hear from you. > I do have a need for exactly that capability in python. Have you developed such a module? Best, Murray Daw From saullogiovani at gmail.com Fri Oct 10 06:00:06 2014 From: saullogiovani at gmail.com (Saullo Castro) Date: Fri, 10 Oct 2014 12:00:06 +0200 Subject: [SciPy-Dev] Function to check if sparse matrix is symmetric Message-ID: I developed a function (shown below) to check if a sparse matrix is symmetric and would like to know if the community is interested to include in scipy.sparse. Regards, Saullo def is_symmetric(m): """Check if a sparse matrix is symmetric Parameters ---------- m : array or sparse matrix A square matrix. Returns ------- check : bool The check result. """ if m.shape[0] != m.shape[1]: raise ValueError('m must be a square matrix') if not isinstance(m, coo_matrix): m = coo_matrix(m) r, c, v = m.row, m.col, m.data tril_no_diag = r > c triu_no_diag = c > r if triu_no_diag.sum() != tril_no_diag.sum(): return False rl = r[tril_no_diag] cl = c[tril_no_diag] vl = v[tril_no_diag] ru = r[triu_no_diag] cu = c[triu_no_diag] vu = v[triu_no_diag] sortl = np.lexsort((cl, rl)) sortu = np.lexsort((ru, cu)) vl = vl[sortl] vu = vu[sortu] check = np.allclose(vl, vu) return check -------------- next part -------------- An HTML attachment was scrubbed... 
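A quick note for anyone trying the snippet above: it assumes numpy (as np) and coo_matrix from scipy.sparse are already imported. A small usage sketch, with is_symmetric taken as defined in the post:

import numpy as np
from scipy.sparse import coo_matrix, rand

# is_symmetric from the post above must be defined in this scope,
# and needs the two imports shown here.
a = rand(100, 100, density=0.01, format='csr')
print(is_symmetric(a + a.T))   # True: a + a.T is symmetric by construction
print(is_symmetric(a))         # almost surely False for a random sparse matrix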
URL: From nils106 at googlemail.com Fri Oct 10 06:49:51 2014 From: nils106 at googlemail.com (Nils Wagner) Date: Fri, 10 Oct 2014 12:49:51 +0200 Subject: [SciPy-Dev] Function to check if sparse matrix is symmetric In-Reply-To: References: Message-ID: IMHO "is_Hermitian" is more general. Nils On Fri, Oct 10, 2014 at 12:00 PM, Saullo Castro wrote: > I developed a function (shown below) to check if a sparse matrix is > symmetric and would like to know if the community is interested to include > in scipy.sparse. > > Regards, > Saullo > > def is_symmetric(m): > """Check if a sparse matrix is symmetric > > Parameters > ---------- > m : array or sparse matrix > A square matrix. > > Returns > ------- > check : bool > The check result. > > """ > if m.shape[0] != m.shape[1]: > raise ValueError('m must be a square matrix') > > if not isinstance(m, coo_matrix): > m = coo_matrix(m) > > r, c, v = m.row, m.col, m.data > tril_no_diag = r > c > triu_no_diag = c > r > > if triu_no_diag.sum() != tril_no_diag.sum(): > return False > > rl = r[tril_no_diag] > cl = c[tril_no_diag] > vl = v[tril_no_diag] > ru = r[triu_no_diag] > cu = c[triu_no_diag] > vu = v[triu_no_diag] > > sortl = np.lexsort((cl, rl)) > sortu = np.lexsort((ru, cu)) > vl = vl[sortl] > vu = vu[sortu] > > check = np.allclose(vl, vu) > > return check > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From saullogiovani at gmail.com Fri Oct 10 14:00:57 2014 From: saullogiovani at gmail.com (Saullo Castro) Date: Fri, 10 Oct 2014 20:00:57 +0200 Subject: [SciPy-Dev] Function to check if sparse matrix is symmetric (Saullo Castro) Message-ID: A check "is_Hermitian" would involve basically the same function, adding something like: m = m.real and checking if the diagonal elements are all real... Saullo 2014-10-10 19:00 GMT+02:00 : > Send SciPy-Dev mailing list submissions to > scipy-dev at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at scipy.org > > You can reach the person managing the list at > scipy-dev-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. Function to check if sparse matrix is symmetric (Saullo Castro) > 2. Re: Function to check if sparse matrix is symmetric (Nils Wagner) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 10 Oct 2014 12:00:06 +0200 > From: Saullo Castro > Subject: [SciPy-Dev] Function to check if sparse matrix is symmetric > To: Scipy - Dev > Message-ID: > i5VH_fbMtJq0E_UDhLfEqQxNeq3c10-7cEQ at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > I developed a function (shown below) to check if a sparse matrix is > symmetric and would like to know if the community is interested to include > in scipy.sparse. > > Regards, > Saullo > > def is_symmetric(m): > """Check if a sparse matrix is symmetric > > Parameters > ---------- > m : array or sparse matrix > A square matrix. > > Returns > ------- > check : bool > The check result. 
> > """ > if m.shape[0] != m.shape[1]: > raise ValueError('m must be a square matrix') > > if not isinstance(m, coo_matrix): > m = coo_matrix(m) > > r, c, v = m.row, m.col, m.data > tril_no_diag = r > c > triu_no_diag = c > r > > if triu_no_diag.sum() != tril_no_diag.sum(): > return False > > rl = r[tril_no_diag] > cl = c[tril_no_diag] > vl = v[tril_no_diag] > ru = r[triu_no_diag] > cu = c[triu_no_diag] > vu = v[triu_no_diag] > > sortl = np.lexsort((cl, rl)) > sortu = np.lexsort((ru, cu)) > vl = vl[sortl] > vu = vu[sortu] > > check = np.allclose(vl, vu) > > return check > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141010/b4027017/attachment-0001.html > > ------------------------------ > > Message: 2 > Date: Fri, 10 Oct 2014 12:49:51 +0200 > From: Nils Wagner > Subject: Re: [SciPy-Dev] Function to check if sparse matrix is > symmetric > To: SciPy Developers List > Message-ID: > ysMfFmcw at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > IMHO "is_Hermitian" is more general. > > Nils > > > On Fri, Oct 10, 2014 at 12:00 PM, Saullo Castro > wrote: > > > I developed a function (shown below) to check if a sparse matrix is > > symmetric and would like to know if the community is interested to > include > > in scipy.sparse. > > > > Regards, > > Saullo > > > > def is_symmetric(m): > > """Check if a sparse matrix is symmetric > > > > Parameters > > ---------- > > m : array or sparse matrix > > A square matrix. > > > > Returns > > ------- > > check : bool > > The check result. > > > > """ > > if m.shape[0] != m.shape[1]: > > raise ValueError('m must be a square matrix') > > > > if not isinstance(m, coo_matrix): > > m = coo_matrix(m) > > > > r, c, v = m.row, m.col, m.data > > tril_no_diag = r > c > > triu_no_diag = c > r > > > > if triu_no_diag.sum() != tril_no_diag.sum(): > > return False > > > > rl = r[tril_no_diag] > > cl = c[tril_no_diag] > > vl = v[tril_no_diag] > > ru = r[triu_no_diag] > > cu = c[triu_no_diag] > > vu = v[triu_no_diag] > > > > sortl = np.lexsort((cl, rl)) > > sortu = np.lexsort((ru, cu)) > > vl = vl[sortl] > > vu = vu[sortu] > > > > check = np.allclose(vl, vu) > > > > return check > > > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141010/34aab777/attachment-0001.html > > ------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > End of SciPy-Dev Digest, Vol 132, Issue 8 > ***************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clarkfitzg at gmail.com Fri Oct 10 14:05:00 2014 From: clarkfitzg at gmail.com (Clark Fitzgerald) Date: Fri, 10 Oct 2014 11:05:00 -0700 Subject: [SciPy-Dev] [pystatsmodels] Re: Bootstrap confidence limits code In-Reply-To: <471d41f6-b641-4cbd-8acb-eef2ab9a074c@googlegroups.com> References: <471d41f6-b641-4cbd-8acb-eef2ab9a074c@googlegroups.com> Message-ID: Thanks for the link. It looks like you've put a lot of thought into the design. I'll give it a try next time I need to use bootstrap. 
On Thu, Oct 9, 2014 at 7:41 PM, Kevin Sheppard wrote: > I have an almost complete bootstrap here: > > https://github.com/bashtage/arch/blob/bootstrap/arch/bootstrap/base.py > > Just needs me to finish BCA, add docs on how to use and merge. > > > On Saturday, October 4, 2014 1:16:39 PM UTC-4, Clark Fitzgerald wrote: >> >> Discovered this thread after searching for statistical bootstrap in >> Python. It would be nice to have the capability to see a general bootstrap >> either in statsmodels or scipy.stats. Are people interested in this? >> >> Here's an object oriented approach that I took for it- >> https://github.com/clarkfitzg/stat-bootstrap/blob/master/bootstrap.py >> >> Regards, >> Clark Fitzgerald >> >> >> On Wednesday, August 8, 2012 11:57:20 AM UTC-7, jseabold wrote: >>> >>> On Wed, Aug 8, 2012 at 2:38 PM, Constantine Evans >>> wrote: >>> >>>> Hello everyone, >>>> >>>> >>> Hi, >>> >>> >>>> A few years ago I implemented a scikit for bootstrap confidence limits >>>> (https://github.com/cgevans/scikits-bootstrap). I didn?t think much >>>> about it after that until recently, when I realized that some people >>>> are actually using it, and that there?s apparently been some talk >>>> about implementing this functionality in either scipy.stats or >>>> statsmodels (I should thank Randal Olson for discussing this and >>>> bringing it to my attention). >>>> >>>> As such I?ve rewritten most of the code, and written up some >>>> docstrings. The current code can do confidence intervals with basic >>>> percentile interval, bias-corrected accelerated, and approximate >>>> bootstrap confidence methods, and can also provide bootstrap and >>>> jackknife indexes. Most of it is implemented from the descriptions in >>>> Efron and Tibshirani?s Introduction to the Bootstrap, but the ABC code >>>> at the moment is a port from the modified-BSD-licensed bootstrap >>>> package for R (not the boot package) as I?m not entirely confident in >>>> my understanding of the method. >>>> >>>> And so, I have a few questions for everyone: >>>> >>>> * Is there any interest in including this sort of code in either >>>> scipy.stats or statsmodels? If so, where do people think would be the >>>> better place? The code is relatively small; at the moment it is less >>>> than 200 lines, with docstrings probably making up 100 of those lines. >>>> >>> >>> I think it would be great to have this in statsmodels. I filed an >>> enhancement ticket about it this morning (also brought to my attention by >>> Randy's blog post). >>> >>> https://github.com/statsmodels/statsmodels/issues/420 >>> >>> >>>> * Also, if so, what would need to be changed, added, and improved >>>> beyond what is mentioned in the Contributing to Scipy part of the >>>> reference guide? I?m never a fan of my own code, and imagine quite a >>>> bit would need to be fixed; I know tests will need to be added too. >>>> >>>> >>> We can discuss further on the statsmodels mailing list (cc'd) unless >>> someone feels strongly that this should go into scipy. I'm not sure about >>> API yet so that it can be general and used across all the models in >>> statsmodels. It's one of the reasons I've put off incorporating code like >>> this for so long. >>> >>> >>>> In addition, I have a few questions about what would be better >>>> practice for the API, and I haven?t really found a guide on best >>>> practices for Scipy: >>>> >>> >>>> * When I started writing the code, I wrote a single function ci for >>>> confidence intervals, with a method argument to choose the method. 
>>>> This is easy for users, especially so that they don?t have to look >>>> through documentation to realize that BCA is the most generally useful >>>> method (at least from everything I?ve read) and that there really >>>> isn?t any reason to use many of the simpler methods. However, ABC >>>> takes different paramenters, and needs a statistic function that takes >>>> weights, which makes this single-function organization trickier. At >>>> the moment, I have a separate function for ABC. Would it be better to >>>> split up all the methods to their own functions? >>>> >>> >>> I think this might be preferable. >>> >>> >>>> * ABC requires a statistic function that takes weights. I?ve noticed >>>> that things like np.average takes a weights= argument. Would it be >>>> better to require input of a stat(data,weights) function, or input of >>>> a stat(data,weights=) with weights as a named argument? The latter >>>> would be nice in terms of allowing the same function to be used for >>>> all methods, but would make it impossible to use a lambda for the >>>> function. Is there some other method of doing this entirely? >>>> * Are there any missing features that anyone thinks should be added? >>>> >>>> I apologize if much of this is answered elsewhere, I just haven?t >>>> found any of it; I also apologize if this is far too long-winded and >>>> confusing! >>>> >>>> Regards, >>>> Constantine Evans >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciP... at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From saullogiovani at gmail.com Fri Oct 10 14:07:48 2014 From: saullogiovani at gmail.com (Saullo Castro) Date: Fri, 10 Oct 2014 20:07:48 +0200 Subject: [SciPy-Dev] Function to check if sparse matrix is symmetric Message-ID: If the community says "yes, why not..." I can submit a push request with the "is_Hermitian" function... please, let me know your opinion.... Saullo 2014-10-10 20:05 GMT+02:00 : > Send SciPy-Dev mailing list submissions to > scipy-dev at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at scipy.org > > You can reach the person managing the list at > scipy-dev-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. Re: Function to check if sparse matrix is symmetric (Saullo > Castro) (Saullo Castro) > 2. Re: [pystatsmodels] Re: Bootstrap confidence limits code > (Clark Fitzgerald) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 10 Oct 2014 20:00:57 +0200 > From: Saullo Castro > Subject: Re: [SciPy-Dev] Function to check if sparse matrix is > symmetric (Saullo Castro) > To: Scipy - Dev > Message-ID: > < > CAHbwRz4gFGPZFX2huOyfYDuLMtkvCpyPzuARgaxGEAY4A6Vdvw at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > A check "is_Hermitian" would involve basically the same function, adding > something like: > > m = m.real > > and checking if the diagonal elements are all real... 
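A rough sketch of the Hermitian check described in the quoted text above (illustrative only; it simply compares the matrix with its conjugate transpose instead of reusing the index-sorting approach of is_symmetric):

import numpy as np
from scipy.sparse import coo_matrix

def is_hermitian(m, atol=1e-10):
    # Sketch: m is Hermitian iff it equals its conjugate transpose,
    # which also forces the diagonal to be real.
    if m.shape[0] != m.shape[1]:
        raise ValueError('m must be a square matrix')
    m = coo_matrix(m)
    diff = m - m.getH()          # getH() is the conjugate transpose
    return bool(np.all(np.abs(diff.data) <= atol))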
> > Saullo > > 2014-10-10 19:00 GMT+02:00 : > > > Send SciPy-Dev mailing list submissions to > > scipy-dev at scipy.org > > > > To subscribe or unsubscribe via the World Wide Web, visit > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > or, via email, send a message with subject or body 'help' to > > scipy-dev-request at scipy.org > > > > You can reach the person managing the list at > > scipy-dev-owner at scipy.org > > > > When replying, please edit your Subject line so it is more specific > > than "Re: Contents of SciPy-Dev digest..." > > > > > > Today's Topics: > > > > 1. Function to check if sparse matrix is symmetric (Saullo Castro) > > 2. Re: Function to check if sparse matrix is symmetric (Nils Wagner) > > > > > > ---------------------------------------------------------------------- > > > > Message: 1 > > Date: Fri, 10 Oct 2014 12:00:06 +0200 > > From: Saullo Castro > > Subject: [SciPy-Dev] Function to check if sparse matrix is symmetric > > To: Scipy - Dev > > Message-ID: > > > i5VH_fbMtJq0E_UDhLfEqQxNeq3c10-7cEQ at mail.gmail.com> > > Content-Type: text/plain; charset="utf-8" > > > > I developed a function (shown below) to check if a sparse matrix is > > symmetric and would like to know if the community is interested to > include > > in scipy.sparse. > > > > Regards, > > Saullo > > > > def is_symmetric(m): > > """Check if a sparse matrix is symmetric > > > > Parameters > > ---------- > > m : array or sparse matrix > > A square matrix. > > > > Returns > > ------- > > check : bool > > The check result. > > > > """ > > if m.shape[0] != m.shape[1]: > > raise ValueError('m must be a square matrix') > > > > if not isinstance(m, coo_matrix): > > m = coo_matrix(m) > > > > r, c, v = m.row, m.col, m.data > > tril_no_diag = r > c > > triu_no_diag = c > r > > > > if triu_no_diag.sum() != tril_no_diag.sum(): > > return False > > > > rl = r[tril_no_diag] > > cl = c[tril_no_diag] > > vl = v[tril_no_diag] > > ru = r[triu_no_diag] > > cu = c[triu_no_diag] > > vu = v[triu_no_diag] > > > > sortl = np.lexsort((cl, rl)) > > sortu = np.lexsort((ru, cu)) > > vl = vl[sortl] > > vu = vu[sortu] > > > > check = np.allclose(vl, vu) > > > > return check > > -------------- next part -------------- > > An HTML attachment was scrubbed... > > URL: > > > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141010/b4027017/attachment-0001.html > > > > ------------------------------ > > > > Message: 2 > > Date: Fri, 10 Oct 2014 12:49:51 +0200 > > From: Nils Wagner > > Subject: Re: [SciPy-Dev] Function to check if sparse matrix is > > symmetric > > To: SciPy Developers List > > Message-ID: > > > ysMfFmcw at mail.gmail.com> > > Content-Type: text/plain; charset="utf-8" > > > > IMHO "is_Hermitian" is more general. > > > > Nils > > > > > > On Fri, Oct 10, 2014 at 12:00 PM, Saullo Castro > > > wrote: > > > > > I developed a function (shown below) to check if a sparse matrix is > > > symmetric and would like to know if the community is interested to > > include > > > in scipy.sparse. > > > > > > Regards, > > > Saullo > > > > > > def is_symmetric(m): > > > """Check if a sparse matrix is symmetric > > > > > > Parameters > > > ---------- > > > m : array or sparse matrix > > > A square matrix. > > > > > > Returns > > > ------- > > > check : bool > > > The check result. 
> > > > > > """ > > > if m.shape[0] != m.shape[1]: > > > raise ValueError('m must be a square matrix') > > > > > > if not isinstance(m, coo_matrix): > > > m = coo_matrix(m) > > > > > > r, c, v = m.row, m.col, m.data > > > tril_no_diag = r > c > > > triu_no_diag = c > r > > > > > > if triu_no_diag.sum() != tril_no_diag.sum(): > > > return False > > > > > > rl = r[tril_no_diag] > > > cl = c[tril_no_diag] > > > vl = v[tril_no_diag] > > > ru = r[triu_no_diag] > > > cu = c[triu_no_diag] > > > vu = v[triu_no_diag] > > > > > > sortl = np.lexsort((cl, rl)) > > > sortu = np.lexsort((ru, cu)) > > > vl = vl[sortl] > > > vu = vu[sortu] > > > > > > check = np.allclose(vl, vu) > > > > > > return check > > > > > > > > > _______________________________________________ > > > SciPy-Dev mailing list > > > SciPy-Dev at scipy.org > > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > -------------- next part -------------- > > An HTML attachment was scrubbed... > > URL: > > > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141010/34aab777/attachment-0001.html > > > > ------------------------------ > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > End of SciPy-Dev Digest, Vol 132, Issue 8 > > ***************************************** > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141010/5b6a3a8e/attachment-0001.html > > ------------------------------ > > Message: 2 > Date: Fri, 10 Oct 2014 11:05:00 -0700 > From: Clark Fitzgerald > Subject: Re: [SciPy-Dev] [pystatsmodels] Re: Bootstrap confidence > limits code > To: pystatsmodels at googlegroups.com > Cc: SciPy Developers List , Constantine Evans > > Message-ID: > rKqxX7jHe8f6yhpbwVBuzJOn8jENXJCQ at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Thanks for the link. It looks like you've put a lot of thought into the > design. I'll give it a try next time I need to use bootstrap. > > On Thu, Oct 9, 2014 at 7:41 PM, Kevin Sheppard > > wrote: > > > I have an almost complete bootstrap here: > > > > https://github.com/bashtage/arch/blob/bootstrap/arch/bootstrap/base.py > > > > Just needs me to finish BCA, add docs on how to use and merge. > > > > > > On Saturday, October 4, 2014 1:16:39 PM UTC-4, Clark Fitzgerald wrote: > >> > >> Discovered this thread after searching for statistical bootstrap in > >> Python. It would be nice to have the capability to see a general > bootstrap > >> either in statsmodels or scipy.stats. Are people interested in this? > >> > >> Here's an object oriented approach that I took for it- > >> https://github.com/clarkfitzg/stat-bootstrap/blob/master/bootstrap.py > >> > >> Regards, > >> Clark Fitzgerald > >> > >> > >> On Wednesday, August 8, 2012 11:57:20 AM UTC-7, jseabold wrote: > >>> > >>> On Wed, Aug 8, 2012 at 2:38 PM, Constantine Evans < > cev... at evanslabs.org> > >>> wrote: > >>> > >>>> Hello everyone, > >>>> > >>>> > >>> Hi, > >>> > >>> > >>>> A few years ago I implemented a scikit for bootstrap confidence limits > >>>> (https://github.com/cgevans/scikits-bootstrap). 
I didn?t think much > >>>> about it after that until recently, when I realized that some people > >>>> are actually using it, and that there?s apparently been some talk > >>>> about implementing this functionality in either scipy.stats or > >>>> statsmodels (I should thank Randal Olson for discussing this and > >>>> bringing it to my attention). > >>>> > >>>> As such I?ve rewritten most of the code, and written up some > >>>> docstrings. The current code can do confidence intervals with basic > >>>> percentile interval, bias-corrected accelerated, and approximate > >>>> bootstrap confidence methods, and can also provide bootstrap and > >>>> jackknife indexes. Most of it is implemented from the descriptions in > >>>> Efron and Tibshirani?s Introduction to the Bootstrap, but the ABC code > >>>> at the moment is a port from the modified-BSD-licensed bootstrap > >>>> package for R (not the boot package) as I?m not entirely confident in > >>>> my understanding of the method. > >>>> > >>>> And so, I have a few questions for everyone: > >>>> > >>>> * Is there any interest in including this sort of code in either > >>>> scipy.stats or statsmodels? If so, where do people think would be the > >>>> better place? The code is relatively small; at the moment it is less > >>>> than 200 lines, with docstrings probably making up 100 of those lines. > >>>> > >>> > >>> I think it would be great to have this in statsmodels. I filed an > >>> enhancement ticket about it this morning (also brought to my attention > by > >>> Randy's blog post). > >>> > >>> https://github.com/statsmodels/statsmodels/issues/420 > >>> > >>> > >>>> * Also, if so, what would need to be changed, added, and improved > >>>> beyond what is mentioned in the Contributing to Scipy part of the > >>>> reference guide? I?m never a fan of my own code, and imagine quite a > >>>> bit would need to be fixed; I know tests will need to be added too. > >>>> > >>>> > >>> We can discuss further on the statsmodels mailing list (cc'd) unless > >>> someone feels strongly that this should go into scipy. I'm not sure > about > >>> API yet so that it can be general and used across all the models in > >>> statsmodels. It's one of the reasons I've put off incorporating code > like > >>> this for so long. > >>> > >>> > >>>> In addition, I have a few questions about what would be better > >>>> practice for the API, and I haven?t really found a guide on best > >>>> practices for Scipy: > >>>> > >>> > >>>> * When I started writing the code, I wrote a single function ci for > >>>> confidence intervals, with a method argument to choose the method. > >>>> This is easy for users, especially so that they don?t have to look > >>>> through documentation to realize that BCA is the most generally useful > >>>> method (at least from everything I?ve read) and that there really > >>>> isn?t any reason to use many of the simpler methods. However, ABC > >>>> takes different paramenters, and needs a statistic function that takes > >>>> weights, which makes this single-function organization trickier. At > >>>> the moment, I have a separate function for ABC. Would it be better to > >>>> split up all the methods to their own functions? > >>>> > >>> > >>> I think this might be preferable. > >>> > >>> > >>>> * ABC requires a statistic function that takes weights. I?ve noticed > >>>> that things like np.average takes a weights= argument. 
Would it be > >>>> better to require input of a stat(data,weights) function, or input of > >>>> a stat(data,weights=) with weights as a named argument? The latter > >>>> would be nice in terms of allowing the same function to be used for > >>>> all methods, but would make it impossible to use a lambda for the > >>>> function. Is there some other method of doing this entirely? > >>>> * Are there any missing features that anyone thinks should be added? > >>>> > >>>> I apologize if much of this is answered elsewhere, I just haven?t > >>>> found any of it; I also apologize if this is far too long-winded and > >>>> confusing! > >>>> > >>>> Regards, > >>>> Constantine Evans > >>>> _______________________________________________ > >>>> SciPy-Dev mailing list > >>>> SciP... at scipy.org > >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>>> > >>> > >>> > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141010/41d62c4f/attachment.html > > ------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > End of SciPy-Dev Digest, Vol 132, Issue 9 > ***************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Sat Oct 11 08:53:36 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 12:53:36 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? Message-ID: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> There is a library called vecLibFort that re-exports all the BLAS and LAPACK symbols in Accelerate with gfortran ABI: https://github.com/mcg1969/vecLibFort For scipy.linalg vecLibFort would solve several issues: - Special wrappers for Accelerate would no longer be needed and can be removed from the source. - For the new Cython layer, the fortranname bug in f2py would go away ? the wrappers are no longer needed. - The sgemv AVX segfault in Mavericks is corrected for by calling sgemm when data are not 32 byte aligned. Sturla From matthew.brett at gmail.com Sat Oct 11 13:25:32 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 11 Oct 2014 10:25:32 -0700 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> Message-ID: Hi, On Sat, Oct 11, 2014 at 5:53 AM, Sturla Molden wrote: > There is a library called vecLibFort that re-exports all the BLAS and > LAPACK symbols in Accelerate with gfortran ABI: > > https://github.com/mcg1969/vecLibFort > > For scipy.linalg vecLibFort would solve several issues: > > - Special wrappers for Accelerate would no longer be needed and can be > removed from the source. > > - For the new Cython layer, the fortranname bug in f2py would go away ? the > wrappers are no longer needed. > > - The sgemv AVX segfault in Mavericks is corrected for by calling sgemm > when data are not 32 byte aligned. That sounds like an excellent idea. Sturla - would you consider making a PR for this? I would be very happy to change the numpy and scipy OSX automated builds ... 
Cheers, Matthew From pav at iki.fi Sat Oct 11 14:13:53 2014 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 11 Oct 2014 21:13:53 +0300 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> Message-ID: 11.10.2014, 15:53, Sturla Molden kirjoitti: > There is a library called vecLibFort that re-exports all the BLAS and > LAPACK symbols in Accelerate with gfortran ABI: > > https://github.com/mcg1969/vecLibFort One drawback in this is that asking users to download and install vecLibFort properly to be able to use Scipy on OSX at all seems quite brittle. Bundling that set of wrappers, which unlike our own works via the much better (osx-specific?) dynamic library symbol overrides, might be possible. Not sure how well the __attribute__(()) statements required will work on clang and various gcc versions? Fighting distutils to set up rpaths etc. for distributing the additional .so may however be the biggest job. @Matthew: what is the problem with the current Numpy and Scipy OSX builds? In principle, it should just work out of the box currently? > - For the new Cython layer, the fortranname bug in f2py would go away ? the > wrappers are no longer needed. The fortranname bug is still a bug in f2py IMHO. The issue is also not blocking the Cython work, as there is a straightforward workaround. Pauli From matthew.brett at gmail.com Sat Oct 11 14:35:42 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 11 Oct 2014 11:35:42 -0700 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> Message-ID: Hi, On Sat, Oct 11, 2014 at 11:13 AM, Pauli Virtanen wrote: > 11.10.2014, 15:53, Sturla Molden kirjoitti: >> There is a library called vecLibFort that re-exports all the BLAS and >> LAPACK symbols in Accelerate with gfortran ABI: >> >> https://github.com/mcg1969/vecLibFort > > One drawback in this is that asking users to download and install > vecLibFort properly to be able to use Scipy on OSX at all seems quite > brittle. > > Bundling that set of wrappers, which unlike our own works via the much > better (osx-specific?) dynamic library symbol overrides, might be > possible. Not sure how well the __attribute__(()) statements required > will work on clang and various gcc versions? I guess they are likely to work for clang given the library is OSX specific? > Fighting distutils to set up rpaths etc. for distributing the additional > .so may however be the biggest job. I can use the OSX delocate [1] machinery for that job, for our own builds. > @Matthew: what is the problem with the current Numpy and Scipy OSX > builds? In principle, it should just work out of the box currently? 
It does work out of the numpy-atlas-binaries [2] box, but it has a few problems:

* It links against ATLAS for SSE2 as optimized for my laptop, so probably isn't the best performance that the OSX machine could produce;
* The builds (via ATLAS and numpy) use gcc runtimes including libquadmath, so we have to take care of LGPL and GPLv3 runtime library exception licensing;
* I used gcc / gfortran 4.8 rather than clang to get best performance and least likelihood of build errors from ATLAS, so someone replicating the build would need to install this compiler;
* The build, though automated, is complicated in that it has to separately build 32 and 64 bit extensions then fuse them, largely because of the gcc version.

So - I think building via clang and Accelerate using vecLibFort would be far preferable if we can make it work.

Cheers, Matthew

[1] https://pypi.python.org/pypi/delocate
[2] https://github.com/MacPython/numpy-atlas-binaries

From sturla.molden at gmail.com Sat Oct 11 14:42:37 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 18:42:37 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> Message-ID: <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org>

Matthew Brett wrote: > That sounds like an excellent idea. Sturla - would you consider > making a PR for this? I would be very happy to change the numpy and > scipy OSX automated builds ...

I can look into it, I was just going to ask your opinion first. :-)

It is actually just a matter of adding one C file and two header files to the build process, and then we link against Accelerate as before. vecLibFort is actually a very tiny library. The reason this is different from the current ABI wrappers is some magic it does with dlopen and dlsym to replace the patched BLAS and LAPACK symbols. For the proposed Cython BLAS and LAPACK layer it should silence the fortranname bug in f2py, which causes a problem with the current ABI wrappers.

I don't think the numpy.dot problem with Accelerate would be fixed though, but the parallel problem with scipy.linalg.blas.sgemv would go away. vecLibFort only patches the Fortran subroutine it seems.

Presumably one could also add a fix for cblas_sgemv the same way in a modified vecLibFort. That way we could fix dotblas for the AVX segfault without actually modifying the code. But that is a NumPy issue and this is the SciPy list. So we should keep this separate I think.

Sturla

From sturla.molden at gmail.com Sat Oct 11 14:48:48 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 18:48:48 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> Message-ID: <96605170434745800.033492sturla.molden-gmail.com@news.gmane.org>

Pauli Virtanen wrote: > One drawback in this is that asking users to download and install > vecLibFort properly to be able to use Scipy on OSX at all seems quite > brittle.

No. It is just a C file with two header files that can be added to SciPy. No external build is needed for vecLibFort.

> @Matthew: what is the problem with the current Numpy and Scipy OSX > builds? In principle, it should just work out of the box currently?

Not Matthew, but I'll answer anyway: In Accelerate, sgemv and cblas_sgemv segfault if the CPU supports AVX and the arrays are not 32 byte aligned.
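To make the trigger condition concrete, a small illustrative check (plain NumPy, not from vecLibFort or scipy) of whether an array's buffer happens to be 32 byte aligned; a float32 view that starts a few bytes into another array is an easy way to end up with an unaligned operand:

    import numpy as np

    def is_32byte_aligned(a):
        # Address of the first element, modulo the 32-byte AVX alignment.
        return a.ctypes.data % 32 == 0

    base = np.zeros(1025, dtype=np.float32)
    head = base[:1024]  # shares the start of the buffer; may or may not be 32 byte aligned
    off = base[1:]      # starts 4 bytes into the buffer, so it cannot be 32 byte aligned

    print(is_32byte_aligned(head), is_32byte_aligned(off))

The actual workaround has to live on the C side, but this is the shape of data that would hit it.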
So one needs to either make an aligned temporary copy or reimplement sgemv with sgemm. > The fortranname bug is still a bug in f2py IMHO. Yes. Sturla From pav at iki.fi Sat Oct 11 14:55:23 2014 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 11 Oct 2014 21:55:23 +0300 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: <96605170434745800.033492sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <96605170434745800.033492sturla.molden-gmail.com@news.gmane.org> Message-ID: 11.10.2014, 21:48, Sturla Molden kirjoitti: > Pauli Virtanen wrote: > >> One drawback in this is that asking users to download and install >> vecLibFort properly to be able to use Scipy on OSX at all seems quite >> brittle. > > No. It is just a C file with to header files that can be added to SciPy. No > external build is needed for vecLibFort. Does the symbol used in the end depend on link order? Or does the linker/dynamic linker always pick the static symbol over the dynamic one? Pauli From matthew.brett at gmail.com Sat Oct 11 14:56:41 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 11 Oct 2014 11:56:41 -0700 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> Message-ID: Hi, On Sat, Oct 11, 2014 at 11:42 AM, Sturla Molden wrote: > Matthew Brett wrote: > >> That sounds like an excellent idea. Sturla - would you consider >> making a PR for this? I would be very happy to change the numpy and >> scipy OSX automated builds ... > > I can look into it, I was just going to ask your opinion first. :-) > > It is actually just a matter of adding one C file and two header files to > the build process, and then we link against Accelerate as before. > vecLibFort is actually a very tiny library. The reason this is different > from the current ABI wrappers is some magic it does with dlload and dlsym > to replace the patched BLAS and LAPACK symbols. For the proposed Cython > BLAS and LAPACK layer it should silence the fortranname bug in f2py, which > causes a problem with the current ABI wrappers. > > I don't think the numpy.dot problem with Accelerate would be fixed though, > but the parallel problem with scipy.linalg.lapack.sgemv would go away. > vecLibFort only patches the Fortran subroutine it seems. > > Presumably one could also add a fix for cblas_sgemv the same way in a > modified vecLibFort. That way we could fix dotblas for the AVX segfault > without actually modifying the code. But that is a NumPy issue and this is > the SciPy list. So we should keep this separate I think. I think the discussion is OK here because it relates to numpy and scipy OSX binaries - maybe Cc the numpy list though? Is that correct - that using vecLibFort will _not_ fix this issue of segfaults for float32 with numpy.dot? : https://github.com/numpy/numpy/issues/4007 That would make it much less attractive... Cheers, Matthew From pav at iki.fi Sat Oct 11 15:02:50 2014 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 11 Oct 2014 22:02:50 +0300 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? 
In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> Message-ID: 11.10.2014, 21:56, Matthew Brett kirjoitti: [clip] > Is that correct - that using vecLibFort will _not_ fix this issue of > segfaults for float32 with numpy.dot? : > > https://github.com/numpy/numpy/issues/4007 > > That would make it much less attractive... vecLibFort is basically just a slightly simpler version of the wrappers we already have in Scipy, plus one workaround for something that appears to be an Accelerate bug. numpy.dot uses Accelerate's C-API, so vecLibFort has no effect on it. If the issue in Numpy is the Accelerate bug, a similar workaround would need to be added in Numpy. Pauli From pav at iki.fi Sat Oct 11 15:08:55 2014 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 11 Oct 2014 22:08:55 +0300 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> Message-ID: 11.10.2014, 21:35, Matthew Brett kirjoitti: [clip] > So - I think building via clang and Accelerate using vecLibFort would > be far preferable if we can make it work. Scipy (>= 0.14.0) should work OK with Accelerate out of the box. vecLibFort etc. are not required. The same is true for Numpy as far as I know --- numpy.linalg does not use the problematic parts of Accelerate Fortran ABI, and numpy.dot uses the C-API. Pauli From sturla.molden at gmail.com Sat Oct 11 15:17:22 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 19:17:22 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> Message-ID: <78735044434747662.273034sturla.molden-gmail.com@news.gmane.org> Matthew Brett wrote: > Is that correct - that using vecLibFort will _not_ fix this issue of > segfaults for float32 with numpy.dot? : > > https://github.com/numpy/numpy/issues/4007 > > That would make it much less attractive... It fixes sgemv but not cblas_sgemv. A similar dlload/dlsym hack could be used for NumPy to replace cblas_sgemv in the same way. Sturla From matthew.brett at gmail.com Sat Oct 11 15:26:07 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 11 Oct 2014 12:26:07 -0700 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: <78735044434747662.273034sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> <78735044434747662.273034sturla.molden-gmail.com@news.gmane.org> Message-ID: Hi, On Sat, Oct 11, 2014 at 12:17 PM, Sturla Molden wrote: > Matthew Brett wrote: > >> Is that correct - that using vecLibFort will _not_ fix this issue of >> segfaults for float32 with numpy.dot? : >> >> https://github.com/numpy/numpy/issues/4007 >> >> That would make it much less attractive... > > It fixes sgemv but not cblas_sgemv. OK - thanks - I am starting to understand. To state clearly for my own benefit: * sgemv is the Fortran API call; * cblas_sgemv is the C API call, used by numpy.dot vecLibFort would fix any calls from scipy Fortran code to buggy Accelerate sgemv, but would not affect the numpy.dot calls. Do we (scipy) call sgemv at any point? 
If not, how likely is it we will save a user from hitting the sgemv Accelerate bug by including the vecLibFort fix? > A similar dlload/dlsym hack could be used for NumPy to replace cblas_sgemv > in the same way. Sturla - I know you worked on this before; would you consider putting in a numpy PR to add this fix? I will try to review with my limited competence. Cheers, Matthew From sturla.molden at gmail.com Sat Oct 11 15:57:23 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 19:57:23 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> <78735044434747662.273034sturla.molden-gmail.com@news.gmane.org> Message-ID: <1919070835434750188.231420sturla.molden-gmail.com@news.gmane.org> Matthew Brett wrote: > Do we (scipy) call sgemv at any point? If not, how likely is it we > will save a user from hitting the sgemv Accelerate bug by including > the vecLibFort fix? scipy.linalg.blas.sgemv From pav at iki.fi Sat Oct 11 16:02:06 2014 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 11 Oct 2014 23:02:06 +0300 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> <78735044434747662.273034sturla.molden-gmail.com@news.gmane.org> Message-ID: 11.10.2014, 22:26, Matthew Brett kirjoitti: [clip] > OK - thanks - I am starting to understand. > > To state clearly for my own benefit: > > * sgemv is the Fortran API call; > * cblas_sgemv is the C API call, used by numpy.dot > > vecLibFort would fix any calls from scipy Fortran code to buggy > Accelerate sgemv, but would not affect the numpy.dot calls. Correct. The workaround itself is fairly simple https://github.com/mcg1969/vecLibFort/blob/master/static.h#L104 The Fortran ABI issues vecLibFort corrects are non-issues for Scipy and Numpy. The main advantage in using vecLibFort rather than adding the workarounds ourselves is that the problem is then someone else's and we don't need to duplicate work in tracking Accelerate bugs. > Do we (scipy) call sgemv at any point? If not, how likely is it we > will save a user from hitting the sgemv Accelerate bug by including > the vecLibFort fix? There are calls to sgemv in superlu and arpack. These may potentially hit the Accelerate bug. -- Pauli Virtanen From sturla.molden at gmail.com Sat Oct 11 16:01:30 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 20:01:30 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> <78735044434747662.273034sturla.molden-gmail.com@news.gmane.org> Message-ID: <768744799434750275.997712sturla.molden-gmail.com@news.gmane.org> Matthew Brett wrote: > Sturla - I know you worked on this before; would you consider putting > in a numpy PR to add this fix? I will try to review with my limited > competence. I have a fix for NumPy. I can make a PR of it now that my NumPy repo is back online. Sturla From sturla.molden at gmail.com Sat Oct 11 16:07:27 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 20:07:27 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? 
References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> Message-ID: <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> Pauli Virtanen wrote: > Scipy (>= 0.14.0) should work OK with Accelerate out of the box. > vecLibFort etc. are not required. > > The same is true for Numpy as far as I know --- numpy.linalg does not > use the problematic parts of Accelerate Fortran ABI, and numpy.dot uses > the C-API. There is a bug in cblas_sgemv (and Fortran sgemv) that Apple does not seem to fix. It can produce segfaults under certain conditions. Having NumPy and Scipy crash randomly is not acceptable. We know how to work around it, but the question is should we, because the bug is clearly in Accelerate. Sturla From sturla.molden at gmail.com Sat Oct 11 16:10:09 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 20:10:09 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <1377725029434743746.279235sturla.molden-gmail.com@news.gmane.org> <78735044434747662.273034sturla.molden-gmail.com@news.gmane.org> Message-ID: <1628144159434750898.722133sturla.molden-gmail.com@news.gmane.org> Pauli Virtanen wrote: > There are calls to sgemv in superlu and arpack. These may potentially > hit the Accelerate bug. There are also a lot of calls to numpy.dot in SciPy. Some of them can hit the Accelerate bug by accident. Sturla From matthew.brett at gmail.com Sat Oct 11 16:28:38 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 11 Oct 2014 13:28:38 -0700 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> Message-ID: Hi, On Sat, Oct 11, 2014 at 1:07 PM, Sturla Molden wrote: > Pauli Virtanen wrote: > >> Scipy (>= 0.14.0) should work OK with Accelerate out of the box. >> vecLibFort etc. are not required. >> >> The same is true for Numpy as far as I know --- numpy.linalg does not >> use the problematic parts of Accelerate Fortran ABI, and numpy.dot uses >> the C-API. > > There is a bug in cblas_sgemv (and Fortran sgemv) that Apple does not seem > to fix. It can produce segfaults under certain conditions. Having NumPy and > Scipy crash randomly is not acceptable. We know how to work around it, but > the question is should we, because the bug is clearly in Accelerate. We have three main options here: * Work round Accelerate bug; * Use ATLAS; * Use OpenBLAS. It seems to me that the consensus is currently that OpenBLAS is not reliable enough for us, but I'm happy to be corrected. ATLAS has the problems I listed earlier, of which the main one is that we can only optimize for the least common denominator machine. So working round the Accelerate bug seems like the greatest gain for the smallest pain - if the maintenance burden is acceptable. Do I understand right that we have two options for this? 1) Pull in our own versions of the sgemv workaround in scipy; 2) Use vecLibFort to replace our own Accelerate wrappers. Having the Accelerate wrappers and fixes go in one place seems like a good idea in general, but I know nothing about the issues, I'm afraid. Sturla - I saw you asking the author of vecLibFort about a cblas_sgemv fix at [1] - would the plan be to use vecLibFort in numpy also? Presumably we could do that via a git submodule? 
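For reference, a minimal exercise of the scipy-level call path under discussion; the array sizes are arbitrary, and on a non-Accelerate BLAS this simply computes a single-precision matrix-vector product:

    import numpy as np
    from scipy.linalg import blas

    a = np.random.rand(64, 64).astype(np.float32)
    x = np.random.rand(64).astype(np.float32)

    # y = alpha * dot(a, x) via the float32 BLAS gemv wrapper (the Fortran-side path).
    y = blas.sgemv(1.0, a, x)

    print(np.allclose(y, np.dot(a, x), rtol=1e-4))

This is the kind of call (along with the sgemv calls inside superlu and arpack mentioned above) that the alignment bug could affect.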
Cheers, Matthew [1] https://github.com/numpy/numpy/issues/4007 From insertinterestingnamehere at gmail.com Sat Oct 11 16:40:13 2014 From: insertinterestingnamehere at gmail.com (Ian Henriksen) Date: Sat, 11 Oct 2014 14:40:13 -0600 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> Message-ID: On Sat, Oct 11, 2014 at 12:13 PM, Pauli Virtanen wrote: > 11.10.2014, 15:53, Sturla Molden kirjoitti: > > There is a library called vecLibFort that re-exports all the BLAS and > > LAPACK symbols in Accelerate with gfortran ABI: > > > > https://github.com/mcg1969/vecLibFort > > One drawback in this is that asking users to download and install > vecLibFort properly to be able to use Scipy on OSX at all seems quite > brittle. > > Bundling that set of wrappers, which unlike our own works via the much > better (osx-specific?) dynamic library symbol overrides, might be > possible. Not sure how well the __attribute__(()) statements required > will work on clang and various gcc versions? > > Fighting distutils to set up rpaths etc. for distributing the additional > .so may however be the biggest job. > > @Matthew: what is the problem with the current Numpy and Scipy OSX > builds? In principle, it should just work out of the box currently? > > > - For the new Cython layer, the fortranname bug in f2py would go away ? > the > > wrappers are no longer needed. > > The fortranname bug is still a bug in f2py IMHO. The issue is also not > blocking the Cython work, as there is a straightforward workaround. > Yes, it's not a problem for the Cython work. The main issue at the moment is that I've had significantly less time than usual over the last two weeks to work on it. Hopefully this week will be a little calmer and I'll be able to do a little more. On the other hand, though I have no experience working with accelerate, this does still sound like a really good idea. Thanks! -Ian > Pauli > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Sat Oct 11 17:04:26 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 11 Oct 2014 21:04:26 +0000 (UTC) Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> Message-ID: <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> Matthew Brett wrote: > 1) Pull in our own versions of the sgemv workaround in scipy; > 2) Use vecLibFort to replace our own Accelerate wrappers. > > Having the Accelerate wrappers and fixes go in one place seems like a > good idea in general, but I know nothing about the issues, I'm afraid. A global fix would solve the problem everywhere. Probably easier to maintain. > Sturla - I saw you asking the author of vecLibFort about a cblas_sgemv > fix at [1] - would the plan be to use vecLibFort in numpy also? That was why I asked. My current fix for NumPy adds some minor changes the dotblas C code. I would rather avoid this. Sturla From matthew.brett at gmail.com Sat Oct 11 17:56:54 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 11 Oct 2014 14:56:54 -0700 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? 
In-Reply-To: <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> Message-ID: On Sat, Oct 11, 2014 at 2:04 PM, Sturla Molden wrote: > Matthew Brett wrote: > >> 1) Pull in our own versions of the sgemv workaround in scipy; >> 2) Use vecLibFort to replace our own Accelerate wrappers. >> >> Having the Accelerate wrappers and fixes go in one place seems like a >> good idea in general, but I know nothing about the issues, I'm afraid. > > A global fix would solve the problem everywhere. Probably easier to > maintain. > > >> Sturla - I saw you asking the author of vecLibFort about a cblas_sgemv >> fix at [1] - would the plan be to use vecLibFort in numpy also? > > That was why I asked. My current fix for NumPy adds some minor changes the > dotblas C code. I would rather avoid this. Would both numpy and scipy need a copy of vecLibFort at build time? Thanks for following up on this, Matthew From rlucente at pipeline.com Sat Oct 11 18:03:03 2014 From: rlucente at pipeline.com (Robert Lucente - Pipeline) Date: Sat, 11 Oct 2014 18:03:03 -0400 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation Message-ID: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> Did not hear back. Not sure how to interpret that. Would really like to know why anneal was deprecated and why recommend basin hopping which won't apply for discrete optimization. I realize that simulated annealing requires a lot of tweeking. From: Robert Lucente - Pipeline [mailto:rlucente at pipeline.com] Sent: Tuesday, September 30, 2014 9:45 PM To: 'scipy-dev at scipy.org' Cc: Robert Gmail Backup 1 Lucente Gmail Backup 1 (robert.backup.lucente at gmail.com) Subject: scipy.optimize.anneal - deprecation Hi everyone, I am a newbie to open source and so I am not sure what the appropriate tribal norms are. So, if the "To" emailing is too broad, I apologize ahead of time. Please let me know if there is a more appropriate mailing list. I am also a newbie to Python. I haven't gotten much past the hello world stage. However, as I am poking around, simulated annealing got my attention because of a project that I am tangentially involved w/. As usual, I love "open source" because well, it is open. I can look at code and know exactly what is going on. I was surprised to learn that scipy.optimize.anneal is being deprecated. It is a "standard" mathematical optimization technique which is used. Also, there is a decent amount of literature on it. For some references, refer to the blog "Simulated Annealing (SA) for Mathematical Optimization ." The recommendation seems to be to use basinhopping. Unfortunately, it assumes "smooth scalar function". Unfortunately, this smoothness does not apply in my case. I am sure that the deprecation of anneal was given a lot of thought. Is that documented anywhere or would someone be willing to share why it was deprecated? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nils106 at googlemail.com Sun Oct 12 12:39:22 2014 From: nils106 at googlemail.com (Nils Wagner) Date: Sun, 12 Oct 2014 18:39:22 +0200 Subject: [SciPy-Dev] make latex failure Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: scipy-ref.log.gz Type: application/x-gzip Size: 97275 bytes Desc: not available URL: From derek at astro.physik.uni-goettingen.de Sun Oct 12 15:40:33 2014 From: derek at astro.physik.uni-goettingen.de (Derek Homeier) Date: Sun, 12 Oct 2014 21:40:33 +0200 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation In-Reply-To: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> Message-ID: Hi Robert, On 12 Oct 2014, at 12:03 am, Robert Lucente - Pipeline wrote: > Did not hear back. Not sure how to interpret that. Would really like to know why anneal was deprecated and why recommend basin hopping which won?t apply for discrete optimization. I realize that simulated annealing requires a lot of tweeking. > I?d think this should be the right forum to bring up this issue; the idea to replace anneal with basin hopping seems to have been originally brought up in this thread http://mail.scipy.org/pipermail/scipy-dev/2012-October/018032.html which also discusses some of the former?s perceived shortcomings, but I don?t know how the decision eventually emerged. As another option you might perhaps comment on the corresponding changeset on github and try to start a discussion there. HTH Derek From charlesr.harris at gmail.com Sun Oct 12 19:27:07 2014 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 12 Oct 2014 17:27:07 -0600 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation In-Reply-To: References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> Message-ID: On Sun, Oct 12, 2014 at 1:40 PM, Derek Homeier < derek at astro.physik.uni-goettingen.de> wrote: > Hi Robert, > > On 12 Oct 2014, at 12:03 am, Robert Lucente - Pipeline < > rlucente at pipeline.com> wrote: > > > Did not hear back. Not sure how to interpret that. Would really like to > know why anneal was deprecated and why recommend basin hopping which won?t > apply for discrete optimization. I realize that simulated annealing > requires a lot of tweeking. > > > I?d think this should be the right forum to bring up this issue; the idea > to replace anneal with > basin hopping seems to have been originally brought up in this thread > http://mail.scipy.org/pipermail/scipy-dev/2012-October/018032.html > which also discusses some of the former?s perceived shortcomings, but I > don?t know how > the decision eventually emerged. > As another option you might perhaps comment on the corresponding changeset > on github > and try to start a discussion there. > > Looking at the tests at http://infinity77.net/global_optimization/multidimensional.html the scipy version of simulated annealing, SIMANN, performs horribly. However, ANA seems to do pretty well. So the problem with scipy seems to have been a poor version of the algorithm and probably we should just fix that. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Sun Oct 12 19:37:14 2014 From: andyfaff at gmail.com (Andrew Nelson) Date: Mon, 13 Oct 2014 10:37:14 +1100 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation In-Reply-To: References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> Message-ID: On 13 October 201 > > Looking at the tests at > http://infinity77.net/global_optimization/multidimensional.html the > scipy version of simulated annealing, SIMANN, performs horribly. However, > ANA seems to do pretty well. So the problem with scipy seems to have been a > poor version of the algorithm and probably we should just fix that. 
> > Chuck > Andrea Gavana kindly made the code to the benchmark suite available. I'm modifying to a form for inclusion into scipy: https://github.com/andyfaff/scipy/tree/go_benchmark/scipy/optimize/benchmarks. As we speak I'm running the test optimization functions (I added a few and modified some) for basinhopping, differential_evolution and anneal. I'll put these up in a Gist in the next day or so. -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Mon Oct 13 01:17:09 2014 From: andyfaff at gmail.com (Andrew Nelson) Date: Mon, 13 Oct 2014 16:17:09 +1100 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation In-Reply-To: References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> Message-ID: On 13 October 2014 10:37, Andrew Nelson wrote: > On 13 October 201 >> >> Looking at the tests at >> http://infinity77.net/global_optimization/multidimensional.html the >> scipy version of simulated annealing, SIMANN, performs horribly. However, >> ANA seems to do pretty well. So the problem with scipy seems to have been a >> poor version of the algorithm and probably we should just fix that. >> >> Chuck >> > > Andrea Gavana kindly made the code to the benchmark suite available. I'm > modifying to a form for inclusion into scipy: > https://github.com/andyfaff/scipy/tree/go_benchmark/scipy/optimize/benchmarks. > As we speak I'm running the test optimization functions (I added a few and > modified some) for basinhopping, differential_evolution and anneal. > I'll put these up in a Gist in the next day or so. > > As promised, the first run through of the test functions for global optimizers is at: https://gist.github.com/andyfaff/24c96a3d5dbc7b0272b2. This was for a total of 150 random starting vectors. There are still some things to be ironed out, particularly how a success is judged (atol vs rtol, etc). For example, in the Thurber problem (a NIST regression standard) a lot of failures are because the minimizer used for polishing (L-BFGS-B) doesn't seem to be able to take the energy from 5645 to 5642. The parameters are so close to the optimum solution I'm wondering if it's a precision problem with the numerical derivatives. I think most of those fails could be counted as successes. In any case, I'm not going to comment any further, but will let the numbers tell their own story. For each problem the percentage of successes is reported (whether the minimizer found the global minimum), as well as the mean number of function evaluations (irrespective of whether it was a success or failure). regards, Andrew. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlucente at pipeline.com Mon Oct 13 23:45:33 2014 From: rlucente at pipeline.com (Robert Lucente - Pipeline) Date: Mon, 13 Oct 2014 23:45:33 -0400 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation Message-ID: <022901cfe761$4f9f07e0$eedd17a0$@pipeline.com> Hummm. I think I am missing something fundamental. The official doco for scipy.optimize.basinhopping (http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.optimize.b asinhopping.html) states "Basin-hopping is a stochastic algorithm which attempts to find the global minimum of a smooth scalar function". The key word to note is smooth. The non-math version of smooth is that there are no gaps and no breaks. Also, there are no kinks and no corners. 
For a more in depth explanation of this refer to pages 297 - 298 of Optimization Modeling with Spreadsheets, Second Edition by Kenneth R. Baker. The math version is that the functions are continuous and at least twice continuously differentiable. Please note that the second derivative must also be continuous. For a more in depth explanation of this refer to pages 11 - 12 of Introduction to Optimum Design, Third Edition by Jasbir Arora.

What happens when the problem you are dealing w/ is not smooth? You can't use basin hopping. For example, a price schedule with a price break is a standard example of a non-smooth function. My personal and immediate non-smooth problem is placing advertising in available time slots.

Also, the SciPy lecture notes (http://scipy-lectures.github.io/advanced/mathematical_optimization/#smooth-and-non-smooth-problems) state "2.7.1.2. Smooth and non-smooth problems ... Optimizing smooth functions is easier".

Simulated annealing is not perfect. It does have issues like having a lot of knobs to turn.

Benchmarks are a tough thing. My only comment is that smooth and non-smooth should be 2 different sets of benchmarks. BTW, I love the comment "I got fed up with the current attitude of most mathematicians/numerical optimization experts, who tend to demonstrate the advantages of an algorithm based on "elapsed time" or "CPU time" or similar meaningless performance indicators".

Minor nit: I think that there is a disconnect between the code for SA and its doco. The doco says "T0 ... initial temperature. If None, then estimate it as 1.2 times the largest cost-function deviation over random points in the box-shaped region specified by the `lower, upper` input parameters." but the code says "self.T0 = (fmax-fmin)*1.5".

Would you be willing to enlarge the references for SA to include some of the references at Simulated Annealing (SA) for Mathematical Optimization (http://rlucente.blogspot.com/2014/09/simulated-annealing-sa-for-mathematical.html)?

Is it possible to create an example where an end user could pass in a custom schedule to the simulated annealing code?

> comment on the corresponding changeset on github

How do I execute on this?

-----Original Message----- From: scipy-dev-bounces at scipy.org On Behalf Of scipy-dev-request at scipy.org Subject: SciPy-Dev Digest, Vol 132, Issue 15 [clip]

From jstevenson131 at gmail.com Tue Oct 14 03:53:37 2014 From: jstevenson131 at gmail.com (Jacob Stevenson) Date: Tue, 14 Oct 2014 08:53:37 +0100 Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation In-Reply-To: <022901cfe761$4f9f07e0$eedd17a0$@pipeline.com> References: <022901cfe761$4f9f07e0$eedd17a0$@pipeline.com> Message-ID:

I wrote the documentation for basin hopping and added the "smooth" comment. Basinhopping is a very simple algorithm, it's essentially just a Monte Carlo routine with local minimisation. All the hard work is done by the local minimizer, so its performance is only as good as the local minimiser.
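To make that description concrete, a rough sketch (not the scipy implementation) of the loop basin hopping runs: random perturbation, local minimisation, then a Metropolis-style accept/reject on the minimised energies:

    import numpy as np
    from scipy.optimize import minimize

    def basin_hopping_sketch(func, x0, niter=100, stepsize=0.5, T=1.0, seed=0):
        rng = np.random.RandomState(seed)
        res = minimize(func, x0)            # local minimisation does the hard work
        xcur, fcur = res.x, res.fun
        xbest, fbest = xcur.copy(), fcur
        for _ in range(niter):
            trial = xcur + rng.uniform(-stepsize, stepsize, size=xcur.shape)
            res = minimize(func, trial)     # quality hinges on this local minimiser
            # Metropolis accept/reject between the two local minima
            if res.fun < fcur or rng.rand() < np.exp(-(res.fun - fcur) / T):
                xcur, fcur = res.x, res.fun
                if fcur < fbest:
                    xbest, fbest = xcur.copy(), fcur
        return xbest, fbest

    print(basin_hopping_sketch(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [5.0, 5.0]))

The real scipy.optimize.basinhopping adds bookkeeping such as step-size adjustment, but the division of labour is the same, which is why the choice of local minimiser matters so much.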
Most of the local minimisers in scipy (l-bfgs-b, cg, etc) are designed to work with smooth functions, so basinhopping will really only work well on smooth functions if one of those minimisers were used. I can?t think of a reason that basin hopping wouldn't work on non-smooth functions if a non-smooth minimiser were used (power, coblya, etc). I have little direct experience with this so I can?t say for sure. Maybe the ?smooth" restriction in the basin hopping documentation should be removed > Bechmarks are a tough thing. Very true. Andrea?s collection of benchmark functions is great (http://infinity77.net/global_optimization/multidimensional.html), but almost all of them are low dimensional (2-5 dimensions). I primarily deal with optimisation problems with greater than 100 dimensions. An optimisation routine which works well in low dimensions will not necessarily scale to well to high dimensions. > My only comment is that smooth and non-smooth should be 2 different sets of benchmarks This is a good idea Jake On 14 Oct 2014, at 04:45, Robert Lucente - Pipeline wrote: > Hummm. I think I am missing something fundamental. > > The official doco for scipy.optimize.basinhopping > (http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.optimize.b > asinhopping.html) states "Basin-hopping is a stochastic algorithm which > attempts to find the global minimum of a smooth scalar function". The key > word to note is smooth. The non-math version of smooth is that there are no > gaps and no breaks. Also, there are no kinks and no corners. For a more in > depth explanation of this refer to pages 297 - 298 of Optimization Modeling > with Spreadsheets, Second Edition by Kenneth R. Baker. The math version is > that the functions are continuous and at least twice continuous > differentiable. Please note that the second derivative must also be > continuous. For a more in depth explanation of this refer to pages 11 - 12 > of Introduction to Optimum Design, Third Edition by Jasbir Arora. > > What happens when the problem you are dealing w/ is not smooth? You can't > use basin hopping. For example, a price schedule with a price break is a > standard example of a non-smooth function. My personal and immediate > non-smooth problem is placing advertising in available time slots. > > Also, the SciPy lecture notes > (http://scipy-lectures.github.io/advanced/mathematical_optimization/#smooth- > and-non-smooth-problems) states "2.7.1.2. Smooth and non-smooth problems ... > Optimizing smooth functions is easier" > > Simulated annealing is not perfect. It does have issues like having a lot of > knobs to turn. > > Bechmarks are a tough thing. My only comment is that smooth and non-smooth > should be 2 different sets of benchmarks. BTW, I love the comment "I got fed > up with the current attitude of most mathematicians/numerical optimization > experts, who tend to demonstrate the advantages of an algorithm based on > "elapsed time" or "CPU time" or similar meaningless performance indicators" > > Minor nit: I think that there is a disconnect between the code for SA and > its doco. The doco says "T0 ... initial temperature. If None, then estimate > it as 1.2 times the largest cost-function deviation over random points in > the box-shaped region specified by the `lower, upper` input parameters." 
but > the code says "self.T0 = (fmax-fmin)*1.5" > > Would you be willing to enlarge the references for SA to include some the > references at Simulated Annealing (SA) for Mathematical Optimization > (http://rlucente.blogspot.com/2014/09/simulated-annealing-sa-for-mathematica > l.html). > > Is it possible to create an example where an end user could pass in a custom > schedule to the simulated annealing code? > >> comment on the corresponding changeset on github > How do I execute on this? > > -----Original Message----- > From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org] On > Behalf Of scipy-dev-request at scipy.org > Sent: Monday, October 13, 2014 1:00 PM > To: scipy-dev at scipy.org > Subject: SciPy-Dev Digest, Vol 132, Issue 15 > > Send SciPy-Dev mailing list submissions to > scipy-dev at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at scipy.org > > You can reach the person managing the list at > scipy-dev-owner at scipy.org > > When replying, please edit your Subject line so it is more specific than > "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. Re: scipy.optimize.anneal - deprecation (Derek Homeier) > 2. Re: scipy.optimize.anneal - deprecation (Charles R Harris) > 3. Re: scipy.optimize.anneal - deprecation (Andrew Nelson) > 4. Re: scipy.optimize.anneal - deprecation (Andrew Nelson) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Sun, 12 Oct 2014 21:40:33 +0200 > From: Derek Homeier > Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation > To: SciPy Developers List > Message-ID: > > > Content-Type: text/plain; charset=windows-1252 > > Hi Robert, > > On 12 Oct 2014, at 12:03 am, Robert Lucente - Pipeline > wrote: > >> Did not hear back. Not sure how to interpret that. Would really like to > know why anneal was deprecated and why recommend basin hopping which won?t > apply for discrete optimization. I realize that simulated annealing requires > a lot of tweeking. >> > I?d think this should be the right forum to bring up this issue; the idea to > replace anneal with basin hopping seems to have been originally brought up > in this thread > http://mail.scipy.org/pipermail/scipy-dev/2012-October/018032.html > which also discusses some of the former?s perceived shortcomings, but I > don?t know how the decision eventually emerged. > As another option you might perhaps comment on the corresponding changeset > on github and try to start a discussion there. > > HTH > Derek > > > > ------------------------------ > > Message: 2 > Date: Sun, 12 Oct 2014 17:27:07 -0600 > From: Charles R Harris > Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On Sun, Oct 12, 2014 at 1:40 PM, Derek Homeier < > derek at astro.physik.uni-goettingen.de> wrote: > >> Hi Robert, >> >> On 12 Oct 2014, at 12:03 am, Robert Lucente - Pipeline < >> rlucente at pipeline.com> wrote: >> >>> Did not hear back. Not sure how to interpret that. Would really like >>> to >> know why anneal was deprecated and why recommend basin hopping which >> won?t apply for discrete optimization. I realize that simulated >> annealing requires a lot of tweeking. 
>>> >> I?d think this should be the right forum to bring up this issue; the >> idea to replace anneal with basin hopping seems to have been >> originally brought up in this thread >> http://mail.scipy.org/pipermail/scipy-dev/2012-October/018032.html >> which also discusses some of the former?s perceived shortcomings, but >> I don?t know how the decision eventually emerged. >> As another option you might perhaps comment on the corresponding >> changeset on github and try to start a discussion there. >> >> > Looking at the tests at > http://infinity77.net/global_optimization/multidimensional.html the scipy > version of simulated annealing, SIMANN, performs horribly. However, ANA > seems to do pretty well. So the problem with scipy seems to have been a poor > version of the algorithm and probably we should just fix that. > > Chuck > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141012/eed240c1/atta > chment-0001.html > > ------------------------------ > > Message: 3 > Date: Mon, 13 Oct 2014 10:37:14 +1100 > From: Andrew Nelson > Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On 13 October 201 >> >> Looking at the tests at >> http://infinity77.net/global_optimization/multidimensional.html the >> scipy version of simulated annealing, SIMANN, performs horribly. >> However, ANA seems to do pretty well. So the problem with scipy seems >> to have been a poor version of the algorithm and probably we should just > fix that. >> >> Chuck >> > > Andrea Gavana kindly made the code to the benchmark suite available. I'm > modifying to a form for inclusion into scipy: > https://github.com/andyfaff/scipy/tree/go_benchmark/scipy/optimize/benchmark > s. > As we speak I'm running the test optimization functions (I added a few and > modified some) for basinhopping, differential_evolution and anneal. > I'll put these up in a Gist in the next day or so. > > > -- > _____________________________________ > Dr. Andrew Nelson > > > _____________________________________ > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141013/0e913ac5/atta > chment-0001.html > > ------------------------------ > > Message: 4 > Date: Mon, 13 Oct 2014 16:17:09 +1100 > From: Andrew Nelson > Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On 13 October 2014 10:37, Andrew Nelson wrote: > >> On 13 October 201 >>> >>> Looking at the tests at >>> http://infinity77.net/global_optimization/multidimensional.html the >>> scipy version of simulated annealing, SIMANN, performs horribly. However, >>> ANA seems to do pretty well. So the problem with scipy seems to have been > a >>> poor version of the algorithm and probably we should just fix that. >>> >>> Chuck >>> >> >> Andrea Gavana kindly made the code to the benchmark suite available. I'm >> modifying to a form for inclusion into scipy: >> > https://github.com/andyfaff/scipy/tree/go_benchmark/scipy/optimize/benchmark > s. >> As we speak I'm running the test optimization functions (I added a few and >> modified some) for basinhopping, differential_evolution and anneal. >> I'll put these up in a Gist in the next day or so. 
>> >> > > As promised, the first run through of the test functions for global > optimizers is at: https://gist.github.com/andyfaff/24c96a3d5dbc7b0272b2. > This was for a total of 150 random starting vectors. There are still some > things to be ironed out, particularly how a success is judged (atol vs > rtol, etc). For example, in the Thurber problem (a NIST regression > standard) a lot of failures are because the minimizer used for polishing > (L-BFGS-B) doesn't seem to be able to take the energy from 5645 to 5642. > The parameters are so close to the optimum solution I'm wondering if it's a > precision problem with the numerical derivatives. I think most of those > fails could be counted as successes. > > In any case, I'm not going to comment any further, but will let the numbers > tell their own story. For each problem the percentage of successes is > reported (whether the minimizer found the global minimum), as well as the > mean number of function evaluations (irrespective of whether it was a > success or failure). > > regards, > Andrew. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141013/47306ebc/atta > chment-0001.html > > ------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > End of SciPy-Dev Digest, Vol 132, Issue 15 > ****************************************** > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlucente at pipeline.com Tue Oct 14 07:20:02 2014 From: rlucente at pipeline.com (Robert Lucente - Pipeline) Date: Tue, 14 Oct 2014 07:20:02 -0400 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation Message-ID: <026801cfe7a0$cca235c0$65e6a140$@pipeline.com> "Maybe the ?smooth" restriction in the basin hopping documentation should be removed" Instead, could we add words like if smooth ... . if non-smooth ... -----Original Message----- From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org] On Behalf Of scipy-dev-request at scipy.org Sent: Tuesday, October 14, 2014 3:54 AM To: scipy-dev at scipy.org Subject: SciPy-Dev Digest, Vol 132, Issue 16 Send SciPy-Dev mailing list submissions to scipy-dev at scipy.org To subscribe or unsubscribe via the World Wide Web, visit http://mail.scipy.org/mailman/listinfo/scipy-dev or, via email, send a message with subject or body 'help' to scipy-dev-request at scipy.org You can reach the person managing the list at scipy-dev-owner at scipy.org When replying, please edit your Subject line so it is more specific than "Re: Contents of SciPy-Dev digest..." Today's Topics: 1. scipy.optimize.anneal - deprecation (Robert Lucente - Pipeline) 2. Re: scipy.optimize.anneal - deprecation (Jacob Stevenson) ---------------------------------------------------------------------- Message: 1 Date: Mon, 13 Oct 2014 23:45:33 -0400 From: "Robert Lucente - Pipeline" Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation To: Cc: Robert Gmail Backup 1 Lucente Gmail Backup 1 Message-ID: <022901cfe761$4f9f07e0$eedd17a0$@pipeline.com> Content-Type: text/plain; charset="us-ascii" Hummm. I think I am missing something fundamental. 
The official doco for scipy.optimize.basinhopping
(http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.optimize.basinhopping.html)
states "Basin-hopping is a stochastic algorithm which attempts to find the
global minimum of a smooth scalar function". The key word to note is smooth.
The non-math version of smooth is that there are no gaps and no breaks. Also,
there are no kinks and no corners. For a more in-depth explanation of this,
refer to pages 297-298 of Optimization Modeling with Spreadsheets, Second
Edition by Kenneth R. Baker. The math version is that the functions are
continuous and at least twice continuously differentiable. Please note that
the second derivative must also be continuous. For a more in-depth
explanation of this, refer to pages 11-12 of Introduction to Optimum Design,
Third Edition by Jasbir Arora.

What happens when the problem you are dealing with is not smooth? You can't
use basin hopping. For example, a price schedule with a price break is a
standard example of a non-smooth function. My personal and immediate
non-smooth problem is placing advertising in available time slots.

Also, the SciPy lecture notes
(http://scipy-lectures.github.io/advanced/mathematical_optimization/#smooth-and-non-smooth-problems)
state "2.7.1.2. Smooth and non-smooth problems ... Optimizing smooth
functions is easier".

Simulated annealing is not perfect. It does have issues, like having a lot of
knobs to turn.

Benchmarks are a tough thing. My only comment is that smooth and non-smooth
should be two different sets of benchmarks. BTW, I love the comment "I got
fed up with the current attitude of most mathematicians/numerical
optimization experts, who tend to demonstrate the advantages of an algorithm
based on "elapsed time" or "CPU time" or similar meaningless performance
indicators".

Minor nit: I think that there is a disconnect between the code for SA and its
doco. The doco says "T0 ... initial temperature. If None, then estimate it as
1.2 times the largest cost-function deviation over random points in the
box-shaped region specified by the `lower, upper` input parameters." but the
code says "self.T0 = (fmax-fmin)*1.5".

Would you be willing to enlarge the references for SA to include some of the
references at Simulated Annealing (SA) for Mathematical Optimization
(http://rlucente.blogspot.com/2014/09/simulated-annealing-sa-for-mathematical.html)?

Is it possible to create an example where an end user could pass in a custom
schedule to the simulated annealing code?

> comment on the corresponding changeset on github

How do I execute on this?
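To make the T0 nit above concrete, here is a small illustrative sketch. The
helper names are hypothetical and nothing below is taken from scipy's source;
"largest cost-function deviation" is read as max minus min over the sampled
points, which is an interpretation rather than something the docstring spells
out.

    # Illustrative only: hypothetical helpers contrasting the documented T0
    # estimate with the formula quoted from the code.
    import numpy as np

    def t0_as_documented(cost, lower, upper, nsamples=100):
        # "1.2 times the largest cost-function deviation over random points
        #  in the box-shaped region specified by `lower, upper`"
        lower = np.asarray(lower, dtype=float)
        upper = np.asarray(upper, dtype=float)
        points = lower + np.random.rand(nsamples, lower.size) * (upper - lower)
        values = np.array([cost(p) for p in points])
        return 1.2 * (values.max() - values.min())

    def t0_as_implemented(fmax, fmin):
        # the line quoted from the implementation
        return (fmax - fmin) * 1.5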
------------------------------

Message: 2
Date: Tue, 14 Oct 2014 08:53:37 +0100
From: Jacob Stevenson
Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation
To: SciPy Developers List
Content-Type: text/plain; charset="windows-1252"

I wrote the documentation for basin hopping and added the "smooth" comment.
Basinhopping is a very simple algorithm, it's essentially just a Monte Carlo
routine with local minimisation.
All the hard work is done by the local minimizer, so its performance is only
as good as the local minimiser. Most of the local minimisers in scipy
(L-BFGS-B, CG, etc.) are designed to work with smooth functions, so
basinhopping will really only work well on smooth functions if one of those
minimisers is used. I can't think of a reason that basin hopping wouldn't
work on non-smooth functions if a non-smooth minimiser were used (Powell,
COBYLA, etc.). I have little direct experience with this so I can't say for
sure. Maybe the "smooth" restriction in the basin hopping documentation
should be removed.

> Benchmarks are a tough thing.

Very true. Andrea's collection of benchmark functions is great
(http://infinity77.net/global_optimization/multidimensional.html), but almost
all of them are low dimensional (2-5 dimensions). I primarily deal with
optimisation problems with greater than 100 dimensions. An optimisation
routine which works well in low dimensions will not necessarily scale well to
high dimensions.

> My only comment is that smooth and non-smooth should be 2 different
> sets of benchmarks

This is a good idea

Jake
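As an illustration of the combination described above, here is a minimal
sketch: basinhopping driving a derivative-free local minimiser (Powell) on a
non-smooth cost of the price-break kind mentioned earlier in the thread. The
objective and every parameter value are invented for the example; nothing
here comes from scipy's benchmark suite.

    # Illustrative sketch only; objective and parameters are made up.
    import numpy as np
    from scipy.optimize import basinhopping

    def unit_price(q):
        # price break at 100 units: the resulting cost is non-smooth there
        return 5.0 if q <= 100 else 4.0

    def total_cost(x):
        # two order quantities; the 2000/q term acts as a per-order fixed
        # cost so that each quantity has a finite optimum
        return sum(q * unit_price(q) + 2000.0 / max(q, 1e-9) for q in x)

    x0 = np.array([10.0, 150.0])
    # Powell needs no derivatives, so it tolerates the break in the cost.
    result = basinhopping(total_cost, x0,
                          minimizer_kwargs={"method": "Powell"},
                          niter=50, stepsize=25.0)
    print(result.x, result.fun)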
but the code says "self.T0 = (fmax-fmin)*1.5" > > Would you be willing to enlarge the references for SA to include some > the references at Simulated Annealing (SA) for Mathematical > Optimization > (http://rlucente.blogspot.com/2014/09/simulated-annealing-sa-for-mathe > matica > l.html). > > Is it possible to create an example where an end user could pass in a > custom schedule to the simulated annealing code? > >> comment on the corresponding changeset on github > How do I execute on this? > > -----Original Message----- > From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org] > On Behalf Of scipy-dev-request at scipy.org > Sent: Monday, October 13, 2014 1:00 PM > To: scipy-dev at scipy.org > Subject: SciPy-Dev Digest, Vol 132, Issue 15 > > Send SciPy-Dev mailing list submissions to > scipy-dev at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at scipy.org > > You can reach the person managing the list at > scipy-dev-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than > "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. Re: scipy.optimize.anneal - deprecation (Derek Homeier) > 2. Re: scipy.optimize.anneal - deprecation (Charles R Harris) > 3. Re: scipy.optimize.anneal - deprecation (Andrew Nelson) > 4. Re: scipy.optimize.anneal - deprecation (Andrew Nelson) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Sun, 12 Oct 2014 21:40:33 +0200 > From: Derek Homeier > Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation > To: SciPy Developers List > Message-ID: > > > Content-Type: text/plain; charset=windows-1252 > > Hi Robert, > > On 12 Oct 2014, at 12:03 am, Robert Lucente - Pipeline > wrote: > >> Did not hear back. Not sure how to interpret that. Would really like >> to > know why anneal was deprecated and why recommend basin hopping which > won?t apply for discrete optimization. I realize that simulated > annealing requires a lot of tweeking. >> > I?d think this should be the right forum to bring up this issue; the > idea to replace anneal with basin hopping seems to have been > originally brought up in this thread > http://mail.scipy.org/pipermail/scipy-dev/2012-October/018032.html > which also discusses some of the former?s perceived shortcomings, but > I don?t know how the decision eventually emerged. > As another option you might perhaps comment on the corresponding > changeset on github and try to start a discussion there. > > HTH > Derek > > > > ------------------------------ > > Message: 2 > Date: Sun, 12 Oct 2014 17:27:07 -0600 > From: Charles R Harris > Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On Sun, Oct 12, 2014 at 1:40 PM, Derek Homeier < > derek at astro.physik.uni-goettingen.de> wrote: > >> Hi Robert, >> >> On 12 Oct 2014, at 12:03 am, Robert Lucente - Pipeline < >> rlucente at pipeline.com> wrote: >> >>> Did not hear back. Not sure how to interpret that. Would really like >>> to >> know why anneal was deprecated and why recommend basin hopping which >> won?t apply for discrete optimization. I realize that simulated >> annealing requires a lot of tweeking. 
>>> >> I?d think this should be the right forum to bring up this issue; the >> idea to replace anneal with basin hopping seems to have been >> originally brought up in this thread >> http://mail.scipy.org/pipermail/scipy-dev/2012-October/018032.html >> which also discusses some of the former?s perceived shortcomings, but >> I don?t know how the decision eventually emerged. >> As another option you might perhaps comment on the corresponding >> changeset on github and try to start a discussion there. >> >> > Looking at the tests at > http://infinity77.net/global_optimization/multidimensional.html the > scipy version of simulated annealing, SIMANN, performs horribly. > However, ANA seems to do pretty well. So the problem with scipy seems > to have been a poor version of the algorithm and probably we should just fix that. > > Chuck > -------------- next part -------------- An HTML attachment was > scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141012/eed240c > 1/atta > chment-0001.html > > ------------------------------ > > Message: 3 > Date: Mon, 13 Oct 2014 10:37:14 +1100 > From: Andrew Nelson > Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On 13 October 201 >> >> Looking at the tests at >> http://infinity77.net/global_optimization/multidimensional.html the >> scipy version of simulated annealing, SIMANN, performs horribly. >> However, ANA seems to do pretty well. So the problem with scipy seems >> to have been a poor version of the algorithm and probably we should >> just > fix that. >> >> Chuck >> > > Andrea Gavana kindly made the code to the benchmark suite available. > I'm modifying to a form for inclusion into scipy: > https://github.com/andyfaff/scipy/tree/go_benchmark/scipy/optimize/ben > chmark > s. > As we speak I'm running the test optimization functions (I added a few > and modified some) for basinhopping, differential_evolution and anneal. > I'll put these up in a Gist in the next day or so. > > > -- > _____________________________________ > Dr. Andrew Nelson > > > _____________________________________ > -------------- next part -------------- An HTML attachment was > scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141013/0e913ac > 5/atta > chment-0001.html > > ------------------------------ > > Message: 4 > Date: Mon, 13 Oct 2014 16:17:09 +1100 > From: Andrew Nelson > Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation > To: SciPy Developers List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On 13 October 2014 10:37, Andrew Nelson wrote: > >> On 13 October 201 >>> >>> Looking at the tests at >>> http://infinity77.net/global_optimization/multidimensional.html the >>> scipy version of simulated annealing, SIMANN, performs horribly. >>> However, ANA seems to do pretty well. So the problem with scipy >>> seems to have been > a >>> poor version of the algorithm and probably we should just fix that. >>> >>> Chuck >>> >> >> Andrea Gavana kindly made the code to the benchmark suite available. >> I'm modifying to a form for inclusion into scipy: >> > https://github.com/andyfaff/scipy/tree/go_benchmark/scipy/optimize/ben > chmark > s. >> As we speak I'm running the test optimization functions (I added a >> few and modified some) for basinhopping, differential_evolution and anneal. >> I'll put these up in a Gist in the next day or so. 
>> >> > > As promised, the first run through of the test functions for global > optimizers is at: https://gist.github.com/andyfaff/24c96a3d5dbc7b0272b2. > This was for a total of 150 random starting vectors. There are still > some things to be ironed out, particularly how a success is judged > (atol vs rtol, etc). For example, in the Thurber problem (a NIST > regression > standard) a lot of failures are because the minimizer used for > polishing > (L-BFGS-B) doesn't seem to be able to take the energy from 5645 to 5642. > The parameters are so close to the optimum solution I'm wondering if > it's a precision problem with the numerical derivatives. I think most > of those fails could be counted as successes. > > In any case, I'm not going to comment any further, but will let the > numbers tell their own story. For each problem the percentage of > successes is reported (whether the minimizer found the global > minimum), as well as the mean number of function evaluations > (irrespective of whether it was a success or failure). > > regards, > Andrew. > -------------- next part -------------- An HTML attachment was > scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20141013/47306eb > c/atta > chment-0001.html > > ------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > End of SciPy-Dev Digest, Vol 132, Issue 15 > ****************************************** > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.scipy.org/pipermail/scipy-dev/attachments/20141014/e9d62449/atta chment.html ------------------------------ _______________________________________________ SciPy-Dev mailing list SciPy-Dev at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-dev End of SciPy-Dev Digest, Vol 132, Issue 16 ****************************************** From pav at iki.fi Wed Oct 15 16:43:52 2014 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 15 Oct 2014 23:43:52 +0300 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation In-Reply-To: References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> Message-ID: 13.10.2014, 02:27, Charles R Harris kirjoitti: [clip] > Looking at the tests at > http://infinity77.net/global_optimization/multidimensional.html the scipy > version of simulated annealing, SIMANN, performs horribly. However, ANA > seems to do pretty well. So the problem with scipy seems to have been a > poor version of the algorithm and probably we should just fix that. One of the reasons leading to the deprecation was the sub-optimality of the implementation. As the situation was not fixed in several years, it's better to deprecate it than ship it. -- Pauli Virtanen From jtaylor.debian at googlemail.com Wed Oct 15 19:13:11 2014 From: jtaylor.debian at googlemail.com (Julian Taylor) Date: Thu, 16 Oct 2014 01:13:11 +0200 Subject: [SciPy-Dev] 0.14.1 and 0.15.0 release schedule In-Reply-To: References: Message-ID: <543EFF87.2030101@googlemail.com> Hi, whats the status on 0.14.1? Is this still planned? Debian is freezing on the 5. of November and I'd prefer to have 0.14.1 instead of 0.14.0 in. Though I could also package a git snapshot if you don't think you'll do a release before then, just let me know. 
Cheers, Julian On 18.09.2014 00:50, Ralf Gommers wrote: > > > On Tue, Sep 9, 2014 at 9:28 PM, Warren Weckesser > > wrote: > > > > On Fri, Sep 5, 2014 at 3:07 PM, Ralf Gommers > wrote: > > Hi all, > > It's about time for a couple of releases. > > 0.14.1 fixes some regressions as well as a lot of test failures > against numpy 1.9.0 (due to __numpy_ufunc__) removal. All known > regressions are fixed now, so I plan to tag it this weekend. If > there's anything really urgent that still needs to go in, please > speak up. > > For 0.15.0 there are still 17 issues/PRs, but most of those we > should be able to merge/fix relatively quickly. I propose the > following release schedule: > > - beta 1: 16 Sep > - release candidate 1: 4 Oct > - release candidate 2: 18 Oct (if needed) > - final release: 25 Oct > > Does that work for everyone? > > > > Works for me. Thanks, Ralf. > > > There are still a few issues to be solved (see > https://github.com/scipy/scipy/milestones/0.15.0), so the timing has to > shift a bit. I'll be offline for a week, so probably we can do the beta > at the end of this month. > > Ralf > > > > > Warren > > > > Are there issues/PRs that aren't under the 0.15.0 milestone but > have to go in? > > Cheers, > Ralf > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From sturla.molden at gmail.com Thu Oct 16 18:14:31 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Thu, 16 Oct 2014 22:14:31 +0000 (UTC) Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> Message-ID: <1871762558435190280.165256sturla.molden-gmail.com@news.gmane.org> Pauli Virtanen wrote: > One of the reasons leading to the deprecation was the sub-optimality of > the implementation. As the situation was not fixed in several years, > it's better to deprecate it than ship it. Considering how candiate solutions are generated in SciPy's implementation, it should come as no surprise that it fails. Sturla From sturla.molden at gmail.com Thu Oct 16 18:52:21 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Thu, 16 Oct 2014 22:52:21 +0000 (UTC) Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> <1871762558435190280.165256sturla.molden-gmail.com@news.gmane.org> Message-ID: <952036024435191983.193079sturla.molden-gmail.com@news.gmane.org> Sturla Molden wrote: > Pauli Virtanen wrote: > >> One of the reasons leading to the deprecation was the sub-optimality of >> the implementation. As the situation was not fixed in several years, >> it's better to deprecate it than ship it. > > Considering how candiate solutions are generated in SciPy's implementation, > it should come as no surprise that it fails. Also, take a look at https://github.com/scipy/scipy/blob/master/scipy/optimize/anneal.py#L82 :-D hahaha, we should rather call this simulated meltdown! Instead of cooling down it heats up, until it's infinitely hot... I suggest we start by fixing the disastrous typo. 
Sturla From robert.kern at gmail.com Thu Oct 16 20:44:20 2014 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 16 Oct 2014 17:44:20 -0700 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation In-Reply-To: <952036024435191983.193079sturla.molden-gmail.com@news.gmane.org> References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> <1871762558435190280.165256sturla.molden-gmail.com@news.gmane.org> <952036024435191983.193079sturla.molden-gmail.com@news.gmane.org> Message-ID: On Thu, Oct 16, 2014 at 3:52 PM, Sturla Molden wrote: > Sturla Molden wrote: >> Pauli Virtanen wrote: >> >>> One of the reasons leading to the deprecation was the sub-optimality of >>> the implementation. As the situation was not fixed in several years, >>> it's better to deprecate it than ship it. >> >> Considering how candiate solutions are generated in SciPy's implementation, >> it should come as no surprise that it fails. > > Also, take a look at > > https://github.com/scipy/scipy/blob/master/scipy/optimize/anneal.py#L82 > > :-D > > hahaha, we should rather call this simulated meltdown! Instead of cooling > down it heats up, until it's infinitely hot... > > I suggest we start by fixing the disastrous typo. Your PR is eagerly awaited. -- Robert Kern From sturla.molden at gmail.com Thu Oct 16 20:56:09 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Fri, 17 Oct 2014 00:56:09 +0000 (UTC) Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation References: <097501cfe59f$2150d4e0$63f27ea0$@pipeline.com> <1871762558435190280.165256sturla.molden-gmail.com@news.gmane.org> <952036024435191983.193079sturla.molden-gmail.com@news.gmane.org> Message-ID: <1102955354435200090.816461sturla.molden-gmail.com@news.gmane.org> Sorry, ignore my previous post. It was too late in the evening for conscious thought processes. Sturls Sturla Molden wrote: > Sturla Molden wrote: >> Pauli Virtanen wrote: >> >>> One of the reasons leading to the deprecation was the sub-optimality of >>> the implementation. As the situation was not fixed in several years, >>> it's better to deprecate it than ship it. >> >> Considering how candiate solutions are generated in SciPy's implementation, >> it should come as no surprise that it fails. > > Also, take a look at > > https://github.com/scipy/scipy/blob/master/scipy/optimize/anneal.py#L82 > > :-D > > hahaha, we should rather call this simulated meltdown! Instead of cooling > down it heats up, until it's infinitely hot... > > I suggest we start by fixing the disastrous typo. > > > Sturla From rlucente at pipeline.com Fri Oct 17 07:36:55 2014 From: rlucente at pipeline.com (Robert Lucente - Pipeline) Date: Fri, 17 Oct 2014 07:36:55 -0400 Subject: [SciPy-Dev] scipy.optimize.anneal - deprecation Message-ID: <018201cfe9fe$a76060b0$f6212210$@pipeline.com> > One of the reasons leading to the deprecation was the sub-optimality >of the implementation. As the situation was not fixed in several years, >it's better to deprecate it than ship it. Is it better to have a sub-optimal implementation and documented as such as opposed to no implementation at all? Personally, working through the Python code was a big help to really learn the stuff. However, what do I know, I am a newbie and not sure what the tribal norms are either. Either way, thanks for the all the help. 
From gael.varoquaux at normalesup.org  Fri Oct 17 07:41:16 2014
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 17 Oct 2014 13:41:16 +0200
Subject: Re: [SciPy-Dev] scipy.optimize.anneal - deprecation
In-Reply-To: <018201cfe9fe$a76060b0$f6212210$@pipeline.com>
Message-ID: <20141017114116.GP30227@phare.normalesup.org>

On Fri, Oct 17, 2014 at 07:36:55AM -0400, Robert Lucente - Pipeline wrote:
> > One of the reasons leading to the deprecation was the sub-optimality
> > of the implementation. As the situation was not fixed in several years,
> > it's better to deprecate it than ship it.

> Is it better to have a sub-optimal implementation and documented as such as
> opposed to no implementation at all?

It tarnishes the reputation of a library and creates a huge support
load on the mailing list/developpers.

G

From matthew.brett at gmail.com  Fri Oct 17 13:13:23 2014
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 17 Oct 2014 10:13:23 -0700
Subject: Re: [SciPy-Dev] Use vecLibFort instead of Accelerate?

Sturla,

On Sat, Oct 11, 2014 at 2:56 PM, Matthew Brett wrote:
> On Sat, Oct 11, 2014 at 2:04 PM, Sturla Molden wrote:
>> Matthew Brett wrote:
>>
>>> 1) Pull in our own versions of the sgemv workaround in scipy;
>>> 2) Use vecLibFort to replace our own Accelerate wrappers.
>>>
>>> Having the Accelerate wrappers and fixes go in one place seems like a
>>> good idea in general, but I know nothing about the issues, I'm afraid.
>>
>> A global fix would solve the problem everywhere. Probably easier to
>> maintain.
>>
>>> Sturla - I saw you asking the author of vecLibFort about a cblas_sgemv
>>> fix at [1] - would the plan be to use vecLibFort in numpy also?
>>
>> That was why I asked. My current fix for NumPy adds some minor changes the
>> dotblas C code. I would rather avoid this.
>
> Would both numpy and scipy need a copy of vecLibFort at build time?

Any progress here? It would be great to get this fix into numpy 1.9.1,

Cheers,

Matthew

From sturla.molden at gmail.com  Fri Oct 17 15:52:58 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Fri, 17 Oct 2014 19:52:58 +0000 (UTC)
Subject: Re: [SciPy-Dev] Use vecLibFort instead of Accelerate?
References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> Message-ID: <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> Matthew Brett wrote: > Any progress here? It would be great to get this fix into numpy 1.9.1, I am working on it, but I have to prioritise my time. Let's see if I can get it done over the weekend. :-) When is 1.9.1 due? I have a fix ready, but I want one which avoids *any* changes to the dotblas C code. It so robust and well-testet so it would be a shame to touch it just to silence Apple's bug. Sturla From matthew.brett at gmail.com Fri Oct 17 15:59:01 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 17 Oct 2014 12:59:01 -0700 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> Message-ID: Hi, On Fri, Oct 17, 2014 at 12:52 PM, Sturla Molden wrote: > Matthew Brett wrote: > >> Any progress here? It would be great to get this fix into numpy 1.9.1, > > I am working on it, but I have to prioritise my time. Let's see if I can > get it done over the weekend. :-) > > When is 1.9.1 due? > > I have a fix ready, but I want one which avoids *any* changes to the > dotblas C code. It so robust and well-testet so it would be a shame to > touch it just to silence Apple's bug. How about putting that fix up as a WIP pull request? Cheers, Matthew From sturla.molden at gmail.com Fri Oct 17 18:37:20 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 18 Oct 2014 00:37:20 +0200 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <96605170434745800.033492sturla.molden-gmail.com@news.gmane.org> Message-ID: On 11/10/14 20:55, Pauli Virtanen wrote: > Does the symbol used in the end depend on link order? Or does the > linker/dynamic linker always pick the static symbol over the dynamic one? Good question... I am not sure. Sturla From sturla.molden at gmail.com Fri Oct 17 18:51:08 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 18 Oct 2014 00:51:08 +0200 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> Message-ID: On 17/10/14 21:59, Matthew Brett wrote: > How about putting that fix up as a WIP pull request? It seems the dotblas module is no longer available. WTF... Ok, now I must start all over again... Sturla From sturla.molden at gmail.com Fri Oct 17 19:22:35 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 18 Oct 2014 01:22:35 +0200 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? 
In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> Message-ID: On 18/10/14 00:51, Sturla Molden wrote: > On 17/10/14 21:59, Matthew Brett wrote: > > > How about putting that fix up as a WIP pull request? > > It seems the dotblas module is no longer available. WTF... > > Ok, now I must start all over again... It's present in 1.9.x but moved to multiarray in 1.10.x Then I'll patch against 1.9 and leave 1.10.x for now. Sturla From sturla.molden at gmail.com Sat Oct 18 01:56:07 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 18 Oct 2014 07:56:07 +0200 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> Message-ID: On 17/10/14 21:59, Matthew Brett wrote: > How about putting that fix up as a WIP pull request? While I try to remember how the GIT stuff works and modify the NumPy build process, you can try this stand alone version: $ python setup.py build_ext --inplace $ python test.py If this does not segfault, be happy :) Sturla -------------- next part -------------- A non-text attachment was scrubbed... Name: demo.zip Type: application/zip Size: 18605 bytes Desc: not available URL: From sturla.molden at gmail.com Sat Oct 18 02:10:28 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 18 Oct 2014 08:10:28 +0200 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> Message-ID: On 18/10/14 07:56, Sturla Molden wrote: > $ python setup.py build_ext --inplace > $ python test.py > > If this does not segfault, be happy :) It seems some of the comments in the C code were out of date, just ignore them, or see this updated version. ;-) Sturla -------------- next part -------------- A non-text attachment was scrubbed... Name: demo.zip Type: application/zip Size: 19270 bytes Desc: not available URL: From sturla.molden at gmail.com Sat Oct 18 03:48:10 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 18 Oct 2014 09:48:10 +0200 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> Message-ID: On 17/10/14 21:59, Matthew Brett wrote: > How about putting that fix up as a WIP pull request? Here it is: https://github.com/numpy/numpy/pull/5205 Sturla From gavishnoam at gmail.com Sat Oct 18 17:35:44 2014 From: gavishnoam at gmail.com (Noam Gavish) Date: Sun, 19 Oct 2014 00:35:44 +0300 Subject: [SciPy-Dev] Advice on an open source signal processing package Message-ID: Hey, I'm an Israeli undergrad student, new to the open source community. 
I've noted that Scipy might benefit from a solid signal processing toolbox, and have began writing one. The package presents a natural interface for signals, and simplifies signal processing code. This package emerged from research in the LEIsec lab in Tel-Aviv University and is supposed to be a one stop shop for DSP in python. I think that as the module matures, if adoption is positive, it can possibly become part of SciPy. Your advice about the API, design, existing python signal processing resources, and roadmap for this module would be sincerely appreciated. (Note that the inner implementation at the moment is basic, until the interface is decided) source: https://github.com/noamg/signal_processing ipython notebook example: http://nbviewer.ipython.org/github/noamg/signal_processing/blob/master/code_examples/find_pulses.ipynb Best, Noam Gavish -- - - - If you walk the footsteps of a stranger You will learn things you never knew you never knew -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Haslwanter at fh-linz.at Sun Oct 19 16:15:12 2014 From: Thomas.Haslwanter at fh-linz.at (Haslwanter Thomas) Date: Sun, 19 Oct 2014 22:15:12 +0200 Subject: [SciPy-Dev] thLib Message-ID: <1CFD8CBC30E0454B9DFAADEB36AD1739B05CE50D2B@LNZEXCHANGE001.linz.fhooe.at> Over the last few years, I have created a collection of tools that comprise functions for - Working with quaternions and rotation matrices - Analyzing signals from inertial sensors and video systems - Working with sound - Signal processing - Interactive data analysis They are available from pypi as the package "thLib". More info is available under http://work.thaslwanter.at/thLib/html/ I would be grateful for any kind of feedback. Thomas Haslwanter -------------- next part -------------- An HTML attachment was scrubbed... URL: From larsmans at gmail.com Mon Oct 20 11:53:36 2014 From: larsmans at gmail.com (Lars Buitinck) Date: Mon, 20 Oct 2014 17:53:36 +0200 Subject: [SciPy-Dev] extended scipy.sparse.linalg LinearOperator interface Message-ID: Hi all, [TL;DR I want to extend scipy.sparse.linalg.LinearOperator and I'm looking for current usage patterns.] To solve a few problems that I and others [1,2] encountered with the scipy.sparse.linalg LinearOperator interface, I decided to expand and refactor it. In a nutshell: * Linear operators cannot be transposed. They do have an rmatvec that implements A^H * x, but no matrix-matrix multiplication version of same, so this has be implemented as a loop. * While a lot of subclasses exist in various parts of scipy.sparse.linalg, there was no documentation on how to roll your own operator by subclassing. Instead you have to call the constructor with a few functions, which become the methods on the custom operator. This doesn't scale if we want to add more methods. * The current implementation uses monkey-patching, causing memory leaks due to reference cycles and making the code hard to read. I've already submitted an early PR [3] with the main parts of my proposal: * LinearOperator is now an abstract base class. An overloaded __new__ makes sure you can still call the constructor with the old calling conventions; it returns a subclass. * Subclassing is possible (and documented): you must provide a method _matvec and optionally a few more. These get used to implement the public matvec method, which uses _matvec but adds input validation boilerplate (the "template method pattern"). 
* An "adjoint" method is added, and can be overridden by supplying an _adjoint method, to return the Hermitian adjoint (conjugate transpose) of a linear operator. This obviates the need for an rmatmat method. There's been discussion lately about merging in parts of PyOperators. I decided not to wait for this to happen, but I did have a look at their documentation and used some API conventions to make a later merge easier (or at least be compatible). I'd like feedback on this, and especially on the issue of backward compatibility with the current code. In particular, I'm not sure if the possibility to subclass LinearOperator has ever been advertised and/or exploited in third-party code. If it was, then we have to deal with that. To be sure, the currently common idioms to construct a linear operator, A = aslinearoperator(X) # ndarray or sparse matrix X and A = LinearOperator(matvec=some_callable, shape=some_tuple) still work and all current tests still pass. So I'd like to ask you: * Is my proposal useful? * How was this API used in third-party code? TIA, Lars [1] https://github.com/scipy/scipy/pull/4073 [2] https://github.com/scipy/scipy/pull/4074 [3] https://github.com/scipy/scipy/pull/4087 From sturla.molden at gmail.com Mon Oct 20 20:44:11 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Tue, 21 Oct 2014 02:44:11 +0200 Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate? In-Reply-To: <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org> <1392384412434753957.374306sturla.molden-gmail.com@news.gmane.org> <1714991790435267281.132155sturla.molden-gmail.com@news.gmane.org> Message-ID: On 17/10/14 21:52, Sturla Molden wrote: > I have a fix ready, but I want one which avoids *any* changes to the > dotblas C code. It so robust and well-testet so it would be a shame to > touch it just to silence Apple's bug. This is now what this PR does: https://github.com/numpy/numpy/pull/5205 Sturla From rlucente at pipeline.com Mon Oct 20 21:47:04 2014 From: rlucente at pipeline.com (Robert Lucente - Pipeline) Date: Mon, 20 Oct 2014 21:47:04 -0400 Subject: [SciPy-Dev] thLib Message-ID: <04c201cfecd0$ea44e4d0$beceae70$@pipeline.com> Other Python signal processing resources that might be of interest 1) Python for Signal Processing by Unpingco, Jos? 2) Gavish, Noam (https://github.com/noamg/signal_processing) 3) Think DSP by Allen B Downey (http://greenteapress.com/thinkdsp/index.html) -----Original Message----- From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org] On Behalf Of scipy-dev-request at scipy.org Sent: Monday, October 20, 2014 1:00 PM To: scipy-dev at scipy.org Subject: SciPy-Dev Digest, Vol 132, Issue 23 Send SciPy-Dev mailing list submissions to scipy-dev at scipy.org To subscribe or unsubscribe via the World Wide Web, visit http://mail.scipy.org/mailman/listinfo/scipy-dev or, via email, send a message with subject or body 'help' to scipy-dev-request at scipy.org You can reach the person managing the list at scipy-dev-owner at scipy.org When replying, please edit your Subject line so it is more specific than "Re: Contents of SciPy-Dev digest..." Today's Topics: 1. thLib (Haslwanter Thomas) 2. 
From sturla.molden at gmail.com  Mon Oct 20 20:44:11 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Tue, 21 Oct 2014 02:44:11 +0200
Subject: Re: [SciPy-Dev] Use vecLibFort instead of Accelerate?

On 17/10/14 21:52, Sturla Molden wrote:
> I have a fix ready, but I want one which avoids *any* changes to the
> dotblas C code. It is so robust and well-tested so it would be a shame to
> touch it just to silence Apple's bug.

This is now what this PR does:

https://github.com/numpy/numpy/pull/5205

Sturla

From rlucente at pipeline.com  Mon Oct 20 21:47:04 2014
From: rlucente at pipeline.com (Robert Lucente - Pipeline)
Date: Mon, 20 Oct 2014 21:47:04 -0400
Subject: [SciPy-Dev] thLib
Message-ID: <04c201cfecd0$ea44e4d0$beceae70$@pipeline.com>

Other Python signal processing resources that might be of interest:

1) Python for Signal Processing by Unpingco, José
2) Gavish, Noam (https://github.com/noamg/signal_processing)
3) Think DSP by Allen B. Downey (http://greenteapress.com/thinkdsp/index.html)

-----Original Message-----
From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org]
On Behalf Of scipy-dev-request at scipy.org
Sent: Monday, October 20, 2014 1:00 PM
To: scipy-dev at scipy.org
Subject: SciPy-Dev Digest, Vol 132, Issue 23

Today's Topics:

   1. thLib (Haslwanter Thomas)
   2. extended scipy.sparse.linalg LinearOperator interface (Lars Buitinck)

End of SciPy-Dev Digest, Vol 132, Issue 23
******************************************

From jtaylor.debian at googlemail.com  Wed Oct 22 17:58:19 2014
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Wed, 22 Oct 2014 23:58:19 +0200
Subject: Re: [SciPy-Dev] 0.14.1 and 0.15.0 release schedule
In-Reply-To: <543EFF87.2030101@googlemail.com>
References: <543EFF87.2030101@googlemail.com>
Message-ID: <5448287B.1020801@googlemail.com>

ping, I'd need to know soon what the 0.14.x plans are, just whether there
will be another 0.14 release or not, I don't need any specific dates.

On 16.10.2014 01:13, Julian Taylor wrote:
> Hi,
> whats the status on 0.14.1? Is this still planned?
>
> Debian is freezing on the 5. of November and I'd prefer to have 0.14.1
> instead of 0.14.0 in. Though I could also package a git snapshot if you
> don't think you'll do a release before then, just let me know.
>
> Cheers,
> Julian
>
> On 18.09.2014 00:50, Ralf Gommers wrote:
>> On Tue, Sep 9, 2014 at 9:28 PM, Warren Weckesser wrote:
>>
>> On Fri, Sep 5, 2014 at 3:07 PM, Ralf Gommers wrote:
>>
>> Hi all,
>>
>> It's about time for a couple of releases.
>>
>> 0.14.1 fixes some regressions as well as a lot of test failures
>> against numpy 1.9.0 (due to __numpy_ufunc__) removal. All known
>> regressions are fixed now, so I plan to tag it this weekend. If
>> there's anything really urgent that still needs to go in, please
>> speak up.
>>
>> For 0.15.0 there are still 17 issues/PRs, but most of those we
>> should be able to merge/fix relatively quickly. I propose the
>> following release schedule:
>>
>> - beta 1: 16 Sep
>> - release candidate 1: 4 Oct
>> - release candidate 2: 18 Oct (if needed)
>> - final release: 25 Oct
>>
>> Does that work for everyone?
>>
>> Works for me. Thanks, Ralf.
>>
>> There are still a few issues to be solved (see
>> https://github.com/scipy/scipy/milestones/0.15.0), so the timing has to
>> shift a bit. I'll be offline for a week, so probably we can do the beta
>> at the end of this month.
>>
>> Ralf
>>
>> Warren
>>
>> Are there issues/PRs that aren't under the 0.15.0 milestone but
>> have to go in?
>> >> Cheers, >> Ralf >> >> > From cimrman3 at ntc.zcu.cz Fri Oct 24 09:55:02 2014 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 24 Oct 2014 15:55:02 +0200 Subject: [SciPy-Dev] removing umfpack wrapper In-Reply-To: <528549FD.7080000@ntc.zcu.cz> References: <528549FD.7080000@ntc.zcu.cz> Message-ID: <544A5A36.10403@ntc.zcu.cz> On 11/14/2013 11:09 PM, Robert Cimrman wrote: > On 11/14/2013 09:59 PM, Ralf Gommers wrote: >> On Tue, Nov 12, 2013 at 11:57 PM, Robert Cimrman wrote: >> >>> On 11/12/2013 11:22 PM, Ralf Gommers wrote: >>>> On Tue, Nov 12, 2013 at 10:20 AM, Robert Cimrman >>> wrote: >>>> >>>>> Hi Ralf, >>>>> >>>>> On 11/10/2013 04:36 PM, Ralf Gommers wrote: >>>>>> Hi all, >>>>>> >>>>>> This is long overdue, but I think it's time to remove the UMFPACK >>> wrapper >>>>>> before the next release. I'd like some feedback on how to do that and >>> to >>>>>> what package (if any) to point existing users. >>>>>> >>>>>> As for why to remove the wrapper, see: >>>>>> https://github.com/scipy/scipy/issues/3002 >>>>>> http://permalink.gmane.org/gmane.comp.python.scientific.user/20451 >>>>>> Short summary: UMFPACK changed from a BSD-compatible license to GPL at >>>>> some >>>>>> point. >>>>>> >>>>>> The deprecation warning in sparse.linalg has been referring people to >>>>>> scikits.umfpack until now, however that package has disappeared >>>>> completely >>>>>> as far as I can tell. I suspect it was in the old scikits svn repo and >>>>> was >>>>>> never moved before that was killed. The alternatives seems to be >>>>> Pysparse ( >>>>> >>>>> I missed that the scikit was lost in transition... >>>>> >>>>> How much work would be to move it to the new site - move from SVN to >>> git, >>>>> and? >>>>> I would be willing to do it, if there is interest. >>>>> >>>> >>>> Hi Robert, I think the moving wouldn't be a lot of work (assuming svn >>>> access can still be arranged). If you'd revive the scikit it's not a >>>> one-time effort though - it's only useful if the code is being maintained >>>> imho. That can still be low effort perhaps, but with reviewing some PRs, >>>> maintenance releases, putting it on pypi, etc. I'd expect at least an >>> hour >>>> a week or so. >>>> >>>> Are there scikits.umfpack users now? Would it make sense to salvage >>>> whatever is useful from the scikit and contribute it to pysparse instead? >>> >>> We are using it in sfepy as the default direct solver. It seems to me that >>> the >>> pysparse interface requires the input matrix to be in the LL (linked list) >>> format, which is unfortunate for us, as we use CSR. The scipy (and former >>> scikit) umfpack wrappers have used CSR, so no copies were necessary. (After >>> all, I have created the original scipy wrappers to be used from sfepy in >>> the >>> first place...) >>> >> >> CSR is much better supported by scipy.sparse, so that's a good reason for >> the scikits.umfpack interface to exist I'd think. > > Ok, so I will try to migrate it to the new scikits site. > >> >>> So in case others are not interested in having the scikit, >> >> >> I'm not sure that's the case. Would be great to have some more feedback >> here - I don't have a strong opinion either way. > > I think most people still use it transparently via the wrappers in scipy, > without installing the (old) scikit, just like me. When the wrappers disappear > they would seek a replacement. 
> >> >>> I can see two possible solutions from my perspective: either enhance >>> pysparse interface to >>> allow CSR as well, or move the wrappers to sfepy (which I maintain anyway, >>> but >>> swig -> cython conversion would be needed). Not sure yet which solution I >>> prefer. >>> >> >> Moving them into SfePy would work for SfePy but as an optional dependency >> for scipy that would be weird imho, given the rather heavy list of >> dependencies of SfePy. > > Yes, that was just an idea for the case nobody else uses umfpack with scipy. I > do not think that is true, although the users have remained silent here. There > might be more feedback on scipy-user. > > Anyway, I will migrate the scikit so that there is a place to go after the > removal. The maintenance should not be that demanding as the umfpack library > API has been quite stable.

FYI: https://github.com/rc/scikit-umfpack (sources taken from the last version that was in scipy)

r.

From jtaylor.debian at googlemail.com Sun Oct 26 13:13:03 2014
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Sun, 26 Oct 2014 18:13:03 +0100
Subject: [SciPy-Dev] ANN: NumPy 1.9.1 release candidate
Message-ID: <544D2B9F.9020905@googlemail.com>

Hi,

We have finally finished the first release candidate of NumPy 1.9.1, sorry for the week's delay. The 1.9.1 release will as usual be a bugfix-only release to the 1.9.x series. The tarballs and win32 binaries are available on sourceforge: https://sourceforge.net/projects/numpy/files/NumPy/1.9.1rc1/

If no regressions show up, the final release is planned next week. The upgrade is recommended for all users of the 1.9.x series.

The following issues have been fixed:

* gh-5184: restore linear edge behaviour of gradient to as it was in < 1.9. The second order behaviour is available via the `edge_order` keyword
* gh-4007: workaround Accelerate sgemv crash on OSX 10.9
* gh-5100: restore object dtype inference from iterable objects without `len()`
* gh-5163: avoid gcc-4.1.2 (red hat 5) miscompilation causing a crash
* gh-5138: fix nanmedian on arrays containing inf
* gh-5203: copy inherited masks in MaskedArray.__array_finalize__
* gh-2317: genfromtxt did not handle filling_values=0 correctly
* gh-5067: restore api of npy_PyFile_DupClose in python2
* gh-5063: cannot convert invalid sequence index to tuple
* gh-5082: Segmentation fault with argmin() on unicode arrays
* gh-5095: don't propagate subtypes from np.where
* gh-5104: np.inner segfaults with SciPy's sparse matrices
* gh-5136: Import dummy_threading if importing threading fails
* gh-5148: Make numpy import when run with Python flag '-OO'
* gh-5147: Einsum double contraction in particular order causes ValueError
* gh-479: Make f2py work with intent(in out)
* gh-5170: Make python2 .npy files readable in python3
* gh-5027: Use 'll' as the default length specifier for long long
* gh-4896: fix build error with MSVC 2013 caused by C99 complex support
* gh-4465: Make PyArray_PutTo respect writeable flag
* gh-5225: fix crash when using arange on datetime without dtype set
* gh-5231: fix build in c99 mode

Source tarballs, windows installers and release notes can be found at https://sourceforge.net/projects/numpy/files/NumPy/1.9.1rc1/

Cheers,
Julian Taylor
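As a quick illustration of the first item in that list, the edge_order keyword controls how the two boundary points of the gradient are estimated; a small sketch (the interior points use central differences either way):

    import numpy as np

    # Samples of f(x) = x**2 on a unit grid; the exact derivative is 2*x.
    f = np.array([0., 1., 4., 9., 16.])

    g1 = np.gradient(f)                # first-order (linear) edges, as in < 1.9
    g2 = np.gradient(f, edge_order=2)  # second-order edges, opt-in via keyword

    print(g1)  # edge values come from one-sided first-order differences
    print(g2)  # edge values match the exact derivative for this quadratic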
From sturla.molden at gmail.com Sun Oct 26 18:16:14 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Sun, 26 Oct 2014 23:16:14 +0100
Subject: [SciPy-Dev] 0.14.1 and 0.15.0 release schedule
In-Reply-To: <5448287B.1020801@googlemail.com>
References: <543EFF87.2030101@googlemail.com> <5448287B.1020801@googlemail.com>
Message-ID:

On 22/10/14 23:58, Julian Taylor wrote: > ping, I'd need to know soon what the 0.14.x plans are, just whether there > will be another 0.14 release or not, I don't need any specific dates.

I'd also like to know this. I am thinking of fixing the SGEMV problem in SciPy as well. Now that the C code is in NumPy it will just take a few minutes of coding. (Just reuse the C file and update setup.py for linalg, arpack and superlu.)

Should I send the PR against maintenance/0.14.x or master, or both?

Sturla

From sturla.molden at gmail.com Sun Oct 26 19:32:36 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 27 Oct 2014 00:32:36 +0100
Subject: [SciPy-Dev] Travis?
Message-ID:

I posted a WIP PR to SciPy but there is no Travis build.

https://github.com/scipy/scipy/pull/4104

Do we use Travis on SciPy?

(I should remember, but I don't.)

Sturla

From sturla.molden at gmail.com Sun Oct 26 19:37:14 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 27 Oct 2014 00:37:14 +0100
Subject: [SciPy-Dev] Travis?
In-Reply-To:
References:
Message-ID:

On 27/10/14 00:32, Sturla Molden wrote: > I posted a WIP PR to SciPy but there is no Travis build. > > https://github.com/scipy/scipy/pull/4104 > > Do we use Travis on SciPy? > > (I should remember, but I don't.)

Ignore this, it suddenly turned up.

Sturla

From sturla.molden at gmail.com Sun Oct 26 21:13:04 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 27 Oct 2014 02:13:04 +0100
Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate?
In-Reply-To:
References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org>
Message-ID:

On 11/10/14 22:28, Matthew Brett wrote: > Do I understand right that we have two options for this? > > 1) Pull in our own versions of the sgemv workaround in scipy; > 2) Use vecLibFort to replace our own Accelerate wrappers.

I went for 1) because it required fewer changes to SciPy, and also because the sgemv fix in vecLibFort is not correct. Now we have:

https://github.com/scipy/scipy/pull/4104
https://github.com/scipy/scipy/pull/4105

Adding tests to arpack and SuperLU to see if the Accelerate bug is there is very difficult. However, it should be possible to add a test case for scipy.linalg.blas.sgemv that should tickle the Accelerate segfault if it's there. However, we already have that test case in NumPy 1.9.1. Any segfault in SciPy due to numpy.dot triggering this bug should now be taken care of in NumPy 1.9.1:

https://github.com/numpy/numpy/pull/5223

We still need to port this to NumPy 1.10.x, which requires a bigger change to the code because the dotblas module is removed and integrated into multiarray.

Sturla
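For context, scipy.linalg.blas.sgemv is the single-precision BLAS matrix-vector product affected by the Accelerate bug. A minimal call looks roughly like this; it is only a sketch of the interface, not of the specific stride/alignment pattern that triggers the OSX 10.9 segfault:

    import numpy as np
    from scipy.linalg.blas import sgemv

    # Computes y = alpha * A.dot(x) in single precision via BLAS sgemv.
    A = np.array([[1., 2.],
                  [3., 4.]], dtype=np.float32, order='F')
    x = np.array([1., 1.], dtype=np.float32)

    y = sgemv(1.0, A, x)
    print(y)  # [ 3.  7.]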
From sturla.molden at gmail.com Mon Oct 27 04:27:16 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 27 Oct 2014 09:27:16 +0100
Subject: [SciPy-Dev] Use vecLibFort instead of Accelerate?
In-Reply-To:
References: <358013229434724162.309130sturla.molden-gmail.com@news.gmane.org> <2033297632434750599.297611sturla.molden-gmail.com@news.gmane.org>
Message-ID:

On 27/10/14 02:13, Sturla Molden wrote: > We still need to port this to NumPy 1.10.x, which requires a bigger > change to the code because the dotblas module is removed and integrated > into multiarray.

No, it was quite easy, actually:

https://github.com/numpy/numpy/pull/5237

Chuck wants to sit on it for a while in case Apple gets their act together (which they should):

https://github.com/numpy/numpy/issues/4007

Sturla

From pav at iki.fi Mon Oct 27 06:55:39 2014
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 27 Oct 2014 10:55:39 +0000 (UTC)
Subject: [SciPy-Dev] 0.14.1 and 0.15.0 release schedule
References: <543EFF87.2030101@googlemail.com> <5448287B.1020801@googlemail.com>
Message-ID:

Julian Taylor googlemail.com> writes: > ping, I'd need to know soon what the 0.14.x plans are, just whether there > will be another 0.14 release or not, I don't need any specific dates.

A 0.14.1 release is still planned, but as can be seen, the release dates have slipped.

The 0.14.x branch currently contains some bugfixes that a Debian package may want to have. IIRC the only remaining thing we need to do for the release is to check whether there are other high-priority fixes that would need to be backported.

Sturla's SGEMV fix is probably suitable for 0.14.1, it seems pretty straightforward.

-- Pauli Virtanen

From sturla.molden at gmail.com Mon Oct 27 10:37:12 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 27 Oct 2014 14:37:12 +0000 (UTC)
Subject: [SciPy-Dev] 0.14.1 and 0.15.0 release schedule
References: <543EFF87.2030101@googlemail.com> <5448287B.1020801@googlemail.com>
Message-ID: <603121378436112982.896847sturla.molden-gmail.com@news.gmane.org>

Pauli Virtanen wrote: > The 0.14.x branch currently contains some bugfixes that a Debian > package may want to have. IIRC the only remaining thing we need to > do for the release is to check whether there are other high-priority > fixes that would need to be backported.

Travis reported pyflakes errors in my last PR. Those were not due to my changes but to code that is already merged in the maintenance/0.14.x branch. Perhaps they should be fixed before 0.14.1 is released?

https://travis-ci.org/scipy/scipy/builds/39131286

11.15s$ PYFLAKES_NODOCTEST=1 pyflakes scipy | grep -E -v 'unable to detect undefined names|assigned to but never used|imported but unused|redefinition of unused' > test.out; cat test.out; test \! -s test.out
scipy/special/_testutils.py:225: list comprehension redefines 'j' from line 198
scipy/sparse/construct.py:581: list comprehension redefines 'A' from line 559
scipy/sparse/benchmarks/bench_sparse.py:81: list comprehension redefines 'name' from line 67
scipy/linalg/tests/test_decomp.py:586: import 'eigvals' from line 21 shadowed by loop variable
The command "PYFLAKES_NODOCTEST=1 pyflakes scipy | grep -E -v 'unable to detect undefined names|assigned to but never used|imported but unused|redefinition of unused' > test.out; cat test.out; test \! -s test.out" exited with 1.
Done. Your build exited with 1.

Sturla
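For readers who don't use pyflakes: the warnings in that log flag name shadowing, roughly of the following shape (a contrived illustration, not the actual SciPy code):

    from numpy.linalg import eigvals         # pyflakes: import shadowed below

    def demo(matrices):
        for eigvals in matrices:             # loop variable shadows the import
            print(eigvals)
        j = 10
        squares = [j * j for j in range(3)]  # "list comprehension redefines 'j'"
        return squares                       # (the comprehension rebinds j on Python 2)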
From pav at iki.fi Mon Oct 27 12:44:58 2014
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 27 Oct 2014 16:44:58 +0000 (UTC)
Subject: [SciPy-Dev] 0.14.1 and 0.15.0 release schedule
References: <543EFF87.2030101@googlemail.com> <5448287B.1020801@googlemail.com> <603121378436112982.896847sturla.molden-gmail.com@news.gmane.org>
Message-ID:

Sturla Molden gmail.com> writes: [clip] > Travis reported pyflakes errors in my last PR. Those were not due to my > changes but to code that is already merged in the maintenance/0.14.x branch. > Perhaps they should be fixed before 0.14.1 is released?

I think we will not backport stylistic changes to 0.14.1. So you're free to ignore the pyflakes failures in 0.14.x if they come from elsewhere.

-- Pauli Virtanen

From matthew.brett at gmail.com Mon Oct 27 12:47:16 2014
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 27 Oct 2014 09:47:16 -0700
Subject: [SciPy-Dev] 0.14.1 and 0.15.0 release schedule
In-Reply-To:
References: <543EFF87.2030101@googlemail.com> <5448287B.1020801@googlemail.com>
Message-ID:

Hi,

On Mon, Oct 27, 2014 at 3:55 AM, Pauli Virtanen wrote: > Julian Taylor googlemail.com> writes: >> ping, I'd need to know soon what the 0.14.x plans are, just whether there >> will be another 0.14 release or not, I don't need any specific dates. > > A 0.14.1 release is still planned, but as can be seen, the release > dates have slipped. > > The 0.14.x branch currently contains some bugfixes that a Debian > package may want to have. IIRC the only remaining thing we need to > do for the release is to check whether there are other high-priority > fixes that would need to be backported. > > Sturla's SGEMV fix is probably suitable for 0.14.1, it seems pretty > straightforward.

I would like to get some scipy.io.matlab fixes in for 0.14.1 - I will try and get these done by Wednesday; will that fit with the release schedule?

Cheers,

Matthew

From sturla.molden at gmail.com Mon Oct 27 13:36:27 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 27 Oct 2014 17:36:27 +0000 (UTC)
Subject: [SciPy-Dev] 0.14.1 and 0.15.0 release schedule
References: <543EFF87.2030101@googlemail.com> <5448287B.1020801@googlemail.com> <603121378436112982.896847sturla.molden-gmail.com@news.gmane.org>
Message-ID: <281860780436124023.875899sturla.molden-gmail.com@news.gmane.org>

Pauli Virtanen wrote: > I think we will not backport stylistic changes to 0.14.1. > So you're free to ignore the pyflakes failures in 0.14.x > if they come from elsewhere.

They do, and I don't get these pyflakes errors in my PR against SciPy master. So they are already removed from the master branch.

Sturla

From yw5aj at virginia.edu Mon Oct 27 21:48:08 2014
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Mon, 27 Oct 2014 21:48:08 -0400
Subject: [SciPy-Dev] Documentation page on API
Message-ID:

(Sorry about sending this to the wrong list, scipy-user instead of scipy-dev, before; apologies to those who saw this twice)

Hi all,

I was reading through the link (http://docs.scipy.org/doc/scipy/reference/api.html#guidelines-for-importing-functions-from-scipy) and it mentioned that the first form is preferred over the second one under most circumstances:

    # first form
    from scipy import stats
    stats.lomax(...)

    # second form
    from scipy.stats import distributions
    distributions.lomax(...)
I honestly think the second form is far more frequently used in the examples given in the documentation, for example:

http://docs.scipy.org/doc/scipy/reference/tutorial/special.html
http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html
http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html
http://docs.scipy.org/doc/scipy/reference/tutorial/fftpack.html

Do you think it'd be better to update this page?

-Shawn

From fperez.net at gmail.com Tue Oct 28 14:09:47 2014
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 28 Oct 2014 11:09:47 -0700
Subject: [SciPy-Dev] [ANN] Python for Scientific Computing conference in Boulder, CO; April'15
Message-ID:

Hi folks,

a colleague from NCAR in Boulder just sent me this link about a conference they are organizing in the spring:

https://sea.ucar.edu/conference/2015

I figured this might be of interest to many on these lists. The actual call isn't up yet, so if you're interested, watch that site for an upcoming call when they post it (I'm not directly involved, just passing the message along).

Cheers,

f

-- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail

From fperez.net at gmail.com Wed Oct 29 22:15:51 2014
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 29 Oct 2014 19:15:51 -0700
Subject: [SciPy-Dev] [ANN - JOB] Assistant Researcher - Berkeley Institute for Data Science
Message-ID:

Hi all,

the newly founded Berkeley Institute for Data Science is hiring researchers with a focus on open source tools for scientific computing; please see here for details:

https://aprecruit.berkeley.edu/apply/JPF00590

Cheers,

f

-- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail

From matthew.brett at gmail.com Fri Oct 31 03:16:38 2014
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 31 Oct 2014 00:16:38 -0700
Subject: [SciPy-Dev] ndimage reflect mode
Message-ID:

Hi,

Sorry if this is a dumb question, but I just noticed some behavior of scipy.ndimage that surprised me:

In [63]: oned = np.arange(1, 10, dtype=float)

In [64]: scipy.ndimage.affine_transform(oned, [1], [-2], mode='reflect')
Out[64]: array([ 2., 1., 1., 2., 3., 4., 5., 6., 7.])

OK so far. But this I was surprised by:

In [68]: scipy.ndimage.affine_transform(oned, [1], [-2 - 1e-15], mode='reflect')
Out[68]: array([ 2., 1., 2., 2., 3., 4., 5., 6., 7.])

Why did the third value turn from a 1 into a 2? Am I missing something obvious?

Matthew
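For reference when puzzling over this: ndimage documents two reflection conventions, 'reflect' (the edge sample is repeated) and 'mirror' (it is not). The two extensions can be visualised with np.pad, which, as I read the docs, maps onto them as in the sketch below; this only shows the boundary conventions and does not by itself explain the sensitivity to the tiny offset change above, which also involves where affine_transform places its sample points.

    import numpy as np

    a = np.array([1., 2., 3., 4.])

    # Edge-including reflection, 2 1 | 1 2 3 4 | 4 3 -- the extension
    # ndimage documents for mode='reflect'.
    print(np.pad(a, 2, mode='symmetric'))   # [ 2.  1.  1.  2.  3.  4.  4.  3.]

    # Edge-excluding reflection, 3 2 | 1 2 3 4 | 3 2 -- what ndimage
    # calls mode='mirror'.
    print(np.pad(a, 2, mode='reflect'))     # [ 3.  2.  1.  2.  3.  4.  3.  2.]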