From robfalck at gmail.com Fri Nov 1 07:32:57 2013 From: robfalck at gmail.com (Rob Falck) Date: Fri, 1 Nov 2013 07:32:57 -0400 Subject: [SciPy-Dev] [SciPy-User] Linear Programming via Simplex Algorithm In-Reply-To: References: Message-ID: Status update: I've significantly simplified the implementation of the simplex algorithm. I've also made a common interface "linprog" which will hopefully grow to encompass other linear programming algorithms. show_options has been updated to support linprog. Per Christopher's recommendations, the function always minimizes dot(c,x), and inequality constraints must always be expressed as dot(A_ub,x) <= b_ub. The complex anti-cycling logic I had in place before has been replaced by the much simpler method known as Bland's rule. In theory this chooses a slightly longer path to the optimal solution, but it should never cycle. To turn on the use of Bland's rule, use options={'bland': True}. The Result coming out of linprog now includes values of the slack variables. Now I'm trying to find difficult problems to throw at it. If anyone has a source for problems involving at least dozens of variables, I'd love to try it out. Still to do: return the Lagrange multipliers with the result.
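To give a feel for the interface, here is a minimal sketch of a call (illustrative only -- the code is still under review, so names and defaults may change before this is merged):

import numpy as np
from scipy.optimize import linprog  # per the current pull request

# Maximize 2*x0 + 3*x1 by minimizing dot(c, x) with negated costs.
c = np.array([-2.0, -3.0])

# Inequality constraints, expressed as dot(A_ub, x) <= b_ub.
A_ub = np.array([[1.0, 1.0],
                 [1.0, 2.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, options={'bland': True})
print(res.x)      # solution vector
print(res.slack)  # slack in each inequality constraint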
On Fri, Oct 25, 2013 at 7:04 AM, Rob Falck wrote: > Chris, > > Thanks for the information. As for your specific questions: > > > > --What is the cycle variable? >> > > Cycle goes to one when the tableau reverts to its original form (the > iteration has begun to cycle rather than converging to a solution). It's a > pretty crude solution that I'm currently revamping. Right now I'm > comparing the current tableau to the original, but as you can imagine, this > is probably not at all ideal for larger problems. I'm currently looking > into a more efficient way to detect cycling. > > --I'm not a fan of matlab, but if you're going to call it linprog it >> might be nice to have the same interface as matlab's > > --I can't think of a good reason, either theoretical or practical, to >> use both A_lb and A_ub > > --Similarly, I think the objective type (min or max) isn't worth >> including since you can just take the negative of the cost vector >> > > That's understandable. My thought was to make the interface as explicit as > possible, but I can remove those options. It would certainly simplify the > code somewhat. > > >> --linprog looks like it contains a lot of code that looks very >> similar. Could that be refactored out into some loops and/or function >> calls, to increase clarity, conciseness, and maintainability? >> > > I'm currently in the process of simplifying things. The final form should > be much clearer. > > >> --What is an artificial variable? >> > > In a two-phase simplex method, an artificial variable essentially acts as > an "offset" that allows you to find a basic feasible solution when the > origin (x=0) isn't a basic feasible solution (i.e. isn't a vertex of the > feasible region). In the phase-one problem, a separate objective is used > in which each artificial variable has a penalty of 1. If the problem is > feasible, then at the end of phase one the phase-one objective will be zero, > and the initial basic feasible solution for phase two will be a vertex of the > feasible region. > > Wolfram actually has a very good demonstration of the two-phase simplex > method where they start the phase-one problem with m artificial variables: > http://demonstrations.wolfram.com/TwoPhaseSimplexMethod/ > > >> --What is it you're doing to prevent cycling? (i.e., can you provide a >> reference? I believe there are a number of different cycling rules, so >> there's some ambiguity.) > > > At the moment, if cycling is detected, I'm choosing a different pivot > element. By default the pivot column is the column with the most negative > coefficient in the objective row of the tableau. The pivot row is then > chosen such that it minimizes b[row]/a[row,pivcol]. In theory choosing > this combination takes you to the optimal solution as quickly as possible. > To avoid cycling I'm choosing a slightly less optimal path to perturb the > algorithm off of the cyclic sequence. I'll try to find a formal reference > for this...this one was taken from the notes of a course on optimization. > > Rob > > > On Fri, Oct 25, 2013 at 1:20 AM, Christopher Jordan-Squire < > cjordan1 at uw.edu> wrote: > >> Rob--Thanks for posting the code. I've been hoping an LP solver would >> make it into scipy.optimize for a while. Like Rob said, they're >> important both on their own and as subproblems in other algorithms, >> such as integer LP arising from approximation algorithms for >> combinatorial optimization problems. Successive linear programming has >> also had a lot of success in some industries. Rob's also right that >> it's generally better to use an LP solver on an LP problem than a >> non-linear solver. If comparing implementations of similar quality, >> then an LP solver will usually be faster and more accurate. >> >> Anyways, I wouldn't consider myself an expert, but I am semi-informed. >> For people not familiar with linear programming, it's probably good to >> mention a few important facts about LP's. >> >> The tl;dr is that LP solvers are really complicated. Including a basic one >> in Scipy is possible, especially with iterative improvements over >> time, but having even a basic LP solver with all the major options >> of a standard solver would be a pretty huge undertaking. >> Details about why are below. Having just the simplex (and dual >> simplex? What happened to that PR?) would be nice, but it's important >> to be up front about them being relatively untested, almost certainly >> not very robust, and (I assume) using dense matrices instead of sparse >> matrices. >> >> >> >> --Existing open source implementations are generally about an order of >> magnitude slower than commercial implementations. [1] [2] show this, >> but those are specifically talking about integer and mixed integer >> LP's. >> >> --The main commercial implementations are Cplex, Gurobi, Mosek, and >> Xpress. As far as I know their interfaces are rather dissimilar, >> and--unfortunately--many OR people use a bafflingly large array of >> DSL's to build their LPs, such as AMPL, GAMS, Mosel (Xpress), and OPL >> (Cplex). >> >> --There are 3 major algorithms for solving general LP's: simplex, dual >> simplex, and barrier. The simplex and dual simplex methods will, for >> large-scale problems, use sparse direct linear algebra solvers, while >> barrier methods can, depending on the implementation, use sparse >> direct or iterative solvers. >> >> --There are also a whole host of algorithms for efficiently solving >> LP's with more structure, such as network flow LP's. These include >> the primal-dual method (different from the primal-dual interior point method) >> and auction algorithms. As I understand it, commercial LP solvers >> generally do some checking to recognize if that kind of structure is >> present.
>> >> --Mature LP implementations include support for large, sparse LP's, >> which uses some LP-specific sparse matrix techniques [3]. They also >> include presolving, but the details of that are usually proprietary. >> These are a huge deal, but a standard reference says "Relatively >> little information about presolving techniques has appeared in the >> literature, in part because they have commercial value as an important >> component of linear programming software." [4] Similarly, any time >> iterative solvers are used the choice of preconditioner can be a big >> deal. >> >> --Mature implementations are also good at dealing with various kinds >> of degeneracy and detecting infeasibility and unboundedness. >> >> --All of the above refers only to LP's. Integer LP's and Mixed Integer >> LP's get (much) more complicated. For example, Frank Hutter at UBC has >> several papers on using an optimization algorithm just to find the >> best set of configurations for Cplex for a mixed integer linear >> program. >> >> [1] http://www.gurobi.com/pdf/Benchmarks.pdf, slide 16 >> [2] http://scip.zib.de/ >> [3] http://www.stanford.edu/class/msande318/notes/notes05-updates.pdf >> [4] Nocedal and Wright, Numerical Optimization, 2nd edition, page 388 >> >> >> >> Here are some specific comments on Rob's code. >> >> --What is the cycle variable? >> --I'm not a fan of matlab, but if you're going to call it linprog it >> might be nice to have the same interface as matlab's >> --I can't think of a good reason, either theoretical or practical, to >> use both A_lb and A_ub >> --Similarly, I think the objective type (min or max) isn't worth >> including since you can just take the negative of the cost vector >> --linprog looks like it contains a lot of code that looks very >> similar. Could that be refactored out into some loops and/or function >> calls, to increase clarity, conciseness, and maintainability? >> --What is an artificial variable? >> --What is it you're doing to prevent cycling? (i.e., can you provide a >> reference? I believe there are a number of different cycling rules, so >> there's some ambiguity.) >> >> -Chris >> >> On Wed, Oct 23, 2013 at 10:38 AM, Pauli Virtanen wrote: >> > Hi, >> > >> > 23.10.2013 04:35, Rob Falck wrote: >> > [clip] >> >> I've spent some time recently polishing a simplex-based linear >> >> programming function. I've seen traffic from time to time about >> including >> >> such functionality in scipy.optimize but it always seems to have been >> >> closed without inclusion. >> > >> > One important question: is this algorithm regarded as useful and robust >> > enough by people in the know? >> > >> > Are there existing more sophisticated LP solver codes with compatible >> > licences? >> > >> > Does the LP simplex method win over nonlinear solvers such as COBYLA? >> > >> > scipy.optimize currently contains many "naive" solvers, which is not a >> > completely happy situation. Of course, something is better than nothing, >> > but if possible, I'd prefer one sophisticated code over many naive >> > methods. I'm not an expert in LP, so I can't judge very well myself >> here.
>> > >> > -- >> > Pauli Virtanen >> > >> > _______________________________________________ >> > SciPy-Dev mailing list >> > SciPy-Dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > > > -- > - Rob Falck > -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From argriffi at ncsu.edu Fri Nov 1 08:16:12 2013 From: argriffi at ncsu.edu (alex) Date: Fri, 1 Nov 2013 08:16:12 -0400 Subject: [SciPy-Dev] [SciPy-User] Linear Programming via Simplex Algorithm In-Reply-To: References: Message-ID: > If anyone has a source for problems involving at least dozens of variables, > I'd love to try it out. This page has benchmarking information and test cases for linear programming: http://plato.asu.edu/ftp/lpcom.html From msyang at princeton.edu Sat Nov 2 18:48:13 2013 From: msyang at princeton.edu (Michael Yang) Date: Sat, 2 Nov 2013 18:48:13 -0400 Subject: [SciPy-Dev] [SciPy-User] Linear Programming via Simplex Algorithm In-Reply-To: References: Message-ID: Hi Rob & Alex - great job on the forward development of the simplex implementation. I'm new to this thread but have been tracking its progress. Again, great job thus far and I'm looking forward to the final product. One recommendation for an alternate plan - and I realize this is coming late in the game and that you've done some excellent work - the 'lp_solve' package does the simplex method as well as providing a full suite of linear programming features, including the hard-to-implement integer constraints. It might be of interest to simply write a short Python script that would convert the objective and constraints into the LP format and then just call the subprocess module to run the program and then parse the output into a solution set of variables. In fact, I've written this already to work out a variety of problems and even parsed in the dual solution, etc. for further analysis. lp_solve is about as efficient as cvxopt (based on highly-optimized C and Fortran routines) and is hard to beat among most of the AMPL-based solvers. I've tried a bunch of them (LOQO, SNOPT, KNITRO, etc.) and lp_solve is about as fast as you can get, for linear programs. -Michael Yang On Fri, Nov 1, 2013 at 8:16 AM, alex wrote: > > If anyone has a source for problems involving at least dozens of > variables, > > I'd love to try it out. > > This page has benchmarking information and test cases for linear > programming: > http://plato.asu.edu/ftp/lpcom.html > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From argriffi at ncsu.edu Sat Nov 2 19:05:45 2013 From: argriffi at ncsu.edu (alex) Date: Sat, 2 Nov 2013 19:05:45 -0400 Subject: [SciPy-Dev] [SciPy-User] Linear Programming via Simplex Algorithm In-Reply-To: References: Message-ID: On Sat, Nov 2, 2013 at 6:48 PM, Michael Yang wrote: > [...] > lp_solve is about as efficient as cvxopt (based on highly-optimized C and > Fortran routines) and is hard to beat among most of the AMPL-based solvers. > I've tried a bunch of them (LOQO, SNOPT, KNITRO, etc.) and lp_solve is about > as fast as you can get, for linear programs.
> > -Michael Yang The LGPL lp_solve might be more appropriate for something like sagemath (see http://trac.sagemath.org/ticket/8661) which gloms together various standalone math-related programs and makes them interact with each other. The scipy library is source-based -- Python and also some C and Fortran source -- and it is strict about only BSD-like licenses. So not LGPL. But perhaps the 'greater scipy' as opposed to the 'scipy library', if there is such a distinction, might be interested in using lp_solve. In any case, thanks for bringing it to my attention and I will probably use it personally in the future even if it doesn't go into scipy. Cheers, Alex From jason-sage at creativetrax.com Sat Nov 2 19:18:16 2013 From: jason-sage at creativetrax.com (Jason Grout) Date: Sat, 02 Nov 2013 18:18:16 -0500 Subject: [SciPy-Dev] [SciPy-User] Linear Programming via Simplex Algorithm In-Reply-To: References: Message-ID: <52758838.9050206@creativetrax.com> On 11/1/13 6:32 AM, Rob Falck wrote: > The Result coming out of linprog now includes values of the slack > variables. I'm curious: how hard would it be to add other sensitivity analysis data to the output (e.g., how much the coefficients can change before the optimal solution changes)? I don't know much about implementing the simplex method, so I don't know how hard this would be. It seems useful in an applied context, though. Thanks, Jason From cjordan1 at uw.edu Sat Nov 2 21:42:17 2013 From: cjordan1 at uw.edu (Christopher Jordan-Squire) Date: Sat, 2 Nov 2013 18:42:17 -0700 Subject: [SciPy-Dev] [SciPy-User] Linear Programming via Simplex Algorithm In-Reply-To: References: Message-ID: Aren't KNITRO and SNOPT specialized for nonlinear rather than linear problems? On Sat, Nov 2, 2013 at 3:48 PM, Michael Yang wrote: > Hi Rob & Alex - great job on the forward development of the simplex > implementation. I'm new to this thread but have been tracking its progress. > Again, great job thus far and I'm looking forward to the final product. > > One recommendation for an alternate plan - and I realize this is coming late > in the game and that you've made some excellent work - 'lp_solve' package > does the simplex method as well as providing a full suite of linear > programming features including the hard-to-implement integer constraints. > It might be of interest to simply write a short python script that would > convert the objective and constraints into the -lp format and then just call > the subprocess module to run the program and then parse the output into a > solution set of variables. In fact, I've written this already to work out a > variety of problems and even parsed in the dual solution, etc. for further > analysis. > > lp_solve is about as efficient as cvxopt (based on highly-optimized C and > Fortran routines) and is hard to beat among most of the AMPL-based solvers. > I've tried a bunch of them (LOQO, SNOPT, KNITRO, etc.) and lp_solve is about > as fast as you can get, for linear programs. > > -Michael Yang > > > On Fri, Nov 1, 2013 at 8:16 AM, alex wrote: >> >> > If anyone has a source for problems involving at least dozens of >> > variables, >> > I'd love to try it out.
>> >> This page has benchmarking information and test cases for linear >> programming: >> http://plato.asu.edu/ftp/lpcom.html >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From msyang at princeton.edu Sat Nov 2 22:08:47 2013 From: msyang at princeton.edu (Michael Yang) Date: Sat, 2 Nov 2013 22:08:47 -0400 Subject: [SciPy-Dev] [SciPy-User] Linear Programming via Simplex Algorithm In-Reply-To: References: Message-ID: Chris: Yup, but they can do linear problems really fast, too. On Sat, Nov 2, 2013 at 9:42 PM, Christopher Jordan-Squire wrote: > Aren't KNITRO and SNOPT specialized for nonlinear rather than linear > problems? > > On Sat, Nov 2, 2013 at 3:48 PM, Michael Yang wrote: > > Hi Rob & Alex - great job on the forward development of the simplex > > implementation. I'm new to this thread but have been tracking its > progress. > > Again, great job thus far and I'm looking forward to the final product. > > > > One recommendation for an alternate plan - and I realize this is coming > late > > in the game and that you've made some excellent work - 'lp_solve' package > > does the simplex method as well as providing a full suite of linear > > programming features including the hard-to-implement integer constraints. > > It might be of interest to simply write a short python script that would > > convert the objective and constraints into the -lp format and then just > call > > the subprocess module to run the program and then parse the output into a > > solution set of variables. In fact, I've written this already to work > out a > > variety of problems and even parsed in the dual solution, etc. for > further > > analysis. > > > > lp_solve is about as efficient as cvxopt (based on highly-optimized C and > > Fortran routines) and is hard to beat among most of the AMPL-based > solvers. > > I've tried a bunch of them (LOQO, SNOPT, KNITRO, etc.) and lp_solve is > about > > as fast as you can get, for linear programs. > > > > -Michael Yang > > > > > > On Fri, Nov 1, 2013 at 8:16 AM, alex wrote: > >> > >> > If anyone has a source for problems involving at least dozens of > >> > variables, > >> > I'd love to try it out. > >> > >> This page has benchmarking information and test cases for linear > >> programming: > >> http://plato.asu.edu/ftp/lpcom.html > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msyang at princeton.edu Sat Nov 2 22:09:26 2013 From: msyang at princeton.edu (Michael Yang) Date: Sat, 2 Nov 2013 22:09:26 -0400 Subject: [SciPy-Dev] [SciPy-User] Linear Programming via Simplex Algorithm In-Reply-To: References: Message-ID: And also LOQO doesn't just do linear problems (it can do nonlinear problems, too). On Sat, Nov 2, 2013 at 10:08 PM, Michael Yang wrote: > Chris: Yup, but they can do linear problems really fast, too. 
> > On Sat, Nov 2, 2013 at 9:42 PM, Christopher Jordan-Squire > wrote: >> Aren't KNITRO and SNOPT specialized for nonlinear rather than linear >> problems? >> >> On Sat, Nov 2, 2013 at 3:48 PM, Michael Yang >> wrote: >> > Hi Rob & Alex - great job on the forward development of the simplex >> > implementation. I'm new to this thread but have been tracking its >> progress. >> > Again, great job thus far and I'm looking forward to the final product. >> > >> > One recommendation for an alternate plan - and I realize this is coming >> late >> > in the game and that you've made some excellent work - 'lp_solve' >> package >> > does the simplex method as well as providing a full suite of linear >> > programming features including the hard-to-implement integer >> constraints. >> > It might be of interest to simply write a short python script that would >> > convert the objective and constraints into the -lp format and then just >> call >> > the subprocess module to run the program and then parse the output into >> a >> > solution set of variables. In fact, I've written this already to work >> out a >> > variety of problems and even parsed in the dual solution, etc. for >> further >> > analysis. >> > >> > lp_solve is about as efficient as cvxopt (based on highly-optimized C >> and >> > Fortran routines) and is hard to beat among most of the AMPL-based >> solvers. >> > I've tried a bunch of them (LOQO, SNOPT, KNITRO, etc.) and lp_solve is >> about >> > as fast as you can get, for linear programs. >> > >> > -Michael Yang >> > >> > >> > On Fri, Nov 1, 2013 at 8:16 AM, alex wrote: >> >> >> >> > If anyone has a source for problems involving at least dozens of >> >> > variables, >> >> > I'd love to try it out. >> >> >> >> This page has benchmarking information and test cases for linear >> >> programming: >> >> http://plato.asu.edu/ftp/lpcom.html >> >> _______________________________________________ >> >> SciPy-Dev mailing list >> >> SciPy-Dev at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > >> > >> > >> > _______________________________________________ >> > SciPy-Dev mailing list >> > SciPy-Dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >> > >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brian.Newsom at Colorado.EDU Tue Nov 5 12:45:26 2013 From: Brian.Newsom at Colorado.EDU (Brian Lee Newsom) Date: Tue, 5 Nov 2013 10:45:26 -0700 Subject: [SciPy-Dev] First Steps in function pointer integration into quadpack Message-ID: Hello all, So Nate and I have been working further on how to speed up quadpack, and our solution comes down to using C function pointers that will allow skipping the recursion that requires calling back into Python at every step. As a first step, we have prototyped what it would look like, in plain C, to interface with the quadpack Fortran library. Ideally, once this C code is fully written, we will just have to figure out how to get function pointers from Cython/ctypes/etc. into plain C from Python. Unfortunately, as far as I can tell, we must cast the function passed in, in order to evaluate it, and this has led to an ugly solution (actually two): 1. Use a switch statement based on the number of args (t1,t2,..,tn) - really ugly, but it works. It also places a limit on the number of arguments a function may contain. OR 2. Make the user provide a function of the form function(args[]) instead of function(x1,x2,x3), so that the function signature looks the same regardless of the number of parameters.
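For concreteness, the user-facing side of option 2 might look roughly like this with ctypes (just a sketch of the single-array signature -- none of these names or types are settled):

import ctypes

# The C side would see: double f(double *args), where args[0] is the
# integration variable and args[1:] are the extra parameters.
CALLBACK = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.POINTER(ctypes.c_double))

def integrand(args):
    return args[0] * args[0] + args[1]  # e.g. x**2 + c

c_integrand = CALLBACK(integrand)

The appeal of the single-array form is that the cast on the C side is the same no matter how many parameters the function takes.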
Keep in mind the user will already have to use ctypes or Cython, so they will already be going out of their way to declare this function differently so it can be integrated faster. Is either of these an acceptable solution? Or is there a better way that I am missing from my limited knowledge of C? Secondly, in basing this code generally off of quadpack.py, quadpack.h and __quadpack.h, there seems to be a lot I omitted while still getting the functionality I currently have. Is there anything important I omitted that will make this not robust or open up problems that I don't expect? Any C advice would be helpful. So currently this code solves the issue of evaluating a multidimensional function as a function of just one double *x so that it can interface with the Fortran. It does this by initializing the function and saving it to a global void pointer, along with a global argument array of doubles for integration over higher dimensions. This should allow for an analogue to nquad for multidimensional integration that does not need to call back to Python constantly. C file included in gist: https://gist.github.com/BrianNewsom/7322797 To compile, I used: gcc globalmethod.c -L. -lslatec -lgfortran where libslatec.a is a compiled library containing at least dqag and its dependencies. Thank you for any advice or criticism, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Nov 10 10:36:55 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 10 Nov 2013 16:36:55 +0100 Subject: [SciPy-Dev] removing umfpack wrapper Message-ID: Hi all, This is long overdue, but I think it's time to remove the UMFPACK wrapper before the next release. I'd like some feedback on how to do that and to what package (if any) to point existing users. As for why to remove the wrapper, see: https://github.com/scipy/scipy/issues/3002 http://permalink.gmane.org/gmane.comp.python.scientific.user/20451 Short summary: UMFPACK changed from a BSD-compatible license to GPL at some point. The deprecation warning in sparse.linalg has been referring people to scikits.umfpack until now; however, that package has disappeared completely as far as I can tell. I suspect it was in the old scikits svn repo and was never moved before that was killed. The alternatives seem to be Pysparse ( http://pysparse.sourceforge.net) and PyUBlasExt ( https://pypi.python.org/pypi/PyUblasExt). Scikits.sparse is dead(-ish) and it looks like it doesn't wrap umfpack. PyUBlasExt has something that looks like a wrapper, but for C++ (?). So can we refer people to pysparse? Other question: do we call Pysparse instead of scikits.umfpack when available? And if so, by default or not? Or do we rip out everything that sits under useUmfpack=True? Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Nov 10 11:56:37 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 10 Nov 2013 17:56:37 +0100 Subject: [SciPy-Dev] 0.13.1 bugfix release plan Message-ID: Hi all, A serious regression for ndimage.label was found in 0.13.0, so the plan is to do a bugfix release quickly (sometime next week): https://github.com/scipy/scipy/issues/3049 Are there any other fixes that should be taken along for 0.13.1? Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cimrman3 at ntc.zcu.cz Tue Nov 12 04:20:25 2013 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 12 Nov 2013 10:20:25 +0100 Subject: [SciPy-Dev] removing umfpack wrapper In-Reply-To: References: Message-ID: <5281F2D9.2080209@ntc.zcu.cz> Hi Ralf, On 11/10/2013 04:36 PM, Ralf Gommers wrote: > Hi all, > > This is long overdue, but I think it's time to remove the UMFPACK wrapper > before the next release. I'd like some feedback on how to do that and to > what package (if any) to point existing users. > > As for why to remove the wrapper, see: > https://github.com/scipy/scipy/issues/3002 > http://permalink.gmane.org/gmane.comp.python.scientific.user/20451 > Short summary: UMFPACK changed from a BSD-compatible license to GPL at some > point. > > The deprecation warning in sparse.linalg has been referring people to > scikits.umfpack until now, however that package has disappeared completely > as far as I can tell. I suspect it was in the old scikits svn repo and was > never moved before that was killed. The alternatives seems to be Pysparse ( I missed that the scikit was lost in transition... How much work would it be to move it to the new site - convert from SVN to git, and so on? I would be willing to do it, if there is interest. Cheers, r. > http://pysparse.sourceforge.net) and PyUBlasExt ( > https://pypi.python.org/pypi/PyUblasExt). Scikits.sparse is dead(-ish) and > doesn't wrap umfpack it looks like. PyUBlasExt has something that looks > like a wrapper but for C++ (?). So can we refer people to pysparse? > > Other question: do we call Pysparse instead of scikits.umfpack when > available? And if so, by default or not? Or do we rip out everything that > sits under useUmfpack=True? > > Cheers, > Ralf From ralf.gommers at gmail.com Tue Nov 12 17:22:44 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 12 Nov 2013 23:22:44 +0100 Subject: [SciPy-Dev] removing umfpack wrapper In-Reply-To: <5281F2D9.2080209@ntc.zcu.cz> References: <5281F2D9.2080209@ntc.zcu.cz> Message-ID: On Tue, Nov 12, 2013 at 10:20 AM, Robert Cimrman wrote: > Hi Ralf, > > On 11/10/2013 04:36 PM, Ralf Gommers wrote: > > Hi all, > > > > This is long overdue, but I think it's time to remove the UMFPACK wrapper > > before the next release. I'd like some feedback on how to do that and to > > what package (if any) to point existing users. > > > > As for why to remove the wrapper, see: > > https://github.com/scipy/scipy/issues/3002 > > http://permalink.gmane.org/gmane.comp.python.scientific.user/20451 > > Short summary: UMFPACK changed from a BSD-compatible license to GPL at > some > > point. > > > > The deprecation warning in sparse.linalg has been referring people to > > scikits.umfpack until now, however that package has disappeared > completely > > as far as I can tell. I suspect it was in the old scikits svn repo and > was > > never moved before that was killed. The alternatives seems to be > Pysparse ( > > I missed that the scikit was lost in transition... > > How much work would be to move it to the new site - move from SVN to git, > and? > I would be willing to do it, if there is interest. > Hi Robert, I think the moving wouldn't be a lot of work (assuming svn access can still be arranged). If you'd revive the scikit it's not a one-time effort though - it's only useful if the code is being maintained imho. That can still be low effort perhaps, but with reviewing some PRs, maintenance releases, putting it on pypi, etc. I'd expect at least an hour a week or so. Are there scikits.umfpack users now?
Would it make sense to salvage whatever is useful from the scikit and contribute it to pysparse instead? Cheers, Ralf > > Cheers, > r. > > > http://pysparse.sourceforge.net) and PyUBlasExt ( > > https://pypi.python.org/pypi/PyUblasExt). Scikits.sparse is dead(-ish) > and > > doesn't wrap umfpack it looks like. PyUBlasExt has something that looks > > like a wrapper but for C++ (?). So can we refer people to pysparse? > > > > Other question: do we call Pysparse instead of scikits.umfpack when > > available? And if so, by default or not? Or do we rip out everything that > > sits under useUmfpack=True? > > > > Cheers, > > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Tue Nov 12 17:57:56 2013 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 12 Nov 2013 23:57:56 +0100 (CET) Subject: [SciPy-Dev] removing umfpack wrapper Message-ID: On 11/12/2013 11:22 PM, Ralf Gommers wrote: > On Tue, Nov 12, 2013 at 10:20 AM, Robert Cimrman wrote: > >> Hi Ralf, >> >> On 11/10/2013 04:36 PM, Ralf Gommers wrote: >>> Hi all, >>> >>> This is long overdue, but I think it's time to remove the UMFPACK wrapper >>> before the next release. I'd like some feedback on how to do that and to >>> what package (if any) to point existing users. >>> >>> As for why to remove the wrapper, see: >>> https://github.com/scipy/scipy/issues/3002 >>> http://permalink.gmane.org/gmane.comp.python.scientific.user/20451 >>> Short summary: UMFPACK changed from a BSD-compatible license to GPL at >> some >>> point. >>> >>> The deprecation warning in sparse.linalg has been referring people to >>> scikits.umfpack until now, however that package has disappeared >> completely >>> as far as I can tell. I suspect it was in the old scikits svn repo and >> was >>> never moved before that was killed. The alternatives seems to be >> Pysparse ( >> >> I missed that the scikit was lost in transition... >> >> How much work would be to move it to the new site - move from SVN to git, >> and? >> I would be willing to do it, if there is interest. >> > > Hi Robert, I think the moving wouldn't be a lot of work (assuming svn > access can still be arranged). If you'd revive the scikit it's not a > one-time effort though - it's only useful if the code is being maintained > imho. That can still be low effort perhaps, but with reviewing some PRs, > maintenance releases, putting it on pypi, etc. I'd expect at least an hour > a week or so. > > Are there scikits.umfpack users now? Would it make sense to salvage > whatever is useful from the scikit and contribute it to pysparse instead? We are using it in sfepy as the default direct solver. It seems to me that the pysparse interface requires the input matrix to be in the LL (linked list) format, which is unfortunate for us, as we use CSR. The scipy (and former scikit) umfpack wrappers have used CSR, so no copies were necessary. (After all, I have created the original scipy wrappers to be used from sfepy in the first place...) So in case others are not interested in having the scikit, I can see two possible solutions from my perspective: either enhance pysparse interface to allow CSR as well, or move the wrappers to sfepy (which I maintain anyway, but swig -> cython conversion would be needed). Not sure yet which solution I prefer. Cheers, r. > Cheers, > Ralf >> >> Cheers, >> r. 
>> >>> http://pysparse.sourceforge.net) and PyUBlasExt ( >>> https://pypi.python.org/pypi/PyUblasExt). Scikits.sparse is dead(-ish) >> and >>> doesn't wrap umfpack it looks like. PyUBlasExt has something that looks >>> like a wrapper but for C++ (?). So can we refer people to pysparse? >>> >>> Other question: do we call Pysparse instead of scikits.umfpack when >>> available? And if so, by default or not? Or do we rip out everything that >>> sits under useUmfpack=True? >>> >>> Cheers, >>> Ralf From fperez.net at gmail.com Wed Nov 13 15:58:35 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 13 Nov 2013 12:58:35 -0800 Subject: [SciPy-Dev] [ANN, x-post] Creating a space for scientific open source at Berkeley (with UW and NYU) Message-ID: Hi folks, forgive me for the x-post to a few lists and the semi-off-topic nature of this post, but I think it's worth mentioning this to our broader community. To keep the SNR of each list high, I'd prefer any replies to happen on the numfocus list. Yesterday, during an event at the White House OSTP, an announcement was made about a 5-year, $37.8M initiative funded by the Moore and Sloan foundations to create a collaboration between UC Berkeley, the University of Washington and NYU on Data Science environments: - Press release: http://www.moore.org/newsroom/press-releases/2013/11/12/%20bold_new_partnership_launches_to_harness_potential_of_data_scientists_and_big_data - Project description: http://www.moore.org/programs/science/data-driven-discovery/data-science-environments We worked in private on this for a year, so it's great to be able to finally engage the community in an open fashion. I've provided some additional detail in my blog: http://blog.fperez.org/2013/11/an-ambitious-experiment-in-data-science.html At Berkeley, we are using this as an opportunity to create the new Berkeley Institute for Data Science (BIDS): http://newscenter.berkeley.edu/2013/11/13/new-data-science-institute-to-help-scholars-harness-big-data and from the very start, open source and the scientific Python ecosystem have been at the center of our thinking. In the team of co-PIs we have, in addition to me, a bunch of Python supporters: - Josh Bloom (leads our Python bootcamps and graduate seminar) - Cathryn Carson founded the DLab (dlab.berkeley.edu), which runs python.berkeley.edu. - Philip Stark: Stats Chair, teaches reproducible research with Python tools. - Kimmen Sjolander: comp. biologist whose tools are all open source Python. - Mike Franklin and Ion Stoica: co-directors of AMPLab, whose Spark framework has Python support. - Dave Culler: chair of CS, which now uses Python for its undergraduate intro courses. We will be working very hard to basically make BIDS "a place for people like us" (and by that I mean open source scientific computing, not just Python: Julia, R, etc. are equally welcome). This is a community that has a significant portion of academic scientists who struggle with all the issues I list in my post, and solving that problem is an explicit goal of this initiative (in fact, it was the key point identified by the foundations when they announced the competition for this grant). Beyond that, we want to create a space where the best of academia, the power of a university like Berkeley, and the best of our open source communities, can come together.
We are just barely getting off the ground, deep in more mundane issues like building renovations, but over the next few months we'll be clarifying our scientific programs, starting to have open positions, etc. Very importantly, I want to thank everyone who, for the last decade+, has been working like mad to make all of this possible. It's absolutely clear to me that the often unrewarded work of many of you was essential in this process, shaping the very existence of "data science" and the recognition that it should be done in an open, collaborative, reproducible fashion. Consider this event an important victory along the way, and hopefully a starting point for much more work in slightly better conditions. Here are some additional resources for anyone interested: http://bitly.com/bundles/fperezorg/1 -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Nov 14 15:59:15 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 14 Nov 2013 21:59:15 +0100 Subject: [SciPy-Dev] removing umfpack wrapper In-Reply-To: References: Message-ID: On Tue, Nov 12, 2013 at 11:57 PM, Robert Cimrman wrote: > On 11/12/2013 11:22 PM, Ralf Gommers wrote: > > On Tue, Nov 12, 2013 at 10:20 AM, Robert Cimrman >wrote: > > > >> Hi Ralf, > >> > >> On 11/10/2013 04:36 PM, Ralf Gommers wrote: > >>> Hi all, > >>> > >>> This is long overdue, but I think it's time to remove the UMFPACK > wrapper > >>> before the next release. I'd like some feedback on how to do that and > to > >>> what package (if any) to point existing users. > >>> > >>> As for why to remove the wrapper, see: > >>> https://github.com/scipy/scipy/issues/3002 > >>> http://permalink.gmane.org/gmane.comp.python.scientific.user/20451 > >>> Short summary: UMFPACK changed from a BSD-compatible license to GPL at > >> some > >>> point. > >>> > >>> The deprecation warning in sparse.linalg has been referring people to > >>> scikits.umfpack until now, however that package has disappeared > >> completely > >>> as far as I can tell. I suspect it was in the old scikits svn repo and > >> was > >>> never moved before that was killed. The alternatives seems to be > >> Pysparse ( > >> > >> I missed that the scikit was lost in transition... > >> > >> How much work would be to move it to the new site - move from SVN to > git, > >> and? > >> I would be willing to do it, if there is interest. > >> > > > > Hi Robert, I think the moving wouldn't be a lot of work (assuming svn > > access can still be arranged). If you'd revive the scikit it's not a > > one-time effort though - it's only useful if the code is being maintained > > imho. That can still be low effort perhaps, but with reviewing some PRs, > > maintenance releases, putting it on pypi, etc. I'd expect at least an > hour > > a week or so. > > > > Are there scikits.umfpack users now? Would it make sense to salvage > > whatever is useful from the scikit and contribute it to pysparse instead? > > We are using it in sfepy as the default direct solver. It seems to me that > the > pysparse interface requires the input matrix to be in the LL (linked list) > format, which is unfortunate for us, as we use CSR. The scipy (and former > scikit) umfpack wrappers have used CSR, so no copies were necessary. 
(After > all, I have created the original scipy wrappers to be used from sfepy in > the > first place...) > CSR is much better supported by scipy.sparse, so that's a good reason for the scikits.umfpack interface to exist I'd think. > So in case others are not interested in having the scikit, I'm not sure that's the case. Would be great to have some more feedback here - I don't have a strong opinion either way. > I can see two possible solutions from my perspective: either enhance > pysparse interface to > allow CSR as well, or move the wrappers to sfepy (which I maintain anyway, > but > swig -> cython conversion would be needed). Not sure yet which solution I > prefer. > Moving them into SfePy would work for SfePy but as an optional dependency for scipy that would be weird imho, given the rather heavy list of dependencies of SfePy. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Thu Nov 14 17:09:01 2013 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Nov 2013 23:09:01 +0100 Subject: [SciPy-Dev] removing umfpack wrapper In-Reply-To: References: Message-ID: <528549FD.7080000@ntc.zcu.cz> On 11/14/2013 09:59 PM, Ralf Gommers wrote: > On Tue, Nov 12, 2013 at 11:57 PM, Robert Cimrman wrote: > >> On 11/12/2013 11:22 PM, Ralf Gommers wrote: >>> On Tue, Nov 12, 2013 at 10:20 AM, Robert Cimrman >> wrote: >>> >>>> Hi Ralf, >>>> >>>> On 11/10/2013 04:36 PM, Ralf Gommers wrote: >>>>> Hi all, >>>>> >>>>> This is long overdue, but I think it's time to remove the UMFPACK >> wrapper >>>>> before the next release. I'd like some feedback on how to do that and >> to >>>>> what package (if any) to point existing users. >>>>> >>>>> As for why to remove the wrapper, see: >>>>> https://github.com/scipy/scipy/issues/3002 >>>>> http://permalink.gmane.org/gmane.comp.python.scientific.user/20451 >>>>> Short summary: UMFPACK changed from a BSD-compatible license to GPL at >>>> some >>>>> point. >>>>> >>>>> The deprecation warning in sparse.linalg has been referring people to >>>>> scikits.umfpack until now, however that package has disappeared >>>> completely >>>>> as far as I can tell. I suspect it was in the old scikits svn repo and >>>> was >>>>> never moved before that was killed. The alternatives seems to be >>>> Pysparse ( >>>> >>>> I missed that the scikit was lost in transition... >>>> >>>> How much work would be to move it to the new site - move from SVN to >> git, >>>> and? >>>> I would be willing to do it, if there is interest. >>>> >>> >>> Hi Robert, I think the moving wouldn't be a lot of work (assuming svn >>> access can still be arranged). If you'd revive the scikit it's not a >>> one-time effort though - it's only useful if the code is being maintained >>> imho. That can still be low effort perhaps, but with reviewing some PRs, >>> maintenance releases, putting it on pypi, etc. I'd expect at least an >> hour >>> a week or so. >>> >>> Are there scikits.umfpack users now? Would it make sense to salvage >>> whatever is useful from the scikit and contribute it to pysparse instead? >> >> We are using it in sfepy as the default direct solver. It seems to me that >> the >> pysparse interface requires the input matrix to be in the LL (linked list) >> format, which is unfortunate for us, as we use CSR. The scipy (and former >> scikit) umfpack wrappers have used CSR, so no copies were necessary. (After >> all, I have created the original scipy wrappers to be used from sfepy in >> the >> first place...) 
>> > > CSR is much better supported by scipy.sparse, so that's a good reason for > > the scikits.umfpack interface to exist I'd think. Ok, so I will try to migrate it to the new scikits site. > >> So in case others are not interested in having the scikit, > > > > > > I'm not sure that's the case. Would be great to have some more feedback > > here - I don't have a strong opinion either way. I think most people still use it transparently via the wrappers in scipy, without installing the (old) scikit, just like me. When the wrappers disappear they will have to seek a replacement. > >> I can see two possible solutions from my perspective: either enhance >> pysparse interface to >> allow CSR as well, or move the wrappers to sfepy (which I maintain anyway, >> but >> swig -> cython conversion would be needed). Not sure yet which solution I >> prefer. >> > > Moving them into SfePy would work for SfePy but as an optional dependency > for scipy that would be weird imho, given the rather heavy list of > dependencies of SfePy. Yes, that was just an idea for the case nobody else uses umfpack with scipy. I do not think that is true, although the users have remained silent here. There might be more feedback on scipy-user. Anyway, I will migrate the scikit so that there is a place to go after the removal. The maintenance should not be that demanding as the umfpack library API has been quite stable. r. From travis at continuum.io Fri Nov 15 02:12:53 2013 From: travis at continuum.io (Travis Oliphant) Date: Fri, 15 Nov 2013 07:12:53 +0000 (UTC) Subject: [SciPy-Dev] Invitation to connect on LinkedIn Message-ID: <224834686.13929413.1384499573522.JavaMail.app@ela4-app0099.prod> LinkedIn ------------ SciPy, I'd like to add you to my professional network on LinkedIn. - Travis Travis Oliphant CEO and co-Founder at Continuum Analytics, Inc. Austin, Texas Area Confirm that you know Travis Oliphant: https://www.linkedin.com/e/-z5jfi6-ho13arxs-5z/isd/18064934495/eoSOJY0h/?hs=false&tok=3WGg3On_OM6601 -- You are receiving Invitation to Connect emails. Click to unsubscribe: http://www.linkedin.com/e/-z5jfi6-ho13arxs-5z/7chvxKHmrlmbHlFd70OkrzzFNLbH8aF/goo/scipy-dev%40scipy%2Eorg/20061/I5937962691_1/?hs=false&tok=0zGa4ApQyM6601 (c) 2012 LinkedIn Corporation. 2029 Stierlin Ct, Mountain View, CA 94043, USA. -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis at continuum.io Fri Nov 15 03:28:55 2013 From: travis at continuum.io (Travis Oliphant) Date: Fri, 15 Nov 2013 02:28:55 -0600 Subject: [SciPy-Dev] Invitation to connect on LinkedIn In-Reply-To: <224834686.13929413.1384499573522.JavaMail.app@ela4-app0099.prod> References: <224834686.13929413.1384499573522.JavaMail.app@ela4-app0099.prod> Message-ID: Apologies for the noise. A mistaken click in an eager interface... -Travis On Fri, Nov 15, 2013 at 1:12 AM, Travis Oliphant wrote: > > From Travis Oliphant > > CEO and co-Founder at Continuum Analytics, Inc. > Austin, Texas Area > > SciPy, > > I'd like to add you to my professional network on LinkedIn. > > - Travis > > Confirm that you know Travis > > You are receiving Invitation to Connect emails. Unsubscribe > (c) 2013, LinkedIn Corporation. 2029 Stierlin Ct. Mountain View, CA 94043, USA > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -- Travis Oliphant CEO Continuum Analytics, Inc.
http://www.continuum.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Nov 16 07:25:57 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 16 Nov 2013 13:25:57 +0100 Subject: [SciPy-Dev] removing umfpack wrapper In-Reply-To: <528549FD.7080000@ntc.zcu.cz> References: <528549FD.7080000@ntc.zcu.cz> Message-ID: On Thu, Nov 14, 2013 at 11:09 PM, Robert Cimrman wrote: > On 11/14/2013 09:59 PM, Ralf Gommers wrote: > > On Tue, Nov 12, 2013 at 11:57 PM, Robert Cimrman >wrote: > > > >> On 11/12/2013 11:22 PM, Ralf Gommers wrote: > >>> On Tue, Nov 12, 2013 at 10:20 AM, Robert Cimrman >>> wrote: > >>> > >>>> Hi Ralf, > >>>> > >>>> On 11/10/2013 04:36 PM, Ralf Gommers wrote: > >>>>> Hi all, > >>>>> > >>>>> This is long overdue, but I think it's time to remove the UMFPACK > >> wrapper > >>>>> before the next release. I'd like some feedback on how to do that and > >> to > >>>>> what package (if any) to point existing users. > >>>>> > >>>>> As for why to remove the wrapper, see: > >>>>> https://github.com/scipy/scipy/issues/3002 > >>>>> http://permalink.gmane.org/gmane.comp.python.scientific.user/20451 > >>>>> Short summary: UMFPACK changed from a BSD-compatible license to GPL > at > >>>> some > >>>>> point. > >>>>> > >>>>> The deprecation warning in sparse.linalg has been referring people to > >>>>> scikits.umfpack until now, however that package has disappeared > >>>> completely > >>>>> as far as I can tell. I suspect it was in the old scikits svn repo > and > >>>> was > >>>>> never moved before that was killed. The alternatives seems to be > >>>> Pysparse ( > >>>> > >>>> I missed that the scikit was lost in transition... > >>>> > >>>> How much work would be to move it to the new site - move from SVN to > >> git, > >>>> and? > >>>> I would be willing to do it, if there is interest. > >>>> > >>> > >>> Hi Robert, I think the moving wouldn't be a lot of work (assuming svn > >>> access can still be arranged). If you'd revive the scikit it's not a > >>> one-time effort though - it's only useful if the code is being > maintained > >>> imho. That can still be low effort perhaps, but with reviewing some > PRs, > >>> maintenance releases, putting it on pypi, etc. I'd expect at least an > >> hour > >>> a week or so. > >>> > >>> Are there scikits.umfpack users now? Would it make sense to salvage > >>> whatever is useful from the scikit and contribute it to pysparse > instead? > >> > >> We are using it in sfepy as the default direct solver. It seems to me > that > >> the > >> pysparse interface requires the input matrix to be in the LL (linked > list) > >> format, which is unfortunate for us, as we use CSR. The scipy (and > former > >> scikit) umfpack wrappers have used CSR, so no copies were necessary. > (After > >> all, I have created the original scipy wrappers to be used from sfepy in > >> the > >> first place...) > >> > > > > CSR is much better supported by scipy.sparse, so that's a good reason for > > the scikits.umfpack interface to exist I'd think. > > Ok, so I will try to migrate it to the new scikits site. > Great. I'm not sure that that will live forever (who maintains it?). Just putting the repo on Github and enabling issue tracking there, plus uploading a release to PyPI, would be the essential steps. Ralf > > >> So in case others are not interested in having the scikit, > > > > > > I'm not sure that's the case. Would be great to have some more feedback > > here - I don't have a strong opinion either way.
> > I think most people still use it transparently via the wrappers in scipy, > without installing the (old) scikit, just like me. When the wrappers > disappear > they would seek a replacement. > > > > >> I can see two possible solutions from my perspective: either enhance > >> pysparse interface to > >> allow CSR as well, or move the wrappers to sfepy (which I maintain > anyway, > >> but > >> swig -> cython conversion would be needed). Not sure yet which solution > I > >> prefer. > >> > > > > Moving them into SfePy would work for SfePy but as an optional dependency > > for scipy that would be weird imho, given the rather heavy list of > > dependencies of SfePy. > > Yes, that was just an idea for the case nobody else uses umfpack with > scipy. I > do not think that is true, although the users has remained silent here. > There > might be more feedback on scipy-user. > > Anyway, I will migrate the scikit so that there is a place to go after the > removal. The maintenance should not be that demanding as the umfpack > library > API has been quite stable. > > r. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Nov 17 04:53:13 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 17 Nov 2013 10:53:13 +0100 Subject: [SciPy-Dev] ANN: Scipy 0.13.1 release Message-ID: Hi, I'm happy to announce the availability of the scipy 0.13.1 release. This is a bugfix only release; it contains several fixes for issues in ndimage. Thanks to Pauli Virtanen and Ray Jones for fixing these issues quickly. Source tarballs, binaries and release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.13.1/. Cheers, Ralf ========================== SciPy 0.13.1 Release Notes ========================== SciPy 0.13.1 is a bug-fix release with no new features compared to 0.13.0. The only changes are several fixes in ``ndimage``, one of which was a serious regression in ``ndimage.label`` (Github issue 3025), which gave incorrect results in 0.13.0. Issues fixed ------------ - 3025: ``ndimage.label`` returns incorrect results in scipy 0.13.0 - 1992: ``ndimage.label`` return type changed from int32 to uint32 - 1992: ``ndimage.find_objects`` doesn't work with int32 input in some cases -------------- next part -------------- An HTML attachment was scrubbed... URL: From howarth at bromo.med.uc.edu Sun Nov 17 10:44:50 2013 From: howarth at bromo.med.uc.edu (Jack Howarth) Date: Sun, 17 Nov 2013 10:44:50 -0500 Subject: [SciPy-Dev] ANN: Scipy 0.13.1 release In-Reply-To: References: Message-ID: <20131117154450.GA29454@bromo.med.uc.edu> On Sun, Nov 17, 2013 at 10:53:13AM +0100, Ralf Gommers wrote: > Hi, > > I'm happy to announce the availability of the scipy 0.13.1 release. This is > a bugfix only release; it contains several fixes for issues in ndimage. > Thanks to Pauli Virtanen and Ray Jones for fixing these issues quickly. > > Source tarballs, binaries and release notes can be found at > http://sourceforge.net/projects/scipy/files/scipy/0.13.1/. > > Cheers, > Ralf Ralf, Unfortunately, the scipy 0.13.1 release has introduced two new failures on the x86_64-apple-darwin11 target... 
FAIL: test_cases (test_solvers.TestSolveLyapunov) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/src/fink.build/root-scipy-py26-0.13.1-1/sw/lib/python2.6/site-packages/scipy/linalg/tests/test_solvers.py", line 45, in test_cases self.check_continuous_case(case[0], case[1]) File "/sw/src/fink.build/root-scipy-py26-0.13.1-1/sw/lib/python2.6/site-packages/scipy/linalg/tests/test_solvers.py", line 37, in check_continuous_case assert_array_almost_equal(np.dot(a, x) + np.dot(x, a.conj().transpose()), q) File "/sw/lib/python2.6/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "/sw/lib/python2.6/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal to 6 decimals (mismatch 100.0%) x: array([[ 0.66940063, 1.82351325, 0.09693803, 0.74958987, 5.37149648], [ 2.35737286, -1.68687652, -1.11483151, 2.92536995, 5.39661969], [ 0.08560546, -1.49569258, 2.3794113 , 0.51512191, 6.00411424],... y: array([[2, 4, 1, 0, 1], [4, 1, 0, 2, 0], [1, 0, 3, 0, 3],... ====================================================================== FAIL: test_cases (test_solvers.TestSolveSylvester) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/src/fink.build/root-scipy-py26-0.13.1-1/sw/lib/python2.6/site-packages/scipy/linalg/tests/test_solvers.py", line 179, in test_cases self.check_case(case[0], case[1], case[2]) File "/sw/src/fink.build/root-scipy-py26-0.13.1-1/sw/lib/python2.6/site-packages/scipy/linalg/tests/test_solvers.py", line 175, in check_case assert_array_almost_equal(np.dot(a, x) + np.dot(x, b), c) File "/sw/lib/python2.6/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "/sw/lib/python2.6/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal to 6 decimals (mismatch 25.0%) x: array([[ 1.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00,... y: array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) ---------------------------------------------------------------------- Ran 8933 tests in 186.410s FAILED (KNOWNFAIL=114, SKIP=210, failures=2) whereas both x86_64-apple-darwin12 and x86_64-apple-darwin13, there are no failures. OK (KNOWNFAIL=114, SKIP=203) What darwin releases was this update tested on? Jack > > > > ========================== > SciPy 0.13.1 Release Notes > ========================== > > SciPy 0.13.1 is a bug-fix release with no new features compared to 0.13.0. > The only changes are several fixes in ``ndimage``, one of which was a > serious > regression in ``ndimage.label`` (Github issue 3025), which gave > incorrect results in 0.13.0. 
> > Issues fixed > ------------ > > - 3025: ``ndimage.label`` returns incorrect results in scipy 0.13.0 > - 1992: ``ndimage.label`` return type changed from int32 to uint32 > - 1992: ``ndimage.find_objects`` doesn't work with int32 input in some cases > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From ralf.gommers at gmail.com Sun Nov 17 10:55:10 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 17 Nov 2013 16:55:10 +0100 Subject: [SciPy-Dev] ANN: Scipy 0.13.1 release In-Reply-To: <20131117154450.GA29454@bromo.med.uc.edu> References: <20131117154450.GA29454@bromo.med.uc.edu> Message-ID: On Sun, Nov 17, 2013 at 4:44 PM, Jack Howarth wrote: > On Sun, Nov 17, 2013 at 10:53:13AM +0100, Ralf Gommers wrote: > > Hi, > > > > I'm happy to announce the availability of the scipy 0.13.1 release. This > is > > a bugfix only release; it contains several fixes for issues in ndimage. > > Thanks to Pauli Virtanen and Ray Jones for fixing these issues quickly. > > > > Source tarballs, binaries and release notes can be found at > > http://sourceforge.net/projects/scipy/files/scipy/0.13.1/. > > > > Cheers, > > Ralf > > Ralf, > Unfortunately, the scipy 0.13.1 release has introduced two new failures > on the > x86_64-apple-darwin11 target... > Hi Jack. Not sure what is going on here. There were no changes at all in the linalg module and the release was made the exact same way as the 0.13.0 release (on OS X 10.6, but that should not matter in this case). So I suspect that either this is an intermittent failure or you changed something on your end. Did you install from git or from the zipped source release? Any change in compilers you used? Ralf > FAIL: test_cases (test_solvers.TestSolveLyapunov) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/sw/src/fink.build/root-scipy-py26-0.13.1-1/sw/lib/python2.6/site-packages/scipy/linalg/tests/test_solvers.py", > line 45, in test_cases > self.check_continuous_case(case[0], case[1]) > File > "/sw/src/fink.build/root-scipy-py26-0.13.1-1/sw/lib/python2.6/site-packages/scipy/linalg/tests/test_solvers.py", > line 37, in check_continuous_case > assert_array_almost_equal(np.dot(a, x) + np.dot(x, > a.conj().transpose()), q) > File "/sw/lib/python2.6/site-packages/numpy/testing/utils.py", line 811, > in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "/sw/lib/python2.6/site-packages/numpy/testing/utils.py", line 644, > in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 6 decimals > > (mismatch 100.0%) > x: array([[ 0.66940063, 1.82351325, 0.09693803, 0.74958987, > 5.37149648], > [ 2.35737286, -1.68687652, -1.11483151, 2.92536995, 5.39661969], > [ 0.08560546, -1.49569258, 2.3794113 , 0.51512191, > 6.00411424],... > y: array([[2, 4, 1, 0, 1], > [4, 1, 0, 2, 0], > [1, 0, 3, 0, 3],... 
> > ====================================================================== > FAIL: test_cases (test_solvers.TestSolveSylvester) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/sw/src/fink.build/root-scipy-py26-0.13.1-1/sw/lib/python2.6/site-packages/scipy/linalg/tests/test_solvers.py", > line 179, in test_cases > self.check_case(case[0], case[1], case[2]) > File > "/sw/src/fink.build/root-scipy-py26-0.13.1-1/sw/lib/python2.6/site-packages/scipy/linalg/tests/test_solvers.py", > line 175, in check_case > assert_array_almost_equal(np.dot(a, x) + np.dot(x, b), c) > File "/sw/lib/python2.6/site-packages/numpy/testing/utils.py", line 811, > in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "/sw/lib/python2.6/site-packages/numpy/testing/utils.py", line 644, > in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 6 decimals > > (mismatch 25.0%) > x: array([[ 1.00000000e+00, 0.00000000e+00, 0.00000000e+00, > 0.00000000e+00], > [ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00,... > y: array([[ 1., 0., 0., 0.], > [ 0., 1., 0., 0.], > [ 0., 0., 1., 0.], > [ 0., 0., 0., 1.]]) > > ---------------------------------------------------------------------- > Ran 8933 tests in 186.410s > > FAILED (KNOWNFAIL=114, SKIP=210, failures=2) > > whereas both x86_64-apple-darwin12 and x86_64-apple-darwin13, there are no > failures. > > OK (KNOWNFAIL=114, SKIP=203) > > What darwin releases was this update tested on? > Jack > > > > > > > > ========================== > > SciPy 0.13.1 Release Notes > > ========================== > > > > SciPy 0.13.1 is a bug-fix release with no new features compared to > 0.13.0. > > The only changes are several fixes in ``ndimage``, one of which was a > > serious > > regression in ``ndimage.label`` (Github issue 3025), which gave > > incorrect results in 0.13.0. > > > > Issues fixed > > ------------ > > > > - 3025: ``ndimage.label`` returns incorrect results in scipy 0.13.0 > > - 1992: ``ndimage.label`` return type changed from int32 to uint32 > > - 1992: ``ndimage.find_objects`` doesn't work with int32 input in some > cases > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sun Nov 17 11:28:28 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 17 Nov 2013 18:28:28 +0200 Subject: [SciPy-Dev] ANN: Scipy 0.13.1 release In-Reply-To: <20131117154450.GA29454@bromo.med.uc.edu> References: <20131117154450.GA29454@bromo.med.uc.edu> Message-ID: 17.11.2013 17:44, Jack Howarth kirjoitti: [clip] > Unfortunately, the scipy 0.13.1 release has introduced two new failures on the > x86_64-apple-darwin11 target... 0.13.1 cannot have introduced these test failures. The code changes between 0.13.0 and 0.13.1 concern only ndimage, and the failures you report are in linalg. 
--
Pauli Virtanen


From jstevenson131 at gmail.com Mon Nov 18 10:20:13 2013
From: jstevenson131 at gmail.com (Jacob Stevenson)
Date: Mon, 18 Nov 2013 15:20:13 +0000
Subject: [SciPy-Dev] minimizer benchmark
Message-ID: <528A302D.1000001@gmail.com>

Hi Everyone, it seemed to me that the folder scipy/optimize/benchmarks/
was rather lonely, with only one file in it, so I wrote a script which
benchmarks the scipy minimizers. The script simply runs all the
optimizers on the Rosenbrock function and prints various information.
Here's the output sorted by minimization time

Optimizer benchmark on the Rosenbrock function sorted by time
L-BFGS-B   pass  nfev    6  njev   0  nhev   0  time 0.000691891
TNC        pass  nfev   13  njev   0  nhev   0  time 0.00105786
Newton-CG  pass  nfev   17  njev  27  nhev  11  time 0.00400996
SLSQP      pass  nfev   41  njev  27  nhev   0  time 0.00410509
trust-ncg  pass  nfev   18  njev  16  nhev  15  time 0.00415802
dogleg     pass  nfev   16  njev  14  nhev  13  time 0.00426602
CG         pass  nfev   63  njev  63  nhev   0  time 0.0065279
BFGS       pass  nfev   44  njev  44  nhev   0  time 0.0070231
Powell     pass  nfev  524  njev   0  nhev   0  time 0.0262001
COBYLA     fail  nfev 1000  njev   0  nhev   0  time 0.026603

The results are interesting, with L-BFGS-B outperforming all the others
by a significant margin in both total time and total number of function
calls.

I have not submitted a pull request because I have no idea how to fit
what I've done into an existing benchmarking framework (if there is
one). I will submit the pull request if there is interest. In my
opinion it would be really useful to have a long set of benchmarks to
see how all the minimizers perform on different types of minimization
problems.

Here is the script in my scipy fork

https://github.com/js850/scipy/blob/benchmarks/scipy/optimize/benchmarks/bench_optimizers.py

Best wishes,
Jake
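A minimal sketch of this style of timing loop, using the unified minimize()
interface -- this is not the script from the fork above; the method list is
trimmed to a few gradient-based solvers, and rosen/rosen_der are the
Rosenbrock helpers shipped in scipy.optimize:

    import time
    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    # Run each interchangeable minimizer on the same problem and report
    # success, function-evaluation count and wall-clock time.
    x0 = np.array([0.8, 1.2, 0.7])
    for method in ['L-BFGS-B', 'TNC', 'SLSQP', 'CG', 'BFGS']:
        t0 = time.time()
        res = minimize(rosen, x0, jac=rosen_der, method=method)
        print("%-9s success=%-5s nfev=%4d time=%.2g s"
              % (method, res.success, res.nfev, time.time() - t0))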
From josef.pktd at gmail.com Mon Nov 18 11:02:27 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 18 Nov 2013 11:02:27 -0500
Subject: [SciPy-Dev] minimizer benchmark
In-Reply-To: <528A302D.1000001@gmail.com>
References: <528A302D.1000001@gmail.com>
Message-ID: 

On Mon, Nov 18, 2013 at 10:20 AM, Jacob Stevenson wrote:
> Hi Everyone, it seemed to me that the folder scipy/optimize/benchmarks/
> was rather lonely, with only one file in it, so I wrote a script which
> benchmarks the scipy minimizers. The script simply runs all the
> optimizers on the Rosenbrock function and prints various information.
> Here's the output sorted by minimization time
>
> Optimizer benchmark on the Rosenbrock function sorted by time
> L-BFGS-B   pass  nfev    6  njev   0  nhev   0  time 0.000691891
> TNC        pass  nfev   13  njev   0  nhev   0  time 0.00105786
> Newton-CG  pass  nfev   17  njev  27  nhev  11  time 0.00400996
> SLSQP      pass  nfev   41  njev  27  nhev   0  time 0.00410509
> trust-ncg  pass  nfev   18  njev  16  nhev  15  time 0.00415802
> dogleg     pass  nfev   16  njev  14  nhev  13  time 0.00426602
> CG         pass  nfev   63  njev  63  nhev   0  time 0.0065279
> BFGS       pass  nfev   44  njev  44  nhev   0  time 0.0070231
> Powell     pass  nfev  524  njev   0  nhev   0  time 0.0262001
> COBYLA     fail  nfev 1000  njev   0  nhev   0  time 0.026603
>
> The results are interesting, with L-BFGS-B outperforming all the others
> by a significant margin in both total time and total number of function
> calls.
>
> I have not submitted a pull request because I have no idea how to fit
> what I've done into an existing benchmarking framework (if there is
> one). I will submit the pull request if there is interest. In my
> opinion it would be really useful to have a long set of benchmarks to
> see how all the minimizers perform on different types of minimization
> problems.
>
> Here is the script in my scipy fork
>
> https://github.com/js850/scipy/blob/benchmarks/scipy/optimize/benchmarks/bench_optimizers.py

I think showing results for these kinds of benchmarks would make a good
addition to the documentation. It would be very helpful in choosing an
optimizer.

I'm hitting this question very often in statsmodels, both what optimizer
works for me in a specific case, and what works well enough in general to
use as a default for the users of statsmodels.

For example, a while ago Andrea Gavana linked to results for global
optimizers (mostly outside scipy; unfortunately we didn't get any new ones
besides basinhopping)
http://infinity77.net/global_optimization/index.html

Another possibility would be to turn some of the benchmark cases into unit
tests. In a github issue for scipy.stats we were also discussing adding
extra unit tests that are not part of a regular test run, but could test
extra things. The specific case was adding "fuzz tests" that just try out
many different random values for the parameters.

Besides speed it would also be interesting to benchmark how robust they
are. At least for some cases that I try out for statsmodels, I need to use
a boring fmin (Nelder-Mead) to get around bad behavior or bad starting
values.

For example, which optimizers in which scipy version survive a `nan` or an
`inf` in some range of the parameter space? (My impression is that this is
improving with each scipy version.)

Josef

>
> Best wishes,
> Jake
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From argriffi at ncsu.edu Mon Nov 18 11:05:23 2013
From: argriffi at ncsu.edu (alex)
Date: Mon, 18 Nov 2013 11:05:23 -0500
Subject: [SciPy-Dev] minimizer benchmark
In-Reply-To: <528A302D.1000001@gmail.com>
References: <528A302D.1000001@gmail.com>
Message-ID: 

On Mon, Nov 18, 2013 at 10:20 AM, Jacob Stevenson wrote:
> Hi Everyone, it seemed to me that the folder scipy/optimize/benchmarks/
> was rather lonely, with only one file in it, so I wrote a script which
> benchmarks the scipy minimizers. The script simply runs all the
> optimizers on the Rosenbrock function and prints various information.
> In my opinion it would be really useful to have a long set of benchmarks to
> see how all the minimizers perform on different types of minimization
> problems.
>
> Here is the script in my scipy fork
>
> https://github.com/js850/scipy/blob/benchmarks/scipy/optimize/benchmarks/bench_optimizers.py
>
> Best wishes,
> Jake

I'll make a couple of comments, before I forget everything I learned about
Rosenbrock and about the scipy optimization framework when I implemented
the dogleg and trust-ncg functions from the Nocedal and Wright book...

First, thanks for adding these benchmarks! This seems like a natural
comparison to have available in scipy, and I suspect that this has not been
done earlier because Denis has only recently created a general framework
that allows interchangeable minimization functions. I hope that something
like this will go into scipy.

I've written similar one-off benchmarking code for algopy
https://github.com/b45ch1/algopy/tree/master/documentation/sphinx/examples/minimization
before I'd started contributing to scipy. This includes the Rosenbrock
function as well as a couple of other good benchmarking functions, and easy
and hard starting guesses for each; the licenses are compatible, so maybe
this would be useful for future scipy minimize() benchmarks.

Benchmarks are an endless source of nitpicking, so I'll mention two
nitpicks for your code :) First, the different minimize() functions might
differ in how close they get to the optimum before they report that they
have successfully converged. The n-dimensional Rosenbrock function has its
minimum at (1, 1, ..., 1), so it should not be hard to compare closeness.
Second, and possibly more importantly, the relative performances may change
depending on whether the starting guess is easy vs. hard. Your starting
point x0=[0.8, 1.2, 0.7] is relatively near (1, 1, 1), so it is "easy".
Starting points that require walking around the Rosenbrock valley
http://en.wikipedia.org/wiki/File:Rosenbrock_function.svg are more tricky,
and the less clever minimization functions may outperform the more clever
functions at easy starting guesses.

Best,
Alex

From goxberry at gmail.com Mon Nov 18 11:12:11 2013
From: goxberry at gmail.com (Geoff Oxberry)
Date: Mon, 18 Nov 2013 08:12:11 -0800
Subject: [SciPy-Dev] minimizer benchmark
In-Reply-To: 
References: <528A302D.1000001@gmail.com>
Message-ID: 

Would you be interested in more benchmarking problems? There should be a
couple of lists of problems within the thread for
http://scicomp.stackexchange.com/questions/46/where-can-one-obtain-good-data-sets-test-problems-for-testing-algorithms-routine.
(Disclaimer: I've moderated Computational Science Stack Exchange for a
couple of years with the goal of answering questions like, "Where do I find
benchmark problems for numerical optimization?")

The problem of benchmarking within optimization is pretty common, so you
should be able to develop a fairly comprehensive suite of tests if you so
desire.

Geoff

On Mon, Nov 18, 2013 at 8:05 AM, alex wrote:
> On Mon, Nov 18, 2013 at 10:20 AM, Jacob Stevenson
> wrote:
> > Hi Everyone, it seemed to me that the folder scipy/optimize/benchmarks/
> > was rather lonely, with only one file in it, so I wrote a script which
> > benchmarks the scipy minimizers. The script simply runs all the
> > optimizers on the Rosenbrock function and prints various information.
> > Here's the output sorted by minimization time > > > > Optimizer benchmark on the Rosenbrock function sorted by time > > L-BFGS-B pass nfev 6 njev 0 nhev 0 time 0.000691891 > > TNC pass nfev 13 njev 0 nhev 0 time 0.00105786 > > Newton-CG pass nfev 17 njev 27 nhev 11 time 0.00400996 > > SLSQP pass nfev 41 njev 27 nhev 0 time 0.00410509 > > trust-ncg pass nfev 18 njev 16 nhev 15 time 0.00415802 > > dogleg pass nfev 16 njev 14 nhev 13 time 0.00426602 > > CG pass nfev 63 njev 63 nhev 0 time 0.0065279 > > BFGS pass nfev 44 njev 44 nhev 0 time 0.0070231 > > Powell pass nfev 524 njev 0 nhev 0 time 0.0262001 > > COBYLA fail nfev 1000 njev 0 nhev 0 time 0.026603 > > > > The results are interesting with L-BFGS-B outperforming all the others > > by a significant margin in both total time and total number of function > > calls. > > > > I have not submitted a pull request because I have no idea how to fit > > what I've done into an existing benchmarking framework (if there is > > one). I will submit the pull request if there is interest. In my > > opinion it would be really useful to have a long set of benchmarks to > > see how all the minimizers perform on different types of minimization > > problems. > > > > Here is the script in my scipy fork > > > > > https://github.com/js850/scipy/blob/benchmarks/scipy/optimize/benchmarks/bench_optimizers.py > > > > Best wishes, > > Jake > > I'll make a couple of comments, before I forget everything I learned > about Rosenbrock and about the scipy optimization framework when I > implemented the dogleg and trust-ncg functions from the Nocedal and > Wright book... > > First, thanks for adding these benchmarks! This seems like a natural > comparison to have available in scipy, and I suspect that this has not > been done earlier because Denis has only recently created a general > framework that allows interchangeable minimization functions. I hope > that something like this will go into scipy. > > I've written similar one-off benchmarking code for algopy > > https://github.com/b45ch1/algopy/tree/master/documentation/sphinx/examples/minimization > before I'd started contributing to scipy. This includes the > Rosenbrock function as well as a couple of other good benchmarking > functions, and easy and hard starting guesses for each; the licenses > are compatible so maybe this would be useful for future scipy > minimize() benchmarks. > > Benchmarks are an endless source of nitpicking, so I'll mention two > nitpicks for your code :) First, the different minimize() functions > might differ in how close they get to the optimum, before they report > that they have successfully converged. The nth dimensional rosenbrock > function has its minimum at (1, 1, ..., 1) so this should be not so > hard to compare closeness. Second and possibly more importantly, the > relative performances may change depending on whether the starting > guess is easy vs. hard. Your starting point x0=[0.8, 1.2, 0.7] is > relatively near (1, 1, 1) so it is "easy". Starting points that > require walking around the Rosenbrock valley > http://en.wikipedia.org/wiki/File:Rosenbrock_function.svg are more > tricky, and the less clever minimization functions may outperform the > more clever functions at easy starting guesses. > > Best, > Alex > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -- Geoffrey Oxberry, Ph.D., E.I.T. 
goxberry at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gael.varoquaux at normalesup.org Mon Nov 18 11:34:44 2013
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 18 Nov 2013 17:34:44 +0100
Subject: [SciPy-Dev] minimizer benchmark
In-Reply-To: <528A302D.1000001@gmail.com>
References: <528A302D.1000001@gmail.com>
Message-ID: <20131118163444.GC25689@phare.normalesup.org>

On Mon, Nov 18, 2013 at 03:20:13PM +0000, Jacob Stevenson wrote:
> The results are interesting with L-BFGS-B outperforming all the others
> by a significant margin in both total time and total number of function
> calls.

That seems right. BFGS/LBFGS should in general be preferred to other
approaches if your problem is smooth without a noisy gradient and you know
nothing about the Hessian. A word of warning: the clear-cut win of L-BFGS
will depend a bit on the function to optimize. Rosenbrock is amongst the
nastiest functions to optimize. On easier functions (e.g. quadratic or
close to quadratic) results will differ.

By the way, a somewhat lengthy discussion on this, with benchmark scripts,
can be found on
http://scipy-lectures.github.io/advanced/mathematical_optimization/index.html

G

From pablo.winant at gmail.com Mon Nov 18 11:56:48 2013
From: pablo.winant at gmail.com (Pablo Winant)
Date: Mon, 18 Nov 2013 17:56:48 +0100
Subject: [SciPy-Dev] minimizer benchmark
In-Reply-To: <20131118163444.GC25689@phare.normalesup.org>
References: <528A302D.1000001@gmail.com>
 <20131118163444.GC25689@phare.normalesup.org>
Message-ID: 

I have a related question: does anybody know of a simple minimizer function
written in pure Python or Cython? Are there native minimizers in scipy?

I'm asking this question because I need to optimize many small problems,
where the overhead of calling the optimizer becomes non-negligible. I
suspect I could achieve big performance gains if I could call that function
from compiled Python code (using numba).

Best,

Pablo

On Mon, Nov 18, 2013 at 5:34 PM, Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:
> On Mon, Nov 18, 2013 at 03:20:13PM +0000, Jacob Stevenson wrote:
> > The results are interesting with L-BFGS-B outperforming all the others
> > by a significant margin in both total time and total number of function
> > calls.
>
> That seems right. BFGS/LBFGS should in general be preferred to other
> approaches if your problem is smooth without a noisy gradient and you
> know nothing about the Hessian. A word of warning: the clear-cut win of
> L-BFGS will depend a bit on the function to optimize. Rosenbrock is
> amongst the nastiest functions to optimize. On easier functions (e.g.
> quadratic or close to quadratic) results will differ.
>
> By the way, a somewhat lengthy discussion on this, with benchmark
> scripts, can be found on
> http://scipy-lectures.github.io/advanced/mathematical_optimization/index.html
>
> G
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi Mon Nov 18 13:28:50 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 18 Nov 2013 20:28:50 +0200
Subject: [SciPy-Dev] minimizer benchmark
In-Reply-To: 
References: <528A302D.1000001@gmail.com>
 <20131118163444.GC25689@phare.normalesup.org>
Message-ID: 

18.11.2013 18:56, Pablo Winant kirjoitti:
> I have a related question: does anybody know of a simple minimizer
> function written in pure Python or Cython? Are there native minimizers in
> scipy?
> I'm asking this question because I need to optimize many small problems,
> where the overhead of calling the optimizer becomes non-negligible. I
> suspect I could achieve big performance gains if I could call that
> function from compiled Python code (using numba).

Several of the minimizers in Scipy are pure Python. You can take a look at
the source code to see which ones (I don't remember the full list off the
top of my head).

--
Pauli Virtanen


From hadsed at gmail.com Mon Nov 18 15:02:48 2013
From: hadsed at gmail.com (Hadayat Seddiqi)
Date: Mon, 18 Nov 2013 12:02:48 -0800
Subject: [SciPy-Dev] Code-block font on documentation
Message-ID: 

Hello SciPy guys,

The new documentation is great. One problem I have is the font in the code
blocks -- they are too light against that light-green background. Very
small change, but I look at SciPy docs tens of times a day, so it was
something I noticed. Perhaps a bolder font would make it more readable, but
I think just making it darker should be good.

Let me know what you all think.

-Had
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrea.gavana at gmail.com Mon Nov 18 15:59:29 2013
From: andrea.gavana at gmail.com (Andrea Gavana)
Date: Mon, 18 Nov 2013 21:59:29 +0100
Subject: [SciPy-Dev] (Possible) new optimization routine (2) - scipy.optimize
Message-ID: 

Hi All,

since I posted the last time about (possible) new optimization routines in
scipy.optimize, I have been working here and there on making the code for
AMPGO (Adaptive Memory Programming for Global Optimization) a bit more
robust and on expanding the benchmark test suite. Since I recently saw a
few messages about optimization methods flying around on the mailing list,
I thought I might share some more findings and possibly (finally) start
integrating AMPGO into scipy.

First things first: I would love to see AMPGO in scipy, but there are still
a couple of issues to be solved:

1. Some of the local optimization methods AMPGO can use (like L-BFGS-B, TNC
and so on) can take advantage of gradient information, and sometimes people
do actually have access to the gradient of the objective function. The
problem is, given the definition of the Tunnelling function used by AMPGO:

T(x) = (f(x) - aspiration)**2.0 / prod(dist(s, x)) for s in tabu_list

where "dist" is the Euclidean distance between the current point "x" and
one of the previous local optima "s" ("tabu_list" is a list containing 2 or
more of these local optima). (Or see page 4 at
http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf
for a clearer formula for the Tunnelling function.)

I have absolutely no idea how to get the analytical expression of the
gradient of the Tunnelling function, given the gradient of the objective
function. I'm sure it's very much doable, but my calculus skills are way
too inadequate.
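A possible route, sketched here and untested, assuming "dist" is the
Euclidean norm as in the formula above: writing D(x) = prod(dist(s, x)) and
r = f(x) - aspiration, the quotient rule together with
grad(log D) = sum_s (x - s)/dist(s, x)**2 gives

grad T(x) = (2*r*grad_f(x) - r**2 * sum_s (x - s)/dist(s, x)**2) / D(x)

or, as a hypothetical helper in code (tunnel_gradient is not a name from
the AMPGO code):

    import numpy as np

    def tunnel_gradient(x, f, grad_f, aspiration, tabu_list):
        # Gradient of T(x) = (f(x) - aspiration)**2 / prod(||x - s||),
        # singular at the tabu points themselves (where T is, too).
        x = np.asarray(x, dtype=float)
        r = f(x) - aspiration
        diffs = [x - np.asarray(s, dtype=float) for s in tabu_list]
        sq_dists = [np.dot(d, d) for d in diffs]
        D = np.prod(np.sqrt(sq_dists))
        # grad log D = sum over tabu points of (x - s) / ||x - s||**2
        grad_log_D = sum(d / sq for d, sq in zip(diffs, sq_dists))
        return (2.0 * r * grad_f(x) - r**2 * grad_log_D) / D

2.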
As the current code for AMPGO supports local solvers from scipy.optimize and OpenOpt, it turns out to be a pain to generalize its interface in order for AMPGO to be compatible with the minimize() general API. Not to mention the general PR process. In any case, I'll try to gather enough willpower to get AMPGO up to scipy standards and I'll contribute the benchmarks I created as well. So, mentioning the benchmarks and the results, I managed to expand the multi-dimensional test suite from 85 to 184 test functions. The test suite is now one of the most complete and comprehensive in the Open Source world, and it's not in Fortran 77 (luckily enough). The test suite currently contains: - 18 one-dimensional test functions with multiple local/global minima; - 184 multivariate problems (where the number of independent variables ranges from 2 to 17), again with multiple local/global minima. The main page describing the rules, algorithms and motivation is here: http://infinity77.net/global_optimization/index.html A fairly in-depth summary page on AMPGO and sensitivities on its input parameters (local solver, tunnelling strategy, etc...): http://infinity77.net/global_optimization/ampgo.html Algorithms comparisons: http://infinity77.net/global_optimization/multidimensional.html http://infinity77.net/global_optimization/univariate.html Test functions and how they rank in a "difficult-to-solve" context: http://infinity77.net/global_optimization/test_functions.html The overall conclusion is that AMPGO is superior to all the other algorithms I have tried, leaving the second-best (pyasa) behind by a full 20% of number of solved problems. It's also the fastest (function-evaluation-wise), as it is able to outperform all the other algorithms' best results within 200 function evaluations or less (even though the other algorithms limit is 2,000). However, to be fair, it is an algorithm designed for low-dimensional optimization problems (i.e., 1-20 variables). If anyone has any suggestion on how to implement my point (1) above, please feel free to share your thoughts. I have no clue whatsoever. Any comment or requests to add additional benchmarks, please give me a shout. Enjoy :-) . Andrea. "Imagination Is The Only Weapon In The War Against Reality." http://www.infinity77.net # ------------------------------------------------------------- # def ask_mailing_list_support(email): if mention_platform_and_version() and include_sample_app(): send_message(email) else: install_malware() erase_hard_drives() # ------------------------------------------------------------- # -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Nov 18 16:30:42 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 18 Nov 2013 22:30:42 +0100 Subject: [SciPy-Dev] Code-block font on documentation In-Reply-To: References: Message-ID: On Mon, Nov 18, 2013 at 9:02 PM, Hadayat Seddiqi wrote: > Hello SciPy guys, > > The new documentation is great. One problem I have is the font in the code > blocks--they are too light against that light-green background. > I see a white background, light gray code blocks and black font in the code blocks: http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html Where is the light-green? Ralf > Very small change, but I look at SciPy docs tens of times a day, so it was > something I noticed. Perhaps a bolder font would make it more readable, but > I think just making it darker should be good. > > Let me know what you all think. 
> > -Had > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hadsed at gmail.com Mon Nov 18 17:04:56 2013 From: hadsed at gmail.com (Hadayat Seddiqi) Date: Mon, 18 Nov 2013 14:04:56 -0800 Subject: [SciPy-Dev] Code-block font on documentation In-Reply-To: References: Message-ID: Ah OK, I'm realizing that I'm looking at numpy docs I think. Here's an example link: http://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html On Mon, Nov 18, 2013 at 1:30 PM, Ralf Gommers wrote: > > > > On Mon, Nov 18, 2013 at 9:02 PM, Hadayat Seddiqi wrote: > >> Hello SciPy guys, >> >> The new documentation is great. One problem I have is the font in the >> code blocks--they are too light against that light-green background. >> > > I see a white background, light gray code blocks and black font in the > code blocks: > http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html > Where is the light-green? > > Ralf > > >> Very small change, but I look at SciPy docs tens of times a day, so it >> was something I noticed. Perhaps a bolder font would make it more readable, >> but I think just making it darker should be good. >> >> Let me know what you all think. >> >> -Had >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Nov 18 17:16:15 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 18 Nov 2013 23:16:15 +0100 Subject: [SciPy-Dev] Code-block font on documentation In-Reply-To: References: Message-ID: On Mon, Nov 18, 2013 at 11:04 PM, Hadayat Seddiqi wrote: > Ah OK, I'm realizing that I'm looking at numpy docs I think. Here's an > example link: > > http://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html > > Also not green for me. The old styling did have green boxes though. Outdated browser cache or something like that? Ralf > On Mon, Nov 18, 2013 at 1:30 PM, Ralf Gommers wrote: > >> >> >> >> On Mon, Nov 18, 2013 at 9:02 PM, Hadayat Seddiqi wrote: >> >>> Hello SciPy guys, >>> >>> The new documentation is great. One problem I have is the font in the >>> code blocks--they are too light against that light-green background. >>> >> >> I see a white background, light gray code blocks and black font in the >> code blocks: >> http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html >> Where is the light-green? >> >> Ralf >> >> >>> Very small change, but I look at SciPy docs tens of times a day, so it >>> was something I noticed. Perhaps a bolder font would make it more readable, >>> but I think just making it darker should be good. >>> >>> Let me know what you all think. 
>>> >>> -Had >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hadsed at gmail.com Mon Nov 18 17:24:41 2013 From: hadsed at gmail.com (Hadayat Seddiqi) Date: Mon, 18 Nov 2013 14:24:41 -0800 Subject: [SciPy-Dev] Code-block font on documentation In-Reply-To: References: Message-ID: Woops, that was embarrassing. Thanks! On Mon, Nov 18, 2013 at 2:16 PM, Ralf Gommers wrote: > > > > On Mon, Nov 18, 2013 at 11:04 PM, Hadayat Seddiqi wrote: > >> Ah OK, I'm realizing that I'm looking at numpy docs I think. Here's an >> example link: >> >> http://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html >> >> > Also not green for me. The old styling did have green boxes though. > Outdated browser cache or something like that? > > Ralf > > >> On Mon, Nov 18, 2013 at 1:30 PM, Ralf Gommers wrote: >> >>> >>> >>> >>> On Mon, Nov 18, 2013 at 9:02 PM, Hadayat Seddiqi wrote: >>> >>>> Hello SciPy guys, >>>> >>>> The new documentation is great. One problem I have is the font in the >>>> code blocks--they are too light against that light-green background. >>>> >>> >>> I see a white background, light gray code blocks and black font in the >>> code blocks: >>> http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html >>> Where is the light-green? >>> >>> Ralf >>> >>> >>>> Very small change, but I look at SciPy docs tens of times a day, so it >>>> was something I noticed. Perhaps a bolder font would make it more readable, >>>> but I think just making it darker should be good. >>>> >>>> Let me know what you all think. >>>> >>>> -Had >>>> >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >>>> >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Tue Nov 19 06:46:09 2013 From: toddrjen at gmail.com (Todd) Date: Tue, 19 Nov 2013 12:46:09 +0100 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 10:37 AM, Todd wrote: > On Fri, Feb 1, 2013 at 5:19 PM, Todd wrote: > >> On Wed, Jan 9, 2013 at 8:44 PM, wrote: >> >>> On Wed, Jan 9, 2013 at 12:32 PM, Todd wrote: >>> > I am interested in implementing a function for scipy. The function is >>> > called "vector strength". It is basically a measure of how reliably a >>> set >>> > of events occur at a particular phase. 
>>> > >>> > It was originally developed for neuroscience research, to determine >>> how well >>> > a set of neural events sync up with a periodic stimulus like a sound >>> > waveform. >>> > >>> > However, it is useful for determining how periodic a supposedly >>> periodic set >>> > of events really are, for example: >>> > >>> > 1. Determining whether crime is really more common during a full moon >>> and by >>> > how much >>> > 2. Determining how concentrated visitors to a coffee shop are during >>> rush >>> > hour >>> > 3. Determining exactly how concentrated hurricanes are during hurricane >>> > season >>> > >>> > >>> > My thinking is that this could be implemented in stages: >>> > >>> > First would be a Numpy function that would add a set of vectors in >>> polar >>> > coordinates. Given a number of magnitude/angle pairs it would provide >>> a >>> > summed magnitude/angle pair. This would probably be combined with a >>> > cartesian<->polar conversion functions. >>> > >>> > Making use of this function would be a scipy function that would >>> actually >>> > implement the vector strength calculation. This is done by treating >>> each >>> > event as a unit vector with a phase, then taking the average of the >>> vectors. >>> > If all events have the same phase, the result will have an amplitude >>> of 1. >>> > If they all have a different phases, the result will have an amplitude >>> of 0. >>> > >>> > It may even be worth having a dedicated polar dtype, although that may >>> be >>> > too much. >>> > >>> > What does everyone think of this proposal? >>> >>> Is this the same as a mean resultant in circular statistics? >>> >>> def circular_resultant(rads, axis=0): >>> mp = np.sum(np.exp(1j*rads), axis=axis) >>> rho = np.abs(mp) >>> mu = np.angle(mp) >>> >>> return mp, rho, mu >>> >>> Josef >> >> >> It looks to be the same as the first part of my proposal. >> > > So does anyone have any opinions on this? > I would like to revisit this. The original proposal was to implement something called "vector strength", which is a measure of how periodic a set of events are. It seems fairly generally useful. Trying to determine if particular events are periodic, exactly how periodic they are, or what phase of a period they are synchronized to seems to me to be a question that would be encountered a lot. My original proposal was fairly complex, but looking at the mathematics it turns out it can be implemented fairly simply. I have an implementation already. The actual mathematical components is only a few lines, although there is some housekeeping at the beginning and end to allow for multiple target periods. If anyone thinks this is useful I can implement some unit tests and go ahead and submit it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jstevenson131 at gmail.com Tue Nov 19 14:38:05 2013 From: jstevenson131 at gmail.com (Jacob Stevenson) Date: Tue, 19 Nov 2013 19:38:05 +0000 Subject: [SciPy-Dev] minimizer benchmark In-Reply-To: References: <528A302D.1000001@gmail.com> Message-ID: <528BBE1D.9040207@gmail.com> I added several more test functions to the benchmarking data (results are below). Taking into account advice from earlier replied, I also generalized the interface to easily make add new functions to test and to average over multiple starting points. My idea is that if this becomes part of the public repository more functions can easily be added (e.g. by any of you). We could even make several groups of benchmarks. e.g. 
problems with functions only, problems with gradients, problems with
constraints, etc.

Is this something people would want to see in scipy? Would you be
interested in contributing if it was a separate repository?

Here is the link to the relevant code and the results from the new
benchmarking.

https://github.com/js850/scipy/blob/benchmarks/scipy/optimize/benchmarks/bench_optimizers.py

---------------------------------------------------------
Optimizer benchmark: Rosenbrock function
averaged over 10 starting configurations
Optimizer   nfail   nfev   njev   nhev   time
---------------------------------------------------------
L-BFGS-B        0     35      0      0   0.00334594
SLSQP           0     51     35      0   0.00544584
BFGS            0     49     49      0   0.00776296
trust-ncg       0     45     40     39   0.0111956
Newton-CG       0     78    140     63   0.0229024
Powell          0    939      0      0   0.0497912
TNC             1     61      0      0   0.00444455
dogleg          3     17     15     14   0.00463629
CG              3     93     90      0   0.0100623
COBYLA          9    905      0      0   0.0247099

---------------------------------------------------------
Optimizer benchmark: simple quadratic function
averaged over 10 starting configurations
Optimizer   nfail   nfev   njev   nhev   time
---------------------------------------------------------
TNC             0      7      0      0   0.000121212
L-BFGS-B        0      3      0      0   0.000132966
SLSQP           0      3      2      0   0.000221419
CG              0      5      5      0   0.00023191
COBYLA          0     64      0      0   0.000245976
BFGS            0      4      4      0   0.000284338
trust-ncg       0      3      3      2   0.000300932
Newton-CG       0      3      4      2   0.000332808
dogleg          0      3      3      2   0.000438738
Powell          0     52      0      0   0.00148728

---------------------------------------------------------
Optimizer benchmark: function sum(x**2) + x[0]
averaged over 10 starting configurations
Optimizer   nfail   nfev   njev   nhev   time
---------------------------------------------------------
L-BFGS-B        0      3      0      0   0.000148368
TNC             0     12      0      0   0.000190687
SLSQP           0      3      2      0   0.000227571
CG              0      4      4      0   0.00024004
COBYLA          0     62      0      0   0.000283909
BFGS            0      3      3      0   0.000284839
trust-ncg       0      3      3      2   0.000315571
Newton-CG       0      3      4      2   0.000344849
dogleg          0      3      3      2   0.000449395
Powell          0    135      0      0   0.00436816

---------------------------------------------------------
Optimizer benchmark: 1d sin function
averaged over 10 starting configurations
Optimizer   nfail   nfev   njev   nhev   time
---------------------------------------------------------
COBYLA          0     23      0      0   0.000144291
TNC             0     10      0      0   0.000207186
L-BFGS-B        0      6      0      0   0.000222659
CG              0      5      5      0   0.000294614
SLSQP           0      4      3      0   0.000340533
BFGS            0      5      5      0   0.000469255
Powell          0     37      0      0   0.00122664

---------------------------------------------------------
Optimizer benchmark: Booth's function
averaged over 10 starting configurations
Optimizer   nfail   nfev   njev   nhev   time
---------------------------------------------------------
TNC             0      6      0      0   0.000252295
L-BFGS-B        0      5      0      0   0.000304174
CG              0      5      5      0   0.000353932
BFGS            0      5      5      0   0.000439787
SLSQP           0      7      5      0   0.000566912
COBYLA          0     63      0      0   0.000614142
Powell          0     67      0      0   0.00229499

checking gradient 7.00777666894e-06
---------------------------------------------------------
Optimizer benchmark: Beale's function
averaged over 10 starting configurations
Optimizer   nfail   nfev   njev   nhev   time
---------------------------------------------------------
L-BFGS-B        0     25      0      0   0.00166378
TNC             1     37      0      0   0.00172372
SLSQP           1     33     21      0   0.00265439
COBYLA          2    336      0      0   0.00415859
BFGS            3    180    180      0   0.0245004
CG              4    103     99      0   0.00788155
Powell          8   1829      0      0   0.071332

checking gradient 1.53950157701e-08
---------------------------------------------------------
Optimizer benchmark: 4 atom Lennard Jones potential
averaged over 10 starting configurations
Optimizer   nfail   nfev   njev   nhev   time
---------------------------------------------------------
L-BFGS-B        0     54      0      0   0.0231662
SLSQP           0    113     38      0   0.0295753
BFGS            0     73     73      0   0.0356844
Powell          0   2401      0      0   0.372276
CG              1    104    103      0   0.0457192
TNC             6    112      0      0   0.0459664
COBYLA          7    864      0      0   0.112254
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From denis at laxalde.org Tue Nov 19 15:24:19 2013
From: denis at laxalde.org (Denis Laxalde)
Date: Tue, 19 Nov 2013 21:24:19 +0100
Subject: [SciPy-Dev] minimizer benchmark
In-Reply-To: <528A302D.1000001@gmail.com>
References: <528A302D.1000001@gmail.com>
Message-ID: <528BC8F3.8000901@laxalde.org>

Jacob Stevenson a écrit :
> I have not submitted a pull request because I have no idea how to fit
> what I've done into an existing benchmarking framework (if there is
> one). I will submit the pull request if there is interest. In my
> opinion it would be really useful to have a long set of benchmarks to
> see how all the minimizers perform on different types of minimization
> problems.
>
> Here is the script in my scipy fork
>
> https://github.com/js850/scipy/blob/benchmarks/scipy/optimize/benchmarks/bench_optimizers.py

I also think this could be useful. If you want to include this, I'd suggest
using numpy's benchmarking framework (see e.g. numpy.testing.Tester.bench),
as this is done elsewhere (e.g. sparse.linalg).

From eraldo.pomponi at gmail.com Wed Nov 20 06:39:36 2013
From: eraldo.pomponi at gmail.com (Eraldo Pomponi)
Date: Wed, 20 Nov 2013 12:39:36 +0100
Subject: [SciPy-Dev] Vector Strength function
In-Reply-To: 
References: 
Message-ID: 

Dear Todd,

On Tue, Nov 19, 2013 at 12:46 PM, Todd wrote:
> On Wed, Mar 27, 2013 at 10:37 AM, Todd wrote:
>> On Fri, Feb 1, 2013 at 5:19 PM, Todd wrote:
>>> On Wed, Jan 9, 2013 at 8:44 PM, wrote:
>>>> On Wed, Jan 9, 2013 at 12:32 PM, Todd wrote:
>>>> > I am interested in implementing a function for scipy. The function is
>>>> > called "vector strength". It is basically a measure of how reliably
>>>> > a set of events occur at a particular phase.
>>>> >
>>>> > It was originally developed for neuroscience research, to determine
>>>> > how well a set of neural events sync up with a periodic stimulus like
>>>> > a sound waveform.
> > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Wed Nov 20 11:25:32 2013 From: toddrjen at gmail.com (Todd) Date: Wed, 20 Nov 2013 17:25:32 +0100 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: References: Message-ID: On Nov 20, 2013 12:39 PM, "Eraldo Pomponi" wrote: > > Dear Todd, > > > > > On Tue, Nov 19, 2013 at 12:46 PM, Todd wrote: >> >> On Wed, Mar 27, 2013 at 10:37 AM, Todd wrote: >>> >>> On Fri, Feb 1, 2013 at 5:19 PM, Todd wrote: >>>> >>>> On Wed, Jan 9, 2013 at 8:44 PM, wrote: >>>>> >>>>> On Wed, Jan 9, 2013 at 12:32 PM, Todd wrote: >>>>> > I am interested in implementing a function for scipy. The function is >>>>> > called "vector strength". It is basically a measure of how reliably a set >>>>> > of events occur at a particular phase. >>>>> > >>>>> > It was originally developed for neuroscience research, to determine how well >>>>> > a set of neural events sync up with a periodic stimulus like a sound >>>>> > waveform. > > > > Would you suggest some reference where to read the details about this approach? Thanks > > Cheers, > Eraldo This paper is pretty good, although fairly technical (I use it as a reference in the docstring as well): http://link.springer.com/article/10.1007%2Fs00422-013-0561-7 J. Leo van Hemmen. Vector strength after Goldberg, Brown, and von Mises: biological and mathematical perspectives. Biological Cybernetics August 2013, Volume 107, Issue 4, pp 385-396 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.haessig at crans.org Wed Nov 20 16:51:54 2013 From: pierre.haessig at crans.org (Pierre Haessig) Date: Wed, 20 Nov 2013 22:51:54 +0100 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: References: Message-ID: <528D2EFA.2040402@crans.org> Hi, Le 20/11/2013 17:25, Todd a ?crit : > > This paper is pretty good, although fairly technical (I use it as a > reference in the docstring as well): > > http://link.springer.com/article/10.1007%2Fs00422-013-0561-7 > Sounds pretty interesting. I was curious to read it but I didn't find an author version on Google Scholar. Is there any freely available manuscript somewhere ? best, Pierre From toddrjen at gmail.com Wed Nov 20 17:32:48 2013 From: toddrjen at gmail.com (Todd) Date: Wed, 20 Nov 2013 23:32:48 +0100 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: <528D2EFA.2040402@crans.org> References: <528D2EFA.2040402@crans.org> Message-ID: On Wed, Nov 20, 2013 at 10:51 PM, Pierre Haessig wrote: > Hi, > > Le 20/11/2013 17:25, Todd a ?crit : > > > > This paper is pretty good, although fairly technical (I use it as a > > reference in the docstring as well): > > > > http://link.springer.com/article/10.1007%2Fs00422-013-0561-7 > > > Sounds pretty interesting. I was curious to read it but I didn't find an > author version on Google Scholar. Is there any freely available > manuscript somewhere ? > > best, > Pierre > > > I couldn't find a free copy of that paper, but this one has a pdf available and seems to cover the basics (and it is by the same group so it has some of the same figures and most of the same equations). I may use it in the docstring instead. You really only need to read the first two pages (second and third pages of the pdf since the first page is just citation information). 
http://scitation.aip.org/content/aip/journal/chaos/21/4/10.1063/1.3670512 van Hemmen, JL, Longtin, A, and Vollmayr, AN. Testing resonating vector strength: Auditory system, electric fish, and noise. Chaos 21, 047508 (2011); http://dx.doi.org/10.1063/1.3670512 -------------- next part -------------- An HTML attachment was scrubbed... URL: From eraldo.pomponi at gmail.com Wed Nov 20 23:02:28 2013 From: eraldo.pomponi at gmail.com (Eraldo Pomponi) Date: Thu, 21 Nov 2013 08:02:28 +0400 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: References: <528D2EFA.2040402@crans.org> Message-ID: Dear Todd, > I couldn't find a free copy of that paper, but this one has a pdf available and seems to cover the basics (and it is by the same group so it has some of the same figures and most of the same equations). I may use it in the docstring instead. You really only need to read the first two pages (second and third pages of the pdf since the first page is just citation information). > > http://scitation.aip.org/content/aip/journal/chaos/21/4/10.1063/1.3670512 > > van Hemmen, JL, Longtin, A, and Vollmayr, AN. > Testing resonating vector strength: Auditory system, electric fish, and noise. > Chaos 21, 047508 (2011); http://dx.doi.org/10.1063/1.3670512 Thanks a lot for this new reference .... I searched too for a free copy of the first one without success. Cheers, Eraldo -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.haessig at crans.org Thu Nov 21 04:59:58 2013 From: pierre.haessig at crans.org (Pierre Haessig) Date: Thu, 21 Nov 2013 10:59:58 +0100 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: References: <528D2EFA.2040402@crans.org> Message-ID: <528DD99E.9070407@crans.org> Le 20/11/2013 23:32, Todd a ?crit : > I couldn't find a free copy of that paper, but this one has a pdf > available and seems to cover the basics (and it is by the same group > so it has some of the same figures and most of the same equations). I > may use it in the docstring instead. You really only need to read the > first two pages (second and third pages of the pdf since the first > page is just citation information). > > http://scitation.aip.org/content/aip/journal/chaos/21/4/10.1063/1.3670512 > > van Hemmen, JL, Longtin, A, and Vollmayr, AN. > Testing resonating vector strength: Auditory system, electric fish, > and noise. > Chaos 21, 047508 (2011); http://dx.doi.org/10.1063/1.3670512 Thanks for this new reference. I've only read part of it now. It seems to me that van Hemmen's "Resonating Vector Strength" is really close to the Fourier transform, but specialized for "spike signals". Because the input data is the timing of the events, it's a very sparse description (as opposed to a dense vector with lots of "0"s and few "1"s) which I guess leads to a quite efficient computation (compared to a dense fft). Is that right ? best, Pierre -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 897 bytes Desc: OpenPGP digital signature URL: From toddrjen at gmail.com Thu Nov 21 06:41:38 2013 From: toddrjen at gmail.com (Todd) Date: Thu, 21 Nov 2013 12:41:38 +0100 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: <528DD99E.9070407@crans.org> References: <528D2EFA.2040402@crans.org> <528DD99E.9070407@crans.org> Message-ID: On Nov 21, 2013 11:00 AM, "Pierre Haessig" wrote: > > Le 20/11/2013 23:32, Todd a ?crit : > > I couldn't find a free copy of that paper, but this one has a pdf > > available and seems to cover the basics (and it is by the same group > > so it has some of the same figures and most of the same equations). I > > may use it in the docstring instead. You really only need to read the > > first two pages (second and third pages of the pdf since the first > > page is just citation information). > > > > http://scitation.aip.org/content/aip/journal/chaos/21/4/10.1063/1.3670512 > > > > van Hemmen, JL, Longtin, A, and Vollmayr, AN. > > Testing resonating vector strength: Auditory system, electric fish, > > and noise. > > Chaos 21, 047508 (2011); http://dx.doi.org/10.1063/1.3670512 > Thanks for this new reference. > > I've only read part of it now. It seems to me that van Hemmen's > "Resonating Vector Strength" is really close to the Fourier transform, > but specialized for "spike signals". Because the input data is the > timing of the events, it's a very sparse description (as opposed to a > dense vector with lots of "0"s and few "1"s) which I guess leads to a > quite efficient computation (compared to a dense fft). Is that right ? Yes, that is pretty much correct. However, if you know what the period you are looking for is (the conventional vector strength), or can narrow it down to a small list or range, then it is much easier to interpret than an FFT. Also, although it is not emphasized in the paper, due to the mathematics you get the phase of the events for free. So my implementation returns the phase automatically (people who don't want it can just exclude that value). There is one thing I am not 100% sure about. There are two ways to support multiple periods. The first is using matrix multiplication, which is faster but results in a 2D matrix so it can take a lot more memory. The other is a loop, which uses less memory but require more processing time. I am not sure which is the recommended approach in this sort of situation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Thu Nov 21 11:52:22 2013 From: toddrjen at gmail.com (Todd) Date: Thu, 21 Nov 2013 17:52:22 +0100 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: References: <528D2EFA.2040402@crans.org> <528DD99E.9070407@crans.org> Message-ID: On Thu, Nov 21, 2013 at 12:41 PM, Todd wrote: > > On Nov 21, 2013 11:00 AM, "Pierre Haessig" > wrote: > > > > Le 20/11/2013 23:32, Todd a ?crit : > > > I couldn't find a free copy of that paper, but this one has a pdf > > > available and seems to cover the basics (and it is by the same group > > > so it has some of the same figures and most of the same equations). I > > > may use it in the docstring instead. You really only need to read the > > > first two pages (second and third pages of the pdf since the first > > > page is just citation information). > > > > > > > http://scitation.aip.org/content/aip/journal/chaos/21/4/10.1063/1.3670512 > > > > > > van Hemmen, JL, Longtin, A, and Vollmayr, AN. 
From toddrjen at gmail.com Thu Nov 21 11:52:22 2013
From: toddrjen at gmail.com (Todd)
Date: Thu, 21 Nov 2013 17:52:22 +0100
Subject: Re: [SciPy-Dev] Vector Strength function
In-Reply-To: References: <528D2EFA.2040402@crans.org> <528DD99E.9070407@crans.org>
Message-ID:

On Thu, Nov 21, 2013 at 12:41 PM, Todd wrote:
> On Nov 21, 2013 11:00 AM, "Pierre Haessig" wrote:
> > Le 20/11/2013 23:32, Todd a écrit :
> > > I couldn't find a free copy of that paper, but this one has a pdf
> > > available and seems to cover the basics (and it is by the same group
> > > so it has some of the same figures and most of the same equations). I
> > > may use it in the docstring instead. You really only need to read the
> > > first two pages (second and third pages of the pdf since the first
> > > page is just citation information).
> > >
> > > http://scitation.aip.org/content/aip/journal/chaos/21/4/10.1063/1.3670512
> > >
> > > van Hemmen, JL, Longtin, A, and Vollmayr, AN.
> > > Testing resonating vector strength: Auditory system, electric fish,
> > > and noise.
> > > Chaos 21, 047508 (2011); http://dx.doi.org/10.1063/1.3670512
> > Thanks for this new reference.
> >
> > I've only read part of it now. It seems to me that van Hemmen's
> > "Resonating Vector Strength" is really close to the Fourier transform,
> > but specialized for "spike signals". Because the input data is the
> > timing of the events, it's a very sparse description (as opposed to a
> > dense vector with lots of "0"s and few "1"s), which I guess leads to a
> > quite efficient computation (compared to a dense fft). Is that right?
>
> Yes, that is pretty much correct. However, if you know what the period
> you are looking for is (the conventional vector strength), or can narrow it
> down to a small list or range, then it is much easier to interpret than an
> FFT.
>
> Also, although it is not emphasized in the paper, due to the mathematics
> you get the phase of the events for free. So my implementation returns the
> phase automatically (people who don't want it can just exclude that value).
>
> There is one thing I am not 100% sure about. There are two ways to
> support multiple periods. The first is using matrix multiplication, which
> is faster but results in a 2D matrix, so it can take a lot more memory. The
> other is a loop, which uses less memory but requires more processing time.
> I am not sure which is the recommended approach in this sort of situation.

I have a full implementation of the matrix-based version of the function on github now. It has documentation and unit tests and should be ready to create a pull request if it is okay. It can be found here:

https://github.com/toddrjen/scipy/compare/vectorstrength
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From post.moni at t-online.de Thu Nov 21 11:59:35 2013
From: post.moni at t-online.de (Monika)
Date: Thu, 21 Nov 2013 16:59:35 +0000 (UTC)
Subject: Re: [SciPy-Dev] SLSQP Constrained Optimizer Status
References:
Message-ID:

Rob Falck <robfalck at gmail.com> writes:

> I'm currently implementing the Sequential Least Squares Quadratic
> Programming (SLSQP) optimizer, by Dieter Kraft, for use in Scipy. The
> Fortran code being wrapped with F2PY is here:
> http://www.netlib.org/toms/733 (its use within Scipy has been cleared)
>
> SLSQP provides for bounds on the independent variables, as well as
> equality and inequality constraint functions, which is a capability that
> doesn't exist in scipy.optimize.
>
> Currently the code works, although the constraint normals are being
> generated by approximation. I'm working on a way to pass these in. I
> think the most elegant way will be a single function that returns the
> matrix of constraint normals.
>
> For a demonstration of what the code can do, here is an optimization of
> f(x,y) = 2xy + 2x - x**2 - 2y**2. Example 14.2 in Chapra & Canale gives
> the maximum as x=2.0, y=1.0. The unbounded optimization tests find this
> solution. As expected, it's faster when derivatives are provided rather
> than approximated.
>
> Unbounded optimization. Derivatives approximated.
> Elapsed time: 1.45792961121 ms
> Results [[1.9999999515712266, 0.99999996181577444], -1.9999999999999984,
> 4, 0, 'Optimization terminated successfully.']
>
> Unbounded optimization. Derivatives provided.
> Elapsed time: 1.03211402893 ms
> Results [[1.9999999515712266, 0.99999996181577444], -1.9999999999999984,
> 4, 0, 'Optimization terminated successfully.']
>
> The following example uses an equality constraint to find the optimum
> when x=y.
>
> Bound optimization. Derivatives approximated.
> Elapsed time: 1.384973526 ms
> Results [[0.99999996845920858, 0.99999996845920858], -0.99999999999999889,
> 4, 0, 'Optimization terminated successfully.']
>
> I've tried to conform to the syntax used by the other optimizers in
> scipy.optimize. The function definition and doc string are below.
> If anyone is interested in testing it out, let me know.
>
> def fmin_slsqp(func, x0, eqcons=[], f_eqcons=None, ieqcons=[],
>                f_ieqcons=None, bounds=[], fprime=None, fprime_cons=None,
>                args=(), iter=100, acc=1.0E-6, iprint=1, full_output=0,
>                epsilon=_epsilon):
>     """
>     Minimize a function using Sequential Least SQuares Programming
>
>     Description:
>         Python interface function for the SLSQP Optimization subroutine
>         originally implemented by Dieter Kraft.
>
>     Inputs:
>         func        - Objective function (in the form func(x, *args))
>         x0          - Initial guess for the independent variable(s).
>         eqcons      - A list of functions of length n such that
>                       eqcons[j](x0,*args) == 0.0 in a successfully
>                       optimized problem.
>         f_eqcons    - A function of the form f_eqcons(x, *args) that
>                       returns an array in which each element must equal
>                       0.0 in a successfully optimized problem. If
>                       f_eqcons is specified, eqcons is ignored.
>         ieqcons     - A list of functions of length n such that
>                       ieqcons[j](x0,*args) >= 0.0 in a successfully
>                       optimized problem.
>         f_ieqcons   - A function of the form f_ieqcons(x0, *args) that
>                       returns an array in which each element must be
>                       greater or equal to 0.0 in a successfully
>                       optimized problem. If f_ieqcons is specified,
>                       ieqcons is ignored.
>         bounds      - A list of tuples specifying the lower and upper
>                       bound for each independent variable
>                       [(xl0, xu0), (xl1, xu1), ...]
>         fprime      - A function that evaluates the partial derivatives
>                       of func.
>         fprime_cons - A function of the form f(x, *args) that returns
>                       the m by n array of constraint normals. If not
>                       provided, the normals will be approximated.
>                       Equality constraint normals precede inequality
>                       constraint normals.
>         args        - A sequence of additional arguments passed to func
>                       and fprime.
>         iter        - The maximum number of iterations (int).
>         acc         - Requested accuracy (float).
>         iprint      - The verbosity of fmin_slsqp.
>                       iprint <= 0 : Silent operation
>                       iprint == 1 : Print summary upon completion (default)
>                       iprint >= 2 : Print status of each iterate and summary
>         full_output - If 0, return only the minimizer of func (default).
>                       Otherwise, output final objective function and
>                       summary information.
>         epsilon     - The step size for finite-difference derivative
>                       estimates.
>
>     Outputs: (x, {fx, gnorm, its, imode, smode})
>         x           - The final minimizer of func.
>         fx          - The final value of the objective function.
>         its         - The number of iterations.
>         imode       - The exit mode from the optimizer, as an integer.
>         smode       - A string describing the exit mode from the optimizer.
>
>     Exit modes are defined as follows:
>         -1 : Gradient evaluation required (g & a)
>          0 : Optimization terminated successfully.
>          1 : Function evaluation required (f & c)
>          2 : Number of equality constraints larger than number of
>              independent variables
>          3 : More than 3*n iterations in LSQ subproblem
>          4 : Inequality constraints incompatible
>          5 : Singular matrix E in LSQ subproblem
>          6 : Singular matrix C in LSQ subproblem
>          7 : Rank-deficient equality constraint subproblem HFTI
>          8 : Positive directional derivative for linesearch
>          9 : Iteration limit exceeded
>     """
>
> --
> - Rob Falck
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

I have a question regarding the SLSQP Code. If inexact linesearch is used, which L1-penalty function is used?

I'm looking forward to your answer,
Monika
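For readers new to this interface: fmin_slsqp did land in scipy.optimize, and a minimal call mirroring the Chapra & Canale example from the quoted post looks roughly like this (the starting point x0 is chosen arbitrarily):

    import numpy as np
    from scipy.optimize import fmin_slsqp

    # maximize f(x, y) = 2xy + 2x - x**2 - 2y**2 by minimizing its negative
    def func(p):
        x, y = p
        return -(2.0 * x * y + 2.0 * x - x ** 2 - 2.0 * y ** 2)

    # unconstrained: optimum at x = 2, y = 1 (Chapra & Canale, Example 14.2)
    xopt = fmin_slsqp(func, x0=[-1.0, 1.0])

    # with the equality constraint x == y: optimum at x = y = 1
    xeq = fmin_slsqp(func, x0=[-1.0, 1.0], eqcons=[lambda p: p[0] - p[1]])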
From cimrman3 at ntc.zcu.cz Fri Nov 22 07:50:26 2013
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 22 Nov 2013 13:50:26 +0100
Subject: [SciPy-Dev] ANN: SfePy 2013.4
Message-ID: <528F5312.30700@ntc.zcu.cz>

I am pleased to announce release 2013.4 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems of coupled partial differential equations by the finite element method. The code is based on the NumPy and SciPy packages. It is distributed under the new BSD license.

Home page: http://sfepy.org
Mailing list: http://groups.google.com/group/sfepy-devel
Git (source) repository, issue tracker, wiki: http://github.com/sfepy

Highlights of this release
--------------------------

- simplified quadrature definition
- equation sequence solver
- initial support for 'plate' integration/connectivity type
- script for visualization of quadrature points and weights

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical).

Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order): Vladimír Lukeš, Jaroslav Vondřejc

From ricardomayerb at gmail.com Fri Nov 22 12:37:10 2013
From: ricardomayerb at gmail.com (Ricardo Mayer)
Date: Fri, 22 Nov 2013 14:37:10 -0300
Subject: [SciPy-Dev] any chance of enabling sort option in scipy.linalg.qz?
Message-ID:

Dear all,

As of release 0.13 of scipy, the sorting option for the generalized Schur decomposition (aka QZ) has been disabled because of a failure in wrapping the corresponding function from LAPACK.

Is there any chance of fixing this for the next release (or sooner)?

Doing the QZ decomposition in such a way that generalized eigenvalues with absolute values greater than one appear last is required by a pretty common algorithm in (macro)economics for solving linear rational expectation models. I'd hate to have to go back to matlab to do just this :-) (which I won't!!! but still ...)

best,
R

PS: browsing the current documentation for 0.14, it seems that this situation has not changed yet.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From padarn at gmail.com Sun Nov 24 01:11:12 2013
From: padarn at gmail.com (Padarn Wilson)
Date: Sun, 24 Nov 2013 17:11:12 +1100
Subject: [SciPy-Dev] Simple arithmetic with scipy.stats.distributions
Message-ID:

Hi,

I was wondering if anyone else would be interested in having some simple arithmetic functionality for 'distributions'?
So that for example you could add distributions, then sample from them, get the pdf/cdf etc.

I've hacked something together for my own purposes, but it seems like it'd be nice general functionality?

Cheers,
Padarn
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From deshpande.jaidev at gmail.com Sun Nov 24 05:49:56 2013
From: deshpande.jaidev at gmail.com (Jaidev Deshpande)
Date: Sun, 24 Nov 2013 16:19:56 +0530
Subject: Re: [SciPy-Dev] Simple arithmetic with scipy.stats.distributions
Message-ID:

Hi,

On Sun, Nov 24, 2013 at 11:41 AM, Padarn Wilson wrote:
> Hi,
>
> I was wondering if anyone else would be interested in having some simple
> arithmetic functionality for 'distributions'? So that for example you could
> add distributions, then sample from them, get the pdf/cdf etc.

I agree. I for one would love to have a NumPy equivalent of the `sample` function in R.

> I've hacked something together for my own purposes, but it seems like it'd
> be nice general functionality?

Have you hosted your code anywhere?

> Cheers,
> Padarn
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

--
JD

From josef.pktd at gmail.com Sun Nov 24 07:17:57 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 24 Nov 2013 07:17:57 -0500
Subject: Re: [SciPy-Dev] Simple arithmetic with scipy.stats.distributions
Message-ID:

On Sun, Nov 24, 2013 at 5:49 AM, Jaidev Deshpande wrote:
> On Sun, Nov 24, 2013 at 11:41 AM, Padarn Wilson wrote:
>> I was wondering if anyone else would be interested in having some simple
>> arithmetic functionality for 'distributions'? So that for example you could
>> add distributions, then sample from them, get the pdf/cdf etc.

How do you want to do that, or what do you have in mind here? Some analytical calculations or Monte Carlo propagation?

https://pypi.python.org/pypi/soerp looks interesting (based on browsing its documentation)

> I agree. I for one would love to have a NumPy equivalent of the
> `sample` function in R.

What does this do that we don't have yet in numpy? np.random.choice?

Josef

>> I've hacked something together for my own purposes, but it seems like it'd
>> be nice general functionality?
> Have you hosted your code anywhere?
>
> --
> JD
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
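For reference, np.random.choice (added in NumPy 1.7) covers most of what R's `sample` does; a minimal illustration:

    import numpy as np

    # draw 5 items with replacement, with non-uniform probabilities,
    # roughly what R's sample(x, size, replace=TRUE, prob=...) does
    picks = np.random.choice(['a', 'b', 'c'], size=5, p=[0.5, 0.3, 0.2])

    # draw 3 of the integers 0..9 without replacement
    subset = np.random.choice(10, size=3, replace=False)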
From pav at iki.fi Sun Nov 24 12:38:55 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 24 Nov 2013 19:38:55 +0200
Subject: Re: [SciPy-Dev] SLSQP Constrained Optimizer Status
Message-ID:

21.11.2013 18:59, Monika kirjoitti:
[clip]
> I have a question regarding the SLSQP Code. If inexact linesearch is used,
> which L1-penalty function is used?

The source code can be found in http://www.netlib.org/toms/733

--
Pauli Virtanen

From padarn at gmail.com Sun Nov 24 15:28:31 2013
From: padarn at gmail.com (Padarn Wilson)
Date: Mon, 25 Nov 2013 07:28:31 +1100
Subject: Re: [SciPy-Dev] Simple arithmetic with scipy.stats.distributions
Message-ID:

Apologies all, I accidentally had digest mode set, so I'm replying manually.

>> I was wondering if anyone else would be interested in having some simple
>> arithmetic functionality for 'distributions'? So that for example you could
>> add distributions, then sample from them, get the pdf/cdf etc.
> How do you want to do that, or what do you have in mind here?
> Some analytical calculations or Monte Carlo propagation?
> https://pypi.python.org/pypi/soerp looks interesting (based on
> browsing its documentation)

Well, at the moment I'm just using the PDF (and random samples, but that is easy), and doing this approximately by taking a discrete convolution. For something outside my specific use case though, something like Monte Carlo propagation is what I had in mind (I wanted to be able to use approximate densities eventually too).

However, soerp does look good. Thanks for pointing that out. Perhaps not so useful for the approximate cases, but good otherwise.

>> I've hacked something together for my own purposes, but it seems like it'd
>> be nice general functionality?
> Have you hosted your code anywhere?

No - it is tied into some private code at the moment. I'll clean it up and host it on Github, but it isn't anything special.

Thanks,
Padarn
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
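A rough sketch of the discrete-convolution approach Padarn describes, for the density of a sum of two independent variables (the grid below is arbitrary):

    import numpy as np
    from scipy import stats

    # common, uniform grid on which both densities are evaluated
    x = np.linspace(-10.0, 10.0, 2001)
    dx = x[1] - x[0]

    pdf_a = stats.norm(0, 1).pdf(x)
    pdf_b = stats.gamma(2).pdf(x)        # zero for x < 0

    # density of A + B: discrete convolution, rescaled by the grid step
    pdf_sum = np.convolve(pdf_a, pdf_b, mode='full') * dx
    x_sum = x[0] + x[0] + dx * np.arange(pdf_sum.size)

    # Monte Carlo propagation alternative: just add samples
    samples = stats.norm(0, 1).rvs(100000) + stats.gamma(2).rvs(100000)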
From svenbrauch at googlemail.com Mon Nov 25 03:01:43 2013
From: svenbrauch at googlemail.com (Sven Brauch)
Date: Mon, 25 Nov 2013 09:01:43 +0100
Subject: [SciPy-Dev] Supporting numpy/scipy in an IDE
Message-ID: <1927747.BK2LNiVUDe@localhost.localdomain>

Hi!

I'm writing a plugin for an IDE which does things like code completion on Python code. Usually, it gets the necessary information from static analysis of Python code (which works fairly well for many libraries); however, in the case of numpy, this obviously does not work at all because numpy is not written in pure Python.

For improving numpy support, I would especially need the following information:

* What are the arguments of a particular function? Python's inspect won't tell me; it seems like this information is just available in the prose text docstring. The website has it in a somewhat machine-readable way; is there a method to get the actual list of parameters which does not involve parsing that? Or is the table on the website based on parsing the docstring?

* What kind of object does a function return? Of course this is not always well-defined, but in most cases it is. This information, too, only seems to be available in the prose docstrings, if at all -- and different from the argument specification, it's usually just mentioned somewhere in a not even remotely machine-readable form.

My question is thus: Do you see any reasonable (automatable) way to extract this information from any resource (code, docs, ...)? I think having this information in a machine-readable form would also be beneficial for various other projects.

I'm happy about every suggestion. I dislike solutions based on inspecting things at runtime; it is scary to run code a user types into the editor, and it will also be slow in a variety of cases.

Best regards,
Sven

From matt at plot.ly Mon Nov 25 03:35:47 2013
From: matt at plot.ly (Matt Sundquist)
Date: Mon, 25 Nov 2013 00:35:47 -0800
Subject: [SciPy-Dev] Plotly: Python Graphing and Analytics feedback and "NumPy for Matlab Users" wiki
Message-ID:

Hi SciPy developers,

My name is Matt, and I'm part of Plot.ly, a graphing and analytics project. Per a helpful suggestion from Pauli Virtanen, I thought I'd write to see if it might be possible to list Plotly on this page (NumPy for Matlab Users) as a "MATLAB packages/tools" entry for graphing and then as a "Link."

More broadly, it would be incredibly cool to hear suggestions and feedback from the experts on this list on what we can do to improve Plot.ly and integrate with and support SciPy users.

Background: Plotly has APIs for Python and MATLAB that allow users to create interactive, web-based graphs. You can import files, copy and paste data, and stream it in with APIs. Users can then style graphs with code and the GUI. We store data and graphs together for convenience of access, and users can share with a URL or as a jointly-editable graph/data project, download them, or embed them as an iframe. We have a Python shell (NumPy supported) and a grid that supports stats, functions, and fits for controlling data.

Here is an IPython demo showing interactive graphs, and a Notebook showing multiple axes, subplots, and insets. This talk from the Montreal Python Meetup shows an IPython demo. You can also stream data from your Arduino or RasPi with the Arduino or Python API. We also have a gallery that you can lift code from to re-make graphs (screenshot below).

We're a quite new startup, and any and all support, suggestions, and help are greatly appreciated and go a long way. It would be awesome to be listed on the wiki, and if I should do anything to help that out or write anything to put on the site, I'm happy to. I made an account, but it looks like I don't have permission to edit.

All my best,
Matt

[image: Inline image 1]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Gallery.png
Type: image/png
Size: 286629 bytes
Desc: not available
URL:

From marmaduke.woodman at univ-amu.fr Mon Nov 25 04:55:32 2013
From: marmaduke.woodman at univ-amu.fr (Marmaduke Woodman)
Date: Mon, 25 Nov 2013 10:55:32 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
In-Reply-To: <1927747.BK2LNiVUDe@localhost.localdomain>
References: <1927747.BK2LNiVUDe@localhost.localdomain>
Message-ID:

Hi,

If by "website" you mean the usual NumPy/SciPy docs [1], these are generated by Sphinx, which seems to identify nicely the arguments and returns that are written out as RST in the docstring format [2]. You could adapt Sphinx's parsing/building classes to get "direct" access to the docstring info?

cheers,
Marmaduke

[1] e.g. http://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html
[2] https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt#id4
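Following this suggestion, the docscrape module that ships with the numpydoc Sphinx extension can be used directly. A small sketch, assuming the 2013-era API in which each section entry is a (name, type, description) triple:

    import numpy as np
    from numpydoc.docscrape import NumpyDocString

    doc = NumpyDocString(np.fft.fft.__doc__)

    # list the declared return values and their (human-written) types
    for name, typ, desc in doc['Returns']:
        print(name, '->', typ)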
On Mon, Nov 25, 2013 at 9:01 AM, Sven Brauch wrote:
> Hi!
>
> I'm writing a plugin for an IDE which does things like code completion on
> Python code. Usually, it gets the necessary information from static analysis
> of Python code (which works fairly well for many libraries); however, in the
> case of numpy, this obviously does not work at all because numpy is not
> written in pure Python.
>
> For improving numpy support, I would especially need the following
> information:
>
> * What are the arguments of a particular function? Python's inspect won't
>   tell me; it seems like this information is just available in the prose
>   text docstring. The website has it in a somewhat machine-readable way;
>   is there a method to get the actual list of parameters which does not
>   involve parsing that? Or is the table on the website based on parsing
>   the docstring?
>
> * What kind of object does a function return? Of course this is not always
>   well-defined, but in most cases it is. This information, too, only seems
>   to be available in the prose docstrings, if at all -- and different from
>   the argument specification, it's usually just mentioned somewhere in a
>   not even remotely machine-readable form.
>
> My question is thus: Do you see any reasonable (automatable) way to extract
> this information from any resource (code, docs, ...)? I think having this
> information in a machine-readable form would also be beneficial for various
> other projects.
>
> I'm happy about every suggestion. I dislike solutions based on inspecting
> things at runtime; it is scary to run code a user types into the editor,
> and it will also be slow in a variety of cases.
>
> Best regards,
> Sven
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From svenbrauch at googlemail.com Mon Nov 25 05:11:19 2013
From: svenbrauch at googlemail.com (Sven Brauch)
Date: Mon, 25 Nov 2013 11:11:19 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
In-Reply-To: References: <1927747.BK2LNiVUDe@localhost.localdomain>
Message-ID: <1717647.TYGHmCova0@localhost.localdomain>

Hi!

On Monday 25 November 2013 10:55:32 Marmaduke Woodman wrote:
> You could adapt Sphinx's parsing/building classes to get "direct" access to
> the docstring info?

Hmm, yes, for the argument count and names that would work. But look at the types, e.g. in [1]: "out: complex ndarray" is not really machine-readable. I would need the actual type it returns, optimally the exact type (which module it is from etc.) but at least the class name.

For the arguments the types are not that important, but having the return types is really essential information (otherwise, if the user writes a = numpy.fft.fft(...) and then requests code-completion for a, I can do nothing).

Best regards,
Sven

______________
[1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft.html#numpy.fft.fft

From marmaduke.woodman at univ-amu.fr Mon Nov 25 05:24:41 2013
From: marmaduke.woodman at univ-amu.fr (Marmaduke Woodman)
Date: Mon, 25 Nov 2013 11:24:41 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
In-Reply-To: <1717647.TYGHmCova0@localhost.localdomain>
References: <1927747.BK2LNiVUDe@localhost.localdomain> <1717647.TYGHmCova0@localhost.localdomain>
Message-ID:

I think "complex ndarray" is probably pretty good. It's unlikely someone would write ndarray if they didn't mean a NumPy array.

So, already you could complete .shape, .mean, etc. which is pretty good. Knowing it's complex, you complete .imag, .real, no?

The existence of certain methods, or not, doesn't depend on whether it's complex64 or complex128 (and arguably shouldn't), so I would consider this a win.

I realize it wouldn't be scalable if you have 100s of types with wildly different methods etc.

On Mon, Nov 25, 2013 at 11:11 AM, Sven Brauch wrote:
> Hi!
>
> On Monday 25 November 2013 10:55:32 Marmaduke Woodman wrote:
> > You could adapt Sphinx's parsing/building classes to get "direct" access to
> > the docstring info?
> Hmm, yes, for the argument count and names that would work. But look at the
> types, e.g. in [1]: "out: complex ndarray" is not really machine-readable.
> I would need the actual type it returns, optimally the exact type (which
> module it is from etc.) but at least the class name.
>
> For the arguments the types are not that important, but having the return
> types is really essential information (otherwise, if the user writes a =
> numpy.fft.fft(...) and then requests code-completion for a, I can do
> nothing).
>
> Best regards,
> Sven
>
> ______________
> [1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft.html#numpy.fft.fft
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bussonniermatthias at gmail.com Mon Nov 25 05:45:06 2013
From: bussonniermatthias at gmail.com (Matthias BUSSONNIER)
Date: Mon, 25 Nov 2013 11:45:06 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
In-Reply-To: References: <1927747.BK2LNiVUDe@localhost.localdomain> <1717647.TYGHmCova0@localhost.localdomain>
Message-ID: <2266FEF5-BAAC-46BE-93A4-F300A3E26959@gmail.com>

Hi,

we **might** have some of that in IPython somewhere, as IIRC someone asked about cython completing on cython methods when the signature is embedded into the docstring.

It seems to work partially with numpy, as

np.fft.fft( arange(10), axi

completes to

np.fft.fft( arange(10), axis=

I don't think we have anything based on return type.

Any improvement welcomed; we also have an enhancement proposal for the completion machinery, IPEP 11 IIRC.

--
Matthias

Le 25 nov. 2013 à 11:24, Marmaduke Woodman a écrit :

> I think "complex ndarray" is probably pretty good. It's unlikely someone
> would write ndarray if they didn't mean a NumPy array.
>
> So, already you could complete .shape, .mean, etc. which is pretty good.
> Knowing it's complex, you complete .imag, .real, no?
>
> The existence of certain methods, or not, doesn't depend on whether it's
> complex64 or complex128 (and arguably shouldn't), so I would consider this
> a win.
>
> I realize it wouldn't be scalable if you have 100s of types with wildly
> different methods etc.
>
> On Mon, Nov 25, 2013 at 11:11 AM, Sven Brauch wrote:
> > Hi!
> > On Monday 25 November 2013 10:55:32 Marmaduke Woodman wrote:
> > > You could adapt Sphinx's parsing/building classes to get "direct"
> > > access to the docstring info?
> > Hmm, yes, for the argument count and names that would work. But look at
> > the types, e.g. in [1]: "out: complex ndarray" is not really
> > machine-readable. I would need the actual type it returns, optimally the
> > exact type (which module it is from etc.) but at least the class name.
> >
> > For the arguments the types are not that important, but having the return
> > types is really essential information (otherwise, if the user writes a =
> > numpy.fft.fft(...) and then requests code-completion for a, I can do
> > nothing).
> >
> > Best regards,
> > Sven
> >
> > ______________
> > [1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft.html#numpy.fft.fft
> >
> > _______________________________________________
> > SciPy-Dev mailing list
> > SciPy-Dev at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Mon Nov 25 05:44:13 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Nov 2013 10:44:13 +0000
Subject: Re: [SciPy-Dev] Plotly: Python Graphing and Analytics feedback and "NumPy for Matlab Users" wiki
Message-ID:

On Mon, Nov 25, 2013 at 8:35 AM, Matt Sundquist wrote:
> Hi SciPy developers,
>
> My name is Matt, and I'm part of Plot.ly, a graphing and analytics project.
> Per a helpful suggestion from Pauli Virtanen, I thought I'd write to see if
> it might be possible to list Plotly on this page (NumPy for Matlab Users)
> as a "MATLAB packages/tools" entry for graphing and then as a "Link."

The Wiki is in read-only state at this time due to spam and the transition to the static site (open to contributions via Github pull requests). I think we would prefer that any legacy wiki pages be converted over to the static site in preference to updating them in situ.

As for this particular addition, I don't think it's *quite* in scope for that page and that section in particular. The main goal of that section is to show numpy-ecosystem equivalents to long-established MATLAB capabilities to help people transition from MATLAB to numpy. It's not so much for tools that happen to work with both MATLAB and numpy backends. If MATLAB people are using Plotly already, they are almost certainly aware that Plotly supports numpy through your own documentation. Does that make sense? I want you guys to get all the publicity you can bear because your product is really cool, but I don't think that particular addition is in line with the goal of that page.

Plotly is already listed in the Plotting section of the "Topical Software" page on the new site, which is where I would expect to find it.

http://www.scipy.org/topical-software.html#tools-with-a-mostly-2-d-focus

--
Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at hilboll.de Mon Nov 25 08:13:05 2013
From: lists at hilboll.de (Andreas Hilboll)
Date: Mon, 25 Nov 2013 14:13:05 +0100
Subject: [SciPy-Dev] interpolation of data given on a regular grid
Message-ID: <52934CE1.5060406@hilboll.de>

Hi,

as I understand it, there's currently no interpolation method suitable for large multidimensional data given on a regular grid. The current ND interpolators (Nearest and Linear) are tailored towards unstructured sample points and require all sample points to be passed explicitly, which is not very memory efficient for 20x20x20x20x20x20x20x20 datasets ... As a second problem, the sample data might be too large to fit into memory.

I'm thinking of an out-of-core ND interpolator object, which would take the 8 (20,) sample point coordinate arrays and one reference to the large sample data array. This interpolator would then, for each requested interpolation point coordinate, perform the necessary interpolation, be it nearest neighbor, linear, or (at a later stage) even splines. Using Cython, this should be fast enough.

Any thoughts on this? Did I overlook something, and this is already somewhere in scipy? Would there be interest to have such a gridded interpolator class?

Cheers,
Andreas.
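A minimal sketch of the point-wise scheme Andreas describes (hypothetical helper; `values` can be a memory-mapped array, e.g. from np.load(..., mmap_mode='r'), so the full dataset never has to be resident in memory):

    import itertools
    import numpy as np

    def interp_regular_linear(grids, values, point):
        """Multilinear interpolation of one point on a regular grid.

        grids  : sequence of sorted 1-D coordinate arrays, one per axis
        values : ndarray (possibly memory-mapped) of matching shape
        point  : sequence of coordinates, one per axis
        """
        lo, w = [], []
        for g, p in zip(grids, point):
            # index of the left neighbour, clipped to the grid interior
            i = min(max(np.searchsorted(g, p) - 1, 0), len(g) - 2)
            lo.append(i)
            w.append((p - g[i]) / (g[i + 1] - g[i]))
        # weighted sum over the 2**ndim corners of the enclosing cell
        out = 0.0
        for corner in itertools.product((0, 1), repeat=len(grids)):
            weight = 1.0
            for c, wi in zip(corner, w):
                weight *= wi if c else 1.0 - wi
            out += weight * values[tuple(i + c for i, c in zip(lo, corner))]
        return out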
From svenbrauch at googlemail.com Mon Nov 25 09:16:01 2013
From: svenbrauch at googlemail.com (Sven Brauch)
Date: Mon, 25 Nov 2013 15:16:01 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
In-Reply-To: <2266FEF5-BAAC-46BE-93A4-F300A3E26959@gmail.com>
References: <1927747.BK2LNiVUDe@localhost.localdomain> <1717647.TYGHmCova0@localhost.localdomain> <2266FEF5-BAAC-46BE-93A4-F300A3E26959@gmail.com>
Message-ID:

Hi!

2013/11/25 Matthias BUSSONNIER:
> It seems to work partially with numpy, as
That sounds cool, I'll check that out. Maybe I can also ask the ipython people.

> I think "complex ndarray" is probably pretty good. It's unlikely someone
> would write ndarray if they didn't mean a NumPy array.
It's good for a human. How would a script even know it's an ndarray? Of course this case is easy to cover, but there's lots and lots of ways the return type is specified in the docs; often something like "array_like", which is not an actual class, or "this or that if $condition is met", or various other forms. I tried to write a script which parses that, but it failed quite miserably.

In many cases, the return type is also clear from the docstring, but not mentioned in the proper format, e.g. here:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html#numpy.fromfile
There you have zero chance to figure it out with a script imo, although it would be quite important.

To bring up another idea, might it be possible to observe a test suite and deduce types from there? I could possibly hook into cpython and catch function calls, and inspect the return and argument types from there. But I don't know enough about CPython's internals to judge whether that would give any useful results or not.

Greetings,
Sven

From bussonniermatthias at gmail.com Mon Nov 25 09:44:46 2013
From: bussonniermatthias at gmail.com (Matthias BUSSONNIER)
Date: Mon, 25 Nov 2013 15:44:46 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
Message-ID:

Le 25 nov. 2013 à 15:16, Sven Brauch a écrit :
> Hi!
>
> 2013/11/25 Matthias BUSSONNIER:
>> It seems to work partially with numpy, as
> That sounds cool, I'll check that out. Maybe I can also ask the ipython people.

Sure, we'll be happy to help. Completer and introspection is just not a place where I feel comfortable. You can either pitch in on IPython-dev or directly on github.

This:
https://github.com/ipython/ipython/wiki/IPEP-11%3A-Tab-Completion-System-Refactor
might also be relevant, both to understand our current completion system and to see what might be coming up.

People from Spyder also use IPython under the hood for completion IIRC; LightTable can also hook into IPython, I guess.

I think one way this can be improved is using Python 3 annotations to hint the type of the parameters and/or return value. I think Jedi is able to parse this kind of information.

Of course, one solution could also be to uniformise numpy docstrings.

--
Matthias

>> I think "complex ndarray" is probably pretty good. It's unlikely someone
>> would write ndarray if they didn't mean a NumPy array.
> It's good for a human. How would a script even know it's an ndarray?
> Of course this case is easy to cover, but there's lots and lots of
> ways the return type is specified in the docs; often something like
> "array_like", which is not an actual class, or "this or that if
> $condition is met", or various other forms. I tried to write a script
> which parses that, but it failed quite miserably.
> In many cases, the return type is also clear from the docstring, but not
> mentioned in the proper format, e.g. here:
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html#numpy.fromfile
> There you have zero chance to figure it out with a script imo,
> although it would be quite important.
>
> To bring up another idea, might it be possible to observe a test suite
> and deduce types from there? I could possibly hook into cpython and
> catch function calls, and inspect the return and argument types from
> there. But I don't know enough about CPython's internals to judge
> whether that would give any useful results or not.
> > Greetings, > Sven > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenbrauch at googlemail.com Mon Nov 25 11:47:29 2013 From: svenbrauch at googlemail.com (Sven Brauch) Date: Mon, 25 Nov 2013 17:47:29 +0100 Subject: [SciPy-Dev] Supporting numpy/scipy in an IDE In-Reply-To: References: <1927747.BK2LNiVUDe@localhost.localdomain> <1717647.TYGHmCova0@localhost.localdomain> <2266FEF5-BAAC-46BE-93A4-F300A3E26959@gmail.com> Message-ID: Hi! > You can either pitch in on IPython-dev or directly on github. Thanks, I'll look around. I'll have to investigate how dynamic the nature of the IPython completion is, since what I do relies purely on static analysis and pre-computed information. > People from Spyder also use IPython under the hood for completion IIRC, > LightTable can also hook > into IPython I guess. I still see quite a problem here: IPython can use runtime inspection. Thus, you don't really need to care what a function returns; you can just wait until it was executed, and then inspect the object to provide completion. I cannot do that, I need to compute this information before the code is actually executed. > I think one way this can be improved is using Python3 annotation to hint the > type of the parameters > and or return value. Yes, if people use this feature for providing type hints, that will definitely be very helpful. It doesn't really help the numpy case though, or does it? Most of its functions are C functions which cannot be annotated. > Of course one solution could also be to uniformise numpy docstrings. That would of course be the best solution: put return types into the docstrings in a somewhat machine-readable format. In my opinion that would also improve the usefulness of the generated documentation for humans, at least in some cases (since it will be easier to spot what a particular function returns). It would be awesome if that could be something like an official recommendation for writing numpy docstrings; any chance of doing something like that? I would be happy to help fix up the existing docstrings, I think if a few people would participate it would be quite doable. Greetings! Sven From jjhelmus at gmail.com Mon Nov 25 12:12:10 2013 From: jjhelmus at gmail.com (Jonathan Helmus) Date: Mon, 25 Nov 2013 11:12:10 -0600 Subject: [SciPy-Dev] Supporting numpy/scipy in an IDE In-Reply-To: References: <1927747.BK2LNiVUDe@localhost.localdomain> <1717647.TYGHmCova0@localhost.localdomain> <2266FEF5-BAAC-46BE-93A4-F300A3E26959@gmail.com> Message-ID: <529384EA.5060002@gmail.com> On 11/25/2013 10:47 AM, Sven Brauch wrote: > Hi! > >> You can either pitch in on IPython-dev or directly on github. > Thanks, I'll look around. I'll have to investigate how dynamic the > nature of the IPython completion is, since what I do relies purely on > static analysis and pre-computed information. > >> People from Spyder also use IPython under the hood for completion IIRC, >> LightTable can also hook >> into IPython I guess. > I still see quite a problem here: IPython can use runtime inspection. > Thus, you don't really need to care what a function returns; you can > just wait until it was executed, and then inspect the object to > provide completion. I cannot do that, I need to compute this > information before the code is actually executed. 
>>> I think one way this can be improved is using Python 3 annotations to
>>> hint the type of the parameters and/or return value.
> Yes, if people use this feature for providing type hints, that will
> definitely be very helpful.
> It doesn't really help the numpy case though, or does it? Most of its
> functions are C functions which cannot be annotated.
>
>>> Of course, one solution could also be to uniformise numpy docstrings.
> That would of course be the best solution: put return types into the
> docstrings in a somewhat machine-readable format. In my opinion that
> would also improve the usefulness of the generated documentation for
> humans, at least in some cases (since it will be easier to spot what a
> particular function returns).
>
> It would be awesome if that could be something like an official
> recommendation for writing numpy docstrings; any chance of doing
> something like that?
> I would be happy to help fix up the existing docstrings, I think if a
> few people would participate it would be quite doable.

There is this document: https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt, which specifies the standard for the NumPy/SciPy docstrings. That document recommends that the parameter types should be "as precise as possible", but it does not have any recommendation on how exactly to specify these, nor does it define a machine-readable format. There was some talk a while ago on the scipy-dev list [1] about how to include type, shape, and similar information in the docstrings, but I don't recall anything being decided.

- Jonathan Helmus

[1] http://article.gmane.org/gmane.comp.python.scientific.devel/17230

> Greetings!
> Sven
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From robert.kern at gmail.com Mon Nov 25 12:19:22 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Nov 2013 17:19:22 +0000
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
Message-ID:

On Mon, Nov 25, 2013 at 4:47 PM, Sven Brauch wrote:
>> Of course, one solution could also be to uniformise numpy docstrings.
> That would of course be the best solution: put return types into the
> docstrings in a somewhat machine-readable format. In my opinion that
> would also improve the usefulness of the generated documentation for
> humans, at least in some cases (since it will be easier to spot what a
> particular function returns).

I don't think it would be an improvement for human readers. The current format serves that purpose fine (unsurprisingly, as it was designed for humans rather than static analyzers). Restricting that field to strict, machine-understandable types will make them less human-readable. If you want to generate machine-readable data, please keep it out of human-readable documentation.

I don't think it solves your problem, either. For example, ufuncs return different types depending on the input. np.sin(1.0) returns an np.float64 while np.sin([1.0]) returns an np.ndarray. We document that for humans by just saying that both inputs and outputs are `array_like` and refer them to the general ufunc documentation that describes it all. As far as your static analyzer is concerned, they all just return an object. Python is not a statically typed language, and we have written numpy to take full advantage of that.
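Sven's earlier proposal (observing a test suite and recording what functions actually return) can be prototyped without touching CPython internals, for instance by wrapping the callables of interest before the tests run. A rough, hypothetical sketch; the wrapped name and the `observed` store are purely illustrative:

    import functools
    from collections import defaultdict

    observed = defaultdict(set)   # qualified name -> set of return type names

    def record(func, qualname):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            observed[qualname].add(type(result).__name__)
            return result
        return wrapper

    import numpy.fft
    numpy.fft.fft = record(numpy.fft.fft, 'numpy.fft.fft')

    # ... run the test suite here, then persist `observed` as the database ...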
Your idea of running the test suite with some kind of tracing and generating a database is a good one, and probably the most accurate of all the possible methods of getting this kind of information.

--
Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From svenbrauch at googlemail.com Mon Nov 25 12:38:00 2013
From: svenbrauch at googlemail.com (Sven Brauch)
Date: Mon, 25 Nov 2013 18:38:00 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
Message-ID:

Hi!

> https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt,
> which specifies the standard for the NumPy/SciPy docstrings.
Interesting, thanks for the link. That will certainly be useful for parsing the docstrings, if I decide to try that again.

> There was some talk a while ago on the scipy-dev list [1] about how to
> include type, shape, and similar information in the docstrings, but I
> don't recall anything being decided.
That seems to have a different aim from mine; it aims to make the information more precise. I would be happy if I had any readable information at all. ;)

2013/11/25 Robert Kern:
> Restricting that field to strict, machine-understandable
> types will make them less human-readable.
I agree, but it might be possible to find a form which suits both, no? For example, you could define a format for specifying multiple possible return types (e.g. "separate types by 'or'"), and a character which marks the beginning of further notes (e.g. ";"). Then you could write your example as "Returns: float64 or ndarray; the former if the input is a float, the latter if the input is a range" or so.

This is your decision of course, and not the most important point. It would already help if all docstrings consistently used the "Returns:" field at all. I guess the rest of the various notations could eventually be figured out by a sufficiently sophisticated script (while the "the return type is mentioned in the prose text" case is sort of impossible).

> As far as your static analyzer is concerned, they all just
> return an object. Python is not a statically typed language,
> and we have written numpy to take full advantage of that.
If I assume everything is an object then I don't need a static analyzer. ;) It is obvious that the analyzer can never treat every obscure situation correctly, but that doesn't mean it can't be useful at all. It relies on guessing, sure; still, it can get accurate information in a lot of cases (see e.g. [1]).

> Your idea of running the test suite with some kind of
> tracing and generating a database is a good one, and
> probably the most accurate of all the possible methods
> of getting this kind of information.
It is great to hear that somebody else considers this possible; I'll investigate how far I get with this approach, and will report back with the success I had (if any).
Best regards,
Sven

________
[1] http://1.bp.blogspot.com/-sKbyd7w50R0/TmpLCqJy1tI/AAAAAAAAADk/cq4p1n9BlJ8/s1600/alpha2_1.png

From robert.kern at gmail.com Mon Nov 25 12:42:45 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Nov 2013 17:42:45 +0000
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
Message-ID:

On Mon, Nov 25, 2013 at 5:38 PM, Sven Brauch wrote:
> 2013/11/25 Robert Kern:
> > Restricting that field to strict, machine-understandable
> > types will make them less human-readable.
> I agree, but it might be possible to find a form which suits both, no?
> For example, you could define a format for specifying multiple
> possible return types (e.g. "separate types by 'or'"), and a character
> which marks the beginning of further notes (e.g. ";"). Then you could
> write your example as "Returns: float64 or ndarray; the former if the
> input is a float, the latter if the input is a range" or so.
> This is your decision of course, and not the most important point. It
> would already help if all docstrings consistently used the
> "Returns:" field at all.

They should. Did you look at the documentation?

> I guess the rest of the various notations
> could eventually be figured out by a sufficiently sophisticated script
> (while the "the return type is mentioned in the prose text" case is
> sort of impossible).

Then go forth and write that sufficiently sophisticated script.

--
Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From svenbrauch at googlemail.com Mon Nov 25 12:45:52 2013
From: svenbrauch at googlemail.com (Sven Brauch)
Date: Mon, 25 Nov 2013 18:45:52 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
Message-ID:

2013/11/25 Robert Kern:
> They should. Did you look at the documentation?
They should, but they don't ;) If you click through the docs there's quite a few examples which only mention it somewhere in the text, e.g. http://docs.scipy.org/doc/numpy/reference/generated/numpy.frombuffer.html.

> Then go forth and write that sufficiently sophisticated script.
I will do that. Thanks for the comments; at least I now have two ideas to try next.

Best regards,
Sven

From robert.kern at gmail.com Mon Nov 25 13:00:36 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Nov 2013 18:00:36 +0000
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
Message-ID:

On Mon, Nov 25, 2013 at 5:45 PM, Sven Brauch wrote:
> 2013/11/25 Robert Kern:
> > They should. Did you look at the documentation?
> They should, but they don't ;)
> If you click through the docs there's quite a few examples which only
> mention it somewhere in the text, e.g.
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.frombuffer.html.

I'm sorry, but you were presenting it as if it weren't policy to include a Returns section, whereas the HOWTO_DOCUMENT.rst document you were pointed to shows otherwise.
There are errors and omissions in our documentation, of course, and these need to be fixed, but there is already agreement that we have this information (in human-understandable form). Of course, these errors and omissions only demonstrate how inaccurate this approach would be. It's easy to spot an omission of a Returns section. It will be much harder to spot a semantic error. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.verelst at gmail.com Mon Nov 25 14:41:05 2013 From: david.verelst at gmail.com (David Verelst) Date: Mon, 25 Nov 2013 20:41:05 +0100 Subject: [SciPy-Dev] Supporting numpy/scipy in an IDE In-Reply-To: References: <1927747.BK2LNiVUDe@localhost.localdomain> <1717647.TYGHmCova0@localhost.localdomain> <2266FEF5-BAAC-46BE-93A4-F300A3E26959@gmail.com> <529384EA.5060002@gmail.com> Message-ID: Have you looked at Jedi [1] and/or Rope [2]? Rope has been used for quite a while in the Spyder [3] editor, and future versions will also support Jedi [4] [1] http://jedi.jedidjah.ch/en/latest/ [2] http://rope.sourceforge.net/ [3] http://code.google.com/p/spyderlib/ [4] http://code.google.com/p/spyderlib/issues/detail?id=1213 On 25 November 2013 19:00, Robert Kern wrote: > On Mon, Nov 25, 2013 at 5:45 PM, Sven Brauch > wrote: > > > > 2013/11/25 Robert Kern : > > > They should. Did you look at the documentation? > > They should, but they don't ;) > > If you click through the docs there's quite a few examples which only > > mention it somewhere in the text, e.g. > > > http://docs.scipy.org/doc/numpy/reference/generated/numpy.frombuffer.html. > > I'm sorry, but you were presenting it as if it weren't policy to include a > Returns section, whereas the HOWTO_DOCUMENT.rst document you were pointed > to shows otherwise. There are errors and omissions in our documentation, of > course, and these need to be fixed, but there is already agreement that we > have this information (in human-understandable form). Of course, these > errors and omissions only demonstrate how inaccurate this approach would > be. It's easy to spot an omission of a Returns section. It will be much > harder to spot a semantic error. > > -- > Robert Kern > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Nov 25 15:33:22 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 25 Nov 2013 21:33:22 +0100 Subject: [SciPy-Dev] any chance of enabling sort option in scipy.linalg.qz? In-Reply-To: References: Message-ID: On Fri, Nov 22, 2013 at 6:37 PM, Ricardo Mayer wrote: > Dear all, > > As of release 0.13 of scipy the sorting option for the generalized > schur decomposition (aka QZ) it's been disabled because some failure > wrapping the corresponding function from LAPACK. > Just a note for whoever reads this later that the discussion continues on https://github.com/scipy/scipy/pull/303. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From svenbrauch at googlemail.com Mon Nov 25 18:21:24 2013 From: svenbrauch at googlemail.com (Sven Brauch) Date: Tue, 26 Nov 2013 00:21:24 +0100 Subject: [SciPy-Dev] Supporting numpy/scipy in an IDE In-Reply-To: References: <1927747.BK2LNiVUDe@localhost.localdomain> Message-ID: <197918442.PQjigj6Lt2@localhost.localdomain> Hi, On Monday 25 November 2013 18:00:36 Robert Kern wrote: > I'm sorry, but you were presenting it as if it weren't policy to include a > Returns section, whereas the HOWTO_DOCUMENT.rst document you were pointed > to shows otherwise. I'm sorry if what I wrote sounded that way, that was not my intention at all. I merely wanted to show that parsing the docs currently is not a feasible solution, at least not without an effort to avoid such problems like the one I linked to. I'm aware of Jedi and Rope, both are cool projects, but they are a bit different from what I do. Both have a very specific aim (completion and refactoring), and would be difficult to integrate seamlessly into our existing IDE framework (which is KDevelop, btw). This has various reasons which are probably not very interesting to discuss here (if you're interested, I'll be happy to elaborate in private). Jedi solves the problem of providing completion for libs by simply importing the libs and doing inspection of them. I dislike this approach. It is potentially exploitable and slow; also it might cause unintended side-effects if the packages you import do initializations (sound libs might claim the audio device, the Raspberry Pi GPIO lib reconfigures all the GPIOs, etc.) Greetings, Sven From pmhobson at gmail.com Mon Nov 25 20:59:11 2013 From: pmhobson at gmail.com (Paul Hobson) Date: Mon, 25 Nov 2013 17:59:11 -0800 Subject: [SciPy-Dev] Supporting numpy/scipy in an IDE In-Reply-To: <197918442.PQjigj6Lt2@localhost.localdomain> References: <1927747.BK2LNiVUDe@localhost.localdomain> <197918442.PQjigj6Lt2@localhost.localdomain> Message-ID: You should look into how Python Tools For Visual Studio parses all of the libraries in a Python installation (they have support for numpy, scipy, etc...) http://pytools.codeplex.com/SourceControl/latest -paul On Mon, Nov 25, 2013 at 3:21 PM, Sven Brauch wrote: > Hi, > > On Monday 25 November 2013 18:00:36 Robert Kern wrote: > > I'm sorry, but you were presenting it as if it weren't policy to include > a > > Returns section, whereas the HOWTO_DOCUMENT.rst document you were pointed > > to shows otherwise. > I'm sorry if what I wrote sounded that way, that was not my intention at > all. > I merely wanted to show that parsing the docs currently is not a feasible > solution, at least not without an effort to avoid such problems like the > one I > linked to. > > I'm aware of Jedi and Rope, both are cool projects, but they are a bit > different from what I do. Both have a very specific aim (completion and > refactoring), and would be difficult to integrate seamlessly into our > existing > IDE framework (which is KDevelop, btw). This has various reasons which are > probably not very interesting to discuss here (if you're interested, I'll > be > happy to elaborate in private). > > Jedi solves the problem of providing completion for libs by simply > importing > the libs and doing inspection of them. I dislike this approach. It is > potentially exploitable and slow; also it might cause unintended > side-effects > if the packages you import do initializations (sound libs might claim the > audio device, the Raspberry Pi GPIO lib reconfigures all the GPIOs, etc.) 
> Greetings,
> Sven
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Thomas.Haslwanter at fh-linz.at Tue Nov 26 03:01:20 2013
From: Thomas.Haslwanter at fh-linz.at (Haslwanter Thomas)
Date: Tue, 26 Nov 2013 09:01:20 +0100
Subject: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter)
Message-ID: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at>

I have compiled a package that should be useful for many researchers, especially in the field of movement analysis. It can be installed with

pip install thLib

and documentation is provided under

http://work.thaslwanter.at/thLib/html/

I would be happy to contribute any of the functions in this package to SciPy. Especially, I believe that a Savitzky-Golay filter is overdue for Scipy.

Please provide feedback on which of the functions may be relevant for SciPy, and which functions are faulty or missing.

Thanks,
thomas

---
Prof. (FH) PD Dr. Thomas Haslwanter
School of Applied Health and Social Sciences
University of Applied Sciences Upper Austria
FH OÖ Studienbetriebs GmbH
Garnisonstraße 21
4020 Linz/Austria
Tel.: +43 (0)5 0804 -52170
Fax: +43 (0)5 0804 -52171
E-Mail: Thomas.Haslwanter at fh-linz.at
Web: me-research.fh-linz.at or work.thaslwanter.at
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From svenbrauch at googlemail.com Tue Nov 26 03:42:35 2013
From: svenbrauch at googlemail.com (Sven Brauch)
Date: Tue, 26 Nov 2013 09:42:35 +0100
Subject: Re: [SciPy-Dev] Supporting numpy/scipy in an IDE
Message-ID: <1608315.rx2QzyBlBF@localhost.localdomain>

On Monday 25 November 2013 17:59:11 Paul Hobson wrote:
> You should look into how Python Tools For Visual Studio parses all of the
> libraries in a Python installation (they have support for numpy, scipy,
> etc...)
> http://pytools.codeplex.com/SourceControl/latest
I talked to them; if I recall correctly, they do introspection too. But thanks :)

Sven

From njs at pobox.com Tue Nov 26 10:15:00 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 26 Nov 2013 07:15:00 -0800
Subject: Re: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter)
Message-ID:

On 26 Nov 2013 00:03, "Haslwanter Thomas" wrote:
>
> I have compiled a package that should be useful for many researchers,
> especially in the field of movement analysis. It can be installed with
>
> pip install thLib
>
> and documentation is provided under
>
> http://work.thaslwanter.at/thLib/html/

FYI, it's generally considered a good idea to use the same (or at least two-way compatible) licenses for documentation and code, since otherwise it's a violation of copyright to do things like move code between a .py file and an example in the docs, or to move text between a source code comment and the docs.

Also, your current doc license is not open source (the "non-commercial" restriction violates the DFSG and OSD).

-n
-------------- next part --------------
An HTML attachment was scrubbed...
From thomas.haslwanter at fh-linz.at Tue Nov 26 12:07:26 2013
From: thomas.haslwanter at fh-linz.at (Thomas Haslwanter)
Date: Tue, 26 Nov 2013 17:07:26 +0000 (UTC)
Subject: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter)
References: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at>
Message-ID:

What would you suggest: GPL? or BSD?
-th

From robert.kern at gmail.com Tue Nov 26 12:17:03 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 26 Nov 2013 17:17:03 +0000
Subject: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter)
In-Reply-To:
References: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at>
Message-ID:

On Tue, Nov 26, 2013 at 5:07 PM, Thomas Haslwanter wrote:
>
> What would you suggest: GPL? or BSD?

If you want to submit any of this code for inclusion in SciPy, it must
be BSD. You can use whichever you like for the thLib package outside of
SciPy, but we are generally predisposed to BSD software in this corner
of the open source universe.

--
Robert Kern

From njs at pobox.com Tue Nov 26 13:10:48 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 26 Nov 2013 10:10:48 -0800
Subject: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter)
In-Reply-To:
References: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at>
Message-ID:

On Tue, Nov 26, 2013 at 9:07 AM, Thomas Haslwanter wrote:
> What would you suggest: GPL? or BSD?

Up to your preferences. But either is much, much better than your
current non-open-source license (which, I didn't realize before, is
actually applied to the code as well)! (If using BSD, make sure to use
the 2- or 3-clause version.)

--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org

From warren.weckesser at gmail.com Tue Nov 26 15:26:27 2013
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Tue, 26 Nov 2013 15:26:27 -0500
Subject: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter)
In-Reply-To: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at>
References: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at>
Message-ID:

On Tue, Nov 26, 2013 at 3:01 AM, Haslwanter Thomas wrote:
> [...]
> I would be happy to contribute any of the functions in this package to
> SciPy. In particular, I believe that a Savitzky-Golay filter is overdue
> for SciPy.

Hey Thomas,

It took a while, but eventually a Savitzky-Golay filter made it into
scipy: https://github.com/scipy/scipy/pull/470

Warren
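For readers arriving via the subject line, a minimal usage sketch of
what that pull request added (scipy.signal.savgol_filter, first shipped
in scipy 0.14; the noisy signal here is made up for illustration):

    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.RandomState(0)
    x = np.linspace(0, 4 * np.pi, 200)
    y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

    # Fit a cubic polynomial over a sliding 21-point window; the window
    # length must be odd and greater than polyorder.
    y_smooth = savgol_filter(y, window_length=21, polyorder=3)

    # The same local polynomial fit also yields smoothed derivatives.
    dy_dx = savgol_filter(y, 21, 3, deriv=1, delta=x[1] - x[0])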
From ralf.gommers at gmail.com Tue Nov 26 16:53:16 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 26 Nov 2013 22:53:16 +0100
Subject: [SciPy-Dev] (Possible) new optimization routine (2) - scipy.optimize
In-Reply-To:
References:
Message-ID:

On Mon, Nov 18, 2013 at 9:59 PM, Andrea Gavana wrote:

> Hi All,
>
> since I last posted about (possible) new optimization routines in
> scipy.optimize, I have been working here and there on making the code
> for AMPGO (Adaptive Memory Programming for Global Optimization) a bit
> more robust and on expanding the benchmark test suite. Since I recently
> saw a few messages about optimization methods flying around on the
> mailing list, I thought I might share some more findings and possibly
> (finally) start integrating AMPGO into scipy.
>
> First things first: I would love to see AMPGO in scipy, but there are
> still a couple of issues to be solved:
>
> 1. Some of the local optimization methods AMPGO can use (like L-BFGS-B,
> TNC and so on) can take advantage of gradient information, and
> sometimes people do actually have access to the gradient of the
> objective function. The problem is, given the definition of the
> Tunnelling function used by AMPGO:
>
> T(x) = (f(x) - aspiration)**2 / prod(dist(s, x) for s in tabu_list)
>
> where "dist" is the Euclidean distance between the current point "x"
> and one of the previous local optima "s" ("tabu_list" is a list
> containing 2 or more of these local optima)
>
> (or see page 4 at
> http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf
> for a clearer formula for the Tunnelling function),
>
> I have absolutely no idea how to get the analytical expression for the
> gradient of the Tunnelling function, given the gradient of the
> objective function. I'm sure it's very much doable but my calculus
> skills are way too inadequate.

That's certainly not trivial. With pen and paper I don't get there. This
is not a show-stopper though, right? Your current benchmarks work just
fine....
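The gradient does come out in closed form from the product and chain
rules. Writing D(x) = prod(||x - s|| for s in tabu_list) and
r = f(x) - aspiration, one gets

    grad T(x) = 2*r*grad_f(x)/D(x) - (r**2/D(x)) * sum((x - s)/||x - s||**2)

with the sum over the tabu points. A sketch under those definitions
(tunnel_grad is a hypothetical helper, not part of the AMPGO code; f and
grad_f stand for the user's objective and its gradient):

    import numpy as np

    def tunnel_grad(x, f, grad_f, aspiration, tabu_list):
        # T(x) = (f(x) - aspiration)**2 / prod(||x - s||), s in tabu_list
        x = np.asarray(x, dtype=float)
        diffs = [x - np.asarray(s, dtype=float) for s in tabu_list]
        norms = [np.linalg.norm(d) for d in diffs]
        D = np.prod(norms)
        r = f(x) - aspiration
        # d/dx prod(||x - s||) = D * sum((x - s) / ||x - s||**2)
        dD = D * sum(d / n**2 for d, n in zip(diffs, norms))
        return 2.0 * r * grad_f(x) / D - (r**2 / D**2) * dD

Note that the expression diverges as x approaches a tabu point
(D -> 0), so a real implementation would have to guard that case.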
> 2. As the current code for AMPGO supports local solvers from
> scipy.optimize and OpenOpt, it turns out to be a pain to generalize its
> interface in order for AMPGO to be compatible with the general
> minimize() API.

What's the exact problem? The whole idea of minimize() was to make it
easier to support more solvers by providing a uniform signature.

> Not to mention the general PR process.

It's not all that hard. Seeing what you've done with your benchmark site
so far, a PR should be trivial in comparison. If you need some input on
what goes where, just point us to your Github repo.

> In any case, I'll try to gather enough willpower to get AMPGO up to
> scipy standards and I'll contribute the benchmarks I created as well.

Sounds good!

> So, mentioning the benchmarks and the results, I managed to expand the
> multi-dimensional test suite from 85 to 184 test functions. The test
> suite is now one of the most complete and comprehensive in the Open
> Source world, and it's not in Fortran 77 (luckily enough).
>
> The test suite currently contains:
>
> - 18 one-dimensional test functions with multiple local/global minima;
> - 184 multivariate problems (where the number of independent variables
>   ranges from 2 to 17), again with multiple local/global minima.
>
> The main page describing the rules, algorithms and motivation is here:
>
> http://infinity77.net/global_optimization/index.html
>
> A fairly in-depth summary page on AMPGO and sensitivities to its input
> parameters (local solver, tunnelling strategy, etc.):
>
> http://infinity77.net/global_optimization/ampgo.html
>
> Algorithm comparisons:
>
> http://infinity77.net/global_optimization/multidimensional.html
> http://infinity77.net/global_optimization/univariate.html
>
> Test functions and how they rank in a "difficult-to-solve" context:
>
> http://infinity77.net/global_optimization/test_functions.html
>
> The overall conclusion is that AMPGO is superior to all the other
> algorithms I have tried, leaving the second-best (pyasa) behind by a
> full 20% in the number of solved problems. It's also the fastest
> (function-evaluation-wise), as it is able to outperform all the other
> algorithms' best results within 200 function evaluations or fewer
> (even though the other algorithms' limit is 2,000).

Would you consider Firefly also interesting for scipy, given its higher
success rate on univariate problems?

Cheers,
Ralf

> However, to be fair, it is an algorithm designed for low-dimensional
> optimization problems (i.e., 1-20 variables).
>
> If anyone has any suggestion on how to implement my point (1) above,
> please feel free to share your thoughts. I have no clue whatsoever.
>
> Any comments, or requests to add additional benchmarks, please give me
> a shout.
>
> Enjoy :-) .
>
> Andrea.
>
> "Imagination Is The Only Weapon In The War Against Reality."
> http://www.infinity77.net
>
> # ------------------------------------------------------------- #
> def ask_mailing_list_support(email):
>
>     if mention_platform_and_version() and include_sample_app():
>         send_message(email)
>     else:
>         install_malware()
>         erase_hard_drives()
> # ------------------------------------------------------------- #

From pierre.haessig at crans.org Wed Nov 27 13:43:09 2013
From: pierre.haessig at crans.org (Pierre Haessig)
Date: Wed, 27 Nov 2013 19:43:09 +0100
Subject: [SciPy-Dev] broken link to chm documentation
Message-ID: <52963D3D.6030401@crans.org>

Hi,

I don't know how widely the CHM documentation is used, but the link to
the CHM version of the Reference Guide on http://docs.scipy.org/doc/ is
broken. It points to http://docs.scipy.org/doc/scipy/scipy-chm.zip,
which is "Not Found" (there is also a CHM version for v0.8, which does
work).

best,
Pierre
From josef.pktd at gmail.com Wed Nov 27 14:54:44 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 27 Nov 2013 14:54:44 -0500
Subject: [SciPy-Dev] broken link to chm documentation
In-Reply-To: <52963D3D.6030401@crans.org>
References: <52963D3D.6030401@crans.org>
Message-ID:

On Wed, Nov 27, 2013 at 1:43 PM, Pierre Haessig wrote:
> I don't know how widely the CHM documentation is used, but the link to
> the CHM version of the Reference Guide on http://docs.scipy.org/doc/ is
> broken.
> [...]

I was still using the v0.8 chm on the desktop and the online docs for
updates. However, it looks like the htmlhelp version builds without
errors, although with 1506 warnings. I only have scipy 0.12.rc1 on the
path of my working python right now.

https://sites.google.com/site/mymiscsite/Home/scipy_chm.zip

Based on a quick look it seems OK. If there is demand, then I can try to
do it properly (for this quick try, I used a master checkout of the docs
directory, but the old installed scipy version).

Josef

From thomas.haslwanter at fh-linz.at Wed Nov 27 16:30:01 2013
From: thomas.haslwanter at fh-linz.at (Thomas Haslwanter)
Date: Wed, 27 Nov 2013 21:30:01 +0000 (UTC)
Subject: [SciPy-Dev] thLib: package for 3D kinematics and data analysis (including Savitzky-Golay filter)
References: <1CFD8CBC30E0454B9DFAADEB36AD1739B050568340@LNZEXCHANGE001.linz.fhooe.at>
Message-ID:

I have changed the license to the BSD 2-clause license - thanks for the
information and suggestions.
Any thoughts about the content?
-th

From imakaev at mit.edu Thu Nov 28 16:03:46 2013
From: imakaev at mit.edu (Maxim Imakaev)
Date: Thu, 28 Nov 2013 21:03:46 +0000 (UTC)
Subject: [SciPy-Dev] 1.0 roadmap: weave
References:
Message-ID:

Hi,

Pauli Virtanen iki.fi> writes:
>
> Arnd Baecker web.de> writes:
> [clip]
> > Finally, to answer Robert's question whether I could volunteer to
> > maintain weave as a separate package: not sure - maybe with some
> > helping hands (currently I have no clue what this would require) it
> > might be possible. The important aspect is the long-term perspective:
> > how many people would be interested in this, and maybe even in a
> > Python 3 port, and would actively use weave also for new code? Or is
> > the general impression that the more modern tools (cython, ...)
> > should be used?
>
> Also, let's say that if someone with a personal interest in keeping it
> working addresses the issues within the next few years, then the
> pressure of splitting it out decreases quite a lot.
>
> In the big picture, weave is a relatively "mature" code base, and
> keeping it working is probably not too big a task, apart from the Py3
> port.

I found this two-month-old thread while checking (again) for any
solutions regarding weave.inline and Python 3.x compatibility. Our lab
is another example of what Arnd was talking about: I'm a graduate
student close to the end of my Ph.D., and I built many parts of our code
using weave.inline. Now I'm looking for ways to make my code usable for
the next generation, and weave.inline is the only package which holds us
at Python 2.x.

Did someone start porting weave.inline since this thread ended on
Sep 26? What are the chances someone will do this in the near future?

Thanks,
Max
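For context, the API at stake is small but hard to replace: weave.inline
takes a C snippet plus the names of local variables, compiles an
extension module on first call, and caches it. A representative usage
sketch (Python 2 only, which is the crux of the thread):

    # Python 2 only: scipy.weave has no Python 3 port.
    from scipy import weave

    a, b = 3, 4
    code = "return_val = a * b;"          # C, with access to a and b
    print weave.inline(code, ['a', 'b'])  # compiles on first call -> 12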
From djpine at gmail.com Fri Nov 29 10:21:19 2013
From: djpine at gmail.com (David Pine)
Date: Fri, 29 Nov 2013 10:21:19 -0500
Subject: [SciPy-Dev] adding linear fitting routine
Message-ID: <6D454EB9-44AE-4A5D-9B55-3080364B9A26@gmail.com>

I have written a function called linfit for linear least squares fitting
that I am proposing be added to one of the numpy or scipy libraries.
linfit performs a full least squares chi-squared fit. Thus, it can
handle data with error estimates (aka error bars), weighting the data
accordingly. linfit provides estimates of the uncertainties of the
fitted parameters, the slope and y-intercept.

linfit allows one to optionally: (1) use no weighting, or (2) weight the
data according to the residuals, often called relative weighting (the
way it's often done in the social sciences), or (3) use an absolute
measure of the uncertainties, either for each data point or for all the
data points at once (the way it's often done in the physical sciences).
These options were included with the recent discussion on weighted least
squares fitting in mind. See scipy/scipy#448.

I am not sure where linfit best belongs in the numpy/scipy universe. The
most reasonable places would seem to be either the polynomial package (a
straight line is the simplest polynomial) or perhaps the scipy.optimize
package, along with curve_fit, which fits nonlinear functions to data.

I wrote the function because there really is nothing like it in numpy or
scipy, and it is so basic that, in my opinion, something like it should
be available. I tried to write it so that it is useful to a very wide
range of users across all branches of the social and physical sciences
as well as engineering.

I have added linfit to a cloned version of the numpy.polynomial module,
which can be found at https://github.com/numpy/numpy/pull/4080. The
standalone linfit function can be found at
https://github.com/djpine/linfit. An ipython notebook demonstrating
various ways of using linfit is available at the same site; its output
can be viewed at
http://nbviewer.ipython.org/github/djpine/linfit/blob/master/linfit.ipynb.

I have included a unit test, test_linfit.py, that, among other things,
compares the speed of linfit to numpy.polyfit, scipy.linalg.lstsq, and
scipy.stats.linregress. linfit is faster in all cases I tested,
typically by several times.

Finally, I am new to using Github and to contributing to numpy/scipy, so
I am not sure if I have submitted everything properly, but I hope this
gets the process going.

David Pine
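For readers who want the mathematics rather than the package: for a
straight line y = a + b*x with absolute uncertainties sigma_i,
minimizing chi-squared has a closed-form solution. The sketch below
implements the standard textbook formulas (Bevington, or Numerical
Recipes section 15.2); it is not David's linfit itself, and wlinfit is a
made-up name:

    import numpy as np

    def wlinfit(x, y, sigma):
        # Chi-squared fit of y = a + b*x with absolute uncertainties
        # sigma; weights are w_i = 1/sigma_i**2.
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        w = 1.0 / np.asarray(sigma, dtype=float) ** 2
        S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
        Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
        delta = S * Sxx - Sx ** 2
        a = (Sxx * Sy - Sx * Sxy) / delta   # intercept
        b = (S * Sxy - Sx * Sy) / delta     # slope
        # Parameter uncertainties follow from error propagation.
        sig_a, sig_b = np.sqrt(Sxx / delta), np.sqrt(S / delta)
        return a, b, sig_a, sig_b

Relative weighting (option 2 above) amounts to rescaling sigma by a
common factor chosen so the reduced chi-squared equals one, which
changes the reported parameter uncertainties but not the fitted line.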