From: francesco.delcitto at sauber-motorsport.com (Del Citto Francesco)
Date: Wed, 3 Sep 2014 17:17:00 +0200
Subject: [SciPy-User] RZ Factorization

Hi!

I'm new to this mailing list, so first of all I'd like to thank all the developers for the great work behind numpy, scipy and all the related tools!

Now, back to the subject of my email. I'd like to implement in Python a methodology based on an RZ factorization of an upper trapezoidal matrix, which is actually the "R" matrix of a QR decomposition of a non-square matrix. It is something I did a few years ago in Fortran, using some dedicated LAPACK functions, namely:

DGEQP3
DTZRZF
DORMRZ
DGELSY

Now, DGEQP3 is already wrapped in SciPy 0.14.0, but the other functions aren't. (Strictly speaking, DGELSY has nothing to do with the decomposition itself, as it solves an over- or under-determined system using an orthogonal factorization, but it is a function I used in my Fortran code and it is not wrapped.)

As I generally install SciPy from sources, I have modified scipy/linalg/flapack.pyf.src, adding:

subroutine <prefix>tzrzf(m,n,a,tau,work,lwork,info)
    ! tz,tau,work,info = <prefix>tzrzf(a,lwork=3*(n+1),overwrite_a=0)
    ! Compute an RZ factorization of a real M-by-N upper trapezoidal matrix A:
    !    A = T * Z = ( R 0 ) * Z,
    ! where Z is an N-by-N orthogonal matrix and R is an M-by-M upper
    ! triangular matrix.
    threadsafe
    callstatement (*f2py_func)(&m,&n,a,&m,tau,work,&lwork,&info)
    callprotoargument int*,int*,<ctype>*,int*,<ctype>*,<ctype>*,int*,int*
    integer intent(hide),depend(a) :: m = shape(a,0)
    integer intent(hide),depend(a) :: n = shape(a,1)
    <ftype> dimension(m,n),intent(in,out,copy,out=tz,aligned8) :: a
    <ftype> dimension(MIN(m,n)),intent(out) :: tau
    integer optional,intent(in),depend(n),check(lwork>=n||lwork==-1) :: lwork=3*(n+1)
    <ftype> dimension(MAX(lwork,1)),intent(out),depend(lwork) :: work
    integer intent(out) :: info
end subroutine <prefix>tzrzf

I recompiled, and I can now see DTZRZF from Python as scipy.linalg.lapack.dtzrzf, which is great. I'll do the same for the other two missing functions and expose them to my code, but in any case I'll lose compatibility with a standard SciPy installation, which is not ideal if you need to install the program somewhere else.

Is there any other way of accessing these functions from a clean SciPy installation? Are there plans to include the RZ factorization with a high-level wrapper, as done for the QR?

Regards,
Francesco

----
Francesco Del Citto
CFD Group
Sauber Motorsport AG
Wildbachstrasse 9
8340 Hinwil
Switzerland
www.sauberf1team.com
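For concreteness, a minimal sketch of how the new wrapper can be exercised once SciPy is rebuilt. The dtzrzf name and the return signature follow the .pyf above; treat both as assumptions until your own build confirms them:

import numpy as np
from scipy.linalg import lapack  # assumes the rebuilt SciPy from above

# An upper trapezoidal matrix (m <= n), e.g. the "R" of a QR factorization
# of a wide matrix.
m, n = 3, 5
a = np.triu(np.random.rand(m, n))

# Per the .pyf signature: tz, tau, work, info = dtzrzf(a, lwork=..., overwrite_a=0)
tz, tau, work, info = lapack.dtzrzf(a)
assert info == 0

# The leading m-by-m triangle of tz is R; the reflectors that define Z are
# stored in the trailing columns of tz together with tau (they would be
# applied with the still-missing dormrz).
R = np.triu(tz[:, :m])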
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 3 Sep 2014 18:02:13 +0000 (UTC)
Subject: [SciPy-User] RZ Factorization

Del Citto Francesco wrote:

> Is there any other way of accessing these functions from a clean SciPy installation?
> Are there plans to include the RZ factorization with a high-level wrapper
> as done for the QR?

I often find that I need BLAS and LAPACK functions not exposed by SciPy. And so I end up doing exactly what you have done here, except I do not modify SciPy but make my own extension module. (I prefer to use Cython instead of f2py, to better control the behavior of the wrapper, though.)

Anyhow, it raises the question of whether scipy.linalg.blas and scipy.linalg.lapack should perhaps be more complete. Today they mostly expose the BLAS and LAPACK routines that SciPy needs internally. But perhaps it is time to acknowledge that some users of SciPy also need other parts of these libraries. If we do what you have done here, and extend SciPy's f2py wrappers with missing LAPACK and BLAS subroutines, will PRs be accepted? Or is there a policy that SciPy should only expose the parts of LAPACK and BLAS which it needs internally?

Sturla

From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 03 Sep 2014 22:19:47 +0300
Subject: [SciPy-User] RZ Factorization

03.09.2014, 18:17, Del Citto Francesco wrote:
[clip]
> It is something I did a few years ago in
> Fortran, using some dedicated LAPACK functions, namely:
>
> DGEQP3
> DTZRZF
> DORMRZ
> DGELSY
>
> Now, DGEQP3 is already wrapped in SciPy 0.14.0, but the other functions aren't.
[clip]
> Are there plans to include the RZ factorization with a high-level wrapper as done for the QR?

Scipy is a community-driven project, and most often a new feature is added when someone who needs it or finds it interesting turns up and implements it.

In this case, both adding the relevant f2py wrappers and a high-level rz() function to scipy.linalg seem quite justified. In general, I think there's no reason not to wrap the whole of LAPACK this way, even if there are no corresponding high-level functions.

Scipy has a workflow for code contributions: https://github.com/scipy/scipy/blob/master/HACKING.rst.txt

In case you have some code ready but don't want to integrate it in Scipy, copy-paste it to https://gist.github.com and open an issue in https://github.com/scipy/scipy/issues with a description of what it does and a pointer to the Gist. Someone else with too much time on their hands may later turn up and take a look.
--
Pauli Virtanen

From: francesco.delcitto at sauber-motorsport.com (Del Citto Francesco)
Date: Thu, 4 Sep 2014 08:32:31 +0200
Subject: [SciPy-User] RZ Factorization

Hi Sturla, Pauli,

> I often find that I need BLAS and LAPACK functions not exposed by SciPy.
> And so I end up doing exactly what you have done here, except I do not modify SciPy but make my own
> extension module. (I prefer to use Cython instead of f2py, to better control the behavior of the
> wrapper, though.)

How do you create an extension module for this task? Could you give me some hints?

> Anyhow, it raises the question of whether scipy.linalg.blas and scipy.linalg.lapack should perhaps be
> more complete. Today they mostly expose the BLAS and LAPACK routines that SciPy needs internally. But
> perhaps it is time to acknowledge that some users of SciPy also need other parts of these libraries.
> If we do what you have done here, and extend SciPy's f2py wrappers with missing LAPACK and BLAS
> subroutines, will PRs be accepted? Or is there a policy that SciPy should only expose the parts of
> LAPACK and BLAS which it needs internally?

Agree, but understand that wrapping all the BLAS and LAPACK functions could be boring and time-consuming... On the other hand, BLAS and LAPACK are effectively THE standard libraries for vector and matrix operations. Having all the functions available in SciPy, even without corresponding high-level functions, could be useful and, in the end, quicker than adding the missing functions one by one when they are needed.

> Scipy is a community-driven project, and most often a new feature is added when someone who needs it
> or finds it interesting turns up and implements it.
>
> In this case, both adding the relevant f2py wrappers and a high-level
> rz() function to scipy.linalg seem quite justified.

You're right, but as I have never contributed to scipy, I was just checking whether somebody else was already writing the same wrappers. I'll try to create a decompose_rz.py high-level module and send a pull request.

Thanks!
Francesco

From: sturla.molden at gmail.com (Sturla Molden)
Date: Thu, 4 Sep 2014 13:15:15 +0000 (UTC)
Subject: [SciPy-User] RZ Factorization

Pauli Virtanen wrote:

> In this case, both adding the relevant f2py wrappers and a high-level
> rz() function to scipy.linalg seem quite justified.

What about LQ? The justification is to solve least squares problems when the input is p x n instead of n x p, or when the input is n x p in C order. LAPACK's least squares driver *GELS, also not wrapped in SciPy, chooses intelligently between QR and LQ depending on the data. (Unfortunately the SVD-based solver *GELSS cannot do this optimization.) It seems to be more common to use C order matrices than Fortran order matrices today, and then LQ becomes the 'natural' method to use for least squares.

I would also like to change the triangular matrix solver to use level-3 BLAS *TRSM instead of LAPACK *TRTRS. *TRSM has the advantage that it can solve both AX=B and XA=B, but *TRTRS can only solve AX=B unless we compute the inverse of A explicitly. It often happens that the triangular matrix is on the right-hand side. And *TRTRS is just a shallow wrapper for *TRSM anyway.
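To make the right-hand-side case concrete: until *TRSM is exposed, XA=B has to be rewritten as a transposed left-hand-side solve. A minimal sketch with scipy.linalg.solve_triangular (dtrsm would do this in a single call, without the transposes):

import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.RandomState(0)
A = np.triu(rng.rand(4, 4)) + 4 * np.eye(4)  # upper triangular, well conditioned
B = rng.rand(3, 4)

# X A = B  is equivalent to  A^T X^T = B^T; trans=1 handles A^T without
# forming the transpose explicitly.
X = solve_triangular(A, B.T, trans=1).T
assert np.allclose(X.dot(A), B)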
> In general, I think
> there's no reason not to wrap the whole of LAPACK this way even if there
> are no corresponding high-level functions.

I would really appreciate this, though it would be a major effort. But we could do it incrementally, whenever the need arises.

Sturla

From: francesco.delcitto at sauber-motorsport.com (Del Citto Francesco)
Date: Thu, 4 Sep 2014 15:33:08 +0200
Subject: [SciPy-User] RZ Factorization

Hi Sturla,

> What about LQ? The justification is to solve least squares problems when the input is p x n instead of
> n x p, or when the input is n x p in C order.
> LAPACK's least squares driver *GELS, also not wrapped in SciPy, chooses intelligently between QR and LQ
> depending on the data. (Unfortunately the SVD-based solver *GELSS cannot do this optimization.) It
> seems to be more common to use C order matrices than Fortran order matrices today, and then LQ
> becomes the 'natural' method to use for least squares.

In my case, I need the actual matrices coming out of the RZ decomposition; I'm not using the decomposition to solve a system of equations. Hence I need to access xTZRZF and xORMRZ.

> I would also like to change the triangular matrix solver to use level-3 BLAS *TRSM instead of LAPACK
> *TRTRS. *TRSM has the advantage that it can solve both AX=B and XA=B, but *TRTRS can only solve AX=B
> unless we compute the inverse of A explicitly. It often happens that the triangular matrix is on the
> right-hand side. And *TRTRS is just a shallow wrapper for *TRSM anyway.

I'm a bit too rusty with LAPACK to give an opinion on this! I haven't used it in too long.

> I would really appreciate this, though it would be a major effort. But we could do it incrementally,
> whenever the need arises.

I'll do my best. It's not a high-priority project at the moment, but I'm keen to spend some time on it whenever I can.

By the way, you mentioned the possibility of building a custom extension module. Can you give me some indication of how to do this, or point me to some documentation on the web?

Thanks,
Francesco

From: jsseabold at gmail.com (Skipper Seabold)
Date: Thu, 4 Sep 2014 10:01:31 -0400
Subject: [SciPy-User] RZ Factorization

On Thu, Sep 4, 2014 at 2:32 AM, Del Citto Francesco wrote:
> Hi Sturla, Pauli,
>
>> I often find that I need BLAS and LAPACK functions not exposed by SciPy.
>> And so I end up doing exactly what you have done here, except I do not modify SciPy but make my own
>> extension module. (I prefer to use Cython instead of f2py, to better control the behavior of the
>> wrapper, though.)
>
> How do you create an extension module for this task? Could you give me some hints?
[clip]
> You're right, but as I have never contributed to scipy, I was just checking whether somebody else
> was already writing the same wrappers.
> I'll try to create a decompose_rz.py high-level module and send a pull request.

I don't have another example handy, but here's a PR to add the QZ decomposition that you may use for hints on how to write the PR. It hasn't been merged because of issues with our available LAPACK for Windows, unfortunately. There may be other similar PRs to peruse that will help you get all the way there.

https://github.com/scipy/scipy/pull/3107

Skipper

From: sturla.molden at gmail.com (Sturla Molden)
Date: Thu, 4 Sep 2014 14:19:11 +0000 (UTC)
Subject: [SciPy-User] RZ Factorization

Del Citto Francesco wrote:

> How do you create an extension module for this task? Could you give me some hints?

To use f2py, just do it the same way as SciPy: create a .pyf with the LAPACK prototypes you need and a setup.py file (sketched below), then compile and link with LAPACK. Be aware that there can be an ABI problem if you use the Apple Accelerate framework, which is why SciPy has special wrappers for Accelerate.

Personally I prefer to write a C function that uses the NumPy C API, then I call this C function from Cython. I do not use typed memoryviews in Cython, as the buffer acquisition incurs a lot of overhead. I try to keep my Cython files clean of any numerical stuff. They are just plain wrappers to avoid the boiler-plate Python C API coding (and refcounting, if possible).

If you want to call Fortran from Python in a portable way, you should use the Fortran 2003 ISO C bindings, and then call this Fortran wrapper from Cython. There is a program called "fwrap" which will autogenerate these Fortran wrappers. This way of wrapping Fortran would generally avoid any ABI problems. I have not used fwrap, but I always use Fortran 2003.
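A minimal setup.py for the f2py route mentioned above; the module and file names are placeholders, and it assumes a my_lapack.pyf beside it plus a system LAPACK/BLAS to link against:

from numpy.distutils.core import setup, Extension

# f2py processes the .pyf signature file and generates the extension module.
ext = Extension('my_lapack',
                sources=['my_lapack.pyf'],
                libraries=['lapack', 'blas'])

setup(name='my_lapack', ext_modules=[ext])

Build it with "python setup.py build_ext --inplace", and the wrapped routines appear as attributes of the my_lapack module.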
Then I typically end up with a calling cascade like this:

Python code
   |
Cython wrapper (avoids the Python C API)
   |
C function using the NumPy C API
   |
Fortran 2003 wrapper (ISO C binding)
   |
Fortran 95 modules (my own numerical code)
   |
Fortran 77 LAPACK subroutine

My actual numerical code would be in the Python and Fortran 95 layers. Also note that there is no f2py layer in this calling cascade. This gives me full control over what happens at each step, including memory use, but incurs more boiler-plate coding for me.

Using f2py simplifies the cascade:

Python code
   |
f2py wrapper (little control over the overhead here)
   |
Fortran 95 modules
   |
Fortran 77 LAPACK subroutine

I used to do this before. It works ok if the f2py overhead is acceptable.

SciPy is even more puristic and usually does this:

Python code (all numerics except LAPACK)
   |
f2py wrapper (don't care what happens here)
   |
Fortran 77 LAPACK subroutine

Do whatever you prefer and whatever fits you best :)

Sturla

From: francesco.delcitto at sauber-motorsport.com (Del Citto Francesco)
Date: Thu, 4 Sep 2014 16:22:53 +0200
Subject: [SciPy-User] RZ Factorization

Wow, precious hints, thanks a lot!

> To use f2py, just do it the same way as SciPy: create a .pyf with the LAPACK prototypes you need
> and a setup.py file, then compile and link with LAPACK.
[clip]
From: francesco.delcitto at sauber-motorsport.com (Del Citto Francesco)
Date: Thu, 4 Sep 2014 16:25:05 +0200
Subject: [SciPy-User] RZ Factorization

Hi Skipper,

> I don't have another example handy, but here's a PR to add the QZ decomposition that you may use for
> hints on how to write the PR. It hasn't been merged because of issues with our available LAPACK for
> Windows, unfortunately. There may be other similar PRs to peruse that will help you get all the way
> there.
>
> https://github.com/scipy/scipy/pull/3107

Thanks, I'll go through the code in this PR and see if I can use it as a reference.

Francesco

From: sturla.molden at gmail.com (Sturla Molden)
Date: Thu, 4 Sep 2014 14:36:36 +0000 (UTC)
Subject: [SciPy-User] RZ Factorization

Del Citto Francesco wrote:

> Agree, but understand that wrapping all the BLAS and LAPACK functions
> could be boring and time-consuming...
> On the other hand, BLAS and LAPACK are effectively THE standard libraries
> for vector and matrix operations.
> Having all the functions available in SciPy, even without
> corresponding high-level functions, could be useful and, in the end,
> quicker than adding the missing functions one by one when they are needed.

That is my feeling as well. Also, given that f2py functions have a _cpointer attribute, it is possible for Python packages to "borrow" LAPACK and BLAS functions from SciPy, even for their C or Fortran code. E.g. statsmodels does this for its cythonized Kalman filter (see the sketch at the end of this message).

But it will take a lot of work. LAPACK is huge, which perhaps is why nobody has bothered. Also, it is not just a matter of writing more f2py wrappers; we also need to write tests to make sure the output is correct. And if there is an f2c vs. gfortran ABI collision, we need to write a C wrapper for Accelerate. Also, for any BLAS function we need to write a cblas-to-Fortran-BLAS wrapper for Accelerate. So it is not as trivial as just adding more f2py prototypes to the .pyf files.
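A quick way to see the borrowing mechanism, with dgemm as an arbitrary example:

from scipy.linalg import blas

# Every f2py-wrapped routine carries the address of the underlying Fortran
# function in a ._cpointer capsule; C or Cython code can unpack it and call
# the routine directly, bypassing the Python-level wrapper.
print(blas.dgemm._cpointer)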
In order not to create a mess, it should be done properly from the start.

Sturla

From: sturla.molden at gmail.com (Sturla Molden)
Date: Thu, 4 Sep 2014 14:45:40 +0000 (UTC)
Subject: [SciPy-User] RZ Factorization

Del Citto Francesco wrote:

>> I would also like to change the triangular matrix solver to use level-3
>> BLAS *TRSM instead of LAPACK
>> *TRTRS. *TRSM has the advantage that it can solve both AX=B and XA=B, but
>> *TRTRS can only solve AX=B unless
>> we compute the inverse of A explicitly. It often happens that the
>> triangular matrix is on the right-
>> hand side. And *TRTRS is just a shallow wrapper for *TRSM anyway.
>
> I'm a bit too rusty with LAPACK to give an opinion on this!
> I haven't used it in too long.

It was my fault for suggesting *TRTRS for the triangular solver in the first place. Big mistake :( So I feel a bit responsible for correcting it.

Sturla

From: thomas_unterthiner at web.de (Thomas Unterthiner)
Date: Thu, 04 Sep 2014 16:58:20 +0200
Subject: [SciPy-User] RZ Factorization

On 2014-09-03 20:02, Sturla Molden wrote:
> I often find that I need BLAS and LAPACK functions not exposed by SciPy.
> And so I end up doing exactly what you have done here, except I do not
> modify SciPy but make my own extension module.
[clip]
> If we do what you
> have done here, and extend SciPy's f2py wrappers with missing LAPACK and
> BLAS subroutines, will PRs be accepted? Or is there a policy that SciPy should
> only expose the parts of LAPACK and BLAS which it needs internally?
>
> Sturla

I also often run into the situation of needing BLAS/LAPACK functionality that is not exposed by SciPy. It would be nice if more/all of the calls were wrapped. However, I find it very annoying that the f2py wrappers often silently copy and/or transpose data. I understand that this is an F-order vs. C-order issue, but sometimes, e.g., I know that my matrices are PSD, yet f2py will copy/transpose anyway. Such behavior is annoying and often makes the SciPy wrappers useless for my purposes. So if talk of extending the LAPACK/BLAS support in SciPy is on the table, then I'd like to propose getting rid of that behavior: low-level wrappers should not secretly make copies of the data. I'm using low-level functions to save speed, yet the copying takes away much of the speed gains.
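To make the complaint concrete, a small sketch with dpotrf: the copy is only avoided when the input is already Fortran-ordered and overwrite_a is set.

import numpy as np
from scipy.linalg import lapack

a = np.eye(4) + 0.1 * np.ones((4, 4))   # SPD, C-ordered
c, info = lapack.dpotrf(a, overwrite_a=1)
print(np.may_share_memory(c, a))        # False: the wrapper copied anyway

a_f = np.asfortranarray(a)              # Fortran order lets overwrite_a work
c, info = lapack.dpotrf(a_f, overwrite_a=1)
print(np.may_share_memory(c, a_f))      # True: no hidden copy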
With all that said, I use CFFI for my wrappers, and that typically ends up like this:

from cffi import FFI
import numpy as np
import os

# NOTE: both MKL and OpenBLAS (from 0.2.10) know saxpby,
# but this will fail on older OpenBLAS versions
__blas_ffi = FFI()
__blas_ffi.cdef("""
void cblas_saxpby(const int N, const float alpha, const float *X,
                  const int incX, const float beta, float *Y,
                  const int incY);
""")

c = np.__config__.get_info('blas_opt_info')
soname = os.path.join(c['library_dirs'][0], "lib" + c['libraries'][0] + '.so')
__blas_cffi = __blas_ffi.dlopen(soname)

### USAGE: computes x = alpha*y + beta*x in single precision ###
px = __blas_ffi.cast("float*", x.ctypes.data)
py = __blas_ffi.cast("const float*", y.ctypes.data)
__blas_cffi.cblas_saxpby(x.size, alpha, py, 1, beta, px, 1)

Which I think is fairly straightforward. I think it would be easily doable to wrap all of LAPACKE/CBLAS this way (thus getting rid of the C- vs. F-order issue, since the order can be specified by the user) and replace the current f2py wrappers with that. If time permits and there is enough interest, I could see if I can hack together a prototype/PR for this.

Cheers

Thomas

From: sturla.molden at gmail.com (Sturla Molden)
Date: Fri, 5 Sep 2014 06:52:23 +0000 (UTC)
Subject: [SciPy-User] RZ Factorization

Thomas Unterthiner wrote:

> I also often run into the situation of needing BLAS/LAPACK
> functionality that is not exposed by SciPy. It would be nice if more/all
> of the calls were wrapped. However, I find it very annoying that the
> f2py wrappers often silently copy and/or transpose data. I understand
> that this is an F-order vs. C-order issue, but sometimes, e.g., I know that
> my matrices are PSD, yet f2py will copy/transpose anyway. Such behavior
> is annoying and often makes the SciPy wrappers useless for my purposes.

That is one of the reasons I prefer my own LAPACK wrappers.

> Which I think is fairly straightforward. I think it would be easily
> doable to wrap all of LAPACKE/CBLAS this way (thus getting rid of the C- vs.
> F-order issue, since the order can be specified by the user) and replace the
> current f2py wrappers with that.

LAPACKE has the same issue as f2py. It creates copies and transposes C order arrays.

Sturla

From: thomas_unterthiner at web.de (Thomas Unterthiner)
Date: Fri, 05 Sep 2014 11:25:58 +0200
Subject: [SciPy-User] RZ Factorization

>> Which I think is fairly straightforward. I think it would be easily
>> doable to wrap all of LAPACKE/CBLAS this way (thus getting rid of the C- vs.
>> F-order issue, since the order can be specified by the user) and replace the
>> current f2py wrappers with that.
>> If time permits and there is enough
>> interest, I could see if I can hack together a prototype/PR for this.
>
> LAPACKE has the same issue as f2py. It creates copies and transposes C
> order arrays.

I just went to see for myself: Netlib's LAPACKE does indeed do that, even in cases when it isn't needed (e.g. in *potrf). Is there a reason why this is done, or could other implementations (e.g. MKL) potentially have a more intelligent solution?

From: sturla.molden at gmail.com (Sturla Molden)
Date: Fri, 5 Sep 2014 13:24:52 +0000 (UTC)
Subject: [SciPy-User] RZ Factorization

Thomas Unterthiner wrote:

> I just went to see for myself: Netlib's LAPACKE does indeed do that, even
> in cases when it isn't needed (e.g. in *potrf). Is there a reason why
> this is done, or could other implementations (e.g. MKL) potentially have
> a more intelligent solution?

The Netlib LAPACKE code is written by Intel, and probably similar to the LAPACKE code in MKL.

Another PITA is that LAPACKE does not cover all of LAPACK. Also, there is no agreement on a standard C interface for LAPACK: Intel has LAPACKE, AMD has its own wrappers, Apple has CLAPACK, and Netlib has CLAPACK and LAPACKE (depending on the LAPACK version). I think SciPy's decision to stick with Fortran LAPACK is the better option here, even though there is an f2c/g77 vs. gfortran/ifort ABI problem. Fortran 2003 was supposed to remove this ABI problem, but it does not work here, as f2c and g77 cannot compile Fortran 2003 code.

A while ago I thought I should just ditch Fortran and use C. But now the C standards committee has decided to screw over all numerical C programmers and make the numerical capabilities added to C99 optional. I am getting depressed... :(

Sturla

From: alan.isaac at gmail.com (Alan G Isaac)
Date: Fri, 05 Sep 2014 10:11:01 -0400
Subject: [SciPy-User] C numerics

On 9/5/2014 9:24 AM, Sturla Molden wrote:
> the C standards committee has decided to screw over all numerical C programmers,
> and make the numerical capabilities added to C99 optional

For me and possibly others less attuned to these events, can you please elaborate on this statement? (I'm going to guess you mean the new optional status of complex types, but you might also mean the continuing optional status of the IEEE floating point standard.) What are the specifics of your dismay, looking forward?
Thanks,
Alan

From: sturla.molden at gmail.com (Sturla Molden)
Date: Fri, 05 Sep 2014 18:01:04 +0200
Subject: [SciPy-User] C numerics

On 05/09/14 16:11, Alan G Isaac wrote:
> For me and possibly others less attuned to these events,
> can you please elaborate on this statement?
[clip]

I am thinking about the optional status of:

* complex types and complex math functions (this one in particular)
* variable-length arrays
* the IEEE floating point standard

Sturla

From: u.fechner at tudelft.nl (Uwe Fechner)
Date: Tue, 09 Sep 2014 11:37:04 +0200
Subject: [SciPy-User] 3D spline interpolation very, very slow

Hello,

I am trying to do a spline interpolation of a 3D wind field. For this I am using the function ndimage.map_coordinates.

For some reason this works well if I interpolate in one dimension, but it gets terribly slow if I do this in three dimensions.

An example program can be found at: https://gist.github.com/anonymous/32c8599466dad30cd551

Am I doing something wrong?

I wrote a similar program using the Julia programming language, and the speed is about 10000 times higher.

Any ideas how to improve the speed of this code are welcome.

Best regards:

Uwe Fechner

From: u.fechner at tudelft.nl (Uwe Fechner)
Date: Tue, 09 Sep 2014 12:06:32 +0200
Subject: [SciPy-User] 3D spline interpolation very, very slow - UPDATE -

Hello,

I am trying to do a spline interpolation of a 3D wind field. For this I am using the function ndimage.map_coordinates.

For some reason this works well if I interpolate in one dimension, but it gets terribly slow if I do this in three dimensions.

An example program can be found at: https://gist.github.com/anonymous/32c8599466dad30cd551

Am I doing something wrong?

I wrote a similar program using the Julia programming language, and the speed is about 10000 times higher. This Julia program can be found at: https://gist.github.com/anonymous/70af96a6916b4906f2b0

Any ideas how to improve the speed of the Python code are welcome.

Best regards:

Uwe Fechner

---------------------------------------------
Uwe Fechner, M.Sc.
Delft University of Technology
Faculty of Aerospace Engineering / Wind Energy
Kluyverweg 1,
2629 HS Delft, The Netherlands
Phone: +31-15-27-88902

From: joferkington at gmail.com (Joe Kington)
Date: Tue, 9 Sep 2014 08:53:05 -0500
Subject: [SciPy-User] 3D spline interpolation very, very slow - UPDATE -

The reason is that `ndimage.map_coordinates` pre-calculates a series of weights on the entire input grid to reduce a higher-order interpolation problem to linear interpolation. This is why linear 3D interpolation (`order=1`) takes a fraction of a millisecond, while your second-order spline interpolation on the same dataset takes several seconds.

It's an optimization for the situation where you're interpolating many (e.g. thousands of) points instead of a single point, as in your example. In a nutshell, `map_coordinates` isn't optimized for the use-case of interpolating a small number of points. If you were to compare an interpolation with several thousand points, the timings should be similar.

However, the cost is a one-time cost. One way to get around it is to pre-calculate the weights beforehand using `ndimage.spline_filter`, then pass the result in and specify `prefilter=False` to `map_coordinates`. Here's a quick modification of your original example: https://gist.github.com/joferkington/cc2f4d9a3eb96d837ef1 (the essence is sketched below).

At any rate, this is something that could certainly be more optimized for a small number of points. It currently assumes you're interested in interpolating over most of the input array and does a lot of (sometimes unnecessary) legwork to optimize for that use-case. I'm not going to volunteer at the moment, though :)

(Also, on a side note, it's best to use odd-ordered splines. Otherwise you won't have a "smooth" interpolation. I'd be willing to bet that your Julia example is effectively using `order=3`. Regardless, that doesn't have anything to do with speed.)

Hope that helps,
-Joe
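The essence of that approach, as a sketch (a toy random field stands in for the wind data; this is not the gist verbatim):

import numpy as np
from scipy import ndimage

field = np.random.rand(20, 20, 20)        # stand-in for the 3D wind field

# One-time cost: pre-compute the spline coefficients...
coeffs = ndimage.spline_filter(field, order=3)

# ...then each individual lookup skips the filtering step.
point = np.array([[4.3], [7.1], [9.9]])   # one (x, y, z) coordinate
value = ndimage.map_coordinates(coeffs, point, order=3, prefilter=False)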
From: u.fechner at tudelft.nl (Uwe Fechner)
Date: Tue, 09 Sep 2014 22:00:33 +0200
Subject: [SciPy-User] 3D spline interpolation very, very slow - UPDATE -

Thank you very much for your detailed response. Your suggestion is working fine.

My application is a real-time kite simulator, and I need a new wind vector every 50 ms, depending on the current kite position. When I launch the simulator I can run spline_filter, and then do the lookup of the wind vector quickly while the simulator is running.

Only the documentation should be improved.

Best regards:

Uwe Fechner

On 09.09.2014 15:53, Joe Kington wrote:
> The reason is that `ndimage.map_coordinates` pre-calculates a series of weights on the entire
> input grid to reduce a higher-order interpolation problem to linear interpolation.
[clip]
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Tue, 9 Sep 2014 21:04:54 +0100
Subject: [SciPy-User] 3D spline interpolation very, very slow - UPDATE -

> Only the documentation should be improved.

Pull requests are always welcome :-).
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 10 Sep 2014 11:38:10 -0400
Subject: [SciPy-User] linalg question: unique sign of qr ?

I'm trying to do QR updating, and my updating algorithm doesn't manage to match the signs of numpy or scipy linalg.qr.

Is there a sign convention for the R matrix of QR?

Wikipedia says "If A is invertible, then the factorization is unique if we require that the diagonal elements of R are positive."

scipy and numpy seem to have all diagonal values negative except for the last one, which is positive.

R'R looks to be the same even with different signs (and the same as x dot x of the data). So, similar to the eigenvalue decomposition, we need to pick some signs.

Extra question: If I want to match the signs, do I need to change signs by column or by rows in the upper triangular R matrix?

Josef

From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 10 Sep 2014 11:57:08 -0400
Subject: [SciPy-User] linalg question: unique sign of qr ?

On Wed, Sep 10, 2014 at 11:38 AM, josef.pktd at gmail.com wrote:
[clip]
> Extra question: If I want to match the signs, do I need to change signs by
> column or by rows in the upper triangular R matrix?
and for comparison, cholesky has positive diagonal elements:

>>> np.round(np.linalg.qr(xy, mode='r') / np.linalg.cholesky(xy.T.dot(xy)).T, 3)
array([[ -1.,  -1.,  -1.,  -1.,  -1.],
       [ nan,  -1.,  -1.,  -1.,  -1.],
       [ nan,  nan,  -1.,  -1.,  -1.],
       [ nan,  nan,  nan,  -1.,  -1.],
       [ nan,  nan,  nan,  nan,   1.]])

my updating QR seems to have random signs, which can be fixed by sign adjustments by rows:

>>> np.round(res[-1] * np.sign(res[-1].diagonal())[:,None] / np.linalg.cholesky(xy.T.dot(xy)).T, 3)
array([[  1.,   1.,   1.,   1.,   1.],
       [ nan,   1.,   1.,   1.,   1.],
       [ nan,  nan,   1.,   1.,   1.],
       [ nan,  nan,  nan,   1.,   1.],
       [ nan,  nan,  nan,  nan,   1.]])

BTW: updating the R of QR without keeping track of Q after new observations have arrived is just two lines (after finding a reference).

Josef
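P.S. For the record, a sketch of those two lines with plain numpy, not necessarily the exact code I ended up with: stacking the new rows under the old R and re-triangularizing preserves R'R = X'X, signs aside.

import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(20, 5)
R = np.linalg.qr(X, mode='r')        # initial triangular factor

x_new = rng.rand(1, 5)               # a new observation arrives
R = np.linalg.qr(np.vstack([R, x_new]), mode='r')  # updated R, no Q needed

X_all = np.vstack([X, x_new])
assert np.allclose(R.T.dot(R), X_all.T.dot(X_all))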
From: gdmcbain at freeshell.org (Geordie McBain)
Date: Thu, 11 Sep 2014 09:50:55 +1000
Subject: [SciPy-User] linalg question: unique sign of qr ?

2014-09-11 1:57 GMT+10:00, josef.pktd at gmail.com:
[clip]
> Extra question: If I want to match the signs, do I need to change signs by
> column or by rows in the upper triangular R matrix?

Hi Josef,

I found the same non-uniqueness of numpy.linalg.qr. My fix was similarly to enforce the positive diagonal on R.

If A = QR and S = sgn diag R, then S is an involution (S inv S = I) and QR = Q S inv S R = (Q S)(inv S R), and the last is another QR decomposition, but with the property required for uniqueness. The code I use is

Q, R = np.linalg.qr(A)
S = np.sign(np.diag(R))
Q = Q * S            # flip the columns of Q...
R = S[:, None] * R   # ...and the matching rows of R, so A == Q.dot(R) still holds

HTH

G. D. McBain

From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 10 Sep 2014 21:35:05 -0400
Subject: [SciPy-User] linalg question: unique sign of qr ?

On Wed, Sep 10, 2014 at 7:50 PM, Geordie McBain wrote:
[clip]
> If A = QR and S = sgn diag R, then S is an involution (S inv S = I)
> and QR = Q S inv S R = (Q S)(inv S R), and the last is another QR
> decomposition, but with the property required for uniqueness.

Thanks Geordie for the explanation and for confirming this. It would have saved me an hour of hunting for non-existent bugs had I thought about this earlier.

Josef

From: pbicou at gmail.com (Pierre Bicou)
Date: Mon, 15 Sep 2014 17:14:26 +0200
Subject: [SciPy-User] adding dataframes to a Panel

Hi,

I'm not sure I'm on the right mailing list, but I need some help with Pandas. Yes, I'm a newbie with numerical python, so be indulgent plz.

I have a non-empty dataframe: df = pd.DataFrame(...) and an empty panel: p = pd.Panel()

I want to add "df" to "p", with index "1234", like this: p["1234"] = df

This doesn't seem to work; I get an empty dataframe at p["1234"]. Why?

Thx,
P

From: matrajt at gmail.com (Laura Matrajt)
Date: Tue, 16 Sep 2014 10:28:38 -0700
Subject: [SciPy-User] odeint problems with a simple ode

Hi there,

I have a simple ODE that I was trying to integrate with odeint and ran into weird problems. The ode is basically this:

ds/dt = 0  if t <= b
ds/dt = c  if b < t <= b + a   (c a constant)
ds/dt = 0  if t > b + a

This integrates to a stair-type function, with the "stair" part being a line. I know this can be integrated analytically, but this is just part of a more complicated ode.

If one tries to integrate this using odeint, the solver will work fine as long as b<a, but it will not work if b>a. In fact, I checked the values that t was taking, and it is going all over the place. I know that this function is non-differentiable at two points, but it is so easy that I was hoping the integrator would be able to handle this. In fact, other solvers (my own RK4, matlab, etc.) can deal with this without any problem.

Any ideas of how/why this is happening would be very much appreciated. Thanks.

Below is a minimal (not)working example:

import numpy as np
import pylab as plt
from scipy.integrate import odeint

def frac(t, a, fraction, initS0, b):
    if (t > 0) & (t <= b):
        frac = 0
    elif (t > b) & (t <= b + a):
        frac = 1.0 * initS0 * fraction / a
    else:
        frac = 0
    return frac

def linearModel(y, t, params):
    [a, fraction, initS0, b] = params
    out = [-frac(t, a, fraction, initS0, b),
           frac(t, a, fraction, initS0, b)]
    return out

if __name__ == '__main__':
    a = 15.0
    b = 10.0
    time = np.linspace(0, 100, 101)
    initLM = 1000.0
    initCondLM = [initLM, 0]
    paramsLM = [a, 0.1, initLM, b]
    LM = odeint(linearModel, initCondLM, time, args=(paramsLM,))
    plt.plot(time, LM[:, 0], 'b')
    plt.plot(time, LM[:, 1], 'r')
    plt.show()

--
Laura
From pav at iki.fi Tue Sep 16 13:43:23 2014
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 16 Sep 2014 20:43:23 +0300
Subject: [SciPy-User] odeint problems with a simple ode

16.09.2014, 20:28, Laura Matrajt wrote:
[clip]
> Below is a minimal (not)working example:
[clip]

Seems to work fine for me, without any problems.
Please give details on how it does not work.

-- Pauli Virtanen

From matrajt at gmail.com Tue Sep 16 13:49:39 2014
From: matrajt at gmail.com (Laura Matrajt)
Date: Tue, 16 Sep 2014 10:49:39 -0700
Subject: [SciPy-User] odeint problems with a simple ode

Hi Pauli,
thanks for your reply. So if a < b, that is, with a somewhat larger b
than in the example I pasted, it does not work for me.

On Tue, Sep 16, 2014 at 10:43 AM, Pauli Virtanen wrote:
> Seems to work fine for me, without any problems.
> Please give details on how it does not work.

--
Laura

From pav at iki.fi Tue Sep 16 13:47:56 2014
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 16 Sep 2014 20:47:56 +0300
Subject: [SciPy-User] odeint problems with a simple ode

16.09.2014, 20:43, Pauli Virtanen wrote:
> 16.09.2014, 20:28, Laura Matrajt wrote:
> [clip]
>> Below is a minimal (not)working example:
> [clip]
>
> Seems to work fine for me, without any problems.
> Please give details on how it does not work.

Ok, some issue seems to appear for somewhat larger b than that given in
the example code pasted.

The reason is that the solver does not know the length scale of the RHS
function and increases the step size rapidly, so that it misses the
nonzero bump in it.

You can specify the option hmax=1.0 to limit the step size for the
solver, so that it does not skip past the nonzero region.
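Applied to the example above, the fix is a single extra argument (the
value 1.0 is only an assumption; anything comfortably smaller than the
width a of the bump should work):

    from scipy.integrate import odeint

    # Cap odeint's internal step size so the solver cannot step right
    # over the short interval (b, b+a] where the RHS is nonzero.
    LM = odeint(linearModel, initCondLM, time, args=(paramsLM,), hmax=1.0)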
From nils106 at googlemail.com Thu Sep 18 08:32:19 2014
From: nils106 at googlemail.com (Nils Wagner)
Date: Thu, 18 Sep 2014 14:32:19 +0200
Subject: [SciPy-User] interpolate.sproot -- Warning: the number of zeros exceeds mest

Hi all,

I am trying to compute the zeros of a periodic spline using splrep and
sproot. How can I suppress the message

    Warning: the number of zeros exceeds mest

Is it possible to throw an exception instead?

Regards,
Nils

From davidmenhur at gmail.com Thu Sep 18 08:43:41 2014
From: davidmenhur at gmail.com (Daπid)
Date: Thu, 18 Sep 2014 14:43:41 +0200
Subject: [SciPy-User] interpolate.sproot -- Warning: the number of zeros exceeds mest

On 18 September 2014 14:32, Nils Wagner wrote:
> How can I suppress the message
>
>     Warning: the number of zeros exceeds mest
>
> Is it possible to throw an exception instead?

Python's warnings machinery can be used:

https://docs.python.org/2/library/warnings.html

Using warnings.simplefilter('ignore') you will completely silence the
warning, and using 'error' instead will raise an exception. You may
want to run this in a context, so it doesn't "leak out" into the rest
of your program:

    with warnings.catch_warnings():

See examples in the docs.

/David.
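Spelled out, the context-manager pattern looks like this (x, y are
assumed input data; note that, as it turns out further down the thread,
sproot's message is a plain print rather than a Python warning, so this
particular message is not affected):

    import warnings
    from scipy.interpolate import splrep, sproot

    tck = splrep(x, y, per=1)
    with warnings.catch_warnings():
        warnings.simplefilter("error")   # or "ignore" to silence
        roots = sproot(tck, mest=50)
    # the default warning filters are restored when the block exits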
From nils106 at googlemail.com Thu Sep 18 08:52:20 2014
From: nils106 at googlemail.com (Nils Wagner)
Date: Thu, 18 Sep 2014 14:52:20 +0200
Subject: [SciPy-User] interpolate.sproot -- Warning: the number of zeros exceeds mest

So far I have used

    from scipy.interpolate import splrep, splev, sproot
    import warnings
    warnings.simplefilter("ignore")

but the warning message is still there. Am I missing something?

Nils

On Thu, Sep 18, 2014 at 2:43 PM, Daπid wrote:
[clip]

From eric.moore2 at nih.gov Thu Sep 18 09:49:31 2014
From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F])
Date: Thu, 18 Sep 2014 13:49:31 +0000
Subject: [SciPy-User] interpolate.sproot -- Warning: the number of zeros exceeds mest
Message-ID: <649847CE7F259144A0FD99AC64E7326D11C991@MLBXV17.nih.gov>

[clip]

It isn't a warning in that sense. sproot simply calls print. See
https://github.com/scipy/scipy/blob/master/scipy/interpolate/fitpack.py#L726

Eric

From guziy.sasha at gmail.com Thu Sep 18 10:07:11 2014
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Thu, 18 Sep 2014 10:07:11 -0400
Subject: [SciPy-User] interpolate.sproot -- Warning: the number of zeros exceeds mest

Can you pass a very big number, mest=1e10? Or will it still be possible
to exceed it?

Cheers

2014-09-18 8:52 GMT-04:00 Nils Wagner:
[clip]

--
Sasha

From nils106 at googlemail.com Thu Sep 18 10:07:32 2014
From: nils106 at googlemail.com (Nils Wagner)
Date: Thu, 18 Sep 2014 16:07:32 +0200
Subject: [SciPy-User] interpolate.sproot -- Warning: the number of zeros exceeds mest

IMHO, this behavior could be improved.

Nils

On Thu, Sep 18, 2014 at 3:49 PM, Moore, Eric (NIH/NIDDK) [F] wrote:
[clip]
> It isn't a warning in that sense. sproot simply calls print. See
> https://github.com/scipy/scipy/blob/master/scipy/interpolate/fitpack.py#L726
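Because the message comes from a bare print inside fitpack.py, the
warnings module cannot intercept it. Until it is turned into a real
warning, the blunt workaround is to capture standard output around the
call; a sketch (contextlib.redirect_stdout needs Python 3.4+; on the
Python 2 of this era one would swap sys.stdout by hand; tck as above):

    import io
    from contextlib import redirect_stdout
    from scipy.interpolate import sproot

    buf = io.StringIO()
    with redirect_stdout(buf):        # swallow anything print()ed inside
        roots = sproot(tck, mest=50)
    if "exceeds mest" in buf.getvalue():
        raise RuntimeError("sproot: the number of zeros exceeded mest")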
From evgeny.burovskiy at gmail.com Thu Sep 18 12:36:42 2014
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Thu, 18 Sep 2014 17:36:42 +0100
Subject: [SciPy-User] interpolate.sproot -- Warning: the number of zeros exceeds mest

On Sep 18, 2014 6:07 PM, "Nils Wagner" wrote:
[clip]
> IMHO, this behavior could be improved.

+1 for replacing the print with a normal python warning.
There are several more of them in fitpack.py, all of which could/should
be python warnings.

From davidmenhur at gmail.com Thu Sep 18 13:23:55 2014
From: davidmenhur at gmail.com (Daπid)
Date: Thu, 18 Sep 2014 19:23:55 +0200
Subject: [SciPy-User] interpolate.sproot -- Warning: the number of zeros exceeds mest

On 18 September 2014 18:36, Evgeni Burovski wrote:
> +1 for replacing the print with a normal python warning.
> There are several more of them in fitpack.py, all of which
> could/should be python warnings.

I am working on a PR. While I am at it, should the old-style string
formatting be replaced by new style, or just left as it is?

/David

From ccordoba12 at gmail.com Thu Sep 18 18:00:32 2014
From: ccordoba12 at gmail.com (Carlos Córdoba)
Date: Thu, 18 Sep 2014 17:00:32 -0500
Subject: [SciPy-User] ANN: Spyder 2.3.1 is released!
Message-ID: <541B5600.4010309@gmail.com>

Hi all,

On behalf of Spyder's development team
(http://code.google.com/p/spyderlib/people/list), I'm pleased to
announce that Spyder 2.3.1 has been released and is available for
Windows XP/Vista/7/8, GNU/Linux and MacOS X:

https://bitbucket.org/spyder-ide/spyderlib/downloads

This release represents 2 months of development since 2.3.0 and
introduces major enhancements and new features:

* Support for Pandas DataFrame and TimeSeries types, and NumPy 3D
  arrays, in the Variable Explorer
* Connect to external IPython kernels through ssh
* Add a tutorial for beginners to the Object Inspector
* Improve Spyder's look and style on Mac
* And several other changes:
  http://code.google.com/p/spyderlib/wiki/ChangeLog

We fixed 15 important bugs, merged 13 pull requests from 8 authors and
added more than 300 commits between these two releases.

Spyder is a free, open-source (MIT license) interactive development
environment for the Python language with advanced editing, interactive
testing, debugging and introspection features. Originally designed to
provide MATLAB-like features (integrated help, interactive console,
variable explorer with GUI-based editors for dictionaries, NumPy
arrays, ...), it is strongly oriented towards scientific computing and
software development.

Last, but not least, we welcome any contribution that helps make Spyder
an efficient scientific development/computing environment. Join us to
help create your favorite environment!
(http://code.google.com/p/spyderlib/wiki/NoteForContributors)

Enjoy!
-Carlos

From pmhobson at gmail.com Fri Sep 19 17:02:18 2014
From: pmhobson at gmail.com (Paul Hobson)
Date: Fri, 19 Sep 2014 14:02:18 -0700
Subject: [SciPy-User] adding dataframes to a Panel

You want pydata at googlegroups.com, but I think the answer to your
question is

    p.loc['1234'] = df

I don't work with Panel much, so caveat emptor, I guess.

On Mon, Sep 15, 2014 at 8:14 AM, Pierre Bicou wrote:
[clip]
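For completeness, one construction that sidesteps the problem:
assigning into an *empty* Panel aligns the frame against the Panel's
existing (empty) major/minor axes, which is presumably why an empty
frame comes back. Building the Panel from a dict of DataFrames avoids
the alignment step; a sketch with made-up data, against the pandas
Panel API of the time:

    import pandas as pd

    df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

    # The dict constructor takes the Panel's axes from the frame
    # itself, instead of conforming the frame to an empty Panel.
    p = pd.Panel({'1234': df})
    print(p['1234'])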
From newville at cars.uchicago.edu Mon Sep 22 10:30:39 2014
From: newville at cars.uchicago.edu (Matt Newville)
Date: Mon, 22 Sep 2014 09:30:39 -0500
Subject: [SciPy-User] ANN: Lmfit 0.8.0

Hi Folks,

Lmfit 0.8.0 has been released, and is available from PyPI and github:

https://pypi.python.org/pypi/lmfit/
http://lmfit.github.io/lmfit-py/

Lmfit provides a high-level approach to least-squares minimization and
curve fitting based on the routines from scipy.optimize. The key idea
is to use Parameter objects that can be bounded, fixed, or
algebraically constrained in place of floating-point variables. Many
additional features to make minimization problems easier and better are
included. Lmfit is MIT-licensed and a pure Python module. It requires
scipy version 0.13 or later.

Lmfit version 0.8.0 includes several bug fixes and improvements. The
most important new feature is a Model class for high-level
curve-fitting problems, currently emphasizing 1-D functions. The Model
class, largely the work of Daniel Allen with substantial input from
Antonino Ingargiola, wraps a model function that simulates some data.
It includes methods to create parameters from function arguments, to
fit to data, and to evaluate models. More than 20 pre-built models for
line shapes such as Gaussian and Exponential are included. An important
feature is that Models can be added together, making it very easy to
construct complex models.

Automated testing with nose and Travis-CI is greatly improved. There
are over 100 tests, many of these checking the numerical results for
non-trivial fits. All of the NIST StRD datasets are tested, requiring
that NIST-certified values be found (with fairly loose precision) from
at least one of the NIST-provided starting values.

Daniel Allen also started an IPython GUI Fitter, providing a very nice
tool for fitting simple line shapes to 1-dimensional data.

Finally, we've added a lmfit-py google group for questions about usage,
discussions of design, and announcements. We're happy to have
conversations about ideas for improving lmfit, or minimization and
curve-fitting routines with scipy in general, on any appropriate forum.

Thanks,

--Matt Newville
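In use, the Model composition described above looks roughly like this
(a sketch only; the data are made up, and the exact model and parameter
names are assumed from the lmfit documentation of that era):

    import numpy as np
    from lmfit.models import GaussianModel, LinearModel

    x = np.linspace(0, 10, 201)
    y = 4.0 * np.exp(-(x - 5.0)**2 / 0.5) + 0.1 * x \
        + np.random.normal(0, 0.05, x.size)

    # Composite model: a Gaussian peak plus a linear background.
    model = GaussianModel(prefix='g_') + LinearModel(prefix='bg_')
    params = model.make_params(g_center=5, g_sigma=1, g_amplitude=5,
                               bg_slope=0, bg_intercept=0)
    result = model.fit(y, params, x=x)
    print(result.fit_report())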
From fperez.net at gmail.com Mon Sep 22 12:59:30 2014
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 22 Sep 2014 09:59:30 -0700
Subject: [SciPy-User] ANN: Lmfit 0.8.0

On Mon, Sep 22, 2014 at 7:30 AM, Matt Newville wrote:
> Daniel Allen also started an IPython GUI Fitter, providing a very nice
> tool for fitting simple line shapes to 1-dimensional data.

Do you have a pointer to an example notebook I could look at that
showcases this? Thanks!

f

--
Fernando Perez (@fperez_org; http://fperez.org)

From trive at astro.su.se Mon Sep 22 18:52:44 2014
From: trive at astro.su.se (Thøger Emil Rivera-Thorsen)
Date: Tue, 23 Sep 2014 00:52:44 +0200
Subject: [SciPy-User] ANN: Lmfit 0.8.0
Message-ID: <5420A83C.1070401@astro.su.se>

Hi Matt,

I have been working on a 1D line fitter which uses lmfit as its backend
(as a module for a larger project). I had to shelve it for the moment,
but I am planning to get back to work on it. It could possibly be of
some use, interface-wise. It is written using Traits, Traitsui and
Chaco, as the interactive IPython and Matplotlib were very limited when
I started writing. In any case, it is at

https://github.com/thriveth/lpbuilder

/Emil

On 09/22/2014 04:30 PM, Matt Newville wrote:
[clip]

From newville at cars.uchicago.edu Mon Sep 22 22:43:41 2014
From: newville at cars.uchicago.edu (Matt Newville)
Date: Mon, 22 Sep 2014 21:43:41 -0500
Subject: [SciPy-User] ANN: Lmfit 0.8.0

Hi Emil,

On Mon, Sep 22, 2014 at 5:52 PM, Thøger Emil Rivera-Thorsen wrote:
[clip]
> https://github.com/thriveth/lpbuilder

Thanks, that looks very nice. I'm also aware of a similar interface at
http://nexpy.github.io/nexpy/pythongui.html#fitting-nexus-data, and I
have some similar (less complete) wx widgets for some of my data
collection/visualization tools.

It does seem like there is a need for a GUI interface for "simple peak
fitting" similar to Origin or fityk that is very easy to use for simple
cases but powerful enough to allow complex work. I would imagine that
IPython notebooks or Qt would be the most obvious toolkit choices, but
I don't know either of these tools very well. I'm sure that being
friendly to Pandas Series and datasets from HDF5 files would be greatly
appreciated too. I don't think it would be a huge effort for a properly
motivated person.

--Matt Newville
From jeremy at jeremysanders.net Tue Sep 23 06:25:34 2014
From: jeremy at jeremysanders.net (Jeremy Sanders)
Date: Tue, 23 Sep 2014 12:25:34 +0200
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

I have a function which returns a value computed using a Monte Carlo
simulation. I'd like to minimize this function. It's also hard to
compute the gradient for this function, as the parameters are converted
to integers internally (basically they are converted to array indices).

I'd have thought that simulated annealing might be the best way to
minimize this function. However, this is now deprecated in scipy. The
replacement, basinhopping, appears to use scipy.minimize internally, so
I think this relies on a function where the gradient can be computed.

Is there a real replacement for annealing in this scenario?
basinhopping doesn't seem to work very well when I tried it. It seems
that it is wrong to deprecate simulated annealing, as it is a widely
understood algorithm.

Doing some tests, the most robust way of finding the minimum I have
found so far is to use the MCMC emcee module.

Thanks

Jeremy Sanders.

From sturla.molden at gmail.com Tue Sep 23 08:58:10 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Tue, 23 Sep 2014 12:58:10 +0000 (UTC)
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

Jeremy Sanders wrote:
> Is there a real replacement for annealing in this scenario?
> basinhopping doesn't seem to work very well when I tried it. It seems
> that it is wrong to deprecate simulated annealing, as it is a widely
> understood algorithm.

I also don't understand why SA was deprecated. scipy.optimize needs
more minimizers, not less.

Sturla

From andyfaff at gmail.com Tue Sep 23 17:54:10 2014
From: andyfaff at gmail.com (Andrew Nelson)
Date: Wed, 24 Sep 2014 07:54:10 +1000
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

Jeremy Sanders wrote:
[clip]
> Is there a real replacement for annealing in this scenario?

The 0.15 release of scipy will have the
'scipy.optimize.differential_evolution' minimizer. This optimizer uses
Differential Evolution
(http://en.wikipedia.org/wiki/Differential_evolution), which is a
stochastic method and does not use gradients at all. Internal
conversion of parameters to integers will not be a problem. But you do
have to specify a range, (min, max), for each parameter. The minimizer
stays strictly within the bounds of those ranges.

From tests on my problems (high-dimensional non-linear least squares)
the algorithm is streets ahead of simulated annealing. I also prefer
differential evolution over scipy.optimize.curve_fit, because it is
totally insensitive to the starting parameters used.

Andrew.

--
_____________________________________
Dr. Andrew Nelson
_____________________________________
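A minimal usage sketch of the new minimizer (the Rosenbrock test
function and the bounds are just illustrative; the function ships with
scipy 0.15+):

    from scipy.optimize import differential_evolution, rosen

    # One (min, max) pair per parameter; the solver stays inside these
    # ranges and needs neither a starting point nor gradients.
    bounds = [(-5, 5), (-5, 5)]
    result = differential_evolution(rosen, bounds, seed=1)
    print(result.x, result.fun)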
From jeremy at jeremysanders.net Wed Sep 24 04:47:31 2014
From: jeremy at jeremysanders.net (Jeremy Sanders)
Date: Wed, 24 Sep 2014 10:47:31 +0200
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

Andrew Nelson wrote:
> The 0.15 release of scipy will have the
> 'scipy.optimize.differential_evolution' minimizer. This optimizer uses
> Differential Evolution
> (http://en.wikipedia.org/wiki/Differential_evolution), which is a
> stochastic method and does not use gradients at all.

Sounds interesting - I should try it out...

Thanks

Jeremy

From newville at cars.uchicago.edu Wed Sep 24 10:07:04 2014
From: newville at cars.uchicago.edu (Matt Newville)
Date: Wed, 24 Sep 2014 09:07:04 -0500
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

On Tue, Sep 23, 2014 at 7:58 AM, Sturla Molden wrote:
> I also don't understand why SA was deprecated. scipy.optimize needs
> more minimizers, not less.

I think the record (issues on github, internet searches, the very nice
comparison by Andrea Gavana at http://infinity77.net/global_optimization/)
shows that scipy.optimize.anneal() simply didn't work in very many
cases, and was not being well cared for.

I think no one would disagree that scipy.optimize needs more (and
better) optimizers, but deprecating those that work poorly (anneal)
when others that work better (basinhopping) are available seems
sensible to me.

That said, the comparison above implies that the differential evolution
algorithm is not better (in either success rate or number of
evaluations) than basinhopping, but it certainly seems better than
anneal was. I don't know if the code is available, but the AMPGO method
would seem like a very valuable addition.

--Matt Newville
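For objectives like Jeremy's, where parameters get snapped to integer
indices and no gradient exists, basinhopping can be paired with a
derivative-free local minimizer, so it never asks for derivatives. A
sketch with a made-up objective:

    import numpy as np
    from scipy.optimize import basinhopping

    def objective(x):
        # Stand-in for a Monte Carlo objective whose parameters are
        # used as (rounded) array indices.
        i = int(round(x[0]))
        return (i - 7)**2 + 0.1 * np.abs(x[0] - i)

    # Nelder-Mead is gradient-free.
    ret = basinhopping(objective, x0=[0.0], niter=200,
                       minimizer_kwargs={"method": "Nelder-Mead"})
    print(ret.x, ret.fun)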
From andrea.gavana at gmail.com Wed Sep 24 10:22:56 2014
From: andrea.gavana at gmail.com (Andrea Gavana)
Date: Wed, 24 Sep 2014 16:22:56 +0200
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

Hi,

On 24 September 2014 16:07, Matt Newville wrote:
[clip]
> I don't know if the code is available, but the AMPGO method would seem
> like a very valuable addition.

I never got around taking the AMPGO code up to SciPy standards (in
terms of code style and docstrings), and I will have to find the time
to do it at some point - or maybe I'll just put it in the public domain
and some nice soul may take up the task.

However, not long ago I was asked to include the "Shuffled Complex
Evolution" algorithm in my benchmark. From my tests, it works very well
(it comes second after AMPGO, with a success rate of 68% against 79%
for AMPGO and 58% for ASA, the third best). I'll upload the results of
my tests in the coming days, and I'll send an update to the list if
that is not considered rude.

Andrea.

# ------------------------------------------------------------- #
def ask_mailing_list_support(email):

    if mention_platform_and_version() and include_sample_app():
        send_message(email)
    else:
        install_malware()
        erase_hard_drives()
# ------------------------------------------------------------- #

From sturla.molden at gmail.com Wed Sep 24 11:19:37 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 24 Sep 2014 15:19:37 +0000 (UTC)
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

Matt Newville wrote:
> I think no one would disagree that scipy.optimize needs more (and
> better) optimizers, but deprecating those that work poorly (anneal)
> when others that work better (basinhopping) are available seems
> sensible to me.

Which optimizer works better depends on the problem. Simulated
Annealing is a well-known algorithm and it makes sense to keep it, at
least for reference.

Sturla
From newville at cars.uchicago.edu Wed Sep 24 13:53:29 2014
From: newville at cars.uchicago.edu (Matt Newville)
Date: Wed, 24 Sep 2014 12:53:29 -0500
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

On Wed, Sep 24, 2014 at 10:19 AM, Sturla Molden wrote:
> Which optimizer works better depends on the problem.

Andrea tested 202 example problems from the literature, with 100 random
starting values for each. For the overwhelming majority (around 85%) of
the problems, simulated annealing never found the correct solution.
That alone might justify being deprecated (at least in the sense of
disapproval). Perhaps there are flaws in anneal() that could be
improved. I don't know. But it seems that it is not going to work for
many cases.

OTOH, for about 5% of the problems, anneal found the correct solution
from more than 50% of the starting values, and for about 5% of the
problems, anneal found the correct solution more frequently than basin
hopping. For 3% of the problems, simulated annealing out-performed both
basin hopping and AMPGO. With basin hopping included, I think it is
perfectly reasonable to recommend basin hopping over simulated
annealing. Ideally, AMPGO and other routines could be added.

> Simulated Annealing is a well-known algorithm and it makes sense to
> keep it, at least for reference.

I think it's reasonable to expect a "fitness for purpose". Knowing that
it gets the correct solution in fewer than 10% of the test problems
can't inspire great confidence in anyone.
If anneal is un-deprecated (re-approved?), I would suggest that its
miserable track record be documented in the top level of
scipy.optimize, where the unsuspecting user might otherwise see it
listed as one of the few Global Optimizers in scipy, and be led to the
mistaken belief that its results are reliable.

--Matt Newville

From andrea.gavana at gmail.com Wed Sep 24 17:39:03 2014
From: andrea.gavana at gmail.com (Andrea Gavana)
Date: Wed, 24 Sep 2014 23:39:03 +0200
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

On 24 September 2014 19:53, Matt Newville wrote:
[clip]

I tend to agree with Matt. Either SA is a weak algorithm or the SciPy
implementation of SA is a weak one. The result doesn't change: it's
still a weak algorithm. The clear demonstration of it is the benchmark
I have set up: compared to ASA, the SciPy implementation simply
disappears. It sits among some other gems I found in OpenOpt, namely at
the very bottom of the ranking of optimization algorithms' efficiency.

Maybe the SciPy implementation could be a bridge to ASA? ASA is a very,
very good algorithm and it could bring huge value to scipy.optimize. I
didn't check the license restrictions for ASA, but then I couldn't care
less: as long as it is not commercial, I can use it anyway. "GPL"
doesn't really apply here, as to me it only means "Gas Propano
Liquefatto", which is a kind of gas you put in specially-adapted cars
to make them zip around.

I have uploaded the latest set of results of my benchmarks, including
the "Shuffled Complex Evolution" algorithm, which seems to be an
extremely good global optimization approach, here:

http://infinity77.net/global_optimization/

Comments and criticisms are, as usual, more than welcome.

Andrea.

From andyfaff at gmail.com Wed Sep 24 18:00:20 2014
From: andyfaff at gmail.com (Andrew Nelson)
Date: Thu, 25 Sep 2014 08:00:20 +1000
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

On 24 September 2014 16:07, Matt Newville wrote:
> That said, the comparison above implies that the differential
> evolution algorithm is not better (in either success rate or number
> of evaluations) than basinhopping, but it certainly seems better than
> anneal was.

I used the benchmarks in the master branch of scipy to obtain the
results in the gist linked below. The results indicate that deprecating
anneal may not be a bad idea. In 9 tests of the minimizers as
implemented in scipy, the score was:

    differential_evolution  6
    basinhopping            3
    anneal                  0

https://gist.github.com/andyfaff/24c96a3d5dbc7b0272b2

You have to bear in mind that the benchmark functions (contained in
https://github.com/scipy/scipy/blob/master/scipy/optimize/benchmarks/test_functions.py)
are quite hard. As such, they aren't approachable with the "normal"
minimizers.

--
_____________________________________
Dr. Andrew Nelson
_____________________________________
From andrea.gavana at gmail.com Wed Sep 24 18:07:19 2014
From: andrea.gavana at gmail.com (Andrea Gavana)
Date: Thu, 25 Sep 2014 00:07:19 +0200
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

On 25 September 2014 00:00, Andrew Nelson wrote:
[clip]
> In 9 tests of the minimizers as implemented in scipy, the score was:
>
>     differential_evolution  6
>     basinhopping            3
>     anneal                  0

My own experience with optimizers tells me that DE works very, very
well with specifically designed problems - as you can see from my
benchmarks. It doesn't perform as well on a wider range of benchmarks.
It's always the same story, as I have encountered in the literature
countless times: the functions to be optimized are designed around the
algorithm's strengths, so obviously they give the expected results. DE
miserably fails for problems that are non-convex, multi-extrema and
with no random behaviour.

Andrea.

From cimrman3 at ntc.zcu.cz Thu Sep 25 11:21:38 2014
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Thu, 25 Sep 2014 17:21:38 +0200
Subject: [SciPy-User] ANN: SfePy 2014.3
Message-ID: <54243302.1010701@ntc.zcu.cz>

I am pleased to announce release 2014.3 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving
systems of coupled partial differential equations by the finite element
method or by isogeometric analysis (preliminary support). It is
distributed under the new BSD license.

Home page: http://sfepy.org
Mailing list: http://groups.google.com/group/sfepy-devel
Git (source) repository, issue tracker, wiki: http://github.com/sfepy

Highlights of this release
--------------------------

- isogeometric analysis (IGA) speed-up by C implementation of NURBS
  basis evaluation
- generalized linear combination boundary conditions that work between
  different fields/variables and support non-homogeneous periodic
  conditions
- non-constant essential boundary conditions given by a function in IGA
- reorganized and improved documentation

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1
(rather long and technical).
Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order): Vladimir Lukes,
Matyas Novak, Zhihua Ouyang, Jaroslav Vondrejc

From sturla.molden at gmail.com Thu Sep 25 21:59:29 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Fri, 26 Sep 2014 03:59:29 +0200
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

On 24/09/14 23:39, Andrea Gavana wrote:
> I tend to agree with Matt. Either SA is a weak algorithm or the SciPy
> implementation of SA is a weak one. The result doesn't change: it's
> still a weak algorithm.

It is still useful to have for reference because it is so abundant in
the literature. Weakness is a documentation issue.

Sturla

From robert.kern at gmail.com Fri Sep 26 04:18:02 2014
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 26 Sep 2014 09:18:02 +0100
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

On Fri, Sep 26, 2014 at 2:59 AM, Sturla Molden wrote:
> It is still useful to have for reference because it is so abundant in
> the literature. Weakness is a documentation issue.

If you want it around as a reference, please take the code out and
maintain it in its own project. It's not something that any scipy
developer wants to maintain, document, or answer questions about. The
only answer we can give to people is "Don't use it."

--
Robert Kern

From sergio_r at mail.com Mon Sep 29 12:49:55 2014
From: sergio_r at mail.com (Sergio Rojas)
Date: Mon, 29 Sep 2014 18:49:55 +0200
Subject: [SciPy-User] Does SciPy already supersede NumPy?

In the not-so-long past we were supposed to do:

>>> import numpy as np
>>> anumpy = np.arange(15).reshape(3, 5)
>>> anumpy
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
>>> anumpy.shape
(3, 5)
>>> anumpy.ndim
2
>>> np.array([6, 7, 8])
array([6, 7, 8])

Now, one can also do:

>>> import scipy as sp
>>> sp.__version__
'0.13.3'
>>> ascipy = sp.arange(15).reshape(3, 5)
>>> ascipy
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
>>> ascipy.shape
(3, 5)
>>> ascipy.ndim
2
>>> sp.array([6, 7, 8])
array([6, 7, 8])

Is it recommended to do so? Does this mean that SciPy has overloaded
NumPy's functions, or that SciPy is superseding NumPy?

From sturla.molden at gmail.com Mon Sep 29 12:51:39 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 29 Sep 2014 16:51:39 +0000 (UTC)
Subject: [SciPy-User] Does SciPy already supersede NumPy?

No. SciPy requires NumPy.

Sturla

From robert.kern at gmail.com Tue Sep 30 11:18:06 2014
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 30 Sep 2014 16:18:06 +0100
Subject: [SciPy-User] Does SciPy already supersede NumPy?
On Mon, Sep 29, 2014 at 5:49 PM, Sergio Rojas wrote:
[clip]
> Is it recommended to do so? Does this mean that SciPy has overloaded
> NumPy's functions, or that SciPy is superseding NumPy?

No. See this section of the documentation:

http://docs.scipy.org/doc/scipy/reference/api.html#guidelines-for-importing-functions-from-scipy

--
Robert Kern

From trive at astro.su.se Tue Sep 30 11:24:24 2014
From: trive at astro.su.se (Thøger Emil Rivera-Thorsen)
Date: Tue, 30 Sep 2014 17:24:24 +0200
Subject: [SciPy-User] Does SciPy already supersede NumPy?
Message-ID: <542ACB28.7040708@astro.su.se>

Oh wow - I have always imported everything NumPy through simply doing
"import scipy as sp", because that was one less import statement and
gave me everything I needed from NumPy. What is worse, I have been
teaching my students that this is more convenient.

What are the reasons this is not recommended?

On 09/30/2014 05:18 PM, Robert Kern wrote:
[clip]

From robert.kern at gmail.com Tue Sep 30 11:31:12 2014
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 30 Sep 2014 16:31:12 +0100
Subject: [SciPy-User] Does SciPy already supersede NumPy?

On Tue, Sep 30, 2014 at 4:24 PM, Thøger Emil Rivera-Thorsen wrote:
> What are the reasons this is not recommended?

Mostly the confusion it creates, as demonstrated in this thread and the
Hamming window thread.

--
Robert Kern
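The convention the linked guidelines recommend, in short (a sketch; the
chosen subpackage and toy problem are just examples):

    import numpy as np          # array creation/manipulation comes from numpy
    from scipy import optimize  # bind only the scipy subpackages you use

    a = np.arange(15).reshape(3, 5)
    res = optimize.minimize(lambda x: ((x - 2.0)**2).sum(), x0=[0.0])
    print(a.shape, res.x)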
From sebix at sebix.at Tue Sep 30 11:41:43 2014
From: sebix at sebix.at (Sebastian Wagner)
Date: Tue, 30 Sep 2014 17:41:43 +0200
Subject: [SciPy-User] Does SciPy already supersede NumPy?

Also, if you hit problems with functions that actually come from NumPy,
those issues belong to NumPy, not SciPy. Code is more readable when the
two modules and their functions are kept separate, as the projects also
have different aims and target audiences. And a script written this way
cannot run without SciPy installed, even if it only depends on NumPy
functions.

On 2014-09-30 17:31, Robert Kern wrote:
[clip]
> Mostly the confusion it creates, as demonstrated in this thread and
> the Hamming window thread.

From sylvain.corlay at gmail.com Tue Sep 30 23:52:51 2014
From: sylvain.corlay at gmail.com (Sylvain Corlay)
Date: Tue, 30 Sep 2014 23:52:51 -0400
Subject: [SciPy-User] Minimizing Monte Carlo simulation function

@Jeremy

Besides genetic algorithms or MCMC methods like SA, there are other
stochastic optimization algorithms which are very well theoretically
founded, like stochastic gradient methods (Robbins-Monro).

Stochastic gradient methods actually don't require knowing the gradient
of the objective, but only an "integral representation" of it, that is,
assuming that the gradient has the form

    DF(x) = E[H(U, x)]

where U is a random variable and H is a known function. Hence, if you
are in the context of minimizing a quantity of the form E[G(U, x)] as a
function of x, where G is differentiable with respect to x, stochastic
approximation is very often a natural approach, and it does not require
the problem to be convex. Almost-sure convergence, a known rate of
decay of the variance of the error, etc.

It is actually rather simple to implement an ad hoc version of it for
your problem. For this kind of thing, I would avoid pre-canned packages.

Best,

Sylvain

On Fri, Sep 26, 2014 at 4:18 AM, Robert Kern wrote:
[clip]
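A toy Robbins-Monro recursion along the lines Sylvain describes,
minimizing F(x) = E[(x - U)^2]/2 for U ~ N(mu, 1), so that
H(u, x) = x - u and the minimizer is x = mu (all names and values here
are illustrative):

    import numpy as np

    np.random.seed(0)
    mu = 3.0            # the (here known, normally unknown) minimizer

    x = 0.0
    for n in range(1, 20001):
        u = mu + np.random.randn()  # draw U_n
        gamma = 1.0 / n             # steps: sum(gamma)=inf, sum(gamma**2)<inf
        x -= gamma * (x - u)        # x_{n+1} = x_n - gamma_n * H(U_n, x_n)

    print(x)   # converges almost surely to mu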