From contact at pythonxy.com Sat Nov 1 15:00:45 2008
From: contact at pythonxy.com (Pierre Raybaut)
Date: Sat, 01 Nov 2008 20:00:45 +0100
Subject: [SciPy-user] [ Python(x,y) ] New release : 2.1.4
Message-ID: <490CA75D.1020803@pythonxy.com>

Hi all,

Release 2.1.4 is now available on http://www.pythonxy.com.
(Full Edition, Basic Edition, Light Edition, Custom Edition and Update)

Changes history
Version 2.1.4 (11-01-2008)

* Added:
  o ReportLab 2.2, the PDF generation library
  o Windows explorer integration: added a "Run in interactive mode"
    (python -i) option to the contextual menu of Python files

* Updated:
  o The *-components listed below are not included in Python(x,y) Update
    2.1.4 because this update installer is already very close to the
    Google Code per-file size limit (100MB). Even though these updates
    are quite minor, note that you can download them individually here.
  o NumPy 1.2.1
  o SciPy 0.6.0.2 (minor update regarding deprecation warnings with
    NumPy 1.2.x)
  o matplotlib 0.98.3.3 (new 660-page PDF documentation)
  o Enthought Tool Suite 3.0.2.3
  o VTK 5.2.0.4
  o *ITK 3.8.0.2
  o *wxPython 2.8.9.1
  o MDP 2.4
  o PySQlite 2.5.0
  o *Eclipse 3.4.1
  o Pydev 1.3.23
  o CDT 5.0.1
  o xy 1.0.9
  o Console 2.0.140.6
  o Notepad++ 5.0.3.5

Regards,
Pierre Raybaut

From dfranci at seas.upenn.edu Sat Nov 1 19:43:10 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Sat, 1 Nov 2008 19:43:10 -0400
Subject: [SciPy-user] guassian_kde and kernel regression
Message-ID: <9fddf64a0811011643tcde69acm548a12bceeb001bb@mail.gmail.com>

This question is probably for Robert Kern, because I believe he wrote the
gaussian_kde class in scipy.stats.kde; however, I would very much
appreciate a response from anyone else who could help. My question is: Is
there currently any way to perform weighted kernel density estimation
using the gaussian_kde class? If not, what needs to be done, and how do I
get started?

Just for clarity's sake -- by weighted KDE I mean that I have more than
just the distribution of points for the density estimate. I also have an
associated probability with each point. In this case, I believe it becomes
a regression problem and I think is referred to as kernel regression. I
would very much like to use the class to perform both KDE and wKDE.

Thanks in advance,
Frank

From aarchiba at physics.mcgill.ca Sat Nov 1 19:51:08 2008
From: aarchiba at physics.mcgill.ca (Anne Archibald)
Date: Sat, 1 Nov 2008 19:51:08 -0400
Subject: [SciPy-user] guassian_kde and kernel regression
In-Reply-To: <9fddf64a0811011643tcde69acm548a12bceeb001bb@mail.gmail.com>
References: <9fddf64a0811011643tcde69acm548a12bceeb001bb@mail.gmail.com>
Message-ID:

2008/11/1 Frank Lagor :
> This question is probably for Robert Kern, because I believe he wrote
> the gaussian_kde class in scipy.stats.kde; however, I would very much
> appreciate a response from anyone else who could help. My question is: Is
> there currently any way to perform weighted kernel density estimation using
> the gaussian_kde class? If not, what needs to be done, and how do I get
> started?
>
> Just for clarity's sake -- by weighted KDE I mean that I have more than just
> the distribution of points for the density estimate. I also have an
> associated probability with each point. In this case, I believe it becomes
> a regression problem and I think is referred to as kernel regression. I
> would very much like to use the class to perform both KDE and wKDE.

The class does not support weights right now, but I don't think it
would be very difficult to add them to most parts of the code,
essentially just adding a "weights" optional argument. The automatic
covariance selection would need some rethinking; you'd need to hunt
down some research papers. (That method is only really appropriate for
unimodal distributions anyway.) But it does seem valuable.

Anne

From robert.kern at gmail.com Sun Nov 2 03:18:22 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 2 Nov 2008 03:18:22 -0500
Subject: [SciPy-user] guassian_kde and kernel regression
In-Reply-To:
References: <9fddf64a0811011643tcde69acm548a12bceeb001bb@mail.gmail.com>
Message-ID: <3d375d730811020118s430d2b9ic8adb09ada3a0ab4@mail.gmail.com>

On Sat, Nov 1, 2008 at 18:51, Anne Archibald wrote:
> 2008/11/1 Frank Lagor :
>> This question is probably for Robert Kern, because I believe he wrote
>> the gaussian_kde class in scipy.stats.kde; however, I would very much
>> appreciate a response from anyone else who could help. My question is: Is
>> there currently any way to perform weighted kernel density estimation using
>> the gaussian_kde class? If not, what needs to be done, and how do I get
>> started?
>>
>> Just for clarity's sake -- by weighted KDE I mean that I have more than just
>> the distribution of points for the density estimate. I also have an
>> associated probability with each point. In this case, I believe it becomes
>> a regression problem and I think is referred to as kernel regression. I
>> would very much like to use the class to perform both KDE and wKDE.
>
> The class does not support weights right now, but I don't think it
> would be very difficult to add them to most parts of the code,
> essentially just adding a "weights" optional argument. The automatic
> covariance selection would need some rethinking; you'd need to hunt
> down some research papers. (That method is only really appropriate for
> unimodal distributions anyway.) But it does seem valuable.

What Anne said.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From anand.prabhakar.patil at gmail.com Sun Nov 2 09:33:16 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Sun, 2 Nov 2008 14:33:16 +0000
Subject: [SciPy-user] Problem with mkl 10.0.2: undefined symbol: mkl_blas_xdgemm_1_thr_htn
Message-ID: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com>

Hi all,

I'm trying to build numpy from svn on Ubuntu with mkl 10.0.2.018, as it
looks like mkl 10.0.3 and above won't work with numpy (I got the i_free
error). I get the following problem, which I can't find on Google:

In [1]: from numpy import *
In [2]: A=asmatrix(eye(6000))
In [3]: A=A*A
/opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol: mkl_blas_xdgemm_1_thr_htn
/opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol: mkl_blas_xdgemm_1_thr_htn

More confusingly, I can't find where the symbol is defined:

$ nm /opt/intel/mkl/10.0.2.018/lib/em64t/*.so | grep mkl_blas_xdgemm_1_thr_htn
nm: /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl.so: File format not recognized
$

Anyone know how I can fix this?

Thanks,
Anand
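-------------- editorial note --------------
A pointer for readers hitting similar undefined-symbol errors: nm's "File
format not recognized" complaint suggests that libmkl.so in this MKL release
is a linker script (a plain-text file grouping the real libraries) rather
than an ELF object, so the wildcard scan above chokes on it. Scanning the
dynamic symbol tables of the individual shared objects, or asking the loader
to resolve everything, is usually more informative. A sketch using standard
binutils/glibc tools (paths illustrative):

$ for f in /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_*.so; do
>     echo "== $f"; nm -D "$f" | grep mkl_blas_xdgemm_1_thr_htn
> done

$ ldd -r /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so

(ldd -r attempts all relocations and reports any symbol that no dependency
defines.)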
From matthieu.brucher at gmail.com Sun Nov 2 09:35:59 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 2 Nov 2008 15:35:59 +0100
Subject: [SciPy-user] Problem with mkl 10.0.2: undefined symbol: mkl_blas_xdgemm_1_thr_htn
In-Reply-To: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com>
References: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com>
Message-ID:

Hi,

There is a problem with the MKL, but not with versions prior to 10.0.2.
Which libraries did you link against?

Matthieu

2008/11/2 Anand Patil :
> Hi all,
> I'm trying to build numpy from svn on Ubuntu with mkl 10.0.2.018, as it
> looks like mkl 10.0.3 and above won't work with numpy (I got the i_free
> error). I get the following problem, which I can't find on Google:
> In [1]: from numpy import *
> In [2]: A=asmatrix(eye(6000))
> In [3]: A=A*A
> /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol:
> mkl_blas_xdgemm_1_thr_htn
> /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol:
> mkl_blas_xdgemm_1_thr_htn
> More confusingly, I can't find where the symbol is defined:
> $ nm /opt/intel/mkl/10.0.2.018/lib/em64t/*.so | grep
> mkl_blas_xdgemm_1_thr_htn
> nm: /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl.so: File format not
> recognized
> $
> Anyone know how I can fix this?
> Thanks,
> Anand
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From dfranci at seas.upenn.edu Sun Nov 2 12:43:12 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Sun, 2 Nov 2008 12:43:12 -0500
Subject: [SciPy-user] guassian_kde and kernel regression
In-Reply-To: <3d375d730811020118s430d2b9ic8adb09ada3a0ab4@mail.gmail.com>
References: <9fddf64a0811011643tcde69acm548a12bceeb001bb@mail.gmail.com>
	<3d375d730811020118s430d2b9ic8adb09ada3a0ab4@mail.gmail.com>
Message-ID: <9fddf64a0811020943v3cfc5695p3353a7c80053050b@mail.gmail.com>

On Sun, Nov 2, 2008 at 3:18 AM, Robert Kern wrote:

> On Sat, Nov 1, 2008 at 18:51, Anne Archibald
> wrote:
> > 2008/11/1 Frank Lagor :
> >> This question is probably for Robert Kern, because I believe he wrote
> >> the gaussian_kde class in scipy.stats.kde; however, I would very much
> >> appreciate a response from anyone else who could help. My question is: Is
> >> there currently any way to perform weighted kernel density estimation using
> >> the gaussian_kde class? If not, what needs to be done, and how do I get
> >> started?
> >>
> >> Just for clarity's sake -- by weighted KDE I mean that I have more than just
> >> the distribution of points for the density estimate. I also have an
> >> associated probability with each point. In this case, I believe it becomes
> >> a regression problem and I think is referred to as kernel regression. I
> >> would very much like to use the class to perform both KDE and wKDE.
> >
> > The class does not support weights right now, but I don't think it
> > would be very difficult to add them to most parts of the code,
> > essentially just adding a "weights" optional argument. The automatic
> > covariance selection would need some rethinking; you'd need to hunt
> > down some research papers. (That method is only really appropriate for
> > unimodal distributions anyway.) But it does seem valuable.
> >
> > What Anne said.
> >
> > --
> > Robert Kern
> >
> > "I have come to believe that the whole world is an enigma, a harmless
> > enigma that is made terrible by our own mad attempt to interpret it as
> > though it had an underlying truth."
> >   -- Umberto Eco
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user

Wonderful. Thank you both very much for your responses. I will soon get
started working on it.

Take care,
Frank

From nwagner at iam.uni-stuttgart.de Mon Nov 3 05:40:46 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 03 Nov 2008 11:40:46 +0100
Subject: [SciPy-user] odeint parameter
Message-ID:

Hi all,

I have a question wrt.

odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0,
       ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0,
       hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12,
       mxords=5, printmessg=0)

h0 : float, (0: solver-determined)
    The step size to be attempted on the first step.
hmax : float, (0: solver-determined)
    The maximum absolute step size allowed.
hmin : float, (0: solver-determined)
    The minimum absolute step size allowed.

Is it really useful to start with the default values h0=0.0, hmax=0.0
and hmin=0.0 ?

Nils

From anand.prabhakar.patil at gmail.com Mon Nov 3 07:10:52 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Mon, 3 Nov 2008 12:10:52 +0000
Subject: [SciPy-user] Problem with mkl 10.0.2: undefined symbol: mkl_blas_xdgemm_1_thr_htn
In-Reply-To:
References: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com>
Message-ID: <2bc7a5a50811030410g7b2f57beo3d514344baf9d934@mail.gmail.com>

Just about every combination of mkl_core, mkl_def, mkl, guide and
mkl_intel_thread.

Thanks,
Anand

On Sun, Nov 2, 2008 at 2:35 PM, Matthieu Brucher wrote:
>>
>> Hi,
>>
>> There is a problem with the MKL, but not with versions prior to 10.0.2.
>> Which libraries did you link against?
>>
>> Matthieu
>>
>> 2008/11/2 Anand Patil :
>> > Hi all,
>> > I'm trying to build numpy from svn on Ubuntu with mkl 10.0.2.018, as it
>> > looks like mkl 10.0.3 and above won't work with numpy (I got the i_free
>> > error). I get the following problem, which I can't find on Google:
>> > In [1]: from numpy import *
>> > In [2]: A=asmatrix(eye(6000))
>> > In [3]: A=A*A
>> > /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol:
>> > mkl_blas_xdgemm_1_thr_htn
>> > /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol:
>> > mkl_blas_xdgemm_1_thr_htn
>> > More confusingly, I can't find where the symbol is defined:
>> > $ nm /opt/intel/mkl/10.0.2.018/lib/em64t/*.so | grep
>> > mkl_blas_xdgemm_1_thr_htn
>> > nm: /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl.so: File format not
>> > recognized
>> > $
>> > Anyone know how I can fix this?
>> > Thanks,
>> > Anand
>> > _______________________________________________
>> > SciPy-user mailing list
>> > SciPy-user at scipy.org
>> > http://projects.scipy.org/mailman/listinfo/scipy-user
>> >
>> >
>>
>> --
>> Information System Engineer, Ph.D.
>> Website: http://matthieu-brucher.developpez.com/
>> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
>> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
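-------------- editorial note --------------
Threads like this one make a quick numerical smoke test worth running right
after linking numpy against a vendor BLAS: a mislinked dgemm can show up as
silently wrong values rather than as an import error. A minimal sketch in
plain numpy (nothing MKL-specific is assumed):

import numpy as np

a = np.eye(500)           # identity, so the product is known exactly
b = np.dot(a, a)          # float64 matrix product goes through the linked BLAS
if not np.allclose(np.diag(b), 1.0):
    raise RuntimeError("linked BLAS returned a wrong matrix product")
np.show_config()          # prints which blas/lapack numpy was built against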
From matthieu.brucher at gmail.com Mon Nov 3 07:34:41 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 3 Nov 2008 13:34:41 +0100
Subject: [SciPy-user] Problem with mkl 10.0.2: undefined symbol: mkl_blas_xdgemm_1_thr_htn
In-Reply-To: <2bc7a5a50811030410g7b2f57beo3d514344baf9d934@mail.gmail.com>
References: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com>
	<2bc7a5a50811030410g7b2f57beo3d514344baf9d934@mail.gmail.com>
Message-ID:

Hi,

It seems you may have to link against libmkl_sequential.so. I don't know
why, because it depends on the parameters you used, but it may do the
trick.

Matthieu

2008/11/3 Anand Patil :
> Just about every combination of mkl_core, mkl_def, mkl, guide and
> mkl_intel_thread.
> Thanks,
> Anand
>
> On Sun, Nov 2, 2008 at 2:35 PM, Matthieu Brucher
> wrote:
>>
>> Hi,
>>
>> There is a problem with the MKL, but not with versions prior to 10.0.2.
>> Which libraries did you link against?
>>
>> Matthieu
>>
>> 2008/11/2 Anand Patil :
>> > Hi all,
>> > I'm trying to build numpy from svn on Ubuntu with mkl 10.0.2.018, as it
>> > looks like mkl 10.0.3 and above won't work with numpy (I got the i_free
>> > error). I get the following problem, which I can't find on Google:
>> > In [1]: from numpy import *
>> > In [2]: A=asmatrix(eye(6000))
>> > In [3]: A=A*A
>> > /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol:
>> > mkl_blas_xdgemm_1_thr_htn
>> > /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol:
>> > mkl_blas_xdgemm_1_thr_htn
>> > More confusingly, I can't find where the symbol is defined:
>> > $ nm /opt/intel/mkl/10.0.2.018/lib/em64t/*.so | grep
>> > mkl_blas_xdgemm_1_thr_htn
>> > nm: /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl.so: File format not
>> > recognized
>> > $
>> > Anyone know how I can fix this?
>> > Thanks,
>> > Anand
>> > _______________________________________________
>> > SciPy-user mailing list
>> > SciPy-user at scipy.org
>> > http://projects.scipy.org/mailman/listinfo/scipy-user
>> >
>> >
>>
>> --
>> Information System Engineer, Ph.D.
>> Website: http://matthieu-brucher.developpez.com/
>> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
>> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From anand.prabhakar.patil at gmail.com Mon Nov 3 07:49:56 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Mon, 3 Nov 2008 12:49:56 +0000 Subject: [SciPy-user] Problem with mkl 10.0.2: undefined symbol: mkl_blas_xdgemm_1_thr_htn In-Reply-To: References: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com> <2bc7a5a50811030410g7b2f57beo3d514344baf9d934@mail.gmail.com> Message-ID: <2bc7a5a50811030449u7f29e583i2e504b1e2264ccfa@mail.gmail.com> Hi Matthieu, It works with: [mkl] library_dirs = /opt/intel/mkl/10.0.5.025/lib/em64t lapack_libs = mkl, mkl_lapack mkl_libs = mkl_core, mkl_def, mkl_vml_def, mkl_intel_thread, mkl_sequential, guide, mkl_em64t, mkl _but_ I actually get the wrong answer now: In [24]: A=asmatrix(eye(6000)) In [25]: B=A*A In [26]: diag(B) Out[26]: array([ 0., 0., 0., ..., 0., 0., 0.]) In [27]: diag(A) Out[27]: array([ 1., 1., 1., ..., 1., 1., 1.]) In [28]: B=dot(A,A) In [29]: diag(B) Out[29]: array([ 0., 0., 0., ..., 0., 0., 0.]) Any advice? Thanks, Anand On Mon, Nov 3, 2008 at 12:34 PM, Matthieu Brucher < matthieu.brucher at gmail.com> wrote: > Hi, > > It seems you may have to link against libmkl_sequential.so. I don't > now why, because it depends on the parameters you used, but it may do > the trick. > > Matthieu > > 2008/11/3 Anand Patil : > > Just about every combination of mkl_core, mkl_def, mkl, guide and > > mkl_intel_thread . > > Thanks, > > Anand > > > > On Sun, Nov 2, 2008 at 2:35 PM, Matthieu Brucher > > wrote: > >> > >> Hi, > >> > >> There is a problem with the MKL, but not with version prior to 10.0.2. > >> What libraries did you link on ? > >> > >> Matthieu > >> > >> 2008/11/2 Anand Patil : > >> > Hi all, > >> > I'm trying to build numpy from svn on Ubuntu with mkl 10.0.2.018, as > it > >> > looks like mkl 10.0.3 and above won't work with numpy (I got the > i_free > >> > error). I get the following problem, which I can't find on Google: > >> > In [1]: from numpy import * > >> > In [2]: A=asmatrix(eye(6000)) > >> > In [3]: A=A*A > >> > /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol: > >> > mkl_blas_xdgemm_1_thr_htn > >> > /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl_mc.so: undefined symbol: > >> > mkl_blas_xdgemm_1_thr_htn > >> > More confusingly, I can't find where the symbol is defined: > >> > $ nm /opt/intel/mkl/10.0.2.018/lib/em64t/*.so | grep > >> > mkl_blas_xdgemm_1_thr_htn > >> > nm: /opt/intel/mkl/10.0.2.018/lib/em64t/libmkl.so: File format not > >> > recognized > >> > $ > >> > Anyone know how I can fix this? > >> > Thanks, > >> > Anand > >> > _______________________________________________ > >> > SciPy-user mailing list > >> > SciPy-user at scipy.org > >> > http://projects.scipy.org/mailman/listinfo/scipy-user > >> > > >> > > >> > >> > >> > >> -- > >> Information System Engineer, Ph.D. 
> >> Website: http://matthieu-brucher.developpez.com/ > >> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > >> LinkedIn: http://www.linkedin.com/in/matthieubrucher > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > -- > Information System Engineer, Ph.D. > Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Mon Nov 3 08:03:09 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 3 Nov 2008 14:03:09 +0100 Subject: [SciPy-user] Problem with mkl 10.0.2: undefined symbol: mkl_blas_xdgemm_1_thr_htn In-Reply-To: <2bc7a5a50811030449u7f29e583i2e504b1e2264ccfa@mail.gmail.com> References: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com> <2bc7a5a50811030410g7b2f57beo3d514344baf9d934@mail.gmail.com> <2bc7a5a50811030449u7f29e583i2e504b1e2264ccfa@mail.gmail.com> Message-ID: 2008/11/3 Anand Patil : > Hi Matthieu, > > It works with: > > [mkl] > library_dirs = /opt/intel/mkl/10.0.5.025/lib/em64t > lapack_libs = mkl, mkl_lapack > mkl_libs = mkl_core, mkl_def, mkl_vml_def, mkl_intel_thread, mkl_sequential, > guide, mkl_em64t, mkl I think you try to link with too many libraries. If you can, use only mkl, guide, iomp5, pthread. If the issue (the missing symbol) arises again, use libmkl_intel_lp64, libmkl_sequential, libmkl_core only (see the MKL user guide a well for more information about the different threading models). Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From anand.prabhakar.patil at gmail.com Mon Nov 3 08:13:05 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Mon, 3 Nov 2008 13:13:05 +0000 Subject: [SciPy-user] Problem with mkl 10.0.2: undefined symbol: mkl_blas_xdgemm_1_thr_htn In-Reply-To: References: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com> <2bc7a5a50811030410g7b2f57beo3d514344baf9d934@mail.gmail.com> <2bc7a5a50811030449u7f29e583i2e504b1e2264ccfa@mail.gmail.com> Message-ID: <2bc7a5a50811030513l31dcfb90p3a3f56c38422aba3@mail.gmail.com> On Mon, Nov 3, 2008 at 1:03 PM, Matthieu Brucher wrote: > 2008/11/3 Anand Patil : > > Hi Matthieu, > > > > It works with: > > > > [mkl] > > library_dirs = /opt/intel/mkl/10.0.5.025/lib/em64t > > lapack_libs = mkl, mkl_lapack > > mkl_libs = mkl_core, mkl_def, mkl_vml_def, mkl_intel_thread, > mkl_sequential, > > guide, mkl_em64t, mkl > > I think you try to link with too many libraries. If you can, use only > mkl, guide, iomp5, pthread. If the issue (the missing symbol) arises > again, use libmkl_intel_lp64, libmkl_sequential, libmkl_core only (see > the MKL user guide a well for more information about the different > threading models). > > Matthieu > -- > Information System Engineer, Ph.D. 
> Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Thanks Matthieu, The diagonal of B is still zero with both collections of libraries. I'll have a look through the MKL user guide and see if I can't resolve this, please let me know if you think of anything also. Anand -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Mon Nov 3 08:35:22 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 3 Nov 2008 08:35:22 -0500 Subject: [SciPy-user] odeint parameter In-Reply-To: References: Message-ID: > > Is it really useful to start with the default values > h0=0.0, hmax= 0.0 and hmin=0.0 ? AFAIK these are just special values used to tell the integrator to work out the appropriate initial stepsize and limits from its own calculations. So there might be a tiny bit more lead time for an integration with these settings. I guess it's meant to be more of a "general purpose" setup so the user doesn't have to work on problem-specific settings. -Rob From gnurser at googlemail.com Mon Nov 3 08:49:24 2008 From: gnurser at googlemail.com (George Nurser) Date: Mon, 3 Nov 2008 13:49:24 +0000 Subject: [SciPy-user] f2py "Segmentation fault"-revisited, please help In-Reply-To: References: Message-ID: <1d1e6ea70811030549n2e531492x3e1b59dc6a59abc5@mail.gmail.com> 2008/10/31 Kimberly Artita : > Tried it on a different machine (numpy-1.2.0 and gcc-4.3.2 on linux) > > Either way (--fcompiler=gnu95 or gfortran) gives a segfault > The output says "General", then segfaults. > It is reading the space as a delimiter, even though I specify delim='none' > > My laptop and the desktop used above run gentoo. A third desktop using > ubuntu with gcc-4.3.2 and numpy-1.2.0 works fine. What gives? > I've idea why it should work on one machine but not the other. --George. > > On Fri, Oct 31, 2008 at 5:20 AM, George Nurser > wrote: >> >> Hi, >> >> 2008/10/31 Kimberly Artita : >> > Hi, >> > >> > Can someone please tell me why I keep getting a segmentation fault? >> [cut] >> >> You need to compile with fcompiler=gnu95 >> >> > >> > I type: f2py --fcompiler=gfortran -c -m gfortran_test gfortran_test.f90 >> >> Do >> f2py --fcompiler=gnu95 -c -m gfortran_test gfortran_test.f90 >> >> It worked fine for me (gfortran 4.3.2, Numpy 1.3.0.dev5867, Mac OS X) >> >> HTH, George Nurser. >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > > > On Thu, Oct 30, 2008 at 11:52 PM, Kimberly Artita wrote: >> >> Hi, >> >> Can someone please tell me why I keep getting a segmentation fault? 
>> >> my fortran script (gfortran_test.f90): >> subroutine readin_test >> >> implicit none >> >> character(len=4) :: title (60) >> character (len=13) :: bigsub, sbsout, rchout, rsvout, lwqout, wtrout >> open (2,file="gfortran.txt", delim='none') >> print *, "title" >> read (2,5100) title >> print *, title >> read (2,5000) bigsub, sbsout, rchout, rsvout, lwqout, wtrout >> >> print *, "bigsub, sbsout, rchout, rsvout, lwqout, wtrout" >> print *, bigsub, sbsout, rchout, rsvout, lwqout, wtrout >> close(2) >> >> 5100 format (20a4) >> 5000 format (6a) >> >> end subroutine readin_test >> >> my python script (gfortran_test.py): >> import gfortran_test >> >> gfortran_test.readin_test() >> >> my text file (gfortran.txt): >> General Input/Output section (file.cio): Thu Mar 13 17:32:19 >> 2008 AVSWAT2000 - SWAT interface MDL >> >> >> basins.bsb basins.sbs basins.rch basins.rsv basins.lqo >> basins.wtr >> >> >> using this version of gfortran: i686-pc-linux-gnu-4.1.2 with either >> numpy-1.0.4-r2 or numpy-1.2.0 >> >> I can compile gfortran_test.f90 as a standalone program and it works! >> >> BUT, when I call it as a subroutine from python using f2py, it fails! >> I type: f2py --fcompiler=gfortran -c -m gfortran_test gfortran_test.f90 >> >> >> Why?????? >> >> -- >> Kimberly S. Artita >> PhD Intern, CDM >> Graduate Student, Engineering Science >> Southern Illinois University Carbondale >> Carbondale, Illinois 62901-6603 >> (618)-528-0349 >> e-mail: kartita at gmail.com, kartita at siu.edu >> web: http://civil.engr.siu.edu/GraduateStudents/artita/index.html > > > > -- > Kimberly S. Artita > PhD Intern, CDM > Graduate Student, Engineering Science > Southern Illinois University Carbondale > Carbondale, Illinois 62901-6603 > (618)-528-0349 > e-mail: kartita at gmail.com, kartita at siu.edu > web: http://civil.engr.siu.edu/GraduateStudents/artita/index.html > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From b.webber at uea.ac.uk Mon Nov 3 10:00:05 2008 From: b.webber at uea.ac.uk (Ben Webber) Date: Mon, 3 Nov 2008 15:00:05 +0000 Subject: [SciPy-user] interp1d problem Message-ID: <67ea8dba0811030700w3431402bn8013b4b9a97ca855@mail.gmail.com> Hi, In CDAT I have been trying to use the interp1d class from the scipy.interpolate package to interpolate 3-dimensional oceanographic data along the time axis using cubic splines. This works fine for 1 or 2 dimensional data but fails for 3 dimensional data. However, using linear interpolation works no matter what the dimensions. I have tried to simplify the problem as much as possible and have created the following simplified script: import scipy from scipy.interpolate import interp1d test_array = [[[2,6],[10,7]],[[4,8],[12,9]],[[2,6],[10,7]],[[4,8],[12,9]]] test_array = scipy.array(test_array) test_axis = scipy.array(range(0,31,10)) myInterp = interp1d(test_axis,test_array,kind='cubic',axis = 0) #-----------------fails here---------------------- new_axis = scipy.array(range(0,31)) new_data = myInterp(new_axis) If kind is specified as 'linear' this script works. 
However, with kind as 'cubic', I get the following error:

Traceback (most recent call last):
  File "cubic_interp.py", line 11, in <module>
    myInterp = interp1d(timeax_array,theta_array,kind = 'cubic',axis = 0)
  File "/cvos/apps/CDAT-5.0.b1/lib/python2.5/site-packages/scipy/interpolate/interpolate.py", line 235, in __init__
    self._spline = splmake(x,oriented_y,order=order)
  File "/cvos/apps/CDAT-5.0.b1/lib/python2.5/site-packages/scipy/interpolate/interpolate.py", line 697, in splmake
    coefs = func(xk, yk, order, conds, B)
  File "/cvos/apps/CDAT-5.0.b1/lib/python2.5/site-packages/scipy/interpolate/interpolate.py", line 431, in _find_smoothest
    return dot(tmp, yk)
ValueError: objects are not aligned

The shape of the array is (4,2,2). The time axis is length 4, so it should
work. Can anybody explain this error?

Cheers,
Ben Webber

From timmichelsen at gmx-topmail.de Mon Nov 3 10:16:39 2008
From: timmichelsen at gmx-topmail.de (Timmie)
Date: Mon, 3 Nov 2008 15:16:39 +0000 (UTC)
Subject: [SciPy-user] documentation on scipy.interpolate
Message-ID:

Hello,

is there documentation on the interpolation functions included in scipy?

I am particularly interested in the kinds of interpolation available in
http://www.scipy.org/doc/api_docs/SciPy.interpolate.interpolate.interp1d.html

The same functionality is used by the timeseries scikit:
http://pytseries.sourceforge.net/lib/interpolation.html#scikits.timeseries.lib.interpolate.interp_masked1d

What do 'cubic' and 'quintic' mean?

Kind regards,

From pav at iki.fi Mon Nov 3 10:30:44 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 3 Nov 2008 15:30:44 +0000 (UTC)
Subject: [SciPy-user] interp1d problem
References: <67ea8dba0811030700w3431402bn8013b4b9a97ca855@mail.gmail.com>
Message-ID:

Hi,

Mon, 03 Nov 2008 15:00:05 +0000, Ben Webber wrote:
> In CDAT I have been trying to use the interp1d class from the
> scipy.interpolate package to interpolate 3-dimensional oceanographic
> data along the time axis using cubic splines. This works fine for 1 or 2
> dimensional data but fails for 3 dimensional data. However, using linear
> interpolation works no matter what the dimensions.
[clip]
> "/cvos/apps/CDAT-5.0.b1/lib/python2.5/site-packages/scipy/interpolate/interpolate.py",
> line 431, in _find_smoothest
>     return dot(tmp, yk)
> ValueError: objects are not aligned

I believe this bug was fixed in r4489 [1] and spline interpolation should
work in the upcoming Scipy 0.7.0. (In the meantime, you can try to use a
development version of Scipy, or apply the patch linked.)

.. [1] http://scipy.org/scipy/scipy/changeset?format=diff&new=4489&old=4175&new_path=trunk%2Fscipy%2Finterpolate%2Finterpolate.py&old_path=trunk%2Fscipy%2Finterpolate%2Finterpolate.py

--
Pauli Virtanen

From timmichelsen at gmx-topmail.de Mon Nov 3 10:45:59 2008
From: timmichelsen at gmx-topmail.de (Timmie)
Date: Mon, 3 Nov 2008 15:45:59 +0000 (UTC)
Subject: [SciPy-user] spreadsheet data visualisation app
Message-ID:

Hello,

is there any application that I can use to view numpy arrays in a
tabular / spreadsheet-like manner?

Although I know that there may be large arrays which make it difficult
for such an application to work properly, this can sometimes be desirable
for validating calculation results. Especially when working interactively
(i.e. using IPython).
I imagine something like:

arr = np.arange(0, 10)
array.sheet(arr)

=> similar to pylab.show(), an application like the spreadsheet on
http://zetcode.com/wxpython/skeletons/ could then pop up visualising my
array.

The only app I can currently imagine to do such tasks would be Resolver One:
http://www.resolversystems.com/products/resolver-one/
but that application is based on IronPython.

Kind regards,
Timmie

From nwagner at iam.uni-stuttgart.de Mon Nov 3 11:38:20 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 03 Nov 2008 17:38:20 +0100
Subject: [SciPy-user] Event handling in odeint
Message-ID:

Hi all,

Is it possible to handle events in odeint, e.g.
http://books.google.de/books?id=UX61pYtpI40C&pg=PA254&lpg=PA254&dq=st%C3%BCckweise+linear+Federkennlinie&source=web&ots=uUtyb7iGiV&sig=uOk54Yk-K3myoMMzJVK9wtP6G8g&hl=de&sa=X&oi=book_result&resnum=3&ct=result
page 256

Nils

From gael.varoquaux at normalesup.org Mon Nov 3 11:56:55 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 3 Nov 2008 17:56:55 +0100
Subject: [SciPy-user] documentation on scipy.interpolate
In-Reply-To:
References:
Message-ID: <20081103165655.GA20201@phare.normalesup.org>

On Mon, Nov 03, 2008 at 03:16:39PM +0000, Timmie wrote:
> I am particularly interested in the kinds of interpolation available in
> http://www.scipy.org/doc/api_docs/SciPy.interpolate.interpolate.interp1d.html

All I know of is:
http://docs.scipy.org/doc/scipy/reference/interpolate.html

That's sparse, very sparse, you'll have to interpolate it :).

> The same functionality is used by the timeseries scikit:
> http://pytseries.sourceforge.net/lib/interpolation.html#scikits.timeseries.lib.interpolate.interp_masked1d

That's outside of my knowledge.

Gaël

From rob.clewley at gmail.com Mon Nov 3 12:05:22 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Mon, 3 Nov 2008 13:05:22 -0400
Subject: [SciPy-user] Event handling in odeint
In-Reply-To:
References:
Message-ID:

> Is it possible to handle events in odeint, e.g.

No, I believe not. But my VODE wrapper has good event detection,
although then you have to specify your problem with PyDSTool.
Actually, I've made a few improvements and fixes in the current
branched SVN version of PyDSTool, available at
http://www.cam.cornell.edu/svn/PyDSTool/branches/robmods/

It will be released in the next week or so on Sourceforge.

-Rob

From anand.prabhakar.patil at gmail.com Mon Nov 3 12:32:09 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Mon, 3 Nov 2008 17:32:09 +0000
Subject: [SciPy-user] Problem with mkl 10.0.2: undefined symbol: mkl_blas_xdgemm_1_thr_htn
In-Reply-To: <2bc7a5a50811030513l31dcfb90p3a3f56c38422aba3@mail.gmail.com>
References: <2bc7a5a50811020633v7c15b581v535f2cfafba5febb@mail.gmail.com>
	<2bc7a5a50811030410g7b2f57beo3d514344baf9d934@mail.gmail.com>
	<2bc7a5a50811030449u7f29e583i2e504b1e2264ccfa@mail.gmail.com>
	<2bc7a5a50811030513l31dcfb90p3a3f56c38422aba3@mail.gmail.com>
Message-ID: <2bc7a5a50811030932u44c19732pfa8bdea7481da096@mail.gmail.com>

Hi Matthieu,

For posterity, here's what worked. I am sure there's a better way to do
this but right now I don't want to know!
;-)

[DEFAULT]
library_dirs = /usr/lib
include_dirs = /usr/include
libraries = pthread, m

[mkl]
library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t
lapack_libs = mkl, mkl_lapack
mkl_libs = mkl, guide

Thanks for your help,
Anand

On Mon, Nov 3, 2008 at 1:13 PM, Anand Patil wrote:
> On Mon, Nov 3, 2008 at 1:03 PM, Matthieu Brucher <
> matthieu.brucher at gmail.com> wrote:
>
>> 2008/11/3 Anand Patil :
>> > Hi Matthieu,
>> >
>> > It works with:
>> >
>> > [mkl]
>> > library_dirs = /opt/intel/mkl/10.0.5.025/lib/em64t
>> > lapack_libs = mkl, mkl_lapack
>> > mkl_libs = mkl_core, mkl_def, mkl_vml_def, mkl_intel_thread, mkl_sequential,
>> > guide, mkl_em64t, mkl
>>
>> I think you try to link with too many libraries. If you can, use only
>> mkl, guide, iomp5, pthread. If the issue (the missing symbol) arises
>> again, use libmkl_intel_lp64, libmkl_sequential, libmkl_core only (see
>> the MKL user guide a well for more information about the different
>> threading models).
>>
>> Matthieu
>> --
>> Information System Engineer, Ph.D.
>> Website: http://matthieu-brucher.developpez.com/
>> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
>> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> Thanks Matthieu,
>
> The diagonal of B is still zero with both collections of libraries. I'll
> have a look through the MKL user guide and see if I can't resolve this,
> please let me know if you think of anything also.
>
> Anand

From nwagner at iam.uni-stuttgart.de Tue Nov 4 02:23:09 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 04 Nov 2008 08:23:09 +0100
Subject: [SciPy-user] Event handling in odeint
In-Reply-To:
References:
Message-ID:

On Mon, 3 Nov 2008 13:05:22 -0400 "Rob Clewley" wrote:
>> Is it possible to handle events in odeint, e. g.
>
> No, I believe not. But my VODE wrapper has good event detection,
> although then you have to specify your problem with PyDSTool.
> Actually, I've made a few improvements and fixes in the current
> branched SVN version of PyDSTool, available at
> http://www.cam.cornell.edu/svn/PyDSTool/branches/robmods/
>
> It will be released in the next week or so on Sourceforge.
>
> -Rob
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

Thank you for your prompt response.

Is there an example illustrating the handling of events?

Nils

From stefan at sun.ac.za Tue Nov 4 04:30:09 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 4 Nov 2008 11:30:09 +0200
Subject: [SciPy-user] Explanation of different edge modes in scipy.ndimage
In-Reply-To:
References:
Message-ID: <9457e7c80811040130s113e70f4if45245d3fa4ea043@mail.gmail.com>

Hi Kurt

2008/10/31 Kurt Smith :
> I'm doing some gaussian filtering of periodic 2D arrays using
> scipy.ndimage.gaussian_filter. There is a 'mode' argument that is set to
> 'reflect' by default. In _ni_support.py:34 there is a conversion function,
> '_extend_mode_to_code' that gives the different modes available. For
> periodic data I believe I should use 'wrap', but I'm interested to know what
> the other modes mean, esp the difference between 'reflect' and 'mirror'.
> For the record, the modes defined are 'nearest', 'wrap', 'reflect',
> 'mirror', and 'constant'. For future reference, is there a place where
> these arguments are documented?

Sorry for the long overdue reply.

Reflect means:

1 | 2 | 3 | 2 | 1

While mirror means:

1 | 2 | 3 | 3 | 2 | 1

(or the other way around, can't remember).

The problem with the last approach is the interpolation between 3 and
3, which is currently broken, so I'd advise against using it.

Thanks for your interest,
Regards
Stéfan

From bgoli at sun.ac.za Tue Nov 4 05:37:46 2008
From: bgoli at sun.ac.za (Brett Olivier)
Date: Tue, 4 Nov 2008 12:37:46 +0200
Subject: [SciPy-user] Event handling in odeint
In-Reply-To:
References:
Message-ID: <200811041237.46334.bgoli@sun.ac.za>

On Monday 03 November 2008 19:05:22 Rob Clewley wrote:
> > Is it possible to handle events in odeint, e. g.
>
> No, I believe not. But my VODE wrapper has good event detection,
> although then you have to specify your problem with PyDSTool.
> Actually, I've made a few improvements and fixes in the current
> branched SVN version of PyDSTool, available at
> http://www.cam.cornell.edu/svn/PyDSTool/branches/robmods/

Another option is to use CVODE via PySundials
(http://pysundials.sourceforge.net/).

Brett

From anand.prabhakar.patil at gmail.com Tue Nov 4 07:06:01 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Tue, 4 Nov 2008 12:06:01 +0000
Subject: [SciPy-user] Installing on Ubuntu with mkl 10.0.2: libimf.so not found
Message-ID: <2bc7a5a50811040406w614bf67lf1fb8869a03f0182@mail.gmail.com>

Hi all,

Sorry to post again for help installing. I recently got numpy installed
with mkl 10.0.2. I pared the site.cfg file down to just

[mkl]
library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t
lapack_libs = mkl, mkl_lapack
mkl_libs = mkl, guide

and everything seems to work fine. Now I'm trying to install scipy with
the Intel compilers. The build went fine, but when I go to do python
setup.py install I get:

/working_copies/scipy$ sudo python setup.py install
[sudo] password for anand:
Traceback (most recent call last):
  File "setup.py", line 92, in <module>
    setup_package()
  File "setup.py", line 63, in setup_package
    from numpy.distutils.core import setup
  File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 130, in <module>
    import add_newdocs
  File "/usr/lib/python2.5/site-packages/numpy/add_newdocs.py", line 9, in <module>
    from lib import add_newdoc
  File "/usr/lib/python2.5/site-packages/numpy/lib/__init__.py", line 152, in <module>
    from type_check import *
  File "/usr/lib/python2.5/site-packages/numpy/lib/type_check.py", line 8, in <module>
    import numpy.core.numeric as _nx
  File "/usr/lib/python2.5/site-packages/numpy/core/__init__.py", line 5, in <module>
    import multiarray
ImportError: libimf.so: cannot open shared object file: No such file or directory

I'm having a hard time debugging this because first, I can import setup
from numpy.distutils.core directly in Python:

/working_copies/scipy$ python
Python 2.5.2 (r252:60911, Jul 31 2008, 17:31:22)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy.distutils.core import setup
>>>

Second, libimf.so is on my LD_LIBRARY_PATH:

/working_copies/scipy$ echo $LD_LIBRARY_PATH
:/usr/lib64:/lib64:/usr/local/lib64:/usr/lib:/usr/local/lib:/lib:/opt/intel/fce/10.1.018/lib:/opt/intel/ipp/5.3.4.080/em64t/sharedlib:/opt/intel/cce/10.1.018/lib:/opt/intel/mkl/10.0.2.018/lib/em64t
/working_copies/scipy$ ls /opt/intel/ipp/5.3.4.080/em64t/sharedlib/libimf.so
/opt/intel/ipp/5.3.4.080/em64t/sharedlib/libimf.so

Has anyone seen anything like this before?

Thanks,
Anand

From cournape at gmail.com Tue Nov 4 07:20:19 2008
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 4 Nov 2008 21:20:19 +0900
Subject: [SciPy-user] Installing on Ubuntu with mkl 10.0.2: libimf.so not found
In-Reply-To: <2bc7a5a50811040406w614bf67lf1fb8869a03f0182@mail.gmail.com>
References: <2bc7a5a50811040406w614bf67lf1fb8869a03f0182@mail.gmail.com>
Message-ID: <5b8d13220811040420o66285e75od208ff9eef6a7b46@mail.gmail.com>

On Tue, Nov 4, 2008 at 9:06 PM, Anand Patil wrote:
> Hi all,
> Sorry to post again for help installing. I recently got numpy installed with
> mkl 10.0.2. I pared the site.cfg file down to just

I think that's a bug in the MKL. AFAIK, nobody has been able to track
it down (you're not the first one to report this problem with this
version of the MKL).

cheers,

David

From nwagner at iam.uni-stuttgart.de Tue Nov 4 07:42:41 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 04 Nov 2008 13:42:41 +0100
Subject: [SciPy-user] Event handling in odeint
In-Reply-To: <200811041237.46334.bgoli@sun.ac.za>
References: <200811041237.46334.bgoli@sun.ac.za>
Message-ID:

On Tue, 4 Nov 2008 12:37:46 +0200 Brett Olivier wrote:
> On Monday 03 November 2008 19:05:22 Rob Clewley wrote:
>> > Is it possible to handle events in odeint, e. g.
>>
>> No, I believe not. But my VODE wrapper has good event detection,
>> although then you have to specify your problem with PyDSTool.
>> Actually, I've made a few improvements and fixes in the current
>> branched SVN version of PyDSTool, available at
>> http://www.cam.cornell.edu/svn/PyDSTool/branches/robmods/
>
> Another option is to use CVODE via PySundials
> (http://pysundials.sourceforge.net/).
>
> Brett

Hi Brett,

Thank you for your reply. I have installed pysundials. Unfortunately, I
am not familiar with CVODE. Therefore, a small example of how to deal
with events in CVODE would be appreciated.

Thanks in advance

Nils

From anand.prabhakar.patil at gmail.com Tue Nov 4 08:48:56 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Tue, 4 Nov 2008 13:48:56 +0000
Subject: [SciPy-user] Installing on Ubuntu with mkl 10.0.2: libimf.so not found
In-Reply-To: <5b8d13220811040420o66285e75od208ff9eef6a7b46@mail.gmail.com>
References: <2bc7a5a50811040406w614bf67lf1fb8869a03f0182@mail.gmail.com>
	<5b8d13220811040420o66285e75od208ff9eef6a7b46@mail.gmail.com>
Message-ID: <2bc7a5a50811040548m4599cb90le355ad408b89826a@mail.gmail.com>

On Tue, Nov 4, 2008 at 12:20 PM, David Cournapeau wrote:
> On Tue, Nov 4, 2008 at 9:06 PM, Anand Patil
> wrote:
> > Hi all,
> > Sorry to post again for help installing. I recently got numpy installed
> > with mkl 10.0.2. I pared the site.cfg file down to just
>
> I think that's a bug in the MKL. AFAIK, nobody has been able to track
> it down (you're not the first one to report this problem with this
> version of the MKL).

Thanks David,

I couldn't find the ticket in the scipy bug tracker.
Should I open a new one? Also, what's the most recent version of MKL
that's known to work?

Anand

From matthieu.brucher at gmail.com Tue Nov 4 09:10:43 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 4 Nov 2008 15:10:43 +0100
Subject: [SciPy-user] Installing on Ubuntu with mkl 10.0.2: libimf.so not found
In-Reply-To: <2bc7a5a50811040548m4599cb90le355ad408b89826a@mail.gmail.com>
References: <2bc7a5a50811040406w614bf67lf1fb8869a03f0182@mail.gmail.com>
	<5b8d13220811040420o66285e75od208ff9eef6a7b46@mail.gmail.com>
	<2bc7a5a50811040548m4599cb90le355ad408b89826a@mail.gmail.com>
Message-ID:

Before doing that, try without the sudo. The fact that imf is not
found is not MKL related, but system dependent IMHO.

For instance try (after checking the availability of libimf.so in the
library path):

python setup.py install --prefix=/somewhere/where/I/put/garbage

Matthieu

2008/11/4 Anand Patil :
> On Tue, Nov 4, 2008 at 12:20 PM, David Cournapeau
> wrote:
>>
>> On Tue, Nov 4, 2008 at 9:06 PM, Anand Patil
>> wrote:
>> > Hi all,
>> > Sorry to post again for help installing. I recently got numpy installed
>> > with mkl 10.0.2. I pared the site.cfg file down to just
>>
>> I think that's a bug in the MKL. AFAIK, nobody has been able to track
>> it down (you're not the first one to report this problem with this
>> version of the MKL).
>
> Thanks David,
> I couldn't find the ticket in the scipy bug tracker. Should I open a new
> one? Also, what's the most recent version of MKL that's known to work?
> Anand
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From anand.prabhakar.patil at gmail.com Tue Nov 4 09:40:24 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Tue, 4 Nov 2008 14:40:24 +0000
Subject: [SciPy-user] Installing on Ubuntu with mkl 10.0.2: libimf.so not found
In-Reply-To:
References: <2bc7a5a50811040406w614bf67lf1fb8869a03f0182@mail.gmail.com>
	<5b8d13220811040420o66285e75od208ff9eef6a7b46@mail.gmail.com>
	<2bc7a5a50811040548m4599cb90le355ad408b89826a@mail.gmail.com>
Message-ID: <2bc7a5a50811040640l5e247d40le53dd29c0a21552c@mail.gmail.com>

On Tue, Nov 4, 2008 at 2:10 PM, Matthieu Brucher wrote:
> Before doing that, try without the sudo. The fact that imf is not
> found is not MKL related, but system dependent IMHO.
> For instance try (after checking the availability of libimf.so in the
> library path):
> python setup.py install --prefix=/somewhere/where/I/put/garbage
>
> Matthieu

That worked! So I can install scipy, but for packages that have binaries
like PyTables I still need root permissions. How can I make the library
available when I use sudo?

Thanks,
Anand
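-------------- editorial note --------------
A likely explanation for the sudo behaviour above: sudo on Ubuntu resets the
caller's environment by default (env_reset in sudoers), and LD_LIBRARY_PATH
in particular is stripped, so the Intel runtime libraries stop being findable
under sudo even though plain python works. Re-exporting the variable across
the sudo boundary usually suffices; a sketch (exact behaviour depends on the
local sudoers policy):

$ sudo env LD_LIBRARY_PATH="$LD_LIBRARY_PATH" python setup.py install

A more permanent alternative is to add the Intel library directories to
/etc/ld.so.conf (or a file under /etc/ld.so.conf.d/) and run ldconfig.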
From matthieu.brucher at gmail.com Tue Nov 4 09:54:35 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 4 Nov 2008 15:54:35 +0100
Subject: [SciPy-user] Installing on Ubuntu with mkl 10.0.2: libimf.so not found
In-Reply-To: <2bc7a5a50811040640l5e247d40le53dd29c0a21552c@mail.gmail.com>
References: <2bc7a5a50811040406w614bf67lf1fb8869a03f0182@mail.gmail.com>
	<5b8d13220811040420o66285e75od208ff9eef6a7b46@mail.gmail.com>
	<2bc7a5a50811040548m4599cb90le355ad408b89826a@mail.gmail.com>
	<2bc7a5a50811040640l5e247d40le53dd29c0a21552c@mail.gmail.com>
Message-ID:

> That worked! So I can install scipy, but for packages that have binaries
> like PyTables I still need root permissions. How can I make the library
> available when I use sudo?
> Thanks,
> Anand

Well, it should have worked with the sudo :|

Is it an installation for all users or just for you? If it is the latter,
I suggest you create a local folder in your home directory where you will
put everything (every decent installation tool has the --prefix option,
which will correctly populate the local folder; you just have to set the
environment variables correctly, but you already modified LD_LIBRARY_PATH,
so it won't be much more trouble). This way, you don't need root
permissions and you won't mess up the system installation. This is what I
did on every computer, even where I had sudo capabilities. Less trouble in
the long run ;)

Matthieu

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From anand.prabhakar.patil at gmail.com Tue Nov 4 10:06:21 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Tue, 4 Nov 2008 15:06:21 +0000
Subject: [SciPy-user] Installing on Ubuntu with mkl 10.0.2: libimf.so not found
In-Reply-To:
References: <2bc7a5a50811040406w614bf67lf1fb8869a03f0182@mail.gmail.com>
	<5b8d13220811040420o66285e75od208ff9eef6a7b46@mail.gmail.com>
	<2bc7a5a50811040548m4599cb90le355ad408b89826a@mail.gmail.com>
	<2bc7a5a50811040640l5e247d40le53dd29c0a21552c@mail.gmail.com>
Message-ID: <2bc7a5a50811040706o23327cct6d13bae2726fa9ce@mail.gmail.com>

On Tue, Nov 4, 2008 at 2:54 PM, Matthieu Brucher wrote:
> > That worked! So I can install scipy, but for packages that have binaries
> > like PyTables I still need root permissions. How can I make the library
> > available when I use sudo?
> > Thanks,
> > Anand
>
> Well, it should have worked with the sudo :|
> Is it an installation for all users or just for you? If it is the
> latter, I suggest you create a local folder in your home directory
> where you will put everything (every decent installation tool has the
> --prefix option, which will correctly populate the local folder; you
> just have to set the environment variables correctly, but you already
> modified LD_LIBRARY_PATH, so it won't be much more trouble). This way,
> you don't need root permissions and you won't mess up the system
> installation. This is what I did on every computer, even where I had
> sudo capabilities. Less trouble in the long run ;)

Will do... that's really weird, but I'm glad to have my Python environment
up and running anyway!

Thanks,
Anand
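-------------- editorial note --------------
For readers following Matthieu's suggestion, a per-user prefix install with
distutils looks roughly like this (prefix and Python version illustrative):

$ python setup.py install --prefix=$HOME/local
$ export PYTHONPATH=$HOME/local/lib/python2.5/site-packages:$PYTHONPATH

With PYTHONPATH (plus LD_LIBRARY_PATH, for the MKL case above) set in the
shell profile, no root permissions are needed and the system Python stays
untouched.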
From kwmsmith at gmail.com Tue Nov 4 10:26:25 2008
From: kwmsmith at gmail.com (Kurt Smith)
Date: Tue, 4 Nov 2008 09:26:25 -0600
Subject: [SciPy-user] Explanation of different edge modes in scipy.ndimage
In-Reply-To: <9457e7c80811040130s113e70f4if45245d3fa4ea043@mail.gmail.com>
References:
	<9457e7c80811040130s113e70f4if45245d3fa4ea043@mail.gmail.com>
Message-ID:

On Tue, Nov 4, 2008 at 3:30 AM, Stéfan van der Walt wrote:

> Hi Kurt
>
> 2008/10/31 Kurt Smith :
> > I'm doing some gaussian filtering of periodic 2D arrays using
> > scipy.ndimage.gaussian_filter. There is a 'mode' argument that is set to
> > 'reflect' by default. In _ni_support.py:34 there is a conversion function,
> > '_extend_mode_to_code' that gives the different modes available. For
> > periodic data I believe I should use 'wrap', but I'm interested to know what
> > the other modes mean, esp the difference between 'reflect' and 'mirror'.
> > For the record, the modes defined are 'nearest', 'wrap', 'reflect',
> > 'mirror', and 'constant'. For future reference, is there a place where
> > these arguments are documented?
>
> Sorry for the long overdue reply.
>
> Reflect means:
>
> 1 | 2 | 3 | 2 | 1
>
> While mirror means:
>
> 1 | 2 | 3 | 3 | 2 | 1
>
> (or the other way around, can't remember).
>
> The problem with the last approach is the interpolation between 3 and
> 3, which is currently broken, so I'd advise against using it.

Thanks Stefan.

From kern at mpi-magdeburg.mpg.de Tue Nov 4 12:45:11 2008
From: kern at mpi-magdeburg.mpg.de (Benjamin Kern)
Date: Tue, 4 Nov 2008 18:45:11 +0100
Subject: [SciPy-user] F2PY: Problems after upgrading to Python2.6
Message-ID: <20081104184511.668b238f@mpi-magdeburg.mpg.de>

Hello,

I'm experiencing strange problems after upgrading to python2.6. I'm also
using numpy-svn and scipy-svn. So here is the problem. When I try to wrap
the following simple fortran code,

C File hello.f
      subroutine foo (a)
      integer a
      print*, "Hello from Fortran!"
      print*, "a=",a
      end

I have problems executing this from python, i.e.

>>> import hello
>>> print hello.__doc__
This module 'hello' is auto-generated with f2py (version:2_5968).
Functions:
  foo(a)
.
>>> print hello.foo.__doc__
foo - Function signature:
  foo(a)
Required arguments:
  a : input int
>>> hello.foo(4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: more argument specifiers than keyword list entries (remaining format:'|:hello.foo')

Thanks for the help in advance
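-------------- editorial note --------------
No resolution appears in the archive for this one. A cheap first check for a
RuntimeError like the above is to regenerate the wrapper with the same
numpy/f2py that will import it, e.g.

$ f2py -c -m hello hello.f

since the generated C wrapper is what raises this error, a stale wrapper
left over from before a Python upgrade is a plausible suspect.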
From bandtheory at rocketmail.com Tue Nov 4 15:59:51 2008
From: bandtheory at rocketmail.com (Evan Wilson)
Date: Tue, 4 Nov 2008 12:59:51 -0800 (PST)
Subject: [SciPy-user] Understanding numpy array operations
Message-ID: <633128.607.qm@web59705.mail.ac4.yahoo.com>

Hello,

I am a novice python/numpy user, using the language for computational
physical science research. I have been modifying scripts developed by
another student who worked on the project before me. However, I require a
good understanding of what is actually happening in the code for when I go
back to modify them. I was wondering if anyone could tell me what this bit
of code does:

n = 5
A = reshape(zeros(5*3*n), (n*n,3))*1.0
for j in range(n):
    for i in range(n):
        A[i+n*j] = i*u + j*v

I understand there are some matrix operations happening but I cannot tell
what they do. I would appreciate any help you could offer.

Thanks,

BT

From lou_boog2000 at yahoo.com Tue Nov 4 16:39:36 2008
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Tue, 4 Nov 2008 13:39:36 -0800 (PST)
Subject: [SciPy-user] Understanding numpy array operations
In-Reply-To: <633128.607.qm@web59705.mail.ac4.yahoo.com>
Message-ID: <848440.84806.qm@web34402.mail.mud.yahoo.com>

OK, here's a shot:

n=5
# Pretty obvious, set n=5

A= reshape(zeros(5*3*n), (n*n,3))*1.0
# Start from the inside out. zeros(5*3*n) give you a 1D array
# (not a matrix) of zeros 0.0 that has 75 0.0's. The reshape changes
# it to an array that is 25 rows by 3 columns (a 2D array). It is all
# multipled by 1.0 (which doesn't really change things in this case)
# and given the name A as a reference

for j in range(n):
    for i in range(n):
        A[i+n*j] = i*u + j*v
# This is a double loop, but I suspect there is something wrong or missing
# What are u and v? The indexing on A is for returning an entire row, but
# it will go beyond the bounds of A. I can't figure this one out. Sorry.

-- Lou Pecora, my views are my own.

--- On Tue, 11/4/08, Evan Wilson wrote:

> From: Evan Wilson
> Subject: [SciPy-user] Understanding numpy array operations
> To: scipy-user at scipy.org
> Date: Tuesday, November 4, 2008, 3:59 PM
> Hello,
>
> I am a novice python/numpy user, using the language for
> computational physical science research. I have been
> modifying scripts developed by another student who worked on
> the project before me. However, I require a good
> understanding of what is actually happening in the code for
> when I go back to modify them. I was wondering if anyone
> could tell me what this bit of code does:
>
> n = 5
> A = reshape(zeros(5*3*n), (n*n,3))*1.0
> for j in range(n):
>     for i in range(n):
>         A[i+n*j] = i*u + j*v
>
> I understand there are some matrix operations happening but
> I cannot tell what they do. I would appreciate any help you
> could offer.
From s.mientki at ru.nl Wed Nov 5 14:46:59 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Wed, 05 Nov 2008 20:46:59 +0100 Subject: [SciPy-user] solving linear equations ? Message-ID: <4911F833.1050502@ru.nl> hello, (forgive me, my math is a bit rusty, so I don't know the right terms anymore) If I want to solve a set of linear equations, I use in MatLab: a \ b this works also if I have too many equations, so more columns than rows. In Numpy for Matlab users http://www.scipy.org/NumPy_for_Matlab_Users I read this: linalg.solve(a,b) if a is square linalg.lstsq(a,b) otherwise I find the name already suspicious, it sounds like least squares, which is confirmed by the help. So I guess the translation from MatLab to Numpy is not correct. Is there a function to reduce the number of columns / remove the redundancy, so I end up with a square matrix ? thanks, Stef Mientki From aarchiba at physics.mcgill.ca Wed Nov 5 15:11:29 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Wed, 5 Nov 2008 15:11:29 -0500 Subject: [SciPy-user] solving linear equations ? In-Reply-To: <4911F833.1050502@ru.nl> References: <4911F833.1050502@ru.nl> Message-ID: 2008/11/5 Stef Mientki : > hello, > > (forgive me, my math is a bit rusty, so I don't know the right terms anymore) > > If I want to solve a set of linear equations, > I use in MatLab: > > a \ b > > this works also if I have too many equations, so more columns than rows. > In Numpy for Matlab users > > http://www.scipy.org/NumPy_for_Matlab_Users > > I read this: > linalg.solve(a,b) if a is square > linalg.lstsq(a,b) otherwise > > I find the name already suspicious, it sounds like least squares, > which is confirmed by the help. > > So I guess the translation from MatLab to Numpy is not correct. > > Is there a function to reduce the number of columns / remove the > redundancy, so I end up with a square matrix ? Actually, MATLAB uses least-squares too, if the problem is overdetermined (or underdetermined). It turns out that this is a very good idea. If you have a problem with more equations than unknowns in which some of the equations are redundant, then you could in principle throw away the extra equations. But numerical solution of systems of linear equations is a messy business, full of creeping roundoff error and instabilities which can blow your solution away. So using a least-squares approach to solve all the equations gives you a better answer. If the equations *aren't* redundant, the least-squares approach gets as close as possible to an answer. MATLAB also works when you have more unknowns than equations; in this case it gives you the smallest solution in a least-squares sense. This too is a good idea for numerical reasons. All this is done under the hood with the singular value decomposition, which in fact uses a least-squares approach when throwing away dimensions that have been badly corrupted by roundoff error. All that said, it would still be nice to have at least a cookbook example that implements a generic linsolve equivalent to MATLAB's division operators. Anne
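For concreteness, a minimal sketch of the numpy call on a made-up overdetermined system (lstsq also returns the sum of squared residuals, the rank, and the singular values it computed):

import numpy as np

# Five equations, two unknowns: fit an intercept and a slope.
a = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0],
              [1.0, 5.0]])
b = np.array([6.0, 5.0, 7.0, 10.0, 11.0])

# The numpy counterpart of MATLAB's a \ b for this overdetermined case.
x, residues, rank, sv = np.linalg.lstsq(a, b)
print x   # least-squares estimates of intercept and slope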
From hoytak at cs.ubc.ca Wed Nov 5 16:18:59 2008 From: hoytak at cs.ubc.ca (Hoyt Koepke) Date: Wed, 5 Nov 2008 13:18:59 -0800 Subject: [SciPy-user] solving linear equations ? In-Reply-To: References: <4911F833.1050502@ru.nl> Message-ID: <4db580fd0811051318o76f38ecfn1b44ebe9ac899d34@mail.gmail.com> I'm also interested, so I hope it's okay to jump in here. Correct me if I'm wrong, but I thought that MATLAB's \ operator uses Gaussian elimination to solve the system as does solve(). You get the least squares approach in MATLAB by invoking the pinv() function; i.e. to solve Ax = b, you could use either: 1. x = A \ b, which gives you a solution unless there isn't any, even if it's not unique (in the case of an undetermined system). 2. x = pinv(A)*b, if you want the least squares approach. This performs least squares via an SVD decomposition and could thus give you a better solution than the first in the overconstrained/underconstrained case. The Gaussian elimination approach is an order of magnitude faster, and works in most situations, so that's what the shorthand does. I always assumed linalg.solve paralleled the first and linalg.lstsq did the second, but I may be wrong. Also, I'm not clear on what happens with either solve or matlab's \ if it's not a square matrix. Could someone clarify? Thanks! --Hoyt +++++++++++++++++++++++++++++++++++ Hoyt Koepke University of Washington, Department of Statistics http://www.stat.washington.edu/~hoytak hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From hasslerjc at comcast.net Wed Nov 5 17:07:38 2008 From: hasslerjc at comcast.net (John Hassler) Date: Wed, 05 Nov 2008 17:07:38 -0500 Subject: [SciPy-user] solving linear equations ? In-Reply-To: <4db580fd0811051318o76f38ecfn1b44ebe9ac899d34@mail.gmail.com> References: <4911F833.1050502@ru.nl> <4db580fd0811051318o76f38ecfn1b44ebe9ac899d34@mail.gmail.com> Message-ID: <4912192A.4040507@comcast.net> An HTML attachment was scrubbed... URL: From gardner at networknow.org Wed Nov 5 18:02:01 2008 From: gardner at networknow.org (Gardner Pomper) Date: Wed, 5 Nov 2008 18:02:01 -0500 Subject: [SciPy-user] Distributed testing? Message-ID: <42d22dbf0811051502g2c47b4b1gf38bdb75aec181f2@mail.gmail.com> I have a c library, hooked to python with swig, that I want to test on my condor cluster. I have not found any testing frameworks that seem to have any hooks for running on a cluster. I was wondering if anyone in this group knows of something. I am currently using nose to test on a single machine, but I want to run many data files through the tests, and would like to use my cluster to do that. Thanks, - Gardner -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Nov 5 18:16:43 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 5 Nov 2008 17:16:43 -0600 Subject: [SciPy-user] Distributed testing?
In-Reply-To: <42d22dbf0811051502g2c47b4b1gf38bdb75aec181f2@mail.gmail.com> References: <42d22dbf0811051502g2c47b4b1gf38bdb75aec181f2@mail.gmail.com> Message-ID: <3d375d730811051516i2ef20607idd4661164f48cd15@mail.gmail.com> On Wed, Nov 5, 2008 at 17:02, Gardner Pomper wrote: > I have a c library, hooked to python with swig, that I want to test on my > condor cluster. I have not found any testing frameworks that seem to have > any hooks for running on a cluster. I was wondering if anyone in this group > knows of something. I am currently using nose to test on a single machine, > but I want to run many data files through the tests, and would like to use my > cluster to do that. I believe py.test has some basic capability to do that. http://codespeak.net/py/dist/test.html#automated-distributed-testing -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From blloyd at firstquadrant.com Wed Nov 5 21:15:15 2008 From: blloyd at firstquadrant.com (Brendon Lloyd) Date: Wed, 5 Nov 2008 18:15:15 -0800 Subject: [SciPy-user] subscribe Message-ID: <5919CB4651BF5445B8A4525B2DBAA2490921ED7F@fqexc.FirstQuadrant.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sandal at unibo.it Thu Nov 6 10:10:32 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 06 Nov 2008 16:10:32 +0100 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning Message-ID: <491308E8.5060807@unibo.it> Hi, I am having trouble with the covariance matrix in the output of scipy.optimize.leastsq . I am trying to find the estimated sigma of the parameters obtained by the fit. Please bear with me since my statistics knowledge is poor. I understand that the diagonal of the covariance matrix should return me the variance values of each parameter. Problems are: 1) The variances of such parameters look unreasonably large to me, despite the fact I obtain an *excellent* fit over a lot of data points (and values extremely consistent with what I expected). 2) The non-diagonal values of the covariance are also unreasonably large, which makes me doubt that simply picking the diagonal values is the correct thing to do. The residuals function is: def residuals(params,y,x,T): ''' Calculates the residuals of the fit ''' lambd, pii=params Kb=(1.38065e-23) therm=Kb*T err = y-( (therm*pii/4) * (((1-(x*lambd))**-2) - 1 + (4*x*lambd)) ) return err For example, a typical pair of fitted values is: 4390808.6184609979 3993219683.7749424 and the corresponding covariance matrix is [[ 1.97019986e+29 -2.67163157e+33] [ -2.67163157e+33 3.78415451e+37]] ...which concerns me. m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed...
Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From gary.pajer at gmail.com Thu Nov 6 11:15:30 2008 From: gary.pajer at gmail.com (Gary Pajer) Date: Thu, 6 Nov 2008 11:15:30 -0500 Subject: [SciPy-user] spreadsheet data visualisation app In-Reply-To: References: Message-ID: <88fe22a0811060815p1a4889a1n62bb41399bb9f3b6@mail.gmail.com> I think I've done this using the Enthought Tool Suite TraitsGUI package, but my memory is a little fuzzy. http://www.enthought.com/products/open-tool-suite.php If I get a chance, I'll see if I can find what I did back then. In any event, Traits and TraitsGUI have a lot of features, so you might have to browse the docs a bit to find it. On Mon, Nov 3, 2008 at 10:45 AM, Timmie wrote: > Hello, > is there any application that I can use to view numpy arrays in a tabular / > spreadsheet like manner? > > Although I know that there may be large arrays which make it difficult for > such > an application to work properly, this can sometimes be desirable for > validating > calculation results. Especially when working interactively (i. e. using > Ipython). > > I imagine something like: > > arr = np.arange(0,10) > > array.sheet(arr) > > => similar to pylab.show() an application like the spreadsheet on > http://zetcode.com/wxpython/skeletons/ could then pop up visualising my > array. > > The only app I can currently imagine to do such tasks would be Resolver > One: > http://www.resolversystems.com/products/resolver-one/ but that application > is > based on IronPython. > > Kind regards, > Timmie > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdh2358 at gmail.com Thu Nov 6 11:44:58 2008 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 6 Nov 2008 10:44:58 -0600 Subject: [SciPy-user] spreadsheet data visualisation app In-Reply-To: <88fe22a0811060815p1a4889a1n62bb41399bb9f3b6@mail.gmail.com> References: <88fe22a0811060815p1a4889a1n62bb41399bb9f3b6@mail.gmail.com> Message-ID: <88e473830811060844h2516ce2bta014eee3e6dc9857@mail.gmail.com> On Thu, Nov 6, 2008 at 10:15 AM, Gary Pajer wrote: >> is there any application that I can use to view numpy arrays in a tabular / >> spreadsheet like manner? matplotlib has a gtk toolkit for an editable record array view. One could easily adapt the pattern to other toolkits In [1]: import matplotlib.mlab as mlab In [2]: r = mlab.csv2rec('data/intc.csv') In [3]: import mpl_toolkits.gtktools as gtktools In [4]: gtktools.rec2gtk(r) Out[4]: Screenshot is attached -- click on a cell to edit... JDH -------------- next part -------------- A non-text attachment was scrubbed... Name: gtkview.png Type: image/png Size: 41755 bytes Desc: not available URL: From simon.palmer at gmail.com Thu Nov 6 14:32:32 2008 From: simon.palmer at gmail.com (SimonPalmer) Date: Thu, 6 Nov 2008 11:32:32 -0800 (PST) Subject: [SciPy-user] Where did the Numpy-discussion group go? Message-ID: The Numpy-discussion group seems to have disappeared. I'm still receiving emails from the distribution list but I can't find the group. I doubt I am the first person to ask, so sorry for the repeat. From robert.kern at gmail.com Thu Nov 6 14:53:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Nov 2008 13:53:18 -0600 Subject: [SciPy-user] Where did the Numpy-discussion group go?
In-Reply-To: References: Message-ID: <3d375d730811061153ha3dabfex1f4bd37278f9a32@mail.gmail.com> On Thu, Nov 6, 2008 at 13:32, SimonPalmer wrote: > The Numpy-discussion group seems to have disappeared. I'm still > receiving emails from the distribution list but I can't find the > group. I doubt I am the first person to ask, so sorry for the repeat. Do you mean the Google Group that gateways the mailing list? Yes, a number of Google groups have disappeared. I don't think there has been an explanation from Google, yet. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From simon.palmer at gmail.com Thu Nov 6 14:57:43 2008 From: simon.palmer at gmail.com (Simon Palmer) Date: Thu, 6 Nov 2008 19:57:43 +0000 Subject: [SciPy-user] Where did the Numpy-discussion group go? In-Reply-To: <3d375d730811061153ha3dabfex1f4bd37278f9a32@mail.gmail.com> References: <3d375d730811061153ha3dabfex1f4bd37278f9a32@mail.gmail.com> Message-ID: yes, sorry, I meant the google group. Has the message archive gone with it? I have some questions which I am sure have been answered before, it would be great to be able to search the discussions. Any idea how I would do that? On Thu, Nov 6, 2008 at 7:53 PM, Robert Kern wrote: > On Thu, Nov 6, 2008 at 13:32, SimonPalmer wrote: > > The Numpy-discussion group seems to have disappeared. I'm still > > receiving emails from the distribution list but I can't find the > > group. I doubt I am the first person to ask, so sorry for the repeat. > > Do you mean the Google Group that gateways the mailing list? Yes, a > number of Google groups have disappeared. I don't think there has been > an explanation from Google, yet. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Nov 6 15:03:14 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Nov 2008 14:03:14 -0600 Subject: [SciPy-user] Where did the Numpy-discussion group go? In-Reply-To: References: <3d375d730811061153ha3dabfex1f4bd37278f9a32@mail.gmail.com> Message-ID: <3d375d730811061203h3f4b6f9fk913b4d224df733e7@mail.gmail.com> On Thu, Nov 6, 2008 at 13:57, Simon Palmer wrote: > yes, sorry, I meant the google group. Has the message archive gone with > it? I have some questions which I am sure have been answered before, it > would be great to be able to search the discussions. Any idea how I would > do that? Either Google with site:projects.scipy.org/pipermail/numpy-discussion/ or through GMane: http://dir.gmane.org/gmane.comp.python.numeric.general For general information on the mailing lists: http://www.scipy.org/Mailing_Lists -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From simon.palmer at gmail.com Thu Nov 6 15:07:43 2008 From: simon.palmer at gmail.com (Simon Palmer) Date: Thu, 6 Nov 2008 20:07:43 +0000 Subject: [SciPy-user] Where did the Numpy-discussion group go? 
In-Reply-To: <3d375d730811061203h3f4b6f9fk913b4d224df733e7@mail.gmail.com> References: <3d375d730811061153ha3dabfex1f4bd37278f9a32@mail.gmail.com> <3d375d730811061203h3f4b6f9fk913b4d224df733e7@mail.gmail.com> Message-ID: Thanks very much, that will save me posting duplicate questions. On Thu, Nov 6, 2008 at 8:03 PM, Robert Kern wrote: > On Thu, Nov 6, 2008 at 13:57, Simon Palmer wrote: > > yes, sorry, I meant the google group. Has the message archive gone with > > it? I have some questions which I am sure have been answered before, it > > would be great to be able to search the discussions. Any idea how I > would > > do that? > > Either Google with > > site:projects.scipy.org/pipermail/numpy-discussion/ > > or through GMane: > > http://dir.gmane.org/gmane.comp.python.numeric.general > > For general information on the mailing lists: > > http://www.scipy.org/Mailing_Lists > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Thu Nov 6 16:05:21 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 06 Nov 2008 15:05:21 -0600 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <491308E8.5060807@unibo.it> References: <491308E8.5060807@unibo.it> Message-ID: <49135C11.5050309@gmail.com> massimo sandal wrote: > Hi, > > I am having trouble with the covariance matrix in the output of > scipy.optimize.leastsq . I am trying to find the estimated sigma of > the parameters obtained by the fit. Please bear with me since my > statistics knowledge is poor. I understand that the diagonal of the > covariance matrix should return me the variance values of each parameter. > > Problems are: > 1) The variances of such parameters look unreasonably large to me, > despite the fact I obtain an *excellent* fit over a lot of data points > (and values extremely consistent with what I expected). > 2) The non-diagonal values of the covariance are also unreasonably > large, which makes me doubt that simply picking the diagonal values is > the correct thing to do. > > The residuals function is: > > def residuals(params,y,x,T): > ''' > Calculates the residuals of the fit > ''' > lambd, pii=params > > Kb=(1.38065e-23) > therm=Kb*T > > err = y-( (therm*pii/4) * (((1-(x*lambd))**-2) - 1 + > (4*x*lambd)) ) > > return err > > For example, a typical pair of fitted values is: > 4390808.6184609979 > 3993219683.7749424 > > and the corresponding covariance matrix is > [[ 1.97019986e+29 -2.67163157e+33] > [ -2.67163157e+33 3.78415451e+37]] > > ...which concerns me. > > m. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > It is possible to be correct if the values of y are large and sufficiently variable. But, based on your comment about the fit, and given that the correlation in the matrix above is -0.98, my expectation is that there is almost no error/residual variation left. The residual variance should be very small (sum of squared residuals divided by degrees of freedom).
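Concretely, a rough sketch of that computation (this assumes the residuals function and the data y, x, T from the original post are in scope, and p0 is some made-up starting guess; the cov_x returned by leastsq has to be scaled by the residual variance to estimate the parameter covariance):

from scipy.optimize import leastsq

p_best, cov_x, infodict, mesg, ier = leastsq(residuals, p0, args=(y, x, T), full_output=True)
r = infodict['fvec']           # residuals at the solution
dof = len(r) - len(p_best)     # degrees of freedom
s_sq = (r ** 2).sum() / dof    # residual variance
cov_params = cov_x * s_sq      # scaled covariance of the fitted parameters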
You don't provide enough details, but your two x variables would appear to be virtually correlated, given the very high correlation. There are other reasons, but without the data etc. I can not guess. Bruce From robert.kern at gmail.com Thu Nov 6 16:11:46 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Nov 2008 15:11:46 -0600 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <491308E8.5060807@unibo.it> References: <491308E8.5060807@unibo.it> Message-ID: <3d375d730811061311l7816c33bh770c53a22a8f8ba1@mail.gmail.com> On Thu, Nov 6, 2008 at 09:10, massimo sandal wrote: > Hi, > > I am having trouble with the covariance matrix in the output of > scipy.optimize.leastsq . I am trying to find the estimated sigma of the > parameters obtained by the fit. Please bear with me since my statistics > knowledge is poor. I understand that the diagonal of the covariance matrix > should return me the variance values of each parameter. > > Problems are: > 1) The variances of such parameters look unreasonably large to me, despite > the fact I obtain an *excellent* fit over a lot of data points (and values > extremely consistent with what I expected). The variance of the point estimate of the parameters is not necessarily related to the goodness of fit. It may just mean that your parameters can change significantly without affecting the fit. Try generating random parameters using the covariance matrix and numpy.random.multivariate_normal() and seeing how well they fit. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
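A rough sketch of the check Robert suggests (again assuming residuals, y, x and T from the original post are in scope; p_best is the fitted parameter pair and cov the scaled parameter covariance matrix -- all names here are illustrative):

import numpy as np

samples = np.random.multivariate_normal(p_best, cov, 20)
best_ssq = (residuals(p_best, y, x, T) ** 2).sum()
for p in samples:
    ssq = (residuals(p, y, x, T) ** 2).sum()
    # ratios near 1 mean the fit barely changes when the parameters move
    # together along the strongly correlated direction
    print ssq / best_ssq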
From dwf at cs.toronto.edu Thu Nov 6 22:36:58 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 6 Nov 2008 22:36:58 -0500 Subject: [SciPy-user] Calculating a lot of (squared) Mahalanobis distances Message-ID: <4F9F4BE4-34B0-4D50-908D-F70BD29F1C7E@cs.toronto.edu> Hi folks, I'm trying to calculate a lot of Mahalanobis distances (in essence, applying a positive definite quadratic x.T * A * x to a lot of vectors x) and trying to think of the fastest way to do it with numpy. If I've got a single vector x and a 2D array sigmainv, then I've got something like this. import numpy as np ... xmmu = x - mu dist = np.dot(xmmu, np.dot(sigmainv, xmmu)) However if I've got a DxN 2d array of N different vectors for which I want this quantity, it seems I can either use a loop or do something like xmmu = x - mu[:,np.newaxis] dist = np.diag(np.dot(xmmu.T, np.dot(sigmainv, xmmu))) It seems like a lot of wasted computation to throw out the off-diagonals. One thought I've had would be to diagonalize sigmainv and then do something tricky with scalar products and broadcasting the diagonal, but I am not sure whether that would save me much. Does anyone have any other tricks up their sleeve? Thanks, David From robert.kern at gmail.com Thu Nov 6 22:44:22 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Nov 2008 21:44:22 -0600 Subject: [SciPy-user] Calculating a lot of (squared) Mahalanobis distances In-Reply-To: <4F9F4BE4-34B0-4D50-908D-F70BD29F1C7E@cs.toronto.edu> References: <4F9F4BE4-34B0-4D50-908D-F70BD29F1C7E@cs.toronto.edu> Message-ID: <3d375d730811061944x7268c2b2t984b5c7f85541200@mail.gmail.com> On Thu, Nov 6, 2008 at 21:36, David Warde-Farley wrote: > Hi folks, > > I'm trying to calculate a lot of Mahalanobis distances (in essence, > applying a positive definite quadratic x.T * A * x to a lot of vectors > x) and trying to think of the fastest way to do it with numpy. > > If I've got a single vector x and a 2D array sigmainv, then I've got > something like this. > > import numpy as np > ... > xmmu = x - mu > dist = np.dot(xmmu, np.dot(sigmainv, xmmu)) > > However if I've got a DxN 2d array of N different vectors for which I > want this quantity, it seems I can either use a loop or do something > like > > xmmu = x - mu[:,np.newaxis] > dist = np.diag(np.dot(xmmu.T, np.dot(sigmainv, xmmu))) > > It seems like a lot of wasted computation to throw out the > off-diagonals. One thought I've had would be to diagonalize sigmainv and > then do something tricky with scalar products and broadcasting the > diagonal, but I am not sure whether that would save me much. > > Does anyone have any other tricks up their sleeve? (xmmu * np.dot(sigmainv, xmmu)).sum(axis=0) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
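Spelled out as a runnable sketch, with made-up random data in the D-by-N column-vector layout of the original post:

import numpy as np

D, N = 3, 5
rng = np.random.RandomState(0)
x = rng.randn(D, N)              # N column vectors
mu = rng.randn(D)
A = rng.randn(D, D)
sigmainv = np.dot(A, A.T)        # some symmetric positive definite matrix

xmmu = x - mu[:, np.newaxis]
# Elementwise multiply and sum down each column: this computes exactly
# the diagonal of xmmu.T * sigmainv * xmmu without the off-diagonals.
dists = (xmmu * np.dot(sigmainv, xmmu)).sum(axis=0)

# Check against the naive one-vector-at-a-time loop.
naive = np.array([np.dot(v, np.dot(sigmainv, v)) for v in xmmu.T])
print np.allclose(dists, naive)   # True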
From roger.herikstad at gmail.com Fri Nov 7 04:12:48 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Fri, 7 Nov 2008 17:12:48 +0800 Subject: [SciPy-user] Shift the rows of a matrix Message-ID: Hi list, I was curious if anyone has a good method of shifting individual rows of a matrix? My problem is that I have a matrix consisting of waveforms on the rows, and I want to shift each waveform, i.e. pad with zeros on either end, depending on where the minimum point of each waveform is located relative to a pre-determined zero point. For example, if each waveform consists of 32 data points, I would be interested in aligning each waveform so that the minimum point always happens on index 10. My current solution is to loop through each waveform and take the dot product with a shift matrix, but I'd rather avoid the for loop if possible. If anyone has any thoughts, I'd be happy for any input. Thanks! ~ Roger From dwf at cs.toronto.edu Fri Nov 7 04:36:22 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 7 Nov 2008 04:36:22 -0500 Subject: [SciPy-user] Calculating a lot of (squared) Mahalanobis distances In-Reply-To: <3d375d730811061944x7268c2b2t984b5c7f85541200@mail.gmail.com> References: <4F9F4BE4-34B0-4D50-908D-F70BD29F1C7E@cs.toronto.edu> <3d375d730811061944x7268c2b2t984b5c7f85541200@mail.gmail.com> Message-ID: <757D732D-4202-4CA0-888C-07DF3853B0A3@cs.toronto.edu> On 6-Nov-08, at 10:44 PM, Robert Kern wrote: >> Does anyone have any other tricks up their sleeve? > > (xmmu * np.dot(sigmainv, xmmu)).sum(axis=0) As usual, Robert to the rescue. Thanks! I'm thinking of modifying scipy.cluster.distance.mahalanobis to incorporate this behaviour. Does anyone know why they seem to assume row-vectors (the right hand one is transposed rather than the left)? David From bblais at bryant.edu Fri Nov 7 06:03:03 2008 From: bblais at bryant.edu (Brian Blais) Date: Fri, 7 Nov 2008 06:03:03 -0500 Subject: [SciPy-user] adding cookbook entry Message-ID: Hello, I was wondering if there are some directions/tutorial for adding a cookbook entry. I'd like to add one under Scientific Scripts, but I don't want to mess up the Scipy Cookbook in the process of my bumbling around. :) thanks, Brian Blais -- Brian Blais bblais at bryant.edu http://web.bryant.edu/~bblais -------------- next part -------------- An HTML attachment was scrubbed... URL: From wdj at usna.edu Fri Nov 7 08:11:37 2008 From: wdj at usna.edu (David Joyner) Date: Fri, 07 Nov 2008 08:11:37 -0500 Subject: [SciPy-user] spreadsheet data visualisation app In-Reply-To: <88e473830811060844h2516ce2bta014eee3e6dc9857@mail.gmail.com> References: <88fe22a0811060815p1a4889a1n62bb41399bb9f3b6@mail.gmail.com> <88e473830811060844h2516ce2bta014eee3e6dc9857@mail.gmail.com> Message-ID: <49143E89.9000903@usna.edu> John Hunter wrote: > On Thu, Nov 6, 2008 at 10:15 AM, Gary Pajer wrote: > > >>> is there any application that I can use to view numpy arrays in a tabular / >>> spreadsheet like manner? >>> > > matplotlib has a gtk toolkit for an editable record array view. One > could easily adapt the pattern to other toolkits > > In [1]: import matplotlib.mlab as mlab > > In [2]: r = mlab.csv2rec('data/intc.csv') > > In [3]: import mpl_toolkits.gtktools as gtktools > Where do you find this gtk module? It doesn't seem to be installed by default. Can it be installed using apt-get? I tried googling but got no useful info. I'm using the python and matplotlib installed via apt-get on hardy heron ubuntu. > In [4]: gtktools.rec2gtk(r) > Out[4]: > > Screenshot is attached -- click on a cell to edit... > > JDH > > > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Prof. David Joyner wdj at usna.edu Some USNA-specific information in this email may be FOUO. From jdh2358 at gmail.com Fri Nov 7 09:12:44 2008 From: jdh2358 at gmail.com (John Hunter) Date: Fri, 7 Nov 2008 08:12:44 -0600 Subject: [SciPy-user] spreadsheet data visualisation app In-Reply-To: <49143E89.9000903@usna.edu> References: <88fe22a0811060815p1a4889a1n62bb41399bb9f3b6@mail.gmail.com> <88e473830811060844h2516ce2bta014eee3e6dc9857@mail.gmail.com> <49143E89.9000903@usna.edu> Message-ID: <88e473830811070612v4585ef6bv2333b73afa150978@mail.gmail.com> On Fri, Nov 7, 2008 at 7:11 AM, David Joyner wrote: > Where do you find this gtk module? It doesn't seem to be installed by > default. > Can it be installed using apt-get? I tried googling but got no useful info. > I'm using the python and matplotlib installed via apt-get on hardy heron > ubuntu.
It should be in any version of matplotlib (python-matplotlib in ubuntu I think) since 0.98 -- you can check your mpl version with http://matplotlib.sourceforge.net/faq/troubleshooting_faq.html#obtaining-matplotlib-version JDH From mhearne at usgs.gov Fri Nov 7 12:32:35 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Fri, 07 Nov 2008 10:32:35 -0700 Subject: [SciPy-user] syntax for indexing Message-ID: <49147BB3.5070600@usgs.gov> Assume that I have a setup like this: from pylab import * x = random((4,4)) I know how to get the indices of the values that are (for example), greater than 0.5: i = (x > 0.5).nonzero() How do I get the indices for those values in x that are greater than 0.5 AND less than 0.8? I tried: i = (x > 0.5 && x < 0.8).nonzero() i = (x > 0.5 & x < 0.8).nonzero() i = (x > 0.5 and x < 0.8).nonzero() to no avail. Is this the wrong approach? For Matlab users, the functionality which I am trying to replicate is: x = rand(4,4); i = find(x > 0.5 & x < 0.8); Thanks, Mike -- ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ From amcmorl at gmail.com Fri Nov 7 12:47:40 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Fri, 7 Nov 2008 12:47:40 -0500 Subject: [SciPy-user] syntax for indexing In-Reply-To: <49147BB3.5070600@usgs.gov> References: <49147BB3.5070600@usgs.gov> Message-ID: 2008/11/7 Michael Hearne : > Assume that I have a setup like this: > > from pylab import * > x = random((4,4)) > > I know how to get the indices of the values that are (for example), > greater than 0.5: > i = (x > 0.5).nonzero() > > How do I get the indices for those values in x that are greater than 0.5 > AND less than 0.8? > > I tried: > i = (x > 0.5 && x < 0.8).nonzero() > i = (x > 0.5 & x < 0.8).nonzero() > i = (x > 0.5 and x < 0.8).nonzero() > > to no avail. Is this the wrong approach? Very close. The & operator binds more tightly than the comparison operators in Python, so each comparison needs its own brackets. i = ((x > 0.5) & (x < 0.8)).nonzero() should do what you want. HTH, Angus. -- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh From eads at soe.ucsc.edu Fri Nov 7 12:48:37 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 7 Nov 2008 09:48:37 -0800 Subject: [SciPy-user] syntax for indexing In-Reply-To: <49147BB3.5070600@usgs.gov> References: <49147BB3.5070600@usgs.gov> Message-ID: <91b4b1ab0811070948p20216952y36445a17678411b4@mail.gmail.com> Hi Michael, The following gives a mask array of the same size as the original array, (x > 0.5) * (x < 0.8) and using the mask array as an index, gives the values that meet the condition above, x[(x > 0.5) * (x < 0.8)] If you want indices to the flat array, you first need to create a flat view of it. xr = x.ravel() Note that xr is not a copy but a view of x with its striding parameters changed appropriately. In [23]: np.where((xr > 0.5) * (xr < 0.8)) Out[23]: (array([ 1, 3, 4, 6, 10]),) I hope this helps. Damian On Fri, Nov 7, 2008 at 9:32 AM, Michael Hearne wrote: > Assume that I have a setup like this: > > from pylab import * > x = random((4,4)) > > I know how to get the indices of the values that are (for example), > greater than 0.5: > i = (x > 0.5).nonzero() > > How do I get the indices for those values in x that are greater than 0.5 > AND less than 0.8? > > I tried: > i = (x > 0.5 && x < 0.8).nonzero() > i = (x > 0.5 & x < 0.8).nonzero() > i = (x > 0.5 and x < 0.8).nonzero() > > to no avail. Is this the wrong approach? > > For Matlab users, the functionality which I am trying to replicate is: > x = rand(4,4); > i = find(x > 0.5 & x < 0.8); > > Thanks, > > Mike > > -- > ------------------------------------------------------ > Michael Hearne > mhearne at usgs.gov > (303) 273-8620 > USGS National Earthquake Information Center > 1711 Illinois St. Golden CO 80401 > Senior Software Engineer > Synergetics, Inc. ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads
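One more wrinkle worth a short sketch: on a 2-D array, nonzero() returns a tuple of row and column index arrays, which can index x directly; MATLAB's find() instead returns a single linear index, which corresponds to flattening first (and note that numpy flattens row-major, whereas MATLAB numbers elements column-major):

import numpy as np

x = np.random.random((4, 4))
mask = (x > 0.5) & (x < 0.8)

rows, cols = mask.nonzero()       # row/column indices of matching cells
print x[rows, cols]               # the matching values themselves

flat_idx = np.flatnonzero(mask)   # find()-style linear indices (C order)
print x.ravel()[flat_idx]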
From eads at soe.ucsc.edu Fri Nov 7 12:58:44 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 7 Nov 2008 09:58:44 -0800 Subject: [SciPy-user] Calculating a lot of (squared) Mahalanobis distances In-Reply-To: <4F9F4BE4-34B0-4D50-908D-F70BD29F1C7E@cs.toronto.edu> References: <4F9F4BE4-34B0-4D50-908D-F70BD29F1C7E@cs.toronto.edu> Message-ID: <91b4b1ab0811070958w699e146du6e62ad2dcad482da@mail.gmail.com> Have you looked at the cdist function? It takes as input two sets of vectors S1 and S2 and returns an n1 by n2 rectangular array. The ij'th entry is the distance between S1[i] and S2[j]. On Thu, Nov 6, 2008 at 7:36 PM, David Warde-Farley wrote: > Hi folks, > > I'm trying to calculate a lot of Mahalanobis distances (in essence, > applying a positive definite quadratic x.T * A * x to a lot of vectors > x) and trying to think of the fastest way to do it with numpy. > > If I've got a single vector x and a 2D array sigmainv, then I've got > something like this. > > import numpy as np > ... > xmmu = x - mu > dist = np.dot(xmmu, np.dot(sigmainv, xmmu)) > > However if I've got a DxN 2d array of N different vectors for which I > want this quantity, it seems I can either use a loop or do something > like > > xmmu = x - mu[:,np.newaxis] > dist = np.diag(np.dot(xmmu.T, np.dot(sigmainv, xmmu))) > > It seems like a lot of wasted computation to throw out the > off-diagonals. One thought I've had would be to diagonalize sigmainv and > then do something tricky with scalar products and broadcasting the > diagonal, but I am not sure whether that would save me much. > > Does anyone have any other tricks up their sleeve? > > Thanks, > > David > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From strawman at astraw.com Fri Nov 7 14:03:06 2008 From: strawman at astraw.com (Andrew Straw) Date: Fri, 07 Nov 2008 11:03:06 -0800 Subject: [SciPy-user] adding cookbook entry In-Reply-To: References: Message-ID: <491490EA.8050205@astraw.com> Hi Brian, Just go ahead and add a wiki page, with the aim of making its final form of a high standard. Then, before linking it to the cookbook table of contents page, you can send another email to this list asking for review. I'm sure you'll get feedback, and once you're happy with it, we can link it from the main table of contents page.
As an additional precaution you may want to add "draft -- this document is still in its early stages" or something similar at the top of the page while you're working on it. But, really, it's a wiki and we welcome contributors, so go for it. In the worst case scenario, someone will just change the bits they don't like. Finally, if you want to make a set of directions for adding entries, that itself would be a useful entry. Hopefully it won't be too complex, though. :) -Andrew Brian Blais wrote: > Hello, > > I was wondering if there are some directions/tutorial for adding a > cookbook entry. I'd like to add one under Scientific Scripts, but I > don't want to mess up the Scipy Cookbook in the process of my bumbling > around. :) > > > thanks, > > Brian Blais > > -- > Brian Blais > bblais at bryant.edu > http://web.bryant.edu/~bblais > > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From w.kejia at gmail.com Fri Nov 7 15:48:35 2008 From: w.kejia at gmail.com (Wu, Kejia) Date: Fri, 07 Nov 2008 12:48:35 -0800 Subject: [SciPy-user] About Random Number Generation In-Reply-To: <2A0F5F54-E5E3-40CB-B25C-DE93EB26B872@cs.toronto.edu> References: <1225473698.7737.2.camel@localhost> <2A0F5F54-E5E3-40CB-B25C-DE93EB26B872@cs.toronto.edu> Message-ID: <1226090915.5176.2.camel@localhost> Hi David, Thank you very much for your reply. On Fri, 2008-10-31 at 14:31 -0400, David Warde-Farley wrote: > On 31-Oct-08, at 1:21 PM, Wu, Kejia wrote: > > > Also, can any body tell me whether the random number algorithm in RNG > > package is a pseudorandom one or a real-random one? > > You can't generate real-random numbers in software alone. Real random > number generation relies on sampling some random physical process. > Google "real random number" and you'll find a number of online sources > of genuine random numbers, including random.org (which uses > atmospheric noise) and hotbits (which uses radioactive decay). > > > And is there an > > available implementation for Monte Carlo method in NumPy? > > Try http://code.google.com/p/pymc/ > > David From ch.monty.burns at googlemail.com Fri Nov 7 21:53:39 2008 From: ch.monty.burns at googlemail.com (Charles Monty Burns) Date: Sat, 8 Nov 2008 03:53:39 +0100 Subject: [SciPy-user] Fitting Problem with line in 3D-space Message-ID: Hello, I am trying to get an axis of a cylinder in 3D-space using the leastsq method. My model equation is: \vec u \times (\vec r - \vec r_0) = \vec 0 u ... the direction vector r ... the independent vector r_0 ... one position vector on the line Can somebody tell me how to construct the functions leastsq needs? With a simple function like y=x*... it's very simple ... but not with that line Greetings -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Sat Nov 8 07:53:36 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sat, 8 Nov 2008 07:53:36 -0500 Subject: [SciPy-user] spreadsheet data visualisation app In-Reply-To: <49143E89.9000903@usna.edu> References: <88fe22a0811060815p1a4889a1n62bb41399bb9f3b6@mail.gmail.com> <88e473830811060844h2516ce2bta014eee3e6dc9857@mail.gmail.com> <49143E89.9000903@usna.edu> Message-ID: On 7-Nov-08, at 8:11 AM, David Joyner wrote: > Where do you find this gtk module? It doesn't seem to be installed by > default. > Can it be installed using apt-get?
I tried googling but got no useful info. > I'm using the python and matplotlib installed via apt-get on hardy heron > ubuntu. Presumably you mean the 'gtk' module in Python rather than the matplotlib backend... Try: apt-get install python-gtk2 python-gtk2-dev You should then be able to use import gtk without error at the python prompt. Hopefully the matplotlib package for Ubuntu has the GTK backend built in and they just forgot to tag the dependency; if not, you may be stuck compiling from source (which admittedly is not hard). David From pav at iki.fi Sat Nov 8 07:59:49 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 8 Nov 2008 12:59:49 +0000 (UTC) Subject: [SciPy-user] Fitting Problem with line in 3D-space References: Message-ID: Sat, 08 Nov 2008 03:53:39 +0100, Charles Monty Burns wrote: > Hello, > > I am trying to get an axis of a cylinder in 3D-space using the leastsq > method. > > My model equation is: > > \vec u \times (\vec r - \vec r_0) = \vec 0 > > u ... the direction vector > r ... the independent vector > r_0 ... one position vector on the line > > Can somebody tell me how to construct the functions leastsq needs? > > With a simple function like y=x*... it's very simple ... but not with that > line Like this, for example: ----------------------------------------- from scipy.optimize import leastsq import numpy as np points = np.loadtxt('points.dat') # data file with 3 columns def params(p): return p[:3]/np.linalg.norm(p[:3]), p[3:] def model(p): u, v0 = params(p) return np.cross(points - v0, u).ravel() result, ier = leastsq(model, [1, 0, 0, 0, 0, 0]) u, v0 = params(result) print u print v0 ----------------------------------------- -- Pauli Virtanen From jdh2358 at gmail.com Sat Nov 8 08:07:26 2008 From: jdh2358 at gmail.com (John Hunter) Date: Sat, 8 Nov 2008 07:07:26 -0600 Subject: [SciPy-user] spreadsheet data visualisation app In-Reply-To: References: <88fe22a0811060815p1a4889a1n62bb41399bb9f3b6@mail.gmail.com> <88e473830811060844h2516ce2bta014eee3e6dc9857@mail.gmail.com> <49143E89.9000903@usna.edu> Message-ID: <88e473830811080507m542b244em8281b19d1eb76314@mail.gmail.com> On Sat, Nov 8, 2008 at 6:53 AM, David Warde-Farley wrote: > without error at the python prompt. Hopefully the matplotlib package > for Ubuntu has the GTK backend built in and they just forgot to tag > the dependency; if not, you may be stuck compiling from source (which > admittedly is not hard). Actually, you do not necessarily need to use a gtk* backend to use this feature. The gtktools are just things I use when embedding matplotlib in gtkapps, but the rec2gtk view is not dependent on any mpl backend as it simply creates a gtk treeview in a gtk scroll window from a rec array. So yes, he will need to install pygtk, but shouldn't have any problems if he simply wants to use rec2gtk from the python shell or embedded in a gtk app. There is one caveat to this -- if you want to use this feature interactively from the ipython shell and use pylab at the same time, then you will need a gtk backend. E.g. in the example I posted, when I did rec2gtk from the ipython shell, that only works properly if ipython is in gthread mode, which it will be if you are running -pylab with a gtk* backend.
JDH From wdj at usna.edu Sat Nov 8 09:15:20 2008 From: wdj at usna.edu (David Joyner) Date: Sat, 8 Nov 2008 09:15:20 -0500 (EST) Subject: [SciPy-user] spreadsheet data visualisation app Message-ID: <20081108091520.AOA38181@mp2.nettest.usna.edu> Unfortunately, the intrepid ibex ubuntu version of matplotlib seems to be too old, though pygtk and friends are easy to install using apt-get. I actually prefer to use matplotlib within Sage (as in, www.sagemath.org), but there pygtk is not easy for me to install. If your comment below says gtktools is not needed, then I don't understand how to use matplotlib to display a csv file. Is there a different command you had in mind? ---- Original message ---- >Date: Sat, 8 Nov 2008 07:07:26 -0600 >From: "John Hunter" >Subject: Re: [SciPy-user] spreadsheet data visualisation app >To: "SciPy Users List" > >On Sat, Nov 8, 2008 at 6:53 AM, David Warde-Farley wrote: > >> without error at the python prompt. Hopefully the matplotlib package >> for Ubuntu has the GTK backend built in and they just forgot to tag >> the dependency; if not, you may be stuck compiling from source (which >> admittedly is not hard). > >Actually, you do not necessarily need to use a gtk* backend to use >this feature. The gtktools are just things I use when embedding >matplotlib in gtkapps, but the rec2gtk view is not dependent on any >mpl backend as it simply creates a gtk treeview in a gtk scroll >window from a rec array. So yes, he will need to install pygtk, but >shouldn't have any problems if he simply wants to use rec2gtk from the >python shell or embedded in a gtk app. > >There is one caveat to this -- if you want to use this feature >interactively from the ipython shell and use pylab at the same time, >then you will need a gtk backend. E.g. in the example I posted, when I >did rec2gtk from the ipython shell, that only works properly if >ipython is in gthread mode, which it will be if you are running -pylab >with a gtk* backend. > >JDH >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user From ch.monty.burns at googlemail.com Sat Nov 8 08:33:22 2008 From: ch.monty.burns at googlemail.com (Charles Monty Burns) Date: Sat, 8 Nov 2008 14:33:22 +0100 Subject: [SciPy-user] Fitting Problem with line in 3D-space In-Reply-To: References: Message-ID: Thanks for your reply. Can you test your code with the points at http://pastebin.mozilla.org/570655 ? The result should be the direction vector (0, 0, 1). I constructed these points with the direction vector along the z-axis. Greetings On Sat, Nov 8, 2008 at 1:59 PM, Pauli Virtanen wrote: > Sat, 08 Nov 2008 03:53:39 +0100, Charles Monty Burns wrote: > > > Hello, > > > > I am trying to get an axis of a cylinder in 3D-space using the leastsq > > method. > > > > My model equation is: > > > > \vec u \times (\vec r - \vec r_0) = \vec 0 > > > > u ... the direction vector > > r ... the independent vector > > r_0 ... one position vector on the line > > > > Can somebody tell me how to construct the functions leastsq needs? > > > > With a simple function like y=x*... it's very simple ...
but not with that > > line > > Like this, for example: > > ----------------------------------------- > from scipy.optimize import leastsq > import numpy as np > > points = np.loadtxt('points.dat') # data file with 3 columns > > def params(p): > return p[:3]/np.linalg.norm(p[:3]), p[3:] > > def model(p): > u, v0 = params(p) > return np.cross(points - v0, u).ravel() > > result, ier = leastsq(model, [1, 0, 0, 0, 0, 0]) > u, v0 = params(result) > > print u > print v0 > ----------------------------------------- > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.gnata at gmail.com Sat Nov 8 14:12:42 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sat, 08 Nov 2008 20:12:42 +0100 Subject: [SciPy-user] cephes.pbdv test fails Message-ID: <4915E4AA.5010202@gmail.com> Hi, On my box, from scipy.special import * import scipy.special._cephes as cephes cephes.pbdv(1,0) returns (0.0, 1.0) As a result, assert_equal(cephes.pbdv(1,0),(0.0,0.0)) from "/usr/lib/python2.5/site-packages/scipy/special/tests/test_basic.py",line 357 fails. First, can someone reproduce this bug? I'm using ubuntu intrepid 64 bits. Second, if my understanding of what pbdv is supposed to do is correct, (0.0, 1.0) is the correct answer. http://mathworld.wolfram.com/ParabolicCylinderFunction.html Xavier From nwagner at iam.uni-stuttgart.de Sun Nov 9 12:40:37 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 09 Nov 2008 18:40:37 +0100 Subject: [SciPy-user] audiolab Message-ID: Hi all, How do I write numpy arrays to sound files ? Nils From coughlan at ski.org Sun Nov 9 14:13:12 2008 From: coughlan at ski.org (James Coughlan) Date: Sun, 09 Nov 2008 11:13:12 -0800 Subject: [SciPy-user] audiolab In-Reply-To: References: Message-ID: <49173648.20101@ski.org> Nils Wagner wrote: > Hi all, > > How do I write numpy arrays to sound files ? > > Nils > > Hi, This works for me, at least on Windows. Best, James from numpy import arange, sin from waveio import exportWave w=sin(arange(0,2000,0.5)) #simple waveform exportWave('test.wav',8000,w) #8000=sampling rate in Hz. From rob.clewley at gmail.com Sun Nov 9 21:59:01 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Sun, 9 Nov 2008 21:59:01 -0500 Subject: [SciPy-user] Event handling in odeint In-Reply-To: References: Message-ID: Hi Nils, > Thank you for your prompt response. Sorry this new reply was a lot less prompt. I was making sure a small bug was fixed in exactly this code before I mentioned it again. > Is there an example illustrating the handling of events ? In the previously-mentioned SVN's new file vode_event_test1.py, you can see an example where an integration-terminating event is defined like this ev_args_term = {'name': 'threshold', 'eventtol': 1e-4, 'eventdelay': 1e-5, 'starttime': 0, 'active': True, 'term': True, 'precise': True} thresh_ev_term = Events.makeZeroCrossEvent('w-p_thresh', -1, ev_args_term, varnames=['w'], parnames=['p_thresh']) This creates a threshold event that triggers when the ODE variable w *decreases* through the value p_thresh. For details, including how this is related to the ODE definition, see the PyDSTool wiki page Events and the SVN file.
After integration of the ODE, you might do this: >>> term_evs_found = testODE.getEvents()['threshold'] >>> term_evs_found.info() Pointset (parameterized) Independent variable: t: [ 2.3469947] Coordinates: w: [-0.25000045] Labels by index: Empty -Rob From cournape at gmail.com Mon Nov 10 00:48:02 2008 From: cournape at gmail.com (David Cournapeau) Date: Mon, 10 Nov 2008 14:48:02 +0900 Subject: [SciPy-user] audiolab In-Reply-To: References: Message-ID: <5b8d13220811092148l3025815cv2a4cfae3d17ff6bd@mail.gmail.com> On Mon, Nov 10, 2008 at 2:40 AM, Nils Wagner wrote: > Hi all, > > How do I write numpy arrays to sound files ? > If you have very basic needs, then the python stdlib (as given by James' example) is enough. If you want more control/other file format, then audiolab may be more appropriate. There is a simple API (the so-called matlab API) for wav, aiff, flac and a few other formats, David From nwagner at iam.uni-stuttgart.de Mon Nov 10 02:30:42 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Nov 2008 08:30:42 +0100 Subject: [SciPy-user] audiolab In-Reply-To: <49173648.20101@ski.org> References: <49173648.20101@ski.org> Message-ID: On Sun, 09 Nov 2008 11:13:12 -0800 James Coughlan wrote: > Nils Wagner wrote: >> Hi all, >> >> How do I write numpy arrays to sound files ? >> >> Nils >> >> > Hi, > > This works for me, at least on Windows. > > Best, > > James > > > from numpy import arange, sin > from waveio import exportWave > w=sin(arange(0,2000,0.5)) #simple waveform > exportWave('test.wav',8000,w) #8000=sampling rate in Hz. > Hi James, where can I get the waveio module, and how do I install it? Nils From grh at mur.at Mon Nov 10 04:28:35 2008 From: grh at mur.at (Georg Holzmann) Date: Mon, 10 Nov 2008 10:28:35 +0100 Subject: [SciPy-user] audiolab In-Reply-To: <5b8d13220811092148l3025815cv2a4cfae3d17ff6bd@mail.gmail.com> References: <5b8d13220811092148l3025815cv2a4cfae3d17ff6bd@mail.gmail.com> Message-ID: <4917FEC3.4090302@mur.at> Hello! If you want to have the exact commands: 1) build audiolab (http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/audiolab/) 2) in python: from scikits.audiolab import wavread, wavwrite # read a wave file (audiodata, samplingrate, encoding) = wavread("yourfile.wav") # write a wave file wavwrite(audiodata_as_numpy_array, "youroutputfile.wav", samplingrate) Best regards, Georg David Cournapeau schrieb: > On Mon, Nov 10, 2008 at 2:40 AM, Nils Wagner > wrote: >> Hi all, >> >> How do I write numpy arrays to sound files ? >> > > If you have very basic needs, then the python stdlib (as given by James' > example) is enough. If you want more control/other file format, then > audiolab may be more appropriate. There is a simple API (the so-called > matlab API) for wav, aiff, flac and a few other formats, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From massimo.sandal at unibo.it Mon Nov 10 05:29:05 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 10 Nov 2008 11:29:05 +0100 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <49135C11.5050309@gmail.com> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> Message-ID: <49180CF1.5030508@unibo.it> Bruce Southey wrote: > It is possible to be correct if the values of y are large and > sufficiently variable. y values should be in the 10**-10 range...
> But, based on your comment about the fit, and given that the > correlation in the matrix above is -0.98, my expectation is that there > is almost no error/residual variation left. The residual variance should > be very small (sum of squared residuals divided by degrees of freedom). Is the sum of squared residuals / degrees of freedom a residual variance... of what parameters? Sorry again, but I'm not that good at non-linear fitting theory. > You don't provide enough details, but your two x variables would appear > to be virtually correlated, given the very high correlation. There > are other reasons, but without the data etc. I can not guess. I'll try to sketch up a script reproducing the core of the problem with actual data. m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Mon Nov 10 08:51:10 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Nov 2008 14:51:10 +0100 Subject: [SciPy-user] optimize.bisect Message-ID: Hi all, how can I use optimize.bisect if the function f is only given at discrete points instead of a continuous function like sin(x) ? Cheers, Nils From aisaac at american.edu Mon Nov 10 09:10:01 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 10 Nov 2008 09:10:01 -0500 Subject: [SciPy-user] optimize.bisect In-Reply-To: References: Message-ID: <491840B9.5040509@american.edu> On 11/10/2008 8:51 AM Nils Wagner apparently wrote: > how can I use optimize.bisect if the function f is only > given at discrete points instead of a continuous function > like sin(x) ? Do you have the points ``x``? Then shouldn't you just take x[np.argmin(np.abs(f(x)))]? What is the point of using an algorithm designed for continuous functions?
Alan Isaac From nwagner at iam.uni-stuttgart.de Mon Nov 10 09:18:57 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Nov 2008 15:18:57 +0100 Subject: [SciPy-user] optimize.bisect In-Reply-To: <491840B9.5040509@american.edu> References: <491840B9.5040509@american.edu> Message-ID: On Mon, 10 Nov 2008 09:10:01 -0500 Alan G Isaac wrote: > On 11/10/2008 8:51 AM Nils Wagner apparently wrote: >> how can I use optimize.bisect if the function f is only >> given at discrete points instead of a continuous >> function >> like sin(x) ? > > Do you have the points ``x``? > Then shouldn't you just take > x[np.argmin(np.abs(f(x)))]? > What is the point of using an algorithm > designed for continuous functions? > > Alan Isaac Hi Alan, Thank you very much. I had overlooked your approach. Nils From nwagner at iam.uni-stuttgart.de Mon Nov 10 09:26:34 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Nov 2008 15:26:34 +0100 Subject: [SciPy-user] optimize.bisect In-Reply-To: <491840B9.5040509@american.edu> References: <491840B9.5040509@american.edu> Message-ID: On Mon, 10 Nov 2008 09:10:01 -0500 Alan G Isaac wrote: > On 11/10/2008 8:51 AM Nils Wagner apparently wrote: >> how can I use optimize.bisect if the function f is only >> given at discrete points instead of a continuous >> function >> like sin(x) ? > > Do you have the points ``x``? > Then shouldn't you just take > x[np.argmin(np.abs(f(x)))]? Your approach returns a single root. How do I extract successive zeros of a sampled signal ? Nils From aisaac at american.edu Mon Nov 10 09:54:40 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 10 Nov 2008 09:54:40 -0500 Subject: [SciPy-user] optimize.bisect In-Reply-To: References: <491840B9.5040509@american.edu> Message-ID: <49184B30.90309@american.edu> >> On 11/10/2008 8:51 AM Nils Wagner apparently wrote: >>> how can I use optimize.bisect if the function f is only >>> given at discrete points instead of a continuous >>> function >>> like sin(x) ? > On Mon, 10 Nov 2008 09:10:01 -0500 Alan G Isaac wrote: >> Do you have the points ``x``? >> Then shouldn't you just take >> x[np.argmin(np.abs(f(x)))]? On 11/10/2008 9:26 AM Nils Wagner apparently wrote: > Your approach returns a single root. > How do I extract successive zeros of a sampled signal ? Ah, you had proposed a bisection algorithm, which will also return a single zero... I don't know a good answer to this new question, but you could look at the diff of f(x)>0. This should get you close: >>> x = np.linspace(-1,1,21) >>> def f(x): return x**2 - 0.5 ... >>> fd = np.diff( f(x)>0 ) >>> z = x[fd>0] >>> z array([-0.8, 0.7]) Cheers, Alan
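A refinement of Alan's sketch: once the sign changes are located, a linear interpolation inside each bracketing interval gives the crossing positions to better than the grid spacing (the function here is just the toy example above):

import numpy as np

x = np.linspace(-1, 1, 21)
fx = x**2 - 0.5

s = np.sign(fx)
idx = np.where(s[:-1] * s[1:] < 0)[0]   # intervals that bracket a sign change

# linear interpolation inside each bracketing interval
roots = x[idx] - fx[idx] * (x[idx+1] - x[idx]) / (fx[idx+1] - fx[idx])
print(roots)   # approximately [-0.7067, 0.7067], vs. the exact +/-0.7071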
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user There is some problem with your model with respect to your data. Looking at the plot of x and y, the relationship is linear with a correlation of 0.86. There is no hint of a non-linear relationship although a spline or similar local polynomial method could give a nicer fit. I do not know what you would expect to see from your function but you should also plot the expected model using typical values of your parameters. I would suggest you explore fitting polynomial models first (could only get a linear term for x in what you provided) and splines before doing nonlinear models. Bruce From massimo.sandal at unibo.it Mon Nov 10 10:29:02 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 10 Nov 2008 16:29:02 +0100 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <49184DC8.50304@gmail.com> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> <49180CF1.5030508@unibo.it> <49181740.8000407@unibo.it> <49184DC8.50304@gmail.com> Message-ID: <4918533E.4040406@unibo.it> Bruce Southey wrote: > massimo sandal wrote: >> massimo sandal wrote: >> >>> I'll try to sketch up a script reproducing the core of the problem >>> with actual data. >> Here it is. Can anyone give it a look to help me understand if and how >> to make sense of the covariance matrix? >> >> m. > There is some problem with your model with respect to your data. Looking > at the plot of x and y, the relationship is linear with a correlation of > 0.86. There is no hint of a non-linear relationship although a spline or > similar local polynomial method could give a nicer fit. I do not know > what you would expect to see from your function but you should also plot > the expected model using typical values of your parameters. > > I would suggest you explore fitting polynomial models first (could only > get a linear term for x in what you provided) and splines before doing > nonlinear models. The kind of things I am fitting is a single molecule force spectroscopy force curve: see for example http://www.jpk.com/unfolding-of-individual-titin-i27-octamer-i91-8.media.f229c5303ada22eb0a4ebe759457750av2.gif and I am fitting peaks using the worm-like chain equation (actually, in the script I use the inverse valus of the parameters) that you can find here: http://en.wikipedia.org/wiki/Worm-like_chain with results looking like that: http://www.jpk.com/titin-force-extension-profile.media.37bc90d1dbd105742098ed8317385a48v1.gif http://biology.plosjournals.org/perlserv/?request=slideshow&type=figure&doi=10.1371/journal.pbio.0060006&id=93367 The model is non-linear because the physics underlying the data is non-linear. I am not "choosing" the equation*, I am applying that equation to find parameters from the curve. What I have pasted is just the section of a much larger data plot. The section can seem almost linear, but the non-linear fit on that section fits perfectly also the remaining sections -as expected. Fitting the whole peak or only the last portion of it does not change significantly the fit or the output parameters. The whole software I am working on is Hooke, available at http://code.google.com/p/hooke , in case anyone is interested. m. 
*strictly speaking there are subtly different models to choose from indeed (WLC, FJC, etc.), but WLC is the simplest and most widespread, and is enough for what I mean to do -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From Laurent.Perrinet at incm.cnrs-mrs.fr Mon Nov 10 10:56:03 2008 From: Laurent.Perrinet at incm.cnrs-mrs.fr (Laurent Perrinet) Date: Mon, 10 Nov 2008 16:56:03 +0100 Subject: [SciPy-user] audiolab In-Reply-To: <4917FEC3.4090302@mur.at> References: <5b8d13220811092148l3025815cv2a4cfae3d17ff6bd@mail.gmail.com> <4917FEC3.4090302@mur.at> Message-ID: <3DBECFA7-1DB7-41DC-AF9A-19EC96059231@incm.cnrs-mrs.fr> Hi! You may find this example useful: http://neuralensemble.org/trac/NeuroTools/browser/trunk/examples/single_neuron/playing_with_simple_single_neuron.py We used scikits.audiolab to record a numpy array and pyaudio to play it on your computer. (and btw, it's an educational example, not scientific!) cheers laurent On 10 Nov 08, at 10:28, Georg Holzmann wrote: > > Hello! > > If you want to have the exact commands: > > 1) build audiolab > (http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/audiolab/) > > 2) in python: > > from scikits.audiolab import wavread, wavwrite > > # read a wave file > (audiodata, samplingrate, encoding) = wavread("yourfile.wav") > > # write a wave file > wavwrite(audiodata_as_numpy_array, "youroutputfile.wav", samplingrate) > > Regards, > Georg > > > > David Cournapeau schrieb: >> On Mon, Nov 10, 2008 at 2:40 AM, Nils Wagner >> wrote: >>> Hi all, >>> >>> How do I write numpy arrays to sound files ? >>> >> >> If you have very basic needs, then the python stdlib (as given by James' >> example) is enough. If you want more control or other file formats, then >> audiolab may be more appropriate. There is a simple API (the so-called >> matlab API) for wav, aiff, flac and a few other formats, >> >> David >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bsouthey at gmail.com Mon Nov 10 12:37:06 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 10 Nov 2008 11:37:06 -0600 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <4918533E.4040406@unibo.it> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> <49180CF1.5030508@unibo.it> <49181740.8000407@unibo.it> <49184DC8.50304@gmail.com> <4918533E.4040406@unibo.it> Message-ID: <49187142.6020805@gmail.com> massimo sandal wrote: > Bruce Southey wrote: >> massimo sandal wrote: >>> massimo sandal wrote: >>> >>>> I'll try to sketch up a script reproducing the core of the problem >>>> with actual data. >>> Here it is. Can anyone give it a look to help me understand if and >>> how to make sense of the covariance matrix? >>> >>> m. >> There is some problem with your model with respect to your data. >> Looking at the plot of x and y, the relationship is linear with a >> correlation of 0.86.
There is no hint of a non-linear relationship >> although a spline or similar local polynomial method could give a >> nicer fit. I do not know what you would expect to see from your >> function but you should also plot the expected model using typical >> values of your parameters. >> >> I would suggest you explore fitting polynomial models first (could >> only get a linear term for x in what you provided) and splines before >> doing nonlinear models. > > The kind of thing I am fitting is a single molecule force > spectroscopy force curve: see for example > http://www.jpk.com/unfolding-of-individual-titin-i27-octamer-i91-8.media.f229c5303ada22eb0a4ebe759457750av2.gif > > and I am fitting peaks using the worm-like chain equation (actually, > in the script I use the inverse values of the parameters) that you can > find here: > http://en.wikipedia.org/wiki/Worm-like_chain > > with results looking like this: > http://www.jpk.com/titin-force-extension-profile.media.37bc90d1dbd105742098ed8317385a48v1.gif > > http://biology.plosjournals.org/perlserv/?request=slideshow&type=figure&doi=10.1371/journal.pbio.0060006&id=93367 > > The model is non-linear because the physics underlying the data is > non-linear. I am not "choosing" the equation*, I am applying that > equation to find parameters from the curve. > > What I have pasted is just a section of a much larger data plot. The > section can seem almost linear, but the non-linear fit on that section > fits perfectly also the remaining sections - as expected. > Fitting the whole peak or only the last portion of it does not change > the fit or the output parameters significantly. > > The whole software I am working on is Hooke, available at > http://code.google.com/p/hooke , in case anyone is interested. > > m. > > *strictly speaking there are subtly different models to choose from > indeed (WLC, FJC, etc.), but WLC is the simplest and most widespread > and is enough for what I mean to do > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user What are the actual parameters that you think you are trying to estimate here? What are y and x relative to the equation? In particular, is y=F*P or just F or P? Your parameter therm is a constant, so I would first compute Y/therm before doing anything else, or just ignore it. Also, you probably need to rescale both x and y because these are either very small or very large. Perhaps even standardize x to mean 0 and variance 1. However, you do need to be very careful here. Getting nonlinear models to converge to 'correct' parameters is often more an art than a science. However, I still don't think you have the data to estimate this function as there are no clear 'high' and 'low' points. I also don't think this model will describe the patterns provided by the links (it is not even clear how your data relates to these images).
Bruce From massimo.sandal at unibo.it Mon Nov 10 13:23:15 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 10 Nov 2008 19:23:15 +0100 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <49187142.6020805@gmail.com> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> <49180CF1.5030508@unibo.it> <49181740.8000407@unibo.it> <49184DC8.50304@gmail.com> <4918533E.4040406@unibo.it> <49187142.6020805@gmail.com> Message-ID: <49187C13.9030602@unibo.it> Bruce Southey wrote: > What are the actual parameters that you think you are trying to estimate > here? persistence length and contour length (Lo and P in the script). > What are y and x relative to the equation? In particular, is y=F*P or just > F or P? I don't understand that. y is a force, x is a distance. > Your parameter therm is a constant, so I would first compute Y/therm > before doing anything else, or just ignore it. This is a nice idea, thanks. > Also, you probably need to rescale both x and y because these are either > very small or very large. Why? Is there any numerical error waiting, you mean? > Perhaps even standardize x to mean 0 and > variance 1. However, you do need to be very careful here. Getting > nonlinear models to converge to 'correct' parameters is often more an art > than a science. I think I have been misunderstood. The nonlinear model converges *very correctly*, and the parameters I find are in *excellent agreement* with expected values in practically all cases. What I am asking for is a way to estimate the sigma I have on these parameters from a single fit. The covariance matrix gives me what, in my naivety, look like unreasonably enormous covariance values. This to me seems very odd, given that I can estimate the correct length of a roughly 30-nm protein module, as measured on several peaks, with a 1.5 nm sigma. > However, I still don't think you have the data to estimate this function > as there are no clear 'high' and 'low' points. I also don't think this > model will describe the patterns provided by the links (it is not even clear > how your data relates to these images). The model describes peaks on force spectroscopy curves very well - it's the standard model used in the literature for that. The model does not describe the *whole* sawtooth curve, but only *each* rising portion of these peaks. The data I put in the script is just an interval of the whole dataset where I fit. I have curves with lots of peaks. In the software, I have a function that allows me to click two points on the curve and have the WLC fitted to the data interval between the two points. I just pasted that interval and put it in a small script to give the mailing list actual stuff to help me. m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed...
Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From robert.kern at gmail.com Mon Nov 10 15:08:52 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Nov 2008 14:08:52 -0600 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <49181740.8000407@unibo.it> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> <49180CF1.5030508@unibo.it> <49181740.8000407@unibo.it> Message-ID: <3d375d730811101208g41e7be3elbd881c66d694fa42@mail.gmail.com> On Mon, Nov 10, 2008 at 05:13, massimo sandal wrote: > massimo sandal wrote: > >> I'll try to sketch up a script reproducing the core of the problem with >> actual data. > > Here it is. Can anyone give it a look to help me understand if and how to > make sense of the covariance matrix? The covariance matrix does need some scaling before it can be interpreted statistically. Basically, if you are doing nonlinear least squares as a statistical procedure, rather than a purely numerical one, the residuals need to be scaled so that they are in units of standard deviations of the measurement error for each individual measurement. If you don't know what that is, then you can estimate it from the fitted residuals. The parameter estimate is unchanged, but you will need to rescale the covariance matrix of the estimate by multiplying it by the residual variance. scipy.odr does most of this for you. Attached is a version of your code using scipy.odr. Here is the text output: Fitted parameters: [ 4.90666526e+06 4.78090340e+09] Covariance: [[ 1.72438988e+31 -1.64258997e+35] [ -1.64258997e+35 1.57791262e+39]] Residual variance: 2.83606592894e-22 Scaled error bars: [ 6.99319913e+04 6.68959208e+08] Scaled covariance: [[ 4.89048340e+09 -4.65849344e+13] [ -4.65849344e+13 4.47506422e+17]] -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -------------- next part -------------- A non-text attachment was scrubbed... Name: odr_wlc_cov.py Type: text/x-python Size: 3037 bytes Desc: not available URL:
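To make the rescaling Robert describes concrete, here is a minimal self-contained sketch using optimize.leastsq on made-up data (the model and numbers are placeholders, not the attached script):

import numpy as np
from scipy.optimize import leastsq

# hypothetical data: y = a*exp(b*x) plus noise
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.3 * x) + 0.05 * np.random.randn(50)

def residuals(p, x, y):
    a, b = p
    return y - a * np.exp(b * x)

p0 = [1.0, 1.0]
p_opt, cov, infodict, mesg, ier = leastsq(residuals, p0, args=(x, y),
                                          full_output=True)

# residual variance = sum of squared residuals / degrees of freedom
dof = len(x) - len(p_opt)
s_sq = (infodict['fvec'] ** 2).sum() / dof

scaled_cov = cov * s_sq                  # statistically meaningful covariance
err_bars = np.sqrt(np.diag(scaled_cov))  # one-sigma error bars on p_opt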
From wbrevis at gmail.com Mon Nov 10 18:35:56 2008 From: wbrevis at gmail.com (wbrevis) Date: Mon, 10 Nov 2008 15:35:56 -0800 (PST) Subject: [SciPy-user] Plotting vector field + velocity magnitude in background Message-ID: <2498aa29-c764-4153-b5ff-f5f0beadd9b8@n1g2000prb.googlegroups.com> Hello all, I'm trying to plot some of my experimental data using scipy. Until now, all the work I did was using Matlab. For one of my normal data visualizations, I read ASCII or binary files containing 4 columns: the first contains the x coordinate, the second the y one, and the third and fourth columns the velocity in the x and y directions (u and v), i.e. file = x y u v (ordered in columns). After reading the data in Matlab, I normally do: pcolor(x,y,sqrt(u.^2+v.^2)), in order to visualize the velocity magnitude in colors, and then quiver(x,y,u,v) in order to see the associated vectors. I was reading the manual of scipy, including the plotting tools, but I am a bit lost (too much information to start). Can somebody help me with suggestions on how to read data using scipy and the best way to plot (pcolor+quiver)? What about the function quiver3d of mlab: can it be used for a 2D representation of a flow field, together with surf (also mlab)? Thank you in advance for your help and suggestions W. Brevis From robert.kern at gmail.com Mon Nov 10 21:04:28 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Nov 2008 20:04:28 -0600 Subject: [SciPy-user] Plotting vector field + velocity magnitude in background In-Reply-To: <2498aa29-c764-4153-b5ff-f5f0beadd9b8@n1g2000prb.googlegroups.com> References: <2498aa29-c764-4153-b5ff-f5f0beadd9b8@n1g2000prb.googlegroups.com> Message-ID: <3d375d730811101804j5f2b7820ra43f6239cdeaeecc@mail.gmail.com> On Mon, Nov 10, 2008 at 17:35, wbrevis wrote: > Hello all, > > I'm trying to plot some of my experimental data using scipy. Well, scipy doesn't have any plotting tools. I assume that you are asking about using matplotlib for 2D plots and Mayavi for 3D plots. Ask on matplotlib-users for help with matplotlib and enthought-dev for help with Mayavi. https://lists.sourceforge.net/lists/listinfo/matplotlib-users https://mail.enthought.com/mailman/listinfo/enthought-dev -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue Nov 11 00:43:31 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Nov 2008 23:43:31 -0600 Subject: [SciPy-user] Need new spam cleaner on scipy.org Message-ID: <3d375d730811102143v3fbfedebwa904b2ab08214a8@mail.gmail.com> I've been deleting all of the spam on the www.scipy.org wiki for some time now. I'm done. If you would like to jump in, please do so. If you need permissions to use the "Delete Page" and "Remove Spam" actions, let me know, and I will give them to you. If you have a better idea how to manage the spam, please coordinate with Jarrod. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From william.ratcliff at gmail.com Tue Nov 11 01:09:15 2008 From: william.ratcliff at gmail.com (william ratcliff) Date: Tue, 11 Nov 2008 01:09:15 -0500 Subject: [SciPy-user] Need new spam cleaner on scipy.org In-Reply-To: <3d375d730811102143v3fbfedebwa904b2ab08214a8@mail.gmail.com> References: <3d375d730811102143v3fbfedebwa904b2ab08214a8@mail.gmail.com> Message-ID: <827183970811102209j61fc270fo7c4f0baf21c113b4@mail.gmail.com> Is there any way to put up one of those stupid yahoo-esque puzzle tests in order to post to the wiki? On Tue, Nov 11, 2008 at 12:43 AM, Robert Kern wrote: > I've been deleting all of the spam on the www.scipy.org wiki for some > time now. I'm done. If you would like to jump in, please do so. If you > need permissions to use the "Delete Page" and "Remove Spam" actions, > let me know, and I will give them to you. If you have a better idea > how to manage the spam, please coordinate with Jarrod. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pav at iki.fi Tue Nov 11 04:11:47 2008 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 11 Nov 2008 09:11:47 +0000 (UTC) Subject: [SciPy-user] Need new spam cleaner on scipy.org References: <3d375d730811102143v3fbfedebwa904b2ab08214a8@mail.gmail.com> <827183970811102209j61fc270fo7c4f0baf21c113b4@mail.gmail.com> Message-ID: Tue, 11 Nov 2008 01:09:15 -0500, william ratcliff wrote: > Is there any way to put up one of those stupid yahoo-esque puzzle tests > in order to post to the wiki? MoinMoin 1.6 has them, http://moinmo.in/HelpOnTextChas but, as I understand, the version running on scipy.org is older than this and someone would need to upgrade it first. (But I believe this would pay off -- spending time on deleting spam manually is nearly wasted time IMO.) To use this feature sensibly, some people would have to maintain a list of known-good users, so that frequent editors wouldn't have to fill in the CAPTCHA on every edit. Since passing this test is required also on account registration, one could perhaps use Known as the textchas_disabled_group to limit CAPTCHAs to registration, and not to editing. -- Pauli Virtanen
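A minimal wikiconfig sketch of the setup Pauli describes (the question, the answer pattern and the group usage are placeholders; see the HelpOnTextChas page for the real options):

# in a MoinMoin >= 1.6 wikiconfig.py
textchas = {
    'en': {
        u"Which language is SciPy written in (6 letters)?": ur"[Pp]ython",
    },
}
# members of this group skip the questions; using Known, as suggested
# above, would limit the textchas to account registration
textchas_disabled_group = u"Known"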
From timmichelsen at gmx-topmail.de Tue Nov 11 06:02:01 2008 From: timmichelsen at gmx-topmail.de (Timmie) Date: Tue, 11 Nov 2008 11:02:01 +0000 (UTC) Subject: [SciPy-user] scikits trac Message-ID: Hello, is there any option to tell the trac installation at http://scipy.org/scipy/scikits to send me emails if there is a status change or new comment for the issues I am involved in (as reporter or commenter)? I have only found the option to change the password. Thanks in advance, Timmie From massimo.sandal at unibo.it Tue Nov 11 07:15:39 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Tue, 11 Nov 2008 13:15:39 +0100 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <3d375d730811101208g41e7be3elbd881c66d694fa42@mail.gmail.com> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> <49180CF1.5030508@unibo.it> <49181740.8000407@unibo.it> <3d375d730811101208g41e7be3elbd881c66d694fa42@mail.gmail.com> Message-ID: <4919776B.1020007@unibo.it> Robert Kern wrote: > On Mon, Nov 10, 2008 at 05:13, massimo sandal wrote: >> massimo sandal wrote: >> >>> I'll try to sketch up a script reproducing the core of the problem with >>> actual data. >> Here it is. Can anyone give it a look to help me understand if and how to >> make sense of the covariance matrix? > > The covariance matrix does need some scaling before it can be > interpreted statistically. Basically, if you are doing nonlinear least > squares as a statistical procedure, rather than a purely numerical > one, the residuals need to be scaled so that they are in units of > standard deviations of the measurement error for each individual > measurement. If you don't know what that is, then you can estimate it > from the fitted residuals. The parameter estimate is unchanged, but > you will need to rescale the covariance matrix of the estimate by > multiplying it by the residual variance. > > scipy.odr does most of this for you. Attached is a version of your > code using scipy.odr. Here is the text output: > > Fitted parameters: [ 4.90666526e+06 4.78090340e+09] > Covariance: [[ 1.72438988e+31 -1.64258997e+35] > [ -1.64258997e+35 1.57791262e+39]] > Residual variance: 2.83606592894e-22 > Scaled error bars: [ 6.99319913e+04 6.68959208e+08] > Scaled covariance: [[ 4.89048340e+09 -4.65849344e+13] > [ -4.65849344e+13 4.47506422e+17]] Thanks a lot! What I need are the scaled error bars, is that right? (By the way: any good tutorial reference/book on these kinds of numerical things? I am a molecular biologist now doing biophysics, and while enjoying it a lot, I feel behind on a lot of technical stuff) Thanks again, m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From cournape at gmail.com Tue Nov 11 07:51:29 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 11 Nov 2008 21:51:29 +0900 Subject: [SciPy-user] scikits trac In-Reply-To: References: Message-ID: <5b8d13220811110451q78682051v2e161e83005ce05d@mail.gmail.com> On Tue, Nov 11, 2008 at 8:02 PM, Timmie wrote: > Hello, > is there any option to tell the trac installation at > http://scipy.org/scipy/scikits > to send me emails if there is a status change or new comment for the issues I am > involved in (as reporter or commenter)? I am not 100% sure, but you could try to add yourself to the CC field of a ticket. cheers, David From kdere at gmu.edu Tue Nov 11 12:07:37 2008 From: kdere at gmu.edu (Ken Dere) Date: Tue, 11 Nov 2008 17:07:37 +0000 (UTC) Subject: [SciPy-user] Plotting vector field + velocity magnitude in background References: <2498aa29-c764-4153-b5ff-f5f0beadd9b8@n1g2000prb.googlegroups.com> Message-ID: wbrevis at gmail.com writes: > > Hello all, > > I'm trying to plot some of my experimental data using scipy. Until now, > all the work I did was using Matlab. [snip] Can somebody help me with suggestions on how to > read data using scipy and the best way to plot (pcolor+quiver)? What > about the function quiver3d of mlab: can it be used for a 2D representation > of a flow field, together with surf (also mlab)? > > Thank you in advance for your help and suggestions > > W. Brevis > matplotlib has the quiver command for 2D data. Ken Dere From robert.kern at gmail.com Tue Nov 11 15:27:11 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Nov 2008 14:27:11 -0600 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <4919776B.1020007@unibo.it> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> <49180CF1.5030508@unibo.it> <49181740.8000407@unibo.it> <3d375d730811101208g41e7be3elbd881c66d694fa42@mail.gmail.com> <4919776B.1020007@unibo.it> Message-ID: <3d375d730811111227v6e48686cu886d5d3fb3ac2640@mail.gmail.com> On Tue, Nov 11, 2008 at 06:15, massimo sandal wrote: > Robert Kern wrote: >> >> On Mon, Nov 10, 2008 at 05:13, massimo sandal >> wrote: >>> >>> massimo sandal wrote: >>> >>>> I'll try to sketch up a script reproducing the core of the problem with >>>> actual data. >>> >>> Here it is. Can anyone give it a look to help me understand if and how to >>> make sense of the covariance matrix? >> >> The covariance matrix does need some scaling before it can be >> interpreted statistically. Basically, if you are doing nonlinear least >> squares as a statistical procedure, rather than a purely numerical >> one, the residuals need to be scaled so that they are in units of >> standard deviations of the measurement error for each individual >> measurement.
If you don't know what that is, then you can estimate it >> from the fitted residuals. The parameter estimate is unchanged, but >> you will need to rescale the covariance matrix of the estimate by >> multiplying it by the residual variance. >> >> scipy.odr does most of this for you. Attached is a version of your >> code using scipy.odr. Here is the text output: >> >> Fitted parameters: [ 4.90666526e+06 4.78090340e+09] >> Covariance: [[ 1.72438988e+31 -1.64258997e+35] >> [ -1.64258997e+35 1.57791262e+39]] >> Residual variance: 2.83606592894e-22 >> Scaled error bars: [ 6.99319913e+04 6.68959208e+08] >> Scaled covariance: [[ 4.89048340e+09 -4.65849344e+13] >> [ -4.65849344e+13 4.47506422e+17]] > > Thanks a lot! What I need are the scaled error bars, is that right? You have a high anti-correlation (-0.996), so it is very much worth reporting the entire (scaled) covariance matrix. At least report the scaled error bars and the correlation factor. > (By the way: any good tutorial reference/book on these kinds of numerical > things? I am a molecular biologist now doing biophysics, and while enjoying > it a lot, I feel behind on a lot of technical stuff) Do you mean the statistical aspects of curve fitting? The book I learned this kind of stuff from is quite old, _Data Reduction and Error Analysis for the Physical Sciences_ by Bevington and Robinson, but it covers the practical basics of curve fitting and error analysis pretty well. Many statistics books cover similar ground, but speak a different language. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wolfgang.meyer at gmail.com Tue Nov 11 20:16:18 2008 From: wolfgang.meyer at gmail.com (Wolfgang Meyer) Date: Wed, 12 Nov 2008 02:16:18 +0100 Subject: [SciPy-user] how to stop optimization when using optimize.fmin_l_bfgs_b() Message-ID: In the optimization function scipy.optimize.fmin_l_bfgs_b(func, x0, fprime=None, args=(), approx_grad=0, bounds=None, m=10, factr=10000000.0, pgtol=1.0000000000000001e-05, epsilon=1e-08, iprint=-1, maxfun=15000) >>func<< is a function to minimize. Suppose during the optimization process I detect some errors inside >>func<< and want to stop the optimization process, how should I manoeuvre? Maybe return some random values from >>func<<? And similarly for >>fprime<<: what should I do to break the optimization process when I realize errors inside fprime? Thanks! -- Wolfgang Meyer -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Tue Nov 11 21:00:40 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Tue, 11 Nov 2008 21:00:40 -0500 Subject: [SciPy-user] how to stop optimization when using optimize.fmin_l_bfgs_b() In-Reply-To: References: Message-ID: Assuming you have access to 'func' so that you can modify it, just raise an exception when it detects errors. I think that should stop the optimization immediately. Maybe define your own subtype of exception, or just raise a ValueError, AssertionError or whatever seems most appropriate.
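A minimal sketch of this approach (the objective function and the error check are placeholders):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

class ObjectiveError(Exception):
    # raised inside the objective function to abort the optimization
    pass

def func(x):
    fx = (x[0] - 1.0)**2 + (x[1] + 2.5)**2
    if not np.isfinite(fx):   # stand-in for whatever error condition applies
        raise ObjectiveError("bad objective value at x = %s" % x)
    return fx

try:
    x_opt, f_opt, info = fmin_l_bfgs_b(func, [0.0, 0.0], approx_grad=True)
except ObjectiveError:
    print("optimization aborted by the objective function")

If, as Rob expects, the exception propagates out of fmin_l_bfgs_b(), the caller can catch it as above; the same idea would work inside an fprime callable.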
On Tue, Nov 11, 2008 at 8:16 PM, Wolfgang Meyer wrote: > In the optimization function > scipy.optimize.fmin_l_bfgs_b(func, x0, fprime=None, args=(), approx_grad=0, > bounds=None, m=10, factr=10000000.0, pgtol=1.0000000000000001e-05, > epsilon=1e-08, iprint=-1, maxfun=15000) > > >>func<< is a function to minimize. Suppose during the optimization process > I detect some errors inside >>func<< and want to stop the optimization > process, how should I manoeuvre? Maybe return some random values from > >>func<<? And similarly for >>fprime<<: what should I do to break the > optimization process when I realize errors inside fprime? > > Thanks! > -- > Wolfgang Meyer From rocksportrocker at googlemail.com Wed Nov 12 07:37:29 2008 From: rocksportrocker at googlemail.com (Uwe Schmitt) Date: Wed, 12 Nov 2008 04:37:29 -0800 (PST) Subject: [SciPy-user] scipy.sparse: read file Message-ID: <41064f5a-628c-4131-9dcc-0a9d39624bc9@a3g2000prm.googlegroups.com> Hi, I discovered the save-method which writes a sparse matrix to a file. How can I construct a sparse matrix from such a file? I found no appropriate method. Greetings, Uwe From rocksportrocker at googlemail.com Wed Nov 12 08:11:33 2008 From: rocksportrocker at googlemail.com (Uwe Schmitt) Date: Wed, 12 Nov 2008 05:11:33 -0800 (PST) Subject: [SciPy-user] scipy.sparse: read file In-Reply-To: <41064f5a-628c-4131-9dcc-0a9d39624bc9@a3g2000prm.googlegroups.com> References: <41064f5a-628c-4131-9dcc-0a9d39624bc9@a3g2000prm.googlegroups.com> Message-ID: I found a solution:

from numpy import zeros
from scipy.sparse import csc_matrix

fp = file(...)

fp.next()                  # skip the first line
num_data = int(fp.next())  # number of stored entries

data = zeros((num_data,), dtype=....)
ij = zeros((2, num_data), dtype=int)

for k, row in enumerate(fp):
    i, j, val = row.split()
    ij[0, k] = int(i)
    ij[1, k] = int(j)
    data[k] = float(val)

mat = csc_matrix((data, ij))

Greetings, Uwe On 12 Nov., 13:37, Uwe Schmitt wrote: > Hi, > > I discovered the save-method which writes a sparse matrix to a file. > How can I construct a sparse matrix from such a file? I found no > appropriate > method. > > Greetings, Uwe > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From bsouthey at gmail.com Wed Nov 12 09:51:11 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 12 Nov 2008 08:51:11 -0600 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <3d375d730811111227v6e48686cu886d5d3fb3ac2640@mail.gmail.com> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> <49180CF1.5030508@unibo.it> <49181740.8000407@unibo.it> <3d375d730811101208g41e7be3elbd881c66d694fa42@mail.gmail.com> <4919776B.1020007@unibo.it> <3d375d730811111227v6e48686cu886d5d3fb3ac2640@mail.gmail.com> Message-ID: <491AED5F.8050400@gmail.com> Robert Kern wrote: > On Tue, Nov 11, 2008 at 06:15, massimo sandal wrote: > >> Robert Kern wrote: > >>> On Mon, Nov 10, 2008 at 05:13, massimo sandal >>> wrote: >>> >>>> massimo sandal wrote: >>>> >>>> >>>>> I'll try to sketch up a script reproducing the core of the problem with >>>>> actual data. >>>>> >>>> Here it is. Can anyone give it a look to help me understand if and how to >>>> make sense of the covariance matrix? >>>> >>> The covariance matrix does need some scaling before it can be >>> interpreted statistically. Basically, if you are doing nonlinear least >>> squares as a statistical procedure, rather than a purely numerical >>> one, the residuals need to be scaled so that they are in units of >>> standard deviations of the measurement error for each individual >>> measurement. If you don't know what that is, then you can estimate it >>> from the fitted residuals. The parameter estimate is unchanged, but >>> you will need to rescale the covariance matrix of the estimate by >>> multiplying it by the residual variance. >>> >>> scipy.odr does most of this for you.
Attached is a version of your >>> code using scipy.odr. Here is the text output: >>> >>> Fitted parameters: [ 4.90666526e+06 4.78090340e+09] >>> Covariance: [[ 1.72438988e+31 -1.64258997e+35] >>> [ -1.64258997e+35 1.57791262e+39]] >>> Residual variance: 2.83606592894e-22 >>> Scaled error bars: [ 6.99319913e+04 6.68959208e+08] >>> Scaled covariance: [[ 4.89048340e+09 -4.65849344e+13] >>> [ -4.65849344e+13 4.47506422e+17]] >>> >> Thanks a lot! What I need are the scaled error bars, is that right? >> > > You have a high anti-correlation (-0.996), so it is very much worth > reporting the entire (scaled) covariance matrix. At least report the > scaled error bars and the correlation factor. > You also have a very bad fit, as the standard errors are huge relative to the estimates, meaning your parameters are not statistically different from zero. Physics is one thing, but the data tell a very different story, perhaps due to measurement errors. The linear regression of x on y gives a residual variance of 2.86121E-22 and R-squared is 73% (approx 73% for the model above). I don't see any evidence for a nonlinear fit, especially if you bother to plot the data. As I previously said, you probably would get a better fit using splines or similar because of the variability present. > >> (By the way: any good tutorial reference/book on these kinds of numerical >> things? I am a molecular biologist now doing biophysics, and while enjoying >> it a lot, I feel behind on a lot of technical stuff) >> > > Do you mean the statistical aspects of curve fitting? The book I > learned this kind of stuff from is quite old, _Data Reduction and > Error Analysis for the Physical Sciences_ by Bevington and Robinson, > but it covers the practical basics of curve fitting and error analysis > pretty well. Many statistics books cover similar ground, but speak a > different language. > > There are many books on nonlinear modeling, but these are usually focused on the author's experience and on specific aspects of nonlinear models. Bates and Watts, 'Nonlinear Regression Analysis and Its Applications', is one. Bruce From massimo.sandal at unibo.it Wed Nov 12 10:36:31 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 12 Nov 2008 16:36:31 +0100 Subject: [SciPy-user] scipy.optimize.leastsq and covariance matrix meaning In-Reply-To: <491AED5F.8050400@gmail.com> References: <491308E8.5060807@unibo.it> <49135C11.5050309@gmail.com> <49180CF1.5030508@unibo.it> <49181740.8000407@unibo.it> <3d375d730811101208g41e7be3elbd881c66d694fa42@mail.gmail.com> <4919776B.1020007@unibo.it> <3d375d730811111227v6e48686cu886d5d3fb3ac2640@mail.gmail.com> <491AED5F.8050400@gmail.com> Message-ID: <491AF7FF.3030300@unibo.it> Bruce Southey wrote: > You also have a very bad fit, as the standard errors are huge relative to > the estimates, meaning your parameters are not statistically different > from zero. Huh, why? Parameters: 4.90666526e+06 4.78090340e+09 Scaled error bars: 6.99319913e+04 6.68959208e+08 It seems the values are estimated with roughly 1% and 10% relative uncertainty, respectively. Am I wrong? > Physics is one thing, but the data tell a very different story, > perhaps due to measurement errors. The data are right. > The linear regression of x on y gives > a residual variance of 2.86121E-22 and R-squared is 73% (approx 73% for > the model above). I don't see any evidence for a nonlinear fit, > especially if you bother to plot the data. As I previously said, you > probably would get a better fit using splines or similar because of the > variability present. *sigh*.
I am probably bad at communicating, so I repeat. This is a SECTION of the data. A tiny SECTION of a 2048-point force curve. Being a small section, it is not surprising it is almost linear. I just posted that bunch of points because they fitted correctly with the equation, and it was totally pointless to copy and paste hundreds of numbers just to make people sleep happily about my own data. Each peak in the force curve is clearly non-linear, and I posted examples of what I mean. If you want, you can download the software from http://code.google.com/p/hooke and I can send you examples of whole relevant data, fresh from the instrument. I am not trying to fit the data with whatever model works best. I *want to pick up well defined parameters* from a physical model. If you have doubts on the physical consistency of the model or on its application to my data, you're welcome to come here and discuss with me and my coworkers (among them physicists). Thanks for your interest, but really, you are misunderstanding the issue. If you want information on single molecule force spectroscopy of polymers, I can give you some references. Thanks for the bibliography too! m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From josh.k.lawrence at gmail.com Wed Nov 12 12:23:26 2008 From: josh.k.lawrence at gmail.com (Josh Lawrence) Date: Wed, 12 Nov 2008 12:23:26 -0500 Subject: [SciPy-user] Vectorized Approach to Matrix Inversion Message-ID: Hey all, I have an NxNxP array. Let's call it foo. foo[:,:,p] contains a matrix I want to invert (or to solve in the Ax = b fashion). Is there a pure python/scipy way to compute an array bar without loops such that it would be equivalent to the following?

import scipy.linalg as la
import numpy as np

bar = np.zeros((N, N, P)) + 0j
for i in range(0, P):
    bar[:,:,i] = la.inv(foo[:,:,i])

Or in the Ax = b sense (with b an NxP array, and A NxNxP):

import scipy.linalg as la
import numpy as np

x = np.zeros((N, P)) + 0j
for i in range(0, P):
    x[:,i] = la.solve(A[:,:,i], b[:,i])

I realize I could write some fortran code to do this or even use cython, but it would be nice if I could do this without needing to compile some extra code. As a summary, does anyone know how to compute the above (either example, but preferably both) without using loops? Cheers, Josh From wnbell at gmail.com Wed Nov 12 12:32:58 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 12 Nov 2008 12:32:58 -0500 Subject: [SciPy-user] scipy.sparse: read file In-Reply-To: References: <41064f5a-628c-4131-9dcc-0a9d39624bc9@a3g2000prm.googlegroups.com> Message-ID: On Wed, Nov 12, 2008 at 8:11 AM, Uwe Schmitt wrote: > I found a solution: > > fp = file(...) > > fp.next() > num_data = int(fp.next()) > > data = zeros((num_data,), dtype=....) > ij = zeros((2, num_data), dtype=int) > > for k, row in enumerate(fp): > i, j, val = row.split() > > ij[0, k] = int(i) > ij[1, k] = int(j) > data[k] = float(val) > > mat = csc_matrix((data, ij)) > Yep, that will work. Note that in SciPy 0.7 the .save() method is deprecated.
Better alternatives include mmwrite() and savemat() in scipy.io: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/io/mmio.py#L55 http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/io/matlab/mio.py#L119 These functions also have corresponding read methods: mmread() and loadmat(). -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/
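A round-trip with the Matrix Market functions might look like this (a minimal sketch; the matrix and filename are arbitrary):

from scipy.sparse import csr_matrix
from scipy.io import mmwrite, mmread

A = csr_matrix([[1.0, 0.0], [0.0, 2.0]])
mmwrite('matrix.mtx', A)    # writes portable Matrix Market text format
B = mmread('matrix.mtx')    # reads it back as a coo_matrix
print(B.todense())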
From HAWRYLA at novachem.com Wed Nov 12 18:29:24 2008 From: HAWRYLA at novachem.com (Andrew Hawryluk) Date: Wed, 12 Nov 2008 16:29:24 -0700 Subject: [SciPy-user] convergence detection in optimize/nonlin.py Message-ID: <48C01AE7354EC240A26F19CEB995E943033AEFB7@CHMAILMBX01.novachem.com> I am experimenting with optimize.broyden3() for solving a multivariable, nonlinear problem. The signature is def broyden3(F, xin, iter=10, alpha=0.4, verbose = False) and it is written to iterate exactly 'iter' times. However, after it converges (to within machine tolerances) it runs into division by zero errors and fails while trying to take the square root of a NaN. I have modified my copy as follows. Original (revision 5067), beginning on line 151:

#Gm=Gm+(deltaxm-Gm*deltaFxm)*deltaFxm.T/norm(deltaFxm)**2
updateG(deltaxm-Gmul(deltaFxm),deltaFxm/norm(deltaFxm)**2)

Modified version, beginning on line 151:

normDelta = norm(deltaFxm)
if normDelta == 0.0:
    break
#Gm=Gm+(deltaxm-Gm*deltaFxm)*deltaFxm.T/norm(deltaFxm)**2
updateG(deltaxm-Gmul(deltaFxm),deltaFxm/normDelta**2)

All of the routines in optimize/nonlin.py have the same behaviour. Could we add some convergence checking to each of them? Should 'iter' be called 'maxiter' to reflect this? Andrew Hawryluk From timmichelsen at gmx-topmail.de Wed Nov 12 18:38:29 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Thu, 13 Nov 2008 00:38:29 +0100 Subject: [SciPy-user] calculations using the datetime information of timeseries Message-ID: Hello, I need to perform calculations for a time series that use the datetime of each data point as input. An example: def myfunction(datetime_obj, scaling_factor): pass I found out that I can get the datetime for each entry with for i in range(0, series.size): series[i] = myfunction(series.dates.tolist()[i], 10.) Now, I noticed a strange thing. If I have a base series "base_series" and assign it to a new one with new_series = base_series the base_series gets updated/changed according to all calculations I perform on new_series (please see method 1 below). The only way I could imagine to make my code work is creating lots of template series like in method 3 below. This way lets me calculate my new values in new_series using the datetime information and still retain base_series with its original values. I kindly ask you to shed some light on why the base_series gets changed when I change the derived series. Is there a more efficient way to accomplish my task that I may not have thought of so far? Thanks in advance! Kind regards, Timmie #### BELOW A SAMPLE SCRIPT THAT MAY ILLUSTRATE ####

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import datetime
import scikits.timeseries as ts
import numpy as np

#create dummy series
data = np.zeros(600)+1
now = datetime.datetime.now()
start = datetime.datetime(now.year, now.month, now.day)
#print start
start_date = ts.Date('H', datetime=start)
#print start_date
series_dummy = ts.time_series(data, dtype=np.float_, freq='H',
                              start_date=start_date)

snew = series_dummy

###method 1
for i in range(0,snew.size):
    snew[i] = snew[i]* 2 #snew.dates[i].datetime
print "method 1:", snew.sum()-series_dummy.sum()

###method 2
for i in range(0,snew.size):
    snew = snew*2
print "method 2:", snew.sum()-series_dummy.sum()

#method 3:
data = np.zeros(series_dummy.size)+1
dt_arr = series_dummy.dates
cser = ts.time_series(data.astype(np.float_), dt_arr)
for i in range(0,cser.size):
    # note: cser.dates[i].datetime.hour is just used as an example
    # my function performs calculations based on the value of the datetime
    # of each data point (the current datetime is the input parameter).
    cser[i] = cser.dates[i].datetime.hour
print "method 3:", cser.sum()-series_dummy.sum()

From pav at iki.fi Wed Nov 12 19:20:43 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 13 Nov 2008 00:20:43 +0000 (UTC) Subject: [SciPy-user] convergence detection in optimize/nonlin.py References: <48C01AE7354EC240A26F19CEB995E943033AEFB7@CHMAILMBX01.novachem.com> Message-ID: Wed, 12 Nov 2008 16:29:24 -0700, Andrew Hawryluk wrote: > I am experimenting with optimize.broyden3() for solving a multivariable, > nonlinear problem. The signature is > def broyden3(F, xin, iter=10, alpha=0.4, verbose = False) > and it is written to iterate exactly 'iter' times. > > However, after it converges (to within machine tolerances) it runs into > division by zero errors and fails while trying to take the square root > of a NaN. [clip] Yes, I think these methods should terminate after reaching user-specified tolerances. IIRC, divisions by zero etc. are not uncommon in Broyden methods if they are run to very short step lengths. It's a bug, ticket here: http://scipy.org/scipy/scipy/ticket/791 -- Pauli Virtanen From pgmdevlist at gmail.com Wed Nov 12 20:35:57 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 12 Nov 2008 20:35:57 -0500 Subject: [SciPy-user] calculations using the datetime information of timeseries In-Reply-To: References: Message-ID: <39E9D479-C5F5-4AF3-A4E6-4EEFB4F1DAD6@gmail.com> Timmie, Let's go through method #1 first: > snew = series_dummy > > ###method 1 > > for i in range(0,snew.size): > snew[i] = snew[i]* 2 #snew.dates[i].datetime Your `snew` object is only a reference to `series_dummy`. When you modify an element of snew, you're in fact modifying the corresponding element of `series_dummy`. That's a feature of Python; you would get the same result with lists: >>> a = [0,0,0] >>> b = a >>> b[0] = 1 >>> a [1, 0, 0] If you want to avoid that, you can make snew a copy of series_dummy: snew = series_dummy.copy() Now, method #2: > > for i in range(0,snew.size): > snew = snew*2 Are you sure that's what you want to do? You could do snew = snew*(2**snew.size) and get the same result. Anyway: here, you change what snew is at each iteration: initially, it was a reference to series_dummy; now, it's a reference to another (temporary) object, snew*2. No back propagation of results.
Finally, some comments for method #3: You want to create a new timeseries based on the result of some calculation on the data part, but still using the dates of the initial series? If you don't have any missing values, perform the computation on series._data, that'll be faster. If you have missing values, use series._series instead to access the MaskedArray methods directly, and not the timeseries ones (you don't want to carry the dates around if you don't need them). As a wrap-up: try to avoid looping if you can. You said a generic form of your function is: > > def myfunction(datetime_obj, scaling_factor): > pass Do you really need datetime objects? In your example, you were going through a list (series.dates.tolist()). You should have used series.dates.hour, which is an array. Using functions on an array as a whole is far more efficient than using the same functions on each element of the array.
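For instance, method #3 of the posted script could be collapsed into something like this (an untested sketch, reusing series_dummy from the sample script):

import numpy as np
import scikits.timeseries as ts

# vectorized replacement for the method-3 loop:
# .dates.hour is already an array, so no Python loop is needed
cser = ts.time_series(series_dummy.dates.hour.astype(np.float_),
                      series_dummy.dates)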
Let me know how it goes, and don't hesitate to contact me off-list if you need some help with your function. Cheers P. > > I found out that I can get the datetime for each entry with > > for i in range(0, series.size): > series[i] = myfunction(series.dates.tolist()[i], 10.) > > Now, I noticed a strange thing. > > If I have a base series "base_series" and assign it to a new one with > > new_series = base_series > > the base_series gets updated/changed according to all calculations I > perform on new_series (please see method 1 below). > > The only way I could imagine to make my code work is creating lots of > template series like in method 3 below. This way lets me calculate my > new values in new_series using the datetime information and still > retain base_series with its original values. > > I kindly ask you to shed some light on why the base_series gets changed > when I change the derived series. > > Is there a more efficient way to accomplish my task that I may not have > thought of so far? > > Thanks in advance! > Kind regards, > Timmie > > #### BELOW A SAMPLE SCRIPT THAT MAY ILLUSTRATE #### > > #!/usr/bin/env python > # -*- coding: utf-8 -*- > > import datetime > import scikits.timeseries as ts > > import numpy as np > > #create dummy series > data = np.zeros(600)+1 > now = datetime.datetime.now() > start = datetime.datetime(now.year, now.month, now.day) > #print start > start_date = ts.Date('H', datetime=start) > #print start_date > series_dummy = ts.time_series(data, dtype=np.float_, freq='H', > start_date=start_date) > > snew = series_dummy > > ###method 1 > > for i in range(0,snew.size): > snew[i] = snew[i]* 2 #snew.dates[i].datetime > > print "method 1:", snew.sum()-series_dummy.sum() > > ###method 2 > > for i in range(0,snew.size): > snew = snew*2 > > print "method 2:", snew.sum()-series_dummy.sum() > > #method 3: > > data = np.zeros(series_dummy.size)+1 > dt_arr = series_dummy.dates > cser = ts.time_series(data.astype(np.float_), dt_arr) > for i in range(0,cser.size): > # note: cser.dates[i].datetime.hour is just used as an example > # my function performs calculations based on the value of the > datetime of each data point (the current datetime is the input parameter). > > cser[i] = cser.dates[i].datetime.hour > > print "method 3:", cser.sum()-series_dummy.sum() > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From jason-sage at creativetrax.com Wed Nov 12 22:01:25 2008 From: jason-sage at creativetrax.com (jason-sage at creativetrax.com) Date: Wed, 12 Nov 2008 21:01:25 -0600 Subject: [SciPy-user] Recommended SVN version? Message-ID: <491B9885.6090108@creativetrax.com> First of all, thanks for the tremendous work everyone has done! In the Sage project (http://www.sagemath.org), we have a slightly patched version of scipy 0.6 currently. We recently upgraded to numpy 1.2 and would like to match that with an upgrade of scipy. We are using scipy more and more; for example, in our next version out later this week, we switched our floating point and complex matrices to use a numpy/scipy backend for most calculations. Is there a recommended SVN version that we should update to while waiting for 0.7 to be released? We're looking for an SVN revision that is relatively stable. Incidentally, count me into the crowd that would find scipy much more valuable if there were more frequent releases; Sage users in general would be testing the code and giving feedback as well. Also, we noticed the following behavior in our current version of scipy, but only on an OSX 10.5 box. If it's easy, can someone see if the following commands give the spurious result we see below for the inverse matrix with scipy.linalg.inv on an OSX 10.5 box? I'd test it, but the code works on our old scipy on my 32 bit Ubuntu box. sage: import numpy sage: a=numpy.array([[1,2,3],[4,5,6],[7,8,9]],dtype="float64") sage: import scipy sage: import scipy.linalg sage: import numpy.linalg sage: scipy.linalg.det(a) 0.0 sage: scipy.linalg.inv(a) array([[ -4.50359963e+15, 9.00719925e+15, -4.50359963e+15], [ 9.00719925e+15, -1.80143985e+16, 9.00719925e+15], [ -4.50359963e+15, 9.00719925e+15, -4.50359963e+15]]) sage: numpy.linalg.det(a) 6.6613381477509392e-16 sage: numpy.linalg.inv(a) array([[ -4.50359963e+15, 9.00719925e+15, -4.50359963e+15], [ 9.00719925e+15, -1.80143985e+16, 9.00719925e+15], [ -4.50359963e+15, 9.00719925e+15, -4.50359963e+15]]) Thanks, Jason From michael.abshoff at googlemail.com Wed Nov 12 22:10:41 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Wed, 12 Nov 2008 19:10:41 -0800 Subject: [SciPy-user] Recommended SVN version? In-Reply-To: <491B9885.6090108@creativetrax.com> References: <491B9885.6090108@creativetrax.com> Message-ID: <491B9AB1.40005@gmail.com> jason-sage at creativetrax.com wrote: Hi, > First of all, thanks for the tremendous work everyone has done! +1 > In the Sage project (http://www.sagemath.org), we have a slightly > patched version of scipy 0.6 currently. We recently upgraded to numpy > 1.2 and would like to match that with an upgrade of scipy. For the record: we tried to upgrade to 0.7r4752svn when we did the upgrade to numpy 1.2 a couple weeks back, and after fixing various deprecation issues in our code we ran into some regressions with the stats module. I know that there is actually activity there, e.g. earlier today, so hopefully someone from our end can sort the issue out and make a precise bug report in case it wasn't something dumb on our end.
> Thanks, > > Jason Cheers, Michael > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From lepto.python at gmail.com Wed Nov 12 22:16:51 2008 From: lepto.python at gmail.com (oyster) Date: Thu, 13 Nov 2008 11:16:51 +0800 Subject: [SciPy-user] scipy on old CPU crashes Message-ID: <6a4f17690811121916p7221acf9vec0af7adb4fbc72d@mail.gmail.com> hi, all I am using an old AMD Duron CPU with Win2k, which does not seem to support SSE/SSE2 When I write [code] >>> from scipy.integrate import quad >>> quad(lambda e:e,1,5) [/code] or [code] >>> import numpy >>> import scipy >>> import scipy.interpolate >>> x = numpy.arange(10,dtype='float32') * 0.3 >>> y = numpy.cos(x) >>> sp = scipy.interpolate.UnivariateSpline(x,y) [/code] python 2.4/python 2.5 (which are all from www.python.org) crash soon. I searched the internet, and found the reason may be SSE/SSE2 instructions in ATLAS, but I am not sure. I found that there are 3 versions in numpy-1.2.1-win32-superpack-python2.5.exe (numpy-1.2.1-sse3.exe, numpy-1.2.1-sse2.exe and numpy-1.2.1-nosse.exe) Is there a precompiled scipy that judges nosse/sse/sse2 automatically? or is there a way to change ATLAS only according to my CPU? thanx From cournape at gmail.com Wed Nov 12 22:41:33 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 13 Nov 2008 12:41:33 +0900 Subject: [SciPy-user] scipy on old CPU crashes In-Reply-To: <6a4f17690811121916p7221acf9vec0af7adb4fbc72d@mail.gmail.com> References: <6a4f17690811121916p7221acf9vec0af7adb4fbc72d@mail.gmail.com> Message-ID: <5b8d13220811121941va8442f2gcb3a997874878b4b@mail.gmail.com> On Thu, Nov 13, 2008 at 12:16 PM, oyster wrote: > hi, all > I am using an old AMD Duron CPU with Win2k, which does not seem to > support SSE/SSE2 indeed, old Duron does not support SSE IIRC. > I found that there are 3 versions in > numpy-1.2.1-win32-superpack-python2.5.exe(numpy-1.2.1-sse3.exe, > numpy-1.2.1-sse2.exe and numpy-1.2.1-nosse.exe) Yep, the superpack is just a simple wrapper around the correct installer, nothing fancy. > Is there a precompiled scipy that judges nosse/sse/sse2 automatically? No, but there will be for 0.7, which hopefully is only days away now. > or is there a way to > change ATLAS only according to my CPU? Unfortunately not without rebuilding scipy yourself. Win32 binaries are built by linking atlas statically. David From cournape at gmail.com Wed Nov 12 23:30:22 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 13 Nov 2008 13:30:22 +0900 Subject: [SciPy-user] Recommended SVN version? In-Reply-To: <491B9885.6090108@creativetrax.com> References: <491B9885.6090108@creativetrax.com> Message-ID: <5b8d13220811122030g5aee739drd54cc150a5cf95e6@mail.gmail.com> On Thu, Nov 13, 2008 at 12:01 PM, wrote: > > First of all, thanks for the tremendous work everyone has done! > > In the Sage project (http://www.sagemath.org), we have a slightly > patched version of scipy 0.6 currently. We recently upgraded to numpy > 1.2 and would like to match that with an upgrade of scipy. We are using > scipy more and more; for example, in our next version out later this > week, we switched our floating point and complex matrices to use a > numpy/scipy backend for most calculations. great. > > Is there a recommended SVN version that we should update to while > waiting for 0.7 to be released? We're looking for an SVN revision that > is relatively stable.
Incidentally, count me into the crowd that would > find scipy much more valuable if there were more frequent releases; Sage > users in general would be testing the code and giving feedback as well. If you can wait for a couple of days, scipy 0.7 will be there. A beta is imminent (next WE), and a proper release should follow soon after. David From michael.abshoff at googlemail.com Wed Nov 12 23:40:24 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Wed, 12 Nov 2008 20:40:24 -0800 Subject: [SciPy-user] Recommended SVN version? In-Reply-To: <5b8d13220811122030g5aee739drd54cc150a5cf95e6@mail.gmail.com> References: <491B9885.6090108@creativetrax.com> <5b8d13220811122030g5aee739drd54cc150a5cf95e6@mail.gmail.com> Message-ID: <491BAFB8.2030708@gmail.com> David Cournapeau wrote: > On Thu, Nov 13, 2008 at 12:01 PM, wrote: >> First of all, thanks for the tremendous work everyone has done! >> >> In the Sage project (http://www.sagemath.org), we have a slightly >> patched version of scipy 0.6 currently. We recently upgraded to numpy >> 1.2 and would like to match that with an upgrade of scipy. We are using >> scipy more and more; for example, in our next version out later this >> week, we switched our floating point and complex matrices to use a >> numpy/scipy backend for most calculations. > > great. Yeah, we are also moving to the buffer interface hopefully in the not too distant future. >> Is there a recommended SVN version that we should update to while >> waiting for 0.7 to be released? We're looking for an SVN revision that >> is relatively stable. Incidentally, count me into the crowd that would >> find scipy much more valuable if there were more frequent releases; Sage >> users in general would be testing the code and giving feedback as well. > > If you can wait for a couple of days, scipy 0.7 will be there. A beta > is imminent (next WE), and a proper release should follow soon > after. Ok, we will keep an eye on that and let you know if we are still hitting regressions in the stats module.
> David Cheers, Michael > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From lepto.python at gmail.com Thu Nov 13 01:53:54 2008 From: lepto.python at gmail.com (oyster) Date: Thu, 13 Nov 2008 14:53:54 +0800 Subject: [SciPy-user] suggest to change PREREQUISITES of scipy clearly Message-ID: <6a4f17690811122253t1004e19em38defc9f924d37bb@mail.gmail.com> for example in scipy version = '0.5.1', the 'PREREQUISITES' section of INSTALL.txt says 'NumPy__ 1.0b1 or newer', but when I install numpy version='1.0.4', and run scipy, I get [code] h:\sap-24\bin\lib\site-packages\scipy\misc\__init__.py:25: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 [/code] From cournape at gmail.com Thu Nov 13 03:59:23 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 13 Nov 2008 17:59:23 +0900 Subject: [SciPy-user] suggest to change PREREQUISITES of scipy clearly In-Reply-To: <6a4f17690811122253t1004e19em38defc9f924d37bb@mail.gmail.com> References: <6a4f17690811122253t1004e19em38defc9f924d37bb@mail.gmail.com> Message-ID: <5b8d13220811130059gbee5d3by121b8d2a4c846387@mail.gmail.com> On Thu, Nov 13, 2008 at 3:53 PM, oyster wrote: > for example > in scipy version = '0.5.1', the 'PREREQUISITES' section of INSTALL.txt > says 'NumPy__ 1.0b1 or newer', but when I install numpy > version='1.0.4', and run scipy, I get > [code] > h:\sap-24\bin\lib\site-packages\scipy\misc\__init__.py:25: > DeprecationWarning: ScipyTest is now called NumpyTest; please update > your code > test = ScipyTest().test > RuntimeError: module compiled against version 1000002 of C-API but > this version of numpy is 1000009 > [/code] There are two problems: - scipy and numpy must be compatible feature-wise (that is, a version S of scipy requires at least a version N of numpy), that is API compatibility. If S and N are API compatible, you should be able to build and use them together. That's what is mentioned in the INSTALL.txt - ABI compatibility, that is, if you build scipy against a given version of numpy N1, will it work with a version N2 without recompilation; that's what you see in your case. API compatibility is necessary but not sufficient for ABI compatibility. For some time, I think we just raise the error you are seeing when the version did not match exactly. But someone (Stefan) worked on this, and maybe this won't happen for future versions. David From timmichelsen at gmx-topmail.de Thu Nov 13 04:49:49 2008 From: timmichelsen at gmx-topmail.de (Timmie) Date: Thu, 13 Nov 2008 09:49:49 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?calculations_using_the_datetime_informatio?= =?utf-8?q?n_of=09timeseries?= References: <39E9D479-C5F5-4AF3-A4E6-4EEFB4F1DAD6@gmail.com> Message-ID: Hello Pierre, > first, thanks for the fast reply. I really appreciate it. As a note on my last email I may add that I simplified the functions (method 1-3). The different methods were only created to illustrate how I handle/access the series. > Your `snew` object is only a reference to `series_dummy`. When you > modify an element of snew, you're in fact modifying the corresponding > element of `series_dummy`.
That's a feature of Python, you would get > the same result with lists: > >>> a = [0,0,0] > >>> b = a > >>> b[0] = 1 > >>> a > [1,0,0] > If you want to avoid that, you can make snew a copy of series_dummy > snew = series_dummy.copy() OK, thanks for this gentle hint. I must re-read this in my basic python books... > Finally, some comments for method #3: > You want to create a new timeseries based on the result of some > calculation on the data part, but still using the dates of the initial > series? > If you don't have any missing values, perform the computation on > series._data, that'll be faster. If you have missing values, use the > series._series instead to access directly the MaskedArray methods, and > not the timeseries ones (you don't want to carry the dates around if > you don't need them). > As a wrap-up: > Try to avoid looping if you can. Yes, I noticed that. But I couldn't find another way to pass the individual datetimes to my calculation function which expects only one value at once (i.e. it is not designed to calculate full arrays). > You said a generic form of your function is: > > > > def myfunction(datetime_obj, scaling_factor): > > pass > > Do you really need datetime objects? Yes, in geoscience/earthscience and engineering it's quite normal to have parameters which are date/hour-of-year dependent, like: position of planets, state of the ocean, etc. > In your example, you were using > series.dates[i].datetime.hour, a list. You should have used > series.dates.hour, which is an array. Using functions on an array as a > whole is far more efficient than using the same functions on each > element of the array. I will try to adjust the function in order to let it calculate directly with arrays. But the basic problem I haven't solved yet is to pass a single datetime_obj to myfunction along with further parameters. Regards, Timmie From ecomesana at googlemail.com Thu Nov 13 10:20:05 2008 From: ecomesana at googlemail.com (=?ISO-8859-1?Q?Enrique_Comesa=F1a_Figueroa?=) Date: Thu, 13 Nov 2008 16:20:05 +0100 Subject: [SciPy-user] About arrays and objects Message-ID: Hello, Is it possible to list a property from an object array without using a 'for' loop? I've created the array using: In [1]: from numpy import * In [2]: class nodo: ...: def __init__(self, pos=0): ...: self.pos = pos ...: In [3]: nodo_array = array ([nodo(1),nodo(2),nodo(3),nodo(4),nodo(5)]) I want to print the "nodo_array" using something like this: In [4]: print nodo_array[:].pos without using a for loop. Is that possible? Thanks, Enrique From pgmdevlist at gmail.com Thu Nov 13 10:23:50 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 13 Nov 2008 10:23:50 -0500 Subject: [SciPy-user] calculations using the datetime information of timeseries In-Reply-To: References: <39E9D479-C5F5-4AF3-A4E6-4EEFB4F1DAD6@gmail.com> Message-ID: <39BA62A5-E5F2-4E17-A34A-1EA59F2A649B@gmail.com> Timmie, > As a note on my last email I may add that I simplified the functions > (method 1-3). > The different methods were only created to illustrate how I handle/ > access the > series. I got that. My comments were themselves intended for illustration ;) >> As a wrap-up: >> Try to avoid looping if you can. > Yes, I noticed that. > But I couldn't find another way to pass the individual datetimes to my > calculation function which expects only one value at once (i.e. it > is not > designed to calculate full arrays). That might be a bottleneck. If you could modify your function so that it can process arrays, you should get better results.
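As an illustration of the array-at-once style suggested here, a minimal sketch; it assumes the scikits.timeseries API used in the earlier script (ts.Date, ts.time_series, and the series.dates.hour array), and myfunction_vectorized is a hypothetical stand-in for the real function:

import datetime
import numpy as np
import scikits.timeseries as ts

# Hypothetical array-aware version of myfunction: it receives the whole
# array of hours at once instead of one datetime object per call.
def myfunction_vectorized(hours, scaling_factor):
    # hours is an integer array with one entry per data point, so the
    # whole computation stays inside numpy -- no Python-level loop.
    return np.cos(2 * np.pi * hours / 24.0) * scaling_factor

start = ts.Date('H', datetime=datetime.datetime(2008, 11, 13))
series = ts.time_series(np.zeros(600) + 1, freq='H', start_date=start)

# series.dates.hour is an array, unlike series.dates[i].datetime.hour
new_data = myfunction_vectorized(series.dates.hour, 10.)
new_series = ts.time_series(new_data, series.dates)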
Of course, that depends on the actual function... When I asked whether you really needed datetime objects, I was thinking about the actual datetime.datetime objects, not about objects having, say, a `day` or `hour` property. If you send an example of a function closer to your actual need, I may be able to help you more. From anthony.j.mannucci at jpl.nasa.gov Thu Nov 13 11:22:05 2008 From: anthony.j.mannucci at jpl.nasa.gov (Mannucci, Anthony J) Date: Thu, 13 Nov 2008 08:22:05 -0800 Subject: [SciPy-user] Scipy fails to build in Mac OS X 10.5 Message-ID: I am trying to build numpy and scipy on Mac OS X 10.5.5. I recently installed Apple's Developer Tools (3.1.1) and am using gfortran 4.2.3. Gcc is at version 4.0.1. I installed the latest python (2.6) from the official python site using a binary. I then installed fftw 3.1.2. This appeared to install (no explicit tests were run). I created three softlinks as suggested on the site: http://www.scipy.org/Installing_SciPy/Mac_OS_X Then I tried numpy. I incorrectly tried to install a binary for numpy with python 2.5. Not surprisingly, that did not work (numpy not found). I then grabbed the source for numpy from SourceForge and compiled that, which seemed to finish. For numpy, I installed like this: sudo python setup.py build >& build.log sudo python setup.py install I then turned to SciPy. I obtained the tarball for version 0.6.0. I unpacked it and ran the following: python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build >& config.log This produced lots of error messages, such as: > mkl_info: > libraries mkl,vml,guide not found in > /Library/Frameworks/Python.framework/Versions/2.6/lib > libraries mkl,vml,guide not found in /usr/local/lib > libraries mkl,vml,guide not found in /usr/lib > NOT AVAILABLE > > fftw3_info: > libraries fftw3 not found in > /Library/Frameworks/Python.framework/Versions/2.6/lib > FOUND: > libraries = ['fftw3'] > library_dirs = ['/usr/local/lib'] > define_macros = [('SCIPY_FFTW3_H', None)] > include_dirs = ['/usr/local/include'] > > djbfft_info: > NOT AVAILABLE > > blas_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-msse3', > '-I/System/Library/Frameworks/vecLib.framework/Headers'] > > lapack_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-msse3'] > > non-existing path in 'scipy/linsolve': 'tests' > umfpack_info: > libraries umfpack not found in > /Library/Frameworks/Python.framework/Versions/2.6/lib > libraries umfpack not found in /usr/local/lib > libraries umfpack not found in /usr/lib Etc.
Some of the build seemed to go OK, and then I found these errors near the end: > building 'scipy.linsolve._zsuperlu' extension > compiling C sources > C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk > -fno-strict-aliasing -fno-common -dy > namic -DNDEBUG -g -O3 > > compile options: '-DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 > -I/Library/Frameworks/Python.framework/Versions/2.6/lib/pyt > hon2.6/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c' > extra options: '-msse3' > gcc: scipy/linsolve/_superluobject.c > In file included from scipy/linsolve/_superluobject.h:8, > from scipy/linsolve/_superluobject.c:5: > scipy/linsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for > '_Py_c_abs' > /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/complexobj > ect.h:30: error: previous declaration > of '_Py_c_abs' was here > In file included from scipy/linsolve/_superluobject.h:8, > from scipy/linsolve/_superluobject.c:5: > scipy/linsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for > '_Py_c_abs' > /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/complexobj > ect.h:30: error: previous declaration > of '_Py_c_abs' was here > lipo: can't figure out the architecture type of: /var/tmp//ccWenONq.out > In file included from scipy/linsolve/_superluobject.h:8, > from scipy/linsolve/_superluobject.c:5: > scipy/linsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for > '_Py_c_abs' > /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/complexobj > ect.h:30: error: previous declaration > of '_Py_c_abs' was here > In file included from scipy/linsolve/_superluobject.h:8, > from scipy/linsolve/_superluobject.c:5: > scipy/linsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for > '_Py_c_abs' > /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/complexobj > ect.h:30: error: previous declaration > of '_Py_c_abs' was here > lipo: can't figure out the architecture type of: /var/tmp//ccWenONq.out > error: Command "gcc -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common > -dynamic -DNDEBUG -g -O3 -DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 > -I/Library/Frameworks/Python.framework/Versions/2.6 > /lib/python2.6/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2 > .6 -c scipy/linsolve/_superluobject.c -o > build/temp.macosx-10.3-i386-2.6/scipy/linsolve/_superluobject.o -msse3" fa > iled with exit status 1 Running numpy tests fails as follows: > Tonys-Mac-2:286:scipy-0.6.0 $ python > Python 2.6 (trunk:66714:66715M, Oct 1 2008, 18:36:04) > [GCC 4.0.1 (Apple Computer, Inc. build 5370)] on darwin > Type "help", "copyright", "credits" or "license" for more information. 
>>>> import numpy >>>> numpy.test('1') > Running unit tests for numpy > Traceback (most recent call last): > File "", line 1, in > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages > /numpy/testing/nosetester.py", line 240, in test > self._show_system_info() > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages > /numpy/testing/nosetester.py", line 151, in _show_system_info > nose = import_nose() > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages > /numpy/testing/nosetester.py", line 51, in import_nose > raise ImportError(msg) > ImportError: Need nose >= 0.10.0 for tests - see > http://somethingaboutorange.com/mrl/projects/nose I found the nose package and installed it, using easy_install (no direct download), like this: $ sudo easy_install nose Numpy tests continue to fail, as follows: > Tonys-Mac-2:286:scipy-0.6.0 $ python > Python 2.6 (trunk:66714:66715M, Oct 1 2008, 18:36:04) > [GCC 4.0.1 (Apple Computer, Inc. build 5370)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import numpy >>>> numpy.test('1') > Running unit tests for numpy > Traceback (most recent call last): > File "", line 1, in > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages > /numpy/testing/nosetester.py", line 240, in test > self._show_system_info() > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages > /numpy/testing/nosetester.py", line 151, in _show_system_info > nose = import_nose() > File > "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages > /numpy/testing/nosetester.py", line 51, in import_nose > raise ImportError(msg) > ImportError: Need nose >= 0.10.0 for tests - see > http://somethingaboutorange.com/mrl/projects/nose I believe when I did easy_install, version 0.10.4 of nose was used. Any help is appreciated. Thank you! -Tony -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://genesis.jpl.nasa.gov Pasadena, CA 91109 From ecomesana at googlemail.com Thu Nov 13 11:36:01 2008 From: ecomesana at googlemail.com (=?ISO-8859-1?Q?Enrique_Comesa=F1a_Figueroa?=) Date: Thu, 13 Nov 2008 17:36:01 +0100 Subject: [SciPy-user] About arrays and objects In-Reply-To: References: Message-ID: Hello, Is it possible to list a property from an object array without using a 'for' loop? I've created the array using: In [1]: from numpy import * In [2]: class nodo: ...: def __init__(self, pos=0): ...: self.pos = pos ...: In [3]: nodo_array = array ([nodo(1),nodo(2),nodo(3),nodo(4),nodo(5)]) I want to print the "node_array" using something like this: In [4]: print nodo_array[:].pos without using a for loop. Is that possible? Thanks, Enrique From bjracine at glosten.com Thu Nov 13 12:27:36 2008 From: bjracine at glosten.com (Benjamin J. Racine) Date: Thu, 13 Nov 2008 09:27:36 -0800 Subject: [SciPy-user] IPython TextMate Bundle Message-ID: <8C2B20C4348091499673D86BF10AB67621C30FF3@clipper.glosten.local> I am sending this forward on behalf of Matt Foster... Be sure to look into pysmell (for completion) as well. >>>>>>>>>>>>>>>>>>>>>> Hi All, A similar mail has already been on the (ipython) users mailing list, so my apologies if you've seen most of this before. 
I've started working on a TextMate bundle for IPython, based on the info on the Wiki [1]; the aim is to produce a BSD-licensed bundle which helps to integrate TextMate with IPython. I have set up a project on Github [2] which currently contains: * Some help, which doubles as the README * commands for running the current file / line / section in IPython (via applescript, and Terminal.app) * a basic language grammar for ipythonrc config files. The GitHub page contains the README file which has instructions on how to get GetBundles, which will allow you to install the bundle (but not track changes). Alternatively, if you have git, you can get the bundle using the following commands: cd "~/Library/Application Support/TextMate/Bundles" git clone git://github.com/mattfoster/ipython-tmbundle.git IPython.tmbundle osascript -e 'tell app "TextMate" to reload bundles' GitHub users can fork the project and make their own changes. I'd really love to hear any ideas, suggestions or feature requests people have, and I've been told by Fernando that it's ok to use this list for discussions, provided we prefix mail subjects with [TextMate]. Thanks, Matt [1]: http://ipython.scipy.org/moin/Cookbook/UsingIPythonWithTextMate [2]: http://github.com/mattfoster/ipython-tmbundle/ -- Matt Foster | http://my-mili.eu/matt _______________________________________________ IPython-dev mailing list IPython-dev at scipy.org http://lists.ipython.scipy.org/mailman/listinfo/ipython-dev From ckkart at hoc.net Thu Nov 13 15:32:15 2008 From: ckkart at hoc.net (Christian K.) Date: Thu, 13 Nov 2008 21:32:15 +0100 Subject: [SciPy-user] About arrays and objects In-Reply-To: References: Message-ID: Enrique Comesaña Figueroa wrote: > Hello, > > Is it possible to list a property from an object array without using a > 'for' loop? > > I've created the array using: > > > In [1]: from numpy import * > > In [2]: class nodo: > ...: def __init__(self, pos=0): > ...: self.pos = pos > ...: > > In [3]: nodo_array = array ([nodo(1),nodo(2),nodo(3),nodo(4),nodo(5)]) > > > I want to print the "nodo_array" using something like this: > > In [4]: print nodo_array[:].pos > > without using a for loop. > > Is that possible? Not without subclassing numpy.ndarray I guess. Do you really need a numpy array or may it be a python list as well? Subclassing a list is slightly easier but if you insist have a look here: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html Christian
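For what it's worth, the usual workarounds are a comprehension or numpy.vectorize; both still loop internally, Python just hides the explicit for block. A sketch, reusing the names from Enrique's example:

import operator
from numpy import array, vectorize

class nodo:
    def __init__(self, pos=0):
        self.pos = pos

nodo_array = array([nodo(1), nodo(2), nodo(3), nodo(4), nodo(5)])

# List comprehension: reads as one expression, loops under the hood.
pos = [n.pos for n in nodo_array]                 # [1, 2, 3, 4, 5]

# Back to an ndarray, via attrgetter or vectorize:
pos_arr = array(map(operator.attrgetter('pos'), nodo_array))
get_pos = vectorize(lambda n: n.pos)
pos_arr2 = get_pos(nodo_array)                    # array([1, 2, 3, 4, 5])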
From rob.clewley at gmail.com Thu Nov 13 17:53:42 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 13 Nov 2008 17:53:42 -0500 Subject: [SciPy-user] ANN: PyDSTool 0.87 released Message-ID: Dear Scipy and Numpy user lists, The latest update to the open-source python dynamical systems modeling toolbox, PyDSTool 0.87, has been released on Sourceforge. http://www.sourceforge.net/projects/pydstool/ Major highlights are: * Implemented a more natural hybrid model specification format * Supports quadratic interpolation of data points in Trajectory objects (courtesy of Anne Archibald's poly_int class) * Supports more sophisticated data-driven model inference * Improved efficiency of ODE solvers * Various bug fixes and other API improvements * New demonstration scripts and more commenting for existing scripts in PyDSTool/tests/ * New wiki tutorial (courtesy of Daniel Martí) This is a modest update in preparation for a substantial upgrade at version 0.90, which will move symbolic expression support over to SymPy, and greatly improve the implementation of C-based ODE integrators. We are also trying to incorporate basic boundary-value problem solving, and we aim to further improve the parameter estimation / model inference tools to work effectively with OpenOpt. For installation and setting up, see the GettingStarted page at our wiki, http://pydstool.sourceforge.net The download contains full API documentation, BSD license information, and further details of recent code changes. Further documentation is on the wiki. As ever, all feedback is welcome as we try to find time to improve our code base. If you would like to contribute effort in improving the tutorial and wiki documentation, or to the code itself, please contact me. -Rob Clewley From mforbes at physics.ubc.ca Thu Nov 13 18:47:41 2008 From: mforbes at physics.ubc.ca (Michael McNeil Forbes) Date: Thu, 13 Nov 2008 16:47:41 -0700 Subject: [SciPy-user] Forcing quad to use endpoints? Message-ID: Does anyone familiar with the details of QUADPACK have a suggestion of a way to force it to use specified endpoints? I am running into a problem like the following: def f(p): return (400.0/(p*p + 400.0))**2 points = [20.0] scipy.integrate.quad(f, 0, 1e16, points=points) (12.853981633974485, 1.4270786368147365e-13) The answer is actually 15.707963267949529 ~ 20*pi/4. The problem is that the quadrature points do not include the endpoints, so quad does not sample the function sufficiently closely to 20.0 to realize that it is still significant there. Basically, I would like to have a way of specifying both potential singularities (which is what `points` is for) as well as specifying important points that should be included in the quadrature (such as the 20 above which specifies the typical length-scale for the problem). I suspect that this is orthogonal to the QUADPACK quadratures, but if you have a suggestion, please let me know. Thanks, Michael.
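One possible workaround, sketched here rather than taken from the QUADPACK interface: split the integration at the known length scale so that quad samples the integrand near p ~ 20, and let it handle the infinite tail separately:

import numpy as np
from scipy.integrate import quad

def f(p):
    return (400.0/(p*p + 400.0))**2

# Integrate [0, 20] and [20, inf) separately; the split point forces
# sampling at the scale that matters, and quad applies a dedicated
# transformation for the infinite tail.
r1, e1 = quad(f, 0, 20.0)
r2, e2 = quad(f, 20.0, np.inf)
print r1 + r2   # ~ 15.707963267948966 = 20*pi/4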
From anthony.j.mannucci at jpl.nasa.gov Thu Nov 13 19:35:43 2008 From: anthony.j.mannucci at jpl.nasa.gov (Mannucci, Anthony J) Date: Thu, 13 Nov 2008 16:35:43 -0800 Subject: [SciPy-user] Uninstalling python Message-ID: I have tried to install Scipy with python 2.6. I am beginning to think this was a mistake. I see there are pre-built binaries for python 2.5 but not 2.6. Is there any way to revert back to python 2.5 (in effect, "uninstall" version 2.6 which is now my current version)? Thanks for any help you can provide! -Tony -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://genesis.jpl.nasa.gov Pasadena, CA 91109 From daelfin at gmail.com Thu Nov 13 20:22:46 2008 From: daelfin at gmail.com (Daniel) Date: Fri, 14 Nov 2008 01:22:46 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?scipy=2Eodr_crashes_when_raising_odr=5Fsto?= =?utf-8?q?p?= Message-ID: I'm using scipy.odr and trying to constrain my parameter values to a certain range, by raising odr_stop from my fitting function -- however raising odr_stop causes a crash. This happens both on a mac (Leopard, Python 2.5.2, Numpy 1.2.1, Scipy 0.7.0.dev4576) and on Windows (Vista32, Python 2.5.2, Numpy 1.2, Scipy 0.6.0). A simplified version along with its output is below. Any suggestions on how to fix, or workarounds? Thanks! --Daniel ---test.py----------------------------------------------- import sys from numpy import arange, abs from scipy.odr import odr_stop, Model, RealData, ODR def f(B, x): print >>sys.stderr, "f called, B=", B if B[1] < 0: raise odr_stop return B[0]*x + B[1] x = arange(10.) y = .1*x**2+1 m = Model(f) d = RealData(x, y) o = ODR(d, m, [.5, 1]) out = o.run() out.pprint() --------------------------------------------------------- $ python test.py f called, B= [ 0.5 1. ] ...a few more times... f called, B= [ 0.9 -0.20000001] /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/warnings.py:80: RuntimeWarning: tp_compare didn't return -1 or -2 for exception if registry.get(key): Bus error From dwf at cs.toronto.edu Thu Nov 13 20:43:38 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 13 Nov 2008 20:43:38 -0500 Subject: [SciPy-user] Scipy fails to build in Mac OS X 10.5 In-Reply-To: References: Message-ID: On 13-Nov-08, at 11:22 AM, Mannucci, Anthony J wrote: > I am trying to build numpy and scipy on Mac OS X 10.5.5. I recently > installed Apple's Developer Tools (3.1.1) and am using gfortran > 4.2.3. Gcc > is at version 4.0.1. I installed the latest python (2.6) from the > official > python site using a binary. I then installed fftw 3.1.2. This > appeared to > install (no explicit tests were run). I created three softlinks as > suggested > on the site: > http://www.scipy.org/Installing_SciPy/Mac_OS_X AFAIK Python 2.6 is not supported at this point, I recall there being known issues with 0.6.0 and py2.6. You may have more luck with the latest svn snapshots (instructions are at http://scipy.org/Download ), but if you want to run stable you're probably better off installing python 2.5.2 instead. Cheers, David From massimo.sandal at unibo.it Fri Nov 14 05:51:41 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Fri, 14 Nov 2008 11:51:41 +0100 Subject: [SciPy-user] Uninstalling python In-Reply-To: References: Message-ID: <491D583D.5080403@unibo.it> Mannucci, Anthony J wrote: > I have tried to install Scipy with python 2.6. I am beginning to think this > was a mistake. I see there are pre-built binaries for python 2.5 but not > 2.6. Is there any way to revert back to python 2.5 (in effect, "uninstall" > version 2.6 which is now my current version)? Thanks for any help you can > provide! What is your operating system? m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From tonyyu at MIT.EDU Fri Nov 14 13:33:45 2008 From: tonyyu at MIT.EDU (Tony S Yu) Date: Fri, 14 Nov 2008 13:33:45 -0500 Subject: [SciPy-user] calculating numerical jacobian Message-ID: Does scipy provide any functions to calculate the Jacobian of a function? I noticed that minpack (used by scipy.optimize) approximates a Jacobian numerically, but scipy doesn't seem to provide a wrapper for these minpack functions. Any ideas?
Thanks, -Tony From andrew.fefferman at gmail.com Fri Nov 14 17:38:06 2008 From: andrew.fefferman at gmail.com (AndrewF) Date: Fri, 14 Nov 2008 14:38:06 -0800 (PST) Subject: [SciPy-user] interpolate Message-ID: <20509445.post@talk.nabble.com> Why does tck1=interpolate.splrep(scipy.array([1.0,2.0]),scipy.array([1.0,4.0]),k=1) work okay, but tck2=interpolate.splrep(scipy.array([8.2,2.0]),scipy.array([1.0,4.0]),k=1) does not? I ultimately want to interpolate a more complicated, non-linear data set, but here I am trying to reduce the problem to the simplest possible level. Thanks. -- View this message in context: http://www.nabble.com/interpolate-tp20509445p20509445.html Sent from the Scipy-User mailing list archive at Nabble.com. From Kristian.Sandberg at Colorado.EDU Fri Nov 14 17:47:18 2008 From: Kristian.Sandberg at Colorado.EDU (Kristian Hans Sandberg) Date: Fri, 14 Nov 2008 15:47:18 -0700 (MST) Subject: [SciPy-user] weave/blitz problem Message-ID: <20081114154718.AFO50973@joker.int.colorado.edu> I'm having exactly the same problem as reported in Ticket # 739: http://www.scipy.org/scipy/scipy/ticket/739 That is, I get the error /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h:45: error: 'labs' is not a member of 'std' for many of my weave codes when using g++ 4.3. (It works fine with g++ 4.2.) Is there any fix to this problem? Thanks! Kristian Kristian Sandberg, Ph.D. Dept. of Applied Mathematics and The Boulder Laboratory for 3-D Electron Microscopy of Cells University of Colorado at Boulder Campus Box 526 Boulder, CO 80309-0526, USA Phone: (303) 492 0593 (work) (303) 499 4404 (home) (303) 547 6290 (cell) Home page: http://amath.colorado.edu/faculty/sandberg From oliphant at enthought.com Fri Nov 14 17:47:43 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 14 Nov 2008 16:47:43 -0600 Subject: [SciPy-user] interpolate In-Reply-To: <20509445.post@talk.nabble.com> References: <20509445.post@talk.nabble.com> Message-ID: <491E000F.6010407@enthought.com> AndrewF wrote: > Why does > > tck1=interpolate.splrep(scipy.array([1.0,2.0]),scipy.array([1.0,4.0]),k=1) > > work okay, but > > tck2=interpolate.splrep(scipy.array([8.2,2.0]),scipy.array([1.0,4.0]),k=1) > > does not? > The second-case does not provide a monotonically increasing array in the first argument. -Travis From dwf at cs.toronto.edu Fri Nov 14 18:07:28 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 14 Nov 2008 18:07:28 -0500 Subject: [SciPy-user] interpolate In-Reply-To: <491E000F.6010407@enthought.com> References: <20509445.post@talk.nabble.com> <491E000F.6010407@enthought.com> Message-ID: <4791EF56-3ACD-48D6-BC91-2D3E85F33895@cs.toronto.edu> On 14-Nov-08, at 5:47 PM, Travis E. Oliphant wrote: > The second-case does not provide a monotonically increasing array in > the > first argument. Hi Travis, Mildly related question about interpolate/FITPACK: when I fit using the 't=knots' arg to splrep (I have a lot more data than I want there to be knots, so I feed it some evenly spaced internal knots with the 't' parameter), it seems that the last 4 coefficients I get back are always zero. I was wondering if there's a good reason for this or if I'm doing something silly. I'm using k=3 if that makes a difference. 
David From aarchiba at physics.mcgill.ca Fri Nov 14 18:26:18 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Fri, 14 Nov 2008 18:26:18 -0500 Subject: [SciPy-user] interpolate In-Reply-To: <4791EF56-3ACD-48D6-BC91-2D3E85F33895@cs.toronto.edu> References: <20509445.post@talk.nabble.com> <491E000F.6010407@enthought.com> <4791EF56-3ACD-48D6-BC91-2D3E85F33895@cs.toronto.edu> Message-ID: 2008/11/14 David Warde-Farley : > On 14-Nov-08, at 5:47 PM, Travis E. Oliphant wrote: > >> The second-case does not provide a monotonically increasing array in >> the >> first argument. If you are not fitting a function of the form y=f(x) you may want to use the parametric spline code instead. > Mildly related question about interpolate/FITPACK: when I fit using > the 't=knots' arg to splrep (I have a lot more data than I want there > to be knots, so I feed it some evenly spaced internal knots with the > 't' parameter), it seems that the last 4 coefficients I get back are > always zero. I was wondering if there's a good reason for this or if > I'm doing something silly. I'm using k=3 if that makes a difference. The knots are specified in a form that allows them all to be treated identically. This sometimes means repeating knots or having zero coefficients. If you have more data points than you want knots, then you are going to be producing a spline which does not pass through all the data. The smoothing splines include an automatic number-of-knots selector, which you may prefer to specifying the number of knots yourself. it chooses (approximately) the minimum number of knots needed to let the curve pass within one sigma of the data points, so by adjusting the smoothing parameter and the weights you can tune the number of knots. Evaluation time is not particularly sensitive to the number of knots (though of course memory usage is). Anne From dwf at cs.toronto.edu Fri Nov 14 23:16:28 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 14 Nov 2008 23:16:28 -0500 Subject: [SciPy-user] interpolate In-Reply-To: References: <20509445.post@talk.nabble.com> <491E000F.6010407@enthought.com> <4791EF56-3ACD-48D6-BC91-2D3E85F33895@cs.toronto.edu> Message-ID: <33A62AF2-AD61-42ED-882B-5A6769F475D2@cs.toronto.edu> On 14-Nov-08, at 6:26 PM, Anne Archibald wrote: > The knots are specified in a form that allows them all to be treated > identically. This sometimes means repeating knots or having zero > coefficients. > > If you have more data points than you want knots, then you are going > to be producing a spline which does not pass through all the data. The > smoothing splines include an automatic number-of-knots selector, which > you may prefer to specifying the number of knots yourself. it chooses > (approximately) the minimum number of knots needed to let the curve > pass within one sigma of the data points, so by adjusting the > smoothing parameter and the weights you can tune the number of knots. > Evaluation time is not particularly sensitive to the number of knots > (though of course memory usage is). I see. I'm interested in doing is modeling the variation in the curves, presumably via a description of the joint distribution of the spline coefficients. This gets difficult if the number of knots is variable, which is why I've gone this route. It's not important that the curves fit the data exactly, but part of the reason for fitting splines is to reduce each of many, many curves to a fixed-length description. Does this make sense? 
David From robert.kern at gmail.com Fri Nov 14 23:25:02 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 14 Nov 2008 22:25:02 -0600 Subject: [SciPy-user] interpolate In-Reply-To: <33A62AF2-AD61-42ED-882B-5A6769F475D2@cs.toronto.edu> References: <20509445.post@talk.nabble.com> <491E000F.6010407@enthought.com> <4791EF56-3ACD-48D6-BC91-2D3E85F33895@cs.toronto.edu> <33A62AF2-AD61-42ED-882B-5A6769F475D2@cs.toronto.edu> Message-ID: <3d375d730811142025r2f571b4w7e2db789d8f707cb@mail.gmail.com> On Fri, Nov 14, 2008 at 22:16, David Warde-Farley wrote: > > On 14-Nov-08, at 6:26 PM, Anne Archibald wrote: > >> The knots are specified in a form that allows them all to be treated >> identically. This sometimes means repeating knots or having zero >> coefficients. >> >> If you have more data points than you want knots, then you are going >> to be producing a spline which does not pass through all the data. The >> smoothing splines include an automatic number-of-knots selector, which >> you may prefer to specifying the number of knots yourself. it chooses >> (approximately) the minimum number of knots needed to let the curve >> pass within one sigma of the data points, so by adjusting the >> smoothing parameter and the weights you can tune the number of knots. >> Evaluation time is not particularly sensitive to the number of knots >> (though of course memory usage is). > > I see. I'm interested in doing is modeling the variation in the > curves, presumably via a description of the joint distribution of the > spline coefficients. This gets difficult if the number of knots is > variable, which is why I've gone this route. It's not important that > the curves fit the data exactly, but part of the reason for fitting > splines is to reduce each of many, many curves to a fixed-length > description. Does this make sense? I'm not entirely sure how applicable this paper is to your problem, but it does have an approach for dealing with varying numbers of knots in an MCMC context: http://www.jstatsoft.org/v26/i01 An Implementation of Bayesian Adaptive Regression Splines (BARS) in C with S and R Wrappers BARS (DiMatteo, Genovese, and Kass 2001) uses the powerful reversible-jump MCMC engine to perform spline-based generalized nonparametric regression. It has been shown to work well in terms of having small mean-squared error in many examples (smaller than known competitors), as well as producing visually-appealing fits that are smooth (filtering out high-frequency noise) while adapting to sudden changes (retaining high-frequency signal). However, BARS is computationally intensive. The original implementation in S was too slow to be practical in certain situations, and was found to handle some data sets incorrectly. We have implemented BARS in C for the normal and Poisson cases, the latter being important in neurophysiological and other point-process applications. The C implementation includes all needed subroutines for fitting Poisson regression, manipulating B-splines (using code created by Bates and Venables), and finding starting values for Poisson regression (using code for density estimation created by Kooperberg). The code utilizes only freely-available external libraries (LAPACK and BLAS) and is otherwise self-contained. We have also provided wrappers so that BARS can be used easily within S or R. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From pav at iki.fi Sat Nov 15 08:17:25 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 15 Nov 2008 13:17:25 +0000 (UTC) Subject: [SciPy-user] calculating numerical jacobian References: Message-ID: Fri, 14 Nov 2008 13:33:45 -0500, Tony S Yu wrote: > Does scipy provide any functions to calculate the Jacobian of a > function? I noticed that minpack (used by scipy.optimize) approximates a > Jacobian numerically, but scipy doesn't seem to provide a wrapper for > these minpack functions. Any ideas? If you're happy with naive differentiation (as you may well be -- optimization and root finding typically isn't too picky), something like this may work: {{{ import numpy as np def jacobian(f, u, eps=1e-6): """Evaluate partial derivatives of f(u) numerically. :note: This routine is currently naive and could be improved. :returns: (*f.shape, *u.shape) array ``df``, where df[i,j] ~= (d f_i / d u_j)(u) """ f0 = np.asarray(f(u)) # asarray: because of matrices u_shape = u.shape nu = np.prod(u_shape) f_shape = f0.shape nf = np.prod(f_shape) df = np.empty([nf, nu]) for k in range(nu): du = np.zeros(nu) du[k] = max(eps*abs(u.flat[k]), eps) f1 = np.asarray(f(u + np.reshape(du, u_shape))) df[:,k] = np.reshape((f1 - f0) / du[k], [nf]) df.shape = f_shape + u_shape return df }}} Requires the function be vectorized. -- Pauli Virtanen From robfalck at gmail.com Sat Nov 15 08:26:48 2008 From: robfalck at gmail.com (Rob Falck) Date: Sat, 15 Nov 2008 08:26:48 -0500 Subject: [SciPy-user] calculating numerical jacobian In-Reply-To: References: Message-ID: When I added scipy.optimize.fmin_slsqp for Scipy 0.7.0 I included a function approx_jacobian that just does a basic forward finite difference like approx_fprime. http://www.scipy.org/scipy/scipy/attachment/ticket/570/slsqp.py If you don't yet have 0.7.0, here's the code: _epsilon = sqrt(finfo(float).eps) def approx_jacobian(x,func,epsilon,*args): """Approximate the Jacobian matrix of callable function func * Parameters x - The state vector at which the Jacobian matrix is desired func - A vector-valued function of the form f(x,*args) epsilon - The perturbation used to determine the partial derivatives *args - Additional arguments passed to func * Returns An array of dimensions (lenf, lenx) where lenf is the length of the outputs of func, and lenx is the number of elements in x * Notes The approximation is done using forward differences """ x0 = asfarray(x) f0 = func(*((x0,)+args)) jac = zeros([len(x0),len(f0)]) dx = zeros(len(x0)) for i in range(len(x0)): dx[i] = epsilon jac[i] = (func(*((x0+dx,)+args)) - f0)/epsilon dx[i] = 0.0 return jac.transpose() On Sat, Nov 15, 2008 at 8:17 AM, Pauli Virtanen wrote: > Fri, 14 Nov 2008 13:33:45 -0500, Tony S Yu wrote: > > Does scipy provide any functions to calculate the Jacobian of a > > function? I noticed that minpack (used by scipy.optimize) approximates a > > Jacobian numerically, but scipy doesn't seem to provide a wrapper for > > these minpack functions. Any ideas? > > If you're happy with naive differentiation (as you may well be -- > optimization and root finding typically isn't too picky), something > like this may work: > > {{{ > import numpy as np > > def jacobian(f, u, eps=1e-6): > """Evaluate partial derivatives of f(u) numerically. > > :note: > This routine is currently naive and could be improved. > > :returns: > (*f.shape, *u.shape) array ``df``, where df[i,j] ~= (d f_i / d u_j)(u) > """ > f0 = np.asarray(f(u)) # asarray: because of matrices > > u_shape = u.shape > nu = np.prod(u_shape) > > f_shape = f0.shape > nf = np.prod(f_shape) > > df = np.empty([nf, nu]) > > for k in range(nu): > du = np.zeros(nu) > du[k] = max(eps*abs(u.flat[k]), eps) > f1 = np.asarray(f(u + np.reshape(du, u_shape))) > df[:,k] = np.reshape((f1 - f0) / du[k], [nf]) > > df.shape = f_shape + u_shape > return df > }}} > > Requires the function be vectorized. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL:
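As a quick sanity check of the jacobian sketch above (assuming it has been defined in the session; g is a hypothetical test function whose analytic Jacobian at u = (1, 2) is [[2, 0], [2, 1]]):

import numpy as np

def g(u):
    # d(u0**2)/du = (2*u0, 0); d(u0*u1)/du = (u1, u0)
    return np.array([u[0]**2, u[0]*u[1]])

print jacobian(g, np.array([1.0, 2.0]))
# ~ [[ 2.  0.]
#    [ 2.  1.]]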
From mforbes at physics.ubc.ca Sat Nov 15 16:54:15 2008 From: mforbes at physics.ubc.ca (Michael McNeil Forbes) Date: Sat, 15 Nov 2008 14:54:15 -0700 Subject: [SciPy-user] interpolate In-Reply-To: <20509445.post@talk.nabble.com> References: <20509445.post@talk.nabble.com> Message-ID: <7D874FD7-34B8-440B-A500-563F6E492525@physics.ubc.ca> The splrep function seems to require the input data to be sorted along the abscissa. tck2=interpolate.splrep(scipy.array([2.0,8.2]),scipy.array ([4.0,1.0]),k=1) You can use argsort to sort the abscissa and then use the resulting indices to sort the dependent variable: import numpy as np import scipy as sp x = np.array([8.2, 2.0]) y = np.array([1.0, 4.0]) inds = np.argsort(x) tck = sp.interpolate.splrep(x[inds], y[inds], k=1) Michael. On Nov 14, 2008, at 3:38 PM, AndrewF wrote: > > Why does > > tck1=interpolate.splrep(scipy.array([1.0,2.0]),scipy.array > ([1.0,4.0]),k=1) > > work okay, but > > tck2=interpolate.splrep(scipy.array([8.2,2.0]),scipy.array > ([1.0,4.0]),k=1) > > does not? > > I ultimately want to interpolate a more complicated, non-linear > data set, > but here I am trying to reduce the problem to the simplest possible > level. > > Thanks. > -- > View this message in context: http://www.nabble.com/interpolate- > tp20509445p20509445.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From aarchiba at physics.mcgill.ca Sat Nov 15 17:22:14 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Sat, 15 Nov 2008 17:22:14 -0500 Subject: [SciPy-user] interpolate In-Reply-To: <33A62AF2-AD61-42ED-882B-5A6769F475D2@cs.toronto.edu> References: <20509445.post@talk.nabble.com> <491E000F.6010407@enthought.com> <4791EF56-3ACD-48D6-BC91-2D3E85F33895@cs.toronto.edu> <33A62AF2-AD61-42ED-882B-5A6769F475D2@cs.toronto.edu> Message-ID: 2008/11/14 David Warde-Farley : > > On 14-Nov-08, at 6:26 PM, Anne Archibald wrote: > >> The knots are specified in a form that allows them all to be treated >> identically. This sometimes means repeating knots or having zero >> coefficients. >> >> If you have more data points than you want knots, then you are going >> to be producing a spline which does not pass through all the data. The >> smoothing splines include an automatic number-of-knots selector, which >> you may prefer to specifying the number of knots yourself. it chooses >> (approximately) the minimum number of knots needed to let the curve >> pass within one sigma of the data points, so by adjusting the >> smoothing parameter and the weights you can tune the number of knots.
>> Evaluation time is not particularly sensitive to the number of knots >> (though of course memory usage is). > I see. I'm interested in doing is modeling the variation in the > curves, presumably via a description of the joint distribution of the > spline coefficients. This gets difficult if the number of knots is > variable, which is why I've gone this route. It's not important that > the curves fit the data exactly, but part of the reason for fitting > splines is to reduce each of many, many curves to a fixed-length > description. Does this make sense? This makes sense but may pose some additional difficulties. In particular, the way the fitpack routines select their knots, even when the number is specified, is by successive subdivision. So you're going to get "jumps" in your description where a knot hops from one place to another as you vary the data you're fitting to. You might want to avoid the fitpack fitting routines entirely, at least in the stage where you are varying the curve: fix not just the number of knots but the knot positions, and vary only the coefficients. If you correctly identify a basis for the space of splines on your given set of knots, fitting each curve becomes a linear least-squares fit, which you can easily do in scipy. The fitting won't be quite as efficient as fitpack, though if you are clever you might be able to make sure it's a sparse problem. But this ought to free you from ugly discontinuities in your parameterization. You could of course do this with splines implemented from scratch, but if you can understand the fitpack tck representation well enough, you should be able to both use fitpack to evaluate your splines (efficiently, in C code) and use fitpack to come up with an initial set of knots. Anne
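To make the fixed-knots least-squares idea concrete, here is a rough sketch with hypothetical data and knot placement; the design matrix is built by evaluating splev on unit coefficient vectors, using the tck layout (with trailing zero coefficients) discussed earlier in this thread:

import numpy as np
from scipy import interpolate

# Hypothetical data: many noisy samples of a smooth curve.
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.05 * np.random.randn(200)

k = 3
# Fixed knot vector: k+1 copies of each boundary knot plus evenly
# spaced interior knots, as FITPACK expects.
interior = np.linspace(1.0, 9.0, 8)
t = np.concatenate(([x[0]] * (k + 1), interior, [x[-1]] * (k + 1)))
nbasis = len(t) - k - 1

# Design matrix: column j is the j-th B-spline basis function
# evaluated at the data points (unit coefficient vector trick).
B = np.empty((len(x), nbasis))
for j in range(nbasis):
    c = np.zeros(len(t))    # trailing zeros, matching splrep's layout
    c[j] = 1.0
    B[:, j] = interpolate.splev(x, (t, c, k))

# Linear least-squares fit for the coefficients; every curve fitted on
# the same knots then has the same fixed-length description.
coef, res, rank, sv = np.linalg.lstsq(B, y)
cfull = np.zeros(len(t))
cfull[:nbasis] = coef
yfit = interpolate.splev(x, (t, cfull, k))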
Re: "TWPBVPLC BVP" Wrap the name while you're at it :) -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From pav at iki.fi Sun Nov 16 18:55:07 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 16 Nov 2008 23:55:07 +0000 (UTC) Subject: [SciPy-user] Wrapping TWPBVPLC BVP solver References: <113e17f20811161503i459a96ape027c3186348f043@mail.gmail.com> Message-ID: Sun, 16 Nov 2008 15:03:02 -0800, John Salvatier wrote: > I want to wrap the Fortran Boundary Value Problem Solver TWPBVPLC > (http:// > www.ma.ic.ac.uk/~jcash/BVP_software/readme.php) > which I believe I have > arranged to have released under a BSD license. Has someone else done > this before? I don't want to repeat work unnecessarily. I want to wrap > the Fortran Boundary Value Problem Solver TWPBVPLC (http://
href="http://www.ma.ic.ac.uk/%7Ejcash/BVP_software/readme.php" > target="_blank">www.ma.ic.ac.uk/~jcash/BVP_software/readme.php) > which I believe I have
arranged to have released under a BSD > license. Has someone else done
Not TWPBVP*, but COLNEW is here: http://www.iki.fi/pav/software/bvp/index.html COLNEW is of course non-free, but TWPBVP should go along the same lines -- the Python wrapper is BSD, so you can lift whatever you want from the wrapper part. Just avoid reading anything in Fortran from there, to stay within a clean room environment :) I think it would be useful if you considered these things when writing the TWPBVP* wrapper: * To cut down Python <-> Fortran call overhead, modify TWPBVP* so that FSUB and DFSUB evaluate the result for all X-points in the mesh in one call. Also collapse GSUB and DGSUB by vectorizing the `i` index away. This makes writing efficient vectorised Python code quite a bit easier. * Make DFSUB and DGSUB optional, by falling back to simple numerical differentiation if the user omits them. Writing the DGSUB and DFSUB takes typically much more work than GSUB and FSUB, and it's nice not to have to do it when it's not necessary. (I think you can lift some code from this directly from the colnew wrapper.) I remember Ascher and Bader (?) mentioning in some paper that naive numerical differentiation typically is enough, and this indeed seems to be usually the case. -- Pauli Virtanen From robert.kern at gmail.com Sun Nov 16 19:49:52 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 16 Nov 2008 18:49:52 -0600 Subject: [SciPy-user] Shift the rows of a matrix In-Reply-To: References: Message-ID: <3d375d730811161649r3adb4606l3b6efa0aa937733a@mail.gmail.com> On Fri, Nov 7, 2008 at 03:12, Roger Herikstad wrote: > Hi list, > I was curious if anyone has a good method of shifting individual rows > of a matrix? My problem is that I have a matrix consisting of > waveforms on the rows, and I want to shift each waveform, i.e. pad > with zeros on either end, depending on where the minimum point of each > waveform is located relative to a pre-determined zero point. For > example, if each waveform consists of 32 data points, I would be > interested in aligning each waveform so that the minimum point always > happens on index 10. My current solution is to loop through each > waveform and taking the dot product with a shift matrix, but I'd > rather avoid the for loop if possible. If anyone has any thoughts, I'd > be happy for any input. Thanks! You can do this with fancy indexing. Find the minimum index across the rows: jmin = x.argmin(axis=1) Shift this to correspond to row indices in the new array such that the minimum will be on index 10 (note: I suspect you will need to pad to at least 2*32 and place the minimum in the center to be safe, but you know your problem better than I). jmin_shift = 10 - jmin Turn this array into a column vector and add it to arange(32) to create an index array of the same size as the original array: j = jmin_shift[:,np.newaxis] + np.arange(32) These are column indices. To get the row indices which we will need, we can use just a column vector of the right size. Broadcasting will handle the rest: i = np.arange(len(x))[:,np.newaxis] Use these to set the values in the new array: y[i,j] = x -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From berthold.hoellmann at gl-group.com Mon Nov 17 04:18:33 2008 From: berthold.hoellmann at gl-group.com (=?iso-8859-15?Q?Berthold_=22H=F6llmann=22?=) Date: Mon, 17 Nov 2008 10:18:33 +0100 Subject: [SciPy-user] f2py: Adding F_WRAPPEDFUNC to *module.c without actually wrapping a function Message-ID: I am wrapping some LAPACK routines currently not handled in scipy.linalg. Some of them require workspace. In order to provide the optimal workspace I wrote some helper routines, taking the FORTRAN code for calculating the optimal workspace and translating it to C, e.g.:: static int dsytrf_lwork(int n, char uplo) { int one = 1; int name_len = 6; int opts_len = 1; int none = -1; int nb; (*F_WRAPPEDFUNC(ilaenv,ILAENV))(&nb, &one, "DSYTRF", &uplo, &n, &none, &none, &none, name_len, opts_len); return (n*nb); } and then in the wrapper code for dsytrf:: integer intent(hide), depend(n, uplo) :: lwork = dsytrf_lwork(n, *uplo) This requires the function wrapper for the FORTRAN ilaenv function:: extern void F_WRAPPEDFUNC(ilaenv,ILAENV)(int*,int*,string,string,int*,int*,int*,int*,size_t,size_t); Now my code only works when I also wrap the ilanev function (or, I guess, any other function), otherwise the F_WRAPPEDFUNC is missing in the generated code. Is there a way to get the F_WRAPPEDFUNC macro to the wrapper code without wrapping a function? the ilanev function should not become part of the interface, nor is there any other function that should. Kind regards Berthold Höllmann -- Germanischer Lloyd AG CAE Development Vorsetzen 35 20459 Hamburg Phone: +49(0)40 36149-7374 Fax: +49(0)40 36149-7320 e-mail: berthold.hoellmann at gl-group.com Internet: http://www.gl-group.com -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 188 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: disclaimer.txt Type: application/octet-stream Size: 2196 bytes Desc: not available URL: From pearu at cens.ioc.ee Mon Nov 17 07:57:32 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 17 Nov 2008 14:57:32 +0200 (EET) Subject: [SciPy-user] f2py: Adding F_WRAPPEDFUNC to *module.c without actually wrapping a function In-Reply-To: References: Message-ID: <52214.172.17.0.4.1226926652.squirrel@cens.ioc.ee> On Mon, November 17, 2008 11:18 am, Berthold "Höllmann" wrote: > > I am wrapping some LAPACK routines currently not handled in > scipy.linalg. Some of them require workspace. In order to provide the > optimal workspace I wrote some helper routines, taking the FORTRAN code > for calculating the optimal workspace and translating it to C, e.g.:: > > static int dsytrf_lwork(int n, char uplo) { > int one = 1; > int name_len = 6; > int opts_len = 1; > int none = -1; > int nb; > (*F_WRAPPEDFUNC(ilaenv,ILAENV))(&nb, &one, "DSYTRF", &uplo, &n, &none, > &none, &none, name_len, opts_len); > return (n*nb); > } > > and then in the wrapper code for dsytrf:: > > integer intent(hide), depend(n, uplo) :: lwork = dsytrf_lwork(n, > *uplo) > > This requires the function wrapper for the FORTRAN ilaenv function:: > > extern void > F_WRAPPEDFUNC(ilaenv,ILAENV)(int*,int*,string,string,int*,int*,int*,int*,size_t,size_t); > > Now my code only works when I also wrap the ilanev function (or, I > guess, any other function), otherwise the F_WRAPPEDFUNC is missing in > the generated code. Is there a way to get the F_WRAPPEDFUNC macro to the > wrapper code without wrapping a function?
> The ilaenv function should not
> become part of the interface, nor is there any other function that should.

The answer is No. You either have to have a function wrapped or just copy the F_WRAPPEDFUNC macro from numpy/f2py/cfuncs.py to your .pyf file (look at the usercode statement in the f2py manual).

HTH,
Pearu
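[To make the suggestion above concrete, a rough sketch of where a usercode block could sit in the .pyf file. The module name is illustrative, the dsytrf signature is elided, and the macro body itself must be copied verbatim from numpy/f2py/cfuncs.py for your f2py version -- it is deliberately not reproduced here:]

    python module _lapack_extras   ! hypothetical module name
        usercode '''
        /* Paste the F_WRAPPEDFUNC definition from numpy/f2py/cfuncs.py
           here, unchanged, so the macro exists without wrapping ilaenv. */

        static int dsytrf_lwork(int n, char uplo) {
            int one = 1;
            int name_len = 6;
            int opts_len = 1;
            int none = -1;
            int nb;
            (*F_WRAPPEDFUNC(ilaenv,ILAENV))(&nb, &one, "DSYTRF", &uplo, &n,
                                            &none, &none, &none,
                                            name_len, opts_len);
            return (n*nb);
        }
        '''
        interface
            ! dsytrf declarations elided; the workspace line stays as in
            ! the question:
            ! integer intent(hide), depend(n, uplo) :: lwork = dsytrf_lwork(n, *uplo)
        end interface
    end python module _lapack_extras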
From soren.skou.nielsen at gmail.com Mon Nov 17 09:40:37 2008
From: soren.skou.nielsen at gmail.com (=?ISO-8859-1?Q?S=F8ren_Nielsen?=)
Date: Mon, 17 Nov 2008 15:40:37 +0100
Subject: [SciPy-user] Ext_tools converters not working??
Message-ID:

Can anyone explain why this fails? This piece of code runs perfectly using weave.inline and type_converters = blitz.. Obviously it can't handle 2D arrays anymore. It's just a stupid example to illustrate that.

Thanks,
Soren

CODE :
------------------------------------------------------------------------------------------------
mod = ext_tools.ext_module('ravg_ext')

test = zeros((5,5))
xlen = 5
ylen = 5

code = """
int x, y;

for( x = 0; x < xlen; x++)
{
  for( y = 0; y < ylen; y++)
  {
    test(x,y) = 2;
  }
}
"""

ravg = ext_tools.ext_function('ravg', code, ['xlen', 'ylen', 'test'])
mod.add_function(ravg)
mod.compile(compiler = 'gcc')

RESULT:
------------------------------------------------------------------------------------------------
C:\Temp\ravg_ext.cpp: In function `PyObject* ravg(PyObject*, PyObject*, PyObject*)':
C:\Temp\ravg_ext.cpp:654: error: `test' cannot be used as a function
C:\Temp\ravg_ext.cpp:641: warning: unused variable 'Ntest'
C:\Temp\ravg_ext.cpp:642: warning: unused variable 'Stest'
C:\Temp\ravg_ext.cpp:643: warning: unused variable 'Dtest'

Traceback (most recent call last):
  File "C:\Temp\ravg_extension.py", line 132, in ?
    build_ravg_extension()
  File "C:\Temp\ravg_extension.py", line 125, in build_ravg_extension
    mod.compile(compiler = 'gcc')
  File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line 365, in compile
    verbose = verbose, **kw)
  File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", line 269, in build_extension
    setup(name = module_name, ext_modules = [ext],verbose=verb)
  File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 184, in setup
    return old_setup(**new_attr)
  File "C:\Python24\Lib\distutils\core.py", line 166, in setup
    raise SystemExit, "error: " + str(msg)
CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c C:\Temp\ravg_ext.cpp -o c:\docume~1\ssn\locals~1\temp\ssn\python24_intermediate\compiler_894ad5ed761bb51736c6d2b7872dc212\Releas
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From rob.braswell at unh.edu Mon Nov 17 10:18:40 2008
From: rob.braswell at unh.edu (Bobby H. Braswell)
Date: Mon, 17 Nov 2008 10:18:40 -0500
Subject: [SciPy-user] weave problem on ubuntu 8.10
In-Reply-To: References: Message-ID: <1226935120.32694.66.camel@waage.sr.unh.edu>

Hi-

By coincidence I am trying to get weave working on a new system, I had previously been using it successfully under OS X with the Fink version of SciPy. I don't want to distract from Soren's question, but when I try his simple example (or any of my own) using converters.blitz, I get a very long error message, actually mostly warnings, but it ends like this:

>>> ravg = weave.inline(code, ['xlen', 'ylen', 'test'], type_converters=converters.blitz, compiler = 'gcc')
...hundreds of lines...
Traceback (most recent call last):
  File "", line 2, in
  File "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", line 339, in inline
    **kw)
  File "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", line 447, in compile_function
    verbose=verbose, **kw)
  File "/usr/lib/python2.5/site-packages/scipy/weave/ext_tools.py", line 365, in compile
    verbose = verbose, **kw)
  File "/usr/lib/python2.5/site-packages/scipy/weave/build_tools.py", line 269, in build_extension
    setup(name = module_name, ext_modules = [ext],verbose=verb)
  File "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", line 184, in setup
    return old_setup(**new_attr)
  File "/usr/lib/python2.5/distutils/core.py", line 168, in setup
    raise SystemExit, "error: " + str(msg)
scipy.weave.build_tools.CompileError: error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -fPIC -I/usr/lib/python2.5/site-packages/scipy/weave -I/usr/lib/python2.5/site-packages/scipy/weave/scxx -I/usr/lib/python2.5/site-packages/scipy/weave/blitz -I/usr/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c /home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.cpp -o /tmp/braswell/python25_intermediate/compiler_a9bbef2f14d61f7aa8f0ba6e068e18c2/home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.o" failed with exit status 1

Sorry if this is more of a compiler/Ubuntu problem, I'm not sure about that. I'd be grateful to hear from someone who has had or not had problems with Weave on Ubuntu 8.10.

Thanks very much,
Rob

On Mon, 2008-11-17 at 15:40 +0100, Søren Nielsen wrote:
> Can anyone explain why this fails? This piece of code runs perfectly
> using weave.inline and type_converters = blitz..
>
> Obviously it can't handle 2D arrays anymore. It's just a stupid
> example to illustrate that.
>
> Thanks,
> Soren
>
> CODE :
> ------------------------------------------------------------------------------------------------
> mod = ext_tools.ext_module('ravg_ext')
>
> test = zeros((5,5))
> xlen = 5
> ylen = 5
>
> code = """
> int x, y;
>
> for( x = 0; x < xlen; x++)
> {
>   for( y = 0; y < ylen; y++)
>   {
>     test(x,y) = 2;
>   }
> }
> """
>
> ravg = ext_tools.ext_function('ravg', code, ['xlen', 'ylen', 'test'])
> mod.add_function(ravg)
> mod.compile(compiler = 'gcc')
>
> RESULT:
> ------------------------------------------------------------------------------------------------
> C:\Temp\ravg_ext.cpp: In function `PyObject* ravg(PyObject*, PyObject*, PyObject*)':
> C:\Temp\ravg_ext.cpp:654: error: `test' cannot be used as a function
> C:\Temp\ravg_ext.cpp:641: warning: unused variable 'Ntest'
> C:\Temp\ravg_ext.cpp:642: warning: unused variable 'Stest'
> C:\Temp\ravg_ext.cpp:643: warning: unused variable 'Dtest'
>
> Traceback (most recent call last):
>   File "C:\Temp\ravg_extension.py", line 132, in ?
> build_ravg_extension() > File "C:\Temp\ravg_extension.py", line 125, in build_ravg_extension > mod.compile(compiler = 'gcc') > File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line > 365, in compile > verbose = verbose, **kw) > File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", > line 269, in build_extension > setup(name = module_name, ext_modules = [ext],verbose=verb) > File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line > 184, in setup > return old_setup(**new_attr) > File "C:\Python24\Lib\distutils\core.py", line 166, in setup > raise SystemExit, "error: " + str(msg) > CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -IC:\Python24 > \lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy > \weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC: > \Python24\include -IC:\Python24\PC -c C:\Temp\ravg_ext.cpp -o c: > \docume~1\ssn\locals~1\temp\ssn\python24_intermediate > \compiler_894ad5ed761bb51736c6d2b7872dc212\Releas > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From soren.skou.nielsen at gmail.com Mon Nov 17 11:08:02 2008 From: soren.skou.nielsen at gmail.com (=?ISO-8859-1?Q?S=F8ren_Nielsen?=) Date: Mon, 17 Nov 2008 17:08:02 +0100 Subject: [SciPy-user] weave problem on ubuntu 8.10 In-Reply-To: <1226935120.32694.66.camel@waage.sr.unh.edu> References: <1226935120.32694.66.camel@waage.sr.unh.edu> Message-ID: Hi Rob, What are the first lines of your error message? I found the answer to my own question... I just had to add type_converters = converters.blitz under the ext_function. On Mon, Nov 17, 2008 at 4:18 PM, Bobby H. Braswell wrote: > > Hi- > > By coincidence I am trying to get weave working on a new system, I had > previously been using it successfully under OS X with the Fink version of > SciPy. I don't want to distract from Soren's question but when I try his > simple example (or any of my own) using converters.blitz, I get a very long > error message, actually mostly warnings, but it ends like this: > > >>> ravg = weave.inline(code, ['xlen', 'ylen', 'test'], > type_converters=converters.blitz, compiler = 'gcc') > ...hundreds of lines... 
> Traceback (most recent call last): > File "", line 2, in > File "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", line > 339, in inline > **kw) > File "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", line > 447, in compile_function > verbose=verbose, **kw) > File "/usr/lib/python2.5/site-packages/scipy/weave/ext_tools.py", line > 365, in compile > verbose = verbose, **kw) > File "/usr/lib/python2.5/site-packages/scipy/weave/build_tools.py", line > 269, in build_extension > setup(name = module_name, ext_modules = [ext],verbose=verb) > File "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", line > 184, in setup > return old_setup(**new_attr) > File "/usr/lib/python2.5/distutils/core.py", line 168, in setup > raise SystemExit, "error: " + str(msg) > scipy.weave.build_tools.CompileError: error: Command "g++ -pthread > -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -fPIC > -I/usr/lib/python2.5/site-packages/scipy/weave > -I/usr/lib/python2.5/site-packages/scipy/weave/scxx > -I/usr/lib/python2.5/site-packages/scipy/weave/blitz > -I/usr/lib/python2.5/site-packages/numpy/core/include > -I/usr/include/python2.5 -c > /home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.cpp > -o > /tmp/braswell/python25_intermediate/compiler_a9bbef2f14d61f7aa8f0ba6e068e18c2/home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.o" > failed with exit status 1 > > Sorry if this is more of a compiler/Ubuntu problem, I'm not sure about > that. I'd be grateful to hear from someone who has had or not had problems > with Weave on Ubuntu 8.10. > > Thanks very much, > Rob > > On Mon, 2008-11-17 at 15:40 +0100, S?ren Nielsen wrote: > > Can anyone explain why this fails? This piece of code runs perfectly using > weave.inline and type_converters = blitz.. > > Obviously it can't handle 2D arrays anymore. It's just a stupid example to > illustrate that. > > Thanks, > Soren > > CODE : > > ------------------------------------------------------------------------------------------------ > mod = ext_tools.ext_module('ravg_ext') > > test = zeros((5,5)) > xlen = 5 > ylen = 5 > > code = """ > int x, y; > > for( x = 0; x < xlen; x++) > { > for( y = 0; y < ylen; y++) > { > test(x,y) = 2; > } > } > > """ > > ravg = ext_tools.ext_function('ravg', code, ['xlen', 'ylen', 'test']) > mod.add_function(ravg) > mod.compile(compiler = 'gcc') > > RESULT: > > ------------------------------------------------------------------------------------------------ > C:\Temp\ravg_ext.cpp: In function `PyObject* ravg(PyObject*, PyObject*, > PyObject*)': > C:\Temp\ravg_ext.cpp:654: error: `test' cannot be used as a function > C:\Temp\ravg_ext.cpp:641: warning: unused variable 'Ntest' > C:\Temp\ravg_ext.cpp:642: warning: unused variable 'Stest' > C:\Temp\ravg_ext.cpp:643: warning: unused variable 'Dtest' > > Traceback (most recent call last): > File "C:\Temp\ravg_extension.py", line 132, in ? 
> build_ravg_extension() > File "C:\Temp\ravg_extension.py", line 125, in build_ravg_extension > mod.compile(compiler = 'gcc') > File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line 365, > in compile > verbose = verbose, **kw) > File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", line > 269, in build_extension > setup(name = module_name, ext_modules = [ext],verbose=verb) > File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 184, > in setup > return old_setup(**new_attr) > File "C:\Python24\Lib\distutils\core.py", line 166, in setup > raise SystemExit, "error: " + str(msg) > CompileError: error: Command "g++ -mno-cygwin -O2 -Wall > -IC:\Python24\lib\site-packages\scipy\weave > -IC:\Python24\lib\site-packages\scipy\weave\scxx > -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include > -IC:\Python24\PC -c C:\Temp\ravg_ext.cpp -o > c:\docume~1\ssn\locals~1\temp\ssn\python24_intermediate\compiler_894ad5ed761bb51736c6d2b7872dc212\Releas > > > _______________________________________________ > SciPy-user mailing listSciPy-user at scipy.orghttp://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.braswell at unh.edu Mon Nov 17 11:18:30 2008 From: rob.braswell at unh.edu (Bobby H. Braswell) Date: Mon, 17 Nov 2008 11:18:30 -0500 Subject: [SciPy-user] weave problem on ubuntu 8.10 In-Reply-To: References: <1226935120.32694.66.camel@waage.sr.unh.edu> Message-ID: <1226938710.32694.75.camel@waage.sr.unh.edu> Hi, thanks for the reply. Here are the first lines: In file included from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/applics.h:400, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecexpr.h:32, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecpick.cc:16, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecpick.h:293, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vector.h:449, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/tinyvec.h:430, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array-impl.h:44, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array.h:32, from /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:11: /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h: In static member function ?static long int blitz::_bz_abs::apply(long int)?: /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h:45: error: ?labs? is not a member of ?std? In file included from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/funcs.h:29, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/newet.h:29, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/et.h:27, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array-impl.h:2515, from /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array.h:32, from /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:11: /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h: In static member function ?static int blitz::Fn_abs::apply(int)?: /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:509: error: call of overloaded ?abs(int&)? 
is ambiguous /usr/include/c++/4.3/cmath:99: note: candidates are: double std::abs(double) /usr/include/c++/4.3/cmath:103: note: float std::abs(float) /usr/include/c++/4.3/cmath:107: note: long double std::abs(long double) /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h: In static member function ?static long int blitz::Fn_abs::apply(long int)?: /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:530: error: ?labs? is not a member of ?std? /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp: In function ?char* find_type(PyObject*)?: /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:35: warning: deprecated conversion from string constant to ?char*? ... On Mon, 2008-11-17 at 17:08 +0100, S?ren Nielsen wrote: > Hi Rob, > > What are the first lines of your error message? > > I found the answer to my own question... I just had to add > type_converters = converters.blitz under the ext_function. > > > On Mon, Nov 17, 2008 at 4:18 PM, Bobby H. Braswell > wrote: > > > Hi- > > By coincidence I am trying to get weave working on a new > system, I had previously been using it successfully under OS X > with the Fink version of SciPy. I don't want to distract from > Soren's question but when I try his simple example (or any of > my own) using converters.blitz, I get a very long error > message, actually mostly warnings, but it ends like this: > > >>> ravg = weave.inline(code, ['xlen', 'ylen', 'test'], > type_converters=converters.blitz, compiler = 'gcc') > ...hundreds of lines... > Traceback (most recent call last): > File "", line 2, in > File > "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", line 339, in inline > **kw) > File > "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", line 447, in compile_function > verbose=verbose, **kw) > File > "/usr/lib/python2.5/site-packages/scipy/weave/ext_tools.py", > line 365, in compile > verbose = verbose, **kw) > File > "/usr/lib/python2.5/site-packages/scipy/weave/build_tools.py", > line 269, in build_extension > setup(name = module_name, ext_modules = > [ext],verbose=verb) > File > "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", > line 184, in setup > return old_setup(**new_attr) > File "/usr/lib/python2.5/distutils/core.py", line 168, in > setup > raise SystemExit, "error: " + str(msg) > scipy.weave.build_tools.CompileError: error: Command "g++ > -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -fPIC > -I/usr/lib/python2.5/site-packages/scipy/weave > -I/usr/lib/python2.5/site-packages/scipy/weave/scxx > -I/usr/lib/python2.5/site-packages/scipy/weave/blitz > -I/usr/lib/python2.5/site-packages/numpy/core/include > -I/usr/include/python2.5 > -c /home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.cpp -o /tmp/braswell/python25_intermediate/compiler_a9bbef2f14d61f7aa8f0ba6e068e18c2/home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.o" failed with exit status 1 > > Sorry if this is more of a compiler/Ubuntu problem, I'm not > sure about that. I'd be grateful to hear from someone who has > had or not had problems with Weave on Ubuntu 8.10. > > Thanks very much, > Rob > > On Mon, 2008-11-17 at 15:40 +0100, S?ren Nielsen wrote: > > > Can anyone explain why this fails? This piece of code runs > > perfectly using weave.inline and type_converters = blitz.. > > > > Obviously it can't handle 2D arrays anymore. It's just a > > stupid example to illustrate that. 
> > > > Thanks, > > Soren > > > > CODE : > > ------------------------------------------------------------------------------------------------ > > mod = ext_tools.ext_module('ravg_ext') > > > > test = zeros((5,5)) > > xlen = 5 > > ylen = 5 > > > > code = """ > > int x, y; > > > > for( x = 0; x < xlen; x++) > > { > > for( y = 0; y < ylen; y++) > > { > > test(x,y) = 2; > > } > > } > > > > """ > > > > ravg = ext_tools.ext_function('ravg', code, ['xlen', 'ylen', > > 'test']) > > mod.add_function(ravg) > > mod.compile(compiler = 'gcc') > > > > RESULT: > > ------------------------------------------------------------------------------------------------ > > C:\Temp\ravg_ext.cpp: In function `PyObject* ravg(PyObject*, > > PyObject*, PyObject*)': > > C:\Temp\ravg_ext.cpp:654: error: `test' cannot be used as a > > function > > C:\Temp\ravg_ext.cpp:641: warning: unused variable 'Ntest' > > C:\Temp\ravg_ext.cpp:642: warning: unused variable 'Stest' > > C:\Temp\ravg_ext.cpp:643: warning: unused variable 'Dtest' > > > > Traceback (most recent call last): > > File "C:\Temp\ravg_extension.py", line 132, in ? > > build_ravg_extension() > > File "C:\Temp\ravg_extension.py", line 125, in > > build_ravg_extension > > mod.compile(compiler = 'gcc') > > File "C:\Python24\Lib\site-packages\scipy\weave > > \ext_tools.py", line 365, in compile > > verbose = verbose, **kw) > > File "C:\Python24\Lib\site-packages\scipy\weave > > \build_tools.py", line 269, in build_extension > > setup(name = module_name, ext_modules = > > [ext],verbose=verb) > > File "C:\Python24\Lib\site-packages\numpy\distutils > > \core.py", line 184, in setup > > return old_setup(**new_attr) > > File "C:\Python24\Lib\distutils\core.py", line 166, in > > setup > > raise SystemExit, "error: " + str(msg) > > CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -IC: > > \Python24\lib\site-packages\scipy\weave -IC:\Python24\lib > > \site-packages\scipy\weave\scxx -IC:\Python24\lib > > \site-packages\numpy\core\include -IC:\Python24\include -IC: > > \Python24\PC -c C:\Temp\ravg_ext.cpp -o c:\docume~1\ssn > > \locals~1\temp\ssn\python24_intermediate > > \compiler_894ad5ed761bb51736c6d2b7872dc212\Releas > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From emanuele at relativita.com Mon Nov 17 11:38:06 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 17 Nov 2008 17:38:06 +0100 Subject: [SciPy-user] why signature of func in odeint != ode ? Message-ID: <49219DEE.5010405@relativita.com> Why signature of func in odeint is swapped (y,t0 -> t0,y) with respect to func in ode ? It would be nice to have same signature in order to be able to play with both of them more transparently. Details from docstrings: scipy.integrate.odeint: ---- Inputs: func -- func(y,t0,...) computes the derivative of y at t0. ---- scipy.integrate.ode: ---- where f and jac have the following signatures: def f(t,y[,arg1,..]): return ---- Is there a reason for this or is it just a little defect? 
Regards, Emanuele From Kristian.Sandberg at Colorado.EDU Mon Nov 17 11:42:21 2008 From: Kristian.Sandberg at Colorado.EDU (Kristian Hans Sandberg) Date: Mon, 17 Nov 2008 09:42:21 -0700 (MST) Subject: [SciPy-user] weave problem on ubuntu 8.10 Message-ID: <20081117094221.AFP24765@joker.int.colorado.edu> That's exactly the problem I wrote about a couple of days ago (with topic "weave/blitz problem"). It's listed as Ticket # 739 in the scipy trac system: http://www.scipy.org/scipy/scipy/ticket/739 This seems to happen with g++ version 4.3. As a temporary fix, I installed g++ version 4.2, and then it worked. I believe this problem will be more common as more people update to newer compilers. Kristian Kristian Sandberg, Ph.D. Dept. of Applied Mathematics and The Boulder Laboratory for 3-D Electron Microscopy of Cells University of Colorado at Boulder Campus Box 526 Boulder, CO 80309-0526, USA Phone: (303) 492 0593 (work) (303) 499 4404 (home) (303) 547 6290 (cell) Home page: http://amath.colorado.edu/faculty/sandberg ---- Original message ---- >Date: Mon, 17 Nov 2008 11:18:30 -0500 >From: "Bobby H. Braswell" >Subject: Re: [SciPy-user] weave problem on ubuntu 8.10 >To: SciPy Users List > > Hi, thanks for the reply. Here are the first lines: > > > In file included from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/applics.h:400, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecexpr.h:32, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecpick.cc:16, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecpick.h:293, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vector.h:449, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/tinyvec.h:430, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array-impl.h:44, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array.h:32, > from > /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:11: > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h: > In static member function `static long int > blitz::_bz_abs::apply(long int)': > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h:45: > error: `labs' is not a member of `std' > In file included from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/funcs.h:29, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/newet.h:29, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/et.h:27, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array-impl.h:2515, > from > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array.h:32, > from > /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:11: > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h: > In static member function `static int > blitz::Fn_abs::apply(int)': > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:509: > error: call of overloaded `abs(int&)' is ambiguous > /usr/include/c++/4.3/cmath:99: note: candidates are: > double std::abs(double) > /usr/include/c++/4.3/cmath:103: > note: float std::abs(float) > /usr/include/c++/4.3/cmath:107: > note: long double std::abs(long > double) > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h: > In static member function `static long int > blitz::Fn_abs::apply(long int)': > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:530: > error: `labs' is not a member of `std' > 
/home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp: > In function `char* find_type(PyObject*)': > /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:35: > warning: deprecated conversion from string constant > to `char*' > ... > > On Mon, 2008-11-17 at 17:08 +0100, So/ren Nielsen > wrote: > > Hi Rob, > > What are the first lines of your error message? > > I found the answer to my own question... I just > had to add type_converters = converters.blitz > under the ext_function. > > On Mon, Nov 17, 2008 at 4:18 PM, Bobby H. Braswell > wrote: > > Hi- > > By coincidence I am trying to get weave working > on a new system, I had previously been using it > successfully under OS X with the Fink version of > SciPy. I don't want to distract from Soren's > question but when I try his simple example (or > any of my own) using converters.blitz, I get a > very long error message, actually mostly > warnings, but it ends like this: > > >>> ravg = weave.inline(code, ['xlen', 'ylen', > 'test'], type_converters=converters.blitz, > compiler = 'gcc') > ...hundreds of lines... > Traceback (most recent call last): > File "", line 2, in > File > "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", > line 339, in inline > **kw) > File > "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", > line 447, in compile_function > verbose=verbose, **kw) > File > "/usr/lib/python2.5/site-packages/scipy/weave/ext_tools.py", > line 365, in compile > verbose = verbose, **kw) > File > "/usr/lib/python2.5/site-packages/scipy/weave/build_tools.py", > line 269, in build_extension > setup(name = module_name, ext_modules = > [ext],verbose=verb) > File > "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", > line 184, in setup > return old_setup(**new_attr) > File "/usr/lib/python2.5/distutils/core.py", > line 168, in setup > raise SystemExit, "error: " + str(msg) > scipy.weave.build_tools.CompileError: error: > Command "g++ -pthread -fno-strict-aliasing > -DNDEBUG -g -fwrapv -O2 -fPIC > -I/usr/lib/python2.5/site-packages/scipy/weave > -I/usr/lib/python2.5/site-packages/scipy/weave/scxx > -I/usr/lib/python2.5/site-packages/scipy/weave/blitz > -I/usr/lib/python2.5/site-packages/numpy/core/include > -I/usr/include/python2.5 -c > /home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.cpp > -o > /tmp/braswell/python25_intermediate/compiler_a9bbef2f14d61f7aa8f0ba6e068e18c2/home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.o" > failed with exit status 1 > > Sorry if this is more of a compiler/Ubuntu > problem, I'm not sure about that. I'd be > grateful to hear from someone who has had or not > had problems with Weave on Ubuntu 8.10. > > Thanks very much, > Rob > > On Mon, 2008-11-17 at 15:40 +0100, So/ren > Nielsen wrote: > > Can anyone explain why this fails? This piece > of code runs perfectly using weave.inline and > type_converters = blitz.. > > Obviously it can't handle 2D arrays anymore. > It's just a stupid example to illustrate that. 
> > Thanks, > Soren > > CODE : > ------------------------------------------------------------------------------------------------ > mod = ext_tools.ext_module('ravg_ext') > > test = zeros((5,5)) > xlen = 5 > ylen = 5 > > code = """ > int x, y; > > for( x = 0; x < xlen; x++) > { > for( y = 0; y < ylen; y++) > { > test(x,y) = 2; > } > } > > """ > > ravg = ext_tools.ext_function('ravg', code, > ['xlen', 'ylen', 'test']) > mod.add_function(ravg) > mod.compile(compiler = 'gcc') > > RESULT: > ------------------------------------------------------------------------------------------------ > C:\Temp\ravg_ext.cpp: In function `PyObject* > ravg(PyObject*, PyObject*, PyObject*)': > C:\Temp\ravg_ext.cpp:654: error: `test' cannot > be used as a function > C:\Temp\ravg_ext.cpp:641: warning: unused > variable 'Ntest' > C:\Temp\ravg_ext.cpp:642: warning: unused > variable 'Stest' > C:\Temp\ravg_ext.cpp:643: warning: unused > variable 'Dtest' > > Traceback (most recent call last): > File "C:\Temp\ravg_extension.py", line 132, > in ? > build_ravg_extension() > File "C:\Temp\ravg_extension.py", line 125, > in build_ravg_extension > mod.compile(compiler = 'gcc') > File > "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", > line 365, in compile > verbose = verbose, **kw) > File > "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", > line 269, in build_extension > setup(name = module_name, ext_modules = > [ext],verbose=verb) > File > "C:\Python24\Lib\site-packages\numpy\distutils\core.py", > line 184, in setup > return old_setup(**new_attr) > File "C:\Python24\Lib\distutils\core.py", > line 166, in setup > raise SystemExit, "error: " + str(msg) > CompileError: error: Command "g++ -mno-cygwin > -O2 -Wall > -IC:\Python24\lib\site-packages\scipy\weave > -IC:\Python24\lib\site-packages\scipy\weave\scxx > -IC:\Python24\lib\site-packages\numpy\core\include > -IC:\Python24\include -IC:\Python24\PC -c > C:\Temp\ravg_ext.cpp -o > c:\docume~1\ssn\locals~1\temp\ssn\python24_intermediate\compiler_894ad5ed761bb51736c6d2b7872dc212\Releas > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >________________ >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user From matthieu.brucher at gmail.com Mon Nov 17 11:57:06 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 17 Nov 2008 17:57:06 +0100 Subject: [SciPy-user] weave problem on ubuntu 8.10 In-Reply-To: <20081117094221.AFP24765@joker.int.colorado.edu> References: <20081117094221.AFP24765@joker.int.colorado.edu> Message-ID: Hi, The answer would be to replace labs by the correct C++ function abs() (as for floating point numbers). Matthieu 2008/11/17 Kristian Hans Sandberg : > That's exactly the problem I wrote about a couple of days ago (with topic "weave/blitz problem"). It's listed as Ticket # 739 in the scipy trac system: > > http://www.scipy.org/scipy/scipy/ticket/739 > > This seems to happen with g++ version 4.3. As a temporary fix, I installed g++ version 4.2, and then it worked. 
> > I believe this problem will be more common as more people update to newer compilers. > > Kristian > > Kristian Sandberg, Ph.D. > > Dept. of Applied Mathematics and > The Boulder Laboratory for 3-D Electron Microscopy of Cells > University of Colorado at Boulder > Campus Box 526 > Boulder, CO 80309-0526, USA > > Phone: (303) 492 0593 (work) > (303) 499 4404 (home) > (303) 547 6290 (cell) > > Home page: http://amath.colorado.edu/faculty/sandberg > > > ---- Original message ---- >>Date: Mon, 17 Nov 2008 11:18:30 -0500 >>From: "Bobby H. Braswell" >>Subject: Re: [SciPy-user] weave problem on ubuntu 8.10 >>To: SciPy Users List >> >> Hi, thanks for the reply. Here are the first lines: >> >> >> In file included from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/applics.h:400, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecexpr.h:32, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecpick.cc:16, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecpick.h:293, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vector.h:449, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/tinyvec.h:430, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array-impl.h:44, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array.h:32, >> from >> /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:11: >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h: >> In static member function `static long int >> blitz::_bz_abs::apply(long int)': >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h:45: >> error: `labs' is not a member of `std' >> In file included from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/funcs.h:29, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/newet.h:29, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/et.h:27, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array-impl.h:2515, >> from >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array.h:32, >> from >> /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:11: >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h: >> In static member function `static int >> blitz::Fn_abs::apply(int)': >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:509: >> error: call of overloaded `abs(int&)' is ambiguous >> /usr/include/c++/4.3/cmath:99: note: candidates are: >> double std::abs(double) >> /usr/include/c++/4.3/cmath:103: >> note: float std::abs(float) >> /usr/include/c++/4.3/cmath:107: >> note: long double std::abs(long >> double) >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h: >> In static member function `static long int >> blitz::Fn_abs::apply(long int)': >> /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:530: >> error: `labs' is not a member of `std' >> /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp: >> In function `char* find_type(PyObject*)': >> /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:35: >> warning: deprecated conversion from string constant >> to `char*' >> ... >> >> On Mon, 2008-11-17 at 17:08 +0100, So/ren Nielsen >> wrote: >> >> Hi Rob, >> >> What are the first lines of your error message? >> >> I found the answer to my own question... 
I just >> had to add type_converters = converters.blitz >> under the ext_function. >> >> On Mon, Nov 17, 2008 at 4:18 PM, Bobby H. Braswell >> wrote: >> >> Hi- >> >> By coincidence I am trying to get weave working >> on a new system, I had previously been using it >> successfully under OS X with the Fink version of >> SciPy. I don't want to distract from Soren's >> question but when I try his simple example (or >> any of my own) using converters.blitz, I get a >> very long error message, actually mostly >> warnings, but it ends like this: >> >> >>> ravg = weave.inline(code, ['xlen', 'ylen', >> 'test'], type_converters=converters.blitz, >> compiler = 'gcc') >> ...hundreds of lines... >> Traceback (most recent call last): >> File "", line 2, in >> File >> "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", >> line 339, in inline >> **kw) >> File >> "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", >> line 447, in compile_function >> verbose=verbose, **kw) >> File >> "/usr/lib/python2.5/site-packages/scipy/weave/ext_tools.py", >> line 365, in compile >> verbose = verbose, **kw) >> File >> "/usr/lib/python2.5/site-packages/scipy/weave/build_tools.py", >> line 269, in build_extension >> setup(name = module_name, ext_modules = >> [ext],verbose=verb) >> File >> "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", >> line 184, in setup >> return old_setup(**new_attr) >> File "/usr/lib/python2.5/distutils/core.py", >> line 168, in setup >> raise SystemExit, "error: " + str(msg) >> scipy.weave.build_tools.CompileError: error: >> Command "g++ -pthread -fno-strict-aliasing >> -DNDEBUG -g -fwrapv -O2 -fPIC >> -I/usr/lib/python2.5/site-packages/scipy/weave >> -I/usr/lib/python2.5/site-packages/scipy/weave/scxx >> -I/usr/lib/python2.5/site-packages/scipy/weave/blitz >> -I/usr/lib/python2.5/site-packages/numpy/core/include >> -I/usr/include/python2.5 -c >> /home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.cpp >> -o >> /tmp/braswell/python25_intermediate/compiler_a9bbef2f14d61f7aa8f0ba6e068e18c2/home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.o" >> failed with exit status 1 >> >> Sorry if this is more of a compiler/Ubuntu >> problem, I'm not sure about that. I'd be >> grateful to hear from someone who has had or not >> had problems with Weave on Ubuntu 8.10. >> >> Thanks very much, >> Rob >> >> On Mon, 2008-11-17 at 15:40 +0100, So/ren >> Nielsen wrote: >> >> Can anyone explain why this fails? This piece >> of code runs perfectly using weave.inline and >> type_converters = blitz.. >> >> Obviously it can't handle 2D arrays anymore. >> It's just a stupid example to illustrate that. 
>> >> Thanks, >> Soren >> >> CODE : >> ------------------------------------------------------------------------------------------------ >> mod = ext_tools.ext_module('ravg_ext') >> >> test = zeros((5,5)) >> xlen = 5 >> ylen = 5 >> >> code = """ >> int x, y; >> >> for( x = 0; x < xlen; x++) >> { >> for( y = 0; y < ylen; y++) >> { >> test(x,y) = 2; >> } >> } >> >> """ >> >> ravg = ext_tools.ext_function('ravg', code, >> ['xlen', 'ylen', 'test']) >> mod.add_function(ravg) >> mod.compile(compiler = 'gcc') >> >> RESULT: >> ------------------------------------------------------------------------------------------------ >> C:\Temp\ravg_ext.cpp: In function `PyObject* >> ravg(PyObject*, PyObject*, PyObject*)': >> C:\Temp\ravg_ext.cpp:654: error: `test' cannot >> be used as a function >> C:\Temp\ravg_ext.cpp:641: warning: unused >> variable 'Ntest' >> C:\Temp\ravg_ext.cpp:642: warning: unused >> variable 'Stest' >> C:\Temp\ravg_ext.cpp:643: warning: unused >> variable 'Dtest' >> >> Traceback (most recent call last): >> File "C:\Temp\ravg_extension.py", line 132, >> in ? >> build_ravg_extension() >> File "C:\Temp\ravg_extension.py", line 125, >> in build_ravg_extension >> mod.compile(compiler = 'gcc') >> File >> "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", >> line 365, in compile >> verbose = verbose, **kw) >> File >> "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", >> line 269, in build_extension >> setup(name = module_name, ext_modules = >> [ext],verbose=verb) >> File >> "C:\Python24\Lib\site-packages\numpy\distutils\core.py", >> line 184, in setup >> return old_setup(**new_attr) >> File "C:\Python24\Lib\distutils\core.py", >> line 166, in setup >> raise SystemExit, "error: " + str(msg) >> CompileError: error: Command "g++ -mno-cygwin >> -O2 -Wall >> -IC:\Python24\lib\site-packages\scipy\weave >> -IC:\Python24\lib\site-packages\scipy\weave\scxx >> -IC:\Python24\lib\site-packages\numpy\core\include >> -IC:\Python24\include -IC:\Python24\PC -c >> C:\Temp\ravg_ext.cpp -o >> c:\docume~1\ssn\locals~1\temp\ssn\python24_intermediate\compiler_894ad5ed761bb51736c6d2b7872dc212\Releas >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >>________________ >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.org >>http://projects.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From rob.braswell at unh.edu Mon Nov 17 12:04:54 2008 From: rob.braswell at unh.edu (Bobby H. 
Braswell) Date: Mon, 17 Nov 2008 12:04:54 -0500 Subject: [SciPy-user] weave problem on ubuntu 8.10 In-Reply-To: <20081117094221.AFP24765@joker.int.colorado.edu> References: <20081117094221.AFP24765@joker.int.colorado.edu> Message-ID: <1226941494.32694.77.camel@waage.sr.unh.edu> I'm sorry I missed your previous message. Yes, indeed g++-4.2 works just fine. Thanks very much, Rob On Mon, 2008-11-17 at 09:42 -0700, Kristian Hans Sandberg wrote: > That's exactly the problem I wrote about a couple of days ago (with topic "weave/blitz problem"). It's listed as Ticket # 739 in the scipy trac system: > > http://www.scipy.org/scipy/scipy/ticket/739 > > This seems to happen with g++ version 4.3. As a temporary fix, I installed g++ version 4.2, and then it worked. > > I believe this problem will be more common as more people update to newer compilers. > > Kristian > > Kristian Sandberg, Ph.D. > > Dept. of Applied Mathematics and > The Boulder Laboratory for 3-D Electron Microscopy of Cells > University of Colorado at Boulder > Campus Box 526 > Boulder, CO 80309-0526, USA > > Phone: (303) 492 0593 (work) > (303) 499 4404 (home) > (303) 547 6290 (cell) > > Home page: http://amath.colorado.edu/faculty/sandberg > > > ---- Original message ---- > >Date: Mon, 17 Nov 2008 11:18:30 -0500 > >From: "Bobby H. Braswell" > >Subject: Re: [SciPy-user] weave problem on ubuntu 8.10 > >To: SciPy Users List > > > > Hi, thanks for the reply. Here are the first lines: > > > > > > In file included from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/applics.h:400, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecexpr.h:32, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecpick.cc:16, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vecpick.h:293, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/vector.h:449, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/tinyvec.h:430, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array-impl.h:44, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array.h:32, > > from > > /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:11: > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h: > > In static member function `static long int > > blitz::_bz_abs::apply(long int)': > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h:45: > > error: `labs' is not a member of `std' > > In file included from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/funcs.h:29, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/newet.h:29, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array/et.h:27, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array-impl.h:2515, > > from > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/array.h:32, > > from > > /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:11: > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h: > > In static member function `static int > > blitz::Fn_abs::apply(int)': > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:509: > > error: call of overloaded `abs(int&)' is ambiguous > > /usr/include/c++/4.3/cmath:99: note: candidates are: > > double std::abs(double) > > /usr/include/c++/4.3/cmath:103: > > note: float std::abs(float) > > /usr/include/c++/4.3/cmath:107: > > note: long 
double std::abs(long > > double) > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h: > > In static member function `static long int > > blitz::Fn_abs::apply(long int)': > > /usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:530: > > error: `labs' is not a member of `std' > > /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp: > > In function `char* find_type(PyObject*)': > > /home/braswell/.python25_compiled/sc_f935818f52299953943b3b48fe3685483.cpp:35: > > warning: deprecated conversion from string constant > > to `char*' > > ... > > > > On Mon, 2008-11-17 at 17:08 +0100, So/ren Nielsen > > wrote: > > > > Hi Rob, > > > > What are the first lines of your error message? > > > > I found the answer to my own question... I just > > had to add type_converters = converters.blitz > > under the ext_function. > > > > On Mon, Nov 17, 2008 at 4:18 PM, Bobby H. Braswell > > wrote: > > > > Hi- > > > > By coincidence I am trying to get weave working > > on a new system, I had previously been using it > > successfully under OS X with the Fink version of > > SciPy. I don't want to distract from Soren's > > question but when I try his simple example (or > > any of my own) using converters.blitz, I get a > > very long error message, actually mostly > > warnings, but it ends like this: > > > > >>> ravg = weave.inline(code, ['xlen', 'ylen', > > 'test'], type_converters=converters.blitz, > > compiler = 'gcc') > > ...hundreds of lines... > > Traceback (most recent call last): > > File "", line 2, in > > File > > "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", > > line 339, in inline > > **kw) > > File > > "/usr/lib/python2.5/site-packages/scipy/weave/inline_tools.py", > > line 447, in compile_function > > verbose=verbose, **kw) > > File > > "/usr/lib/python2.5/site-packages/scipy/weave/ext_tools.py", > > line 365, in compile > > verbose = verbose, **kw) > > File > > "/usr/lib/python2.5/site-packages/scipy/weave/build_tools.py", > > line 269, in build_extension > > setup(name = module_name, ext_modules = > > [ext],verbose=verb) > > File > > "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", > > line 184, in setup > > return old_setup(**new_attr) > > File "/usr/lib/python2.5/distutils/core.py", > > line 168, in setup > > raise SystemExit, "error: " + str(msg) > > scipy.weave.build_tools.CompileError: error: > > Command "g++ -pthread -fno-strict-aliasing > > -DNDEBUG -g -fwrapv -O2 -fPIC > > -I/usr/lib/python2.5/site-packages/scipy/weave > > -I/usr/lib/python2.5/site-packages/scipy/weave/scxx > > -I/usr/lib/python2.5/site-packages/scipy/weave/blitz > > -I/usr/lib/python2.5/site-packages/numpy/core/include > > -I/usr/include/python2.5 -c > > /home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.cpp > > -o > > /tmp/braswell/python25_intermediate/compiler_a9bbef2f14d61f7aa8f0ba6e068e18c2/home/braswell/.python25_compiled/sc_f8b4f30889557b51310ac43eda9472b30.o" > > failed with exit status 1 > > > > Sorry if this is more of a compiler/Ubuntu > > problem, I'm not sure about that. I'd be > > grateful to hear from someone who has had or not > > had problems with Weave on Ubuntu 8.10. > > > > Thanks very much, > > Rob > > > > On Mon, 2008-11-17 at 15:40 +0100, So/ren > > Nielsen wrote: > > > > Can anyone explain why this fails? This piece > > of code runs perfectly using weave.inline and > > type_converters = blitz.. > > > > Obviously it can't handle 2D arrays anymore. > > It's just a stupid example to illustrate that. 
> > > > Thanks, > > Soren > > > > CODE : > > ------------------------------------------------------------------------------------------------ > > mod = ext_tools.ext_module('ravg_ext') > > > > test = zeros((5,5)) > > xlen = 5 > > ylen = 5 > > > > code = """ > > int x, y; > > > > for( x = 0; x < xlen; x++) > > { > > for( y = 0; y < ylen; y++) > > { > > test(x,y) = 2; > > } > > } > > > > """ > > > > ravg = ext_tools.ext_function('ravg', code, > > ['xlen', 'ylen', 'test']) > > mod.add_function(ravg) > > mod.compile(compiler = 'gcc') > > > > RESULT: > > ------------------------------------------------------------------------------------------------ > > C:\Temp\ravg_ext.cpp: In function `PyObject* > > ravg(PyObject*, PyObject*, PyObject*)': > > C:\Temp\ravg_ext.cpp:654: error: `test' cannot > > be used as a function > > C:\Temp\ravg_ext.cpp:641: warning: unused > > variable 'Ntest' > > C:\Temp\ravg_ext.cpp:642: warning: unused > > variable 'Stest' > > C:\Temp\ravg_ext.cpp:643: warning: unused > > variable 'Dtest' > > > > Traceback (most recent call last): > > File "C:\Temp\ravg_extension.py", line 132, > > in ? > > build_ravg_extension() > > File "C:\Temp\ravg_extension.py", line 125, > > in build_ravg_extension > > mod.compile(compiler = 'gcc') > > File > > "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", > > line 365, in compile > > verbose = verbose, **kw) > > File > > "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", > > line 269, in build_extension > > setup(name = module_name, ext_modules = > > [ext],verbose=verb) > > File > > "C:\Python24\Lib\site-packages\numpy\distutils\core.py", > > line 184, in setup > > return old_setup(**new_attr) > > File "C:\Python24\Lib\distutils\core.py", > > line 166, in setup > > raise SystemExit, "error: " + str(msg) > > CompileError: error: Command "g++ -mno-cygwin > > -O2 -Wall > > -IC:\Python24\lib\site-packages\scipy\weave > > -IC:\Python24\lib\site-packages\scipy\weave\scxx > > -IC:\Python24\lib\site-packages\numpy\core\include > > -IC:\Python24\include -IC:\Python24\PC -c > > C:\Temp\ravg_ext.cpp -o > > c:\docume~1\ssn\locals~1\temp\ssn\python24_intermediate\compiler_894ad5ed761bb51736c6d2b7872dc212\Releas > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > >________________ > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Nov 17 14:03:33 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 17 Nov 2008 19:03:33 +0000 (UTC) Subject: [SciPy-user] why signature of func in odeint != ode ? References: <49219DEE.5010405@relativita.com> Message-ID: Mon, 17 Nov 2008 17:38:06 +0100, Emanuele Olivetti wrote: > Why signature of func in odeint is swapped (y,t0 -> t0,y) with respect > to func in ode ? 
Legacy -- I'd guess the two interfaces were written by different authors. I'm not sure if this API break can be made at the moment. One possible way forward could be to deprecate both `ode` and `odeint` and write a new unified interface for LSODA, *VODE, etc.

-- Pauli Virtanen
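[Until such a unified interface exists, a one-line shim lets the same right-hand side serve both conventions. A minimal sketch; the function name and the toy system are illustrative:]

    import numpy as np
    from scipy.integrate import ode, odeint

    def rhs(y, t):                        # odeint convention: f(y, t)
        return -0.5 * np.asarray(y)

    ys = odeint(rhs, [1.0], [0.0, 1.0])   # use directly with odeint

    r = ode(lambda t, y: rhs(y, t))       # swap arguments for ode's f(t, y)
    r.set_integrator('vode')
    r.set_initial_value([1.0], 0.0)
    r.integrate(1.0)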
From timmichelsen at gmx-topmail.de Mon Nov 17 15:34:29 2008
From: timmichelsen at gmx-topmail.de (Timmie)
Date: Mon, 17 Nov 2008 20:34:29 +0000 (UTC)
Subject: [SciPy-user] how to get only complete years from series?
Message-ID:

Hello,
I am using the scikit.timeseries to evaluate a long-term measurement data set.

How can I extract those years which have complete measurements?

In the below, years 2004 & 2008 are not complete. Is there a generic possibility that all incomplete years get masked?

Thanks & regards,
Timmie

###code
import numpy as np
import numpy.ma as ma
import scikits.timeseries as ts

data = np.arange(0, 40800)
start_dt = ts.Date(freq='H', year=2004, month=3, day=1, hour=0)
s_all = ts.time_series(data, freq='H', start_date=start_dt)

From rowen at u.washington.edu Mon Nov 17 16:25:21 2008
From: rowen at u.washington.edu (Russell E. Owen)
Date: Mon, 17 Nov 2008 13:25:21 -0800
Subject: [SciPy-user] PyGSL with numpy?
Message-ID:

I'm part of a project that is doing scientific computing with a mix of C++ and python (all the heavy lifting being done in C++ for speed). We need a scientific library that can be used in C++ (which rules out scipy) and preferably can also be used in python. We're considering GSL, which has a python wrapper (PyGSL), the latter of which apparently works with numpy.

Has anyone tried this? How is the integration? Are there other solutions we should be considering?

Regards,

-- Russell

From rob.clewley at gmail.com Mon Nov 17 16:29:53 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Mon, 17 Nov 2008 16:29:53 -0500
Subject: [SciPy-user] why signature of func in odeint != ode ?
In-Reply-To: References: <49219DEE.5010405@relativita.com> Message-ID:

On Mon, Nov 17, 2008 at 2:03 PM, Pauli Virtanen wrote:
> I'm not sure if this API break can be made at the moment. One possible way
> forward could be to deprecate both `ode` and `odeint` and write a new
> unified interface for LSODA, *VODE, etc.

I believe Gabriel Gellner had started trying to do this, but his project seems not to have any code online when I just checked.

https://launchpad.net/pyode

Fixing this with a unified interface would be an important milestone towards scipy 1.0

-Rob

From cohen at lpta.in2p3.fr Mon Nov 17 17:10:07 2008
From: cohen at lpta.in2p3.fr (Cohen-Tanugi Johann)
Date: Mon, 17 Nov 2008 23:10:07 +0100
Subject: [SciPy-user] why signature of func in odeint != ode ?
In-Reply-To: References: <49219DEE.5010405@relativita.com> Message-ID: <4921EBBF.1060905@lpta.in2p3.fr>

hi, another option that comes immediately to mind is to use SWIG to expose your C++ code to python. That is what we are doing routinely in my work.
HTH,
Johann

Rob Clewley wrote:
> On Mon, Nov 17, 2008 at 2:03 PM, Pauli Virtanen wrote:
> >> I'm not sure if this API break can be made at the moment. One possible way
> >> forward could be to deprecate both `ode` and `odeint` and write a new
> >> unified interface for LSODA, *VODE, etc.
>
> I believe Gabriel Gellner had started trying to do this, but his
> project seems not to have any code online when I just checked.
>
> > https://launchpad.net/pyode
>
> Fixing this with a unified interface would be an important milestone
> towards scipy 1.0
>
> -Rob
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From cohen at lpta.in2p3.fr Mon Nov 17 17:11:51 2008
From: cohen at lpta.in2p3.fr (Cohen-Tanugi Johann)
Date: Mon, 17 Nov 2008 23:11:51 +0100
Subject: [SciPy-user] PyGSL with numpy?
In-Reply-To: References: Message-ID: <4921EC27.8020505@lpta.in2p3.fr>

hi, another option that comes immediately to mind is to use SWIG to expose your C++ code to python. That is what we are doing routinely in my work.
HTH,
Johann

Russell E. Owen wrote:
> I'm part of a project that is doing scientific computing with a mix of
> C++ and python (all the heavy lifting being done in C++ for speed). We
> need a scientific library that can be used in C++ (which rules out scipy)
> and preferably can also be used in python. We're considering GSL, which
> has a python wrapper (PyGSL), the latter of which apparently works with
> numpy.
>
> Has anyone tried this? How is the integration? Are there other solutions
> we should be considering?
>
> Regards,
>
> -- Russell
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From cohen at lpta.in2p3.fr Mon Nov 17 17:12:05 2008
From: cohen at lpta.in2p3.fr (Cohen-Tanugi Johann)
Date: Mon, 17 Nov 2008 23:12:05 +0100
Subject: [SciPy-user] PyGSL with numpy?
In-Reply-To: References: Message-ID: <4921EC35.3090207@lpta.in2p3.fr>

sorry wrong thread :(
JCT

Russell E. Owen wrote:
> I'm part of a project that is doing scientific computing with a mix of
> C++ and python (all the heavy lifting being done in C++ for speed). We
> need a scientific library that can be used in C++ (which rules out scipy)
> and preferably can also be used in python. We're considering GSL, which
> has a python wrapper (PyGSL), the latter of which apparently works with
> numpy.
>
> Has anyone tried this? How is the integration? Are there other solutions
> we should be considering?
>
> Regards,
>
> -- Russell
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From fperez.net at gmail.com Mon Nov 17 21:13:10 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 17 Nov 2008 18:13:10 -0800
Subject: [SciPy-user] weave problem on ubuntu 8.10
In-Reply-To: <20081117094221.AFP24765@joker.int.colorado.edu> References: <20081117094221.AFP24765@joker.int.colorado.edu> Message-ID:

Hi Kristian,

On Mon, Nov 17, 2008 at 8:42 AM, Kristian Hans Sandberg wrote:
> That's exactly the problem I wrote about a couple of days ago (with topic "weave/blitz problem"). It's listed as Ticket # 739 in the scipy trac system:
>
> http://www.scipy.org/scipy/scipy/ticket/739
>
> This seems to happen with g++ version 4.3. As a temporary fix, I installed g++ version 4.2, and then it worked.

Thanks, I added this as a note to the ticket so others find the right workaround, until a fix goes in.

Cheers,
f

From emanuele at relativita.com Tue Nov 18 06:24:56 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Tue, 18 Nov 2008 12:24:56 +0100
Subject: [SciPy-user] why signature of func in odeint != ode ?
In-Reply-To: References: <49219DEE.5010405@relativita.com> Message-ID: <4922A608.4030703@relativita.com> Rob Clewley wrote: > On Mon, Nov 17, 2008 at 2:03 PM, Pauli Virtanen wrote: > >> I'm not sure if this API break can be made the moment. One possible way >> forward could be to deprecate both `ode` and `odeint` and write a new >> unified interface for LSODA, *VODE, etc. >> >> > > I believe Gabriel Gellner had started trying to do this, but his > project seems not to have any code online when I just checked. > > https://launchpad.net/pyode > > Fixing this with a unified interface would be an important milestone > towards scipy 1.0 > > So I assume it is OK to file a ticket to the SciPy trac (792): http://scipy.org/scipy/scipy/ticket/792 Please review it and add any missing info. Regards, Emanuele From nwagner at iam.uni-stuttgart.de Tue Nov 18 07:51:04 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 18 Nov 2008 13:51:04 +0100 Subject: [SciPy-user] a sequence of formats in savetxt Message-ID: Hi all, How do I specify a sequence of formats in savetxt ? savetxt('f06.dat',F,fmt='%10.5f %i4 4%10.5f') doesn't work F is an (m,5) array. The entries of the second column should be stored as integers. Nils From scott.sinclair.za at gmail.com Tue Nov 18 08:33:35 2008 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Tue, 18 Nov 2008 15:33:35 +0200 Subject: [SciPy-user] a sequence of formats in savetxt In-Reply-To: References: Message-ID: <6a17e9ee0811180533v79795f8btc7126a0d1f452f29@mail.gmail.com> 2008/11/18 Nils Wagner : > How do I specify a sequence of formats in savetxt ? > > savetxt('f06.dat',F,fmt='%10.5f %i4 4%10.5f') doesn't work > > F is an (m,5) array. > The entries of the second column should be stored as > integers. Your format string will need 5 different format specifiers if F has 5 columns. >>> savetxt('f06.dat', F, fmt='%10.5f %4d %10.5f %10.5f %10.5f') The above should work with your (m, 5) array, giving you an integer in the second column of each line in the file, with all other columns being floating point. The documentation for savetxt is at http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html Cheers, Scott From pgmdevlist at gmail.com Tue Nov 18 11:15:07 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 18 Nov 2008 11:15:07 -0500 Subject: [SciPy-user] how to get only complete years from series? In-Reply-To: References: Message-ID: Timmie, There's no generic function to perform what you want as it'll depend on the frequency. What you can do is: 1. get a list of years >>> singleyears = set(s_all.years) 2. for each year, check what are the first and last days of the year: >>> firstandlast = [tuple([year] +s_all[s_all.years==year].yeardays[[0,-1]].tolist()) for year in singleyears] That gives you a list of tuples (year, first day, last day) 3. find the years for which the first day is strictly larger than 1 and the last strictly lower than 365. >>> maskyears = [y for (y,f,l) in firstandlast if f>1 or l<365] 4. Mask the corresponding years >>> for y in maskyears: >>> s_all[s_all.years==y] = ma.masked That's far from efficient and rather ugly, but that should give you a generic idea. Let me know how it goes. P. On Nov 17, 2008, at 3:34 PM, Timmie wrote: > Hello, > I am unsing the scikit.timeseries to evaluate a long-term > measurement data set. > > How can I extract those years, which have complete measurements? > > In the below, years 2004 & 2008 are not complete. 
> Is there a generic possibility that all incomplete years get masked? > > Thanks & regards, > Timmie > > ###code > > import numpy as np > import numpy.ma as ma > import scikits.timeseries as ts > > data = np.arange(0, 40800) > start_dt = ts.Date(freq='H', year=2004, month=3, day=1, hour=0) > s_all = ts.time_series(data, freq='H', start_date=start_dt) > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From pgmdevlist at gmail.com Tue Nov 18 14:23:42 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 18 Nov 2008 14:23:42 -0500 Subject: [SciPy-user] how to get only complete years from series? In-Reply-To: References: Message-ID: Timmie, There's smarter than the previous answer, if you're not afraid of temporary arrays. Here's a copy-pasted version, commented. Let me know how it goes. Cheers P. #### BELOW A SAMPLE SCRIPT THAT MAY ILLUSTRATE #### #!/usr/bin/env python # -*- coding: utf-8 -*- import datetime import scikits.timeseries as ts import numpy as np #import numpy as np import numpy.ma as ma import scikits.timeseries as ts data = np.arange(0, 40800) start_dt = ts.Date(freq='H', year=2004, month=3, day=1, hour=0) s_all = ts.time_series(data, freq='H', start_date=start_dt) # Convert to a (5,24*366) annual series: each row is a year, each column an hour # Because of lapse years, we have 24*366 cols, not 24*365 a_s_all = s_all.convert('A') # If the first column (the first date) is masked, mask the row. a_s_all[a_s_all[:,0].mask] = ma.masked # If the column -25 (last hour of 12/31 or 12/30) is masked, masked the column a_s_all[a_s_all[:,-25].mask] = ma.masked # Make a new series from the annual series. # We can't us convert because the annual series is 2D. # Instead, we create a new series starting at the first date of the annual series, # converted to the correct frequency (s_all.freq). # As the method asfreq defaults to END, we need to force 'START' for relation # (check the docstring of asfreq). starting_date = a_s_all.dates[0].asfreq(s_all.freq, relation='START') # For the data, we can't use a_s_all.ravel() directly because a_s_all is 2D, # but we only need the data actually, not the dates. s_new = ts.time_series(a_s_all._series.ravel(), start_date=starting_date) # And if you want, you can force the starting and ending dates of this new series # to the initial ones s_mod = ts.align_with(s_all, s_new) On Nov 17, 2008, at 3:34 PM, Timmie wrote: > Hello, > I am unsing the scikit.timeseries to evaluate a long-term > measurement data set. > > How can I extract those years, which have complete measurements? > > In the below, years 2004 & 2008 are not complete. > Is there a generic possibility that all incomplete years get masked? 
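For a quick pre-check before either recipe above, here is a more compact completeness test -- only a sketch, untested, and it assumes the hourly series has no internal gaps, so that a year is complete exactly when it contains 24*365 (or 24*366 for a leap year) entries:

import calendar
import numpy.ma as ma

# mask every calendar year that has fewer hourly entries than expected
for y in set(s_all.years):
    expected = 24 * (366 if calendar.isleap(y) else 365)
    if (s_all.years == y).sum() < expected:
        s_all[s_all.years == y] = ma.masked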
> > Thanks & regards, > Timmie > > ###code > > import numpy as np > import numpy.ma as ma > import scikits.timeseries as ts > > data = np.arange(0, 40800) > start_dt = ts.Date(freq='H', year=2004, month=3, day=1, hour=0) > s_all = ts.time_series(data, freq='H', start_date=start_dt) > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From mandli at amath.washington.edu Tue Nov 18 16:11:09 2008 From: mandli at amath.washington.edu (Kyle Mandli) Date: Tue, 18 Nov 2008 21:11:09 +0000 (UTC) Subject: [SciPy-user] F2PY: Problems after upgrading to Python2.6 References: <20081104184511.668b238f@mpi-magdeburg.mpg.de> Message-ID: Benjamin Kern mpi-magdeburg.mpg.de> writes: > C File hello.f > subroutine foo (a) > integer a > print*, "Hello from Fortran!" > print*, "a=",a > end > I have problems executing this from python, i.e. > > >>> import hello > >>> print hello.__doc__ > This module 'hello' is auto-generated with f2py (version:2_5968). > Functions: > foo(a) > . > >>> print hello.foo.__doc__ > foo - Function signature: > foo(a) > Required arguments: > a : input int > > >>> hello.foo(4) > Traceback (most recent call last): > File "", line 1, in > RuntimeError: more argument specifiers than keyword list entries > (remaining format:'|:hello.foo') I am having the same problem. Seemed to happen after I upgraded to python 2.6 and built from numpy-svn and scipy-svn. Any ideas on what's causing this to happen? There was also a post on tbe development list that seems to be the same type of problem. http://thread.gmane.org/gmane.comp.python.scientific.devel/9098 - Kyle From nwagner at iam.uni-stuttgart.de Wed Nov 19 02:32:21 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 19 Nov 2008 08:32:21 +0100 Subject: [SciPy-user] array manipulation Message-ID: Hi all, How can I insert a row/column in an existing array ? Nils From robert.kern at gmail.com Wed Nov 19 02:39:34 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Nov 2008 01:39:34 -0600 Subject: [SciPy-user] array manipulation In-Reply-To: References: Message-ID: <3d375d730811182339h3b2c6daaq45ee222ba9bb58c9@mail.gmail.com> On Wed, Nov 19, 2008 at 01:32, Nils Wagner wrote: > Hi all, > > How can I insert a row/column in an existing array ? You can't. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fredmfp at gmail.com Wed Nov 19 02:53:29 2008 From: fredmfp at gmail.com (fred) Date: Wed, 19 Nov 2008 08:53:29 +0100 Subject: [SciPy-user] array manipulation In-Reply-To: <3d375d730811182339h3b2c6daaq45ee222ba9bb58c9@mail.gmail.com> References: <3d375d730811182339h3b2c6daaq45ee222ba9bb58c9@mail.gmail.com> Message-ID: <4923C5F9.1020706@gmail.com> Robert Kern a ?crit : > On Wed, Nov 19, 2008 at 01:32, Nils Wagner wrote: >> Hi all, >> >> How can I insert a row/column in an existing array ? > > You can't. And numpy.insert? Return a new array with values inserted along the given axis before the given indices If axis is None, then ravel the array first. The obj argument can be an integer, a slice, or a sequence of integers. Examples -------- >>> a = array([[1,2,3], ... [4,5,6], ... 
[7,8,9]]) >>> insert(a, [1,2], [[4],[5]], axis=0) array([[1, 2, 3], [4, 4, 4], [4, 5, 6], [5, 5, 5], [7, 8, 9]]) Cheers, -- Fred From nwagner at iam.uni-stuttgart.de Wed Nov 19 02:57:21 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 19 Nov 2008 08:57:21 +0100 Subject: [SciPy-user] array manipulation In-Reply-To: <3d375d730811182339h3b2c6daaq45ee222ba9bb58c9@mail.gmail.com> References: <3d375d730811182339h3b2c6daaq45ee222ba9bb58c9@mail.gmail.com> Message-ID: On Wed, 19 Nov 2008 01:39:34 -0600 "Robert Kern" wrote: > On Wed, Nov 19, 2008 at 01:32, Nils Wagner > wrote: >> Hi all, >> >> How can I insert a row/column in an existing array ? > > You can't. How about that ? >>> A = ones((10,5)) >>> A array([[ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.]]) >>> A=insert(A,[1],20,axis=1) >>> A array([[ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.], [ 1., 20., 1., 1., 1., 1.]]) >>> A[:,1] = random.rand(10) Nils From robert.kern at gmail.com Wed Nov 19 03:01:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Nov 2008 02:01:18 -0600 Subject: [SciPy-user] array manipulation In-Reply-To: <4923C5F9.1020706@gmail.com> References: <3d375d730811182339h3b2c6daaq45ee222ba9bb58c9@mail.gmail.com> <4923C5F9.1020706@gmail.com> Message-ID: <3d375d730811190001s3030bf56n48956f4bfdbda609@mail.gmail.com> On Wed, Nov 19, 2008 at 01:53, fred wrote: > Robert Kern a ?crit : >> On Wed, Nov 19, 2008 at 01:32, Nils Wagner wrote: >>> Hi all, >>> >>> How can I insert a row/column in an existing array ? >> >> You can't. > And numpy.insert? > > Return a new array with values inserted along the given axis > before the given indices I assumed that by "existing array," he didn't want "a new array." -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Wed Nov 19 08:43:44 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 19 Nov 2008 08:43:44 -0500 Subject: [SciPy-user] optimizer for complex? Message-ID: Do any of the optimizers (n-dim nonlinear minimizer) handle complex values? It seems not, I seem to get good results by converting to a real vector That is, instead of calling fmin (f, x0=cmplx_array) convert x0 to a real array (real(x0), imag (x0), real (x1), imag (x1)....) And convert back inside the function f from real to complex. From nwagner at iam.uni-stuttgart.de Wed Nov 19 08:56:03 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 19 Nov 2008 14:56:03 +0100 Subject: [SciPy-user] optimizer for complex? In-Reply-To: References: Message-ID: On Wed, 19 Nov 2008 08:43:44 -0500 Neal Becker wrote: > Do any of the optimizers (n-dim nonlinear minimizer) >handle complex values? It seems not, I seem to get good >results by converting to a real vector > > That is, instead of calling fmin (f, x0=cmplx_array) > > convert x0 to a real array (real(x0), imag (x0), real >(x1), imag (x1)....) > > And convert back inside the function f from real to >complex. 
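To make the packing trick concrete, here is a minimal sketch with a toy objective (the pack/unpack helpers and the quadratic f below are illustrative stand-ins, not Neal's actual code):

import numpy as np
from scipy.optimize import fmin

def pack(z):
    # complex vector -> real vector [re0, im0, re1, im1, ...]
    r = np.empty(2 * len(z))
    r[0::2] = z.real
    r[1::2] = z.imag
    return r

def unpack(r):
    # inverse of pack: real vector -> complex vector
    return r[0::2] + 1j * r[1::2]

# toy objective, minimized at z = [1+2j, 3-1j]
target = np.array([1 + 2j, 3 - 1j])
def f(z):
    return np.sum(np.abs(z - target) ** 2)

def f_real(r):
    # real-valued wrapper that the optimizer actually sees
    return f(unpack(r))

x0 = np.zeros(2, dtype=complex)
print unpack(fmin(f_real, pack(x0)))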
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

Just curious. Where does that problem appear?

Nils

From ndbecker2 at gmail.com Wed Nov 19 08:58:57 2008
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 19 Nov 2008 08:58:57 -0500
Subject: [SciPy-user] optimizer for complex?
References:
Message-ID:

Nils Wagner wrote:

> On Wed, 19 Nov 2008 08:43:44 -0500
> Neal Becker wrote:
>> Do any of the optimizers (n-dim nonlinear minimizer)
>> handle complex values? It seems not, I seem to get good
>> results by converting to a real vector
>>
>> That is, instead of calling fmin (f, x0=cmplx_array)
>>
>> convert x0 to a real array (real(x0), imag (x0), real
>> (x1), imag (x1)....)
>>
>> And convert back inside the function f from real to
>> complex.
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> Just curious. Where does that problem appear ?
>

I'm trying to optimize an FIR filter (for a rather special application)

From discerptor at gmail.com Wed Nov 19 11:10:13 2008
From: discerptor at gmail.com (Joshua Lippai)
Date: Wed, 19 Nov 2008 08:10:13 -0800
Subject: [SciPy-user] F2PY: Problems after upgrading to Python2.6
In-Reply-To: <20081104184511.668b238f@mpi-magdeburg.mpg.de>
References: <20081104184511.668b238f@mpi-magdeburg.mpg.de>
Message-ID: <9911419a0811190810pc4708a6od1cde8b18a10d685@mail.gmail.com>

I wasn't aware the problems related to it affected f2py, but AFAIK numpy and scipy do not yet support the Python 2.6 release. You're best off staying with Python 2.5.2 for the time being if you need to get work done with numpy and scipy.

Josh

On Tue, Nov 4, 2008 at 9:45 AM, Benjamin Kern wrote:
> Hello,
>
> i'm experiencing strange problems after upgrading to python2.6. I'm
> also using numpy-svn and scipy-svn. So here is the problem. When i try
> to wrap the following simple fortran code,
> C File hello.f
> subroutine foo (a)
> integer a
> print*, "Hello from Fortran!"
> print*, "a=",a
> end
> I have problems executing this from python, i.e.
>
>>>> import hello
>>>> print hello.__doc__
> This module 'hello' is auto-generated with f2py (version:2_5968).
> Functions:
> foo(a)
> .
>>>> print hello.foo.__doc__
> foo - Function signature:
> foo(a)
> Required arguments:
> a : input int
>
>>>> hello.foo(4)
> Traceback (most recent call last):
> File "", line 1, in
> RuntimeError: more argument specifiers than keyword list entries
> (remaining format:'|:hello.foo')
>
> Thanks for the help in advance
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From kern at mpi-magdeburg.mpg.de Wed Nov 19 12:20:31 2008
From: kern at mpi-magdeburg.mpg.de (kern at mpi-magdeburg.mpg.de)
Date: Wed, 19 Nov 2008 18:20:31 +0100
Subject: [SciPy-user] F2PY: Problems after upgrading to Python2.6
In-Reply-To: <9911419a0811190810pc4708a6od1cde8b18a10d685@mail.gmail.com>
References: <20081104184511.668b238f@mpi-magdeburg.mpg.de> <9911419a0811190810pc4708a6od1cde8b18a10d685@mail.gmail.com>
Message-ID: <20081119182031.08812c1a@mpi-magdeburg.mpg.de>

Thanks for the feedback. But I'm still wondering why some of the routines in scipy still work (although they use the same f2py wrapper). In particular, at the moment I'm writing a small application which depends heavily on fast integration of ODEs.
To increase the speed of the ODE solver, I used to make a Fortran module for the ODEs to be solved, as described here

http://www.scipy.org/Cookbook/Theoretical_Ecology/Hastings_and_Powell

This approach doesn't work anymore. However, if I'm using a Python module for the ODEs, something like the following

>> def y(t,x): return ydot(..)
>>

I have no problems executing the scipy ode routines....

Benjamin

On Wed, 19 Nov 2008 08:10:13 -0800
"Joshua Lippai" wrote:

> I wasn't aware the problems related to it affected f2py, but AFAIK
> numpy and scipy do not yet support the Python 2.6 release. You're best
> off staying with Python 2.5.2 for the time being if you need to get
> work done with numpy and scipy.
>
> Josh
>
> On Tue, Nov 4, 2008 at 9:45 AM, Benjamin Kern
> wrote:
> > Hello,
> >
> > i'm experiencing strange problems after upgrading to python2.6. I'm
> > also using numpy-svn and scipy-svn. So here is the problem. When i
> > try to wrap the following simple fortran code,
> > C File hello.f
> > subroutine foo (a)
> > integer a
> > print*, "Hello from Fortran!"
> > print*, "a=",a
> > end
> > I have problems executing this from python, i.e.
> >
> >>>> import hello
> >>>> print hello.__doc__
> > This module 'hello' is auto-generated with f2py (version:2_5968).
> > Functions:
> > foo(a)
> > .
> >>>> print hello.foo.__doc__
> > foo - Function signature:
> > foo(a)
> > Required arguments:
> > a : input int
> >
> >>>> hello.foo(4)
> > Traceback (most recent call last):
> > File "", line 1, in
> > RuntimeError: more argument specifiers than keyword list entries
> > (remaining format:'|:hello.foo')
> >
> > Thanks for the help in advance
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From robert.kern at gmail.com Wed Nov 19 15:02:38 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 19 Nov 2008 14:02:38 -0600
Subject: [SciPy-user] optimizer for complex?
In-Reply-To:
References:
Message-ID: <3d375d730811191202r53875ef8hf86c525446aa3c95@mail.gmail.com>

On Wed, Nov 19, 2008 at 07:43, Neal Becker wrote:
> Do any of the optimizers (n-dim nonlinear minimizer) handle complex values?

Nope.

> It seems not, I seem to get good results by converting to a real vector

Yup.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From sean.m.mcdaniel at gmail.com Wed Nov 19 15:17:34 2008
From: sean.m.mcdaniel at gmail.com (Sean McDaniel)
Date: Wed, 19 Nov 2008 15:17:34 -0500
Subject: [SciPy-user] PPC 10.4 scipy installation problems
Message-ID: <1151de160811191217m50a9522bxad0d604ea7d74b2d@mail.gmail.com>

Hi y'all,

I am having difficulty installing scipy on my PPC powerbook on OS X 10.4. I have followed the instructions on the web page, and have checked the forums for possible fixes, but without success.

I am installing scipy for the convolution tools and need the signal submodule to work correctly.

Installation notes:
* I installed the latest fortran compiler from the AT&T web site. I also have g77 installed on the system.
* gfortran version 4.2.3
* GCC version - 4.0.1
* I downloaded and installed the fftw package.
* Installed numpy * Installed python: python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build * The first time I tried to run the import tests, it said I needed "nose." This was installed. Numpy and scipy import without problems, but for scipy, the the test fails with a segmentation fault... scipy.test('1','10') ---snip--- Test generator for parametric tests ... SKIP: Need to import PIL for this test /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/unittest.py:507: DeprecationWarning: NumpyTestCase will be removed in the next release; please update your code to use nose or unittest return self.suiteClass(map(testCaseClass, testCaseNames)) test1 (test_segment.TestSegment) ... ok test2 (test_segment.TestSegment) ... Segmentation fault ---snip ...and importation of the signal submodule fails. from scipy import signal --snip--- ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/_odepack.so, 2): Symbol not found: _s_stop Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/_odepack.so Expected in: dynamic lookup --snip--- Other installation attepts have generated similar dynamic lookup errors. The numpy test generates a few errors. They are listed below... numpy.test('1', '10') ---snip--- ====================================================================== ERROR: test_ma.testta ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose/case.py", line 182, in runTest self.test(*self.arg) TypeError: testta() takes exactly 2 arguments (0 given) ====================================================================== ERROR: test_ma.testtb ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose/case.py", line 182, in runTest self.test(*self.arg) TypeError: testtb() takes exactly 2 arguments (0 given) ====================================================================== ERROR: test_ma.testtc ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose/case.py", line 182, in runTest self.test(*self.arg) TypeError: testtc() takes exactly 2 arguments (0 given) ====================================================================== ERROR: test_ma.testf ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose/case.py", line 182, in runTest self.test(*self.arg) TypeError: testf() takes exactly 1 argument (0 given) ====================================================================== ERROR: test_ma.testinplace ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose/case.py", line 182, in runTest self.test(*self.arg) TypeError: testinplace() takes exactly 1 argument (0 given) ====================================================================== ERROR: Ticket #396 ---------------------------------------------------------------------- Traceback (most recent call last): 
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/tests/test_regression.py", line 598, in test_poly1d_nan_roots self.failUnlessRaises(np.linalg.LinAlgError,getattr,p,"r") File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/unittest.py", line 320, in failUnlessRaises callableObj(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/polynomial.py", line 1027, in __getattr__ return roots(self.coeffs) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/polynomial.py", line 180, in roots roots = _eigvals(A) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/polynomial.py", line 38, in _eigvals return eigvals(arg) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/decomp.py", line 478, in eigvals return eig(a,b=b,left=0,right=0,overwrite_a=overwrite_a) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/decomp.py", line 150, in eig a1 = asarray_chkfinite(a) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/function_base.py", line 706, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== FAIL: check_testUfuncRegression (test_ma.TestUfuncs) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/tests/test_ma.py", line 692, in check_testUfuncRegression self.failUnless(eqmask(ur.mask, mr.mask)) AssertionError ---------------------------------------------------------------------- ---snip--- Suggestions? Thank you, Sean -- ---------------------------------------------------- Sean McDaniel Graduate Assistant - NSCL work: mcdaniel at nscl.msu.edu personal: sean.m.mcdaniel at gmail.com ----------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Wed Nov 19 22:27:35 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 20 Nov 2008 12:27:35 +0900 Subject: [SciPy-user] PPC 10.4 scipy installation problems In-Reply-To: <1151de160811191217m50a9522bxad0d604ea7d74b2d@mail.gmail.com> References: <1151de160811191217m50a9522bxad0d604ea7d74b2d@mail.gmail.com> Message-ID: <5b8d13220811191927m4cbbcb91k8867df1d2f6e3f8@mail.gmail.com> On Thu, Nov 20, 2008 at 5:17 AM, Sean McDaniel wrote: > > Numpy and scipy import without problems, but for scipy, the the test fails > with a segmentation fault... Hi Sean, Sorry for the problems you encountered. The good news is that those problems are most likely build problems. First, which version of numpy and scipy are you using ? I assume you are using numpy 1.2.*, since you need nose, but is scipy 0.6 or the svn version ? scipy 0.6 is a bit old - there is no higher release, but 0.7 is about to be released (it is a matter of days), and I think it is safe to assume the svn version is actually more stable than 0.6 at this point. 
> --snip--- > ImportError: > dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/_odepack.so, > 2): Symbol not found: _s_stop > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/_odepack.so > Expected in: dynamic lookup s_stop may indicate that you built this module with g77 instead of gfortran. You should try rebuilding scipy from scratch (e.g. removing the build directory in scipy source tree and the install directory), and make sure you don't use g77 at all. You can for example log the build and grep for any g77 reference in the log. If you still have problems, please save the build log so that we can take a look at it for more details information, David From simpson at math.toronto.edu Wed Nov 19 22:32:23 2008 From: simpson at math.toronto.edu (Gideon Simpson) Date: Wed, 19 Nov 2008 22:32:23 -0500 Subject: [SciPy-user] PPC 10.4 scipy installation problems In-Reply-To: <1151de160811191217m50a9522bxad0d604ea7d74b2d@mail.gmail.com> References: <1151de160811191217m50a9522bxad0d604ea7d74b2d@mail.gmail.com> Message-ID: <37D2A7A5-B5CC-4AC3-B6C7-C08BA4F17478@math.toronto.edu> Make sure you executed the command python setup.py --fcompiler=gnu95 otherwise it might snag the g77 compiler instead. -gideon On Nov 19, 2008, at 3:17 PM, Sean McDaniel wrote: > HI y'all, > > I am having difficulty installing scipy on my PPC powerbook on os x > 10.4. I have followed the instructions on the web page, and have > checked the forums for possible fixes, but without sucess. > > I am installing scipy for the convolution tools and need the signal > submodule to work correctly. > > Installation notes: > * I installed the latest fortran compiler from the att web site. I > also have g77 installed on the system. > * gfortran version 4.2.3 > * GCC version - 4.0.1 > * I downloaded and installed the fftw package. > * Installed numpy > * Installed python: python setup.py build_src build_clib -- > fcompiler=gnu95 build_ext --fcompiler=gnu95 build > * The first time I tried to run the import tests, it said I needed > "nose." This was installed. > > Numpy and scipy import without problems, but for scipy, the the test > fails with a segmentation fault... > scipy.test('1','10') > ---snip--- > Test generator for parametric tests ... SKIP: Need to import PIL for > this test > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ > unittest.py:507: DeprecationWarning: NumpyTestCase will be removed > in the next release; please update your code to use nose or unittest > return self.suiteClass(map(testCaseClass, testCaseNames)) > test1 (test_segment.TestSegment) ... ok > test2 (test_segment.TestSegment) ... Segmentation fault > ---snip > > ...and importation of the signal submodule fails. > from scipy import signal > --snip--- > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/ > 2.5/lib/python2.5/site-packages/scipy/integrate/_odepack.so, 2): > Symbol not found: _s_stop > Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/ > lib/python2.5/site-packages/scipy/integrate/_odepack.so > Expected in: dynamic lookup > --snip--- > Other installation attepts have generated similar dynamic lookup > errors. > > The numpy test generates a few errors. They are listed below... 
> numpy.test('1', '10') > ---snip--- > ====================================================================== > ERROR: test_ma.testta > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/nose/case.py", line 182, in runTest > self.test(*self.arg) > TypeError: testta() takes exactly 2 arguments (0 given) > > ====================================================================== > ERROR: test_ma.testtb > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/nose/case.py", line 182, in runTest > self.test(*self.arg) > TypeError: testtb() takes exactly 2 arguments (0 given) > > ====================================================================== > ERROR: test_ma.testtc > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/nose/case.py", line 182, in runTest > self.test(*self.arg) > TypeError: testtc() takes exactly 2 arguments (0 given) > > ====================================================================== > ERROR: test_ma.testf > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/nose/case.py", line 182, in runTest > self.test(*self.arg) > TypeError: testf() takes exactly 1 argument (0 given) > > ====================================================================== > ERROR: test_ma.testinplace > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/nose/case.py", line 182, in runTest > self.test(*self.arg) > TypeError: testinplace() takes exactly 1 argument (0 given) > > ====================================================================== > ERROR: Ticket #396 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/numpy/core/tests/test_regression.py", line > 598, in test_poly1d_nan_roots > self.failUnlessRaises(np.linalg.LinAlgError,getattr,p,"r") > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/unittest.py", line 320, in failUnlessRaises > callableObj(*args, **kwargs) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/numpy/lib/polynomial.py", line 1027, in > __getattr__ > return roots(self.coeffs) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/numpy/lib/polynomial.py", line 180, in roots > roots = _eigvals(A) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/numpy/lib/polynomial.py", line 38, in _eigvals > return eigvals(arg) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/linalg/decomp.py", line 478, in eigvals > return eig(a,b=b,left=0,right=0,overwrite_a=overwrite_a) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/linalg/decomp.py", line 150, in eig > a1 = asarray_chkfinite(a) > File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/numpy/lib/function_base.py", line 706, in > asarray_chkfinite > raise ValueError, "array must not contain infs or NaNs" > ValueError: array must not contain infs or NaNs > > ====================================================================== > FAIL: check_testUfuncRegression (test_ma.TestUfuncs) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/numpy/core/tests/test_ma.py", line 692, in > check_testUfuncRegression > self.failUnless(eqmask(ur.mask, mr.mask)) > AssertionError > > ---------------------------------------------------------------------- > ---snip--- > > Suggestions? > > Thank you, > > Sean > > > > > -- > ---------------------------------------------------- > Sean McDaniel > Graduate Assistant - NSCL > work: mcdaniel at nscl.msu.edu > personal: sean.m.mcdaniel at gmail.com > ----------------------------------------------------- > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From simpson at math.toronto.edu Wed Nov 19 22:50:43 2008 From: simpson at math.toronto.edu (Gideon Simpson) Date: Wed, 19 Nov 2008 22:50:43 -0500 Subject: [SciPy-user] problem on os X with SciPy version 0.7.0.dev5151 Message-ID: Just pulled the latest version and built it, and while running the test suite, I got the following error: Python 2.5.2 (r252:60911, Jul 16 2008, 10:58:19) [GCC 4.0.1 (Apple Inc. build 5484)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test() Running unit tests for scipy NumPy version 1.2.1 NumPy is installed in /opt/lib/python2.5/site-packages/numpy SciPy version 0.7.0.dev5151 SciPy is installed in /opt/lib/python2.5/site-packages/scipy Python version 2.5.2 (r252:60911, Jul 16 2008, 10:58:19) [GCC 4.0.1 (Apple Inc. build 5484)] nose version 0.10.4 /opt/lib/python2.5/site-packages/scipy/sparse/linalg/dsolve/ linsolve.py:20: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead ' install scikits.umfpack instead', DeprecationWarning ) /opt/lib/python2.5/site-packages/scipy/linsolve/__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.sparse.linalg.dsolve warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning) ..............[[ 2. 5. 138. 2.] [ 3. 4. 219. 2.] [ 0. 7. 255. 3.] [ 1. 8. 268. 4.] [ 6. 9. 295. 6.]] [[ 2. 5. 138. 2.] [ 3. 4. 219. 2.] [ 0. 7. 255. 3.] [ 1. 8. 268. 4.] [ 6. 9. 295. 6.]] ..0.0 .3.33333332492e-06 .3.32912455292e-06 .3.32912455292e-06 .1.09653997882e-07 .1.98734938506e-07 .4.46252451203e-06 .1.01277823406e-06 .0.0 .3.33333335334e-06 .3.33333335334e-06 .3.33333335334e-06 .......................(array([53, 55, 56]), array([2, 3, 1])) (array([53, 55, 56]), array([2, 3, 1])) [2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1 1 1 1] ..7.54931229197e-08 .7.54931229197e-08 ...1.09653997882e-07 ..7.54931229197e-08 ......[[ 3 6 138] [ 4 5 219] [ 1 8 255] [ 2 9 268] [ 7 10 295]] [[ 3. 6. 138.] [ 4. 5. 219.] [ 1. 8. 255.] [ 2. 9. 268.] [ 7. 10. 
295.]] ............................................................................../opt /lib/python2.5/site-packages/scipy/interpolate/fitpack2.py:488: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ..../opt/lib/python2.5/site-packages/scipy/interpolate/fitpack2.py: 429: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) ......................................................................................................................................Warning : 1000000 bytes requested, 20 bytes read. ./opt/lib/python2.5/site-packages/numpy/lib/utils.py:110: DeprecationWarning: write_array is deprecated warnings.warn(str1, DeprecationWarning) /opt/lib/python2.5/site-packages/numpy/lib/utils.py:110: DeprecationWarning: read_array is deprecated warnings.warn(str1, DeprecationWarning) ....................../opt/lib/python2.5/site-packages/numpy/lib/ utils.py:110: DeprecationWarning: npfile is deprecated warnings.warn(str1, DeprecationWarning) ............................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..SSSSSS......SSSSSS......SSSS....NO ATLAS INFO AVAILABLE ......................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ................S................................................../ opt/lib/python2.5/site-packages/scipy/linalg/decomp.py:1173: DeprecationWarning: qr econ argument will be removed after scipy 0.7. The economy transform will then be available through the mode='economic' argument. then be available through the mode='economic' argument.""", DeprecationWarning) ..........................................................................................Parameter 17 to routine CSTEGR was incorrect Mac OS BLAS parameter error in CSTEGR, parameter #0, (unavailable), is 0 -gideon From anthony.j.mannucci at jpl.nasa.gov Thu Nov 20 05:57:40 2008 From: anthony.j.mannucci at jpl.nasa.gov (Mannucci, Anthony J) Date: Thu, 20 Nov 2008 02:57:40 -0800 Subject: [SciPy-user] Scipy fails to build in Mac OS X 10.5 In-Reply-To: Message-ID: Yes, I backed off of 2.6. I now run 2.5. I figured out how to convert back. It was fairly easy. A Google search provided plenty of help, e.g. See http://homepages.cwi.nl/~jack/macpython/uninstall.html and http://mail.python.org/pipermail/python-list/2007-March/432284.html I just did a rename of the "Current" link in the /Library/Frameworks area. OS is 10.5.5. I had problems installing the entire SciPy suite. I need basemap, a matplotlib toolkit that requires the latest matplotlb. 
So, the matplotlib package at Pythonmac.org was not recent enough. That's what I've used in the past. I could not seem to install matplotlib without error. I tried following the instructions to install from source, but to no avail. I could not get the egg install to work either. At one point, the build was looking for things in /Developer/... which struck me as very odd. Libraries were not being found, etc. Perhaps I made some installation mistakes with the dependent libraries. I gave up. (In summary, in the past I've had great success with the pythonmac packages, but that's not an option now due to basemap). I also tried the "SciPy superpack", but that did not work. My FORTRAN compiler is gfortran. I then went to the Enthought distribution which is working. I installed basemap on top of that. I also added pydb. Future editions will ship with basemap, I've heard. -Tony On 11/14/08 10:00 AM, "scipy-user-request at scipy.org" wrote: AFAIK Python 2.6 is not supported at this point, I recall there being known issues with 0.6.0 and py2.6. You may have more luck with the latest svn snapshots (instructions are at http://scipy.org/Download ), but if you want to run stable you're probably better off installing python 2.5.2 instead. Cheers, David -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://genesis.jpl.nasa.gov Pasadena, CA 91109 -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Thu Nov 20 11:56:45 2008 From: robince at gmail.com (Robin) Date: Thu, 20 Nov 2008 16:56:45 +0000 Subject: [SciPy-user] kstest and scipy.stats Message-ID: Hi, I am having trouble using kstest and the scipy.stats package which I suspect is due to a misunderstanding. Basically I'm confused by the below: O is an array of observed (integer) values: In [344]: O.shape Out[344]: (1400,) In [345]: O.max() Out[345]: 21 In [346]: O.min() Out[346]: 0 Now I am trying to use the kstest to determine how closely they described this vector of data. But I was getting low values with kstest (always p of zero - even when plotting the distributions shows that by eye they are a very good fit). But the thing that really confuses me is this: In [337]: kstest(O, stats.rv_discrete(name='test',values=(r_[0:25],prob(O,25))).cdf) Out[337]: (0.31071428571428572, 0.0) Prob is a small function of mine that returns a probability vector from a vector of integers (shown below - I have been using it for ages and I'm sure there is no mistake there). rv_discrete seems to construct the right distribution (mean and so on match) - so how come the p value is 0, when I am comparing to the distribution directly sampled from the data? Any help greatfully appreciated, Robin ---- Source: def prob(x, r): """Sample probabity of integer sequence using bincount Inputs: x - integer sequence r - number of possible responses (max(x) Hi list! I have a question about plotting and i haven't found an easy solution on the 'net. I have a 2d matrix with some values in it: basically they are just ones or minus ones, and I want to plot them putting a red "square" (or "ball" or "something") on screen (on a grid) if the value of that matrix element is one and a blue "something" if it is minus one. How can I do that? Thanks, marco -- Quando sei una human pignata e la pazzo jacket si ? 
accorciata e non ti puoi liberare dai colpi di legno e di bastone dai petardi sul groppone Vinicio Capossela From josef.pktd at gmail.com Thu Nov 20 12:18:31 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 20 Nov 2008 12:18:31 -0500 Subject: [SciPy-user] kstest and scipy.stats In-Reply-To: References: Message-ID: <1cd32cbb0811200918i7cec4339pebf36b63fcd4e3e1@mail.gmail.com> quick answer: I have to check again: >From my interpretation, kstest only works for continuous random variables, see also http://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test For discrete distributions I wrote a chisquare test, located in scipy/stats/tests/test_discrete_chisquare.py, that automatically determines the limits of the cells for the test, and calculates the pvalue. I don't have all my references right know but http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test, should give the basic idea. Josef From mailinglist.honeypot at gmail.com Thu Nov 20 12:33:02 2008 From: mailinglist.honeypot at gmail.com (Steve Lianoglou) Date: Thu, 20 Nov 2008 12:33:02 -0500 Subject: [SciPy-user] Stupid plot question In-Reply-To: References: Message-ID: <64446829-81A6-407E-8408-8BCFE1498D5A@gmail.com> Hi marco, On Nov 20, 2008, at 12:02 PM, Marco wrote: > Hi list! > > I have a question about plotting and i haven't found an easy solution > on the 'net. > > I have a 2d matrix with some values in it: basically they are just > ones or minus ones, and I want to plot them putting a red "square" (or > "ball" or "something") on screen (on a grid) if the value of that > matrix element is one and a blue "something" if it is minus one. > > How can I do that? How about this -- start ipython w/ the pylab interface and do the following: import random m = array([random.gauss(0,1) for i in range(100)]).reshape((10,10)) m[m<0] = -1 m[m!=-1]=1 x,y = where(m==-1) plot(x,y,'bo') x,y = where(m==1) plot(x,y,'ro') In the plot functions, the 'bo' and 'ro' means plot blue/green ('b/o') open circles ('o') Hope that helps, -steve From berthe.loic at gmail.com Thu Nov 20 13:41:59 2008 From: berthe.loic at gmail.com (LB) Date: Thu, 20 Nov 2008 10:41:59 -0800 (PST) Subject: [SciPy-user] Stupid plot question In-Reply-To: References: Message-ID: <549d1a2d-ed94-4a25-92de-62017010cfa2@t11g2000yqg.googlegroups.com> I would use the matplotlib module : http://matplotlib.sourceforge.net/ If you want to plot the matrix directly, you could use matshow : http://matplotlib.sourceforge.net/examples/pylab_examples/matshow.html if you prefer plotting "balls" or "something", try scatter : http://matplotlib.sourceforge.net/examples/pylab_examples/scatter_demo2.html You can see a lot of matplotlib's functionality (and the associated examples) here : http://matplotlib.sourceforge.net/gallery.html HTH -- LB From aisaac at american.edu Thu Nov 20 13:48:57 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Nov 2008 13:48:57 -0500 Subject: [SciPy-user] Stupid plot question In-Reply-To: <64446829-81A6-407E-8408-8BCFE1498D5A@gmail.com> References: <64446829-81A6-407E-8408-8BCFE1498D5A@gmail.com> Message-ID: <4925B119.408@american.edu> Steve Lianoglou wrote: > import random > m = array([random.gauss(0,1) for i in range(100)]).reshape((10,10)) > m[m<0] = -1 > m[m!=-1]=1 > x,y = where(m==-1) > plot(x,y,'bo') > x,y = where(m==1) > plot(x,y,'ro') This is nice, 'tho I'd probably prefer to add 0.5 to x and y. An alternative is `matshow(m)`. 
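For example, continuing Steve's pylab session (reusing the matrix m defined there), the half-cell offset and the matshow alternative look like:

x, y = where(m == -1)
plot(x + 0.5, y + 0.5, 'bo')
x, y = where(m == 1)
plot(x + 0.5, y + 0.5, 'ro')
# or render the whole matrix in one call:
matshow(m)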
Alan Isaac From dineshbvadhia at hotmail.com Thu Nov 20 14:19:48 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Thu, 20 Nov 2008 11:19:48 -0800 Subject: [SciPy-user] Sparse int and float performance Message-ID: A question for Nathan Bell: I use Scipy Sparse to solve y = Ax, where A is a MxN "binary" sparse matrix and x is a dense floating point vector, with M and N each >100,000 I use the following to create the CSR matrix: row = numpy.empty(nnz, dtype='intc') column = numpy.empty(nnz, dtype='intc') data = numpy.ones(nnz, dtype='intc') A = sparse.csr_matrix((data, (row, column)), shape=(I,J)) Now, suppose that we change data to the float datatype ie. data = numpy.ones(nnz, dtype=float) I know I can test this but from the perspective of the scipy code, how would this impact the performance of the calculation of y = Ax ie. - Same as data with dtype='intc' - Slower than data with dtype = 'intc' - Faster than data with dtype = 'intc' Thanks! Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From wizzard028wise at gmail.com Thu Nov 20 14:23:51 2008 From: wizzard028wise at gmail.com (Dorian) Date: Thu, 20 Nov 2008 20:23:51 +0100 Subject: [SciPy-user] Sparse int and float performance In-Reply-To: References: Message-ID: <674a602a0811201123y7a603ed1h866213630bce8a9c@mail.gmail.com> Could you increase the font size of your post. Cheers 2008/11/20 Dinesh B Vadhia > A question for Nathan Bell: > > I use Scipy Sparse to solve y = Ax, where A is a MxN "binary" sparse matrix > and x is a dense floating point vector, with M and N each >100,000 > > I use the following to create the CSR matrix: > > row = numpy.empty(nnz, dtype='intc') > column = numpy.empty(nnz, dtype='intc') > > data = numpy.ones(nnz, dtype='intc') > A = sparse.csr_matrix((data, (row, column)), shape=(I,J)) > > Now, suppose that we change data to the float datatype ie. > > data = numpy.ones(nnz, dtype=float) > > I know I can test this but from the perspective of the scipy code, how > would this impact the performance of the calculation of y = Ax ie. > > - Same as data with dtype='intc' > - Slower than data with dtype = 'intc' > - Faster than data with dtype = 'intc' > > Thanks! > > Dinesh > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Thu Nov 20 14:42:13 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 20 Nov 2008 21:42:13 +0200 Subject: [SciPy-user] Sparse int and float performance In-Reply-To: <674a602a0811201123y7a603ed1h866213630bce8a9c@mail.gmail.com> References: <674a602a0811201123y7a603ed1h866213630bce8a9c@mail.gmail.com> Message-ID: <9457e7c80811201142g4efd0028k1aa9d80ae7f09b5b@mail.gmail.com> 2008/11/20 Dorian : > Could you increase the font size of your post. Or, preferable, post in clear text. 
St?fan From rgjames at ucdavis.edu Thu Nov 20 12:56:36 2008 From: rgjames at ucdavis.edu (Ryan James) Date: Thu, 20 Nov 2008 09:56:36 -0800 Subject: [SciPy-user] Stupid plot question In-Reply-To: References: Message-ID: <1227203796.16580.1.camel@localhost.localdomain> On Thu, 2008-11-20 at 18:02 +0100, Marco wrote: > I have a 2d matrix with some values in it: basically they are just > ones or minus ones, and I want to plot them putting a red "square" (or > "ball" or "something") on screen (on a grid) if the value of that > matrix element is one and a blue "something" if it is minus one. There's also: M = array([randint(0, 2) for i in xrange(100)]).reshape((10,10)) M = 2*M - 1 matshow(M) ryan From wnbell at gmail.com Thu Nov 20 16:29:16 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 20 Nov 2008 16:29:16 -0500 Subject: [SciPy-user] Sparse int and float performance In-Reply-To: References: Message-ID: On Thu, Nov 20, 2008 at 2:19 PM, Dinesh B Vadhia wrote: > A question for Nathan Bell: > > I use Scipy Sparse to solve y = Ax, where A is a MxN "binary" sparse matrix > and x is a dense floating point vector, with M and N each >100,000 > > I use the following to create the CSR matrix: > > row = numpy.empty(nnz, dtype='intc') > column = numpy.empty(nnz, dtype='intc') > > data = numpy.ones(nnz, dtype='intc') > A = sparse.csr_matrix((data, (row, column)), shape=(I,J)) > > Now, suppose that we change data to the float datatype ie. > > data = numpy.ones(nnz, dtype=float) > > I know I can test this but from the perspective of the scipy code, how would > this impact the performance of the calculation of y = Ax ie. > > - Same as data with dtype='intc' > - Slower than data with dtype = 'intc' > - Faster than data with dtype = 'intc' > The sparse solvers use floating point values, so I assume that dtype='intc' will get promoted to double precision. You should use 'float32' or 'float64' for the data array. The fastest would be: data = numpy.ones(nnz, dtype='float32') -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From gislio at gmail.com Fri Nov 21 07:10:41 2008 From: gislio at gmail.com (=?ISO-8859-1?Q?G=EDsli_=D3ttarsson?=) Date: Fri, 21 Nov 2008 12:10:41 +0000 Subject: [SciPy-user] Question about scipy.optimize Message-ID: <22d42dd20811210410rfa7728dja8ac388a9bcf0111@mail.gmail.com> Hello all. I am a relatively new user of python and scipy and I have been trying out scipy's optimization facilities. I am using scipy version 0.6.0, as distributed with Ubuntu 8.04. My exploration has centered around the minimization of x*x*y, subject to the equality constraint 2*x*x+y*y=3. 
In my experience, this problem is solved by introducing a Lagrange multiplier and minimizing the Lagrangian: L = x*x*y - lambda * ( 2*x*x+y*y-3 ) I have had no problem finding the desired solution via Newton-Raphson using the function and its first and second derivatives: import scipy.optimize as opt import numpy import numpy.linalg as l def f(r): x,y,lam=r return x*x*y -lam*(2*x*x+y*y-3) def g(r): x,y,lam=r return numpy.array([2*x*y-4*lam*x, x*x-2*lam*y, -(2*x*x+y*y-3)]) def h(r): x,y,lam=r return numpy.mat([[2.*y-4.*lam, 2.*x, -4.*x],[2.*x,-2.*lam,-2.*y],[-4.*x,-2.*y,0.]]) def NR(f, g, h, x0, tol=1e-5, maxit=100): "Find a local extremum of f (a root of g) using Newton-Raphson" x1 = numpy.asarray(x0) f1 = f(x1) for i in range(0,maxit): dx = l.solve(h(x1),g(x1)) ldx = numpy.sqrt(numpy.dot(dx,dx)) x2 = x1-dx f2 = f(x2) if(ldx < tol): # x is close enough df = numpy.abs(f1-f2) if(df < tol): # f is close enough return x2, f2, df, ldx, i x1=x2 f1=f2 return x2, f2, df, ldx, i print NR(f,g,h,[-2.,2.,3.],tol=1e-10) My Newton-Raphson iteration converges in 5 iterations, but I have had no success using any of the functions in scipy.optimize, for example: print opt.fmin_bfgs(f=f, x0=[-2.,2.,3.], fprime=g) print opt.fmin_ncg(f=f, x0=[-2.,2.,3.], fprime=g, fhess=h) neither of which converges. I am beginning to suspect some fundamental misunderstanding on my part. Could someone throw me a bone? Best regards G?sli From nwagner at iam.uni-stuttgart.de Fri Nov 21 08:16:23 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 Nov 2008 14:16:23 +0100 Subject: [SciPy-user] Question about scipy.optimize In-Reply-To: <22d42dd20811210410rfa7728dja8ac388a9bcf0111@mail.gmail.com> References: <22d42dd20811210410rfa7728dja8ac388a9bcf0111@mail.gmail.com> Message-ID: On Fri, 21 Nov 2008 12:10:41 +0000 "G?sli ?ttarsson" wrote: > Hello all. > > I am a relatively new user of python and scipy and I >have been trying > out scipy's optimization facilities. I am using scipy >version 0.6.0, > as distributed with Ubuntu 8.04. > > My exploration has centered around the minimization of >x*x*y, subject > to the equality constraint 2*x*x+y*y=3. 
In my >experience, this > problem is solved by introducing a Lagrange multiplier >and minimizing > the Lagrangian: > > L = x*x*y - lambda * ( 2*x*x+y*y-3 ) > > I have had no problem finding the desired solution via >Newton-Raphson > using the function and its first and second derivatives: > > import scipy.optimize as opt > import numpy > import numpy.linalg as l > > def f(r): > x,y,lam=r > return x*x*y -lam*(2*x*x+y*y-3) > > def g(r): > x,y,lam=r > return numpy.array([2*x*y-4*lam*x, x*x-2*lam*y, >-(2*x*x+y*y-3)]) > > def h(r): > x,y,lam=r > return numpy.mat([[2.*y-4.*lam, 2.*x, > -4.*x],[2.*x,-2.*lam,-2.*y],[-4.*x,-2.*y,0.]]) > > def NR(f, g, h, x0, tol=1e-5, maxit=100): > "Find a local extremum of f (a root of g) using >Newton-Raphson" > x1 = numpy.asarray(x0) > f1 = f(x1) > for i in range(0,maxit): > dx = l.solve(h(x1),g(x1)) > ldx = numpy.sqrt(numpy.dot(dx,dx)) > x2 = x1-dx > f2 = f(x2) > if(ldx < tol): # x is close enough > df = numpy.abs(f1-f2) > if(df < tol): # f is close enough > return x2, f2, df, ldx, i > x1=x2 > f1=f2 > return x2, f2, df, ldx, i > > print NR(f,g,h,[-2.,2.,3.],tol=1e-10) > > My Newton-Raphson iteration converges in 5 iterations, >but I have had > no success using any of the functions in scipy.optimize, >for example: > > print opt.fmin_bfgs(f=f, x0=[-2.,2.,3.], fprime=g) > print opt.fmin_ncg(f=f, x0=[-2.,2.,3.], fprime=g, >fhess=h) > > neither of which converges. > > I am beginning to suspect some fundamental >misunderstanding on my > part. Could someone throw me a bone? > > Best regards > > G?sli > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Please find enclosed an untested implementation using openopt. Cheers, Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_opt.py Type: text/x-python Size: 405 bytes Desc: not available URL: From dineshbvadhia at hotmail.com Fri Nov 21 08:53:19 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Fri, 21 Nov 2008 05:53:19 -0800 Subject: [SciPy-user] Sparse int and float performance Message-ID: Like I said, I haven't looked at the sparse solver code to know how it works but if x is a dense float vector, A a binary matrix with data = numpy.ones(nnz, dtype='intc') Then, if there were a mixed matrix-vector multiplication version of the sparse solver that didn't upcast it to float would there be a performance improvement in the calculation of y = Ax? Dinesh ................................................................... Date: Thu, 20 Nov 2008 16:29:16 -0500 From: "Nathan Bell" Subject: Re: [SciPy-user] Sparse int and float performance To: "SciPy Users List" Message-ID: Content-Type: text/plain; charset=ISO-8859-1 On Thu, Nov 20, 2008 at 2:19 PM, Dinesh B Vadhia wrote: > A question for Nathan Bell: > > I use Scipy Sparse to solve y = Ax, where A is a MxN "binary" sparse > matrix > and x is a dense floating point vector, with M and N each >100,000 > > I use the following to create the CSR matrix: > > row = numpy.empty(nnz, dtype='intc') > column = numpy.empty(nnz, dtype='intc') > > data = numpy.ones(nnz, dtype='intc') > A = sparse.csr_matrix((data, (row, column)), shape=(I,J)) > > Now, suppose that we change data to the float datatype ie. > > data = numpy.ones(nnz, dtype=float) > > I know I can test this but from the perspective of the scipy code, how > would > this impact the performance of the calculation of y = Ax ie. 
> > - Same as data with dtype='intc' > - Slower than data with dtype = 'intc' > - Faster than data with dtype = 'intc' > The sparse solvers use floating point values, so I assume that dtype='intc' will get promoted to double precision. You should use 'float32' or 'float64' for the data array. The fastest would be: data = numpy.ones(nnz, dtype='float32') -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From gislio at gmail.com Fri Nov 21 09:12:26 2008 From: gislio at gmail.com (=?ISO-8859-1?Q?G=EDsli_=D3ttarsson?=) Date: Fri, 21 Nov 2008 14:12:26 +0000 Subject: [SciPy-user] Question about scipy.optimize In-Reply-To: References: <22d42dd20811210410rfa7728dja8ac388a9bcf0111@mail.gmail.com> Message-ID: <22d42dd20811210612i12550f11tb4c99425c2c33afc@mail.gmail.com> Thanks Nils. I will install and investigate openopt. This looks like a very exciting development. Others: I would still like to understand why I am not being successful with scipy.optimize. Was I wrong to think that NCG could handle my constraint, even when I am providing the Hessian matrix? Thanks G?sli On Fri, Nov 21, 2008 at 1:16 PM, Nils Wagner wrote: > On Fri, 21 Nov 2008 12:10:41 +0000 > "G?sli ?ttarsson" wrote: > >> Hello all. >> >> I am a relatively new user of python and scipy and I have been trying >> out scipy's optimization facilities. I am using scipy version 0.6.0, >> as distributed with Ubuntu 8.04. >> >> My exploration has centered around the minimization of x*x*y, subject >> to the equality constraint 2*x*x+y*y=3. In my experience, this >> problem is solved by introducing a Lagrange multiplier and minimizing >> the Lagrangian: >> >> L = x*x*y - lambda * ( 2*x*x+y*y-3 ) >> >> I have had no problem finding the desired solution via Newton-Raphson >> using the function and its first and second derivatives: >> >> import scipy.optimize as opt >> import numpy >> import numpy.linalg as l >> >> def f(r): >> x,y,lam=r >> return x*x*y -lam*(2*x*x+y*y-3) >> >> def g(r): >> x,y,lam=r >> return numpy.array([2*x*y-4*lam*x, x*x-2*lam*y, -(2*x*x+y*y-3)]) >> >> def h(r): >> x,y,lam=r >> return numpy.mat([[2.*y-4.*lam, 2.*x, >> -4.*x],[2.*x,-2.*lam,-2.*y],[-4.*x,-2.*y,0.]]) >> >> def NR(f, g, h, x0, tol=1e-5, maxit=100): >> "Find a local extremum of f (a root of g) using Newton-Raphson" >> x1 = numpy.asarray(x0) >> f1 = f(x1) >> for i in range(0,maxit): >> dx = l.solve(h(x1),g(x1)) >> ldx = numpy.sqrt(numpy.dot(dx,dx)) >> x2 = x1-dx >> f2 = f(x2) >> if(ldx < tol): # x is close enough >> df = numpy.abs(f1-f2) >> if(df < tol): # f is close enough >> return x2, f2, df, ldx, i >> x1=x2 >> f1=f2 >> return x2, f2, df, ldx, i >> >> print NR(f,g,h,[-2.,2.,3.],tol=1e-10) >> >> My Newton-Raphson iteration converges in 5 iterations, but I have had >> no success using any of the functions in scipy.optimize, for example: >> >> print opt.fmin_bfgs(f=f, x0=[-2.,2.,3.], fprime=g) >> print opt.fmin_ncg(f=f, x0=[-2.,2.,3.], fprime=g, fhess=h) >> >> neither of which converges. >> >> I am beginning to suspect some fundamental misunderstanding on my >> part. Could someone throw me a bone? >> >> Best regards >> >> G?sli >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > Please find enclosed an untested implementation using openopt. 
> > Cheers, > Nils > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Fri Nov 21 09:19:24 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 21 Nov 2008 16:19:24 +0200 Subject: [SciPy-user] Question about scipy.optimize In-Reply-To: <22d42dd20811210612i12550f11tb4c99425c2c33afc@mail.gmail.com> References: <22d42dd20811210410rfa7728dja8ac388a9bcf0111@mail.gmail.com> <22d42dd20811210612i12550f11tb4c99425c2c33afc@mail.gmail.com> Message-ID: <4926C36C.3070005@scipy.org> Both fmin_bfgs and fmin_ncg expect objective function to be convex, while y*x^2 is not. I have no time & willing to dig more deeply for the solvers involved, problem and code mentione. D. G?sli ?ttarsson wrote: > > Thanks Nils. I will install and investigate openopt. This looks like > a very exciting development. > > Others: I would still like to understand why I am not being > successful with scipy.optimize. Was I wrong to think that NCG could > handle my constraint, even when I am providing the Hessian matrix? > > Thanks > > G?sli > > On Fri, Nov 21, 2008 at 1:16 PM, Nils Wagner > > > wrote: > > On Fri, 21 Nov 2008 12:10:41 +0000 > "G?sli ?ttarsson" > wrote: > > Hello all. > > I am a relatively new user of python and scipy and I have been > trying > out scipy's optimization facilities. I am using scipy version > 0.6.0, > as distributed with Ubuntu 8.04. > > My exploration has centered around the minimization of x*x*y, > subject > to the equality constraint 2*x*x+y*y=3. In my experience, this > problem is solved by introducing a Lagrange multiplier and > minimizing > the Lagrangian: > > L = x*x*y - lambda * ( 2*x*x+y*y-3 ) > > I have had no problem finding the desired solution via > Newton-Raphson > using the function and its first and second derivatives: > > import scipy.optimize as opt > import numpy > import numpy.linalg as l > > def f(r): > x,y,lam=r > return x*x*y -lam*(2*x*x+y*y-3) > > def g(r): > x,y,lam=r > return numpy.array([2*x*y-4*lam*x, x*x-2*lam*y, -(2*x*x+y*y-3)]) > > def h(r): > x,y,lam=r > return numpy.mat([[2.*y-4.*lam, 2.*x, > -4.*x],[2.*x,-2.*lam,-2.*y],[-4.*x,-2.*y,0.]]) > > def NR(f, g, h, x0, tol=1e-5, maxit=100): > "Find a local extremum of f (a root of g) using Newton-Raphson" > x1 = numpy.asarray(x0) > f1 = f(x1) > for i in range(0,maxit): > dx = l.solve(h(x1),g(x1)) > ldx = numpy.sqrt(numpy.dot(dx,dx)) > x2 = x1-dx > f2 = f(x2) > if(ldx < tol): # x is close enough > df = numpy.abs(f1-f2) > if(df < tol): # f is close enough > return x2, f2, df, ldx, i > x1=x2 > f1=f2 > return x2, f2, df, ldx, i > > print NR(f,g,h,[-2.,2.,3.],tol=1e-10) > > My Newton-Raphson iteration converges in 5 iterations, but I > have had > no success using any of the functions in scipy.optimize, for > example: > > print opt.fmin_bfgs(f=f, x0=[-2.,2.,3.], fprime=g) > print opt.fmin_ncg(f=f, x0=[-2.,2.,3.], fprime=g, fhess=h) > > neither of which converges. > > I am beginning to suspect some fundamental misunderstanding on my > part. Could someone throw me a bone? > > Best regards > > G?sli > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > Please find enclosed an untested implementation using openopt. 
> > Cheers, > Nils > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gislio at gmail.com Fri Nov 21 09:27:22 2008 From: gislio at gmail.com (=?ISO-8859-1?Q?G=EDsli_=D3ttarsson?=) Date: Fri, 21 Nov 2008 14:27:22 +0000 Subject: [SciPy-user] Question about scipy.optimize In-Reply-To: <4926C36C.3070005@scipy.org> References: <22d42dd20811210410rfa7728dja8ac388a9bcf0111@mail.gmail.com> <22d42dd20811210612i12550f11tb4c99425c2c33afc@mail.gmail.com> <4926C36C.3070005@scipy.org> Message-ID: <22d42dd20811210627g77abde15pee526c9911da31c2@mail.gmail.com> Nuff said. Thanks for setting me straight. G?sli On Fri, Nov 21, 2008 at 2:19 PM, dmitrey wrote: > Both fmin_bfgs and fmin_ncg expect objective function to be convex, > while y*x^2 is not. I have no time & willing to dig more deeply for the > solvers involved, problem and code mentione. > D. > > G?sli ?ttarsson wrote: > > > > Thanks Nils. I will install and investigate openopt. This looks like > > a very exciting development. > > > > Others: I would still like to understand why I am not being > > successful with scipy.optimize. Was I wrong to think that NCG could > > handle my constraint, even when I am providing the Hessian matrix? > > > > Thanks > > > > G?sli > > > > On Fri, Nov 21, 2008 at 1:16 PM, Nils Wagner > > > > > wrote: > > > > On Fri, 21 Nov 2008 12:10:41 +0000 > > "G?sli ?ttarsson" > > wrote: > > > > Hello all. > > > > I am a relatively new user of python and scipy and I have been > > trying > > out scipy's optimization facilities. I am using scipy version > > 0.6.0, > > as distributed with Ubuntu 8.04. > > > > My exploration has centered around the minimization of x*x*y, > > subject > > to the equality constraint 2*x*x+y*y=3. 
In my experience, this > > problem is solved by introducing a Lagrange multiplier and > > minimizing > > the Lagrangian: > > > > L = x*x*y - lambda * ( 2*x*x+y*y-3 ) > > > > I have had no problem finding the desired solution via > > Newton-Raphson > > using the function and its first and second derivatives: > > > > import scipy.optimize as opt > > import numpy > > import numpy.linalg as l > > > > def f(r): > > x,y,lam=r > > return x*x*y -lam*(2*x*x+y*y-3) > > > > def g(r): > > x,y,lam=r > > return numpy.array([2*x*y-4*lam*x, x*x-2*lam*y, > -(2*x*x+y*y-3)]) > > > > def h(r): > > x,y,lam=r > > return numpy.mat([[2.*y-4.*lam, 2.*x, > > -4.*x],[2.*x,-2.*lam,-2.*y],[-4.*x,-2.*y,0.]]) > > > > def NR(f, g, h, x0, tol=1e-5, maxit=100): > > "Find a local extremum of f (a root of g) using Newton-Raphson" > > x1 = numpy.asarray(x0) > > f1 = f(x1) > > for i in range(0,maxit): > > dx = l.solve(h(x1),g(x1)) > > ldx = numpy.sqrt(numpy.dot(dx,dx)) > > x2 = x1-dx > > f2 = f(x2) > > if(ldx < tol): # x is close enough > > df = numpy.abs(f1-f2) > > if(df < tol): # f is close enough > > return x2, f2, df, ldx, i > > x1=x2 > > f1=f2 > > return x2, f2, df, ldx, i > > > > print NR(f,g,h,[-2.,2.,3.],tol=1e-10) > > > > My Newton-Raphson iteration converges in 5 iterations, but I > > have had > > no success using any of the functions in scipy.optimize, for > > example: > > > > print opt.fmin_bfgs(f=f, x0=[-2.,2.,3.], fprime=g) > > print opt.fmin_ncg(f=f, x0=[-2.,2.,3.], fprime=g, fhess=h) > > > > neither of which converges. > > > > I am beginning to suspect some fundamental misunderstanding on my > > part. Could someone throw me a bone? > > > > Best regards > > > > G?sli > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > Please find enclosed an untested implementation using openopt. > > > > Cheers, > > Nils > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Nov 21 09:57:08 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 Nov 2008 15:57:08 +0100 Subject: [SciPy-user] loadtxt question Message-ID: Hi all, is the length of the row (number of columns per row) limited when using loadtxt ? 
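
For instance (untested), this is the kind of thing I have in mind,
with a scratch file that has a few thousand columns per row:

import numpy

a = numpy.random.rand(10, 5000)   # 10 rows, 5000 columns
numpy.savetxt('wide.txt', a)      # 'wide.txt' is just a scratch file
b = numpy.loadtxt('wide.txt')
print b.shape                     # hoping for (10, 5000)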
Nils

From wnbell at gmail.com  Fri Nov 21 10:42:43 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Fri, 21 Nov 2008 10:42:43 -0500
Subject: [SciPy-user] Sparse int and float performance
In-Reply-To: 
References: 
Message-ID: 

On Fri, Nov 21, 2008 at 8:53 AM, Dinesh B Vadhia wrote:
> Like I said, I haven't looked at the sparse solver code to know how it
> works but if x is a dense float vector, A a binary matrix with
>
> data = numpy.ones(nnz, dtype='intc')
>
> Then, if there were a mixed matrix-vector multiplication version of the
> sparse solver that didn't upcast it to float would there be a performance
> improvement in the calculation of y = Ax?

It depends which sparse solver we're talking about. In principle, the
iterative solvers could perform operations like y=A*x where x and y
are floats and A is an int. The direct solvers (i.e. SuperLU, a sparse
LU method) would always need floats.

Is there any reason not to use dtype='float32'? It should require no
upcast in the sparse solvers.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From rmay31 at gmail.com  Fri Nov 21 12:13:38 2008
From: rmay31 at gmail.com (Ryan May)
Date: Fri, 21 Nov 2008 11:13:38 -0600
Subject: [SciPy-user] loadtxt question
In-Reply-To: 
References: 
Message-ID: <4926EC42.3050605@gmail.com>

Nils Wagner wrote:
> Hi all,
>
> is the length of the row (number of columns per row)
> limited when using loadtxt ?

I believe the only limitation is that every row needs to have the same
number of elements.

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From nwagner at iam.uni-stuttgart.de  Fri Nov 21 12:21:35 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 21 Nov 2008 18:21:35 +0100
Subject: [SciPy-user] loadtxt question
In-Reply-To: <4926EC42.3050605@gmail.com>
References: <4926EC42.3050605@gmail.com>
Message-ID: 

On Fri, 21 Nov 2008 11:13:38 -0600
Ryan May wrote:

> Nils Wagner wrote:
>> Hi all,
>>
>> is the length of the row (number of columns per row)
>> limited when using loadtxt ?
>
> I believe the only limitation is that every row needs to have the same
> number of elements.
>
> Ryan
>
> --
> Ryan May
> Graduate Research Assistant
> School of Meteorology
> University of Oklahoma

Hi Ryan,

Yes indeed. I fell into a trap.

Nils

From afraser at lanl.gov  Fri Nov 21 13:14:27 2008
From: afraser at lanl.gov (Andy Fraser)
Date: Fri, 21 Nov 2008 11:14:27 -0700
Subject: [SciPy-user] Least squares for sparse matrices?
Message-ID: <87tza1vtdo.fsf@lanl.gov>

I am trying to understand the behavior of matrices that approximate
Radon transforms and their inverses. So far, I've used commands like
the following to experiment with various values of SVDcond:

V_1,resids,rank,s = numpy.linalg.lstsq(M_Radon,U_0,rcond=SVDcond)

Is there a sparse version of lstsq that will let me exploit the
sparseness of M_Radon?

It would be even better if there were something like a sparse SVD that
would let me pre-calculate an SVD decomposition and then later use it
to invert several U_0 vectors.

--
Andy Fraser ISR-2 (MS:B244)
afraser at lanl.gov Los Alamos National Laboratory
505 665 9448 Los Alamos, NM 87545

From nwagner at iam.uni-stuttgart.de  Fri Nov 21 13:43:10 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 21 Nov 2008 19:43:10 +0100
Subject: [SciPy-user] Least squares for sparse matrices?
In-Reply-To: <87tza1vtdo.fsf@lanl.gov>
References: <87tza1vtdo.fsf@lanl.gov>
Message-ID: 

On Fri, 21 Nov 2008 11:14:27 -0700
Andy Fraser wrote:

> I am trying to understand the behavior of matrices that approximate
> Radon transforms and their inverses. So far, I've used commands like
> the following to experiment with various values of SVDcond:
>
> V_1,resids,rank,s = numpy.linalg.lstsq(M_Radon,U_0,rcond=SVDcond)
>
> Is there a sparse version of lstsq that will let me exploit the
> sparseness of M_Radon?
>
> It would be even better if there were something like a sparse SVD that
> would let me pre-calculate an SVD decomposition and then later use it
> to invert several U_0 vectors.
>
> --
> Andy Fraser ISR-2 (MS:B244)
> afraser at lanl.gov Los Alamos National Laboratory
> 505 665 9448 Los Alamos, NM 87545
> ______________________

See http://projects.scipy.org/scipy/scipy/ticket/330

Nils

From wnbell at gmail.com  Fri Nov 21 15:17:37 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Fri, 21 Nov 2008 15:17:37 -0500
Subject: [SciPy-user] Least squares for sparse matrices?
In-Reply-To: <87tza1vtdo.fsf@lanl.gov>
References: <87tza1vtdo.fsf@lanl.gov>
Message-ID: 

On Fri, Nov 21, 2008 at 1:14 PM, Andy Fraser wrote:
> I am trying to understand the behavior of matrices that approximate
> Radon transforms and their inverses. So far, I've used commands like
> the following to experiment with various values of SVDcond:
>
> V_1,resids,rank,s = numpy.linalg.lstsq(M_Radon,U_0,rcond=SVDcond)
>
> Is there a sparse version of lstsq that will let me exploit the
> sparseness of M_Radon?
>
> It would be even better if there were something like a sparse SVD that
> would let me pre-calculate an SVD decomposition and then later use it
> to invert several U_0 vectors.
>

I think the best we can currently offer you is an iterative solver
(e.g. cg()) on the normal equations. Something like (untested)

from scipy.sparse.linalg import LinearOperator, cg
from scipy.sparse import csr_matrix

M_Radon = csr_matrix(M_Radon)
M,N = M_Radon.shape

def matvec(x):
    # the normal-equations operator: x -> A^T A x
    return M_Radon.T * (M_Radon * x)

A = LinearOperator( (N,N), matvec)
# the normal equations read A^T A x = A^T b, so the right-hand side
# is M_Radon.T * U_0, not U_0 itself
x,info = cg(A, M_Radon.T * U_0)

If you can afford to do a sparse LU decomposition of M_Radon, then you
can either use that directly, or as a preconditioner to cg.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From afraser at lanl.gov  Fri Nov 21 16:01:57 2008
From: afraser at lanl.gov (Andy Fraser)
Date: Fri, 21 Nov 2008 14:01:57 -0700
Subject: [SciPy-user] Least squares for sparse matrices?
In-Reply-To: (Nathan Bell's message of "Fri\, 21 Nov 2008 15\:17\:37 -0500")
References: <87tza1vtdo.fsf@lanl.gov>
Message-ID: <87prkox06y.fsf@lanl.gov>

Thank you for the quick response. I spent some time thinking and
looking things up, and I think I understand your suggestions (with the
exception of preconditioning.) I can afford to do a sparse LU. But I
think that your suggestion does not provide a control equivalent to
the rcond argument to numpy.linalg.lstsq.

>>>>> "NB" == Nathan Bell writes:

NB> On Fri, Nov 21, 2008 at 1:14 PM, Andy Fraser wrote:
>> I am trying to understand the behavior of matrices that
>> approximate Radon transforms and their inverses. So far, I've
>> used commands like the following to experiment with various
>> values of SVDcond:
>>
>> V_1,resids,rank,s =
>> numpy.linalg.lstsq(M_Radon,U_0,rcond=SVDcond)
>>
>> Is there a sparse version of lstsq that will let me exploit the
>> sparseness of M_Radon?
>>
>> It would be even better if there were something like a
>> sparse SVD that would let me pre-calculate an SVD decomposition and
>> then later use it to invert several U_0 vectors.
>>

NB> I think the best we can currently offer you is an iterative solver
NB> (e.g. cg()) on the normal equations. Something like (untested)

NB> from scipy.sparse.linalg import LinearOperator, cg
NB> from scipy.sparse import csr_matrix

NB> M_Radon = csr_matrix(M_Radon)
NB> M,N = M_Radon.shape

NB> def matvec(x):
NB>     return M_Radon.T * (M_Radon * x)

NB> A = LinearOperator( (N,N), matvec)
NB> x,info = cg(A, M_Radon.T * U_0)

NB> If you can afford to do a sparse LU decomposition of M_Radon,
NB> then you can either use that directly, or as a preconditioner
NB> to cg.

--
Andy Fraser ISR-2 (MS:B244)
afraser at lanl.gov Los Alamos National Laboratory
505 665 9448 Los Alamos, NM 87545

From simpson at math.toronto.edu  Fri Nov 21 16:35:33 2008
From: simpson at math.toronto.edu (Gideon Simpson)
Date: Fri, 21 Nov 2008 16:35:33 -0500
Subject: [SciPy-user] MATLAB ASCII format
Message-ID: <6EE37430-1440-452A-8E66-1573A9A2D67C@math.toronto.edu>

Is there (or should there be) a routine for reading and writing numpy
arrays and matrices in MATLAB ASCII m-file format?

-gideon

From aisaac at american.edu  Fri Nov 21 17:29:05 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Fri, 21 Nov 2008 17:29:05 -0500
Subject: [SciPy-user] Question about scipy.optimize
In-Reply-To: <22d42dd20811210410rfa7728dja8ac388a9bcf0111@mail.gmail.com>
References: <22d42dd20811210410rfa7728dja8ac388a9bcf0111@mail.gmail.com>
Message-ID: <49273631.2030809@american.edu>

If I understand what you sent, you are trying to turn a constrained
minimization into an unconstrained minimization by introducing a
Lagrange multiplier. This does not work: the first-order conditions
associated with L characterize a saddle point, not a minimum.

However, in your case you can simply solve 2*x*x+y*y=3 for y (only the
bottom half of the ellipse is relevant) and substitute this into x*x*y
to get an unconstrained optimization problem in x. Naturally it will
not have a unique solution, since x occurs only as x*x. However, the
resulting function is convex near the minima, so you should then be
able to use scipy.optimize.

Cheers,
Alan Isaac

From bjracine at glosten.com  Fri Nov 21 21:37:08 2008
From: bjracine at glosten.com (Benjamin J. Racine)
Date: Fri, 21 Nov 2008 18:37:08 -0800
Subject: [SciPy-user] object-oriented help
Message-ID: <8C2B20C4348091499673D86BF10AB6763010F89B@clipper.glosten.local>

Hello all,

Please let me know if I should be posting just general OO stuff
somewhere else, but I figure that this might be relevant to a lot of
procedural programming types just jumping into python (and the example
is straight out of FEA). Anyways, I have the following code below. The
problem is, I need to be able to instantiate many nodes for a given
element as well as many elements for a model. Tackling this with
built-ins such as arrays, lists and dicts seems straightforward, but I
can't wrap my head around it in OO for some reason. Do I just need to
make my element and node inherit from a list and then use the
".append()" when I instantiate it?

Any help greatly appreciated.

Thanks,
Ben Racine

"""
untitled.py

Created by Ben Racine on 2008-11-21.
Copyright (c) 2008 __MyCompanyName__. All rights reserved.
""" import sys import os class model(object): """docstring for model""" def __init__(self): pass class element(object): """docstring for element""" def __init__(self): pass elementID = 'something' elementPressure = 'something else' nodeCount = 'something else altogether' class node(object): """docstring for node""" def __init__(self): pass nodeID = '1' x = 'xx' y = 'yy' z = 'zz' if __name__ == "__main__": test = model() From bjracine at glosten.com Fri Nov 21 21:41:40 2008 From: bjracine at glosten.com (Benjamin J. Racine) Date: Fri, 21 Nov 2008 18:41:40 -0800 Subject: [SciPy-user] object-oriented help Message-ID: <8C2B20C4348091499673D86BF10AB6763010F89C@clipper.glosten.local> Hello all, Please let me know if I should be posting just general OO stuff somewhere else, but I figure that this might be relevant to a lot of procedural programming types just jumping into python (and the example is straight out of FEA). Anyways, I have the following code below. The problem is, I need to be able to instantiate many nodes for a given element as well as many elements for a model. Tackling this with built-ins such as arrays, lists and dicts seems straightforward, but I can't wrap my head around it in OO for some reason. Do I just need to make my element and node inherit from a list and then use the ".append()" when I instantiate it? Any help greatly appreciated. Thanks, Ben Racine """ untitled.py Created by Ben Racine on 2008-11-21. Copyright (c) 2008 __MyCompanyName__. All rights reserved. """ import sys import os class model(object): """docstring for model""" def __init__(self): pass class element(object): """docstring for element""" def __init__(self): pass elementID = 'something' elementPressure = 'something else' nodeCount = 'something else altogether' class node(object): """docstring for node""" def __init__(self): pass nodeID = '1' x = 'xx' y = 'yy' z = 'zz' if __name__ == "__main__": test = model() From robert.kern at gmail.com Fri Nov 21 21:59:32 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Nov 2008 20:59:32 -0600 Subject: [SciPy-user] object-oriented help In-Reply-To: <8C2B20C4348091499673D86BF10AB6763010F89B@clipper.glosten.local> References: <8C2B20C4348091499673D86BF10AB6763010F89B@clipper.glosten.local> Message-ID: <3d375d730811211859m3c6072fbu47415ae330231bb8@mail.gmail.com> On Fri, Nov 21, 2008 at 20:37, Benjamin J. Racine wrote: > Hello all, > > Please let me know if I should be posting just general OO stuff somewhere else, but I figure that this might be relevant to a lot of procedural programming types just jumping into python (and the example is straight out of FEA). Anyways, I have the following code below. The problem is, I need to be able to instantiate many nodes for a given element as well as many elements for a model. Tackling this with built-ins such as arrays, lists and dicts seems straightforward, but I can't wrap my head around it in OO for some reason. Do I just need to make my element and node inherit from a list and then use the ".append()" when I instantiate it? If the builtin types work well, then go ahead and use them. OO doesn't solve every problem, but it can certainly create a few. In languages like C++, where the builtin types are so limited, being able to write classes helps a lot. But in Python, lists, dicts, and arrays are relatively awesome (and are already OO objects) so writing your own classes is sometimes a downgrade. In any case, you will want to make use of lists, dicts, and arrays if you need collections. 
You almost certainly don't want nested class declarations. Try this on
for size:

class Model(object):
    """ An FEA model.
    """
    def __init__(self, elements):
        self.elements = elements

class Element(object):
    """ A single element of an FEA model.
    """
    def __init__(self, element_id, pressure, nodes):
        self.element_id = element_id
        self.pressure = pressure
        self.nodes = nodes

class Node(object):
    """ A node in an FEA model.
    """
    def __init__(self, node_id, xyz):
        self.node_id = node_id
        self.xyz = xyz

model = Model(elements=[
    Element(1, 0.0, nodes=[
        Node(1, (1,2,3)),
        Node(2, (3,4,5)),
        Node(3, (6,7,8)),
    ]),
])

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From bjracine at glosten.com  Fri Nov 21 22:42:06 2008
From: bjracine at glosten.com (Benjamin J. Racine)
Date: Fri, 21 Nov 2008 19:42:06 -0800
Subject: [SciPy-user] object-oriented help
In-Reply-To: <3d375d730811211859m3c6072fbu47415ae330231bb8@mail.gmail.com>
References: <8C2B20C4348091499673D86BF10AB6763010F89B@clipper.glosten.local>,
	<3d375d730811211859m3c6072fbu47415ae330231bb8@mail.gmail.com>
Message-ID: <8C2B20C4348091499673D86BF10AB6763010F89D@clipper.glosten.local>

Very nice. Many thanks Robert!

I need to think about how best to handle the 'model =' line for the
data that I actually have, but this sure puts me down the right path.

Ben R.

________________________________________
From: scipy-user-bounces at scipy.org [scipy-user-bounces at scipy.org]
On Behalf Of Robert Kern [robert.kern at gmail.com]
Sent: Friday, November 21, 2008 6:59 PM
To: SciPy Users List
Subject: Re: [SciPy-user] object-oriented help

On Fri, Nov 21, 2008 at 20:37, Benjamin J. Racine wrote:
> Hello all,
>
> Please let me know if I should be posting just general OO stuff
> somewhere else, but I figure that this might be relevant to a lot of
> procedural programming types just jumping into python (and the example
> is straight out of FEA). Anyways, I have the following code below. The
> problem is, I need to be able to instantiate many nodes for a given
> element as well as many elements for a model. Tackling this with
> built-ins such as arrays, lists and dicts seems straightforward, but I
> can't wrap my head around it in OO for some reason. Do I just need to
> make my element and node inherit from a list and then use the
> ".append()" when I instantiate it?

If the builtin types work well, then go ahead and use them. OO doesn't
solve every problem, but it can certainly create a few. In languages
like C++, where the builtin types are so limited, being able to write
classes helps a lot. But in Python, lists, dicts, and arrays are
relatively awesome (and are already OO objects) so writing your own
classes is sometimes a downgrade.

In any case, you will want to make use of lists, dicts, and arrays if
you need collections.

You almost certainly don't want nested class declarations. Try this on
for size:

class Model(object):
    """ An FEA model.
    """
    def __init__(self, elements):
        self.elements = elements

class Element(object):
    """ A single element of an FEA model.
    """
    def __init__(self, element_id, pressure, nodes):
        self.element_id = element_id
        self.pressure = pressure
        self.nodes = nodes

class Node(object):
    """ A node in an FEA model.
""" def __init__(self, node_id, xyz): self.node_id = node_id self.xyz = xyz model = Model(elements=[ Element(1, 0.0, nodes=[ Node(1, (1,2,3)), Node(2, (3,4,5)), Node(3, (6,7,8)), ]), ]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From dwf at cs.toronto.edu Sun Nov 23 13:50:00 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 23 Nov 2008 13:50:00 -0500 Subject: [SciPy-user] Gram-Schmidt orthogonalization Message-ID: Hi, Is there a built-in, somewhere in NumPy or SciPy, that implements Gram- Schmidt orthogonalization? Thanks, David From wnbell at gmail.com Sun Nov 23 14:01:25 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 23 Nov 2008 14:01:25 -0500 Subject: [SciPy-user] Gram-Schmidt orthogonalization In-Reply-To: References: Message-ID: On Sun, Nov 23, 2008 at 1:50 PM, David Warde-Farley wrote: > > Is there a built-in, somewhere in NumPy or SciPy, that implements Gram- > Schmidt orthogonalization? > Would you be content with a QR decomposition (scipy.linalg.qr) ? http://en.wikipedia.org/wiki/QR_decomposition -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From dwf at cs.toronto.edu Sun Nov 23 14:14:05 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 23 Nov 2008 14:14:05 -0500 Subject: [SciPy-user] Gram-Schmidt orthogonalization In-Reply-To: References: Message-ID: On 23-Nov-08, at 2:01 PM, Nathan Bell wrote: > On Sun, Nov 23, 2008 at 1:50 PM, David Warde-Farley > wrote: >> >> Is there a built-in, somewhere in NumPy or SciPy, that implements >> Gram- >> Schmidt orthogonalization? >> > > Would you be content with a QR decomposition (scipy.linalg.qr) ? > http://en.wikipedia.org/wiki/QR_decomposition D'oh. Apparently that's exactly what I need. I need to brush up on my matrix factorization jargon. Is there any particular advantage to using scipy.linalg.qr over numpy.linalg.qr? Is the former faster by virtue of Fortran? David From wnbell at gmail.com Sun Nov 23 14:20:04 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 23 Nov 2008 14:20:04 -0500 Subject: [SciPy-user] Gram-Schmidt orthogonalization In-Reply-To: References: Message-ID: On Sun, Nov 23, 2008 at 2:14 PM, David Warde-Farley wrote: > > Is there any particular advantage to using scipy.linalg.qr over > numpy.linalg.qr? Is the former faster by virtue of Fortran? > I can't say offhand which would be better. In either case you'll want the "economy" QR, the # of vectors you're orthogonalizing is smaller than the length of the vector. This way you'll get a tall, skinny Q as opposed to a square Q. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From anand.prabhakar.patil at gmail.com Sun Nov 23 16:54:40 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Sun, 23 Nov 2008 21:54:40 +0000 Subject: [SciPy-user] Log-error function Message-ID: <2bc7a5a50811231354i1864040w4e2b6894cde59df0@mail.gmail.com> Hi all, I'm looking for a C or Fortran routine that computes log(erf(x)) without ever computing erf(x) directly, for use in PyMC. Does anyone have one laying around? It looks like GSL has such a thing, but it's GPL and we're using the MIT license. 
Thanks,
Anand

From roger.herikstad at gmail.com  Mon Nov 24 00:57:10 2008
From: roger.herikstad at gmail.com (Roger Herikstad)
Date: Mon, 24 Nov 2008 13:57:10 +0800
Subject: [SciPy-user] Problems with numpy and Python64 on Mac OS 1.5.5
Message-ID: 

Hi,

After successfully installing a 64 bit universal build of python
(trunk:67176M) and building and installing numpy (1.3.0.dev5972),
running numpy.test() crashes my system. Has anyone else had the same
problem? Here's the python output and the crash report:

In [2]: numpy.test()
Running unit tests for numpy
NumPy version 1.3.0.dev5972
NumPy is installed in /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy
Python version 2.7a0 (trunk:67176M, Nov 10 2008, 11:26:48) [GCC 4.0.1 (Apple Inc. build 5465)]
nose version 0.10.4
...............E....EE.....................Python64-64(1431) malloc: *** error for object 0x100e1e400: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
Python64-64(1431) malloc: *** error for object 0x100e1e400: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
E......EEE......EE.............EPython64-64(1431) malloc: *** error for object 0x101c4c1e0: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
[the previous two lines repeat many more times]
F.F............................Segmentation fault

Process:         Python64-64 [1478]
Path:            /Library/Frameworks/Python64.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python64-64
Identifier:      Python64-64
Version:         ??? (???)
PyObject_Call + 98 59 Python64 0x00000001000be73d PyEval_EvalFrameEx + 16493 60 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 61 Python64 0x000000010003db0b function_call + 171 62 Python64 0x000000010000cd72 PyObject_Call + 98 63 Python64 0x00000001000bcf59 PyEval_EvalFrameEx + 10377 64 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 65 Python64 0x000000010003db0b function_call + 171 66 Python64 0x000000010000cd72 PyObject_Call + 98 67 Python64 0x000000010001f9c1 instancemethod_call + 401 68 Python64 0x000000010000cd72 PyObject_Call + 98 69 Python64 0x000000010007486a slot_tp_call + 74 70 Python64 0x000000010000cd72 PyObject_Call + 98 71 Python64 0x00000001000be73d PyEval_EvalFrameEx + 16493 72 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 73 Python64 0x000000010003db0b function_call + 171 74 Python64 0x000000010000cd72 PyObject_Call + 98 75 Python64 0x00000001000bcf59 PyEval_EvalFrameEx + 10377 76 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 77 Python64 0x000000010003db0b function_call + 171 78 Python64 0x000000010000cd72 PyObject_Call + 98 79 Python64 0x000000010001f9c1 instancemethod_call + 401 80 Python64 0x000000010000cd72 PyObject_Call + 98 81 Python64 0x000000010007486a slot_tp_call + 74 82 Python64 0x000000010000cd72 PyObject_Call + 98 83 Python64 0x00000001000be73d PyEval_EvalFrameEx + 16493 84 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 85 Python64 0x000000010003db0b function_call + 171 86 Python64 0x000000010000cd72 PyObject_Call + 98 87 Python64 0x00000001000bcf59 PyEval_EvalFrameEx + 10377 88 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 89 Python64 0x000000010003db0b function_call + 171 90 Python64 0x000000010000cd72 PyObject_Call + 98 91 Python64 0x000000010001f9c1 instancemethod_call + 401 92 Python64 0x000000010000cd72 PyObject_Call + 98 93 Python64 0x000000010007486a slot_tp_call + 74 94 Python64 0x000000010000cd72 PyObject_Call + 98 95 Python64 0x00000001000be73d PyEval_EvalFrameEx + 16493 96 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 97 Python64 0x000000010003db0b function_call + 171 98 Python64 0x000000010000cd72 PyObject_Call + 98 99 Python64 0x00000001000bcf59 PyEval_EvalFrameEx + 10377 100 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 101 Python64 0x000000010003db0b function_call + 171 102 Python64 0x000000010000cd72 PyObject_Call + 98 103 Python64 0x000000010001f9c1 instancemethod_call + 401 104 Python64 0x000000010000cd72 PyObject_Call + 98 105 Python64 0x000000010007486a slot_tp_call + 74 106 Python64 0x000000010000cd72 PyObject_Call + 98 107 Python64 0x00000001000be73d PyEval_EvalFrameEx + 16493 108 Python64 0x00000001000c0cec PyEval_EvalFrameEx + 26140 109 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 110 Python64 0x00000001000bfecc PyEval_EvalFrameEx + 22524 111 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 112 Python64 0x000000010003db0b function_call + 171 113 Python64 0x000000010000cd72 PyObject_Call + 98 114 Python64 0x000000010001f9c1 instancemethod_call + 401 115 Python64 0x000000010000cd72 PyObject_Call + 98 116 Python64 0x00000001000be73d PyEval_EvalFrameEx + 16493 117 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 118 Python64 0x000000010003db0b function_call + 171 119 Python64 0x000000010000cd72 PyObject_Call + 98 120 Python64 0x000000010001f9c1 instancemethod_call + 401 121 Python64 0x000000010000cd72 PyObject_Call + 98 122 Python64 0x0000000100074448 slot_tp_init + 88 123 Python64 0x0000000100072b5c type_call + 188 124 Python64 0x000000010000cd72 PyObject_Call + 98 125 
Python64 0x00000001000be73d PyEval_EvalFrameEx + 16493 126 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 127 Python64 0x00000001000bfecc PyEval_EvalFrameEx + 22524 128 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 129 Python64 0x00000001000c086a PyEval_EvalFrameEx + 24986 130 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 131 Python64 0x00000001000bfecc PyEval_EvalFrameEx + 22524 132 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 133 Python64 0x00000001000bfecc PyEval_EvalFrameEx + 22524 134 Python64 0x00000001000c0cec PyEval_EvalFrameEx + 26140 135 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 136 Python64 0x00000001000bfecc PyEval_EvalFrameEx + 22524 137 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 138 Python64 0x00000001000bfecc PyEval_EvalFrameEx + 22524 139 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 140 Python64 0x00000001000bfecc PyEval_EvalFrameEx + 22524 141 Python64 0x00000001000c1cfb PyEval_EvalCodeEx + 1483 142 Python64 0x00000001000c20f6 PyEval_EvalCode + 54 143 Python64 0x00000001000e618e PyRun_FileExFlags + 174 144 Python64 0x00000001000e6f51 PyRun_SimpleFileExFlags + 801 145 Python64 0x00000001000f6d64 Py_Main + 2740 146 Python64-64 0x0000000100000f54 0x100000000 + 3924 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x000000000000006c rbx: 0x0000000101c3b8d0 rcx: 0x0000000000000004 rdx: 0x0000000000000190 rdi: 0x0000000100228000 rsi: 0x00000001002280d0 rbp: 0x00007fff5fbf52d0 rsp: 0x00007fff5fbf5290 r8: 0x000000007fffffff r9: 0x0000000000000000 r10: 0x0000000000000001 r11: 0x0000000100e28428 r12: 0x0000000100228000 r13: 0x0000000000000005 r14: 0x0000000000000005 r15: 0x0000000000000003 rip: 0x00007fff837604d7 rfl: 0x0000000000010206 cr2: 0x0000000000000190 Binary Images: 0x100000000 - 0x100000ff7 +Python64-64 ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python64-64 0x100003000 - 0x100156fe3 +Python64 ??? (???) <99b06332c391a0f5ae8bae5eb10589ef> /Library/Frameworks/Python64.framework/Versions/2.7/Python64 0x10026a000 - 0x10026bfff +cStringIO.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/cStringIO.so 0x100270000 - 0x100273ff7 +strop.so ??? (???) <0081a0a8376397566c1f847049caadd1> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/strop.so 0x100278000 - 0x100279ff7 +time.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/time.so 0x1002c4000 - 0x1002c5fff +termios.so ??? (???) <98c09d8a1e37817bd87d5ad6e71437db> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/termios.so 0x1002ca000 - 0x1002cbff7 +_hashlib.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_hashlib.so 0x1002cf000 - 0x1002d1fff +_sha256.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_sha256.so 0x1002d4000 - 0x1002d7ff7 +_sha512.so ??? (???) <24033938710c3563bfb35d04ff17f039> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_sha512.so 0x1002da000 - 0x1002dcff7 +readline.so ??? (???) <55b90901600834f97c9fbc7a258a6452> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/readline.so 0x1002e5000 - 0x1002e6ff7 +resource.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/resource.so 0x1002ee000 - 0x1002f2fff +operator.so ??? (???) 
<7a61eb0ffb187af484f88c9e87a84c49> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/operator.so 0x1002fa000 - 0x1002faff7 +_bisect.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_bisect.so 0x100457000 - 0x100463fb1 +libgcc_s.1.dylib ??? (???) /usr/local/lib/libgcc_s.1.dylib 0x1004ac000 - 0x1004b0fff +_collections.so ??? (???) <2124ddcb797ea8030d5d6b52d16b6c38> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_collections.so 0x1004b6000 - 0x1004b7ff7 +_heapq.so ??? (???) <401ecf6d5ee927d157bf9dcde04c5328> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_heapq.so 0x1004bb000 - 0x1004bcff7 +_functools.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_functools.so 0x1004ff000 - 0x100505ff7 +itertools.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/itertools.so 0x10050f000 - 0x100511fff +binascii.so ??? (???) <04a5c86232969859ff3a3ba578d75954> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/binascii.so 0x100555000 - 0x100558fff +math.so ??? (???) <7c50fa952336956ba646addf6a57860c> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/math.so 0x10055e000 - 0x10055fff7 +_random.so ??? (???) <1cd450b07b79e6f7a432b4c305c846d2> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_random.so 0x100562000 - 0x100563fff +fcntl.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/fcntl.so 0x100566000 - 0x100568ff7 +_locale.so ??? (???) <898e4d0efd88f9803700dc4c70674638> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_locale.so 0x1005ec000 - 0x1005effff +select.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/select.so 0x1005f4000 - 0x1005f9ff7 +_struct.so ??? (???) <50aab31a15428f8c27361bb105b22486> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_struct.so 0x100600000 - 0x10060dfff +_curses.so ??? (???) <80a5f53a8ad93a01dc7c58741cb6f284> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_curses.so 0x100618000 - 0x10064efe7 libncurses.5.4.dylib ??? (???) /usr/lib/libncurses.5.4.dylib 0x10076d000 - 0x100798fef libssl.0.9.7.dylib ??? (???) <3543402bd8c92a4b9fa846c940ce4b6b> /usr/lib/libssl.0.9.7.dylib 0x1007a7000 - 0x1007d1fd9 +libreadline.5.2.dylib ??? (???) /usr/local/lib/libreadline.5.2.dylib 0x1007e5000 - 0x1007f1ff3 +libintl.8.dylib ??? (???) /usr/local/lib/libintl.8.dylib 0x101100000 - 0x101117ffb +_ctypes.so ??? (???) <2b250c749c6c3fb1fbacff96600f34f5> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_ctypes.so 0x101240000 - 0x101338fef libiconv.2.dylib ??? (???) <2b42104e7aa2da6e64f979e585af02e9> /usr/lib/libiconv.2.dylib 0x10133f000 - 0x10134fff7 +cPickle.so ??? (???) <3a54f1ff007030f6ac9042b9c7414d6d> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/cPickle.so 0x101357000 - 0x101359fff +_lsprof.so ??? (???) <850acf2138c2335c59e67a5d8a1cea3e> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_lsprof.so 0x10139d000 - 0x1013a4ff7 +_socket.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_socket.so 0x1013ae000 - 0x1013b2fff +_ssl.so ??? (???) 
<83281f23577f1ef5505b275ecaf6885c> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_ssl.so 0x1013f7000 - 0x1013faff7 +_dotblas.so ??? (???) <573a6d3a70993971f6d6ba3b3041cf2b> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/_dotblas.so 0x101451000 - 0x101453fff +_compiled_base.so ??? (???) <0edca42f1938d93de0d5a881a07cc045> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/_compiled_base.so 0x10149f000 - 0x101519fff +multiarray.so ??? (???) <21f57e288c348c05e94e60787d05e14f> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so 0x101598000 - 0x1015d4ff7 +umath.so ??? (???) <37e25b358bb7cd8d69d9492f59fd4bce> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/umath.so 0x1015f6000 - 0x1015fcff7 +lapack_lite.so ??? (???) <089b9ac702288962a3b8cbfdc69c55ee> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/linalg/lapack_lite.so 0x101700000 - 0x101714fff +_sort.so ??? (???) <8fdbc5c1ee32400a47a01c60ce06c8bb> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/_sort.so 0x10175b000 - 0x10177cfff +scalarmath.so ??? (???) <37c8df02ca4c1e32de1eb658573a9083> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/scalarmath.so 0x101829000 - 0x101830fff +fftpack_lite.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/fft/fftpack_lite.so 0x101834000 - 0x101868fff +mtrand.so ??? (???) <54df4764e45e853939907ba1864d4222> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/site-packages/numpy/random/mtrand.so 0x10190a000 - 0x101917ff7 +parser.so ??? (???) <9c53d967596af699135efde872ca8278> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/parser.so 0x101960000 - 0x101963ff7 +mmap.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/mmap.so 0x101968000 - 0x10196afff +zlib.so ??? (???) <8eebc9c720a03f6bf1aabc79ad6e705e> /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/zlib.so 0x101a7c000 - 0x101a7ffef +_hotshot.so ??? (???) /Library/Frameworks/Python64.framework/Versions/2.7/lib/python2.7/lib-dynload/_hotshot.so 0x7fff5fc00000 - 0x7fff5fc2e593 dyld 96.2 (???) /usr/lib/dyld 0x7fff80003000 - 0x7fff80014ffd libz.1.dylib ??? (???) <2022cc8950afdf485ba1df76364ba725> /usr/lib/libz.1.dylib 0x7fff816c6000 - 0x7fff8178afe2 com.apple.vImage 3.0 (3.0) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage 0x7fff82091000 - 0x7fff82105fe7 libstdc++.6.dylib ??? (???) <379a6a2dc6e21ba77310b3d2d9ea30ac> /usr/lib/libstdc++.6.dylib 0x7fff821a7000 - 0x7fff822defff com.apple.CoreFoundation 6.5.4 (476.15) <4b970007410b71eca926819f3959548f> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8232e000 - 0x7fff8233aff1 libgcc_s.1.dylib ??? (???) <42e4fd8079ba44258ea9afc27d2f48f3> /usr/lib/libgcc_s.1.dylib 0x7fff82343000 - 0x7fff82343ffd com.apple.Accelerate 1.4.2 (Accelerate 1.4.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate 0x7fff825c5000 - 0x7fff826a6fff libcrypto.0.9.7.dylib ??? (???) <66f1f8773bd9fdfdcfd09e2b4b010636> /usr/lib/libcrypto.0.9.7.dylib 0x7fff826b9000 - 0x7fff82e76fef libBLAS.dylib ??? (???) 
/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 0x7fff82fec000 - 0x7fff830e0fff libobjc.A.dylib ??? (???) <118dc1ae05e685ad64290352fc94f1f0> /usr/lib/libobjc.A.dylib 0x7fff831ad000 - 0x7fff831dfff7 libauto.dylib ??? (???) /usr/lib/libauto.dylib 0x7fff832fe000 - 0x7fff832feffd com.apple.Accelerate.vecLib 3.4.2 (vecLib 3.4.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib 0x7fff83308000 - 0x7fff8330cfff libmathCommon.A.dylib ??? (???) /usr/lib/system/libmathCommon.A.dylib 0x7fff83379000 - 0x7fff83379ffd com.apple.vecLib 3.4.2 (vecLib 3.4.2) /System/Library/Frameworks/vecLib.framework/Versions/A/vecLib 0x7fff834f5000 - 0x7fff8350ffff libvDSP.dylib ??? (???) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib 0x7fff83757000 - 0x7fff838dbffb libSystem.B.dylib ??? (???) <61a1506a5f8d9ffa37298999e05f519c> /usr/lib/libSystem.B.dylib 0x7fff83907000 - 0x7fff8397dfef libvMisc.dylib ??? (???) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib 0x7fff84035000 - 0x7fff841a3fff libicucore.A.dylib ??? (???) <25557e76cafa3f8a97ca7bffe42e2d97> /usr/lib/libicucore.A.dylib 0x7fff84315000 - 0x7fff846cdfff libLAPACK.dylib ??? (???) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib 0x7fffffe00000 - 0x7fffffe01780 libSystem.B.dylib ??? (???) /usr/lib/libSystem.B.dylib 0xfffffffffffec000 - 0xfffffffffffeffff libobjc.A.dylib ??? (???) /usr/lib/libobjc.A.dylib From david at ar.media.kyoto-u.ac.jp Mon Nov 24 02:11:03 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 24 Nov 2008 16:11:03 +0900 Subject: [SciPy-user] Problems with numpy and Python64 on Mac OS 1.5.5 In-Reply-To: References: Message-ID: <492A5387.4060805@ar.media.kyoto-u.ac.jp> Roger Herikstad wrote: > Hi, > After successfully installing a 4 bit universal of python > (trunk:67176M) and building and installing numpy (1.3.0.dev5972), > running the numpy.test() crashes my system. Has anyone else had the > same problem? Here's the python output and the crash report: > You are using a python which is not even alpha (python 2.7). Python2.6 is not officially supported yet - please use python 2.5 or below instead. David From nwagner at iam.uni-stuttgart.de Mon Nov 24 03:10:20 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 24 Nov 2008 09:10:20 +0100 Subject: [SciPy-user] Exec format error Message-ID: Hi all, Sorry if my request is off-topic but what is the reason for the following message if I try to run python Exec format error. Binary file not executable. Nils From roger.herikstad at gmail.com Mon Nov 24 03:21:04 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Mon, 24 Nov 2008 16:21:04 +0800 Subject: [SciPy-user] Problems with numpy and Python64 on Mac OS 1.5.5 In-Reply-To: <492A5387.4060805@ar.media.kyoto-u.ac.jp> References: <492A5387.4060805@ar.media.kyoto-u.ac.jp> Message-ID: Hi, Thanks for the reply. I realize it might be stretching it using python2.7. I tried the same thing with the current release of Python2.6 with the same result. My problems is that I need 64 bit support for some large data sets I need to analyze, and I haven't found a way to build a 64 bit Python2.5. 
Let me perhaps ask a different question, then: has anyone successfully built a working 64-bit version of numpy, with any version of python? ~ Roger

On Mon, Nov 24, 2008 at 3:11 PM, David Cournapeau wrote:
> Roger Herikstad wrote:
>> Hi,
>> After successfully installing a 64-bit universal build of python
>> (trunk:67176M) and building and installing numpy (1.3.0.dev5972),
>> running the numpy.test() crashes my system. Has anyone else had the
>> same problem? Here's the python output and the crash report:
>
> You are using a Python which is not even at the alpha stage (Python 2.7).
> Python 2.6 is not officially supported yet - please use Python 2.5 or below instead.
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From michael.abshoff at googlemail.com Mon Nov 24 03:22:19 2008
From: michael.abshoff at googlemail.com (Michael Abshoff)
Date: Mon, 24 Nov 2008 00:22:19 -0800
Subject: [SciPy-user] Exec format error
In-Reply-To: References: Message-ID: <492A643B.7040208@gmail.com>

Nils Wagner wrote:
> Hi all,
>
> Sorry if my request is off-topic, but what is the reason for
> the following message when I try to run python:
>
> Exec format error. Binary file not executable.

What platform are you on? From a little googling it seems to be potentially related either to missing executable bits, an exceeded disk quota or ulimit problems, but without more details this is a shot in the dark.

> Nils

Cheers, Michael

From nwagner at iam.uni-stuttgart.de Mon Nov 24 03:24:31 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 24 Nov 2008 09:24:31 +0100
Subject: [SciPy-user] Exec format error
In-Reply-To: <492A643B.7040208@gmail.com>
References: <492A643B.7040208@gmail.com>
Message-ID:

On Mon, 24 Nov 2008 00:22:19 -0800 Michael Abshoff wrote:
> Nils Wagner wrote:
>> Hi all,
>>
>> Sorry if my request is off-topic, but what is the reason for
>> the following message when I try to run python:
>>
>> Exec format error. Binary file not executable.
>
> What platform are you on?

CentOS release 4.6 (Final) x86_64 x86_64 GNU/Linux

> From a little googling it seems to be potentially related either to
> missing executable bits, an exceeded disk quota or ulimit problems,
> but without more details this is a shot in the dark.
>
>> Nils
>
> Cheers,
>
> Michael

From david at ar.media.kyoto-u.ac.jp Mon Nov 24 03:15:31 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 24 Nov 2008 17:15:31 +0900
Subject: [SciPy-user] Problems with numpy and Python64 on Mac OS X 10.5.5
In-Reply-To: References: <492A5387.4060805@ar.media.kyoto-u.ac.jp>
Message-ID: <492A62A3.20401@ar.media.kyoto-u.ac.jp>

Roger Herikstad wrote:
> Hi,
> Thanks for the reply. I realize it might be stretching it using
> python 2.7. I tried the same thing with the current release of
> Python 2.6 with the same result. My problem is that I need 64-bit
> support for some large data sets I need to analyze, and I haven't
> found a way to build a 64-bit Python 2.5. Let me perhaps ask a
> different question, then: has anyone successfully built a working
> 64-bit version of numpy, with any version of python?

I remember having seen some problems for 64 bits on Mac OS X. In the meantime, if that's an option, you may want to try on Linux, where 64-bit numpy works fine.
David

From michael.abshoff at googlemail.com Mon Nov 24 03:31:55 2008
From: michael.abshoff at googlemail.com (Michael Abshoff)
Date: Mon, 24 Nov 2008 00:31:55 -0800
Subject: [SciPy-user] Problems with numpy and Python64 on Mac OS X 10.5.5
In-Reply-To: References: <492A5387.4060805@ar.media.kyoto-u.ac.jp>
Message-ID: <492A667B.5000202@gmail.com>

Roger Herikstad wrote:
> Hi,
> Thanks for the reply. I realize it might be stretching it using
> python 2.7. I tried the same thing with the current release of
> Python 2.6 with the same result. My problem is that I need 64-bit
> support for some large data sets I need to analyze, and I haven't
> found a way to build a 64-bit Python 2.5. Let me perhaps ask a
> different question, then: has anyone successfully built a working
> 64-bit version of numpy, with any version of python?

Yes: by setting OPT at compile time to inject "-m64" I got a 64-bit Python 2.5.2 on OSX 10.4 and 10.5 with a recent enough XCode. To get a fully working SciPy I had to use a fake gfortran 4.2 wrapper script that injected "-m64" into all command lines, since I did not see a more elegant way to accomplish this. I have posted some build instructions on the numpy as well as scipy lists a couple of months back, so googling might turn it up. If not I can dig into my pile of notes and repost.

> ~ Roger

Cheers, Michael

From schut at sarvision.nl Mon Nov 24 10:03:43 2008
From: schut at sarvision.nl (Vincent Schut)
Date: Mon, 24 Nov 2008 16:03:43 +0100
Subject: [SciPy-user] ndimage zero-ignorant filters, or other ways to fill holes
Message-ID:

Guys, I feel a bit hesitant to ask this because I know: as a fairly intensive user of scipy I should try to give something back to the community and code it up myself. However, I don't feel confident enough with ndimage's C code, and feel I'd spend much too much time and energy on it. That being said, I'm dying (well, almost) for a zero-ignorant version of ndimage.uniform_filter. In other words: an n-d (2-d would be enough for me) windowing average filter that gives me back the mean of the non-zero pixels in the window. All current ndimage filters just incorporate the zeros, NaNs propagate through, and ndimage doesn't work with masks or masked arrays. I often find myself looking for a simple way to fill gaps (patches of zeros) in an image with values resembling the average surroundings of the gap, or smoothing an image that has zero-filled gaps of data that should not be taken into account while smoothing. I could solve such a thing very easily by iterating a version of uniform_filter that would not incorporate the zero cells in its calculation. Hmm, I feel it's hard to explain. I hope someone understands what I mean... If someone would be willing to contribute such a feature to ndimage, please know that you'd at least make me very happy :) If someone comes up with another brilliant idea to fill the zero-gaps in my images with values that are in a reasonable range of the gap's surroundings, I'd also be very grateful. Keep in mind that the images typically are pretty large, though. 7000x7000 pixels is no exception. Regards, Vincent Schut.
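A sketch of the zero-ignorant mean Vincent asks for -- untested, and not from the thread itself: run a uniform filter over the data and over a 0/1 mask separately, then divide, so each window is renormalized by the fraction of non-zero pixels it contains.

    import numpy as np
    from scipy import ndimage

    def nonzero_uniform_filter(img, size=5):
        # windowed mean of the non-zero pixels only
        data = ndimage.uniform_filter(img.astype(np.float64), size)
        frac = ndimage.uniform_filter((img != 0).astype(np.float64), size)
        out = np.zeros_like(data)
        ok = frac > 0                  # windows containing at least one non-zero pixel
        out[ok] = data[ok] / frac[ok]  # undo the dilution caused by the zeros
        return out

Iterating this and copying the result back into the zero gaps fills holes with values close to their surroundings, and it works in n dimensions since uniform_filter does.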
From William.T.Bridgman at nasa.gov Mon Nov 24 11:46:00 2008
From: William.T.Bridgman at nasa.gov (Bridgman, William T.)
Date: Mon, 24 Nov 2008 11:46:00 -0500
Subject: [SciPy-user] Questions about Line Integral Convolution tutorial
Message-ID:

Hello, I found the Line Integral Convolution (LIC) example very timely for a project I'm working on.

http://www.scipy.org/Cookbook/LineIntegralConvolution

Once Cython & Pyrex were installed, the demo ran out of the box. Excellent. However, now I'm trying to apply this to a different dataset and the C component is crashing with index errors. I suspect these are being caused by the fact that my dataset is not a square array.

An examination of the .pyx file turned up a couple of locations where the array indices appear to be transposed between x,y and i,j. I'm not sure if this is a bug or not. The high symmetry of the demo vector field would probably not reveal it if it were a bug.

So my questions for the author of the code or the list are:

1) Is there a paper or other reference for the algorithm implemented here? My searches have revealed several types of LIC implementations. It would be nice if this were in code comments or at least on the tutorial page.

2) Is the algorithm, the demo code, or LIC in general, restricted to square arrays?

3) Is there a pure-python or numpy-only (no Cython or Pyrex requirement) implementation?

Thanks, Tom
-- Dr. William T."Tom" Bridgman Scientific Visualization Studio Global Science & Technology, Inc. NASA/Goddard Space Flight Center Email: William.T.Bridgman at nasa.gov Code 610.3 Phone: 301-286-1346 Greenbelt, MD 20771 FAX: 301-286-1634 http://svs.gsfc.nasa.gov/

From david.huard at gmail.com Mon Nov 24 14:05:54 2008
From: david.huard at gmail.com (David Huard)
Date: Mon, 24 Nov 2008 14:05:54 -0500
Subject: [SciPy-user] Questions about Line Integral Convolution tutorial
In-Reply-To: References: Message-ID: <91cf711d0811241105t4199aa3ap8f94e6786c91d302@mail.gmail.com>

Hi William, I am not the author, but I may be able to answer some of your questions.

On Mon, Nov 24, 2008 at 11:46 AM, Bridgman, William T. < William.T.Bridgman at nasa.gov> wrote:
> Hello,
>
> I found the Line Integral Convolution (LIC) example very timely for a
> project I'm working on.
>
> http://www.scipy.org/Cookbook/LineIntegralConvolution
>
> Once Cython & Pyrex were installed, the demo ran out of the box.
> Excellent.
>
> However, now I'm trying to apply this to a different dataset and the C
> component is crashing with index errors. I suspect these are being
> caused by the fact that my dataset is not a square array.
>
> An examination of the .pyx file turned up a couple of locations where the
> array indices appear to be transposed between x,y and i,j. I'm not
> sure if this is a bug or not. The high symmetry of the demo vector
> field would probably not reveal it if it were a bug.
>
> So my questions for the author of the code or the list are:
>
> 1) Is there a paper or other reference for the algorithm implemented
> here? My searches have revealed several types of LIC
> implementations. It would be nice if this were in code comments or at
> least on the tutorial page.

I am not sure which paper Anne has used, but I have found

Cabral, Brian and Leith Leedom. Imaging vector fields using line integral convolution. SIGGRAPH '93: Proceedings of the 20th annual conference on Computer graphics and interactive techniques, pages 263-270, 1993.

a very useful reference.

> 2) Is the algorithm, the demo code, or LIC in general, restricted to
> square arrays?

Not that I know of. I have used it on rectangular arrays and it's working, although you may need to transpose the texture array to get it to work.

> 3) Is there a pure-python or numpy-only (no Cython or Pyrex
> requirement) implementation?

Not that I know of.
However, with Anne's consent, I have created a scikit named vectorplot using the code she posted in the cookbook. This can be installed with a simple

    python setup.py install

It's still not pure python, but you won't need Cython to compile the extension. I've added docstrings, a bit of documentation and utility functions to generate kernels that I use to "animate" vector fields.

You can check out the code with subversion at http://svn.scipy.org/svn/scikits/trunk/vectorplot, but bear in mind that the user interface might change.

HTH,

David

> Thanks,
> Tom
> -- Dr. William T."Tom" Bridgman Scientific Visualization Studio
> Global Science & Technology, Inc. NASA/Goddard Space Flight Center
> Email: William.T.Bridgman at nasa.gov Code 610.3
> Phone: 301-286-1346 Greenbelt, MD 20771
> FAX: 301-286-1634 http://svs.gsfc.nasa.gov/
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From josegomez at gmx.net Mon Nov 24 14:16:38 2008
From: josegomez at gmx.net (Jose Luis Gomez Dans)
Date: Mon, 24 Nov 2008 20:16:38 +0100
Subject: [SciPy-user] Debugging f2py-created modules
Message-ID: <20081124191638.51000@gmx.net>

Hi, I have an f2py-created module that wraps some fortran into a Python object. When I run one of the methods, I get the following error:

    failed to initialize intent(inout) array -- expected elsize=4 but got 8

My method takes some 34 or so parameters, and there are several defined as inout, so I don't really know where to start looking. Is there some option that tells me what bit of my calling sequence is causing the problem? Say my function call is

    (a, b, c, d) = my_mod.my_method ( x, y, z, inout_a, inout_b, inout_c)

how do I know that the error is coming from inout_a, or inout_b? Or indeed somewhere else? In general, how do you debug an f2py module? Thanks J
-- Psssst! Heard of the new GMX MultiMessenger yet? It can talk to all of them: http://www.gmx.net/de/go/multimessenger

From William.T.Bridgman at nasa.gov Mon Nov 24 14:41:50 2008
From: William.T.Bridgman at nasa.gov (Bridgman, William T.)
Date: Mon, 24 Nov 2008 14:41:50 -0500
Subject: [SciPy-user] Questions about Line Integral Convolution tutorial
Message-ID: <7B2BDB34-8AC0-454B-AC27-2C3943A3CCAB@nasa.gov>

David, I had found the Cabral reference, as well as a few others that described either different algorithms or described them in radically different notation.

Transposing the texture solved my crashing problem. That seems kind of counter-intuitive. Perhaps it could be noted in the docs somewhere or changed in the interface?

Thanks for the assistance, Tom

> Hi William,
>
> I am not the author, but I may be able to answer some of your
> questions.
>
> On Mon, Nov 24, 2008 at 11:46 AM, Bridgman, William T. <
> William.T.Bridgman at nasa.gov> wrote:
>
> > Hello,
> >
> > I found the Line Integral Convolution (LIC) example very timely for a
> > project I'm working on.
> >
> > http://www.scipy.org/Cookbook/LineIntegralConvolution
> >
> > Once Cython & Pyrex were installed, the demo ran out of the box.
> > Excellent.
> >
> > However, now I'm trying to apply this to a different dataset and the C
> > component is crashing with index errors. I suspect these are being
> > caused by the fact that my dataset is not a square array.
> >
> > An examination of the .pyx file turned up a couple of locations where the
> > array indices appear to be transposed between x,y and i,j. I'm not
> > sure if this is a bug or not. The high symmetry of the demo vector
> > field would probably not reveal it if it were a bug.
> >
> > So my questions for the author of the code or the list are:
> >
> > 1) Is there a paper or other reference for the algorithm implemented
> > here? My searches have revealed several types of LIC
> > implementations. It would be nice if this were in code comments or at
> > least on the tutorial page.
>
> I am not sure which paper Anne has used, but I have found
>
> Cabral, Brian and Leith Leedom. Imaging vector fields using line integral
> convolution. SIGGRAPH '93: Proceedings of the 20th annual conference on
> Computer graphics and interactive techniques, pages 263-270, 1993.
>
> a very useful reference.
>
> > 2) Is the algorithm, the demo code, or LIC in general, restricted to
> > square arrays?
>
> Not that I know of. I have used it on rectangular arrays and it's working,
> although you may need to transpose the texture array to get it to work.
>
> > 3) Is there a pure-python or numpy-only (no Cython or Pyrex
> > requirement) implementation?
>
> Not that I know of. However, with Anne's consent, I have created a scikit
> named vectorplot using the code she posted in the cookbook. This can be
> installed with a simple
> python setup.py install
> It's still not pure python, but you won't need Cython to compile the
> extension. I've added docstrings, a bit of documentation and utility
> functions to generate kernels that I use to "animate" vector fields.
>
> You can check out the code with subversion at
> http://svn.scipy.org/svn/scikits/trunk/vectorplot, but bear in mind that the
> user interface might change.
>
> HTH,
>
> David

-- Dr. William T."Tom" Bridgman Scientific Visualization Studio Global Science & Technology, Inc. NASA/Goddard Space Flight Center Email: William.T.Bridgman at nasa.gov Code 610.3 Phone: 301-286-1346 Greenbelt, MD 20771 FAX: 301-286-1634 http://svs.gsfc.nasa.gov/

From vanforeest at gmail.com Mon Nov 24 15:09:52 2008
From: vanforeest at gmail.com (nicky van foreest)
Date: Mon, 24 Nov 2008 21:09:52 +0100
Subject: [SciPy-user] Gram-Schmidt orthogonalization
In-Reply-To: References: Message-ID:

Hi David, I recall from the book Numerical Recipes that the Gram-Schmidt method works terribly, numerically speaking. They provide some counterexamples too. It is better to use singular value decomposition, which is included in scipy too. bye Nicky

2008/11/23 Nathan Bell :
> On Sun, Nov 23, 2008 at 2:14 PM, David Warde-Farley wrote:
>>
>> Is there any particular advantage to using scipy.linalg.qr over
>> numpy.linalg.qr? Is the former faster by virtue of Fortran?
>>
>
> I can't say offhand which would be better. In either case you'll want
> the "economy" QR, since the # of vectors you're orthogonalizing is smaller
> than the length of the vectors. This way you'll get a tall, skinny Q
> as opposed to a square Q.
> > -- > Nathan Bell wnbell at gmail.com > http://graphics.cs.uiuc.edu/~wnbell/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From aarchiba at physics.mcgill.ca Mon Nov 24 15:19:54 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Mon, 24 Nov 2008 15:19:54 -0500 Subject: [SciPy-user] Gram-Schmidt orthogonalization In-Reply-To: References: Message-ID: 2008/11/24 nicky van foreest : > I recall from the book numerical recipes that the Gramm Schmidt > methods works terrible, numerically speaking. They provide some > counterexamples too. It is better to use singular value decomposition, > which is included in scipy too. There are situations where the SVD won't cut it: for example, if you want to construct a custom family of orthogonal polynomials, ensuring that the nth has degree n. But I think in general you can use QR decomposition for these sorts of problems, or possibly Cholesky factorization. It's still a little shaky numerically, but it gets the job done. Anne From aarchiba at physics.mcgill.ca Mon Nov 24 15:23:32 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Mon, 24 Nov 2008 15:23:32 -0500 Subject: [SciPy-user] Questions about Line Integral Convolution tutorial In-Reply-To: <7B2BDB34-8AC0-454B-AC27-2C3943A3CCAB@nasa.gov> References: <7B2BDB34-8AC0-454B-AC27-2C3943A3CCAB@nasa.gov> Message-ID: 2008/11/24 Bridgman, William T. : > I had found the Cabral reference, as well as a few others that > described either different algorithms or described them in radically > different notation. I implemented my code based on the Cabral reference. Currently it does not quite do everything they recommend. > Transposing the texture solved my crashing problem. That seems kind > of counter-intuitive. Perhaps it could be noted in the docs somewhere > or changed in the interface? Looks like a bug. Indeed I only tested my code with square arrays. It wasn't really ready for release, but there was some interest, and I have no time at all to work on it just now. Anne From david.huard at gmail.com Mon Nov 24 15:27:38 2008 From: david.huard at gmail.com (David Huard) Date: Mon, 24 Nov 2008 15:27:38 -0500 Subject: [SciPy-user] Questions about Line Integral Convolution tutorial In-Reply-To: References: <7B2BDB34-8AC0-454B-AC27-2C3943A3CCAB@nasa.gov> Message-ID: <91cf711d0811241227h230f661br28d5afa2ed91ddda@mail.gmail.com> What about the following interface: line_integral_convolution(u, v, texture=None, kernel=None) with identical shapes for u, v and texture, and simple defaults for texture (white noise) and kernel (box of length max(u.shape)//10) ? David On Mon, Nov 24, 2008 at 3:23 PM, Anne Archibald wrote: > 2008/11/24 Bridgman, William T. : > > > I had found the Cabral reference, as well as a few others that > > described either different algorithms or described them in radically > > different notation. > > I implemented my code based on the Cabral reference. Currently it does > not quite do everything they recommend. > > > Transposing the texture solved my crashing problem. That seems kind > > of counter-intuitive. Perhaps it could be noted in the docs somewhere > > or changed in the interface? > > Looks like a bug. Indeed I only tested my code with square arrays. It > wasn't really ready for release, but there was some interest, and I > have no time at all to work on it just now. 
>
> Anne
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From wnbell at gmail.com Mon Nov 24 15:28:11 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 24 Nov 2008 15:28:11 -0500
Subject: [SciPy-user] Gram-Schmidt orthogonalization
In-Reply-To: References: Message-ID:

On Mon, Nov 24, 2008 at 3:09 PM, nicky van foreest wrote:
>
> I recall from the book Numerical Recipes that the Gram-Schmidt
> method works terribly, numerically speaking. They provide some
> counterexamples too. It is better to use singular value decomposition,
> which is included in scipy too.
>

Try QR first. It's reasonably stable and SVD is considerably more expensive.

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/
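To make the economy-QR suggestion concrete, an untested sketch (the sizes here are made up): orthonormalize k vectors of length n stored as the columns of A.

    import numpy as np

    n, k = 1000, 10
    A = np.random.randn(n, k)     # k vectors to orthogonalize
    Q, R = np.linalg.qr(A)        # "economy" result: Q is n x k
    # columns of Q are orthonormal and span the same subspace as A's
    err = abs(np.dot(Q.T, Q) - np.eye(k)).max()

Unlike classical Gram-Schmidt, the LAPACK-backed QR uses Householder reflections, which is where the numerical stability comes from.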
From anand.prabhakar.patil at gmail.com Mon Nov 24 15:51:12 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Mon, 24 Nov 2008 20:51:12 +0000
Subject: [SciPy-user] Log-error function
In-Reply-To: <2bc7a5a50811231354i1864040w4e2b6894cde59df0@mail.gmail.com>
References: <2bc7a5a50811231354i1864040w4e2b6894cde59df0@mail.gmail.com>
Message-ID: <2bc7a5a50811241251v238ce090lcc67a26671d5e308@mail.gmail.com>

In case it wasn't obvious - I meant log((erf(x)+1)/2), not log(erf(x)). :-) Anand

On Sun, Nov 23, 2008 at 9:54 PM, Anand Patil wrote:
> Hi all,
>
> I'm looking for a C or Fortran routine that computes log(erf(x))
> without ever computing erf(x) directly, for use in PyMC. Does anyone
> have one lying around? It looks like GSL has such a thing, but it's
> GPL and we're using the MIT license.
>
> Thanks,
> Anand

From William.T.Bridgman at nasa.gov Mon Nov 24 15:53:53 2008
From: William.T.Bridgman at nasa.gov (Bridgman, William T.)
Date: Mon, 24 Nov 2008 15:53:53 -0500
Subject: [SciPy-user] Questions about Line Integral Convolution tutorial
Message-ID:

Anne, As long as some note of the issue is available in this forum, that should help a number of users wanting to work with it for now. Is there any protocol for others updating the Wiki entry? I've just joined SciPy but have been a member of AstroPy since its inception - but I have yet to write or update a Wiki page.

I'm still getting some type of 'shifting' of my dataset after running LIC, so I need to examine the Cabral reference more closely. I think I'm still missing something.

Thanks, Tom

> 2008/11/24 Bridgman, William T. :
>
>> I had found the Cabral reference, as well as a few others that
>> described either different algorithms or described them in radically
>> different notation.
>
> I implemented my code based on the Cabral reference. Currently it does
> not quite do everything they recommend.
>
>> Transposing the texture solved my crashing problem. That seems kind
>> of counter-intuitive. Perhaps it could be noted in the docs somewhere
>> or changed in the interface?
>
> Looks like a bug. Indeed I only tested my code with square arrays. It
> wasn't really ready for release, but there was some interest, and I
> have no time at all to work on it just now.
>
> Anne

-- Dr. William T."Tom" Bridgman Scientific Visualization Studio Global Science & Technology, Inc. NASA/Goddard Space Flight Center Email: William.T.Bridgman at nasa.gov Code 610.3 Phone: 301-286-1346 Greenbelt, MD 20771 FAX: 301-286-1634 http://svs.gsfc.nasa.gov/

From gael.varoquaux at normalesup.org Mon Nov 24 16:37:58 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 24 Nov 2008 22:37:58 +0100
Subject: [SciPy-user] Gram-Schmidt orthogonalization
In-Reply-To: References: Message-ID: <20081124213758.GB22820@phare.normalesup.org>

On Mon, Nov 24, 2008 at 03:28:11PM -0500, Nathan Bell wrote:
> Try QR first. It's reasonably stable and SVD is considerably more expensive.

+1. SVD is the sledgehammer of numeric matrix factorisation. (In other cases, options may involve 'np.linalg.eigh(np.dot(A.T, A))', or other tricks to avoid the costly SVD). Gaël

From aarchiba at physics.mcgill.ca Mon Nov 24 16:39:20 2008
From: aarchiba at physics.mcgill.ca (Anne Archibald)
Date: Mon, 24 Nov 2008 16:39:20 -0500
Subject: [SciPy-user] Questions about Line Integral Convolution tutorial
In-Reply-To: References: Message-ID:

2008/11/24 Bridgman, William T. :
> Is there any protocol for others updating the Wiki entry? I've just
> joined Scipy but been a member of AstroPy since its inception - but I
> have yet to write or update a Wiki page.

Generally the protocol is "go right ahead, it's version-controlled". Just out of curiosity, what were you hoping to use the LIC code for?

> I'm still getting some type of 'shifting' of my dataset after running
> LIC, so I need to examine the Cabral reference more closely. I think
> I'm still missing something.

It is totally possible there's a bug in my code. In particular David Huard pointed out there may be an indexing bug that means that instead of integrating forward, it integrates backward twice.

If the code works right, you shouldn't see any shifting, but Cabral et al. point out that it's very important the algorithm and kernel be symmetric, or you can get circles turning into spirals and the like.

Anne

From dwf at cs.toronto.edu Mon Nov 24 16:43:41 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 24 Nov 2008 16:43:41 -0500
Subject: [SciPy-user] Gram-Schmidt orthogonalization
In-Reply-To: References: Message-ID: <6B4C502D-159B-42EF-83FE-E1E61B2AD339@cs.toronto.edu>

On 24-Nov-08, at 3:09 PM, nicky van foreest wrote:
> Hi David,
>
> I recall from the book Numerical Recipes that the Gram-Schmidt
> method works terribly, numerically speaking. They provide some
> counterexamples too. It is better to use singular value decomposition,
> which is included in scipy too.

Hey Nicky, You're right about Gram-Schmidt being nasty if you do it naively, but there is IIRC a more numerically stable variant of Gram-Schmidt, see http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process#Algorithm

I just tend not to want to "roll my own" if I can help it, since stuff in SciPy is usually going to be better tested. Cheers, David

From carlos.s.santos at gmail.com Mon Nov 24 16:48:21 2008
From: carlos.s.santos at gmail.com (Carlos da Silva Santos)
Date: Mon, 24 Nov 2008 19:48:21 -0200
Subject: [SciPy-user] ndimage zero-ignorant filters, or other ways to fill holes
In-Reply-To: References: Message-ID: <1dc6ddb60811241348p43b237fbl7c88570dda782d51@mail.gmail.com>

On Mon, Nov 24, 2008 at 1:03 PM, Vincent Schut wrote:
>
> If someone comes up with another brilliant idea to fill the zero-gaps in
> my images with values that are in a reasonable range of the gap's
> surroundings, I'd also be very grateful. Keep in mind that the images
> typically are pretty large, though.
> 7000x7000 pixels is no exception.

Maybe you could use ndimage.morphology.grey_closing; I can't find a code example using it, but the idea is similar to the examples featured on this page: http://www.mmorph.com/pymorph/morph/morph/mmclose.html

The "close hole" operator is probably closer to what you intended: http://www.mmorph.com/pymorph/morph/morph/mmclohole.html

I am not sure whether this operator is implemented in the free version of pymorph; maybe you should give it a try: http://luispedro.org/pymorph

Hope this helps. []s Carlos

> Regards,
> Vincent Schut.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From ozanbakis at gmail.com Mon Nov 24 16:57:01 2008
From: ozanbakis at gmail.com (ozan bakis)
Date: Mon, 24 Nov 2008 23:57:01 +0200
Subject: [SciPy-user] bvp import problem
Message-ID:

Hi all, I am very new to python and scipy. As an economist I am especially interested in the optimization and differential equation tools of scipy. I have tried some of the online examples and have been impressed by how user-friendly it is. Thank you for the great work.

I have tried to install the bvp package as explained on its web site by

>>> sudo python setup.py build

I did not get any error. But when I want to import bvp I get the following:

>>> import scipy as N
>>> N.pkgload('special')
>>> import bvp
Traceback (most recent call last):
  File "", line 1, in
  File "bvp/__init__.py", line 6, in
    import colnew
  File "bvp/colnew.py", line 63, in
    import _colnew
ImportError: No module named _colnew

Any idea? Thanks in advance, ozan
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From robert.kern at gmail.com Mon Nov 24 17:00:55 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 24 Nov 2008 16:00:55 -0600
Subject: [SciPy-user] bvp import problem
In-Reply-To: References: Message-ID: <3d375d730811241400w43a99174kb449eb40562f6ff1@mail.gmail.com>

On Mon, Nov 24, 2008 at 15:57, ozan bakis wrote:
> Hi all,
>
> I am very new to python and scipy. As an economist I am especially
> interested in the optimization and differential equation tools of scipy.
> I have tried some of the online examples and have been impressed by
> how user-friendly it is. Thank you for the great work.
>
> I have tried to install the bvp package as explained on its web site by
>>>> sudo python setup.py build
>
> I did not get any error.

That's how to build it. Now, you need to install it.

$ sudo python setup.py install

Note that this, and the previous command, should be done at a terminal shell, not the Python prompt.

> But when I want to import bvp I get the following:
>>>> import scipy as N
>>>> N.pkgload('special')

Don't use pkgload(). Just import scipy.special.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From cmueller_dev at yahoo.com Mon Nov 24 22:53:17 2008
From: cmueller_dev at yahoo.com (Chris Mueller)
Date: Mon, 24 Nov 2008 19:53:17 -0800 (PST)
Subject: [SciPy-user] CorePy 1.0 Release (x86, Cell BE, BSD!)
Message-ID: <121021.42709.qm@web111211.mail.gq1.yahoo.com>

Hi scipy-ers - Some of you may remember CorePy from previous SciPy conferences. Feedback from those meetings was very helpful for planning the future of CorePy. Without further ado...
Announcing CorePy 1.0 - http://www.corepy.org We are pleased to announce the latest release of CorePy. CorePy is a complete system for developing machine-level programs in Python. CorePy lets developers build and execute assembly-level programs interactively from the Python command prompt, embed them directly in Python applications, or export them to standard assembly languages. CorePy's straightforward APIs enable the creation of complex, high-performance applications that take advantage of processor features usually inaccessible from high-level scripting languages, such as multi-core execution and vector instruction sets (SSE, VMX, SPU). This version addresses the two most frequently asked questions about CorePy: 1) Does CorePy support x86 processors? Yes! CorePy now has extensive support for 32/64-bit x86 and SSE ISAs on Linux and OS X*. 2) Is CorePy Open Source? Yes! CorePy now uses the standard BSD license. Of course, CorePy still supports PowerPC and Cell BE SPU processors. In fact, for this release, the Cell run-time was redesigned from the ground up to remove the dependency on IBM's libspe and now uses the system-level interfaces to work directly with the SPUs (and, CorePy is still the most fun way to program the PS3). CorePy is written almost entirely in Python. Its run-time system does not rely on any external compilers or assemblers. If you have the need to write tight, fast code from Python, want to demystify machine-level code generation, or just miss the good-old days of assembly hacking, check out CorePy! And, if you don't believe us, here's our favorite user quote: "CorePy makes assembly fun again!" __credits__ = """ CorePy is developed by Chris Mueller, Andrew Friedley, and Ben Martin and is supported by the Open Systems Lab at Indiana University. Chris can be reached at cmueller[underscore]dev[at]yahoo[dot]com. """ __footnote__ = """ *Any volunteers for a Windows port? :) """ -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at pythonxy.com Tue Nov 25 01:41:16 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Tue, 25 Nov 2008 07:41:16 +0100 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.1.5 Message-ID: <492B9E0C.8040706@pythonxy.com> Hi all, Release 2.1.5 is now available on http://www.pythonxy.com. (Full Edition, Basic Edition, Light Edition, Custom Edition and soon the update) Changes history Version 2.1.5 (11-24-2008) * Added: o QtHelp 4.4.1: complete Qt documentation (Qt, Qt Designer, ...) integrated to Qt Assistant * Updated: o console 2.0.141.3 o Notepad++ 5.1.0 o Cython 0.10 o IPython 0.9.1.4 o py2exe 0.6.9 o QtEclipse 1.4.1.2 o Sphinx 0.5 o wxPython 2.8.9.2 o xy 1.0.12 o Following updates are relevant only for a new install of Python(x,y) (there is absolutely no need to update your current install) o reportlab 2.2.1 o xydoc 1.0.1 o PyQt4 4.4.3.4 o Eclipse 3.4.1.1 * Corrected: o Issues 35, 36, 37, 38 and many other minor bug fixes Regards, Pierre Raybaut From pearu at cens.ioc.ee Tue Nov 25 03:31:50 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 25 Nov 2008 10:31:50 +0200 (EET) Subject: [SciPy-user] Debugging f2py-created modules In-Reply-To: <20081124191638.51000@gmx.net> References: <20081124191638.51000@gmx.net> Message-ID: <54845.172.17.0.4.1227601910.squirrel@cens.ioc.ee> Hi, Use --debug-capi f2py flag to debug f2py generated modules. With the flag all interface operations, including processing arguments, are reported to stdout. 
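For instance (the module and file names here are placeholders, not from Jose's post), rebuilding a wrapper with the flag

    f2py -c --debug-capi -m my_mod my_mod.f

and re-running the failing call makes the generated interface report each argument as it is processed, so the argument being converted when the elsize message appears is the offending one. An expected elsize of 4 against 8 usually means an 8-byte float64 array was passed for a 4-byte real*4 argument.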
HTH, Pearu

On Mon, November 24, 2008 9:16 pm, Jose Luis Gomez Dans wrote:
> Hi,
> I have an f2py-created module that wraps some fortran into a Python
> object. When I run one of the methods, I get the following error:
> failed to initialize intent(inout) array -- expected elsize=4 but got 8
>
> My method takes some 34 or so parameters, and there are several defined as
> inout, so I don't really know where to start looking. Is there some option
> that tells me what bit of my calling sequence is causing the problem? Say my
> function call is
> (a, b, c, d) = my_mod.my_method ( x, y, z, inout_a, inout_b, inout_c)
>
> how do I know that the error is coming from inout_a, or inout_b? Or indeed
> somewhere else? In general, how do you debug an f2py module?
>
> Thanks
> J
> --
> Psssst! Heard of the new GMX MultiMessenger yet? It can talk to all of them:
> http://www.gmx.net/de/go/multimessenger
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From nwagner at iam.uni-stuttgart.de Tue Nov 25 04:04:20 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Nov 2008 10:04:20 +0100
Subject: [SciPy-user] scikits samplerate
Message-ID:

Hi all, I tried to install samplerate via

    python setup.py install --prefix=$HOME/local --single-version-externally-managed --record=/dev/null

samplerate_info:
  libraries samplerate not found in /data/home/nwagner/local
Traceback (most recent call last):
  File "setup.py", line 161, in
    'Topic :: Scientific/Engineering']
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/distutils/core.py", line 150, in setup
    config = configuration()
  File "setup.py", line 116, in configuration
    src_config = src_info.get_info()
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/distutils/system_info.py", line 410, in get_info
    self.calc_info()
  File "setup.py", line 73, in calc_info
    raise self.notfounderror()
__main__.SamplerateNotFoundError: samplerate (http://www.mega-nerd.com/SRC/) library not found. Directories to search for the libraries can be specified in the site.cfg file (section [samplerate]).

The libraries are located in /data/home/nwagner/local/lib/libsndfile.a /data/home/nwagner/local/lib/libsamplerate.a

How do I set up a corresponding site.cfg? Nils

From haase at msg.ucsf.edu Tue Nov 25 04:13:00 2008
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Tue, 25 Nov 2008 10:13:00 +0100
Subject: [SciPy-user] ndimage zero-ignorant filters, or other ways to fill holes
In-Reply-To: <1dc6ddb60811241348p43b237fbl7c88570dda782d51@mail.gmail.com>
References: <1dc6ddb60811241348p43b237fbl7c88570dda782d51@mail.gmail.com>
Message-ID:

On Mon, Nov 24, 2008 at 10:48 PM, Carlos da Silva Santos wrote:
> On Mon, Nov 24, 2008 at 1:03 PM, Vincent Schut wrote:
>>
>> If someone comes up with another brilliant idea to fill the zero-gaps in
>> my images with values that are in a reasonable range of the gap's
>> surroundings, I'd also be very grateful. Keep in mind that the images
>> typically are pretty large, though.
>> 7000x7000 pixels is no exception.
>
> Maybe you could use ndimage.morphology.grey_closing; I can't find a
> code example using it, but the idea is similar to the examples
> featured on this page:
> http://www.mmorph.com/pymorph/morph/morph/mmclose.html
>
> The "close hole" operator is probably closer to what you intended:
> http://www.mmorph.com/pymorph/morph/morph/mmclohole.html
>
> I am not sure whether this operator is implemented in the free version
> of pymorph; maybe you should give it a try:
> http://luispedro.org/pymorph
>
> Hope this helps.

Hi Carlos, the links you provided look very interesting! Would you be able to answer some further questions? E.g. the original pymorph (http://www.mmorph.com/pymorph) appears to have a BSD license; did this change for the luispedro.org pymorph? Is pymorph 2D only? What data types does pymorph support? float32? Thanks, Sebastian Haase

From cournape at gmail.com Tue Nov 25 04:18:19 2008
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 25 Nov 2008 18:18:19 +0900
Subject: [SciPy-user] scikits samplerate
In-Reply-To: References: Message-ID: <5b8d13220811250118g5c7bba1alcf08d65346963c1e@mail.gmail.com>

On Tue, Nov 25, 2008 at 6:04 PM, Nils Wagner wrote:
> Hi all,
>
> I tried to install samplerate via
>

What does your site.cfg look like?

David

From nwagner at iam.uni-stuttgart.de Tue Nov 25 04:22:14 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Nov 2008 10:22:14 +0100
Subject: [SciPy-user] scikits samplerate
In-Reply-To: <5b8d13220811250118g5c7bba1alcf08d65346963c1e@mail.gmail.com>
References: <5b8d13220811250118g5c7bba1alcf08d65346963c1e@mail.gmail.com>
Message-ID:

On Tue, 25 Nov 2008 18:18:19 +0900 "David Cournapeau" wrote:
> On Tue, Nov 25, 2008 at 6:04 PM, Nils Wagner wrote:
>> Hi all,
>>
>> I tried to install samplerate via
>>
>
> What does your site.cfg look like?

Hi David, Just now I have added lib at the end of

[samplerate]
library_dirs = /data/home/nwagner/local/lib

> David

samplerate_info:
  FOUND:
    libraries = ['samplerate']
    library_dirs = ['/data/home/nwagner/local/lib']
Traceback (most recent call last):
  File "setup.py", line 161, in
    'Topic :: Scientific/Engineering']
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/distutils/core.py", line 150, in setup
    config = configuration()
  File "setup.py", line 117, in configuration
    headername = src_config['fullheadloc']
KeyError: 'fullheadloc'

Any pointer? Nils

From cournape at gmail.com Tue Nov 25 05:10:57 2008
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 25 Nov 2008 19:10:57 +0900
Subject: [SciPy-user] scikits samplerate
In-Reply-To: References: <5b8d13220811250118g5c7bba1alcf08d65346963c1e@mail.gmail.com>
Message-ID: <5b8d13220811250210h40bbeef2j3073697420f7977d@mail.gmail.com>

On Tue, Nov 25, 2008 at 6:22 PM, Nils Wagner wrote:
> On Tue, 25 Nov 2008 18:18:19 +0900 "David Cournapeau" wrote:
>> On Tue, Nov 25, 2008 at 6:04 PM, Nils Wagner wrote:
>>> Hi all,
>>>
>>> I tried to install samplerate via
>>>
> Hi David,
>
> Just now I have added lib at the end of
>

You need to add include_dirs, too:

[samplerate]
library_dirs = /data/home/nwagner/local/lib
include_dirs = /data/home/nwagner/local/include

David

From nwagner at iam.uni-stuttgart.de Tue Nov 25 05:35:24 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Nov 2008 11:35:24 +0100
Subject: [SciPy-user] scikits samplerate
In-Reply-To: <5b8d13220811250210h40bbeef2j3073697420f7977d@mail.gmail.com>
References: <5b8d13220811250118g5c7bba1alcf08d65346963c1e@mail.gmail.com> <5b8d13220811250210h40bbeef2j3073697420f7977d@mail.gmail.com>
Message-ID:

On Tue, 25 Nov 2008 19:10:57 +0900 "David Cournapeau" wrote:
> On Tue, Nov 25, 2008 at 6:22 PM, Nils Wagner wrote:
>> On Tue, 25 Nov 2008 18:18:19 +0900 "David Cournapeau" wrote:
>>> On Tue, Nov 25, 2008 at 6:04 PM, Nils Wagner wrote:
>>>> Hi all,
>>>>
>>>> I tried to install samplerate via
>>>>
>>>
>>> What does your site.cfg look like?
>>>
>> Hi David,
>>
>> Just now I have added lib at the end of
>>
>
> You need to add include_dirs, too:
>
> [samplerate]
> library_dirs = /data/home/nwagner/local/lib
> include_dirs = /data/home/nwagner/local/include

Works for me. Thank you very much. Nils

From dmitrey.kroshko at scipy.org Tue Nov 25 07:43:06 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Tue, 25 Nov 2008 14:43:06 +0200
Subject: [SciPy-user] splines with extrapolation and derivatives
Message-ID: <492BF2DA.2080300@scipy.org>

hi all, I need Python spline software (1-d, 2-d, 3-d, preferably general n-d) that is capable of extrapolation. Having derivatives at extrapolated points (df/dx, where x.ndim = n) is highly preferable. Are there any numpy / scipy / other Python tools capable of this? Thank you in advance, D.

From alsmirn at gmail.com Tue Nov 25 07:54:58 2008
From: alsmirn at gmail.com (Alexey Smirnov)
Date: Tue, 25 Nov 2008 15:54:58 +0300
Subject: [SciPy-user] splines with extrapolation and derivatives
In-Reply-To: <492BF2DA.2080300@scipy.org>
References: <492BF2DA.2080300@scipy.org>
Message-ID: <373b18710811250454m3b199eb2wa957a46de2aeacb9@mail.gmail.com>

D., there, at http://www.scipy.org/doc/api_docs/SciPy.interpolate.interpolate.html, you can find all you need :).

-- Best regards, Alexey Smirnov mailto: alsmirn at gmail.com
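For the 1-d case, an untested sketch with scipy.interpolate (splev evaluates a fitted spline outside the data range, and der=1 returns the first derivative; the sample data is made up):

    import numpy as np
    from scipy import interpolate

    x = np.linspace(0.0, 10.0, 50)
    y = np.sin(x)
    tck = interpolate.splrep(x, y, k=3)       # cubic spline representation
    xs = np.linspace(-1.0, 11.0, 200)         # extends beyond the data at both ends
    ys = interpolate.splev(xs, tck)           # values, extrapolated at the edges
    dys = interpolate.splev(xs, tck, der=1)   # first derivative df/dx

Bear in mind that polynomial extrapolation degrades quickly away from the data, and this only covers the 1-d part of the question.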
From William.T.Bridgman at nasa.gov Tue Nov 25 09:44:45 2008
From: William.T.Bridgman at nasa.gov (Bridgman, William T.)
Date: Tue, 25 Nov 2008 09:44:45 -0500
Subject: [SciPy-user] Questions about Line Integral Convolution tutorial
In-Reply-To: References: Message-ID:

Anne, I've got a 3-D vector field that separates nicely into a toroidal and a 2-D poloidal component. I plan to use LIC on the poloidal component projected on a slice of the volume.

In the Cabral paper, Section 4.3 mentions a wrapping that can occur if the texture is too small. I'm wondering if the shift I'm seeing may be an artifact of the texture dimensions being transposed relative to the input vector field. If the texture doesn't have to be the same size as the vector field, it should be an easy fix to get around for now.

Part of my dataset also has a parallel flow near the edge, which this paper mentions can create false singularities. Thanks for the input. Tom

On Nov 24, 2008, at 10:56 PM, scipy-user-request at scipy.org wrote:
> 2008/11/24 Bridgman, William T. :
>
>> Is there any protocol for others updating the Wiki entry? I've just
>> joined Scipy but been a member of AstroPy since its inception - but I
>> have yet to write or update a Wiki page.
>
> Generally the protocol is "go right ahead, it's version-controlled".
> Just out of curiosity, what were you hoping to use the LIC code for?
>
>> I'm still getting some type of 'shifting' of my dataset after running
>> LIC, so I need to examine the Cabral reference more closely. I think
>> I'm still missing something.
>
> It is totally possible there's a bug in my code. In particular David
> Huard pointed out there may be an indexing bug that means that instead
> of integrating forward, it integrates backward twice.
>
> If the code works right, you shouldn't see any shifting, but Cabral et
> al. point out that it's very important the algorithm and kernel be
> symmetric, or you can get circles turning into spirals and the like.
>
> Anne

-- Dr. William T."Tom" Bridgman Scientific Visualization Studio Global Science & Technology, Inc. NASA/Goddard Space Flight Center Email: William.T.Bridgman at nasa.gov Code 610.3 Phone: 301-286-1346 Greenbelt, MD 20771 FAX: 301-286-1634 http://svs.gsfc.nasa.gov/

From ozanbakis at gmail.com Tue Nov 25 13:21:35 2008
From: ozanbakis at gmail.com (ozan bakis)
Date: Tue, 25 Nov 2008 20:21:35 +0200
Subject: [SciPy-user] bvp install / import problem :again
Message-ID:

Hi again, I am trying to install and use the bvp package in scipy. I download the source file in my home directory, then

$ cd bvp-0.2.4
$ sudo python setup.py build
$ sudo python setup.py install

When I type (in python)

>>> import scipy.special
>>> import bvp

I get the following error message:

Traceback (most recent call last):
  File "", line 1, in
  File "bvp/__init__.py", line 6, in
    import colnew
  File "bvp/colnew.py", line 63, in
    import _colnew
ImportError: No module named _colnew

I use Ubuntu 8.10 on a Dell Inspiron. Thank you very much (especially Robert Kern for the rapid answer), ozan

Date: Mon, 24 Nov 2008 16:00:55 -0600
From: "Robert Kern"
Subject: Re: [SciPy-user] bvp import problem
To: "SciPy Users List"
Message-ID: <3d375d730811241400w43a99174kb449eb40562f6ff1 at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On Mon, Nov 24, 2008 at 15:57, ozan bakis wrote:
> Hi all,
>
> I am very new to python and scipy. As an economist I am especially
> interested in the optimization and differential equation tools of scipy.
> I have tried some of the online examples and have been impressed by
> how user-friendly it is. Thank you for the great work.
>
> I have tried to install the bvp package as explained on its web site by
>>>> sudo python setup.py build
>
> I did not get any error.

That's how to build it. Now, you need to install it.

$ sudo python setup.py install

Note that this, and the previous command, should be done at a terminal shell, not the Python prompt.

> But when I want to import bvp I get the following:
>>>> import scipy as N
>>>> N.pkgload('special')

Don't use pkgload(). Just import scipy.special.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From robert.kern at gmail.com Tue Nov 25 13:35:34 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 25 Nov 2008 12:35:34 -0600
Subject: [SciPy-user] bvp install / import problem :again
In-Reply-To: References: Message-ID: <3d375d730811251035g452fb120g36d7ef683113b78@mail.gmail.com>

On Tue, Nov 25, 2008 at 12:21, ozan bakis wrote:
> Hi again,
>
> I am trying to install and use the bvp package in scipy.
> I download
> the source file in my home directory, then
>
> $ cd bvp-0.2.4
> $ sudo python setup.py build
> $ sudo python setup.py install
>
> When I type (in python)
>>>> import scipy.special
>>>> import bvp
>
> I get the following error message:
>
> Traceback (most recent call last):
>   File "", line 1, in
>   File "bvp/__init__.py", line 6, in
>     import colnew
>   File "bvp/colnew.py", line 63, in
>     import _colnew
> ImportError: No module named _colnew

Change directories. Python looks in the current directory before site-packages, so you are picking up the unbuilt source files, not the installed files.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From ozanbakis at gmail.com Tue Nov 25 15:59:43 2008
From: ozanbakis at gmail.com (ozan bakis)
Date: Tue, 25 Nov 2008 22:59:43 +0200
Subject: [SciPy-user] bvp install / import problem :again
Message-ID:

Thank you very much Mr. Kern, it works now... Ozan

On Tue, Nov 25, 2008 at 12:21, ozan bakis wrote:
> Hi again,
>
> I am trying to install and use the bvp package in scipy. I download
> the source file in my home directory, then
>
> $ cd bvp-0.2.4
> $ sudo python setup.py build
> $ sudo python setup.py install
>
> When I type (in python)
>>>> import scipy.special
>>>> import bvp
>
> I get the following error message:
>
> Traceback (most recent call last):
>   File "", line 1, in
>   File "bvp/__init__.py", line 6, in
>     import colnew
>   File "bvp/colnew.py", line 63, in
>     import _colnew
> ImportError: No module named _colnew

Change directories. Python looks in the current directory before site-packages, so you are picking up the unbuilt source files, not the installed files.

-- Robert Kern
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From William.T.Bridgman at nasa.gov Wed Nov 26 08:03:36 2008
From: William.T.Bridgman at nasa.gov (Bridgman, William T.)
Date: Wed, 26 Nov 2008 08:03:36 -0500
Subject: [SciPy-user] Questions about Line Integral Convolution tutorial
Message-ID: <03BF3D57-E440-40C7-AAE9-BFC673498BF0@nasa.gov>

Anne & David, I've made some revisions to the lic_internal.pyx file. They seem to fix the problems I was having, but in the process, really alter your demo program output. My major changes are

1) clean up array indexing: w,x,i and h,y,j are correlated now
2) the reverse integration along the line seemed to start at the end of the line. I reinitialized it to kernellen//2
3) added comments

I can package it up and stage it for you to review and update on the Wiki. Please email me directly on the location. Thanks. Tom
-- Dr. William T."Tom" Bridgman Scientific Visualization Studio Global Science & Technology, Inc. NASA/Goddard Space Flight Center Email: William.T.Bridgman at nasa.gov Code 610.3 Phone: 301-286-1346 Greenbelt, MD 20771 FAX: 301-286-1634 http://svs.gsfc.nasa.gov/

From robince at gmail.com Wed Nov 26 11:51:59 2008
From: robince at gmail.com (Robin)
Date: Wed, 26 Nov 2008 16:51:59 +0000
Subject: [SciPy-user] trouble saving sparse matrix
Message-ID:

Hi, I have a large sparse matrix (about 9GB):

In [18]: a.A
Out[18]: <21699x1048575 sparse matrix of type '' with 1035272192 stored elements in Compressed Sparse Column format>

but I am having trouble saving it. I am on 64-bit Linux.
The problem is that whatever I try, I get:

    SystemError: Negative size passed to PyString_FromStringAndSize

This happens with cPickle.dump, np.save, sp.io.savemat etc. I am guessing something is overflowing a 32-bit integer. The matrix itself seems ok... I was wondering if anyone had any ideas for another way to save it - or perhaps if I have made a mistake and it is really too big? Cheers Robin

From olof.gross at student.gu.se Wed Nov 26 12:07:14 2008
From: olof.gross at student.gu.se (gross)
Date: Wed, 26 Nov 2008 09:07:14 -0800 (PST)
Subject: [SciPy-user] getting values from traits objects
Message-ID: <20623191.post@talk.nabble.com>

I'm obviously missing something very basic; can someone please explain to me what I've done wrong in this example:

[code]
from enthought.traits.api import HasTraits, Float, Int
from scipy import linspace

class InputData(HasTraits):
    xmin=Float(default_value=.5)
    xmax=Float(default_value=2.0)
    xres=Int(default_value=128)
    x=linspace(xmin, xmax, xres)

if __name__ == "__main__":
    window = InputData()
    window.configure_traits()
[/code]

Executing this file results in:

[quote]
Traceback (most recent call last):
  File "", line 1, in
  File "t2.py", line 4, in
    class InputData(HasTraits):
  File "t2.py", line 8, in InputData
    x=linspace(xmin, xmax, xres)
  File "/usr/lib/python2.5/site-packages/numpy/lib/function_base.py", line 74, in linspace
    num = int(num)
TypeError: int() argument must be a string or a number, not 'Int'
[/quote]

Printing the value of xres after execution returns the value of 128 I expected, as an int, but it seems it doesn't work the same way when the script runs? Believe me, I've studied examples and searched for hours, but it can be really difficult to find straight answers to such general questions as this one...
-- View this message in context: http://www.nabble.com/getting-values-from-traits-objects-tp20623191p20623191.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From oliphant at enthought.com Wed Nov 26 13:06:57 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Wed, 26 Nov 2008 12:06:57 -0600
Subject: [SciPy-user] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing
Message-ID: <492D9041.8090207@enthought.com>

Hello, We've recently posted the beta1 build of EPD (the Enthought Python Distribution) with Python 2.5 version 4.1.30001 to the EPD website. You may download the beta from here: http://www.enthought.com/products/epdearlyaccess.php

You can check out the release notes here: https://svn.enthought.com/epd/wiki/Python2.5.2/4.1.300/Beta1

Please help us test it out and provide feedback on the EPD Trac instance: https://svn.enthought.com/epd or via e-mail to epd-support at enthought.com. If everything goes well, we are planning a final release for December.

About EPD
---------
The Enthought Python Distribution (EPD) is a "kitchen-sink-included" distribution of the Python programming language, including over 60 additional tools and libraries. The EPD bundle includes NumPy, SciPy, IPython, 2D and 3D visualization, database adapters, GUI building libraries, and a lot of other tools right out of the box. http://www.enthought.com/products/epd.php

It is currently available as a single-click installer for Windows XP (x86), Mac OS X (a universal binary for OS X 10.4 and above), and RedHat 3 and 4 (x86 and amd64). EPD is free for academic use. An annual subscription and installation support are available for individual commercial use.
Enterprise subscriptions with support for particular deployment environments are also available for commercial purchase. Enthought Build Team

From carlos.s.santos at gmail.com Wed Nov 26 14:34:38 2008
From: carlos.s.santos at gmail.com (Carlos da Silva Santos)
Date: Wed, 26 Nov 2008 17:34:38 -0200
Subject: [SciPy-user] ndimage zero-ignorant filters, or other ways to fill holes
In-Reply-To: References: <1dc6ddb60811241348p43b237fbl7c88570dda782d51@mail.gmail.com>
Message-ID: <1dc6ddb60811261134r2a45922y5261d3669e0ecde6@mail.gmail.com>

On Tue, Nov 25, 2008 at 7:13 AM, Sebastian Haase wrote:
> On Mon, Nov 24, 2008 at 10:48 PM, Carlos da Silva Santos wrote:
>> On Mon, Nov 24, 2008 at 1:03 PM, Vincent Schut wrote:
>>>
>>> If someone comes up with another brilliant idea to fill the zero-gaps in
>>> my images with values that are in a reasonable range of the gap's
>>> surroundings, I'd also be very grateful. Keep in mind that the images
>>> typically are pretty large, though. 7000x7000 pixels is no exception.
>>
>> Maybe you could use ndimage.morphology.grey_closing; I can't find a
>> code example using it, but the idea is similar to the examples
>> featured on this page:
>> http://www.mmorph.com/pymorph/morph/morph/mmclose.html
>>
>> The "close hole" operator is probably closer to what you intended:
>> http://www.mmorph.com/pymorph/morph/morph/mmclohole.html
>>
>> I am not sure whether this operator is implemented in the free version
>> of pymorph; maybe you should give it a try:
>> http://luispedro.org/pymorph
>>
>> Hope this helps.

Hi Sebastian,

> Hi Carlos,
> the links you provided look very interesting!

Morphology is quite a nice tool, indeed.

> Would you be able to answer some further questions?
> E.g. the original pymorph (http://www.mmorph.com/pymorph) appears to have a
> BSD license; did this change for the luispedro.org pymorph?

According to Luis Pedro, the license is the same: "The license stays BSD..." http://luispedro.org/pymorph

> Is pymorph 2D only?
> What data types does pymorph support? float32?

Actually, I never used pymorph. But quoting from the docs: "The Morphology Toolbox mainly supports four types of images according to their pixel datatypes: binary, unsigned gray scale uint8 and uint16, and signed gray scale int32. Most functions work for 1D, 2D and 3D images." http://www.mmorph.com/pymorph/morph/mmtypes/mmImage.html

I believe the morphological operators available in ndimage.morphology work with floating point; can anyone confirm this? But pymorph has many more operators than ndimage.morphology. In my experience, it is not that common to use floating-point images in applications involving morphology. Do you have anything specific in mind?

Hope this helps, Carlos

> Thanks,
> Sebastian Haase
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From stefan at sun.ac.za Wed Nov 26 14:42:35 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Wed, 26 Nov 2008 21:42:35 +0200
Subject: [SciPy-user] getting values from traits objects
In-Reply-To: <20623191.post@talk.nabble.com>
References: <20623191.post@talk.nabble.com>
Message-ID: <9457e7c80811261142w5d8fd23bp764f2afceecd3ae3@mail.gmail.com>

You need to put the linspace line in the class __init__ function, i.e.
From stefan at sun.ac.za  Wed Nov 26 14:42:35 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Wed, 26 Nov 2008 21:42:35 +0200
Subject: [SciPy-user] getting values from traits objects
In-Reply-To: <20623191.post@talk.nabble.com>
References: <20623191.post@talk.nabble.com>
Message-ID: <9457e7c80811261142w5d8fd23bp764f2afceecd3ae3@mail.gmail.com>

You need to put the linspace line in the class __init__ function, i.e.

class InputData(HasTraits):
    xmin=Float(default_value=.5)
    xmax=Float(default_value=2.0)
    xres=Int(128)

    def __init__(self):
        x=linspace(self.xmin, self.xmax, self.xres)

Cheers
Stéfan

2008/11/26 gross :
>
> I'm obviously missing something very basic, can someone please explain to me
> what I've done wrong in this example:
>
> [code]
> from enthought.traits.api import HasTraits, Float, Int
> from scipy import linspace
>
> class InputData(HasTraits):
>     xmin=Float(default_value=.5)
>     xmax=Float(default_value=2.0)
>     xres=Int(default_value=128)
>     x=linspace(xmin, xmax, xres)

From stefan at sun.ac.za  Wed Nov 26 14:43:45 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Wed, 26 Nov 2008 21:43:45 +0200
Subject: [SciPy-user] getting values from traits objects
In-Reply-To: <9457e7c80811261142w5d8fd23bp764f2afceecd3ae3@mail.gmail.com>
References: <20623191.post@talk.nabble.com> <9457e7c80811261142w5d8fd23bp764f2afceecd3ae3@mail.gmail.com>
Message-ID: <9457e7c80811261143u2e989fe2hb78e0989b80fcc39@mail.gmail.com>

2008/11/26 Stéfan van der Walt :
>     def __init__(self):
>         x=linspace(self.xmin, self.xmax, self.xres)

Of course, that should be self.x.
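Putting the correction together, a complete version of the example might look
like this (calling up to HasTraits.__init__ via super is an extra precaution
added here, not something the thread spells out):

    from enthought.traits.api import HasTraits, Float, Int
    from scipy import linspace

    class InputData(HasTraits):
        xmin = Float(0.5)
        xmax = Float(2.0)
        xres = Int(128)

        def __init__(self, **traits):
            super(InputData, self).__init__(**traits)
            # self.xmin etc. are plain numbers here, so linspace is happy
            self.x = linspace(self.xmin, self.xmax, self.xres)

    if __name__ == "__main__":
        window = InputData()
        print window.x.shape    # (128,)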
From wnbell at gmail.com  Wed Nov 26 14:49:05 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Wed, 26 Nov 2008 14:49:05 -0500
Subject: [SciPy-user] trouble saving sparse matrix
In-Reply-To:
References:
Message-ID:

On Wed, Nov 26, 2008 at 11:51 AM, Robin wrote:
>
> I am guessing something is overflowing a 32 bit integer. The matrix
> itself seems ok... I was wondering if anyone had any ideas for another
> way to save it - or perhaps if I have made a mistake and it is really
> too big?
>

I don't know that it works, but have you tried sp.io.mmwrite?

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From vginer at gmail.com  Thu Nov 27 06:11:13 2008
From: vginer at gmail.com (Vicent Giner-Bosch)
Date: Thu, 27 Nov 2008 03:11:13 -0800 (PST)
Subject: [SciPy-user] How to use Open Opt
Message-ID: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com>

Hello.

I am looking for information about the Open Opt (OO) project, and I've been
referred to this group.

I've been reading the official documentation about OO, but it seems a little
confusing to me.

My question is: if I want to use OO, what must I do?

In fact, if I want to develop a new optimization algorithm in Python, how can
I use OO? I mean, in which part of the process can / should I use OO?

What are the advantages of using OO? Is it just a "bunch" or library of
available optimization algorithms, or does it also provide a general
framework (for example, a general predefined Object Oriented structure, or
some general functions in order to manage algorithms...) in order to build
and test or run our own algorithms?

What are the key features of OO?

I hope I've been clear enough about my questions. Any answer will be
appreciated. Thank you very much in advance.

--
Vicent Giner-Bosch, Valencia (Spain)

From ferrell at diablotech.com  Thu Nov 27 11:23:15 2008
From: ferrell at diablotech.com (Robert Ferrell)
Date: Thu, 27 Nov 2008 09:23:15 -0700
Subject: [SciPy-user] scikits.timeseries
Message-ID:

Timeseries is an awesome package. Great contribution. I have 2 questions
about it, though.

1. Is scipy-user the right place for questions?

2. I've noticed that 'business frequency' includes holidays, and that can
create holes in what are actually complete data sets. For instance, Sep 01,
2008 was a holiday in the US (Labor Day). However, it is included in a
DateArray spanning that date. For instance:

In [640]: ts.date_array(ts.Date('B','2008-08-25'), length=12)
Out[640]:
DateArray([25-Aug-2008, 26-Aug-2008, 27-Aug-2008, 28-Aug-2008, 29-Aug-2008,
           01-Sep-2008, 02-Sep-2008, 03-Sep-2008, 04-Sep-2008, 05-Sep-2008,
           08-Sep-2008, 09-Sep-2008],
          freq='B')

This makes stock ticker data look like it's incomplete - no data for Sep 01,
since the markets were closed. For instance, if I use
matplotlib.finance.quotes_historical_yahoo to download Intel data, and put
that into the date array above, I get the series:

masked_array(data = [22.77 22.95 23.21 23.39 22.67 -- 22.39 21.35 20.34 20.43 20.79],
             mask = [False False False False False True False False False False False],
             fill_value=1e+20)

That has a hole on Sep 1. This matters for things like moving average
calculation. Sep 1 should be treated like a Saturday or Sunday, but instead
causes a 5-day mov_average calculation to not compute anything from Sep 2
through Sep 7:

timeseries([-- -- -- -- 22.998 -- -- -- -- -- 21.06],
           dates = [25-Aug-2008 ... 08-Sep-2008],
           freq  = B)

My question: What is a good way to handle (get rid of?) the holes in the
series?

thanks,
-robert

From dmitrey.kroshko at scipy.org  Thu Nov 27 07:18:40 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 27 Nov 2008 14:18:40 +0200
Subject: [SciPy-user] How to use Open Opt
In-Reply-To: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com>
References: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com>
Message-ID: <492E9020.1090008@scipy.org>

hi Vicent,
Vicent Giner-Bosch wrote:
> Hello.
>
> I am looking for information about the Open Opt (OO) project, and I've
> been referred to this group.
>
> I've been reading the official documentation about OO, but it seems a
> little confusing to me.
>
> My question is:
1)
> if I want to use OO, what must I do?
>
2)
> In fact, if I want to develop a new optimization algorithm in Python,
> how can I use OO? I mean, in which part of the process can / should I
> use OO?
>
1 and 2 are two different questions.
1) If you just want to use OO to find a solution to an optimization problem,
read the Doc page and see the examples provided for each class. Financial
support for OO (it was a GSoC project twice) has ended and there will hardly
be any Doc extension in the near future. Also, I just don't see any reasons
to provide alternative documentation, it's too costly to maintain (keep
up-to-date) several sets of documentation.
2) To develop an optimization algorithm you don't have to use OO; pure
Python, probably with numpy, will be enough.

> What are the advantages of using OO?

http://scipy.org/scipy/scikits/wiki/whyOpenOpt4user

> Is it just a "bunch" or library
> of available optimization algorithms, or does it also provide a
> general framework (for example, a general predefined Object Oriented
> structure, or some general functions in order to manage algorithms...)

The framework is similar to TOMOPT's TOMLAB.
It has some API funcs; those from the user API are mentioned on the Doc page.

> in order to build and test or run our own algorithms?
>
> What are the key features of OO?

I can't copy-paste the info from the OO website here; moreover, you have
mentioned you have already read it.

Regards, D.
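To give a flavour of the per-class examples dmitrey refers to, a minimal NLP
run might look like the following (a sketch: the scikits-style import path
and the 'ralg' solver name are assumptions about the installed version):

    from scikits.openopt import NLP

    f = lambda x: ((x - 1.0)**2).sum()   # smooth unconstrained objective
    p = NLP(f, [4.0, 4.0])               # problem object: objective + start point
    r = p.solve('ralg')                  # solvers are picked by name at solve time

    print r.xf, r.ff                     # minimizer and objective value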
From vginer at gmail.com  Thu Nov 27 13:02:00 2008
From: vginer at gmail.com (Vicent)
Date: Thu, 27 Nov 2008 19:02:00 +0100
Subject: [SciPy-user] How to use Open Opt
In-Reply-To: <492E9020.1090008@scipy.org>
References: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com> <492E9020.1090008@scipy.org>
Message-ID: <50ed08f40811271002h6a71cacer8509f42462c94c30@mail.gmail.com>

Dmitrey,

Thank you for your clear answer.

Here (http://scipy.org/scipy/scikits/wiki/whyOpenOpt4user and
http://scipy.org/scipy/scikits/wiki/whereProfitsForOpenOptConnectedSolverOwners)
I see now that OpenOpt can be a useful tool for "connecting" different
solvers, and to "speed" algorithms... But it doesn't provide any kind of
general structure for building algorithms, as I thought in the beginning.

Anyway, no doubt it can be interesting in my research and development of new
optimization algorithms.

Thank you for the information.

--
Vicent

On Thu, Nov 27, 2008 at 13:18, dmitrey wrote:
> hi Vicent,
> Vicent Giner-Bosch wrote:
> > Hello.
> >
> > I am looking for information about the Open Opt (OO) project, and I've
> > been referred to this group.
> >
> > I've been reading the official documentation about OO, but it seems a
> > little confusing to me.
> >
> > My question is:
> 1)
> > if I want to use OO, what must I do?
> >
> 2)
> > In fact, if I want to develop a new optimization algorithm in Python,
> > how can I use OO? I mean, in which part of the process can / should I
> > use OO?
> >
> 1 and 2 are two different questions.
> 1) If you just want to use OO to find a solution to an optimization
> problem, read the Doc page and see the examples provided for each class.
> Financial support for OO (it was a GSoC project twice) has ended and
> there will hardly be any Doc extension in the near future. Also, I just
> don't see any reasons to provide alternative documentation, it's too
> costly to maintain (keep up-to-date) several sets of documentation.
> 2) To develop an optimization algorithm you don't have to use OO; pure
> Python, probably with numpy, will be enough.
>
> > What are the advantages of using OO?
> http://scipy.org/scipy/scikits/wiki/whyOpenOpt4user
>
> > Is it just a "bunch" or library
> > of available optimization algorithms, or does it also provide a
> > general framework (for example, a general predefined Object Oriented
> > structure, or some general functions in order to manage algorithms...)
> >
> The framework is similar to TOMOPT's TOMLAB.
> It has some API funcs; those from the user API are mentioned on the Doc page.
>
> > in order to build and test or run our own algorithms?
> >
> > What are the key features of OO?
> >
> I can't copy-paste the info from the OO website here; moreover, you have
> mentioned you have already read it.
>
> Regards, D.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pgmdevlist at gmail.com  Thu Nov 27 13:40:24 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 27 Nov 2008 13:40:24 -0500
Subject: [SciPy-user] scikits.timeseries
In-Reply-To:
References:
Message-ID: <99B5C565-967B-43AB-A978-F0F740B31FB8@gmail.com>

On Nov 27, 2008, at 11:23 AM, Robert Ferrell wrote:
> Timeseries is an awesome package. Great contribution. I have 2
> questions about it, though.
>
> 1. Is scipy-user the right place for questions?

It is

> 2. I've noticed that 'business frequency' includes holidays, and that
> can create holes in what are actually complete data sets. For
> instance, Sep 01, 2008 was a holiday in the US (Labor Day).
Yes, the moniker "business days" is a bit deceptive, as it refers only to
days that are not Saturday or Sunday. It'd be too tricky for us to implement
holidays, as they vary from one place to another (no such thing as
Thanksgiving in Europe, for example...).

>
> That has a hole on Sep 1. This matters for things like moving average
> calculation. Sep 1 should be treated like a Saturday or Sunday, but
> instead causes a 5-day mov_average calculation to not compute anything
> from Sep 2 through Sep 7:
>
> timeseries([-- -- -- -- 22.998 -- -- -- -- -- 21.06],
>            dates = [25-Aug-2008 ... 08-Sep-2008],
>            freq  = B)
>
> My question: What is a good way to handle (get rid of?) the holes in
> the series?

Mmh. Off the top of my head, I'd do something like this:
* create a new series by using .compressed on your initial series. You'll
get rid of the masked data and will have incomplete dates, but it shouldn't
matter.
* use your moving average function on the new series.
* if needed, reset the missing dates by using fill_missing_dates on the
filtered series.

Let me know how it goes.
P.

From vanforeest at gmail.com  Thu Nov 27 17:23:40 2008
From: vanforeest at gmail.com (nicky van foreest)
Date: Thu, 27 Nov 2008 23:23:40 +0100
Subject: [SciPy-user] Gram-Schmidt orthogonalization
In-Reply-To: <6B4C502D-159B-42EF-83FE-E1E61B2AD339@cs.toronto.edu>
References: <6B4C502D-159B-42EF-83FE-E1E61B2AD339@cs.toronto.edu>
Message-ID:

Hi David,

Thanks for the pointer.

2008/11/24 David Warde-Farley :
>
> On 24-Nov-08, at 3:09 PM, nicky van foreest wrote:
>
>> Hi David,
>>
>> I recall from the book Numerical Recipes that the Gram-Schmidt
>> method works terribly, numerically speaking. They provide some
>> counterexamples too. It is better to use singular value decomposition,
>> which is included in scipy too.
>
> Hey Nicky,
>
> You're right about Gram-Schmidt being nasty if you do it naively, but
> there is IIRC a more numerically stable variant of Gram-Schmidt, see
> http://en.wikipedia.org/wiki/Gram–Schmidt_process#Algorithm
>
> I just tend not to want to "roll my own" if I can help it, since stuff
> in SciPy is usually going to be better tested.
>
> Cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
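For reference, the stabilised variant mentioned above (modified Gram-Schmidt)
is short enough to sketch in pure numpy; this is illustrative only, not a
substitute for the better-tested routines in scipy:

    import numpy as np

    def mgs_qr(A):
        """QR via modified Gram-Schmidt; A must have full column rank."""
        Q = np.array(A, dtype=float)
        n = Q.shape[1]
        R = np.zeros((n, n))
        for k in range(n):
            R[k, k] = np.linalg.norm(Q[:, k])
            Q[:, k] /= R[k, k]
            # orthogonalise the *updated* remaining columns against q_k
            for j in range(k + 1, n):
                R[k, j] = np.dot(Q[:, k], Q[:, j])
                Q[:, j] -= R[k, j] * Q[:, k]
        return Q, R

    A = np.random.rand(5, 3)
    Q, R = mgs_qr(A)
    print np.allclose(np.dot(Q, R), A)    # True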
From ferrell at diablotech.com  Fri Nov 28 00:09:53 2008
From: ferrell at diablotech.com (Robert Ferrell)
Date: Thu, 27 Nov 2008 22:09:53 -0700
Subject: [SciPy-user] scikits.timeseries
In-Reply-To: <99B5C565-967B-43AB-A978-F0F740B31FB8@gmail.com>
References: <99B5C565-967B-43AB-A978-F0F740B31FB8@gmail.com>
Message-ID:

On Nov 27, 2008, at 11:40 AM, Pierre GM wrote:
>
> On Nov 27, 2008, at 11:23 AM, Robert Ferrell wrote:
>
>>
>> That has a hole on Sep 1. This matters for things like moving average
>> calculation. Sep 1 should be treated like a Saturday or Sunday, but
>> instead causes a 5-day mov_average calculation to not compute anything
>> from Sep 2 through Sep 7:
>>
>> timeseries([-- -- -- -- 22.998 -- -- -- -- -- 21.06],
>>            dates = [25-Aug-2008 ... 08-Sep-2008],
>>            freq  = B)
>>
>> My question: What is a good way to handle (get rid of?) the holes in
>> the series?
>
> Mmh. Off the top of my head, I'd do something like this:
> * create a new series by using .compressed on your initial series.
> You'll get rid of the masked data and will have incomplete dates, but
> it shouldn't matter.
> * use your moving average function on the new series.
> * if needed, reset the missing dates by using fill_missing_dates on
> the filtered series.
>
> Let me know how it goes.
> P.

Since the date array has holes, I can't use timeseries date range
calculations. So, for instance, to get the previous 5 days of data I can't
just use series[d-5:d]. Instead I need to (I think) convert to an index,
series.date_to_index(d), and then use that index. I'm going to try that,
along with using .compressed(), and see how I do.

Is there any possibility of allowing user defined frequencies?

thanks,
-robert

From aisaac at american.edu  Fri Nov 28 09:40:35 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Fri, 28 Nov 2008 09:40:35 -0500
Subject: [SciPy-user] How to use Open Opt
In-Reply-To: <492E9020.1090008@scipy.org>
References: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com> <492E9020.1090008@scipy.org>
Message-ID: <493002E3.9080306@american.edu>

http://scipy.org/scipy/scikits/wiki/OOClasses

If you develop a new optimization algorithm in Python, of general interest,
we could consider connecting it to OpenOpt. But there is an awful lot
already there.

Alan Isaac

From aisaac at american.edu  Fri Nov 28 09:55:32 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Fri, 28 Nov 2008 09:55:32 -0500
Subject: [SciPy-user] How to use Open Opt
In-Reply-To: <50ed08f40811271002h6a71cacer8509f42462c94c30@mail.gmail.com>
References: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com> <492E9020.1090008@scipy.org> <50ed08f40811271002h6a71cacer8509f42462c94c30@mail.gmail.com>
Message-ID: <49300664.2030402@american.edu>

On 11/27/2008 1:02 PM Vicent apparently wrote:
> But it doesn't provide any kind of general structure for building
> algorithms, as I thought in the beginning.

The GenericOpt component (under solvers) is supposed to supply such
structure. However at the moment I do not see most of that code ... ?
Not sure what happened here.

Alan Isaac

From pgmdevlist at gmail.com  Fri Nov 28 14:03:50 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 28 Nov 2008 14:03:50 -0500
Subject: [SciPy-user] scikits.timeseries
In-Reply-To:
References: <99B5C565-967B-43AB-A978-F0F740B31FB8@gmail.com>
Message-ID: <94379D99-3429-4A6F-B3FA-8613ED16679B@gmail.com>

Robert:
It's always easier to manipulate series without missing data. The trick I
gave you earlier about computing a moving average after having removed the
missing dates was just that, a trick. However, I'm confident it should work.
Unfortunately, there's no easy way to define new frequencies, and it's not
on our todo list either. Frequencies are defined in the C part of the code...

On Nov 28, 2008, at 12:09 AM, Robert Ferrell wrote:
>>
>> On Nov 27, 2008, at 11:23 AM, Robert Ferrell wrote:
>>
>>>
>>> That has a hole on Sep 1. This matters for things like moving average
>>> calculation. Sep 1 should be treated like a Saturday or Sunday, but
>>> instead causes a 5-day mov_average calculation to not compute anything
>>> from Sep 2 through Sep 7:
>>>
>>> timeseries([-- -- -- -- 22.998 -- -- -- -- -- 21.06],
>>>            dates = [25-Aug-2008 ... 08-Sep-2008],
>>>            freq  = B)
>>>
>>> My question: What is a good way to handle (get rid of?) the holes in
>>> the series?
>>
>> Mmh. Off the top of my head, I'd do something like this:
>> * create a new series by using .compressed on your initial series.
>> You'll get rid of the masked data and will have incomplete dates, but
>> it shouldn't matter.
>> * use your moving average function on the new series.
>> * if needed, reset the missing dates by using fill_missing_dates on
>> the filtered series.
>>
>> Let me know how it goes.
>> P.
>
> Since the date array has holes, I can't use timeseries date range
> calculations. So, for instance, to get the previous 5 days of data I
> can't just use series[d-5:d]. Instead I need to (I think) convert to
> an index, series.date_to_index(d), and then use that index. I'm
> going to try that, along with using .compressed(), and see how I do.
>
> Is there any possibility of allowing user defined frequencies?
>
> thanks,
> -robert
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
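Pierre's three-step recipe, spelled out on the Labor-Day example from earlier
in the thread (a sketch: the mov_average import path and the use of -999. as
a stand-in missing value are assumptions):

    import numpy as np
    import scikits.timeseries as ts
    from scikits.timeseries.lib.moving_funcs import mov_average

    dates = ts.date_array(ts.Date('B', '2008-08-25'), length=12)
    data = np.ma.masked_values([22.77, 22.95, 23.21, 23.39, 22.67, -999.,
                                22.39, 21.35, 20.34, 20.43, 20.79, 21.06],
                               -999.)               # Labor Day is masked
    series = ts.time_series(data, dates)

    dense = series.compressed()             # 1. drop the masked point and its date
    smooth = mov_average(dense, 5)          # 2. average over trading days only
    smooth = ts.fill_missing_dates(smooth)  # 3. back to a gap-free 'B' axis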
From simpson at math.toronto.edu  Fri Nov 28 17:38:01 2008
From: simpson at math.toronto.edu (Gideon Simpson)
Date: Fri, 28 Nov 2008 17:38:01 -0500
Subject: [SciPy-user] os x, intel compilers & mkl, and fink python
Message-ID: <10D66598-1DD4-46D9-BC84-5998E06C01F5@math.toronto.edu>

Has anyone gotten the combination of OS X with a fink python distribution to
successfully build numpy/scipy with the intel compilers and the mkl? If so,
how'd you do it?

-gideon

From vginer at gmail.com  Sat Nov 29 06:31:45 2008
From: vginer at gmail.com (Vicent Giner-Bosch)
Date: Sat, 29 Nov 2008 03:31:45 -0800 (PST)
Subject: [SciPy-user] How to use Open Opt
In-Reply-To: <49300664.2030402@american.edu>
References: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com> <492E9020.1090008@scipy.org> <50ed08f40811271002h6a71cacer8509f42462c94c30@mail.gmail.com> <49300664.2030402@american.edu>
Message-ID:

On Nov 28, 3:55 pm, Alan G Isaac wrote:
> On 11/27/2008 1:02 PM Vicent apparently wrote:
>
> > But it doesn't provide any kind of general structure for building
> > algorithms, as I thought in the beginning.
>
> The GenericOpt component (under solvers) is supposed
> to supply such structure.

OK, that is good to know.

I suppose I can get benefits from using that existing general "structure",
can't I?

> However at the moment I do not see most of that code ... ?
> Not sure what happened here.

I hope it can be solved...

Thanks, A.I.

--
Vicent

From matthieu.brucher at gmail.com  Sat Nov 29 07:14:24 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 29 Nov 2008 13:14:24 +0100
Subject: [SciPy-user] How to use Open Opt
In-Reply-To:
References: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com> <492E9020.1090008@scipy.org> <50ed08f40811271002h6a71cacer8509f42462c94c30@mail.gmail.com> <49300664.2030402@american.edu>
Message-ID:

>> The GenericOpt component (under solvers) is supposed
>> to supply such structure.
>
> OK, that is good to know.
>
> I suppose I can get benefits from using that existing general "structure",
> can't I?

Yes, it is. I've provided some basic blocks you can mix together to make the
optimization procedure you want.

>> However at the moment I do not see most of that code ... ?
>> Not sure what happened here.
>
> I hope it can be solved...

There are some samples, for instance in the test scripts. If you have more
specific questions, just ask, I'll try to answer them.

Matthieu
--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From matthieu.brucher at gmail.com  Sat Nov 29 07:20:11 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 29 Nov 2008 13:20:11 +0100
Subject: [SciPy-user] How to use Open Opt
In-Reply-To:
References: <72b31426-a312-4876-af93-edf1746e2ac9@g38g2000yqd.googlegroups.com> <492E9020.1090008@scipy.org> <50ed08f40811271002h6a71cacer8509f42462c94c30@mail.gmail.com> <49300664.2030402@american.edu>
Message-ID:

> There are some samples, for instance in the test scripts. If you have
> more specific questions, just ask, I'll try to answer them.

You have an explanation of the structure on the TRAC:

http://scipy.org/scipy/scikits/wiki/Optimization

There is a link to a tutorial, and the list of stuff that still needs to be
implemented (and tested, of course). I haven't found the time to do that for
the moment. My job takes a lot of my time, and I do not use the framework
for it.

Matthieu
--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From josef.pktd at gmail.com  Sat Nov 29 16:46:14 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 29 Nov 2008 16:46:14 -0500
Subject: [SciPy-user] new Kolmogorov-Smirnov test
Message-ID: <1cd32cbb0811291346j76d15b66ud2221653de709139@mail.gmail.com>

Since the old scipy.stats.kstest wasn't correct, I spent quite some time
fixing and testing it. Now I know more about the Kolmogorov-Smirnov test
than I wanted to.

The kstest now resembles the one in R and in matlab, giving the option for
two-sided or one-sided tests. The names of the keyword options are a mixture
of matlab and R, which I liked best.

Since the exact distribution of the two-sided test is not available in
scipy, I use an approximation that seems to work very well. In several Monte
Carlo studies against R, I get very close results, especially for small
p-values. (For those interested: for small p-values, I use ksone.sf(D,n)*2;
for large p-values or large n, I use the asymptotic distribution kstwobign.)

example signature and options:

kstest(x, testdistfn.name, alternative='unequal', mode='approx')
kstest(x, testdistfn.name, alternative='unequal', mode='asymp')
kstest(x, testdistfn.name, alternative='larger')
kstest(x, testdistfn.name, alternative='smaller')

Below is the Monte Carlo for the case when the random variable and the
hypothesized distribution both are standard normal (with sample size 100 and
1000 replications). Rejection rates are very close to alpha levels. It also
contains the mean absolute error MAE for the old kstest. I also checked
mean-shifted normal random variables. In all cases that I tried, I get
exactly the same rejection rates as in R.

For details see doc string or source. I attach the file to a separate email,
to get around the attachment size limit.

I intend to put this in trunk tomorrow; review and comments are welcome.
Josef

data generation distribution is norm, hypothesis is norm
==================================================
n = 100, loc = 0.000000 scale = 1.000000, n_repl = 1000
columns: D, pval
rows are
kstest(x, testdistfn.name, alternative='unequal', mode='approx')
kstest(x, testdistfn.name, alternative='unequal', mode='asymp')
kstest(x, testdistfn.name, alternative='larger')
kstest(x, testdistfn.name, alternative='smaller')

Results for comparison with R:

MAE old kstest
[[ 0.00453195  0.19152727]
 [ 0.00453195  0.2101139 ]
 [ 0.02002774  0.19145982]
 [ 0.02880553  0.26650226]]
MAE new kstest
[[  1.87488913e-17   1.07738517e-02]
 [  1.87488913e-17   1.91763848e-06]
 [  2.38576520e-17   8.90287843e-16]
 [  1.41312743e-17   9.92428362e-16]]
percent count absdev > 0.005
[[  0.   53.9]
 [  0.    0. ]
 [  0.    0. ]
 [  0.    0. ]]
percent count absdev > 0.01
[[  0.   24.3]
 [  0.    0. ]
 [  0.    0. ]
 [  0.    0. ]]
percent count abs percent dev > 1%
[[  0.   51.8]
 [  0.    0. ]
 [  0.    0. ]
 [  0.    0. ]]
percent count abs percent dev > 10%
[[ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]]
new: count rejection at 1% significance
[ 0.01   0.008  0.009  0.014]
R: proportion of rejection at 1% significance
[ 0.01   0.008  0.009  0.014]
new: proportion of rejection at 5% significance
[ 0.054  0.048  0.048  0.06 ]
R: proportion of rejection at 5% significance
[ 0.054  0.048  0.048  0.06 ]
new: proportion of rejection at 10% significance
[ 0.108  0.096  0.095  0.109]
R: proportion of rejection at 10% significance
[ 0.108  0.096  0.095  0.109]

From josef.pktd at gmail.com  Sat Nov 29 16:48:20 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 29 Nov 2008 16:48:20 -0500
Subject: [SciPy-user] new Kolmogorov-Smirnov test
In-Reply-To: <1cd32cbb0811291346j76d15b66ud2221653de709139@mail.gmail.com>
References: <1cd32cbb0811291346j76d15b66ud2221653de709139@mail.gmail.com>
Message-ID: <1cd32cbb0811291348r7689f4c8mf6096038d11bb8bc@mail.gmail.com>

attachment new_kstest.py

Josef
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: new_kstest.py
URL:
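A quick sanity check of the new interface as described above (the keyword
names follow the signatures quoted in the announcement; under the null
hypothesis the reported p-value should be large most of the time):

    import numpy as np
    from scipy import stats

    np.random.seed(0)
    x = stats.norm.rvs(size=100)        # data actually drawn from the null

    D, pval = stats.kstest(x, 'norm', alternative='unequal', mode='approx')
    print D, pval                       # large p: no evidence against H0

    # one-sided alternatives
    print stats.kstest(x, 'norm', alternative='larger')
    print stats.kstest(x, 'norm', alternative='smaller')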
From jsalvati at u.washington.edu  Sun Nov 30 02:45:51 2008
From: jsalvati at u.washington.edu (John Salvatier)
Date: Sat, 29 Nov 2008 23:45:51 -0800
Subject: [SciPy-user] Is it possible to pass Fortran derived data types to Python via C and SWIG?
Message-ID: <113e17f20811292345k7cab3263macda578df9189876@mail.gmail.com>

I have a Fortran 90 algorithm which uses a derived data type to return data,
and I would like to make a python wrapper for this algorithm. I understand
that f2py cannot wrap derived data types; is it possible to do so with a C
interface for the Fortran algorithm and SWIG? I would have to pass the
derived data type into a C struct and then to Python.

Best Regards,
John Salvatier
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp  Sun Nov 30 03:49:09 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 30 Nov 2008 17:49:09 +0900
Subject: [SciPy-user] Is it possible to pass Fortran derived data types to Python via C and SWIG?
In-Reply-To: <113e17f20811292345k7cab3263macda578df9189876@mail.gmail.com>
References: <113e17f20811292345k7cab3263macda578df9189876@mail.gmail.com>
Message-ID: <49325385.9090302@ar.media.kyoto-u.ac.jp>

John Salvatier wrote:
> I have a Fortran 90 algorithm which uses a derived data type to return
> data, and I would like to make a python wrapper for this algorithm. I
> understand that f2py cannot wrap derived data types; is it possible to
> do so with a C interface for the Fortran algorithm and SWIG? I would
> have to pass the derived data type into a C struct and then to Python.

It is possible as long as you can pass the structure from fortran to C. I
don't know anything about Fortran derived data types, but if it is a
non-trivial object (more than a set of fundamental types), I am afraid it
will be difficult. Does F90 support POD data? Otherwise, you will need a
scheme for marshalling your data from Fortran to C (to match exactly how the
structure would look in C at the binary level).

David

From matthieu.brucher at gmail.com  Sun Nov 30 07:21:20 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 30 Nov 2008 13:21:20 +0100
Subject: [SciPy-user] Is it possible to pass Fortran derived data types to Python via C and SWIG?
In-Reply-To: <49325385.9090302@ar.media.kyoto-u.ac.jp>
References: <113e17f20811292345k7cab3263macda578df9189876@mail.gmail.com> <49325385.9090302@ar.media.kyoto-u.ac.jp>
Message-ID:

2008/11/30 David Cournapeau :
> John Salvatier wrote:
>> I have a Fortran 90 algorithm which uses a derived data type to return
>> data, and I would like to make a python wrapper for this algorithm. I
>> understand that f2py cannot wrap derived data types; is it possible to
>> do so with a C interface for the Fortran algorithm and SWIG? I would
>> have to pass the derived data type into a C struct and then to Python.
>
> It is possible as long as you can pass the structure from fortran to C. I
> don't know anything about Fortran derived data types, but if it is a
> non-trivial object (more than a set of fundamental types), I am afraid it
> will be difficult. Does F90 support POD data? Otherwise, you will need a
> scheme for marshalling your data from Fortran to C (to match exactly how
> the structure would look in C at the binary level).
>
> David

I've read an article (I don't remember where, though; possibly CiSE) that
stated that it's really not an easy task, as each Fortran compiler can do as
it pleases. So depending on the compiler and the Fortran standard, it can be
possible, or not. Since there are no guarantees, you should write a function
that transforms the Fortran structure into several pieces that are then
passed to the C function.

Matthieu
--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From jeremy at jeremysanders.net  Sun Nov 30 08:14:55 2008
From: jeremy at jeremysanders.net (Jeremy Sanders)
Date: Sun, 30 Nov 2008 13:14:55 +0000
Subject: [SciPy-user] ANN: Veusz 1.2.1 - a scientific plotting package
Message-ID:

Note that this release includes binaries for Linux/Windows/MacOSX. The
embedding interface is now more robust and works under all platforms,
including Windows.

Veusz 1.2.1
-----------
Velvet Ember Under Sky Zenith
-----------------------------
http://home.gna.org/veusz/

Veusz is Copyright (C) 2003-2008 Jeremy Sanders
Licenced under the GPL (version 2 or greater).

Veusz is a scientific plotting package. It is written in Python, using PyQt4
for display and user-interfaces, and numpy for handling the numeric data.
Veusz is designed to produce publication-ready Postscript/PDF output. The
user interface aims to be simple, consistent and powerful.
Veusz provides a GUI, command line, embedding and scripting interface (based
on Python) to its plotting facilities. It also allows for manipulation and
editing of datasets.

Change in 1.2.1:
 * Fix crash when adding a key without any key text defined.

Changes in 1.2:
 * Boxes, ellipses, lines, arrows and image files can now be added to the
   plot or page and interactively adjusted.
 * Page sizes, graphs, grids and axes can be interactively adjusted.
 * Plot keys can have multiple columns.
 * Error bars can have cross-ends.
 * Several user interface usability enhancements.
 * Embedding interface has been rewritten to be more robust. It now uses
   multiple processes and sockets.
 * Embedding now works fully on Windows.
 * Embedding interface has been expanded:
   - Zoom width, height and page options for zooming graph to window
   - Dynamically change update interval
   - Move between pages of documents
   - Open up more than one view onto a document
 * PDF export fixed for recent versions of Qt
 * Quite a lot of minor bug fixes

Features of package:
 * X-Y plots (with errorbars)
 * Line and function plots
 * Contour plots
 * Images (with colour mappings and colorbars)
 * Stepped plots (for histograms)
 * Fitting functions to data
 * Stacked plots and arrays of plots
 * Plot keys
 * Plot labels
 * Shapes and arrows on plots
 * LaTeX-like formatting for text
 * EPS/PDF/PNG/SVG export
 * Scripting interface
 * Dataset creation/manipulation
 * Embed Veusz within other programs
 * Text, CSV and FITS importing

Requirements:
 Python (2.3 or greater required) http://www.python.org/
 Qt >= 4.3 (free edition) http://www.trolltech.com/products/qt/
 PyQt >= 4.3 (SIP is required to be installed first)
   http://www.riverbankcomputing.co.uk/pyqt/
   http://www.riverbankcomputing.co.uk/sip/
 numpy >= 1.0 http://numpy.scipy.org/

Optional:
 Microsoft Core Fonts (recommended for nice output)
   http://corefonts.sourceforge.net/
 PyFITS >= 1.1 (optional for FITS import)
   http://www.stsci.edu/resources/software_hardware/pyfits

For documentation on using Veusz, see the "Documents" directory. The manual
is in pdf, html and text format (generated from docbook).

Issues:
 * Can be very slow to plot large datasets if antialiasing is enabled. Right
   click on graph and disable antialias to speed up output.

If you enjoy using Veusz, I would love to hear from you. Please join the
mailing lists at https://gna.org/mail/?group=veusz to discuss new features
or if you'd like to contribute code. The latest code can always be found in
the SVN repository.

Jeremy Sanders

From berthold at xn--hllmanns-n4a.de  Sun Nov 30 16:38:14 2008
From: berthold at xn--hllmanns-n4a.de (Berthold Höllmann)
Date: Sun, 30 Nov 2008 21:38:14 +0000 (UTC)
Subject: [SciPy-user] Is it possible to pass Fortran derived data types to Python via C and SWIG?
References: <113e17f20811292345k7cab3263macda578df9189876@mail.gmail.com> <49325385.9090302@ar.media.kyoto-u.ac.jp>
Message-ID:

Matthieu Brucher <matthieu.brucher at gmail.com> writes:

> 2008/11/30 David Cournapeau <david at ar.media.kyoto-u.ac.jp>:
> > John Salvatier wrote:
> >> I have a Fortran 90 algorithm which uses a derived data type to return
> >> data, and I would like to make a python wrapper for this algorithm. I
> >> understand that f2py cannot wrap derived data types; is it possible to
> >> do so with a C interface for the Fortran algorithm and SWIG? I would
> >> have to pass the derived data type into a C struct and then to Python.
> >
> > It is possible as long as you can pass the structure from fortran to C.
> > I don't know anything about Fortran derived data types, but if it is a
> > non-trivial object (more than a set of fundamental types), I am afraid
> > it will be difficult. Does F90 support POD data? Otherwise, you will
> > need a scheme for marshalling your data from Fortran to C (to match
> > exactly how the structure would look in C at the binary level).
> >
> > David
>
> I've read an article (I don't remember where, though; possibly CiSE) that
> stated that it's really not an easy task, as each Fortran compiler can do
> as it pleases. So depending on the compiler and the Fortran standard, it
> can be possible, or not. Since there are no guarantees, you should write a
> function that transforms the Fortran structure into several pieces that
> are then passed to the C function.
>
> Matthieu

A feasible way to achieve this would be to write a Fortran wrapper around
your routine(s) that decomposes your derived data type to standard types and
exposes these in the interface. Then you can compose the derived data type
again in the wrapper and pass it to the original routine.::

  module geom
    type Point
      real :: x, y
    end type Point
    type Circle
      type (Point) :: Center
      real :: Radius
    end type Circle
  end module geom

  subroutine test(c)
    use geom
    type (Circle) :: c
    print*, c%Radius
    print*, c%Center%X
    print*, c%Center%Y
  end subroutine test

  subroutine w_test(x, y, r)
    use geom
    real :: x, y, r
    type (Circle) :: c
    c%Radius = r
    c%Center%X = x
    c%Center%Y = y
    call test(c)
  end subroutine w_test

Wrapping w_test should be trivial using f2py.

Regards
Berthold
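Assuming Berthold's code is saved as geom.f90 (the file name and the
geomwrap module name are made up for illustration), the build and call could
look like this; f2py exposes the plain subroutine w_test directly:

    $ f2py -c -m geomwrap geom.f90

    >>> import geomwrap
    >>> geomwrap.w_test(1.0, 2.0, 0.5)   # test() prints the radius and centre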
From simpson at math.toronto.edu  Sun Nov 30 19:01:58 2008
From: simpson at math.toronto.edu (Gideon Simpson)
Date: Sun, 30 Nov 2008 19:01:58 -0500
Subject: [SciPy-user] fminbound vs. brent
Message-ID: <4F87D554-DBC5-4D6B-B2C1-E8AB2EC5E58C@math.toronto.edu>

Based on the documentation, I'm a bit unclear on how fminbound and brent, as
optimization algorithms, differ. Could someone clarify this for me?

-gideon

From dwf at cs.toronto.edu  Fri Nov 28 19:07:46 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Fri, 28 Nov 2008 19:07:46 -0500
Subject: [SciPy-user] os x, intel compilers & mkl, and fink python
In-Reply-To: <10D66598-1DD4-46D9-BC84-5998E06C01F5@math.toronto.edu>
References: <10D66598-1DD4-46D9-BC84-5998E06C01F5@math.toronto.edu>
Message-ID:

On 28-Nov-08, at 5:38 PM, Gideon Simpson wrote:
> Has anyone gotten the combination of OS X with a fink python
> distribution to successfully build numpy/scipy with the intel
> compilers and the mkl? If so, how'd you do it?

IIRC David Cournapeau has had some success building numpy with MKL on OS X,
but I doubt it was the fink distribution.

Is there a reason you prefer fink's python rather than the Python.org
universal framework build? Also, which particular python version (2.4, 2.5,
2.6? I know fink typically has a couple).

David

From dwf at cs.toronto.edu  Fri Nov 28 03:13:44 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Fri, 28 Nov 2008 03:13:44 -0500
Subject: [SciPy-user] trouble saving sparse matrix
In-Reply-To:
References:
Message-ID: <9C91A471-6224-4C91-B993-B909D6AEF4C7@cs.toronto.edu>

On 26-Nov-08, at 11:51 AM, Robin wrote:
> Hi,
>
> I have a large sparse matrix (about 9GB):
>
> In [18]: a.A
> Out[18]:
> <21699x1048575 sparse matrix of type ''
> with 1035272192 stored elements in Compressed Sparse Column format>
>
> but I am having trouble saving it.
>
> I am on 64 bit linux.
>
> The problem is whatever I try I get:
> SystemError: Negative size passed to PyString_FromStringAndSize
>
> This happens with cPickle.dump, np.save, sp.io.savemat etc.

How are you using np.save? (just to be sure)

Have you tried saving the individual component vectors? x.data, x.indices,
x.indptr? I usually use np.save() on each one of these, as well as
array(x.shape), or equivalently

    np.savez('mysparsematrix.npz', data=x.data, indices=x.indices,
             indptr=x.indptr, shape=np.array(x.shape))

is a nice way to save sparse matrices, I've found. Then restoring is as
easy as

    f = np.load('mysparsematrix.npz')
    mymat = sp.sparse.csc_matrix((f['data'], f['indices'], f['indptr']),
                                 shape=f['shape'])

At any rate, calling np.save on them individually might help you isolate
the problem.

Regards,
David
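For completeness, the whole round trip on a small matrix (a sketch: the file
name is arbitrary and the assert is just a self-check):

    import numpy as np
    from scipy import sparse

    x = sparse.csc_matrix(np.array([[1., 0., 2.],
                                    [0., 0., 3.],
                                    [4., 5., 0.]]))

    np.savez('mysparsematrix.npz', data=x.data, indices=x.indices,
             indptr=x.indptr, shape=np.array(x.shape))

    f = np.load('mysparsematrix.npz')
    y = sparse.csc_matrix((f['data'], f['indices'], f['indptr']),
                          shape=tuple(f['shape']))

    assert (x.todense() == y.todense()).all()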