From wkerzendorf at googlemail.com Tue Sep 1 03:19:36 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Tue, 1 Sep 2009 09:19:36 +0200
Subject: [SciPy-User] scipy on snow leopard
Message-ID: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com>

Hello,

I first tried to install scipy using the stable build on snow leopard.
I am using the python that came with it. It complains about "can't
install when cross compiling".
Then I installed numpy svn and scipy svn; both compile fine, but when
importing scipy.linalg I get:
site-packages/scipy/linalg/flapack.so: mach-o, but wrong architecture
or when importing interpolate:
site-packages/scipy/special/_cephes.so: mach-o, but wrong architecture

Please help
Wolfgang

From robert.kern at gmail.com Tue Sep 1 03:24:37 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Sep 2009 02:24:37 -0500
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com>
Message-ID: <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com>

On Tue, Sep 1, 2009 at 02:19, Wolfgang Kerzendorf wrote:
> Hello,
>
> I first tried to install scipy using the stable build on snow leopard.
> I am using the python that came with it. It complains about "can't
> install when cross compiling".
> Then I installed numpy svn and scipy svn; both compile fine, but when
> importing scipy.linalg I get:
> site-packages/scipy/linalg/flapack.so: mach-o, but wrong architecture
> or when importing interpolate:
> site-packages/scipy/special/_cephes.so: mach-o, but wrong architecture

Please provide the full output of the error messages and the full
build log (it's probably too big to attach to the list; you may need
to post it somewhere and show us the URL).

What Fortran compiler are you using and what version? Please state
exactly what website you downloaded it from.

Please show us the output of

$ file site-packages/scipy/linalg/flapack.so

Probably, the required architecture flags have changed for the build
of Python on Snow Leopard and numpy.distutils needs to be updated.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From wkerzendorf at googlemail.com Tue Sep 1 04:25:47 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Tue, 1 Sep 2009 10:25:47 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com>
Message-ID: <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com>

Dear Robert,

Here is all the information I can gather:

I tried several gfortrans; here is the one that I'm using now:

GNU Fortran (GCC) 4.5.0 20090604 (experimental) [trunk revision 148180]
Copyright (C) 2009 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING

----
Here is the full error message:

In [1]: import scipy.interpolate
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)

/Users/wkerzend/<ipython console> in <module>()

/Library/Python/2.6/site-packages/scipy/interpolate/__init__.py in <module>()
      5 from info import __doc__
      6
----> 7 from interpolate import *
      8 from fitpack import *
      9

/Library/Python/2.6/site-packages/scipy/interpolate/interpolate.py in <module>()
     11                   dot, poly1d, asarray, intp
     12 import numpy as np
---> 13 import scipy.special as spec
     14 import math
     15

/Library/Python/2.6/site-packages/scipy/special/__init__.py in <module>()
      6 #from special_version import special_version as __version__
      7
----> 8 from basic import *
      9 import specfun
     10 import orthogonal

/Library/Python/2.6/site-packages/scipy/special/basic.py in <module>()
      6
      7 from numpy import *
----> 8 from _cephes import *
      9 import types
     10 import specfun

ImportError: dlopen(/Library/Python/2.6/site-packages/scipy/special/_cephes.so, 2): no suitable image found.  Did find:
        /Library/Python/2.6/site-packages/scipy/special/_cephes.so: mach-o, but wrong architecture

-------------------------

Another thing that I did is change the gnu.py in Numpy because it was
suggested, but that didn't work:
I changed line 261 from ["ppc","i686","x86_64"] to ["x86_64", "i686"]

------
I have the build log in a pastebin @ http://pastebin.com/d67ac1a9e

Thanks very much for your help. Please tell me if there's more
information needed.

Wolfgang

On 01/09/2009, at 9:24 , Robert Kern wrote:

> On Tue, Sep 1, 2009 at 02:19, Wolfgang Kerzendorf wrote:
>> Hello,
>>
>> I first tried to install scipy using the stable build on snow leopard.
>> I am using the python that came with it. It complains about "can't
>> install when cross compiling".
>> Then I installed numpy svn and scipy svn; both compile fine, but when
>> importing scipy.linalg I get:
>> site-packages/scipy/linalg/flapack.so: mach-o, but wrong architecture
>> or when importing interpolate:
>> site-packages/scipy/special/_cephes.so: mach-o, but wrong architecture
>
> Please provide the full output of the error messages and the full
> build log (it's probably too big to attach to the list; you may need
> to post it somewhere and show us the URL).
>
> What Fortran compiler are you using and what version? Please state
> exactly what website you downloaded it from.
>
> Please show us the output of
>
> $ file site-packages/scipy/linalg/flapack.so
>
> Probably, the required architecture flags have changed for the build
> of Python on Snow Leopard and numpy.distutils needs to be updated.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From wkerzendorf at googlemail.com Tue Sep 1 04:35:33 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Tue, 1 Sep 2009 10:35:33 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com>
Message-ID:

Another thing that I forgot: Here's the file information on _cephes.so:

file /Library/Python/2.6/site-packages/scipy/special/_cephes.so
/Library/Python/2.6/site-packages/scipy/special/_cephes.so: Mach-O bundle i386

bye
Wolfgang

On 01/09/2009, at 9:24 , Robert Kern wrote:

> On Tue, Sep 1, 2009 at 02:19, Wolfgang Kerzendorf wrote:
>> Hello,
>>
>> I first tried to install scipy using the stable build on snow leopard.
>> I am using the python that came with it. It complains about "can't
>> install when cross compiling".
>> Then I installed numpy svn and scipy svn; both compile fine, but when
>> importing scipy.linalg I get:
>> site-packages/scipy/linalg/flapack.so: mach-o, but wrong architecture
>> or when importing interpolate:
>> site-packages/scipy/special/_cephes.so: mach-o, but wrong architecture
>
> Please provide the full output of the error messages and the full
> build log (it's probably too big to attach to the list; you may need
> to post it somewhere and show us the URL).
>
> What Fortran compiler are you using and what version? Please state
> exactly what website you downloaded it from.
>
> Please show us the output of
>
> $ file site-packages/scipy/linalg/flapack.so
>
> Probably, the required architecture flags have changed for the build
> of Python on Snow Leopard and numpy.distutils needs to be updated.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From ivo.maljevic at gmail.com Tue Sep 1 12:37:33 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Tue, 1 Sep 2009 12:37:33 -0400
Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions
Message-ID: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com>

I think this subject was brought up before in the form of a question
about which distro works better with scientific tools, so I wanted to
see which linux distribution does a better job at packaging scipy/numpy
without being subjective (my preferred distro overall is Ubuntu).

I've tested 4 major distributions: openSUSE 11.1, Ubuntu 9.04, Mandriva
2009.1 and Fedora 11.
It seems like none of them does a perfect job, but, still, the clear
winner is Fedora 11.

Here is why:

1. It is the only distro that has all the dependencies worked out, so
you can run numpy.test() and scipy.test() without having to install
python-nose
2. NumPy passes all the tests without a single error, and SciPy has
only one error (see at the bottom)

openSUSE and Mandriva not only do not pass these tests, but end up with
memory corruption and crash python. Ubuntu also fails these tests, but
there is no memory corruption.
(BTW, I opened a bug on openSUSE's site)

Mandriva differs from the others because it packages scimath, which
includes additional enthought functions, which is a good thing, if only
it didn't corrupt memory with a simple:

>>> from scipy.special import chebyt
>>> chebyt(12)(-0.5)

call. openSUSE's scipy also crashes on this simple test.

All distributions have scipy version 0.7.*, but Ubuntu's NumPy is a bit
older.

Cheers,
Ivo

NumPy test on Fedora 11:

Ran 2030 tests in 64.996s

OK (KNOWNFAIL=1)

SciPy test on Fedora 11:
======================================================================
ERROR: test_implicit (test_odr.TestODR)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/scipy/odr/tests/test_odr.py", line 88, in test_implicit
    out = implicit_odr.run()
  File "/usr/lib/python2.6/site-packages/scipy/odr/odrpack.py", line 1055, in run
    self.output = Output(apply(odr, args, kwds))
TypeError: y must be a sequence or integer (if model is implicit)

----------------------------------------------------------------------
Ran 3394 tests in 212.467s

FAILED (KNOWNFAIL=2, SKIP=17, errors=1)

From sccolbert at gmail.com Tue Sep 1 12:55:34 2009
From: sccolbert at gmail.com (Chris Colbert)
Date: Tue, 1 Sep 2009 12:55:34 -0400
Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions
In-Reply-To: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com>
References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com>
Message-ID: <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com>

What I would like to see is a distribution build numpy and scipy with
threaded atlas support.

As it stands currently, Ubuntu "has atlas support", but it's not
threaded, and the packages are broken...

Until that happens, I'll be rolling my own numpy and scipy from source.

Cheers,
Chris

On Tue, Sep 1, 2009 at 12:37 PM, Ivo Maljevic wrote:
> I think this subject was brought up before in the form of a question about
> which distro works better with scientific tools, so I wanted to see which
> linux distribution does a better job at packaging scipy/numpy without being
> subjective (my preferred distro overall is Ubuntu).
>
> I've tested 4 major distributions: openSUSE 11.1, Ubuntu 9.04, Mandriva
> 2009.1 and Fedora 11.
> It seems like none of them does a perfect job, but, still, the clear winner
> is Fedora 11.
>
> Here is why:
>
> 1. It is the only distro that has all the dependencies worked out, so you
> can run numpy.test() and scipy.test() without having to install python-nose
> 2. NumPy passes all the tests without a single error, and SciPy has only one
> error (see at the bottom)
>
> openSUSE and Mandriva not only do not pass these tests, but end up with
> memory corruption and crash python. Ubuntu also fails these tests,
> but there is no memory corruption. (BTW, I opened a bug on openSUSE's site)
>
> Mandriva differs from the others because it packages scimath, which
> includes additional enthought functions, which is a good thing, if only it
> didn't corrupt memory with a simple:
>
>>>> from scipy.special import chebyt
>>>> chebyt(12)(-0.5)
>
> call. openSUSE's scipy also crashes on this simple test.
>
> All distributions have scipy version 0.7.*, but Ubuntu's NumPy is a bit
> older.
>
> Cheers,
> Ivo
>
> NumPy test on Fedora 11:
>
> Ran 2030 tests in 64.996s
>
> OK (KNOWNFAIL=1)
>
> SciPy test on Fedora 11:
> ======================================================================
> ERROR: test_implicit (test_odr.TestODR)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/scipy/odr/tests/test_odr.py", line 88, in test_implicit
>     out = implicit_odr.run()
>   File "/usr/lib/python2.6/site-packages/scipy/odr/odrpack.py", line 1055, in run
>     self.output = Output(apply(odr, args, kwds))
> TypeError: y must be a sequence or integer (if model is implicit)
>
> ----------------------------------------------------------------------
> Ran 3394 tests in 212.467s
>
> FAILED (KNOWNFAIL=2, SKIP=17, errors=1)
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Tue Sep 1 14:11:16 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Sep 2009 13:11:16 -0500
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com>
Message-ID: <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com>

On Tue, Sep 1, 2009 at 03:25, Wolfgang Kerzendorf wrote:
> Dear Robert,
>
> Here is all the information I can gather:
>
> I tried several gfortrans; here is the one that I'm using now:
>
> GNU Fortran (GCC) 4.5.0 20090604 (experimental) [trunk revision 148180]
> Copyright (C) 2009 Free Software Foundation, Inc.
>
> GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
> You may redistribute copies of GNU Fortran
> under the terms of the GNU General Public License.
> For more information about these matters, see the file named COPYING

And where exactly did you get it from? Does it have x86_64 support?

> Here is the full error message:
>
> In [1]: import scipy.interpolate
> ---------------------------------------------------------------------------
> ImportError                               Traceback (most recent call last)
>
> /Users/wkerzend/<ipython console> in <module>()
>
> /Library/Python/2.6/site-packages/scipy/interpolate/__init__.py in <module>()
>       5 from info import __doc__
>       6
> ----> 7 from interpolate import *
>       8 from fitpack import *
>       9
>
> /Library/Python/2.6/site-packages/scipy/interpolate/interpolate.py in <module>()
>      11                   dot, poly1d, asarray, intp
>      12 import numpy as np
> ---> 13 import scipy.special as spec
>      14 import math
>      15
>
> /Library/Python/2.6/site-packages/scipy/special/__init__.py in <module>()
>       6 #from special_version import special_version as __version__
>       7
> ----> 8 from basic import *
>       9 import specfun
>      10 import orthogonal
>
> /Library/Python/2.6/site-packages/scipy/special/basic.py in <module>()
>       6
>       7 from numpy import *
> ----> 8 from _cephes import *
>       9 import types
>      10 import specfun
>
> ImportError: dlopen(/Library/Python/2.6/site-packages/scipy/special/_cephes.so, 2): no suitable image found.  Did find:
>         /Library/Python/2.6/site-packages/scipy/special/_cephes.so: mach-o, but wrong architecture
>
> -------------------------
>
> Another thing that I did is change the gnu.py in Numpy because it was
> suggested, but that didn't work:
> I changed line 261 from ["ppc","i686","x86_64"] to ["x86_64", "i686"]
>
> ------
> I have the build log in a pastebin @ http://pastebin.com/d67ac1a9e

It looks like we are not passing any architecture flags to gfortran.
The default architecture appears to be i386 for gfortran but is x86_64
for the rest of Python. Your interpreter is starting up in 64-bit mode
and expecting all of its shared libraries to be 64-bit, too, but the
Fortran ones aren't.

Please show us the output of the following:

$ gfortran -arch x86_64 -v

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
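As a quick cross-check of that 64-bit-interpreter point, here is a
minimal standard-library snippet (not from the thread) that reports the
mode the running Python launched in; it prints 64 in a 64-bit process:

import struct
print(struct.calcsize("P") * 8)   # size of a C pointer, in bits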
From wkerzendorf at googlemail.com Tue Sep 1 17:13:08 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Tue, 1 Sep 2009 23:13:08 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com>
Message-ID: <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com>

I played around with the problem today and I still didn't get it to
work. I have the http://hpc.sourceforge.net/ gfortran installed now and
I get the following output with your command:

gfortran -arch x86_64 -v
Using built-in specs.
Target: i386-apple-darwin9.7.0
Configured with: ./configure --enable-languages=c,c++,fortran
Thread model: posix
gcc version 4.4.1 20090623 (prerelease) (GCC)

hope that helps
Wolfgang

On 01/09/2009, at 20:11 , Robert Kern wrote:

> gfortran -arch x86_64 -v

From robert.kern at gmail.com Tue Sep 1 17:28:38 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Sep 2009 16:28:38 -0500
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com>
Message-ID: <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com>

On Tue, Sep 1, 2009 at 16:13, Wolfgang Kerzendorf wrote:
> I played around with the problem today and I still didn't get it to
> work. I have the http://hpc.sourceforge.net/ gfortran installed now and I
> get the following output with your command:
>
> gfortran -arch x86_64 -v
> Using built-in specs.
> Target: i386-apple-darwin9.7.0

We're looking for "Target: i686-...." on this line. Try changing line
21 in gnu.py from this:

    "x86_64": r"^Target: (i686-.*)$",

to this:

    "x86_64": r"^Target: (i[36]86-.*)$",

I highly recommend using the gfortran builds from
http://r.research.att.com/tools/, though. The hpc.sf.net ones are very
difficult to work with. They are frequently buggy, built strangely,
and they do not have version numbers attached to the tarballs, so it
makes debugging installation problems with them incredibly difficult.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
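For illustration, a minimal check of why the widened pattern matters;
the two regex strings and the sample Target line come from this thread,
while the test snippet itself does not:

import re

line = "Target: i386-apple-darwin9.7.0"             # from gfortran -arch x86_64 -v
print(re.search(r"^Target: (i686-.*)$", line))      # None: the old pattern misses i386
print(re.search(r"^Target: (i[36]86-.*)$", line))   # match object: i386 is now accepted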
From wkerzendorf at googlemail.com Tue Sep 1 18:12:54 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Wed, 2 Sep 2009 00:12:54 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com>
Message-ID: <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com>

I used the compiler you suggested and changed the lines in gnu.py, but
I still get:

mithrandir:special wkerzend$ file _cephes.so
_cephes.so: Mach-O bundle i386

that's in the build dir.

On 01/09/2009, at 23:28 , Robert Kern wrote:

> On Tue, Sep 1, 2009 at 16:13, Wolfgang Kerzendorf wrote:
>> I played around with the problem today and I still didn't get it to
>> work. I have the http://hpc.sourceforge.net/ gfortran installed now and I
>> get the following output with your command:
>>
>> gfortran -arch x86_64 -v
>> Using built-in specs.
>> Target: i386-apple-darwin9.7.0
>
> We're looking for "Target: i686-...." on this line. Try changing line
> 21 in gnu.py from this:
>
>     "x86_64": r"^Target: (i686-.*)$",
>
> to this:
>
>     "x86_64": r"^Target: (i[36]86-.*)$",
>
> I highly recommend using the gfortran builds from
> http://r.research.att.com/tools/, though. The hpc.sf.net ones are very
> difficult to work with. They are frequently buggy, built strangely,
> and they do not have version numbers attached to the tarballs, so it
> makes debugging installation problems with them incredibly difficult.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Tue Sep 1 18:15:09 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Sep 2009 17:15:09 -0500
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com>
Message-ID: <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com>

On Tue, Sep 1, 2009 at 17:12, Wolfgang Kerzendorf wrote:
> I used the compiler you suggested and changed the lines in gnu.py, but
> I still get:
>
> mithrandir:special wkerzend$ file _cephes.so
> _cephes.so: Mach-O bundle i386
>
> that's in the build dir.

Build log?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From wkerzendorf at googlemail.com Tue Sep 1 18:23:26 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Wed, 2 Sep 2009 00:23:26 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com>
Message-ID:

here you go: http://pastebin.com/d27d883b9

On 02/09/2009, at 24:15 , Robert Kern wrote:

> On Tue, Sep 1, 2009 at 17:12, Wolfgang Kerzendorf wrote:
>> I used the compiler you suggested and changed the lines in gnu.py, but
>> I still get:
>>
>> mithrandir:special wkerzend$ file _cephes.so
>> _cephes.so: Mach-O bundle i386
>>
>> that's in the build dir.
>
> Build log?
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Tue Sep 1 18:31:40 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Sep 2009 17:31:40 -0500
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com>
Message-ID: <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com>

On Tue, Sep 1, 2009 at 17:23, Wolfgang Kerzendorf wrote:
> here you go: http://pastebin.com/d27d883b9

Well, you can play a bit with adding print statements inside
get_flags() and get_flags_linker_so() to figure out if arch_flags is
being set correctly. Also, do you have any environment variables like
LDFLAGS or FFLAGS? They might be interfering.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From nathanielpeterson08 at gmail.com Tue Sep 1 22:38:04 2009
From: nathanielpeterson08 at gmail.com (nathanielpeterson08 at gmail.com)
Date: Tue, 01 Sep 2009 22:38:04 -0400
Subject: [SciPy-User] Efficient "Interpolation"
Message-ID: <87fxb6jckz.fsf@farmer.myhome.westell.com>

#!/usr/bin/env python
import numpy as np

A = np.array([1,2,4,5,6,8,9])
B = np.array([2,4,5,8])
C = [24,45,77,99]
idx = np.array(B.searchsorted(A, side='right'))
C = np.array([C[0]] + C + [C[-1]])
print(C[idx])

yields

[24 24 45 77 77 99 99]

Does block mode of interpolation have an advantage over this?
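For illustration, the same block lookup wrapped as a reusable function
(the name step_lookup and its argument names are invented for this
sketch; the example values and the expected output come from the post
above):

import numpy as np

def step_lookup(a, edges, values):
    # For each element of `a`, pick the entry of `values` for the
    # interval of `edges` it falls in; points outside clamp to the ends.
    values = list(values)
    idx = np.searchsorted(edges, a, side='right')
    padded = np.array([values[0]] + values + [values[-1]])
    return padded[idx]

A = np.array([1, 2, 4, 5, 6, 8, 9])
B = np.array([2, 4, 5, 8])
C = [24, 45, 77, 99]
print(step_lookup(A, B, C))   # [24 24 45 77 77 99 99]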
From wkerzendorf at googlemail.com Wed Sep 2 02:30:44 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Wed, 2 Sep 2009 08:30:44 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909010024m21c5fca4o6bc5182fe1707db8@mail.gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com> <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com>
Message-ID: <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com>

I have no flags set. Could you give an example of what I'm looking for
with the print statements, and in which file to set them?

thx

On 02/09/2009, at 24:31 , Robert Kern wrote:

> On Tue, Sep 1, 2009 at 17:23, Wolfgang Kerzendorf wrote:
>> here you go: http://pastebin.com/d27d883b9
>
> Well, you can play a bit with adding print statements inside
> get_flags() and get_flags_linker_so() to figure out if arch_flags is
> being set correctly. Also, do you have any environment variables like
> LDFLAGS or FFLAGS? They might be interfering.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Wed Sep 2 02:35:28 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 2 Sep 2009 01:35:28 -0500
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com> <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com> <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com>
Message-ID: <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com>

On Wed, Sep 2, 2009 at 01:30, Wolfgang Kerzendorf wrote:
> I have no flags set. Could you give an example of what I'm looking for
> with the print statements, and in which file to set them?

Print out the variable arch_flags in the methods
Gnu95FCompiler.get_flags() and Gnu95FCompiler.get_flags_linker_so() in
numpy/distutils/fcompiler/gnu.py

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From wkerzendorf at googlemail.com Wed Sep 2 03:47:38 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Wed, 2 Sep 2009 09:47:38 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com> <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com> <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com> <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com>
Message-ID: <2CAF9A7C-074F-4A12-84C3-B0EB6C700FC6@gmail.com>

I have done this, and they are empty, at least as far as I can see.
The build log is at:
http://pastebin.com/d1be916be

cheers
Wolfgang

On 02/09/2009, at 8:35 , Robert Kern wrote:

> On Wed, Sep 2, 2009 at 01:30, Wolfgang Kerzendorf wrote:
>> I have no flags set. Could you give an example of what I'm looking for
>> with the print statements, and in which file to set them?
>
> Print out the variable arch_flags in the methods
> Gnu95FCompiler.get_flags() and Gnu95FCompiler.get_flags_linker_so() in
> numpy/distutils/fcompiler/gnu.py
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From wkerzendorf at googlemail.com Wed Sep 2 03:49:00 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Wed, 2 Sep 2009 09:49:00 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <0CFCCB33-CD9E-4AB2-A721-7CECB198EBA7@gmail.com> <3d375d730909011111q35383721wa0340935655d0065@mail.gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com> <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com> <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com> <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com>
Message-ID:

Ah, I forgot to add: the statement is called

    "Wolfgang test %s" % arch_flags

On 02/09/2009, at 8:35 , Robert Kern wrote:

> On Wed, Sep 2, 2009 at 01:30, Wolfgang Kerzendorf wrote:
>> I have no flags set. Could you give an example of what I'm looking for
>> with the print statements, and in which file to set them?
>
> Print out the variable arch_flags in the methods
> Gnu95FCompiler.get_flags() and Gnu95FCompiler.get_flags_linker_so() in
> numpy/distutils/fcompiler/gnu.py
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From timmichelsen at gmx-topmail.de Wed Sep 2 07:38:16 2009
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Wed, 2 Sep 2009 11:38:16 +0000 (UTC)
Subject: [SciPy-User] scikits.timeseries: Report options question
Message-ID:

Hello,
I noticed that if header_row is specified, a header_char='-' is added
automatically. I had to add header_char='' to suppress it.

Is this wanted?

According to
http://pytseries.sourceforge.net/lib.report.html#scikits.timeseries.lib.reportlib.Report
this should be optional.

Kind regards,
Timmie

From timmichelsen at gmx-topmail.de Wed Sep 2 07:41:12 2009
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Wed, 2 Sep 2009 11:41:12 +0000 (UTC)
Subject: [SciPy-User] Scikits Trac
Message-ID:

Hello,
may the administrator for the Scikits Trac site [1] add a password
recovery request like on the Scipy Trac [2].

Thanks a lot,
Timmie

[1]: http://www.scipy.org/scipy/scikits/wiki
[2]: Forgot your password? - http://projects.scipy.org/scipy/reset_password

From timmichelsen at gmx-topmail.de Wed Sep 2 07:58:37 2009
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Wed, 2 Sep 2009 11:58:37 +0000 (UTC)
Subject: [SciPy-User] scikits.timeseries: saving to binary file
Message-ID:

Hello,
according to the timeseries docs I can save timeseries objects to
a) text files
http://pytseries.sourceforge.net/core.timeseries.io.html#saving-a-timeseries-to-a-text-file
b) hdf5 using pytables
http://pytseries.sourceforge.net/core.timeseries.io.html#saving-a-timeseries-to-a-text-file

Why can I not store it as a numpy binary file using numpy.save without
losing the dates and frequency information?

Thanks,
Timmie

From gokhansever at gmail.com Wed Sep 2 10:38:22 2009
From: gokhansever at gmail.com (Gökhan Sever)
Date: Wed, 2 Sep 2009 09:38:22 -0500
Subject: [SciPy-User] Fastest way to parse a specific binary file
Message-ID: <49d6b3500909020738g53befd6ey4a6af8e269162510@mail.gmail.com>

Hello,

I want to be able to parse a binary file which holds information about
the experiment configuration and, obviously, the data. Both the
configuration and data sections are variable-length. A chunk of this
data is shown below (after a binary read operation):

'\x00\x00@\x00$\x00\x02\x00\x12\x00\xff\x00\x00\x00U\xaa\xfa\xffd\x00\x08\x00\x01\x00\x08\x00\xff\x00\x00\x00U\xaa\xfb\xffl\x00\xab\x00\x01\x00\xab\x00\xff\x00\x00\x00U\xaa\xe7\x03\x17\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00U\xaa\xd9\x07\x04\x00\x02\x00\r\x00\x06\x00\x03\x00\x00\x00\x01\x00\x00\x00\xd9\x07\x04\x00\x02\x00\r\x00\x06\x00\x03\x00\x00\x00\x01\x00\x00\x00prj.300\x00; Version = 1\n',
'ProjectName = PME1 2009 King Air N825ST\n',
'FlightId = \n',
'AircraftType = WMI King Air 200\n',
'AircraftId = N825ST\n',
'OperatorName = Weather Modification Inc.\n',
'Comments = \n',
'\x00\x00@

In binary form the file is 1.3MB, and when written to a txt file it
expands to 3.7MB, totalling approximately 4 million characters. When
fully processed (with an IDL code) it produces 86 separate
configuration files and 46 ascii files for data, covering about 10-15
different instruments in various combinations plus sampling rates.

I attempted to use the re module; however, parsing the file takes much
longer than I expected. What would be the wisest and fastest way to
tackle this issue?
Upon successful re-construction of the data and metadata, I am planning
to use a more modular structure like HDF5 or netCDF4 for easy data
storage and analysis.

Thank you.

--
Gökhan

From robert.kern at gmail.com Wed Sep 2 11:06:21 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 2 Sep 2009 10:06:21 -0500
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <2CAF9A7C-074F-4A12-84C3-B0EB6C700FC6@gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com> <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com> <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com> <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com> <2CAF9A7C-074F-4A12-84C3-B0EB6C700FC6@gmail.com>
Message-ID: <3d375d730909020806g70ad8f92uc70ed50b8b353ee7@mail.gmail.com>

On Wed, Sep 2, 2009 at 02:47, Wolfgang Kerzendorf wrote:
> I have done this, and they are empty, at least as far as I can see.
> The build log is at:
> http://pastebin.com/d1be916be

Okay, in the _can_target() function, print newcmd, st, and out.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From pgmdevlist at gmail.com Wed Sep 2 11:14:50 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 2 Sep 2009 11:14:50 -0400
Subject: [SciPy-User] scikits.timeseries: saving to binary file
In-Reply-To:
References:
Message-ID: <586E5195-C0E3-4A5F-AFA0-9001930E897F@gmail.com>

On Sep 2, 2009, at 7:58 AM, Tim Michelsen wrote:
> Hello,
> according to the timeseries docs I can save timeseries objects to
> a) text files
> http://pytseries.sourceforge.net/core.timeseries.io.html#saving-a-timeseries-to-a-text-file
> b) hdf5 using pytables
> http://pytseries.sourceforge.net/core.timeseries.io.html#saving-a-timeseries-to-a-text-file
>
> Why can I not store it as a numpy binary file using numpy.save
> without losing the dates and frequency information?

Because neither Matt nor I had any need for it so far. We'd be happy to
consider a patch, of course. Note that you could try to convert the
series to a structured array (with 'dates', 'data' and 'mask' fields)
with the "toflex" method and save the resulting ndarray. However,
you'll probably lose the frequency information (unless you find a trick
to save meta-information, but as I don't have any experience with
np.save, I won't be able to help you).
Keep us posted.
P.

From sturla at molden.no Wed Sep 2 11:34:36 2009
From: sturla at molden.no (Sturla Molden)
Date: Wed, 02 Sep 2009 17:34:36 +0200
Subject: [SciPy-User] [Numpy-discussion] Fastest way to parse a specific binary file
In-Reply-To: <49d6b3500909020738g53befd6ey4a6af8e269162510@mail.gmail.com>
References: <49d6b3500909020738g53befd6ey4a6af8e269162510@mail.gmail.com>
Message-ID: <4A9E908C.1070205@molden.no>

Gökhan Sever wrote:
> What would be the wisest and fastest way to tackle this issue?

Get the format, read the binary data directly, skip the ascii/regex
part.

I sometimes use recarrays with formatted binary data: just construct a
dtype and use numpy.fromfile to read. That works when the binary file
stores C structs written successively.

Sturla Molden
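For illustration, a minimal sketch of that recarray/fromfile approach
for a file of fixed-size C structs; the file name, field names, and
record layout below are invented for the example, not taken from the
actual format:

import numpy as np

# hypothetical record: a 2-byte tag, a 4-byte count, 8 float32 samples
rec = np.dtype([('tag', '<u2'), ('count', '<u4'), ('samples', '<f4', 8)])

records = np.fromfile('data.bin', dtype=rec)   # one pass, no parsing loop
print(records['tag'][:5])
print(records['samples'].mean(axis=1)[:5])     # per-record mean of the samples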
From bartomas at gmail.com Wed Sep 2 11:51:45 2009
From: bartomas at gmail.com (bar tomas)
Date: Wed, 2 Sep 2009 16:51:45 +0100
Subject: [SciPy-User] Equal area grid - lat/long conversion
Message-ID:

Hi,
I've got some georeferenced lat/long data and I'd like to construct an
equal area grid (for instance 50km * 50km) to examine and calculate
some functions on the data per each cell of the grid.
Is there any python script or package out there that can construct an
equal area grid and convert from grid indexes to lat/long relative to
some projection?

Thanks a lot
T.Bar

From robert.kern at gmail.com Wed Sep 2 11:55:06 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 2 Sep 2009 10:55:06 -0500
Subject: [SciPy-User] Equal area grid - lat/long conversion
In-Reply-To:
References:
Message-ID: <3d375d730909020855k41fa3fd1x74b5c64caa412ee2@mail.gmail.com>

On Wed, Sep 2, 2009 at 10:51, bar tomas wrote:
> Hi,
> I've got some georeferenced lat/long data and I'd like to construct an equal
> area grid (for instance 50km * 50km) to examine and calculate some functions
> on the data per each cell of the grid.
> Is there any python script or package out there that can construct an equal
> area grid and convert from grid indexes to lat/long relative to some
> projection?

http://code.google.com/p/pyproj/

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From bartomas at gmail.com Wed Sep 2 11:59:40 2009
From: bartomas at gmail.com (bar tomas)
Date: Wed, 2 Sep 2009 16:59:40 +0100
Subject: [SciPy-User] Equal area grid - lat/long conversion
In-Reply-To: <3d375d730909020855k41fa3fd1x74b5c64caa412ee2@mail.gmail.com>
References: <3d375d730909020855k41fa3fd1x74b5c64caa412ee2@mail.gmail.com>
Message-ID:

Fabulous! Many thanks

On Wed, Sep 2, 2009 at 4:55 PM, Robert Kern wrote:

> On Wed, Sep 2, 2009 at 10:51, bar tomas wrote:
> > Hi,
> > I've got some georeferenced lat/long data and I'd like to construct an
> equal
> > area grid (for instance 50km * 50km) to examine and calculate some
> functions
> > on the data per each cell of the grid.
> > Is there any python script or package out there that can construct an
> equal
> > area grid and convert from grid indexes to lat/long relative to some
> > projection?
>
> http://code.google.com/p/pyproj/
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From jgomezdans at gmail.com Wed Sep 2 12:14:26 2009
From: jgomezdans at gmail.com (Jose Gomez-Dans)
Date: Wed, 2 Sep 2009 17:14:26 +0100
Subject: [SciPy-User] Equal area grid - lat/long conversion
In-Reply-To:
References:
Message-ID: <91d218430909020914s13877dc3g4891c3eef313d15@mail.gmail.com>

2009/9/2 bar tomas

> Hi,
> I've got some georeferenced lat/long data and I'd like to construct an
> equal area grid (for instance 50km * 50km) to examine and calculate some
> functions on the data per each cell of the grid.
> Is there any python script or package out there that can construct an
> equal area grid and convert from grid indexes to lat/long relative to some
> projection?

Admittedly, this works for rasters, but might get you going, using a
combination of ogr and gdal:

J

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
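For illustration, a minimal pyproj sketch of the 50km * 50km idea; the
Lambert azimuthal equal-area parameters (lat_0, lon_0) and the sample
point are placeholders to be centred on the actual data:

import numpy as np
from pyproj import Proj

laea = Proj(proj='laea', lat_0=52.0, lon_0=10.0)   # equal-area projection

lon, lat = 10.3, 52.7              # example point
x, y = laea(lon, lat)              # metres from the projection centre
ix = int(np.floor(x / 50000.0))    # 50 km grid cell indices
iy = int(np.floor(y / 50000.0))

# cell centre back to lat/long
lon_c, lat_c = laea((ix + 0.5) * 50000.0, (iy + 0.5) * 50000.0, inverse=True)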
From gokhansever at gmail.com Wed Sep 2 12:53:52 2009
From: gokhansever at gmail.com (Gökhan Sever)
Date: Wed, 2 Sep 2009 11:53:52 -0500
Subject: [SciPy-User] [Numpy-discussion] Fastest way to parse a specific binary file
In-Reply-To: <4A9E908C.1070205@molden.no>
References: <49d6b3500909020738g53befd6ey4a6af8e269162510@mail.gmail.com> <4A9E908C.1070205@molden.no>
Message-ID: <49d6b3500909020953v262b832ajaac5fdba02f8fb05@mail.gmail.com>

On Wed, Sep 2, 2009 at 10:34 AM, Sturla Molden wrote:

> Gökhan Sever wrote:
> > What would be the wisest and fastest way to tackle this issue?
> Get the format, read the binary data directly, skip the ascii/regex part.
>
> I sometimes use recarrays with formatted binary data: just construct a
> dtype and use numpy.fromfile to read. That works when the binary file
> stores C structs written successively.
>
> Sturla Molden
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

How do I use recarrays with variable-length data fields as well as
metadata? Eventually I will record the data with numpy arrays, but I am
not sure how to utilize recarrays in the first stage.

--
Gökhan

From wkerzendorf at googlemail.com Wed Sep 2 13:12:48 2009
From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf)
Date: Wed, 2 Sep 2009 19:12:48 +0200
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909020806g70ad8f92uc70ed50b8b353ee7@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <9AACC485-2EA8-4BBF-B8FE-277BC6363ACB@gmail.com> <3d375d730909011428v3ba87745sfa386b07a805adf7@mail.gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com> <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com> <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com> <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com> <2CAF9A7C-074F-4A12-84C3-B0EB6C700FC6@gmail.com> <3d375d730909020806g70ad8f92uc70ed50b8b353ee7@mail.gmail.com>
Message-ID:

Hello Robert,

Here is the output: http://pastebin.com/d5d081d18. Grep on "Wolfgang"
and you should get all of the printouts.

cheers
Wolfgang

On 02/09/2009, at 17:06 , Robert Kern wrote:

> On Wed, Sep 2, 2009 at 02:47, Wolfgang Kerzendorf wrote:
>> I have done this, and they are empty, at least as far as I can see.
>> The build log is at:
>> http://pastebin.com/d1be916be
>
> Okay, in the _can_target() function, print newcmd, st, and out.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Wed Sep 2 13:21:18 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 2 Sep 2009 12:21:18 -0500
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To:
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <423C4958-F08F-4CF8-9F07-B0F8EB33357B@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com> <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com> <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com> <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com> <2CAF9A7C-074F-4A12-84C3-B0EB6C700FC6@gmail.com> <3d375d730909020806g70ad8f92uc70ed50b8b353ee7@mail.gmail.com>
Message-ID: <3d375d730909021021m569b56fj1f24d764c8b40907@mail.gmail.com>

On Wed, Sep 2, 2009 at 12:12, Wolfgang Kerzendorf wrote:
> Hello Robert,
>
> Here is the output: http://pastebin.com/d5d081d18. Grep on "Wolfgang"
> and you should get all of the printouts.

Ah! Someone screwed up the conversion to use subprocess instead of
os.popen(). Try this version of the _can_target() function.

def _can_target(cmd, arch):
    """Return true if the command supports the -arch flag for the
    given architecture."""
    newcmd = cmd[:]
    newcmd.extend(["-arch", arch, "-v"])
    p = Popen(newcmd, stderr=STDOUT, stdout=PIPE)
    stdout, stderr = p.communicate()
    for line in stdout.splitlines():
        m = re.search(_R_ARCHS[arch], line)
        if m:
            return True
    return False

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From cournape at gmail.com Wed Sep 2 17:15:28 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 3 Sep 2009 06:15:28 +0900
Subject: [SciPy-User] scipy on snow leopard
In-Reply-To: <3d375d730909021021m569b56fj1f24d764c8b40907@mail.gmail.com>
References: <0AD56B83-FD0E-4651-BF3D-8FE26B062EB1@gmail.com> <3d375d730909011515mbc57171hed26364bec410c7e@mail.gmail.com> <3d375d730909011531k5fdf593eufad38e6146e8dc9b@mail.gmail.com> <337D8E5A-E871-4229-868A-A7C6E2056BDC@gmail.com> <3d375d730909012335x1a5ac861y22b9e25f5d58fe10@mail.gmail.com> <2CAF9A7C-074F-4A12-84C3-B0EB6C700FC6@gmail.com> <3d375d730909020806g70ad8f92uc70ed50b8b353ee7@mail.gmail.com> <3d375d730909021021m569b56fj1f24d764c8b40907@mail.gmail.com>
Message-ID: <5b8d13220909021415n2938aa80r6e8e1489d46b2b5a@mail.gmail.com>

On Thu, Sep 3, 2009 at 2:21 AM, Robert Kern wrote:
> On Wed, Sep 2, 2009 at 12:12, Wolfgang Kerzendorf wrote:
>> Hello Robert,
>>
>> Here is the output: http://pastebin.com/d5d081d18. Grep on "Wolfgang"
>> and you should get all of the printouts.
>
> Ah! Someone screwed up the conversion to use subprocess instead of
> os.popen(). Try this version of the _can_target() function.

And that someone would be me - thanks for the fix.

cheers,

David

From stefan at sun.ac.za Wed Sep 2 18:24:32 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Thu, 3 Sep 2009 00:24:32 +0200
Subject: [SciPy-User] Scikits Trac
In-Reply-To:
References:
Message-ID: <9457e7c80909021524x2fbfd789kb4b676a6ed02e00f@mail.gmail.com>

Hi Tim

You've found the old scikits developer site, which has been relocated
to:

http://projects.scipy.org/scikits

and has a reset password menu item. I've added a note at the top of the
old page.
Regards
Stéfan

2009/9/2 Tim Michelsen:
> Hello,
> may the administrator for the Scikits Trac site [1] add a password recovery
> request like on the Scipy Trac [2].
>
> Thanks a lot,
> Timmie
>
> [1]: http://www.scipy.org/scipy/scikits/wiki
> [2]: Forgot your password? - http://projects.scipy.org/scipy/reset_password

From timmichelsen at gmx-topmail.de Wed Sep 2 19:17:30 2009
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Thu, 03 Sep 2009 01:17:30 +0200
Subject: [SciPy-User] Scikits Trac
In-Reply-To: <9457e7c80909021524x2fbfd789kb4b676a6ed02e00f@mail.gmail.com>
References: <9457e7c80909021524x2fbfd789kb4b676a6ed02e00f@mail.gmail.com>
Message-ID:

> You've found the old scikits developer site, which has been relocated to:
Thanks for your reaction. It was Google, actually, with the search
terms "trac scikits".

Maybe you'd even want to add a redirection?

Well, thanks for your efforts on site maintenance.

Regards,
Timmie

From cournape at gmail.com Wed Sep 2 20:50:39 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 3 Sep 2009 09:50:39 +0900
Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions
In-Reply-To: <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com>
References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com>
Message-ID: <5b8d13220909021750n51132765j5dac1854e46dc5a3@mail.gmail.com>

On Wed, Sep 2, 2009 at 1:55 AM, Chris Colbert wrote:
> What I would like to see is a distribution build numpy and scipy with
> threaded atlas support.
>
> As it stands currently, Ubuntu "has atlas support", but it's not
> threaded, and the packages are broken...

Note that I provide a fixed atlas binary (w/o threaded support, though)
on launchpad:

deb http://ppa.launchpad.net/david-ar/ppa/ubuntu jaunty main
deb-src http://ppa.launchpad.net/david-ar/ppa/ubuntu jaunty main

and update/upgrade the atlas package. You should not need to rebuild
numpy or scipy.

cheers,

David

From sccolbert at gmail.com Wed Sep 2 21:00:35 2009
From: sccolbert at gmail.com (Chris Colbert)
Date: Wed, 2 Sep 2009 21:00:35 -0400
Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions
In-Reply-To: <5b8d13220909021750n51132765j5dac1854e46dc5a3@mail.gmail.com>
References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <5b8d13220909021750n51132765j5dac1854e46dc5a3@mail.gmail.com>
Message-ID: <7f014ea60909021800m5113d8d8h80ba433347cb1f4@mail.gmail.com>

how nice of you! :)

On Wed, Sep 2, 2009 at 8:50 PM, David Cournapeau wrote:
> On Wed, Sep 2, 2009 at 1:55 AM, Chris Colbert wrote:
>> What I would like to see is a distribution build numpy and scipy with
>> threaded atlas support.
>>
>> As it stands currently, Ubuntu "has atlas support", but it's not
>> threaded, and the packages are broken...
>
> Note that I provide a fixed atlas binary (w/o threaded support, though)
> on launchpad:
>
> deb http://ppa.launchpad.net/david-ar/ppa/ubuntu jaunty main
> deb-src http://ppa.launchpad.net/david-ar/ppa/ubuntu jaunty main
>
> and update/upgrade the atlas package. You should not need to rebuild
> numpy or scipy.
>
> cheers,
>
> David
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From markus.proeller at ifm.com Thu Sep 3 03:40:49 2009
From: markus.proeller at ifm.com (markus.proeller at ifm.com)
Date: Thu, 3 Sep 2009 09:40:49 +0200
Subject: [SciPy-User] polyval, polyfit on 2D array
Message-ID:

Hello everyone,

I have a two dimensional array with a shape of (600,800) and want to
apply polyval and polyfit to each of the 600 lines.
Is there an elegant way to avoid a for loop over axis 0?
So for one line my code would be this:

>>> x = arange(800)
>>> p = polyfit(x, y[0,:], deg)
>>> y_new = polyval(p, x)

Thanks for help,

Markus

From bartomas at gmail.com Thu Sep 3 04:54:40 2009
From: bartomas at gmail.com (bar tomas)
Date: Thu, 3 Sep 2009 09:54:40 +0100
Subject: [SciPy-User] Equal area grid - lat/long conversion
In-Reply-To: <91d218430909020914s13877dc3g4891c3eef313d15@mail.gmail.com>
References: <91d218430909020914s13877dc3g4891c3eef313d15@mail.gmail.com>
Message-ID:

Hi, many thanks! Sounds great.
Is it possible to create an equal area grid cell with the script you
refer to? (not defined by degrees, as in the example on the website)
For some density calculations it is important that the grid cells are
of equal size.

Thanks

On Wed, Sep 2, 2009 at 5:14 PM, Jose Gomez-Dans wrote:

> 2009/9/2 bar tomas
>
>> Hi,
>> I've got some georeferenced lat/long data and I'd like to construct an
>> equal area grid (for instance 50km * 50km) to examine and calculate some
>> functions on the data per each cell of the grid.
>> Is there any python script or package out there that can construct an
>> equal area grid and convert from grid indexes to lat/long relative to some
>> projection?
>
> Admittedly, this works for rasters, but might get you going, using a
> combination of ogr and gdal:
>
> J
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From josef.pktd at gmail.com Thu Sep 3 08:58:29 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 3 Sep 2009 08:58:29 -0400
Subject: [SciPy-User] polyval, polyfit on 2D array
In-Reply-To:
References:
Message-ID: <1cd32cbb0909030558l49406d71va58cb10792cb14d@mail.gmail.com>

On Thu, Sep 3, 2009 at 3:40 AM, markus.proeller wrote:
>
> Hello everyone,
>
> I have a two dimensional array with a shape of (600,800) and want to apply
> polyval and polyfit to each of the 600 lines.
> Is there an elegant way to avoid a for loop over axis 0?
> So for one line my code would be this:
>
>>>> x = arange(800)
>>>> p = polyfit(x, y[0,:], deg)
>>>> y_new = polyval(p, x)
>
> Thanks for help,
>
> Markus

If you want just a linear fit, then there was a discussion and recipe
some time ago on numpy-discussion:
"polyfit on multiple data points" and "performance issue (again)"

Josef
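For illustration, a sketch that avoids the Python loop in the general
(non-linear) polynomial case, relying on np.polyfit accepting a 2-D y
of shape (M, K) and fitting each column at once; the degree and the
random data below are placeholders:

import numpy as np

y = np.random.rand(600, 800)     # stand-in for the real (600, 800) data
x = np.arange(800)
deg = 3                          # placeholder degree

coeffs = np.polyfit(x, y.T, deg)                 # (deg+1, 600): one column per row of y
y_new = np.dot(np.vander(x, deg + 1), coeffs).T  # evaluated fits, back to (600, 800)

np.vander(x, deg + 1) orders its columns from the highest power down,
matching polyfit's coefficient order, so the single matrix product
evaluates all 600 fits at once.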
The link SciKits developer resources - http://www.scipy.org/scipy/scikits/ on http://scikits.appspot.com/contribute needs also to be updated to point to the new site,. From wkerzendorf at googlemail.com Thu Sep 3 12:02:07 2009 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Thu, 3 Sep 2009 18:02:07 +0200 Subject: [SciPy-User] snow leopard issues with numpy Message-ID: <1324DD89-AE68-4B9E-BE82-339006831B86@gmail.com> I just installed numpy and scipy (both svn) on OS X 10.6 and just got scipy to work with Robert Kern's help. Playing around with numpy I got the following segfault: http://pastebin.com/m35220dbf I hope someone can make sense of it. Thanks in advance Wolfgang From lev at columbia.edu Thu Sep 3 13:59:28 2009 From: lev at columbia.edu (Lev Givon) Date: Thu, 3 Sep 2009 13:59:28 -0400 Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions In-Reply-To: <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> Message-ID: <20090903175927.GQ20987@localhost.ee.columbia.edu> Received from Chris Colbert on Tue, Sep 01, 2009 at 12:55:34PM EDT: > What I would like to see is a distribution build numpy and scipy with > threaded atlas support. > > As it stands currently, Ubuntu "has atlas support", but its not > threaded, and the packages are broken... > > Until that happens, I'll be rolling my own numpy and scipy from source. Mandriva's atlas packages consist of library rpms prebuilt for several stock architectures [1] that are installed along with an ld.so override that causes programs that dynamically link to a blas/lapack shared library to call its atlas equivalent. The source srpm can also be rebuilt on one's system so as to obtain a properly tuned library. I'm not sure whether the current prebuilt libraries are built with thread support. I'm also not sure whether the current (3.8.3) prebuilt libraries consistently provide any improved performance compared to the netlib blas/lapack. L.G. [1] This scheme is also used in Debian's atlas packages. From markus.proeller at ifm.com Fri Sep 4 02:18:06 2009 From: markus.proeller at ifm.com (markus.proeller at ifm.com) Date: Fri, 4 Sep 2009 08:18:06 +0200 Subject: [SciPy-User] Antwort: Re: polyval, polyfit on 2D array In-Reply-To: <1cd32cbb0909030558l49406d71va58cb10792cb14d@mail.gmail.com> Message-ID: >> >> Hello everyone, >> >> I have a two dimensional array with a shape of (600,800) and want to apply >> polyval and polyfit on each of the 600 lines. >> Is there an elegant way to avoid a for loop over axis 0? >> So for one line my code would be this: >> >>>>> x=arange(800) >>>>> p = polyfit(x, y[0,:]) >>>>> y_new = polyval(p, x) >> >> Thanks for help, >> >> Markus > >if you want just a linear fit then there was the discussion and recipe >some time ago on numpy-discussion >"polyfit on multiple data points" and "performance issue (again)" > >Josef Unfortunately I don't make a linear fit, so I will use a for loop. Thanks, Markus -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus.proeller at ifm.com Fri Sep 4 03:08:29 2009 From: markus.proeller at ifm.com (markus.proeller at ifm.com) Date: Fri, 4 Sep 2009 09:08:29 +0200 Subject: [SciPy-User] scipy.interpolate.interp2d too many data values.. 
Message-ID: Hello, I tried to make an interpolation over an 800x600 image with the interp2d function from scipy, but I get an error message, that this are "Too many data points to interpolate". It doesn't seem that much data to me. Am I doing anything wrong or how many data does this function support? Markus -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Sep 4 04:44:01 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 4 Sep 2009 08:44:01 +0000 (UTC) Subject: [SciPy-User] scipy.interpolate.interp2d too many data values.. References: Message-ID: Fri, 04 Sep 2009 09:08:29 +0200, markus.proeller kirjoitti: > I tried to make an interpolation over an 800x600 image with the interp2d > function from scipy, > but I get an error message, that this are "Too many data points to > interpolate". > It doesn't seem that much data to me. Am I doing anything wrong or how > many data does this function support? Interp2d is mainly meant for interpolation of scattered data, and yes, it has quite low limits on what it handles. It might be possible to bump these upwards, though. For interpolation of images that are specified on a regular grid, look at scipy.ndimage, especially map_coordinates. -- Pauli Virtanen From markus.proeller at ifm.com Fri Sep 4 05:33:12 2009 From: markus.proeller at ifm.com (markus.proeller at ifm.com) Date: Fri, 4 Sep 2009 11:33:12 +0200 Subject: [SciPy-User] Antwort: Re: scipy.interpolate.interp2d too many data values.. In-Reply-To: Message-ID: >> I tried to make an interpolation over an 800x600 image with the interp2d >> function from scipy, >> but I get an error message, that this are "Too many data points to >> interpolate". >> It doesn't seem that much data to me. Am I doing anything wrong or how >> many data does this function support? > >Interp2d is mainly meant for interpolation of scattered data, and yes, it >has quite low limits on what it handles. It might be possible to bump >these upwards, though. > >For interpolation of images that are specified on a regular grid, look at >scipy.ndimage, especially map_coordinates. Yes, that's what I was looking for. How do you actaully use it for a RGB image? I used the example from http://www.scipy.org/Cookbook/Interpolation but I don't understand how it works for 3-D coordinates. I want to apply the same remapping stored in y_new, x_new arrays with 2-D shape for the 3 channels. Until now I just understand like >>> coords = array([y_new, x_new]) >>> r = map_coordinates(img_org[:,:,0], coords ) >>> g = map_coordinates(img_org[:,:,1], coords ) >>> b = map_coordinates(img_org[:,:,2], coords ) >>> img = dstack((r,g,b)) But it seems that it can be done shorter... Thanks, Markus -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Fri Sep 4 07:06:23 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Fri, 4 Sep 2009 07:06:23 -0400 Subject: [SciPy-User] Antwort: Re: scipy.interpolate.interp2d too many data values.. In-Reply-To: References: Message-ID: <0292E4CC-569C-4787-86F2-39AC36C5193C@yale.edu> > >For interpolation of images that are specified on a regular grid, > look at > >scipy.ndimage, especially map_coordinates. > > Yes, that's what I was looking for. How do you actaully use it for a > RGB image? > I used the example from http://www.scipy.org/Cookbook/Interpolation > but I don't understand how it works for 3-D coordinates. 
> I want to apply the same remapping stored in y_new, x_new arrays > with 2-D shape for the 3 channels. Until now I just understand like > > >>> coords = array([y_new, x_new]) > >>> r = map_coordinates(img_org[:,:,0], coords ) > >>> g = map_coordinates(img_org[:,:,1], coords ) > >>> b = map_coordinates(img_org[:,:,2], coords ) > >>> img = dstack((r,g,b)) > > But it seems that it can be done shorter... Assuming the image is large-ish, the overhead from the extra function calls is probably low enough that this is not much slower than an all- in-one approach. You could do better, memory-wise, by pre-allocating the output array and passing views on it (e.g. the appropriate slices) to map_coordinates as output arrays, but this would only really matter if the images are huge or you are doing this in a tight loop. Finally, you could also devise a 3D coordinate transform that is just the 2D transform you want, plus an identity transform in the third dimension (e.g. color channel) so you don't mix the colors. Basically, you just want your coords array, but with an additional dimension that contains 0s, 1s, and 2s to map red to red, green to green, and blue to blue, etc. If this is sufficiently unclear, I can try to gin up an example. Like I said, though, I'm not sure this will be much faster, and the code might not be any more clear. Zach From hardbyte at gmail.com Fri Sep 4 09:19:24 2009 From: hardbyte at gmail.com (Brian Thorne) Date: Sat, 5 Sep 2009 01:19:24 +1200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> Message-ID: <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> Hi all, I'm trying to reduce the amount of calls to ndimage.filters.gaussian_filter but still get the same answer. >From this: r = ndimage.filters.gaussian_filter(np_image[:,:,0], sigma=(sigma, sigma)) g = ndimage.filters.gaussian_filter(np_image[:,:,1], sigma=(sigma, sigma)) b = ndimage.filters.gaussian_filter(np_image[:,:,2], sigma=(sigma, sigma)) return array([r,g,b]).transpose((1,2,0)) > to something like this: result = ndimage.filters.gaussian_filter(np_image, sigma=(sigma, sigma, 1), order=0, mode='reflect' ) return result Any ideas why that is producing different output? Or what I should be doing instead? cheers, Brian Thorne -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Fri Sep 4 09:24:45 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 4 Sep 2009 15:24:45 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> Message-ID: Hi, I think gaussian_flter will also filter your data in the third dimension with one call :| Perhaps with (sigma, sigma, 0) ? Matthieu 2009/9/4 Brian Thorne : > Hi all, > I'm trying to reduce the amount of calls to ndimage.filters.gaussian_filter > but still get the same answer. > From this: > ?? ?r = ndimage.filters.gaussian_filter(np_image[:,:,0], sigma=(sigma, > sigma)) > ?? ?g = ndimage.filters.gaussian_filter(np_image[:,:,1], sigma=(sigma, > sigma)) > ?? ?b = ndimage.filters.gaussian_filter(np_image[:,:,2], sigma=(sigma, > sigma)) > ?? ?return array([r,g,b]).transpose((1,2,0)) > > to something like this: > ?? 
?result = ndimage.filters.gaussian_filter(np_image, > ?? ? ? ? ? ? ? ? ? ? ? ? ? ?sigma=(sigma, sigma, 1), > ?? ? ? ? ? ? ? ? ? ? ? ? ? ?order=0, > ?? ? ? ? ? ? ? ? ? ? ? ? ? ?mode='reflect' > ?? ? ? ? ? ? ? ? ? ? ? ? ? ?) > ?? ?return result > Any ideas why that is producing different output? Or what I should be doing > instead? > cheers, > Brian Thorne > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From hardbyte at gmail.com Fri Sep 4 09:52:28 2009 From: hardbyte at gmail.com (Brian Thorne) Date: Sat, 5 Sep 2009 01:52:28 +1200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> Message-ID: <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> Brilliant, that did it in one. Thanks very much! 2009/9/5 Matthieu Brucher > Hi, > > I think gaussian_flter will also filter your data in the third > dimension with one call :| Perhaps with (sigma, sigma, 0) ? > > Matthieu > > 2009/9/4 Brian Thorne : > > Hi all, > > I'm trying to reduce the amount of calls to > ndimage.filters.gaussian_filter > > but still get the same answer. > > From this: > > r = ndimage.filters.gaussian_filter(np_image[:,:,0], sigma=(sigma, > > sigma)) > > g = ndimage.filters.gaussian_filter(np_image[:,:,1], sigma=(sigma, > > sigma)) > > b = ndimage.filters.gaussian_filter(np_image[:,:,2], sigma=(sigma, > > sigma)) > > return array([r,g,b]).transpose((1,2,0)) > > > > to something like this: > > result = ndimage.filters.gaussian_filter(np_image, > > sigma=(sigma, sigma, 1), > > order=0, > > mode='reflect' > > ) > > return result > > Any ideas why that is producing different output? Or what I should be > doing > > instead? > > cheers, > > Brian Thorne > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > -- > Information System Engineer, Ph.D. > Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hardbyte at gmail.com Fri Sep 4 10:33:09 2009 From: hardbyte at gmail.com (Brian Thorne) Date: Sat, 5 Sep 2009 02:33:09 +1200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> Message-ID: <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> Similar question, but now a bit harder. I have this code (pieced together from a few files) that does a gaussian filter on a single image in both OpenCV and in SciPy.It is now at a point where I cannot tell them apart with a visual inspection, but a imshow(image1 - image2) begs to differ. 
Is it going to be possible to get the exact same output? from opencv import cv from opencv import adaptors from __future__ import division import numpy as np from numpy import array, uint8 from scipy import signal, ndimage @scipyFromOpenCV def gaussianBlur(np_image): """Blur an image with scipy""" sigma = opencvFilt2sigma(43.0) result = ndimage.filters.gaussian_filter(np_image, sigma=(sigma, sigma, 0), order=0, mode='reflect' ) return result def gaussianBlur(image, filterSize=43, sigma=opencvFilt2sigma(43)): """Blur an image with a particular strength filter. Default is 43, 139 gives a very strong blur, but takes a while """ # Carry out the filter operation cv.cvSmooth(image, image, cv.CV_GAUSSIAN, filterSize, 0, sigma) return image def opencvFilt2sigma(size): """OpenCV defaults to making sigma up with this formula. Learning OpenCV: computer vision with the OpenCV library By Gary Bradski, Adrian Kaehler pg 112""" return (( size*0.5 ) - 1)*0.30 + 0.80 class scipyFromOpenCV(object): """This decorator can be used to wrap a function that takes and returns a numpy array into one that takes and retuns an opencv CvMat. """ def __init__(self, f): self.f = f def __call__(self, image): # Convert CvMat to ndarray np_image = adaptors.Ipl2NumPy(image) # Call the original function np_image_filtered = self.f(np_image) # Convert back to CvMat return adaptors.NumPy2Ipl(np_image_filtered) cheers, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Fri Sep 4 10:53:21 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 4 Sep 2009 16:53:21 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> Message-ID: I don't have OpenCV here, so I can't say for sure but, what is the relative amplitude of the difference? OpenCV works on floats or doubles? It may be due to a type difference or a small difference in the algorithms. Matthieu 2009/9/4 Brian Thorne : > Similar question, but now a bit harder. I have this code (pieced together > from a few files) that does a gaussian filter on a single image in both > OpenCV and in SciPy. > It is now at a point where I cannot tell them apart with a visual > inspection, but a imshow(image1 - image2) begs to differ. Is it going to be > possible to get the exact same output? > > > from opencv import cv > from opencv import adaptors > from __future__ import division > import numpy as np > from numpy import array, uint8 > from scipy import signal, ndimage > @scipyFromOpenCV > def gaussianBlur(np_image): > ?? ?"""Blur an image with scipy""" > ?? ?sigma = opencvFilt2sigma(43.0) > > ?? ?result = ndimage.filters.gaussian_filter(np_image, > ?? ? ? ? ? ? ? ? ? ? ? ? ? ?sigma=(sigma, sigma, 0), > ?? ? ? ? ? ? ? ? ? ? ? ? ? ?order=0, > ?? ? ? ? ? ? ? ? ? ? ? ? ? ?mode='reflect' > ?? ? ? ? ? ? ? ? ? ? ? ? ? ?) > ?? ?return result > def gaussianBlur(image, filterSize=43, sigma=opencvFilt2sigma(43)): > ?? ?"""Blur an image with a particular strength filter. > ?? ?Default is 43, 139 gives a very strong blur, but takes a while > ?? ?""" > > ?? ?# Carry out the filter operation > ?? ?cv.cvSmooth(image, image, cv.CV_GAUSSIAN, filterSize, 0, sigma) > ?? 
?return image > def opencvFilt2sigma(size): > ?? ?"""OpenCV defaults to making sigma up with this formula. > ?? ?Learning OpenCV: computer vision with the OpenCV library > ?? ?By Gary Bradski, Adrian Kaehler pg 112""" > ?? ?return (( size*0.5 ) - 1)*0.30 + 0.80 > class scipyFromOpenCV(object): > ?? ?"""This decorator can be used to wrap a function that takes > ?? ?and returns a numpy array into one that takes and retuns an > ?? ?opencv CvMat. > ?? ?""" > ?? ?def __init__(self, f): > ?? ? ? ?self.f = f > ?? ?def __call__(self, image): > ?? ? ? ?# Convert CvMat to ndarray > ?? ? ? ?np_image = adaptors.Ipl2NumPy(image) > > ?? ? ? ?# Call the original function > ?? ? ? ?np_image_filtered = self.f(np_image) > > ?? ? ? ?# Convert back to CvMat > ?? ? ? ?return adaptors.NumPy2Ipl(np_image_filtered) > cheers, > Brian > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From jean-pascal.mercier at inrialpes.fr Fri Sep 4 11:14:36 2009 From: jean-pascal.mercier at inrialpes.fr (J-Pascal Mercier) Date: Fri, 4 Sep 2009 17:14:36 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> Message-ID: <200909041714.36475.jean-pascal.mercier@inrialpes.fr> On Friday 04 September 2009 04:53:21 pm Matthieu Brucher wrote: > I don't have OpenCV here, so I can't say for sure but, what is the > relative amplitude of the difference? OpenCV works on floats or > doubles? It may be due to a type difference or a small difference in > the algorithms. > > Matthieu > Hi, This is pure speculation but since scipy uses the sigma parameter, the filter is probably created by direct sampling of the Gaussian function. On the other side, OpenCV uses the size(in pixels) of the filter. This is a good indication they probably uses the pascal triangle as an approximation for the Gaussian kernel The difference should not be very high anyway, they are both good approximation of the continuous gaussian kernel. cheers, J-Pascal From strawman at astraw.com Fri Sep 4 14:03:05 2009 From: strawman at astraw.com (Andrew Straw) Date: Fri, 04 Sep 2009 11:03:05 -0700 Subject: [SciPy-User] Antwort: Re: scipy.interpolate.interp2d too many data values.. In-Reply-To: References: Message-ID: <4AA15659.2080106@astraw.com> markus.proeller at ifm.com wrote: > >For interpolation of images that are specified on a regular grid, > look at > >scipy.ndimage, especially map_coordinates. > > Yes, that's what I was looking for. How do you actaully use it for a > RGB image? > I used the example from http://www.scipy.org/Cookbook/Interpolation > but I don't understand how it works for 3-D coordinates. > I want to apply the same remapping stored in y_new, x_new arrays with > 2-D shape for the 3 channels. Until now I just understand like > > >>> coords = array([y_new, x_new]) > >>> r = map_coordinates(img_org[:,:,0], coords ) > >>> g = map_coordinates(img_org[:,:,1], coords ) > >>> b = map_coordinates(img_org[:,:,2], coords ) > >>> img = dstack((r,g,b)) > > But it seems that it can be done shorter... 
Here's an example Stefan van der Walt cooked up: See lines 51-66, especially the "color band mapping" part of http://bazaar.launchpad.net/~astraw/pinpoint/dev/annotate/head%3A/pinpoint/distortion.py From sturla at molden.no Fri Sep 4 23:01:29 2009 From: sturla at molden.no (Sturla Molden) Date: Sat, 05 Sep 2009 05:01:29 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> Message-ID: <4AA1D489.6020608@molden.no> Brian Thorne skrev: > Similar question, but now a bit harder. I have this code (pieced together from a few files) that does a gaussian filter on a single image in both OpenCV and in SciPy. > It is now at a point where I cannot tell them apart with a visual inspection, but a imshow(image1 - image2) begs to differ. Is it going to be possible to get the exact same output? > > > from opencv import cv > from opencv import adaptors > from __future__ import division > import numpy as np > from numpy import array, uint8 > from scipy import signal, ndimage > > @scipyFromOpenCV > def gaussianBlur(np_image): > """Blur an image with scipy""" > sigma = opencvFilt2sigma(43.0) > > result = ndimage.filters.gaussian_filter(np_image, > sigma=(sigma, sigma, 0), > order=0, > mode='reflect' > ) > return result > > def gaussianBlur(image, filterSize=43, sigma=opencvFilt2sigma(43)): > """Blur an image with a particular strength filter. > Default is 43, 139 gives a very strong blur, but takes a while > """ > > # Carry out the filter operation > cv.cvSmooth(image, image, cv.CV_GAUSSIAN, filterSize, 0, sigma) > return image For Gaussian filtering (and Gaussian blur in particular) one can also use a fast IIR approximation. It will not be faster if you use small truncated kernels like 3 x 3 pixels, but is quite efficient and run-time does not depend on sigma. (I think I've posted this code before, though.) Regards, Sturla Molden from numpy import array, zeros, ones, flipud, fliplr from scipy.signal import lfilter from math import sqrt def __gausscoeff(s): # Young, I.T. and van Vliet, L.J. (1995). Recursive implementation # of the Gaussian filter, Signal Processing, 44: 139-151. 
if s < .5: raise ValueError, \ 'Sigma for Gaussian filter must be >0.5 samples' # per the paper: linear fit for s >= 2.5, sqrt fit for 0.5 <= s < 2.5 q = 0.98711*s - 0.96330 if s >= 2.5 else 3.97156 \ - 4.14554*sqrt(1.0 - 0.26891*s) b = zeros(4) b[0] = 1.57825 + 2.44413*q + 1.4281*q**2 + 0.422205*q**3 b[1] = 2.44413*q + 2.85619*q**2 + 1.26661*q**3 b[2] = -(1.4281*q**2 + 1.26661*q**3) b[3] = 0.422205*q**3 B = 1.0 - ((b[1] + b[2] + b[3])/b[0]) # convert to a format compatible with lfilter's # difference equation B = array([B]) A = ones(4) A[1:] = -b[1:]/b[0] return B,A def gaussian1D(signal, sigma, padding=0): n = signal.shape[0] tmp = zeros(n + padding) if tmp.shape[0] < 4: raise ValueError, \ 'Signal and padding too short' tmp[:n] = signal B,A = __gausscoeff(sigma) tmp = lfilter(B, A, tmp) tmp = tmp[::-1] tmp = lfilter(B, A, tmp) tmp = tmp[::-1] return tmp[:n] def gaussian2D(image, sigma, padding=0): n,m = image.shape[0],image.shape[1] tmp = zeros((n + padding, m + padding)) if tmp.shape[0] < 4: raise ValueError, \ 'Image and padding too small' if tmp.shape[1] < 4: raise ValueError, \ 'Image and padding too small' B,A = __gausscoeff(sigma) tmp[:n,:m] = image tmp = lfilter(B, A, tmp, axis=0) tmp = flipud(tmp) tmp = lfilter(B, A, tmp, axis=0) tmp = flipud(tmp) tmp = lfilter(B, A, tmp, axis=1) tmp = fliplr(tmp) tmp = lfilter(B, A, tmp, axis=1) tmp = fliplr(tmp) return tmp[:n,:m]
From ivo.maljevic at gmail.com Fri Sep 4 23:17:15 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Fri, 4 Sep 2009 23:17:15 -0400 Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions In-Reply-To: <20090903175927.GQ20987@localhost.ee.columbia.edu> References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <20090903175927.GQ20987@localhost.ee.columbia.edu> Message-ID: <826c64da0909042017i67453678ied1a418aa3f66d67@mail.gmail.com> For what it's worth, I've updated liblapack and libblas to 3.2.1 under openSUSE and numpy runs properly now. It looks like Lev Givon is making similar progress with Mandriva, so the situation is not as bad as I thought initially. 2009/9/3 Lev Givon > Received from Chris Colbert on Tue, Sep 01, 2009 at 12:55:34PM EDT: > > What I would like to see is a distribution build numpy and scipy with > > threaded atlas support. > > > > As it stands currently, Ubuntu "has atlas support", but it's not > > threaded, and the packages are broken... > > > > Until that happens, I'll be rolling my own numpy and scipy from source. > > Mandriva's atlas packages consist of library rpms prebuilt for several > stock architectures [1] that are installed along with an ld.so > override that causes programs that dynamically link to a blas/lapack > shared library to call its atlas equivalent. The source srpm can also > be rebuilt on one's system so as to obtain a properly tuned library. > > I'm not sure whether the current prebuilt libraries are built with > thread support. I'm also not sure whether the current (3.8.3) prebuilt > libraries consistently provide any improved performance compared to > the netlib blas/lapack. > > L.G. > > [1] This scheme is also used in Debian's atlas packages. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ivo.maljevic at gmail.com Fri Sep 4 23:29:22 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Fri, 4 Sep 2009 23:29:22 -0400 Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions In-Reply-To: <826c64da0909042017i67453678ied1a418aa3f66d67@mail.gmail.com> References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <20090903175927.GQ20987@localhost.ee.columbia.edu> <826c64da0909042017i67453678ied1a418aa3f66d67@mail.gmail.com> Message-ID: <826c64da0909042029i5bb05577r30286bdb05c3d4ac@mail.gmail.com> So, numpy works very well, but scipy.test() fails, and I think it is the same failure across the distributions. Anyone knows what does this mean: ====================================================================== ERROR: test_implicit (test_odr.TestODR) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.6/site-packages/scipy/odr/tests/test_odr.py", line 88, in test_implicit out = implicit_odr.run() File "/usr/lib64/python2.6/site-packages/scipy/odr/odrpack.py", line 1055, in run self.output = Output(apply(odr, args, kwds)) TypeError: y must be a sequence or integer (if model is implicit) ---------------------------------------------------------------------- Ran 3395 tests in 51.646s FAILED (KNOWNFAIL=3, SKIP=28, errors=1) 2009/9/4 Ivo Maljevic > For what is worth, I've updated liblapack and libblas to 3.2.1 under > openSUSE and numpy runs properly now. > > It looks like Lev Givon is making similar progress with Mandriva, so the > situation is not as bad as I thought > initially. > > 2009/9/3 Lev Givon > > Received from Chris Colbert on Tue, Sep 01, 2009 at 12:55:34PM EDT: >> > What I would like to see is a distribution build numpy and scipy with >> > threaded atlas support. >> > >> > As it stands currently, Ubuntu "has atlas support", but its not >> > threaded, and the packages are broken... >> > >> > Until that happens, I'll be rolling my own numpy and scipy from source. >> >> Mandriva's atlas packages consist of library rpms prebuilt for several >> stock architectures [1] that are installed along with an ld.so >> override that causes programs that dynamically link to a blas/lapack >> shared library to call its atlas equivalent. The source srpm can also >> be rebuilt on one's system so as to obtain a properly tuned library. >> >> I'm not sure whether the current prebuilt libraries are built with >> thread support. I'm also not sure whether the current (3.8.3) prebuilt >> libraries consistently provide any improved performance compared to >> the netlib blas/lapack. >> >> L.G. >> >> [1] This scheme is also used in Debian's atlas packages. >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From ivo.maljevic at gmail.com Fri Sep 4 23:50:35 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Fri, 4 Sep 2009 23:50:35 -0400 Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions In-Reply-To: <826c64da0909042017i67453678ied1a418aa3f66d67@mail.gmail.com> References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <20090903175927.GQ20987@localhost.ee.columbia.edu> <826c64da0909042017i67453678ied1a418aa3f66d67@mail.gmail.com> Message-ID: <826c64da0909042050x3992e7d1x7d15c2b806e79248@mail.gmail.com> Guilty conscience. The way I phrased it, it seemed like it was my idea to try different versions of these libraries, but it was actually Lev Givon's. Just to give credit where it is due. The answer to the question about scipy's test error would still interest me. Thanks, ivo 2009/9/4 Ivo Maljevic > For what it's worth, I've updated liblapack and libblas to 3.2.1 under > openSUSE and numpy runs properly now. > > It looks like Lev Givon is making similar progress with Mandriva, so the > situation is not as bad as I thought > initially. > > 2009/9/3 Lev Givon > > Received from Chris Colbert on Tue, Sep 01, 2009 at 12:55:34PM EDT: >> > What I would like to see is a distribution build numpy and scipy with >> > threaded atlas support. >> > >> > As it stands currently, Ubuntu "has atlas support", but it's not >> > threaded, and the packages are broken... >> > >> > Until that happens, I'll be rolling my own numpy and scipy from source. >> >> Mandriva's atlas packages consist of library rpms prebuilt for several >> stock architectures [1] that are installed along with an ld.so >> override that causes programs that dynamically link to a blas/lapack >> shared library to call its atlas equivalent. The source srpm can also >> be rebuilt on one's system so as to obtain a properly tuned library. >> >> I'm not sure whether the current prebuilt libraries are built with >> thread support. I'm also not sure whether the current (3.8.3) prebuilt >> libraries consistently provide any improved performance compared to >> the netlib blas/lapack. >> >> L.G. >> >> [1] This scheme is also used in Debian's atlas packages. >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From hardbyte at gmail.com Sat Sep 5 02:44:50 2009 From: hardbyte at gmail.com (Brian Thorne) Date: Sat, 5 Sep 2009 18:44:50 +1200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> Message-ID: <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> Thanks for the replies! I tried your code Sturla, and although it goes fast, it doesn't seem to be doing quite the same thing as the ndimage filter. If I am interpreting it right, the difference (only at a few local points) seems to be very large.
Here is an image showing intensity of each channel of the difference image: http://2.bp.blogspot.com/_lewp47C9PZI/SqIEHFH1EGI/AAAAAAAAAgU/0H50ceb8k7M/s1600-h/Screenshot2.png This is a plot looking at a single row, we can see that the difference spikes the whole intensity range: http://4.bp.blogspot.com/_lewp47C9PZI/SqIEG3PcAzI/AAAAAAAAAgM/GD2z-Zx7hg8/s1600-h/Screenshot3.png Given that the gaussians are allowed to be slightly different, shouldn't the difference in the output image be very very small as well? Brian 2009/9/5 Matthieu Brucher > I don't have OpenCV here, so I can't say for sure, but what is the > relative amplitude of the difference? OpenCV works on floats or > doubles? It may be due to a type difference or a small difference in > the algorithms. > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From matthieu.brucher at gmail.com Sat Sep 5 02:52:50 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 5 Sep 2009 08:52:50 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> Message-ID: 2009/9/5 Brian Thorne : > Thanks for the replies! I tried your code Sturla, and although it goes fast, > it doesn't seem to be doing quite the same thing as the ndimage filter. > If I am interpreting it right, the difference (only at a few local points) > seems to be very large. > Here is an image showing intensity of each channel of the difference image: > http://2.bp.blogspot.com/_lewp47C9PZI/SqIEHFH1EGI/AAAAAAAAAgU/0H50ceb8k7M/s1600-h/Screenshot2.png > This is a plot looking at a single row, we can see that the difference spikes > the whole intensity range: > http://4.bp.blogspot.com/_lewp47C9PZI/SqIEG3PcAzI/AAAAAAAAAgM/GD2z-Zx7hg8/s1600-h/Screenshot3.png > Given that the gaussians are allowed to be slightly different, shouldn't the > difference in the output image be very very small as well? > Brian One channel is usually 8 bits, so 256 values: the uint8 subtraction wraps modulo 256, so a difference of -1 shows up as 255. If you take that wraparound into account, the values will be similar. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher
From sturla at molden.no Sat Sep 5 03:20:59 2009 From: sturla at molden.no (Sturla Molden) Date: Sat, 05 Sep 2009 09:20:59 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> Message-ID: <4AA2115B.7090708@molden.no> Brian Thorne skrev: > Thanks for the replies! I tried your code Sturla, and although it goes > fast, it doesn't seem to be doing quite the same thing as the ndimage > filter. > Did you see gain < 1 near the edges? If you did, I'll leave it as an exercise how to fix it (it is incredibly easy).
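(For reference, an untested sketch of the fix being hinted at, using the gaussian2D posted earlier in this thread; `img` and `sigma` stand in for whatever is being filtered: measure the gain roll-off by running the filter over an all-ones image, then divide it out.)

import numpy as np
# Untested sketch: the IIR filter's gain drops below 1 near the edges;
# normalise it away by dividing by the filter's response to a constant image.
gain = gaussian2D(np.ones(img.shape), sigma)   # < 1 near the edges, ~1 elsewhere
img_blur = gaussian2D(img, sigma) / gain       # flat unit gain everywhere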
Sturla From stefan at sun.ac.za Sat Sep 5 04:55:20 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sat, 5 Sep 2009 10:55:20 +0200 Subject: [SciPy-User] Scikits Trac In-Reply-To: References: <9457e7c80909021524x2fbfd789kb4b676a6ed02e00f@mail.gmail.com> Message-ID: <9457e7c80909050155g544d03d6uf2558ca68fff2e66@mail.gmail.com> 2009/9/3 Tim Michelsen : > The link > SciKits developer resources - http://www.scipy.org/scipy/scikits/ > > on > http://scikits.appspot.com/contribute > needs also to be updated to point to the new site,. Thanks, fixed! Regards St?fan From hardbyte at gmail.com Sat Sep 5 04:58:00 2009 From: hardbyte at gmail.com (Brian Thorne) Date: Sat, 5 Sep 2009 20:58:00 +1200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> Message-ID: <1f10aea40909050158s5580a551l6440e1cd6d55ab2e@mail.gmail.com> Cheers Matthieu. I've gotten so used to assuming Python will take care of things to prevent integer overflow! Sturla, I didn't actually do a compare of the boundaries or anything with your code. I just threw a single channel of my webcam stream at it, and saw that the output was more like an edge detector. Doesn't matter tho, although interesting with the IIR filter, I think the OpenCV + SciPy ndimage filter is probably enough for me to worry about! Thanks! 2009/9/5 Matthieu Brucher > > One channel is usually 8bits, so 256 values. If you add a modulo, the > values will be similar. > > Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Sat Sep 5 07:07:25 2009 From: sturla at molden.no (Sturla Molden) Date: Sat, 05 Sep 2009 13:07:25 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909050158s5580a551l6440e1cd6d55ab2e@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> <1f10aea40909050158s5580a551l6440e1cd6d55ab2e@mail.gmail.com> Message-ID: <4AA2466D.9050200@molden.no> Brian Thorne skrev: > Sturla, I didn't actually do a compare of the boundaries or anything with your code. I just threw a single channel of my webcam stream at it, and saw that the output was more like an edge detector. What? Let's try this on the Lena S?derberg image: http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png import pylab from PIL import Image import numpy as np img = np.fromstring(Image.open('lenna.png').tostring(), dtype=np.uint8).reshape((512,512,3)) sigma = 10 gain = gaussian2D(np.ones((512,512)), sigma) # corrects edges for i in range(3): img[:,:,i] = gaussian2D(img[:,:,i], sigma) / gain Image.fromstring('RGB', (512,512), img.tostring())\ .save('lenna2.png') pylab.imshow(img) pylab.show() Here is the result I got: http://www.hostdump.com/images/lenna2.png It looks like blur to me, though... 
Regards, Sturla Molden From hardbyte at gmail.com Sat Sep 5 08:13:51 2009 From: hardbyte at gmail.com (Brian Thorne) Date: Sun, 6 Sep 2009 00:13:51 +1200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <4AA2466D.9050200@molden.no> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> <1f10aea40909050158s5580a551l6440e1cd6d55ab2e@mail.gmail.com> <4AA2466D.9050200@molden.no> Message-ID: <1f10aea40909050513w1495c94fq5fa749d2d438690@mail.gmail.com> I apologize, it was only a brief look, at one channel! I was clearly quite mistaken. For interests sake though I have now put your algorithm in along side the ndimage filt, as expected the outputs are not visibly different from each other (or from the OpenCV gaussian filter) http://3.bp.blogspot.com/_lewp47C9PZI/SqJSij80bjI/AAAAAAAAAg0/PFoWpCJX3gk/s1600-h/lena_opencv_ndfilt_iir.png But I am curious as to what causes the edges to do that in the IIR filter version? I made a pretty (IMHO) plot of the diff image seperating each channel: http://1.bp.blogspot.com/_lewp47C9PZI/SqJSUG7MJEI/AAAAAAAAAgs/s8aSE0FBCwI/s1600-h/lena_diff_ndfilt_iir.png This shows that there is still something happening at the edges because plotting the same graph between the output from OpenCV and ndimage: http://4.bp.blogspot.com/_lewp47C9PZI/SqJUN8qYLPI/AAAAAAAAAg8/eqHVYKICPzU/s1600-h/gaussian_diffs.png Cheers, Brian 2009/9/5 Sturla Molden > Brian Thorne skrev: > > > Sturla, I didn't actually do a compare of the boundaries or anything > with your code. I just threw a single channel of my webcam stream at it, > and saw that the output was more like an edge detector. > > > What? > > Let's try this on the Lena S?derberg image: > > http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png > > > import pylab > from PIL import Image > import numpy as np > > img = np.fromstring(Image.open('lenna.png').tostring(), > dtype=np.uint8).reshape((512,512,3)) > > sigma = 10 > > gain = gaussian2D(np.ones((512,512)), sigma) # corrects edges > > for i in range(3): > img[:,:,i] = gaussian2D(img[:,:,i], sigma) / gain > > Image.fromstring('RGB', (512,512), img.tostring())\ > .save('lenna2.png') > > pylab.imshow(img) > pylab.show() > > > Here is the result I got: > > http://www.hostdump.com/images/lenna2.png > > > It looks like blur to me, though... > > > Regards, > Sturla Molden > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Sep 5 09:53:43 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 05 Sep 2009 16:53:43 +0300 Subject: [SciPy-User] 3D plotting In-Reply-To: <43FB1E92.6030705@ntc.zcu.cz> References: <43FB1E92.6030705@ntc.zcu.cz> Message-ID: <1252158823.8021.0.camel@idol> ti, 2006-02-21 kello 15:07 +0100, Robert Cimrman kirjoitti: [clip] > * mplot3d: does not work with my version of matplotlib (0.80). I have > made the changes mentioned in the Cookbook to no avail. (Axes.__init__() > args apparently changed, as well as some other matplotlib object attributes) > > Any ideas? mplot3d looks great, I would really like to use it! I'd suggest just updating your Matplotlib library to version 0.99, or trying out Mayavi2. 
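With 0.99 the bundled mplot3d toolkit replaces the old cookbook recipe; a minimal, untested sketch of that API:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Untested sketch against the mplot3d API bundled with matplotlib 0.99.
fig = plt.figure()
ax = Axes3D(fig)
x, y = np.meshgrid(np.linspace(-2.0, 2.0, 40), np.linspace(-2.0, 2.0, 40))
ax.plot_surface(x, y, np.exp(-(x**2 + y**2)))   # a Gaussian bump
plt.show()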
-- Pauli Virtanen From sturla at molden.no Sat Sep 5 15:45:57 2009 From: sturla at molden.no (Sturla Molden) Date: Sat, 05 Sep 2009 21:45:57 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909050513w1495c94fq5fa749d2d438690@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> <1f10aea40909050158s5580a551l6440e1cd6d55ab2e@mail.gmail.com> <4AA2466D.9050200@molden.no> <1f10aea40909050513w1495c94fq5fa749d2d438690@mail.gmail.com> Message-ID: <4AA2BFF5.2060002@molden.no> Brian Thorne skrev: > I made a pretty (IMHO) plot of the diff image seperating each channel: > http://1.bp.blogspot.com/_lewp47C9PZI/SqJSUG7MJEI/AAAAAAAAAgs/s8aSE0FBCwI/s1600-h/lena_diff_ndfilt_iir.png Can you post the code for this so I could check a couple of things? Sturla Molden From hardbyte at gmail.com Sat Sep 5 16:28:08 2009 From: hardbyte at gmail.com (Brian Thorne) Date: Sun, 6 Sep 2009 08:28:08 +1200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <4AA2BFF5.2060002@molden.no> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> <1f10aea40909050158s5580a551l6440e1cd6d55ab2e@mail.gmail.com> <4AA2466D.9050200@molden.no> <1f10aea40909050513w1495c94fq5fa749d2d438690@mail.gmail.com> <4AA2BFF5.2060002@molden.no> Message-ID: <1f10aea40909051328x342972b7qec63aea45bda5748@mail.gmail.com> Sure thing. Your code can be copied in verbatim for gaussian2D etc... I may have missed a few imports here, obviously these functions are spread out over a few files. from __future__ import division import numpy as np from numpy import array, uint8 from scipy import signal, ndimage import numpy as np from scipy import signal from opencv import adaptors class scipyFromOpenCV(object): """This decorator can be used to wrap a function that takes and returns a numpy array into one that takes and retuns an opencv CvMat. Seems to be a big performance hit tho, ~2ms per conversion. 
TODO: look into magic string buffer methods """ def __init__(self, f): self.f = f def __call__(self, image): # Convert CvMat to ndarray np_image = adaptors.Ipl2NumPy(image) # Call the original function np_image_filtered = self.f(np_image) # Convert back to CvMat return adaptors.NumPy2Ipl(np_image_filtered) def plot_seperate_rgb(diff): """Take an RGB image and plot each of the three channels in its own subplot, coloured: Red, Green and Blue""" import matplotlib.pyplot as plt import matplotlib.cm as cm plt.figure() plt.subplot(1,3,1) plt.title("R") im1 = plt.imshow(diff[:,:,0], cmap=cm.Reds) CB1 = plt.colorbar(im1, orientation='horizontal') plt.subplot(1,3,2) plt.title("G") im2 = plt.imshow(diff[:,:,1], cmap=cm.Greens) CB2 = plt.colorbar(im2, orientation='horizontal') plt.subplot(1,3,3) plt.title("B") im3 = plt.imshow(diff[:,:,2], cmap=cm.Blues) CB3 = plt.colorbar(im3, orientation='horizontal') #user may have to call plt.show() depending on env @scipyFromOpenCV def mlGaussianBlur(image): """Method using IIR filter code from thread on the SciPy-User at scipy.org mailing list""" img = array(image) sigma = opencvFilt2sigma(43.0) gain = gaussian2D(np.ones((512,512)), sigma) # corrects edges for i in range(3): img[:,:,i] = gaussian2D(img[:,:,i], sigma) / gain return img @scipyFromOpenCV def gaussianBlur(np_image): """Blur an image with scipy""" sigma = opencvFilt2sigma(43.0) result = ndimage.filters.gaussian_filter(np_image, sigma=(sigma, sigma, 0), order=0, mode='reflect' ) return result def testGaussianBlur(): """Test that the guassian blur function gives the exact same output in Python and in C++ with OpenCV and ideally with SciPy. Can run this test with: nosetests --with-doctest blur_scipy.py -v """ from pylab import imread from opencv import highgui import blur_opencv # a seperate file with the opencv gaussian operation # Using Lena image create tests image. 
image_filename = "/usr/share/doc/opencv-doc/examples/c/lena.jpg" i = highgui.cvLoadImage(image_filename) # Carry out the filtering py_scipy = mlGaussianBlur(i) # note - it is decorated to convert between cvMat and NumPy py_scipy2 = gaussianBlur(i) py_opencv = blur_opencv.gaussianBlur(i) # Save the outputs as jpg files highgui.cvSaveImage("gaussian_scipy_iir.jpg", py_scipy) highgui.cvSaveImage("gaussian_scipy_ndfilt.jpg", py_scipy2) highgui.cvSaveImage("gaussian_opencv.jpg", py_opencv) # Load in the image data with scipy python_opencv_image = imread("gaussian_opencv.jpg") python_scipy_image = imread("gaussian_scipy_ndfilt.jpg") python_scipy2_image = imread("gaussian_scipy_iir.jpg") diff = uint8( abs( python_opencv_image.astype(float) - python_scipy_image.astype(float) )) diff2 = uint8( abs( python_opencv_image.astype(float) - python_scipy2_image.astype(float) )) diff3 = uint8( abs( python_scipy_image.astype(float) - python_scipy2_image.astype(float) )) # For visual inspection: from pylab import show, imshow, figure, subplot, title # Show the outputs figure() subplot(1,3,1); title("The OpenCV Output (Py and C++)") imshow(python_opencv_image) subplot(1,3,2); title("SciPy: IIR filter") imshow(python_scipy_image) subplot(1,3,3); title("SciPy: ndimage.filters.gaussian_filter") imshow(python_scipy2_image) # ideally these are black figure() subplot(1,3,1) imshow(diff) subplot(1,3,2) imshow(diff2) subplot(1,3,3) imshow(diff3) # ideally these are very light plot_seperate_rgb(diff) plot_seperate_rgb(diff2) plot_seperate_rgb(diff3) show() 2009/9/6 Sturla Molden > Brian Thorne skrev: > > I made a pretty (IMHO) plot of the diff image seperating each channel: > > > http://1.bp.blogspot.com/_lewp47C9PZI/SqJSUG7MJEI/AAAAAAAAAgs/s8aSE0FBCwI/s1600-h/lena_diff_ndfilt_iir.png > Can you post the code for this so I could check a couple of things? > > > Sturla Molden > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Sat Sep 5 21:59:18 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 06 Sep 2009 03:59:18 +0200 Subject: [SciPy-User] Gaussian Filter In-Reply-To: <1f10aea40909050513w1495c94fq5fa749d2d438690@mail.gmail.com> References: <1f10aea40909040549l3256683ft11e473e751644366@mail.gmail.com> <1f10aea40909040619h41d4784csdb98f33080cab838@mail.gmail.com> <1f10aea40909040652l7b125e89hd2d7e78d91c8fd58@mail.gmail.com> <1f10aea40909040733i373aae9dxa75ba110dc5e18cb@mail.gmail.com> <1f10aea40909042344t30dd1f70ob20279a2a81f02f9@mail.gmail.com> <1f10aea40909050158s5580a551l6440e1cd6d55ab2e@mail.gmail.com> <4AA2466D.9050200@molden.no> <1f10aea40909050513w1495c94fq5fa749d2d438690@mail.gmail.com> Message-ID: <4AA31776.2020900@molden.no> Brian Thorne skrev: > But I am curious as to what causes the edges to do that in the IIR > filter version? > > I made a pretty (IMHO) plot of the diff image seperating each channel: > http://1.bp.blogspot.com/_lewp47C9PZI/SqJSUG7MJEI/AAAAAAAAAgs/s8aSE0FBCwI/s1600-h/lena_diff_ndfilt_iir.png In your code you save to JPEG (lossy compression) before taking the difference. Don't do that, you incur some error from the compression. Mind you that neither filter are 'correct'. The Gaussian filters in ndimage and OpenCV truncate the Gaussian, my version use a recursive approximation. 
The Gaussian filters in ndimage and OpenCV have larger truncation errors near the edges: as the kernel overlaps with an edge, the kernel is truncated further and the truncation error increases. Thus, you were looking at the discrepancy between two approximations. You cannot attribute the edge difference to error in the IIR approximation from this result. First, it seems it helps to pad with some zeros (I forgot to do that, sorry!), e.g. setting padding=3*sigma. def gaussian2D(image, sigma, padding=None): if padding is None: padding = 3*sigma n,m = image.shape[0],image.shape[1] tmp = zeros((n + 2*padding, m + 2*padding)) if tmp.shape[0] < 4: raise ValueError, \ 'Image and padding too small' if tmp.shape[1] < 4: raise ValueError, \ 'Image and padding too small' B,A = __gausscoeff(sigma) tmp[padding:n+padding,padding:m+padding] = image tmp = lfilter(B, A, tmp, axis=0) tmp = flipud(tmp) tmp = lfilter(B, A, tmp, axis=0) tmp = flipud(tmp) tmp = lfilter(B, A, tmp, axis=1) tmp = fliplr(tmp) tmp = lfilter(B, A, tmp, axis=1) tmp = fliplr(tmp) return tmp[padding:n+padding,padding:m+padding] Now I get a maximum illumination difference of 9 (range 0-255) between ndimage and IIR. Another image (Forest.jpg, a sample image in Windows Vista) gave a maximum of 5. The biggest discrepancy was near the edges here as well. There are still discrepancies at the edges. Which filter is the culprit? We can easily find 'facit' (a reference answer) using an FFT. By truncating at 3 standard deviations, the truncation error from FFT convolution will be very small: def gaussian2D_FFT(image, sigma, padding=None): if padding is None: padding = 3*sigma n,m = image.shape[0],image.shape[1] tmp = zeros((n + 2*padding, m + 2*padding)) if tmp.shape[0] < 4: raise ValueError, \ 'Image and padding too small' if tmp.shape[1] < 4: raise ValueError, \ 'Image and padding too small' tmp[:n,:m] = image x,y = meshgrid(range(tmp.shape[1]),range(tmp.shape[0])) B = exp(-(((x-padding)**2) + ((y-padding)**2))/(2.*sigma*sigma)) B[:] *= 1./B.sum() retv = irfft2(rfft2(B)*rfft2(tmp)) return retv[padding:n+padding,padding:m+padding] This FFT-based filter also has loss of gain near the edges due to padding, so we adjust it accordingly: img = np.fromstring(Image.open('lenna.png').tostring(), dtype=np.uint8)\ .reshape((512,512,3)).astype(float) gain = gaussian2D_FFT(np.ones(img.shape[:2]), 10) for i in range(3): img[:,:,i] = gaussian2D_FFT(img[:,:,i], 10) / (gain + 1E-100) Now, comparing what we get from scipy.ndimage.filters.gaussian_filter and IIR with the FFT 'facit' (very small truncation error): * IIR vs. FFT: maximum illumination difference: 3 * ndimage vs. FFT: maximum illumination difference: 9 (close to edges) * ndimage vs. IIR: maximum illumination difference: 9 (close to edges) Which in fact means that the IIR approximation is more accurate than OpenCV and ndimage's Gaussian filter! The edge difference in fact comes from truncation error in ndimage and OpenCV, not from the IIR being inaccurate. It is due to increased truncation error near the edges. I have prepared images similar to yours: IIR vs. FFT: http://hostdump.com/images/iirvsfft.png ndimage vs. FFT: http://hostdump.com/images/ndarrayvsf.png ndimage vs. IIR: http://hostdump.com/images/ndarrayvsi.png Thus, the IIR is the better approximation to the Gaussian filter.
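(The 'maximum illumination difference' figures above can be reproduced with a small helper along these lines; the function name is mine, not part of the code above:)

import numpy as np

def max_illum_diff(a, b):
    # Largest per-pixel difference between two filter outputs,
    # on the 0-255 illumination scale.
    return int(np.abs(a.astype(float) - b.astype(float)).max())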
Regards, Sturla Molden From cournape at gmail.com Sun Sep 6 04:59:10 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 6 Sep 2009 17:59:10 +0900 Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions In-Reply-To: <20090903175927.GQ20987@localhost.ee.columbia.edu> References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <20090903175927.GQ20987@localhost.ee.columbia.edu> Message-ID: <5b8d13220909060159t2026069fsd69daa7f31621209@mail.gmail.com> On Fri, Sep 4, 2009 at 2:59 AM, Lev Givon wrote: > I'm not sure whether the current prebuilt libraries are built with > thread support. I'm also not sure whether the current (3.8.3) prebuilt > libraries consistently provide any improved performance compared to > the netlib blas/lapack. The netlib blas/lapack built with gfortran has quite poor performance. Using SSE/SSE2 alone gives a significant boost. Of course, building your own will give improved performance. cheers, David From contact at pythonxy.com Sun Sep 6 05:51:29 2009 From: contact at pythonxy.com (Pierre Raybaut) Date: Sun, 06 Sep 2009 11:51:29 +0200 Subject: [SciPy-User] [ Python(x,y) ] New release : 2.1.15 Message-ID: <4AA38621.2090005@pythonxy.com> Hi all, Python(x,y) is a free scientific-oriented Python Distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Python(x,y) 2.1.15 now includes Spyder, the Scientific PYthon Development EnviRonment (v1.0.0beta6). Spyder is an excellent entry point for scientific users who begin with Python thanks to features that were inspired from popular commercial scientific languages like MATLAB ("Workspace": variable explorer, integrated plots, integrated help, ...). Release 2.1.15 is now available on http://www.pythonxy.com: - All-in-One Installer ("Full Edition"), - Plugin Installer -- to be downloaded with xyweb, - Update This is the last Python(x,y) release based on Python 2.5. Python(x,y) for Python 2.6 is already available as a beta release (v2.6.0 final will soon be available): http://www.pythonxy.com/dl.php?file=windows/Python(x,y)-2.6.0beta4.exe&mirror=ntua Even if it's still a beta release, Python(x,y) 2.6.0beta4 is already as stable as v2.1.15 - the only difference between v2.6.0beta4 and v2.6.0final will be the plugin list: some plugins were not ported to v2.6 yet, that's all. Special thanks to Chris Ps for the NTUA Python(x,y) download mirror. Changes history Version 2.1.15 (09-05-2009) * Added: o Spyder 1.0.0beta6 - Scientific PYthon Development EnviRonment (PKA Pydee) * Updated: o SciPy 0.7.1 o matplotlib 0.99.0 o scikits.timeseries 0.91.2 o Sympy 0.6.5 o numexpr 1.3.1 o Pydev 1.5.0 o xy 1.0.29 o IPython 0.10 o pywin32 2.13.2 (bugfix) o wxPython 2.8.10.1 Regards, Pierre Raybaut From samehm at gmail.com Sun Sep 6 11:32:29 2009 From: samehm at gmail.com (SamehK) Date: Sun, 6 Sep 2009 08:32:29 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC Message-ID: <25318825.post@talk.nabble.com> Hi everyone, I was just trying out weave since I could use the speedup for a project, but the simplest example fails. I was wondering if it's an incompatibility with VS 2008. I am using Python 2.6.2, Scipy 0.7.1, Numpy 1.3.0b1, which I believe are all the latest stable versions. 
from scipy.weave import inline
inline("int i;")

No module named msvccompiler in numpy.distutils; trying from distutils
Missing compiler_cxx fix for MSVCCompiler
Found executable C:\Programs\Microsoft Visual Studio 9.0\VC\BIN\cl.exe
sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp
C:\Programs\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) : error C2146: syntax error : missing ';' before identifier '__attribute__'
...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) : error C2065: 'unused' : undeclared identifier
...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) : error C3861: '__attribute__': identifier not found
...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) : error C2146: syntax error : missing ';' before identifier '__attribute__'
...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) : error C2065: 'unused' : undeclared identifier
...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) : error C3861: '__attribute__': identifier not found

I also tried the print_example.py in site-packages\scipy\weave\examples and it also fails, however, the fibonacci.py example that uses ext_tools works fine. What am I missing?

Thank you.
Sam

--
View this message in context: http://www.nabble.com/Weave-Inline-with-MSVC-tp25318825p25318825.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From samehm at gmail.com Sun Sep 6 14:24:15 2009
From: samehm at gmail.com (SamehK)
Date: Sun, 6 Sep 2009 11:24:15 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC
In-Reply-To: <25318825.post@talk.nabble.com>
References: <25318825.post@talk.nabble.com>
Message-ID: <25320349.post@talk.nabble.com>

By checking the generated cpp file, I was wondering if there's a bug somewhere that causes weave to dump "__attribute__" and "unused" when they haven't been defined.

The cpp file:

static PyObject* compiled_func(PyObject*self, PyObject* args)
{
    py::object return_val;
    int exception_occured = 0;
    PyObject *py__locals = NULL;
    PyObject *py__globals = NULL;

    if(!PyArg_ParseTuple(args,"OO:compiled_func",&py__locals,&py__globals))
        return NULL;
    try
    {
        PyObject* raw_locals __attribute__ ((unused));
        raw_locals = py_to_raw_dict(py__locals,"_locals");
        PyObject* raw_globals __attribute__ ((unused));
        raw_globals = py_to_raw_dict(py__globals,"_globals");
        /* argument conversion code */
        /* inline code */
        /* NDARRAY API VERSION 1000009 */
        int i; /*I would like to fill in changed locals and globals here...*/

    }
    catch(...)
    {
        return_val = py::object();
        exception_occured = 1;
    }
    /* cleanup code */
    if(!(PyObject*)return_val && !exception_occured)
    {

        return_val = Py_None;
    }

    return return_val.disown();
}

SamehK wrote:
>
> Hi everyone,
> I was just trying out weave since I could use the speedup for a project,
> but the simplest example fails. I was wondering if it's an incompatibility
> with VS 2008. I am using Python 2.6.2, Scipy 0.7.1, Numpy 1.3.0b1, which I
> believe are all the latest stable versions.
>
> from scipy.weave import inline
> inline("int i;")
>
> No module named msvccompiler in numpy.distutils; trying from distutils
> Missing compiler_cxx fix for MSVCCompiler
> Found executable C:\Programs\Microsoft Visual Studio 9.0\VC\BIN\cl.exe
> sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp
> C:\Programs\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning
> C4530: C++ exception handler used, but unwind semantics are not enabled.
> Specify /EHsc
> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) :
> error C2146: syntax error : missing ';' before identifier '__attribute__'
> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) :
> error C2065: 'unused' : undeclared identifier
> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) :
> error C3861: '__attribute__': identifier not found
> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) :
> error C2146: syntax error : missing ';' before identifier '__attribute__'
> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) :
> error C2065: 'unused' : undeclared identifier
> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) :
> error C3861: '__attribute__': identifier not found
>
> I also tried the print_example.py in site-packages\scipy\weave\examples
> and it also fails, however, the fibonacci.py example that uses ext_tools
> works fine. What am I missing?
>
> Thank you.
> Sam
>

--
View this message in context: http://www.nabble.com/Weave-Inline-with-MSVC-tp25318825p25320349.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From samehm at gmail.com Sun Sep 6 14:54:59 2009
From: samehm at gmail.com (SamehK)
Date: Sun, 6 Sep 2009 11:54:59 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC
In-Reply-To: <25320349.post@talk.nabble.com>
References: <25318825.post@talk.nabble.com> <25320349.post@talk.nabble.com>
Message-ID: <25320644.post@talk.nabble.com>

So, someone broke the current weave under Windows by introducing code that is gcc-specific! In inline_tools.py, modify lines 84 and 87 by removing "__attribute__ ((unused))" from both if you want inline to work with VS.

    try_code = 'try                              \n' \
               '{                                \n' \
               '    PyObject* raw_locals __attribute__ ((unused));\n' \
               '    raw_locals = py_to_raw_dict(' \
                       'py__locals,"_locals");\n' \
               '    PyObject* raw_globals __attribute__ ((unused));\n' \

Hope this helps someone out. I already wasted a day on this nonsense.

SamehK wrote:
>
> By checking the generated cpp file, I was wondering if there's a bug
> somewhere that causes weave to dump "__attribute__" and "unused" when they
> haven't been defined.
>
> The cpp file:
> static PyObject* compiled_func(PyObject*self, PyObject* args)
> {
>     py::object return_val;
>     int exception_occured = 0;
>     PyObject *py__locals = NULL;
>     PyObject *py__globals = NULL;
>
>     if(!PyArg_ParseTuple(args,"OO:compiled_func",&py__locals,&py__globals))
>         return NULL;
>     try
>     {
>         PyObject* raw_locals __attribute__ ((unused));
>         raw_locals = py_to_raw_dict(py__locals,"_locals");
>         PyObject* raw_globals __attribute__ ((unused));
>         raw_globals = py_to_raw_dict(py__globals,"_globals");
>         /* argument conversion code */
>         /* inline code */
>         /* NDARRAY API VERSION 1000009 */
>         int i; /*I would like to fill in changed locals and globals
>         here...*/
>
>     }
>     catch(...)
>     {
>         return_val = py::object();
>         exception_occured = 1;
>     }
>     /* cleanup code */
>     if(!(PyObject*)return_val && !exception_occured)
>     {
>
>         return_val = Py_None;
>     }
>
>     return return_val.disown();
> }
>
>
>
> SamehK wrote:
>>
>> Hi everyone,
>> I was just trying out weave since I could use the speedup for a project,
>> but the simplest example fails. I was wondering if it's an
>> incompatibility with VS 2008. I am using Python 2.6.2, Scipy 0.7.1, Numpy
>> 1.3.0b1, which I believe are all the latest stable versions.
>>
>> from scipy.weave import inline
>> inline("int i;")
>>
>> No module named msvccompiler in numpy.distutils; trying from distutils
>> Missing compiler_cxx fix for MSVCCompiler
>> Found executable C:\Programs\Microsoft Visual Studio 9.0\VC\BIN\cl.exe
>> sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp
>> C:\Programs\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning
>> C4530: C++ exception handler used, but unwind semantics are not enabled.
>> Specify /EHsc
>> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) :
>> error C2146: syntax error : missing ';' before identifier '__attribute__'
>> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) :
>> error C2065: 'unused' : undeclared identifier
>> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(653) :
>> error C3861: '__attribute__': identifier not found
>> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) :
>> error C2146: syntax error : missing ';' before identifier '__attribute__'
>> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) :
>> error C2065: 'unused' : undeclared identifier
>> ...\python26_compiled\sc_40d3d8fa3c65de8979d50fc530eb0b7f12.cpp(655) :
>> error C3861: '__attribute__': identifier not found
>>
>> I also tried the print_example.py in site-packages\scipy\weave\examples
>> and it also fails, however, the fibonacci.py example that uses ext_tools
>> works fine. What am I missing?
>>
>> Thank you.
>> Sam
>>
>
>

--
View this message in context: http://www.nabble.com/Weave-Inline-with-MSVC-tp25318825p25320644.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From sturla at molden.no Sun Sep 6 16:54:22 2009
From: sturla at molden.no (Sturla Molden)
Date: Sun, 06 Sep 2009 22:54:22 +0200
Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC
In-Reply-To: <25320644.post@talk.nabble.com>
References: <25318825.post@talk.nabble.com> <25320349.post@talk.nabble.com> <25320644.post@talk.nabble.com>
Message-ID: <4AA4217E.5070809@molden.no>

SamehK skrev:
> So, someone broke the current weave under windows by introducing code that is
> gcc-specific!
>
AFAIK, gcc works under Windows as well (I am using it, so it must work).
>
> Go and get a professional compiler instead of that MSVC toy of yours:
>
> http://www.equation.com/servlet/equation.cmd?call=fortran

... MSVC IS a professional compiler (because if it isn't, why should gcc be one?), and I don't think that weave not supporting MSVC is a good thing in the long run.

Matthieu
--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From pav at iki.fi Sun Sep 6 16:58:52 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 06 Sep 2009 23:58:52 +0300
Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC
In-Reply-To: <25320644.post@talk.nabble.com>
References: <25318825.post@talk.nabble.com> <25320349.post@talk.nabble.com> <25320644.post@talk.nabble.com>
Message-ID: <1252270732.27162.28.camel@idol>

Sun, 2009-09-06 at 11:54 -0700, SamehK wrote:
> So, someone broke the current weave under windows by introducing code that is
> gcc-specific!
> In inline_tools.py, modify lines 84 and 87 by removing "__attribute__
> ((unused))" from both if you want inline to work with VS.
>
>     try_code = 'try                              \n' \
>                '{                                \n' \
>                '    PyObject* raw_locals __attribute__ ((unused));\n' \
>                '    raw_locals = py_to_raw_dict(' \
>                        'py__locals,"_locals");\n' \
>                '    PyObject* raw_globals __attribute__ ((unused));\n' \
>
> Hope this helps someone out. I already wasted a day on this nonsense.

Broken it is. Should be fixed in r5919. Verifying the fix would be appreciated.

Is this caught by Scipy's test suite? If not, a new test should be added -- #994 can be closed after this:

http://projects.scipy.org/scipy/ticket/994

Thanks,
Pauli Virtanen

From sturla at molden.no Sun Sep 6 17:03:49 2009
From: sturla at molden.no (Sturla Molden)
Date: Sun, 06 Sep 2009 23:03:49 +0200
Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC
In-Reply-To: 
References: <25318825.post@talk.nabble.com> <25320349.post@talk.nabble.com> <25320644.post@talk.nabble.com> <4AA4217E.5070809@molden.no>
Message-ID: <4AA423B5.2040104@molden.no>

Matthieu Brucher skrev:
> MSVC IS a professional compiler (because if it isn't, why should gcc
> be one?),
Because MSVC doesn't support ISO C.

From samehm at gmail.com Sun Sep 6 19:33:57 2009
From: samehm at gmail.com (SamehK)
Date: Sun, 6 Sep 2009 16:33:57 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC
In-Reply-To: <4AA4217E.5070809@molden.no>
References: <25318825.post@talk.nabble.com> <25320349.post@talk.nabble.com> <25320644.post@talk.nabble.com> <4AA4217E.5070809@molden.no>
Message-ID: <25323233.post@talk.nabble.com>

__attribute__ ((unused)) is not a standard directive; it's a gcc-specific directive. The support for a compiler should not be based on how "standard" it is, given that all compilers maintain their own sets of hacks, but on how popular and widely used it is. Grow up and post something useful.

Sturla Molden-2 wrote:
>
> SamehK skrev:
>> So, someone broke the current weave under windows by introducing code
>> that is
>> gcc-specific!
>>
> AFAIK, gcc works under Windows as well (I am using it, so it must work).
>
> Go and get a professional compiler instead of that MSVC toy of yours:
>
> http://www.equation.com/servlet/equation.cmd?call=fortran
>
>
> S.M.
>
>
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>

--
View this message in context: http://www.nabble.com/Weave-Inline-with-MSVC-tp25318825p25323233.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From cournape at gmail.com Sun Sep 6 20:35:53 2009
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 7 Sep 2009 09:35:53 +0900
Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC
In-Reply-To: <4AA423B5.2040104@molden.no>
References: <25318825.post@talk.nabble.com> <25320349.post@talk.nabble.com> <25320644.post@talk.nabble.com> <4AA4217E.5070809@molden.no> <4AA423B5.2040104@molden.no>
Message-ID: <5b8d13220909061735n332b644dj86d6bc98ab1e2e36@mail.gmail.com>

On Mon, Sep 7, 2009 at 6:03 AM, Sturla Molden wrote:
> Matthieu Brucher skrev:
>> MSVC IS a professional compiler (because if it isn't, why should gcc
>> be one?),
> Because MSVC doesn't support ISO C.

But weave generates C++, not C. C support in MSVC is awful, but C++ is pretty good.

cheers,

David

From matthieu.brucher at gmail.com Mon Sep 7 01:35:27 2009
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 7 Sep 2009 07:35:27 +0200
Subject: [SciPy-User] [SciPy-user] Weave Inline with MSVC
In-Reply-To: <4AA423B5.2040104@molden.no>
References: <25318825.post@talk.nabble.com> <25320349.post@talk.nabble.com> <25320644.post@talk.nabble.com> <4AA4217E.5070809@molden.no> <4AA423B5.2040104@molden.no>
Message-ID: 

2009/9/6 Sturla Molden :
> Matthieu Brucher skrev:
>> MSVC IS a professional compiler (because if it isn't, why should gcc
>> be one?),
> Because MSVC doesn't support ISO C.

You mean C99? As David said, I don't see the point with Weave.

Matthieu
--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From josef.pktd at gmail.com Mon Sep 7 12:03:04 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 7 Sep 2009 12:03:04 -0400
Subject: [SciPy-User] example scipy.signal.lfilter and ARMA
Message-ID: <1cd32cbb0909070903m34db5438nf35b1ff284027313@mail.gmail.com>

I started to partially clean up an older function of mine to simulate and estimate univariate ARMA processes. It started out as an example of how signal.lfilter can be used for time series analysis. It's pretty fast and doesn't have a single python loop. The results are checked by Monte Carlo and some simple examples, but not compared with other statistical packages. And the file is not yet cleaned up enough; there is still a lot of work to do before it can be included in statsmodels. But I thought I'd show an example of what can be done with some of the scipy functions like signal.lfilter.
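To give an idea of the intended usage, a minimal simulate-and-fit round trip with the attached class looks about like this (a sketch only; it assumes the attachment below is saved on your path as, say, arma_lfilter.py - that file name is my own placeholder):

import numpy as np
from arma_lfilter import ARIMA  # placeholder module name for the attachment

np.random.seed(12345)
arest = ARIMA()
# simulate 1000 observations of (1 - 0.8 L) y_t = (1 + 0.4 L) eta_t,
# with the lag polynomials given in lfilter's sign convention
y = arest.generate_sample([1.0, -0.8], [1.0, 0.4], 1000, std=0.1)
rh, cov_x, infodict, mesg, ier = arest.fit(y, 1, 1)
print rh    # should be close to [-0.8, 0.4]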
Josef

-------------- next part --------------
'''ARMA process and estimation with scipy.signal.lfilter

2009-09-06: copied from try_signal.py
    reparameterized same as signal.lfilter (positive coefficients)

Notes
-----
* pretty fast
* checked with Monte Carlo and cross comparison with statsmodels
  yule_walker for AR; numbers are close but not identical to yule_walker;
  not compared to other statistics packages, no degrees of freedom correction
* good for one-time calculations for an entire time series, not for
  recursive prediction
* class structure not very clean yet
* many one-liners with scipy.signal, but takes time to figure out usage
* missing result statistics, e.g. t-values
* no criteria for choice of number of lags
* no constant term in ARMA process
* no integration, differencing for ARIMA
* written without textbook; works, but not sure about everything
  (brief check)
* two names for lag polynomials ar = rhoy, ma = rhoe ?

Properties:
Judge, ... (1985): The Theory and Practice of Econometrics

BigJudge p. 237ff:
If the time series process is a stationary ARMA(p,q), then
minimizing the sum of squares is asymptotically (as T -> inf)
equivalent to the exact Maximum Likelihood Estimator

Because Least Squares conditional on the initial information
does not use all information, in small samples exact MLE can
be better.

Without the normality assumption, the least squares estimator
is still consistent under suitable conditions, however not
efficient.

Author: josefpktd
License: BSD
'''

import numpy as np
from scipy import signal, optimize

class ARIMA(object):
    '''currently ARMA only, no differencing used - no I

    reparameterized rhoy(L) y_t = rhoe(L) eta_t
    '''
    def __init__(self):
        pass

    def fit(self, x, p, q, rhoy0=None, rhoe0=None):
        '''estimate lag coefficients of an ARMA process by least squares

        Parameters
        ----------
        x : array, 1d
            time series data
        p : int
            number of AR lags to estimate
        q : int
            number of MA lags to estimate
        rhoy0, rhoe0 : array_like (optional)
            starting values for estimation

        Returns
        -------
        rh, cov_x, infodict, mesg, ier : output of scipy.optimize.leastsq
            rh : estimate of lag parameters, concatenated [rhoy, rhoe]
            cov_x : unscaled (!) covariance matrix of coefficient estimates
        '''
        def errfn(rho):
            #rhoy, rhoe = rho
            rhoy = np.concatenate(([1], rho[:p]))
            rhoe = np.concatenate(([1], rho[p:]))
            etahatr = signal.lfilter(rhoy, rhoe, x)
            #print rho, np.sum(etahatr*etahatr)
            return etahatr

        if rhoy0 is None:
            rhoy0 = 0.5 * np.ones(p)
        if rhoe0 is None:
            rhoe0 = 0.5 * np.ones(q)

        usels = True
        if usels:
            rh, cov_x, infodict, mesg, ier = \
                optimize.leastsq(errfn, np.r_[rhoy0, rhoe0],
                                 ftol=1e-10, full_output=True)
        else:
            # fmin_bfgs is slow or doesn't work yet
            errfnsum = lambda rho: np.sum(errfn(rho)**2)
            #xopt, {fopt, gopt, Hopt, func_calls, grad_calls
            rh, fopt, gopt, cov_x, _, _, ier = \
                optimize.fmin_bfgs(errfnsum, np.r_[rhoy0, rhoe0],
                                   maxiter=2, full_output=True)
            infodict, mesg = None, None

        self.rh = rh
        self.rhoy = np.concatenate(([1], rh[:p]))
        self.rhoe = np.concatenate(([1], rh[p:]))  #rh[-q:])) doesn't work for q=0
        self.error_estimate = errfn(rh)
        return rh, cov_x, infodict, mesg, ier
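    # Note on the estimation above: the model is rhoy(L) y_t = rhoe(L) eta_t,
    # so the innovations are eta_t = (rhoy(L)/rhoe(L)) y_t. That rational lag
    # polynomial is exactly the transfer function that
    # signal.lfilter(rhoy, rhoe, y) applies, so filtering the data returns
    # the one-step prediction errors (conditional on zero initial values),
    # and least squares on them gives the estimate described in the
    # docstring above.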
    def errfn(self, rho=None, p=None, x=None):
        ''' duplicate -> remove one
        '''
        #rhoy, rhoe = rho
        if not rho is None:
            rhoy = np.concatenate(([1], rho[:p]))
            rhoe = np.concatenate(([1], rho[p:]))
        else:
            rhoy = self.rhoy
            rhoe = self.rhoe
        etahatr = signal.lfilter(rhoy, rhoe, x)
        #print rho, np.sum(etahatr*etahatr)
        return etahatr

    def predicted(self, rhoy=None, rhoe=None):
        '''past predicted values of time series
        just added, not checked yet
        '''
        # NOTE: relies on self.x, which fit() does not set
        if rhoy is None:
            rhoy = self.rhoy
        if rhoe is None:
            rhoe = self.rhoe
        return self.x + self.error_estimate

    def forecast(self, ar=None, ma=None, nperiod=10):
        eta = np.r_[self.error_estimate, np.zeros(nperiod)]
        if ar is None:
            ar = self.rhoy
        if ma is None:
            ma = self.rhoe
        return signal.lfilter(ma, ar, eta)

    def generate_sample(self, ar, ma, nsample, std=1):
        eta = std * np.random.randn(nsample)
        return signal.lfilter(ma, ar, eta)

    # the second definition below shadows the one above (kept from the original)
    def generate_sample(self, ar, ma, nsample, std=1, distrvs=np.random.randn):
        eta = std * distrvs(nsample)
        return signal.lfilter(ma, ar, eta)

def impulse_response(ar, ma, nobs=100):
    '''get the impulse response function for ARMA process

    Parameters
    ----------
    ma : array_like
        moving average lag polynomial
    ar : array_like
        auto regressive lag polynomial
    nobs : int
        number of observations to calculate

    Examples
    --------
    AR(1)

    >>> impulse_response([1.0, -0.8], [1.], nobs=10)
    array([ 1.        ,  0.8       ,  0.64      ,  0.512     ,  0.4096    ,
            0.32768   ,  0.262144  ,  0.2097152 ,  0.16777216,  0.13421773])

    this is the same as

    >>> 0.8**np.arange(10)
    array([ 1.        ,  0.8       ,  0.64      ,  0.512     ,  0.4096    ,
            0.32768   ,  0.262144  ,  0.2097152 ,  0.16777216,  0.13421773])

    MA(2)

    >>> impulse_response([1.0], [1., 0.5, 0.2], nobs=10)
    array([ 1. ,  0.5,  0.2,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ])

    ARMA(1,2)

    >>> impulse_response([1.0, -0.8], [1., 0.5, 0.2], nobs=10)
    array([ 1.        ,  1.3       ,  1.24      ,  0.992     ,  0.7936    ,
            0.63488   ,  0.507904  ,  0.4063232 ,  0.32505856,  0.26004685])
    '''
    impulse = np.zeros(nobs)
    impulse[0] = 1.
    return signal.lfilter(ma, ar, impulse)

def mcarma22(niter=10):
    nsample = 1000
    #ar = [1.0, 0, 0]
    ar = [1.0, -0.75, -0.1]
    #ma = [1.0, 0, 0]
    ma = [1.0, 0.3, 0.2]
    results = []
    results_bse = []
    arma = ARIMA()
    for _ in range(niter):
        # NOTE: uses the module-level arest/arest2 instances created in the
        # __main__ block below; the local 'arma' is unused
        y2 = arest.generate_sample(ar, ma, nsample, 0.1)
        rhohat2a, cov_x2a, infodict, mesg, ier = arest2.fit(y2, 2, 2)
        results.append(rhohat2a)
        err2a = arest.errfn(x=y2)
        sige2a = np.sqrt(np.dot(err2a, err2a)/nsample)
        results_bse.append(sige2a * np.sqrt(np.diag(cov_x2a)))
    return np.r_[ar[1:], ma[1:]], np.array(results), np.array(results_bse)

if __name__ == '__main__':

    # Simulate AR(1)
    #--------------
    # ar * y = ma * eta
    ar = [1, -0.8]
    ma = [1.0]

    # generate AR data
    eta = 0.1 * np.random.randn(1000)
    yar1 = signal.lfilter(ar, ma, eta)

    print "\nExample 0"
    arest = ARIMA()
    rhohat, cov_x, infodict, mesg, ier = arest.fit(yar1, 1, 1)
    print rhohat
    print cov_x

    print "\nExample 1"
    ar = [1.0, -0.8]
    ma = [1.0, 0.5]
    y1 = arest.generate_sample(ar, ma, 1000, 0.1)
    rhohat1, cov_x1, infodict, mesg, ier = arest.fit(y1, 1, 1)
    print rhohat1
    print cov_x1
    err1 = arest.errfn(x=y1)
    print np.var(err1)
    import scikits.statsmodels as sm
    print sm.regression.yule_walker(y1, order=2, inv=True)

    print "\nExample 2"
    arest2 = ARIMA()
    nsample = 1000
    ar = [1.0, -0.6, -0.1]
    ma = [1.0, 0.3, 0.2]
    y2 = arest2.generate_sample(ar, ma, nsample, 0.1)
    rhohat2, cov_x2, infodict, mesg, ier = arest2.fit(y2, 1, 2)
    print rhohat2
    print cov_x2
    err2 = arest.errfn(x=y2)
    print np.var(err2)
    print arest2.rhoy
    print arest2.rhoe
    print "true"
    print ar
    print ma

    rhohat2a, cov_x2a, infodict, mesg, ier = arest2.fit(y2, 2, 2)
    print rhohat2a
    print cov_x2a
    err2a = arest.errfn(x=y2)
    print np.var(err2a)
    print arest2.rhoy
    print arest2.rhoe
    print "true"
    print ar
    print ma

    print sm.regression.yule_walker(y2, order=2, inv=True)

    print "\nExample 20"
    arest20 = ARIMA()
    nsample = 1000
    ar = [1.0]  #, -0.8, -0.4]
    ma = [1.0, 0.5, 0.2]
    y3 = arest20.generate_sample(ar, ma, nsample, 0.01)
    rhohat3, cov_x3, infodict, mesg, ier = arest20.fit(y3, 2, 0)
    print rhohat3
    print cov_x3
    err3 = arest20.errfn(x=y3)
    print np.var(err3)
    print np.sqrt(np.dot(err3, err3)/nsample)
    print arest20.rhoy
    print arest20.rhoe
    print "true"
    print ar
    print ma

    rhohat3a, cov_x3a, infodict, mesg, ier = arest20.fit(y3, 0, 2)
    print rhohat3a
    print cov_x3a
    err3a = arest20.errfn(x=y3)
    print np.var(err3a)
    print np.sqrt(np.dot(err3a, err3a)/nsample)
    print arest20.rhoy
    print arest20.rhoe
    print "true"
    print ar
    print ma
    print sm.regression.yule_walker(y3, order=2, inv=True)
    print "\nExample 02"
    arest02 = ARIMA()
    nsample = 1000
    ar = [1.0, -0.8, 0.4]  #-0.8, -0.4]
    ma = [1.0]  #, 0.8, 0.4]
    y4 = arest02.generate_sample(ar, ma, nsample)
    rhohat4, cov_x4, infodict, mesg, ier = arest02.fit(y4, 2, 0)
    print rhohat4
    print cov_x4
    err4 = arest02.errfn(x=y4)
    print np.var(err4)
    sige = np.sqrt(np.dot(err4, err4)/nsample)
    print sige
    print sige * np.sqrt(np.diag(cov_x4))
    print np.sqrt(np.diag(cov_x4))
    print arest02.rhoy
    print arest02.rhoe
    print "true"
    print ar
    print ma

    rhohat4a, cov_x4a, infodict, mesg, ier = arest02.fit(y4, 0, 2)
    print rhohat4a
    print cov_x4a
    err4a = arest02.errfn(x=y4)
    print np.var(err4a)
    sige = np.sqrt(np.dot(err4a, err4a)/nsample)
    print sige
    print sige * np.sqrt(np.diag(cov_x4a))
    print np.sqrt(np.diag(cov_x4a))
    print arest02.rhoy
    print arest02.rhoe
    print "true"
    print ar
    print ma
    import scikits.statsmodels as sm
    print sm.regression.yule_walker(y4, order=2, method='mle', inv=True)

    def mc_summary(res, rt=None):
        if rt is None:
            rt = np.zeros(res.shape[1])
        print 'RMSE'
        print np.sqrt(((res-rt)**2).mean(0))
        print 'mean bias'
        print (res-rt).mean(0)
        print 'median bias'
        print np.median((res-rt), 0)
        print 'median bias percent'
        print np.median((res-rt)/rt*100, 0)
        print 'median absolute error'
        print np.median(np.abs(res-rt), 0)
        print 'positive error fraction'
        print (res > rt).mean(0)

    run_mc = False
    if run_mc:
        import time
        t0 = time.time()
        rt, res_rho, res_bse = mcarma22(niter=1000)
        print 'elapsed time for Monte Carlo', time.time() - t0
        # 20 seconds for ARMA(2,2), 1000 iterations with 1000 observations
        sige2a = np.sqrt(np.dot(err2a, err2a)/nsample)
        print '\nbse of one sample'
        print sige2a * np.sqrt(np.diag(cov_x2a))
        print '\nMC of rho versus true'
        mc_summary(res_rho, rt)
        print '\nMC of bse versus zero'
        mc_summary(res_bse)
        print '\nMC of bse versus std'
        mc_summary(res_bse, res_rho.std(0))

    import matplotlib.pyplot as plt
    plt.plot(arest2.forecast()[-100:])
    plt.show()

From josef.pktd at gmail.com Mon Sep 7 12:35:17 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 7 Sep 2009 12:35:17 -0400
Subject: [SciPy-User] ANN: new google group: pystatsmodels
Message-ID: <1cd32cbb0909070935t4618e971uab966b142d857f1e@mail.gmail.com>

After the initial release of scikits.statsmodels, we started to have email discussions about the design, implementation and extension of statsmodels, as well as about data handling and how to use statsmodels for different types of data. Since this discussion seemed too specialized to fill up scipy-dev with it, we decided to start a dedicated google group.

The overall objective is to make python easier to use for statistical and econometric analysis, and the discussion group is open to and welcomes the discussion of related packages. The first example is pandas, a package for handling panel data that uses statsmodels for estimation.

Anyone interested is welcome to join us at

http://groups.google.ca/group/pystatsmodels/topics?hl=en

Josef et al

From timmichelsen at gmx-topmail.de Mon Sep 7 14:27:41 2009
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Mon, 07 Sep 2009 20:27:41 +0200
Subject: [SciPy-User] ANN: new google group: pystatsmodels
In-Reply-To: <1cd32cbb0909070935t4618e971uab966b142d857f1e__39980.830535251$1252347445$gmane$org@mail.gmail.com>
References: <1cd32cbb0909070935t4618e971uab966b142d857f1e__39980.830535251$1252347445$gmane$org@mail.gmail.com>
Message-ID: 

> Anyone interested is welcome to join us at
>
> http://groups.google.ca/group/pystatsmodels/topics?hl=en
Cool.
Will this be restricted to statsmodels only, or can general statistical matters related to scipy be discussed there, too?

Would you please add the list to Gmane?
Thanks, Timmie From jsseabold at gmail.com Mon Sep 7 14:38:04 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 7 Sep 2009 14:38:04 -0400 Subject: [SciPy-User] ANN: new google group: pystatsmodels In-Reply-To: References: <1cd32cbb0909070935t4618e971uab966b142d857f1e__39980.830535251$1252347445$gmane$org@mail.gmail.com> Message-ID: On Mon, Sep 7, 2009 at 2:27 PM, Tim Michelsen wrote: >> Anyone interested, is welcome to join us at >> >> http://groups.google.ca/group/pystatsmodels/topics?hl=en > Cool. > Will this be restricted to statsmodel only or can general statistical > matters related to scipy discussed there. too? > My $.02. Mostly right now, we are only discussing design issues for statsmodels and not statistical issues (though of course this will come up while we're extending the models). Perhaps statistical issues related to the existing statsmodels code and scipy are best discussed on scipy-user, but questions about development of and extending statistics and models in statsmodels would be appropriately discussed there? We just didn't want to post all day to scipy-user about the use of decorators in the model results classes for our scikit for example... I wouldn't be against discussing statistical issues there, but they're probably best kept on the scipy list to take advantage of everyone's knowledge. Skipper > Would you please add the list to Gmane? > From josef.pktd at gmail.com Mon Sep 7 14:39:00 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 7 Sep 2009 14:39:00 -0400 Subject: [SciPy-User] ANN: new google group: pystatsmodels In-Reply-To: References: <1cd32cbb0909070935t4618e971uab966b142d857f1e__39980.830535251$1252347445$gmane$org@mail.gmail.com> Message-ID: <1cd32cbb0909071139l8230b48ye2a5abed77c45276@mail.gmail.com> On Mon, Sep 7, 2009 at 2:27 PM, Tim Michelsen wrote: >> Anyone interested, is welcome to join us at >> >> http://groups.google.ca/group/pystatsmodels/topics?hl=en > Cool. > Will this be restricted to statsmodel only or can general statistical > matters related to scipy discussed there. too? Anything statistics/econometrics/python related is welcome, the topics will depend a lot on the participants. However, I think discussion related to scipy.stats directly, bugs, problems, improvements, usage question, should stay on the scipy lists, since it makes it easier to find scipy.stats questions and answers in the archives and it concerns the entire scipy community. > > Would you please add the list to Gmane? I have to figure out how to do this. Thanks for the interest, Josef > > Thanks, > Timmie > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From jsseabold at gmail.com Mon Sep 7 14:40:06 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 7 Sep 2009 14:40:06 -0400 Subject: [SciPy-User] ANN: new google group: pystatsmodels In-Reply-To: <1cd32cbb0909071139l8230b48ye2a5abed77c45276@mail.gmail.com> References: <1cd32cbb0909070935t4618e971uab966b142d857f1e__39980.830535251$1252347445$gmane$org@mail.gmail.com> <1cd32cbb0909071139l8230b48ye2a5abed77c45276@mail.gmail.com> Message-ID: On Mon, Sep 7, 2009 at 2:39 PM, wrote: > On Mon, Sep 7, 2009 at 2:27 PM, Tim > Michelsen wrote: >>> Anyone interested, is welcome to join us at >>> >>> http://groups.google.ca/group/pystatsmodels/topics?hl=en >> Cool. >> Will this be restricted to statsmodel only or can general statistical >> matters related to scipy discussed there. too? 
> > Anything statistics/econometrics/python related is welcome, the topics will > depend a lot on the participants. > > However, I think discussion related to scipy.stats directly, bugs, problems, > improvements, usage question, should stay on the scipy lists, since it > makes it easier to find scipy.stats questions and answers in the archives > and it concerns the entire scipy community. > >> >> Would you please add the list to Gmane? > > I have to figure out how to do this. > http://gmane.org/subscribe.php From timmichelsen at gmx-topmail.de Mon Sep 7 16:31:54 2009 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Mon, 07 Sep 2009 22:31:54 +0200 Subject: [SciPy-User] ANN: new google group: pystatsmodels In-Reply-To: <1cd32cbb0909071139l8230b48ye2a5abed77c45276@mail.gmail.com> References: <1cd32cbb0909070935t4618e971uab966b142d857f1e__39980.830535251$1252347445$gmane$org@mail.gmail.com> <1cd32cbb0909071139l8230b48ye2a5abed77c45276@mail.gmail.com> Message-ID: >> Would you please add the list to Gmane? > > I have to figure out how to do this. I already filed a subscribe request... From lev at columbia.edu Tue Sep 8 09:48:19 2009 From: lev at columbia.edu (Lev Givon) Date: Tue, 8 Sep 2009 09:48:19 -0400 Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions In-Reply-To: <5b8d13220909060159t2026069fsd69daa7f31621209@mail.gmail.com> References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <20090903175927.GQ20987@localhost.ee.columbia.edu> <5b8d13220909060159t2026069fsd69daa7f31621209@mail.gmail.com> Message-ID: <20090908134819.GA14226@localhost.columbia.edu> Received from David Cournapeau on Sun, Sep 06, 2009 at 04:59:10AM EDT: > On Fri, Sep 4, 2009 at 2:59 AM, Lev Givon wrote: > > > I'm not sure whether the current prebuilt libraries are built with > > thread support. I'm also not sure whether the current (3.8.3) prebuilt > > libraries consistently provide any improved performance compared to > > the netlib blas/lapack. > > The netlib blas/lapack built with gfortran has quite poor performance. > Using SSE/SSE2 alone gives a significant boost. Of course, building > your own will give improved performance. > > cheers, > > David When I recently tried comparing the timing of some numpy functions that invoke lapack routines when run against the netlib libraries and the prebuilt atlas 3.8.3 libraries on Mandriva 2009.1 (32 bit), I found that the prebuilt atlas libraries provide no speedup compared to the netlib libraries (at least on the various machines to which I have access). L.G. From lev at columbia.edu Tue Sep 8 09:53:17 2009 From: lev at columbia.edu (Lev Givon) Date: Tue, 8 Sep 2009 09:53:17 -0400 Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions In-Reply-To: <826c64da0909042029i5bb05577r30286bdb05c3d4ac@mail.gmail.com> References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <20090903175927.GQ20987@localhost.ee.columbia.edu> <826c64da0909042017i67453678ied1a418aa3f66d67@mail.gmail.com> <826c64da0909042029i5bb05577r30286bdb05c3d4ac@mail.gmail.com> Message-ID: <20090908135317.GB14226@localhost.columbia.edu> Received from Ivo Maljevic on Fri, Sep 04, 2009 at 11:29:22PM EDT: > So, numpy works very well, but scipy.test() fails, and I think it is the > same failure across the distributions. 
> Anyone knows what does this mean:
>
> ======================================================================
> ERROR: test_implicit (test_odr.TestODR)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/lib64/python2.6/site-packages/scipy/odr/tests/test_odr.py",
> line 88, in test_implicit
>     out = implicit_odr.run()
>   File "/usr/lib64/python2.6/site-packages/scipy/odr/odrpack.py", line 1055,
> in run
>     self.output = Output(apply(odr, args, kwds))
> TypeError: y must be a sequence or integer (if model is implicit)
>
> ----------------------------------------------------------------------
> Ran 3395 tests in 51.646s
>
> FAILED (KNOWNFAIL=3, SKIP=28, errors=1)
> <nose.result.TextTestResult run=3395 errors=1 failures=0>

I have observed the same failure on the various Mandriva 2009.1 32 bit systems to which I have access.

L.G.

From nwagner at iam.uni-stuttgart.de Tue Sep 8 10:01:51 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 08 Sep 2009 16:01:51 +0200
Subject: [SciPy-User] FEAST eigensolver
Message-ID: 

Hi all,

FWIW, a new eigensolver is available (BSD license) at
http://www.ecs.umass.edu/~polizzi/feast/index.htm

Nils

From pav+sp at iki.fi Tue Sep 8 10:45:45 2009
From: pav+sp at iki.fi (Pauli Virtanen)
Date: Tue, 8 Sep 2009 14:45:45 +0000 (UTC)
Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions
References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <20090903175927.GQ20987@localhost.ee.columbia.edu> <826c64da0909042017i67453678ied1a418aa3f66d67@mail.gmail.com> <826c64da0909042029i5bb05577r30286bdb05c3d4ac@mail.gmail.com> <20090908135317.GB14226@localhost.columbia.edu>
Message-ID: 

Tue, 08 Sep 2009 09:53:17 -0400, Lev Givon wrote:
> Received from Ivo Maljevic on Fri, Sep 04, 2009 at 11:29:22PM EDT:
>> So, numpy works very well, but scipy.test() fails, and I think it is
>> the same failure across the distributions. Anyone knows what does this
>> mean:
>>
>> ======================================================================
>> ERROR: test_implicit (test_odr.TestODR)
>> ----------------------------------------------------------------------
>> Traceback (most recent call last):
>>   File
>>   "/usr/lib64/python2.6/site-packages/scipy/odr/tests/test_odr.py",
>>   line 88, in test_implicit
>>     out = implicit_odr.run()
>>   File "/usr/lib64/python2.6/site-packages/scipy/odr/odrpack.py", line
>>   1055,
>> in run
>>     self.output = Output(apply(odr, args, kwds))
>> TypeError: y must be a sequence or integer (if model is implicit)
>>
>> ----------------------------------------------------------------------
>> Ran 3395 tests in 51.646s
>>
>> FAILED (KNOWNFAIL=3, SKIP=28, errors=1)
> <nose.result.TextTestResult run=3395 errors=1 failures=0>
>
> I have observed the same failure on the various Mandriva 2009.1 32 bit
> systems to which I have access.

It's a Python 2.6 incompatibility, IIRC, fixed in current SVN.

--
Pauli Virtanen

From lorenzo.isella at gmail.com Tue Sep 8 12:32:15 2009
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Tue, 08 Sep 2009 18:32:15 +0200
Subject: [SciPy-User] Finding if Entries of an Array Are Within Time Windows
Message-ID: <4AA6870F.40800@gmail.com>

Dear All,
Please consider the arrays:

A=[12,23,98,34,123,9]

and

B=[22,34
40,43
68, 98
102,123]

Array A stands for the times when certain observations are recorded, whereas the rows of B are time-windows.
I need to select the entries of A according to this rule: if A[i] falls within one of the time-periods given by B, then I'll keep it, otherwise I'll discard it.
The aim is to trim the array A by getting rid of all the entries which do not fall within any time-window given by B.
Any suggestions about how to achieve that?
Many thanks

Lorenzo

From warren.weckesser at enthought.com Tue Sep 8 12:56:24 2009
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Tue, 08 Sep 2009 11:56:24 -0500
Subject: [SciPy-User] Finding if Entries of an Array Are Within Time Windows
In-Reply-To: <4AA6870F.40800@gmail.com>
References: <4AA6870F.40800@gmail.com>
Message-ID: <4AA68CB8.9000207@enthought.com>

Hi Lorenzo,

Here's one way (done in an ipython shell):

----------
In [1]: from numpy import array

In [2]: A = array([12, 23, 98, 34, 123, 9])

In [3]: B = array([[22, 34], [40, 43], [68, 98], [102, 123]])

In [4]: mask = ((A >= B[:,0:1]) & (A <= B[:,1:2])).any(axis=0)

In [5]: Akeep = A[mask]

In [6]: Akeep
Out[6]: array([ 23,  98,  34, 123])

In [7]:
----------

Warren

Lorenzo Isella wrote:
> Dear All,
> Please consider the arrays:
>
> A=[12,23,98,34,123,9]
>
> and
>
> B=[22,34
> 40,43
> 68, 98
> 102,123]
>
> Array A stands for the times when certain observations are recorded,
> whereas the rows of B are time-windows.
> I need to select the entries of A according to this rule: if A[i] falls
> within the time-periods given by B, then I'll keep it, otherwise I'll
> discard it.
> The aim is to trim the array A by getting rid of all the entries which
> do not fall within any time-window given by B.
> Any suggestions about how to achieve that?
> Many thanks
>
> Lorenzo
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Warren Weckesser
Enthought, Inc.
515 Congress Avenue, Suite 2100
Austin, TX 78701
512-536-1057

From nathanielpeterson08 at gmail.com Tue Sep 8 13:22:58 2009
From: nathanielpeterson08 at gmail.com (nathanielpeterson08 at gmail.com)
Date: Tue, 08 Sep 2009 13:22:58 -0400
Subject: [SciPy-User] Finding if Entries of an Array Are Within Time Windows
In-Reply-To: <4AA68CB8.9000207@enthought.com> (message from Warren Weckesser on Tue, 08 Sep 2009 11:56:24 -0500)
References: <4AA6870F.40800@gmail.com> <4AA68CB8.9000207@enthought.com>
Message-ID: <87vdjtuz9p.fsf@farmer.myhome.westell.com>

Here is another way:

#!/usr/bin/env python
import numpy as np
a=np.array([12,23,98,34,123,9])

b=np.array([22,34,
40,43,
68, 98,
102,123])

idx=b.searchsorted(a)
print(idx)
# [0 1 5 1 7 0]
print(np.mod(idx,2)==1)
# [False  True  True  True  True False]
idx2=(np.mod(idx,2)==1)
print(a[idx2])
# [ 23  98  34 123]

From warren.weckesser at enthought.com Tue Sep 8 13:42:49 2009
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Tue, 08 Sep 2009 12:42:49 -0500
Subject: [SciPy-User] Finding if Entries of an Array Are Within Time Windows
In-Reply-To: <87vdjtuz9p.fsf@farmer.myhome.westell.com>
References: <4AA6870F.40800@gmail.com> <4AA68CB8.9000207@enthought.com> <87vdjtuz9p.fsf@farmer.myhome.westell.com>
Message-ID: <4AA69799.6070209@enthought.com>

nathanielpeterson08 at gmail.com wrote:
> Here is another way:
>
> #!/usr/bin/env python
> import numpy as np
> a=np.array([12,23,98,34,123,9])
>
> b=np.array([22,34,
> 40,43,
> 68, 98,
> 102,123])
>
> idx=b.searchsorted(a)
> print(idx)
> # [0 1 5 1 7 0]
> print(np.mod(idx,2)==1)
> # [False  True  True  True  True False]
> idx2=(np.mod(idx,2)==1)
> print(a[idx2])
> # [ 23  98  34 123]
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

That's cool, but--and this case might not matter to Lorenzo--that code will miss values that are exactly equal to the left end of the interval:

----------
In [1]: import numpy as np

In [2]: a = np.array([39,40,41,42,43,44])

In [3]: b = np.array([22,34, 40,43, 68,98, 102,123])

In [4]: idx = b.searchsorted(a)

In [5]: idx
Out[5]: array([2, 2, 3, 3, 3, 4])

In [6]: idx2 = np.mod(idx,2) == 1

In [7]: idx2
Out[7]: array([False, False,  True,  True,  True, False], dtype=bool)

In [8]: a[idx2]
Out[8]: array([41, 42, 43])

In [9]:
----------

If values that exactly match the ends of the intervals are to be included, the result in this case should be [40,41,42,43].
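If both endpoints should count as inside, one way to patch the searchsorted approach (just a sketch building on the code above) is to test the insertion point on both sides and accept a value if either lands at an odd position:

import numpy as np

a = np.array([39, 40, 41, 42, 43, 44])
b = np.array([22, 34, 40, 43, 68, 98, 102, 123])

# a point is inside a closed window [lo, hi] if inserting it on either
# side of the sorted boundary list puts it at an odd index
inside = ((b.searchsorted(a, side='left') % 2 == 1) |
          (b.searchsorted(a, side='right') % 2 == 1))
print(a[inside])
# [40 41 42 43]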
Warren

--
Warren Weckesser
Enthought, Inc.
515 Congress Avenue, Suite 2100
Austin, TX 78701
512-536-1057

From cournape at gmail.com Tue Sep 8 17:30:16 2009
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 9 Sep 2009 06:30:16 +0900
Subject: [SciPy-User] SciPy+NumPy on 4 major linux distributions
In-Reply-To: <20090908134819.GA14226@localhost.columbia.edu>
References: <826c64da0909010937y7617e61h578e0e178044c46@mail.gmail.com> <7f014ea60909010955q56c7cfadkee65427c3f525e90@mail.gmail.com> <20090903175927.GQ20987@localhost.ee.columbia.edu> <5b8d13220909060159t2026069fsd69daa7f31621209@mail.gmail.com> <20090908134819.GA14226@localhost.columbia.edu>
Message-ID: <5b8d13220909081430w689e94e2q782c32225cc7ec07@mail.gmail.com>

On Tue, Sep 8, 2009 at 10:48 PM, Lev Givon wrote:
> Received from David Cournapeau on Sun, Sep 06, 2009 at 04:59:10AM EDT:
>> On Fri, Sep 4, 2009 at 2:59 AM, Lev Givon wrote:
>>
>>> I'm not sure whether the current prebuilt libraries are built with
>>> thread support. I'm also not sure whether the current (3.8.3) prebuilt
>>> libraries consistently provide any improved performance compared to
>>> the netlib blas/lapack.
>>
>> The netlib blas/lapack built with gfortran has quite poor performance.
>> Using SSE/SSE2 alone gives a significant boost. Of course, building
>> your own will give improved performance.
>>
>> cheers,
>>
>> David
>
> When I recently tried comparing the timing of some numpy functions
> that invoke lapack routines when run against the netlib libraries and
> the prebuilt atlas 3.8.3 libraries on Mandriva 2009.1 (32 bit), I
> found that the prebuilt atlas libraries provide no speedup compared to
> the netlib libraries (at least on the various machines to which I have
> access).

Maybe something is wrong in the mandriva package. I have a factor 2 difference between using netlib and atlas on ubuntu for a simple inversion. numpy.dot is also much faster if you use ATLAS (one order of magnitude faster), but that's partly a limitation of the numpy build system.

cheers,

David

From sturla at molden.no Wed Sep 9 04:54:33 2009
From: sturla at molden.no (Sturla Molden)
Date: Wed, 09 Sep 2009 10:54:33 +0200
Subject: [SciPy-User] recursive Gaussian filter in C
Message-ID: <4AA76D49.7060409@molden.no>

I have written a C version of the recursive Gaussian filter. It is more accurate than gaussian_filter in ndimage (less truncation error near the edges, as shown before) and also faster. Here are some timings on filtering the "lenna" test image on my computer:

sigma = 5
elapsed time (iir with openmp):   81.366640 ms
elapsed time (iir):              107.282360 ms
elapsed time (ndimage):          137.548760 ms

sigma = 9
elapsed time (iir with openmp):   44.403760 ms
elapsed time (iir):               75.285720 ms
elapsed time (ndimage):          163.750920 ms

sigma = 21
elapsed time (iir with openmp):   45.063040 ms
elapsed time (iir):               75.941400 ms
elapsed time (ndimage):          313.359080 ms

sigma = 101
elapsed time (iir with openmp):   56.134120 ms
elapsed time (iir):               87.622240 ms
elapsed time (ndimage):         1210.016680 ms

It is still only written for np.float64, but it would be easy to make optimized versions for various dtypes, including rgb images. You don't really see the scaling effect of OpenMP here, as the only thing that changes with sigma is the amount of zero-padding. Anyhow, this beats ndimage on speed and accuracy, and scales much better with sigma. It is not restricted to 2 dimensions; it can filter along any axis of an ndarray. Thus another use case is fast kernel density estimation in nd space.
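To illustrate the kernel density estimation use case (a minimal sketch in Python, with ndimage's gaussian_filter1d standing in for the C filter since that code is not attached here): bin the samples on a regular grid, then run a 1-d Gaussian along each axis of the histogram. The separability of the Gaussian makes this exact up to binning error.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def binned_kde(samples, nbins=64, sigma_bins=2.0):
    # samples: (nobs, ndim) array; returns a density estimate on a grid
    H, edges = np.histogramdd(samples, bins=nbins)
    for axis in range(H.ndim):
        H = gaussian_filter1d(H, sigma_bins, axis=axis)
    # normalize so the estimate integrates to one
    binvol = np.prod([e[1] - e[0] for e in edges])
    return H / (H.sum() * binvol), edges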
Regards,
Sturla Molden

From mattknox.ca at gmail.com Wed Sep 9 10:08:57 2009
From: mattknox.ca at gmail.com (Matt Knox)
Date: Wed, 9 Sep 2009 14:08:57 +0000 (UTC)
Subject: [SciPy-User] Scikits Trac
References: <9457e7c80909021524x2fbfd789kb4b676a6ed02e00f@mail.gmail.com> <9457e7c80909050155g544d03d6uf2558ca68fff2e66@mail.gmail.com>
Message-ID: 

Stéfan van der Walt <stefan at sun.ac.za> writes:
>
> 2009/9/3 Tim Michelsen <timmichelsen at gmx-topmail.de>:
> > The link
> > SciKits developer resources - http://www.scipy.org/scipy/scikits/
> >
> > on
> > http://scikits.appspot.com/contribute
> > needs also to be updated to point to the new site.
>
> Thanks, fixed!
>
> Regards
> Stéfan
>

I haven't been following the mailing lists as closely this summer but just noticed this. It looks like we've been using the old trac server for the timeseries scikit (http://scipy.org/scipy/scikits/query?component=timeseries&order=status). Sorry for that. Is there any way to port over the tickets from the old trac to the new one?

Also, is it possible for somebody to disable the obsolete trac server? There is no indication when visiting the ticketing tool that the server should no longer be used. I will update the links in the timeseries docs to point to the new trac (http://projects.scipy.org/scikits/query?component=timeseries&order=status).

Thanks,

- Matt

From mattknox.ca at gmail.com Wed Sep 9 10:46:04 2009
From: mattknox.ca at gmail.com (Matt Knox)
Date: Wed, 9 Sep 2009 14:46:04 +0000 (UTC)
Subject: [SciPy-User] scikity.timeseries: Report options question
References: 
Message-ID: 

Tim Michelsen <timmichelsen at gmx-topmail.de> writes:
>
> Hello,
>
> I noticed that if header_row is specified,
> a header_char='-' is added automatically.
> I had to add header_char='' to suppress it.
>
> Is this wanted?
>
> According to
> http://pytseries.sourceforge.net/lib.report.html
> #scikits.timeseries.lib.reportlib.Report
>
> This should be optional.
>
> Kind regards,
> Timmie
>

Yes, this is intended. The documentation for that parameter is:

'''
header_char : {'-', str}, optional

    Character to be used for the row separator line between the header and
    first row of data. Specify None for no separator. This is ignored if
    header_row is not specified.
'''

and the documentation convention is that the first value listed is the default value when it is an optional parameter. So not specifying anything is equivalent to specifying header_char="-".

From timmichelsen at gmx-topmail.de Thu Sep 10 04:42:40 2009
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Thu, 10 Sep 2009 08:42:40 +0000 (UTC)
Subject: [SciPy-User] scikity.timeseries: Report options question
References: 
Message-ID: 

> Yes, this is intended. The documentation for that parameter is:
>
> '''
> header_char : {'-', str}, optional
>
>     Character to be used for the row separator line between the header and
>     first row of data. Specify None for no separator. This is ignored if
>     header_row is not specified.
> '''

Sorry, I totally overlooked the last sentence. I presume that Pierre didn't answer my question because of this...

It's just that I am porting my code step-by-step from the state of timeseries before the upgrade to numpy 1.3. I have to say that the new functions are just great! genfromtxt makes using new data just a matter of creating a new dateconverter. And convert-to-annual is a nice convenience, too.

Kind regards,
Timmie

From dave.hirschfeld at gmail.com Thu Sep 10 08:29:47 2009
From: dave.hirschfeld at gmail.com (Dave)
Date: Thu, 10 Sep 2009 12:29:47 +0000 (UTC)
Subject: [SciPy-User] scikits.timeseries ImportError
Message-ID: 

I decided to try installing the latest scipy/numpy, which seemed to work fine, but afterwards I got an import error when trying to import the timeseries module.

Upgrading my timeseries to the latest:

M:\src\timeseries>svn up
Fetching external item into 'scikits\timeseries\doc\sphinxext'
External at revision 7375.
At revision 2216.

Still resulted in the same error:

Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
|>>> import numpy
|>>> numpy.__version__
'1.4.0.dev7375'
|>>> import scipy
|>>> scipy.__version__
'0.8.0.dev5920'
|>>> import scikits.timeseries as ts
RuntimeError: FATAL: module compiled as little endian, but detected different endianness at runtime
RuntimeError: FATAL: module compiled as little endian, but detected different endianness at runtime
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\dev\bin\Python25\Lib\site-packages\scikits\timeseries\__init__.py", line 13, in <module>
    import const
  File "C:\dev\bin\Python25\Lib\site-packages\scikits\timeseries\const.py", line 79, in <module>
    from cseries import freq_constants
ImportError: numpy.core.multiarray failed to import
|>>>

Whereas from the interpreter it appears to work:

|>>> import numpy.core.multiarray
|>>>

cseries appears to be a C file so I'm a bit stumped as to how to debug further. Any help appreciated!

Thanks,
Dave

From magnusp at astro.su.se Thu Sep 10 09:43:43 2009
From: magnusp at astro.su.se (n.l.o)
Date: Thu, 10 Sep 2009 06:43:43 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] difference between different mean()s
Message-ID: <25383547.post@talk.nabble.com>

Hello
I was wondering what the difference is between numpy.mean() and the scipy.ndimage.mean() method? I get different answers. Also, is there a difference in using the different std() and median() methods etc.? I am applying the methods to 2-D arrays.
Cheers
Magnus

--
View this message in context: http://www.nabble.com/difference-between-different-mean%28%29s-tp25383547p25383547.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From bsouthey at gmail.com Thu Sep 10 09:51:39 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Thu, 10 Sep 2009 08:51:39 -0500
Subject: [SciPy-User] scikits.timeseries ImportError
In-Reply-To: 
References: 
Message-ID: <4AA9046B.2060904@gmail.com>

On 09/10/2009 07:29 AM, Dave wrote:
> I decided to try installing the latest scipy/numpy which seemed to work fine but
> afterwards I got an import error when trying to import the timeseries module.
>
> Upgrading my timeseries to the latest:
>
> M:\src\timeseries>svn up
> Fetching external item into 'scikits\timeseries\doc\sphinxext'
> External at revision 7375.
> At revision 2216.
>
> Still resulted in the same error:
>
> Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on
> win32
> Type "help", "copyright", "credits" or "license" for more information.
> |>>> import numpy
> |>>> numpy.__version__
> '1.4.0.dev7375'
> |>>> import scipy
> |>>> scipy.__version__
> '0.8.0.dev5920'
> |>>> import scikits.timeseries as ts
> RuntimeError: FATAL: module compiled as little endian, but detected different
> endianness at runtime
> RuntimeError: FATAL: module compiled as little endian, but detected different
> endianness at runtime
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "C:\dev\bin\Python25\Lib\site-packages\scikits\timeseries\__init__.py",
> line 13, in <module>
>     import const
>   File "C:\dev\bin\Python25\Lib\site-packages\scikits\timeseries\const.py", line
> 79, in <module>
>     from cseries import freq_constants
> ImportError: numpy.core.multiarray failed to import
> |>>>
>
> Whereas from the interpreter it appears to work:
>
> |>>> import numpy.core.multiarray
> |>>>
>
> cseries appears to be a C file so I'm a bit stumped as to how to debug further.
> Any help appreciated!
>
> Thanks,
> Dave
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
If the numpy and scipy tests run, then it is the timeseries scikit. I do not use timeseries nor windows, so I do not know if you are using a binary version or built it from source. In the former case it probably means it was compiled for earlier numpy/scipy versions, so you probably have to build it from source yourself. If you built it from source, then I suspect that you have not removed all traces of the old timeseries scikit and/or have not correctly built it from source - like removing the build directory.

Bruce

From david.huard at gmail.com Thu Sep 10 09:51:54 2009
From: david.huard at gmail.com (David Huard)
Date: Thu, 10 Sep 2009 09:51:54 -0400
Subject: [SciPy-User] scikits.timeseries ImportError
In-Reply-To: 
References: 
Message-ID: <91cf711d0909100651m2b0e8630vbd84da01b0db5c06@mail.gmail.com>

Dave,

Have you removed the build/ directory before reinstalling timeseries ?

David

On Thu, Sep 10, 2009 at 8:29 AM, Dave wrote:
> I decided to try installing the latest scipy/numpy which seemed to work
> fine but
> afterwards I got an import error when trying to import the timeseries
> module.
>
> Upgrading my timeseries to the latest:
>
> M:\src\timeseries>svn up
> Fetching external item into 'scikits\timeseries\doc\sphinxext'
> External at revision 7375.
> At revision 2216.
>
> Still resulted in the same error:
>
> Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit
> (Intel)] on
> win32
> Type "help", "copyright", "credits" or "license" for more information.
> |>>> import numpy
> |>>> numpy.__version__
> '1.4.0.dev7375'
> |>>> import scipy
> |>>> scipy.__version__
> '0.8.0.dev5920'
> |>>> import scikits.timeseries as ts
> RuntimeError: FATAL: module compiled as little endian, but detected
> different
> endianness at runtime
> RuntimeError: FATAL: module compiled as little endian, but detected
> different
> endianness at runtime
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File
> "C:\dev\bin\Python25\Lib\site-packages\scikits\timeseries\__init__.py",
> line 13, in <module>
>     import const
>   File "C:\dev\bin\Python25\Lib\site-packages\scikits\timeseries\const.py",
> line
> 79, in <module>
>     from cseries import freq_constants
> ImportError: numpy.core.multiarray failed to import
> |>>>
>
> Whereas from the interpreter it appears to work:
>
> |>>> import numpy.core.multiarray
> |>>>
>
> cseries appears to be a C file so I'm a bit stumped as to how to debug
> further.
> Any help appreciated!
>
> Thanks,
> Dave
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lujitsu at hotmail.com Thu Sep 10 10:19:35 2009
From: lujitsu at hotmail.com (C. Campbell)
Date: Thu, 10 Sep 2009 10:19:35 -0400
Subject: [SciPy-User] Fitting a system of ODEs to data
Message-ID: 

Hi everyone,

I have a system of coupled multivariate ODEs which I would like to fit to experimental data. If I am reading the SciPy documentation correctly, there exist built-in functions to handle systems of multivariate nonlinear functions (Broyden's and Anderson's methods), but not systems of ODEs. After reading up on some general methods, it looks like it would be a real bear to write an implementation myself.
I posed this issue to an expert Python programmer with whom I am acquainted, and he suggested using Mathematica to address my problem. I have essentially no experience with Mathematica, though, so before biting that particular bullet I thought I'd check with the larger community to see if there is Python/SciPy solution. Thanks very much for any suggestions! Colin _________________________________________________________________ With Windows Live, you can organize, edit, and share your photos. http://www.windowslive.com/Desktop/PhotoGallery -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.hirschfeld at gmail.com Thu Sep 10 10:26:45 2009 From: dave.hirschfeld at gmail.com (Dave) Date: Thu, 10 Sep 2009 14:26:45 +0000 (UTC) Subject: [SciPy-User] scikits.timeseries ImportError References: <91cf711d0909100651m2b0e8630vbd84da01b0db5c06@mail.gmail.com> Message-ID: David Huard gmail.com> writes: > > Dave, Have you removed the build/ directory before reinstalling timeseries No, I recompiled but forgot to remove the build directory first. It's working now - obviously time for a cup of coffee! Thanks Bruce/David & sorry for the noise... -Dave From rob.clewley at gmail.com Thu Sep 10 10:45:57 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 10 Sep 2009 10:45:57 -0400 Subject: [SciPy-User] Fitting a system of ODEs to data In-Reply-To: References: Message-ID: On Thu, Sep 10, 2009 at 10:19 AM, C. Campbell wrote: > I have a system of coupled multivariate ODEs which I would like to fit to > experimental data. If I am reading the SciPy documentation correctly, there > exist built in functions to handle systems of multivariate nonlinear > functions (Broyden's and Anderson's methods), but not systems of ODEs. After > reading up on some general methods, it looks like it would be a real bear to > write an implementation myself. It depends on how you want to set up your optimization problem, but the existing minimization codes in scipy are reasonably good at doing just this. I think the idea that you are missing is that you would need to write an objective function for these solvers that computes an ODE orbit and compares it with your data, according to whatever metric you prefer. A common technique does not require multivariate methods when data from multiple dimensions in concatenated into a single vector for something like a least squares fit method. A search on google for "ODE fitting scipy" immediately shows tutorials and other resources for doing such things. -Rob From robert.kern at gmail.com Thu Sep 10 11:48:34 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 10 Sep 2009 10:48:34 -0500 Subject: [SciPy-User] Fitting a system of ODEs to data In-Reply-To: References: Message-ID: <3d375d730909100848o3b1e2866x4d16b8147494bffd@mail.gmail.com> On Thu, Sep 10, 2009 at 09:19, C. Campbell wrote: > Hi everyone, > > I have a system of coupled multivariate ODEs which I would like to fit to > experimental data. If I am reading the SciPy documentation correctly, there > exist built in functions to handle systems of multivariate nonlinear > functions (Broyden's and Anderson's methods), but not systems of ODEs. After > reading up on some general methods, it looks like it would be a real bear to > write an implementation myself. > > I posed this issue to an expert Python programmer with whom I am acquainted, > and he suggested using Mathematica to address my problem. 
I have essentially > no experience with Mathematica, though, so before biting that particular > bullet I thought I'd check with the larger community to see if there is > Python/SciPy solution. I answered a similar question over on StackOverflow: http://stackoverflow.com/questions/1164198/fitting-parameters-of-odes-while-using-octave-matlab-ode-solver/1336822#1336822 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jsalvati at u.washington.edu Thu Sep 10 12:03:34 2009 From: jsalvati at u.washington.edu (John Salvatier) Date: Thu, 10 Sep 2009 09:03:34 -0700 Subject: [SciPy-User] scikits.timeseries ImportError Message-ID: <113e17f20909100903j636f7337p5cfd7700372581a1@mail.gmail.com> I had a similar error a little while ago, and I got around it by recompiling the problem package with the new numpy/scipy already installed. I also had a possibly related problem with getting bus errors on import that went away after I recompiled the packages except in the case of Matplotlib, which I had to revert to v.98. Message: 2 Date: Thu, 10 Sep 2009 12:29:47 +0000 (UTC) From: Dave Subject: [SciPy-User] scikits.timeseries ImportError To: scipy-user at scipy.org Message-ID: Content-Type: text/plain; charset=us-ascii I decided to try installing the latest scipy/numpy which seemed to work fine but afterwards I got an import error when trying to import the timeseries module. Upgrading my timeseries to the latest: M:\src\timeseries>svn up Fetching external item into 'scikits\timeseries\doc\sphinxext' External at revision 7375. At revision 2216. Still resulted in the same error: Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. |>>> import numpy |>>> numpy.__version__ '1.4.0.dev7375' |>>> import scipy |>>> scipy.__version__ '0.8.0.dev5920' |>>> import scikits.timeseries as ts RuntimeError: FATAL: module compiled aslittle endian, but detected different endianness at runtime RuntimeError: FATAL: module compiled aslittle endian, but detected different endianness at runtime Traceback (most recent call last): File "", line 1, in File "C:\dev\bin\Python25\Lib\site-packages\scikits\timeseries\__init__.py", line 13, in import const File "C:\dev\bin\Python25\Lib\site-packages\scikits\timeseries\const.py", line 79, in from cseries import freq_constants ImportError: numpy.core.multiarray failed to import |>>> Whereas from the interpreter it appears to work: |>>> import numpy.core.multiarray |>>> cseries appears to be a C file so I'm a bit stumped as to how to debug further. Any help appreciated! Thanks, Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From lujitsu at hotmail.com Thu Sep 10 13:08:54 2009 From: lujitsu at hotmail.com (C. Campbell) Date: Thu, 10 Sep 2009 13:08:54 -0400 Subject: [SciPy-User] Fitting a system of ODEs to data In-Reply-To: <3d375d730909100848o3b1e2866x4d16b8147494bffd@mail.gmail.com> References: <3d375d730909100848o3b1e2866x4d16b8147494bffd@mail.gmail.com> Message-ID: Thanks, both of you. I actually had tried something similar to what you both suggested, but it didn't seem like the function was converging to a solution, so I (incorrectly!) assumed the problem must be with the ODE nature of my system. 
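A hedged sketch of Rob's concatenation point above, for a system with several state variables: the residual handed to leastsq is just the flattened difference between the integrated trajectories and the data. The two-state system and every name here are hypothetical:

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import leastsq

def rhs(state, t, a, b):
    x, y = state
    return [a * y, -b * x]     # hypothetical coupled two-state system

def residuals(params, t, data):
    a, b = params
    model = odeint(rhs, data[0], t, args=(a, b))
    # data and model have shape (len(t), 2); ravel() concatenates both
    # state variables into the single residual vector leastsq expects
    return (model - data).ravel()

t = np.linspace(0.0, 10.0, 40)
data = odeint(rhs, [1.0, 0.0], t, args=(2.0, 0.5))  # stand-in "measurements"
p_fit, ier = leastsq(residuals, [1.0, 1.0], args=(t, data))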
I'll try to optimize my code and let the program run longer, now that I know that it is the correct approach. Thanks again; I really appreciate the rapid and helpful responses! Colin > From: robert.kern at gmail.com > Date: Thu, 10 Sep 2009 10:48:34 -0500 > To: lujan at clancore.net; scipy-user at scipy.org > Subject: Re: [SciPy-User] Fitting a system of ODEs to data > > On Thu, Sep 10, 2009 at 09:19, C. Campbell wrote: > > Hi everyone, > > > > I have a system of coupled multivariate ODEs which I would like to fit to > > experimental data. If I am reading the SciPy documentation correctly, there > > exist built in functions to handle systems of multivariate nonlinear > > functions (Broyden's and Anderson's methods), but not systems of ODEs. After > > reading up on some general methods, it looks like it would be a real bear to > > write an implementation myself. > > > > I posed this issue to an expert Python programmer with whom I am acquainted, > > and he suggested using Mathematica to address my problem. I have essentially > > no experience with Mathematica, though, so before biting that particular > > bullet I thought I'd check with the larger community to see if there is > > Python/SciPy solution. > > I answered a similar question over on StackOverflow: > > http://stackoverflow.com/questions/1164198/fitting-parameters-of-odes-while-using-octave-matlab-ode-solver/1336822#1336822 > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Get back to school stuff for them and cashback for you. http://www.bing.com/cashback?form=MSHYCB&publ=WLHMTAG&crea=TEXT_MSHYCB_BackToSchool_Cashback_BTSCashback_1x1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Sep 10 13:13:00 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 10 Sep 2009 12:13:00 -0500 Subject: [SciPy-User] Fitting a system of ODEs to data In-Reply-To: References: <3d375d730909100848o3b1e2866x4d16b8147494bffd@mail.gmail.com> Message-ID: <3d375d730909101013s7d82c6bcid5118ab249ba44da@mail.gmail.com> On Thu, Sep 10, 2009 at 12:08, C. Campbell wrote: > Thanks, both of you. I actually had tried something similar to what you both > suggested, but it didn't seem like the function was converging to a > solution, so I (incorrectly!) assumed the problem must be with the ODE > nature of my system. It might be, sort of. You will probably have to have a good guess of the parameters. The functions generated by many ODEs tend to be less suitable for fitting than other functions of interest. You may encounter many local optima. It would be worthwhile to do a bit of brute force searching through your parameter space to get a good starting point or use a global optimizer. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From max.shron at gmail.com Thu Sep 10 13:36:13 2009 From: max.shron at gmail.com (Max Shron) Date: Thu, 10 Sep 2009 12:36:13 -0500 Subject: [SciPy-User] [SciPy-user] difference between different mean()s In-Reply-To: <25383547.post@talk.nabble.com> References: <25383547.post@talk.nabble.com> Message-ID: Can you show us a minimal example where you get different behavior? I'm getting the same result for simple 2d arrays like

x = arange(100)
x.shape = (10,10)
scipy.ndimage.mean(x)
-> 49.5
np.mean(x)
-> 49.5

Max On Thu, Sep 10, 2009 at 8:43 AM, n.l.o wrote: > > Hello > > I was wondering what the difference is between numpy.mean() and the > scipy.ndimage.mean() method? > > I get different answers. > > Also is there a difference in using the different std() and median() method > etc.? > > I am applying the methods on 2-D arrays. > > Cheers > Magnus > -- > View this message in context: > http://www.nabble.com/difference-between-different-mean%28%29s-tp25383547p25383547.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sebastian.walter at gmail.com Thu Sep 10 14:28:20 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Thu, 10 Sep 2009 20:28:20 +0200 Subject: [SciPy-User] Fitting a system of ODEs to data In-Reply-To: <3d375d730909101013s7d82c6bcid5118ab249ba44da@mail.gmail.com> References: <3d375d730909100848o3b1e2866x4d16b8147494bffd@mail.gmail.com> <3d375d730909101013s7d82c6bcid5118ab249ba44da@mail.gmail.com> Message-ID: Encountering many local optima may also be an artifact of inaccurate derivative approximations (gradient, Hessian). In a long valley with steep walls, the optimizer often converges only to the bottom of the valley and then, because of the inaccurate gradient, cannot find a descent direction anymore. To get the derivative information you can do the following: define the variational ODE, i.e. if your ODE is

d/dt x = f(t, x, p)
x(0) = x0(p)

then the variational ODE is

d/dt x = f(t, x, p)
d/dt x_p = f_x(t, x, p) x_p + f_p(t, x, p)
x(0) = x0(p)
x_p(0) = x0_p(p)

where x_p := d/dp x. The evaluation can then be done by the standard scipy ode solvers. Input for scipy.optimize.leastsq is the vector of measurements y = [y(t_1), y(t_2), ...] taken at the measurement times ts = [t_1, t_2, ...]. You can also now define a function Dfun that returns the Jacobian y_p. This should work much better than the version without derivative information. There is also an adjoint ODE solver in Python that would be preferable if the number of parameters is large, but I can't recall the name of the package right now... Sebastian On Thu, Sep 10, 2009 at 7:13 PM, Robert Kern wrote: > On Thu, Sep 10, 2009 at 12:08, C. Campbell wrote: >> Thanks, both of you. I actually had tried something similar to what you both >> suggested, but it didn't seem like the function was converging to a >> solution, so I (incorrectly!) assumed the problem must be with the ODE >> nature of my system. > > It might be, sort of. You will probably have to have a good guess of > the parameters. The functions generated by many ODEs tend to be less > suitable for fitting than other functions of interest. You may > encounter many local optima.
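To make Sebastian's variational-ODE recipe above concrete, a minimal sketch for the scalar ODE dx/dt = -p*x, where f_x = -p and f_p = -x; the names are illustrative only:

import numpy as np
from scipy.integrate import odeint

# Augment the state with the sensitivity x_p = dx/dp and integrate both.
def augmented_rhs(z, t, p):
    x, x_p = z
    return [-p * x, -p * x_p - x]   # [f, f_x * x_p + f_p]

p = 1.3
z0 = [2.5, 0.0]              # x(0) does not depend on p, so x_p(0) = 0
ts = np.linspace(0.0, 4.0, 50)
z = odeint(augmented_rhs, z0, ts, args=(p,))
x, dx_dp = z[:, 0], z[:, 1]  # trajectory and its derivative w.r.t. p
# dx_dp, evaluated at the measurement times, is what a Dfun passed to
# scipy.optimize.leastsq would return (as a column of the Jacobian).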
It would be worthwhile to do a bit of > brute force searching through your parameter space to get a good > starting point or use a global optimizer. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ?-- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From magnusp at astro.su.se Thu Sep 10 16:24:19 2009 From: magnusp at astro.su.se (n.l.o) Date: Thu, 10 Sep 2009 13:24:19 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] difference between different mean()s In-Reply-To: References: <25383547.post@talk.nabble.com> Message-ID: <25390368.post@talk.nabble.com> ehum,then I must have done something wrong. I get the same results doing your example. But, when I use my data, i.e. a cube of shape (30,512,512) and run the different median I get different answers. Although not with your example data (taking shape to be 10,2,5 or something). (data at http://magnusp.homeip.net/data0.fits) code: a = pyfits('data0.fits') a.mean() 90.328727213541669 ndimage.mean(a) 93.617742029825848 weird, or is it just me again? Max Shron wrote: > > Can you show us a minimal example where you get different behavior? I'm > getting the same result for simple 2d arrays like > x = arange(100) > x.shape = (10,10) > scipy.ndimage,mean(x) > -> 49.5 > np.mean(x) > -> 49.5 > > Max > > On Thu, Sep 10, 2009 at 8:43 AM, n.l.o wrote: > >> >> Hello >> >> I was wondering what the difference is between numpy.mean() and the >> scipy.ndimage.mean() method? >> >> I get different answers. >> >> Also is there a difference in using the different std() and median() >> method >> etc.? >> >> I am applying the methods on 2-D arrays. >> >> Cheers >> Magnus >> -- >> View this message in context: >> http://www.nabble.com/difference-between-different-mean%28%29s-tp25383547p25383547.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://www.nabble.com/difference-between-different-mean%28%29s-tp25383547p25390368.html Sent from the Scipy-User mailing list archive at Nabble.com. From mdekauwe at gmail.com Thu Sep 10 17:29:11 2009 From: mdekauwe at gmail.com (Mart.) Date: Thu, 10 Sep 2009 14:29:11 -0700 (PDT) Subject: [SciPy-User] 3D plotting In-Reply-To: <1252158823.8021.0.camel@idol> References: <43FB1E92.6030705@ntc.zcu.cz> <1252158823.8021.0.camel@idol> Message-ID: <3108bf13-82ed-4332-858c-a322760b7d79@q7g2000yqi.googlegroups.com> or if u can't get it working gnuplot might be a stop gap for ur 3d plotting? Martin On Sep 5, 2:53?pm, Pauli Virtanen wrote: > ti, 2006-02-21 kello 15:07 +0100, Robert Cimrman kirjoitti: > [clip] > > > * mplot3d: does not work with my version of matplotlib (0.80). I have > > made the changes mentioned in the Cookbook to no avail. (Axes.__init__() > > args apparently changed, as well as some other matplotlib object attributes) > > > Any ideas? mplot3d looks great, I would really like to use it! 
> > I'd suggest just updating your Matplotlib library to version 0.99, or > trying out Mayavi2. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From mdekauwe at gmail.com Thu Sep 10 17:35:01 2009 From: mdekauwe at gmail.com (Mart.) Date: Thu, 10 Sep 2009 14:35:01 -0700 (PDT) Subject: [SciPy-User] snow leopard issues with numpy In-Reply-To: <1324DD89-AE68-4B9E-BE82-339006831B86@gmail.com> References: <1324DD89-AE68-4B9E-BE82-339006831B86@gmail.com> Message-ID: <76b0b711-cbed-46e1-b311-fa3127521278@j19g2000yqk.googlegroups.com> I don't know about the svn version but I have from '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ site-packages/scipy-0.8.0.dev5838-py2.5-macosx-10.3-fat.egg/scipy/ version.pyc'> working fine on my mac with snow leopard? On Sep 3, 5:02 pm, Wolfgang Kerzendorf wrote: > I just installed numpy and scipy (both svn) on OS X 10.6 and just got scipy to work with Robert Kern's help. Playing around with numpy I got the following segfault: http://pastebin.com/m35220dbf > I hope someone can make sense of it. Thanks in advance > Wolfgang > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From david at ar.media.kyoto-u.ac.jp Thu Sep 10 22:02:46 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 11 Sep 2009 11:02:46 +0900 Subject: [SciPy-User] snow leopard issues with numpy In-Reply-To: <76b0b711-cbed-46e1-b311-fa3127521278@j19g2000yqk.googlegroups.com> References: <1324DD89-AE68-4B9E-BE82-339006831B86@gmail.com> <76b0b711-cbed-46e1-b311-fa3127521278@j19g2000yqk.googlegroups.com> Message-ID: <4AA9AFC6.2060303@ar.media.kyoto-u.ac.jp> Mart. wrote: > I don't know about the svn version but I have > > from '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ > site-packages/scipy-0.8.0.dev5838-py2.5-macosx-10.3-fat.egg/scipy/ > version.pyc'> > > working fine on my mac with snow leopard? > That's most likely because in your configuration, the python in your path is an 'old' one. Problems arise when loading an old numpy build with the new Snow Leopard python (python 2.6 and 64 bits). cheers, David From pav+sp at iki.fi Fri Sep 11 03:38:31 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Fri, 11 Sep 2009 07:38:31 +0000 (UTC) Subject: [SciPy-User] [SciPy-user] difference between different mean()s References: <25383547.post@talk.nabble.com> <25390368.post@talk.nabble.com> Message-ID: Thu, 10 Sep 2009 13:24:19 -0700, n.l.o wrote: > ehum,then I must have done something wrong. I get the same results doing > your example. > > But, when I use my data, i.e. a cube of shape (30,512,512) and run the > different median I get different answers. Although not with your example > data (taking shape to be 10,2,5 or something). > > (data at http://magnusp.homeip.net/data0.fits) code: > > a = pyfits('data0.fits') > a.mean() > 90.328727213541669 > ndimage.mean(a) > 93.617742029825848 > > weird, or is it just me again? You have 32-bit single-precision float data, and so numpy.mean uses a 32-bit float accumulator to compute the mean.
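A small self-contained demonstration of the accumulator effect just described; nothing here depends on the FITS data from the thread:

import numpy as np

a = (100.001 * np.ones(1000000)).astype(np.float32)
print(a.mean())                  # float32 accumulator
print(a.mean(dtype=np.float64))  # float64 accumulator, ~100.001
# With a float32 accumulator the result can drift visibly on large
# arrays; how much depends on the NumPy version (naive summation in
# older releases, pairwise summation in newer ones).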
If you use doubles (64-bit) for the accumulator, you get the same result as ndimage (which also uses double): >>> a.mean(dtype=np.float64) 93.617742029825848 I think a remark on this should be added to the documentation for mean() and other accumulator methods -- it's sort of a trap for the unwary. -- Pauli Virtanen From pav+sp at iki.fi Fri Sep 11 04:18:58 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Fri, 11 Sep 2009 08:18:58 +0000 (UTC) Subject: [SciPy-User] [SciPy-user] difference between different mean()s References: <25383547.post@talk.nabble.com> <25390368.post@talk.nabble.com> Message-ID: (Please keep this discussion on the list, thanks!) Fri, 11 Sep 2009, n.l.o wrote: > Fri 11 Sep 2009, Pauli Virtanen wrote: > > >>> a.mean(dtype=np.float64) > > 93.617742029825848 > > Thanks for the very informative reply. > So if I understand correctly, I should use the 'a.mean()' method, since > it uses the same > dtype; 'float32'? No, you should specify a higher-accuracy accumulator, or use the ndimage routine. 93.6177 is the more correct answer. This is a generic floating point issue: if you do (in C)

float item[LARGENUM];
float c = 0;
for (k = 0; k < N; ++k) {
    c += item[k];
}
c /= N;

you get a less accurate answer than with

float item[LARGENUM];
double c = 0;
for (k = 0; k < N; ++k) {
    c += item[k];
}
c /= N;

because of accumulated loss of precision in the + operations. -- Pauli Virtanen From bartomas at gmail.com Fri Sep 11 10:35:16 2009 From: bartomas at gmail.com (bar tomas) Date: Fri, 11 Sep 2009 15:35:16 +0100 Subject: [SciPy-User] Using geographic coordinates with pyshapelib Message-ID: Hi, Can I use lat/long coordinates directly when creating shapes with pyshapelib? I mean, for instance, in the following sample program that uses pyshapelib (from: software.stablers.net/files/shapelibSample.py) could I use lat/long in the parameters of SHPObject?

def make_shapefile(filename):
    obj = shapelib.SHPObject(shapelib.SHPT_POLYGON, 1,
                             [[(10, 10), (20, 10), (20, 20), (10, 10)]])
    print obj.extents()
    print obj.vertices()
    outfile = shapelib.create(filename, shapelib.SHPT_POLYGON)

Do I need to specify anything so that ArcMap interprets the coordinates correctly when importing the created shapefile? Thanks very much. From amenity at enthought.com Fri Sep 11 14:17:57 2009 From: amenity at enthought.com (Amenity Applewhite) Date: Fri, 11 Sep 2009 13:17:57 -0500 Subject: [SciPy-User] Scientific Computing with Python, September 18, 2009 References: <1183663757.1252692788349.JavaMail.root@p2-ws607.ad.prodcc.net> Message-ID: <4832F195-92FA-412D-9C1C-CEE81851F10B@enthought.com> (HTML version of email) Greetings! September is well upon us and it looks like it's already time for another Scientific Computing with Python webinar. Next week, Travis Oliphant will be hosting a presentation on regression analysis in NumPy and SciPy. As you are probably aware, Travis was the primary developer of NumPy, so we're fortunate to have him presenting these tools. Here's a word on what to expect Friday: A common scientific and engineering need is to find the parameters to a model that best fit a particular data set. A large number of techniques and tools have been created for assisting with this general problem. They vary based on the model (e.g. linear or nonlinear), the characteristics of the errors on the data (e.g. weighted or unweighted), and the error metric selected (e.g. least-squares, or absolute difference).
This webinar will provide an overview of the tools that SciPy and NumPy provide for regression analysis including linear and non-linear least-squares and a brief look at handling other error metrics. We will also demonstrate simple GUI tools that can make some problems easier and provide a quick overview of the new Scikits package statsmodels whose API is maturing in a separate package but should be incorporated into SciPy in the future. Here's the registration information: Scientific Computing with Python Webinar: Regression analysis in NumPy Friday, September 18 1pm CDT/6pm UTC Register at GoToMeeting: https://www1.gotomeeting.com/register/632400424 Forward email http://ui.constantcontact.com/sa/fwtf.jsp?m=1102424111856&ea=leah%40enthought.com&a=1102702114724&id=preview Hope to see you there! -- Amenity Applewhite Enthought, Inc. Scientific Computing Solutions www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdekauwe at gmail.com Fri Sep 11 17:48:15 2009 From: mdekauwe at gmail.com (Mart.) Date: Fri, 11 Sep 2009 14:48:15 -0700 (PDT) Subject: [SciPy-User] snow leopard issues with numpy In-Reply-To: <4AA9AFC6.2060303@ar.media.kyoto-u.ac.jp> References: <1324DD89-AE68-4B9E-BE82-339006831B86@gmail.com> <76b0b711-cbed-46e1-b311-fa3127521278@j19g2000yqk.googlegroups.com> <4AA9AFC6.2060303@ar.media.kyoto-u.ac.jp> Message-ID: Thats interesting - I had forgotten the OS comes with a python of it's own! Do you know what the issue is? On Sep 11, 3:02?am, David Cournapeau wrote: > Mart. wrote: > > I don't know about the svn version but I have > > > from '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ > > site-packages/scipy-0.8.0.dev5838-py2.5-macosx-10.3-fat.egg/scipy/ > > version.pyc'> > > > working fine on my mac with snow leopard? > > That's most likely because in your configuration, the python in your > path is an 'old' one. Problem arise when loading an old built numpy with > the new, snow lepoard python (python 2.6 and 64 bits). > > cheers, > > David > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From mdekauwe at gmail.com Fri Sep 11 18:00:50 2009 From: mdekauwe at gmail.com (Mart.) Date: Fri, 11 Sep 2009 15:00:50 -0700 (PDT) Subject: [SciPy-User] snow leopard issues with numpy In-Reply-To: References: <1324DD89-AE68-4B9E-BE82-339006831B86@gmail.com> <76b0b711-cbed-46e1-b311-fa3127521278@j19g2000yqk.googlegroups.com> <4AA9AFC6.2060303@ar.media.kyoto-u.ac.jp> Message-ID: <90552766-3a33-48bf-9ffe-3d4d96b3d285@k33g2000yqa.googlegroups.com> http://blog.hyperjeff.net/?p=160 seems pretty comprehensive. On Sep 11, 10:48?pm, "Mart." wrote: > Thats interesting - I had forgotten the OS comes with a python of it's > own! Do you know what the issue is? > > On Sep 11, 3:02?am, David Cournapeau > wrote: > > > > > Mart. wrote: > > > I don't know about the svn version but I have > > > > from '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ > > > site-packages/scipy-0.8.0.dev5838-py2.5-macosx-10.3-fat.egg/scipy/ > > > version.pyc'> > > > > working fine on my mac with snow leopard? > > > That's most likely because in your configuration, the python in your > > path is an 'old' one. Problem arise when loading an old built numpy with > > the new, snow lepoard python (python 2.6 and 64 bits). > > > cheers, > > > David > > _______________________________________________ > > SciPy-User mailing list > > SciPy-U... 
at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From bartomas at gmail.com Sat Sep 12 10:10:43 2009 From: bartomas at gmail.com (bar tomas) Date: Sat, 12 Sep 2009 16:10:43 +0200 Subject: [SciPy-User] Using geographic coordinates with pyshapelib In-Reply-To: References: Message-ID: Thank you very much for your reply. Would any one have an example of a .prj file for geographic coordinates? (I've been hunting on the internet for one but haven't found one so far.) Many thanks again On 9/11/09, Joe Kington wrote: > > > Well, I can't reply to the list from my phone, so see the "forwarded" > message below. Hope that helps! > > ---------- Forwarded message ---------- > From: "Joe Kington" > Date: Sep 11, 2009 9:57 AM > Subject: Re: [SciPy-User] Using geographic coordinates with pyshapelib > To: "SciPy Users List" > > > > Sure. In fact, that's the most common thing to do. > > Just define the shapefile's coordinate system as geographic in arc or write > out a .prj file (search for it, its just a txt file naming the projection, > but you need to know the format). > > As long as the coordinate system is defined, arc will reproject everything > on the fly. > > > > On Sep 11, 2009 9:35 AM, "bar tomas" wrote: > > > Hi, > > Can I use lat/long... > > From jkington at wisc.edu Sat Sep 12 16:53:11 2009 From: jkington at wisc.edu (Joe Kington) Date: Sat, 12 Sep 2009 15:53:11 -0500 Subject: [SciPy-User] Using geographic coordinates with pyshapelib In-Reply-To: References: Message-ID: Hi, For geographic data with a datum of WGS84, the .prj file should have this (just a single line of text, with a .prj extension on the file) GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",SPHEROID["WGS_1984",6378137.0,298.257223563]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]] Sorry I didn't include that earlier! Hope it helps! -Joe On Sat, Sep 12, 2009 at 9:10 AM, bar tomas wrote: > Thank you very much for your reply. > Would any one have an example of a .prj file for geographic > coordinates? (I've been hunting on the internet for one but haven't > found one so far.) > Many thanks again > > On 9/11/09, Joe Kington wrote: > > > > > > Well, I can't reply to the list from my phone, so see the "forwarded" > > message below. Hope that helps! > > > > ---------- Forwarded message ---------- > > From: "Joe Kington" > > Date: Sep 11, 2009 9:57 AM > > Subject: Re: [SciPy-User] Using geographic coordinates with pyshapelib > > To: "SciPy Users List" > > > > > > > > Sure. In fact, that's the most common thing to do. > > > > Just define the shapefile's coordinate system as geographic in arc or > write > > out a .prj file (search for it, its just a txt file naming the > projection, > > but you need to know the format). > > > > As long as the coordinate system is defined, arc will reproject > everything > > on the fly. > > > > > > On Sep 11, 2009 9:35 AM, "bar tomas" wrote: > > > > > Hi, > > Can I use lat/long... > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bartomas at gmail.com Sun Sep 13 10:53:07 2009 From: bartomas at gmail.com (bar tomas) Date: Sun, 13 Sep 2009 16:53:07 +0200 Subject: [SciPy-User] Using geographic coordinates with pyshapelib In-Reply-To: References: Message-ID: Thank you so much! 
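A sketch tying the two answers together: create the shapefile in lon/lat with pyshapelib and write the WKT string above next to it. The write_object call and the file names are assumptions for illustration, not something confirmed in this thread:

import shapelib

# WKT for geographic WGS84, exactly as posted above
wgs84_wkt = ('GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",'
             'SPHEROID["WGS_1984",6378137.0,298.257223563]],'
             'PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]]')

# Shapefile vertices are (x, y) = (longitude, latitude)
obj = shapelib.SHPObject(shapelib.SHPT_POLYGON, 1,
                         [[(-1.5, 52.0), (-1.0, 52.0),
                           (-1.0, 52.5), (-1.5, 52.0)]])
outfile = shapelib.create('regions', shapelib.SHPT_POLYGON)
outfile.write_object(-1, obj)  # append; assumed pyshapelib API
del outfile                    # make sure the file is flushed to disk

# The .prj must share the shapefile's base name
prj = open('regions.prj', 'w')
prj.write(wgs84_wkt)
prj.close()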
On 9/12/09, Joe Kington wrote: > Hi, > > For geographic data with a datum of WGS84, the .prj file should have this > (just a single line of text, with a .prj extension on the file) > GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",SPHEROID["WGS_1984",6378137.0,298.257223563]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]] > > Sorry I didn't include that earlier! Hope it helps! > -Joe > > > On Sat, Sep 12, 2009 at 9:10 AM, bar tomas wrote: > > > Thank you very much for your reply. > > Would any one have an example of a .prj file for geographic > > coordinates? (I've been hunting on the internet for one but haven't > > found one so far.) > > Many thanks again > > > > > > > > > > On 9/11/09, Joe Kington wrote: > > > > > > > > > Well, I can't reply to the list from my phone, so see the "forwarded" > > > message below. Hope that helps! > > > > > > ---------- Forwarded message ---------- > > > From: "Joe Kington" > > > Date: Sep 11, 2009 9:57 AM > > > Subject: Re: [SciPy-User] Using geographic coordinates with pyshapelib > > > To: "SciPy Users List" > > > > > > > > > > > > Sure. In fact, that's the most common thing to do. > > > > > > Just define the shapefile's coordinate system as geographic in arc or > write > > > out a .prj file (search for it, its just a txt file naming the > projection, > > > but you need to know the format). > > > > > > As long as the coordinate system is defined, arc will reproject > everything > > > on the fly. > > > > > > > > On Sep 11, 2009 9:35 AM, "bar tomas" wrote: > > > > > > Hi, > > Can I use lat/long... > > > > > > > > > > From stefan at sun.ac.za Tue Sep 15 08:38:50 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 15 Sep 2009 14:38:50 +0200 Subject: [SciPy-User] Scikits Trac In-Reply-To: References: <9457e7c80909021524x2fbfd789kb4b676a6ed02e00f@mail.gmail.com> <9457e7c80909050155g544d03d6uf2558ca68fff2e66@mail.gmail.com> Message-ID: <9457e7c80909150538x550a703arbb4bbed4e24ffe12@mail.gmail.com> 2009/9/9 Matt Knox : > Also, is it possible for somebody to disable the obsolete [scikits] trac server? There is > no indication when visiting the ticketing tool that the server should no longer > be used. That's a good idea, but I don't have admin rights on that server. Who is the Enthought sysadmin? Regards St?fan From robert.kern at gmail.com Tue Sep 15 11:21:17 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 15 Sep 2009 10:21:17 -0500 Subject: [SciPy-User] Scikits Trac In-Reply-To: <9457e7c80909150538x550a703arbb4bbed4e24ffe12@mail.gmail.com> References: <9457e7c80909021524x2fbfd789kb4b676a6ed02e00f@mail.gmail.com> <9457e7c80909050155g544d03d6uf2558ca68fff2e66@mail.gmail.com> <9457e7c80909150538x550a703arbb4bbed4e24ffe12@mail.gmail.com> Message-ID: <3d375d730909150821r18ee751cwc7519bea12987f34@mail.gmail.com> 2009/9/15 St?fan van der Walt : > 2009/9/9 Matt Knox : >> Also, is it possible for somebody to disable the obsolete [scikits] trac server? There is >> no indication when visiting the ticketing tool that the server should no longer >> be used. > > That's a good idea, but I don't have admin rights on that server. ?Who > is the Enthought sysadmin? Aaron River -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From tmp50 at ukr.net Tue Sep 15 11:32:57 2009 From: tmp50 at ukr.net (Dmitrey) Date: Tue, 15 Sep 2009 18:32:57 +0300 Subject: [SciPy-User] ANN: FuncDesigner 0.15 - free Python-written framework with automatic differentiation Message-ID: FuncDesigner is cross-platform (Windows, Linux, Mac OS etc) Python- written framework with automatic differentiation (http:// en.wikipedia.org/wiki/Automatic_differentiation). License BSD allows to use it in both open- and closed-code soft. It has been extracted from OpenOpt framework as a stand-alone package, still you can easily optimize models written in FuncDesigner by OpenOpt (some examples here: http://openopt.org/NumericalOptimizationForFuncDesignerModels) For more details see http://openopt.org/FuncDesigner http://forum.openopt.org/viewtopic.php?id=141 Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmp50 at ukr.net Tue Sep 15 11:37:05 2009 From: tmp50 at ukr.net (Dmitrey) Date: Tue, 15 Sep 2009 18:37:05 +0300 Subject: [SciPy-User] ANN: OpenOpt 0.25 - free numerical optimization framework with automatic differentiation Message-ID: OpenOpt is cross-platform (Windows, Linux, Mac OS etc) Python-written framework. If you have a model written in FuncDesigner (http://openopt.org/FuncDesigner), you can get 1st derivatives via automatic differentiation ( http://en.wikipedia.org/wiki/Automatic_differentiation) (some examples here: http://openopt.org/NumericalOptimizationForFuncDesignerModels). License BSD allows to use it in both open- and closed-code soft.? For more details see http://openopt.org/ http://forum.openopt.org/viewtopic.php?id=141? Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian.walter at gmail.com Tue Sep 15 14:42:10 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Tue, 15 Sep 2009 20:42:10 +0200 Subject: [SciPy-User] ANN: FuncDesigner 0.15 - free Python-written framework with automatic differentiation In-Reply-To: References: Message-ID: sounds interesting. how does FuncDesigner accumulate the derivatives internally? does it use the reverse mode of AD? Sebastian 2009/9/15 Dmitrey : > FuncDesigner is cross-platform (Windows, Linux, Mac OS etc) Python- > written framework with automatic differentiation (http:// > en.wikipedia.org/wiki/Automatic_differentiation). License BSD allows > to use it in both open- and closed-code soft. It has been extracted > from OpenOpt framework as a stand-alone package, still you can easily > optimize models written in FuncDesigner by OpenOpt (some examples > here: http://openopt.org/NumericalOptimizationForFuncDesignerModels) > > For more details see > http://openopt.org/FuncDesigner > http://forum.openopt.org/viewtopic.php?id=141 > > Regards, D. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From tmp50 at ukr.net Tue Sep 15 14:49:46 2009 From: tmp50 at ukr.net (Dmitrey) Date: Tue, 15 Sep 2009 21:49:46 +0300 Subject: [SciPy-User] ANN: FuncDesigner 0.15 - free Python-written framework with automatic differentiation In-Reply-To: Message-ID: how does FuncDesigner accumulate the derivatives internally? see ooFun.py, function _D here http://trac.openopt.org/openopt/browser/PythonPackages/FuncDesigner/FuncDesigner/ooFun.py#L479 ? does it use the reverse mode of AD? 
yes, since the situation for numerical optimization problems where several funcs and much more variables are present is more typical, hence reverse mode is more suitable, as it is mentioned in wikipedia.org automatic differentiation webpage. You'd better ask the questions in openopt forum, currently I cannot read all the scipy mail list messages. Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From washakie at gmail.com Wed Sep 16 18:28:34 2009 From: washakie at gmail.com (John [H2O]) Date: Wed, 16 Sep 2009 15:28:34 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] 2d interpolation, non-regular lat/lon grid - help with delauney/natgrid?? In-Reply-To: <3d375d730908130927x65ed0d6fh2db206fa5afa0ad7@mail.gmail.com> References: <24909685.post@talk.nabble.com> <24918109.post@talk.nabble.com> <3d375d730908111058m2c0fc5daw16fe9add8936d4ec@mail.gmail.com> <24943646.post@talk.nabble.com> <3d375d730908121611p1b8eb33cof3dc3ba831b8e7b1@mail.gmail.com> <24950836.post@talk.nabble.com> <6a17e9ee0908130220i24cad4cfx5ad556f5751bac07@mail.gmail.com> <24952162.post@talk.nabble.com> <6a17e9ee0908130548k6413dedfiaaee91e6410d296f@mail.gmail.com> <24954551.post@talk.nabble.com> <3d375d730908130927x65ed0d6fh2db206fa5afa0ad7@mail.gmail.com> Message-ID: <25482004.post@talk.nabble.com> Robert Kern-2 wrote: > > Ah, yes. griddata() only handles regular grids for some reason, not > arbitrary interpolation points. You will have to use the underlying > delaunay package to interpolate arbitrary points. Using your variable > names: > > # triangulate data > tri = delaunay.Triangulation(x,y) > # interpolate data > interp = tri.nn_interpolator(z) > Z0 = interp(gridx, gridy) > > -- > I'd like to revive the thread if I may... I'm now able to use the projected coordinate system and do a regridding using the griddata function. But I would like to use the Triangulation approach. Unfortunately, I get the following error after some time: terminate called after throwing an instance of 'std::bad_alloc' Any thoughts on what may be causing this? -- View this message in context: http://www.nabble.com/2d-interpolation%2C-non-regular-lat-lon-grid-tp24909685p25482004.html Sent from the Scipy-User mailing list archive at Nabble.com. From pav at iki.fi Wed Sep 16 18:39:33 2009 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 17 Sep 2009 01:39:33 +0300 Subject: [SciPy-User] [SciPy-user] 2d interpolation, non-regular lat/lon grid - help with delauney/natgrid?? In-Reply-To: <25482004.post@talk.nabble.com> References: <24909685.post@talk.nabble.com> <24918109.post@talk.nabble.com> <3d375d730908111058m2c0fc5daw16fe9add8936d4ec@mail.gmail.com> <24943646.post@talk.nabble.com> <3d375d730908121611p1b8eb33cof3dc3ba831b8e7b1@mail.gmail.com> <24950836.post@talk.nabble.com> <6a17e9ee0908130220i24cad4cfx5ad556f5751bac07@mail.gmail.com> <24952162.post@talk.nabble.com> <6a17e9ee0908130548k6413dedfiaaee91e6410d296f@mail.gmail.com> <24954551.post@talk.nabble.com> <3d375d730908130927x65ed0d6fh2db206fa5afa0ad7@mail.gmail.com> <25482004.post@talk.nabble.com> Message-ID: <1253140773.1990.189.camel@idol> ke, 2009-09-16 kello 15:28 -0700, John [H2O] kirjoitti: > > Robert Kern-2 wrote: > > > > Ah, yes. griddata() only handles regular grids for some reason, not > > arbitrary interpolation points. You will have to use the underlying > > delaunay package to interpolate arbitrary points. 
Using your variable > > names: > > > > # triangulate data > > tri = delaunay.Triangulation(x,y) > > # interpolate data > > interp = tri.nn_interpolator(z) > > Z0 = interp(gridx, gridy) > > > > -- > > > > I'd like to revive the thread if I may... I'm now able to use the projected > coordinate system and do a regridding using the griddata function. But I > would like to use the Triangulation approach. > > Unfortunately, I get the following error after some time: > terminate called after throwing an instance of 'std::bad_alloc' > > Any thoughts on what may be causing this? It's a C++ exception, indicating that something runs out of memory. The triangulation generation in scikits.delaunay is written in C++, so this probably means that you have more data points than the triangulator can handle. The code does not catch bad_alloc exceptions, so this results to termination of the program rather than in a MemoryError. -- Pauli Virtanen From tioguerra at gmail.com Thu Sep 17 03:05:06 2009 From: tioguerra at gmail.com (Rodrigo Guerra) Date: Thu, 17 Sep 2009 16:05:06 +0900 Subject: [SciPy-User] building svn rev 5926 on snow leopard 10.6.1 Message-ID: <817c9f950909170005p6a0020d3xa4f27709770a1e34@mail.gmail.com> Hi, I recently upgraded from Leopard to Snow Leopard. To update my NumPy and SciPy installation I've followed the steps described in this webpage: I got all the way up to step 5, where I would actually build SciPy, but got stuck with some strange compilation error. Here are the relevant error messages: ~/tmp/scipy$ python setup.py build (snip) building 'arpack' library compiling C sources C compiler: gcc-4.2 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -arch i386 -arch x86_64 -pipe creating build/temp.macosx-10.6-universal-2.6/scipy/sparse/linalg/eigen creating build/temp.macosx-10.6-universal-2.6/scipy/sparse/linalg/eigen/arpack creating build/temp.macosx-10.6-universal-2.6/scipy/sparse/linalg/eigen/arpack/ARPACK creating build/temp.macosx-10.6-universal-2.6/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS compile options: '-Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/Library/Python/2.6/site-packages/numpy/core/include -c' gcc-4.2: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:58, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:1: /System/Library/Frameworks/vecLib.framework/Headers/clapack.h: In function ?cgelsd_?: /System/Library/Frameworks/vecLib.framework/Headers/clapack.h:380: error: expected declaration specifiers before ?AVAILABLE_MAC_OS_X_VERSION_10_6_AND_LATER? /System/Library/Frameworks/vecLib.framework/Headers/clapack.h:385: error: expected ?=?, ?,?, ?;?, ?asm? or ?__attribute__? before ?AVAILABLE_MAC_OS_X_VERSION_10_6_AND_LATER? /System/Library/Frameworks/vecLib.framework/Headers/clapack.h:766: error: expected ?=?, ?,?, ?;?, ?asm? or ?__attribute__? before ?AVAILABLE_MAC_OS_X_VERSION_10_6_AND_LATER? (snip) The strange thing is that I am almost sure I have gone through something similar when upgrading from Tiger to Leopard, but I can't really remember. I hope there is some really trivial mistake I am making here. Any ideas? 
Cheers, Guerra From david at ar.media.kyoto-u.ac.jp Thu Sep 17 02:56:33 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 17 Sep 2009 15:56:33 +0900 Subject: [SciPy-User] building svn rev 5926 on snow leopard 10.6.1 In-Reply-To: <817c9f950909170005p6a0020d3xa4f27709770a1e34@mail.gmail.com> References: <817c9f950909170005p6a0020d3xa4f27709770a1e34@mail.gmail.com> Message-ID: <4AB1DDA1.5080402@ar.media.kyoto-u.ac.jp> Rodrigo Guerra wrote: > Hi, > > I recently upgraded from Leopard to Snow Leopard. To update my NumPy > and SciPy installation I've followed the steps described in this > webpage: > > > I got all the way up to step 5, where I would actually build SciPy, > but got stuck with some strange compilation error. > > Here are the relevant error messages: > I am not sure whether that's the problem, but on late mac os x versions, the accelerate framework has superseded the veclib framework (veclib is now a part of accelerate). Can you give us the output of the configuration stage (python setup.py config) ? Also, make sure to wipe out the build directory and any leftover in your subversion checkout before building scipy, cheers, David From tioguerra at gmail.com Thu Sep 17 03:28:20 2009 From: tioguerra at gmail.com (Rodrigo Guerra) Date: Thu, 17 Sep 2009 16:28:20 +0900 Subject: [SciPy-User] building svn rev 5926 on snow leopard 10.6.1 In-Reply-To: <4AB1DDA1.5080402@ar.media.kyoto-u.ac.jp> References: <817c9f950909170005p6a0020d3xa4f27709770a1e34@mail.gmail.com> <4AB1DDA1.5080402@ar.media.kyoto-u.ac.jp> Message-ID: <817c9f950909170028m6d3b557bnc536d78529beb11a@mail.gmail.com> Hi David, hi all, Now I removed the "build/" directory and executed "python setup.py clean" and then later I executed "python setup.py config". The output of this last command was the following: ~/tmp/scipy$ python setup.py config Warning: No configuration returned, assuming unavailable. blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] umfpack_info: amd_info: FOUND: libraries = ['amd'] library_dirs = ['/Users/guerra/tmp/AMD/Lib'] swig_opts = ['-I/Users/guerra/tmp/AMD/Include'] define_macros = [('SCIPY_AMD_H', None)] include_dirs = ['/Users/guerra/tmp/AMD/Include'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/Users/guerra/tmp/UMFPACK/Lib', '/Users/guerra/tmp/AMD/Lib'] swig_opts = ['-I/Users/guerra/tmp/UMFPACK/Include', '-I/Users/guerra/tmp/AMD/Include'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] include_dirs = ['/Users/guerra/tmp/UMFPACK/Include', '/Users/guerra/tmp/AMD/Include'] running config On Thu, Sep 17, 2009 at 3:56 PM, David Cournapeau wrote: > Rodrigo Guerra wrote: >> Hi, >> >> I recently upgraded from Leopard to Snow Leopard. To update my NumPy >> and SciPy installation I've followed the steps described in this >> webpage: >> >> >> I got all the way up to step 5, where I would actually build SciPy, >> but got stuck with some strange compilation error. >> >> Here are the relevant error messages: >> > > I am not sure whether that's the problem, but on late mac os x versions, > the accelerate framework has superseded the veclib framework (veclib is > now a part of accelerate). 
Can you give us the output of the > configuration stage ?(python setup.py config) ? > > Also, make sure to wipe out the build directory and any leftover in your > subversion checkout before building scipy, > > cheers, > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From kunguz at gmail.com Thu Sep 17 09:57:34 2009 From: kunguz at gmail.com (=?ISO-8859-9?Q?Kaan_AK=DE=DDT?=) Date: Thu, 17 Sep 2009 15:57:34 +0200 Subject: [SciPy-User] Low pass filter Message-ID: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com> Hi all, This is my first time that I use Scipy. I have two arrays both of them one dimensional. One of them is filled with voltage values and one of them is filled with time values. I want to apply low pass filter to clear the noise from the data. Can you please help me? I already wrote something but it is not working: (b,a)=butter(2,5,btype='low') voltage=lfilter(b,a,voltage) plt.plot(voltage,timing) Best regards, Kaan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivo.maljevic at gmail.com Thu Sep 17 10:50:01 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Thu, 17 Sep 2009 10:50:01 -0400 Subject: [SciPy-User] Low pass filter In-Reply-To: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com> References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com> Message-ID: <826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com> Does the following: plt.plot(timing, voltage) plt.show() work before filtering? (Suggestion, you may want to put time on x-axis, see the line above) I just wrote a small test program, and it works: #!/usr/bin/python # import import pylab as plt from scipy import * from scipy.signal import butter, lfilter t=linspace(0,1,100) v=sin(2*pi*t) (b,a)=butter(2,5,btype='low') v=lfilter(b,a,v) plt.plot(t,v) plt.show() 2009/9/17 Kaan AK??T > Hi all, > > This is my first time that I use Scipy. I have two arrays both of them one > dimensional. One of them is filled with voltage values and one of them is > filled with time values. I want to apply low pass filter to clear the noise > from the data. Can you please help me? I already wrote something but it is > not working: > > (b,a)=butter(2,5,btype='low') > voltage=lfilter(b,a,voltage) > plt.plot(voltage,timing) > > Best regards, > Kaan > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kunguz at gmail.com Thu Sep 17 11:17:09 2009 From: kunguz at gmail.com (=?ISO-8859-9?Q?Kaan_AK=DE=DDT?=) Date: Thu, 17 Sep 2009 17:17:09 +0200 Subject: [SciPy-User] Low pass filter In-Reply-To: <826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com> References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com> <826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com> Message-ID: <8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com> Thanks Ivo, The problem is after filtering I got the same signal. I wanted to clear the frequencies higher then 5 Hz. 
My code is similar to yours but the output doesn't seem logical: #!/usr/bin/python # -*- coding: utf-8 -*- import time, array, datetime import matplotlib import matplotlib.pyplot as plt from scipy import * from scipy.signal import butter, lfilter input = 0 timing = [] voltage = [] values = [] input = 0 infile = open('measurements/' + "data.csv", "r") for line in infile.readlines(): values = line.split(',') voltage.insert(input,float(values[0])) timing.insert(input,float(values[1])) input = input + 1 plt.xlabel('Time steps') plt.ylabel('Voltage') plt.title('Measurments from Oscilloscope') plt.grid(True) plt.plot(voltage,timing) [b,a]=butter(2,5) voltagelowp=lfilter(b,a,voltage) plt.plot(voltagelowp,timing) plt.show() plt.close() 2009/9/17 Ivo Maljevic > Does the following: > > plt.plot(timing, voltage) > plt.show() > > work before filtering? (Suggestion, you may want to put time on x-axis, see > the line above) > > I just wrote a small test program, and it works: > > > #!/usr/bin/python > > # import > import pylab as plt > from scipy import * > from scipy.signal import butter, lfilter > > t=linspace(0,1,100) > v=sin(2*pi*t) > > (b,a)=butter(2,5,btype='low') > v=lfilter(b,a,v) > plt.plot(t,v) > plt.show() > > > 2009/9/17 Kaan AK??T > >> Hi all, >> >> This is my first time that I use Scipy. I have two arrays both of them one >> dimensional. One of them is filled with voltage values and one of them is >> filled with time values. I want to apply low pass filter to clear the noise >> from the data. Can you please help me? I already wrote something but it is >> not working: >> >> (b,a)=butter(2,5,btype='low') >> voltage=lfilter(b,a,voltage) >> plt.plot(voltage,timing) >> >> Best regards, >> Kaan >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kunguz at gmail.com Thu Sep 17 11:19:22 2009 From: kunguz at gmail.com (=?ISO-8859-9?Q?Kaan_AK=DE=DDT?=) Date: Thu, 17 Sep 2009 17:19:22 +0200 Subject: [SciPy-User] Low pass filter In-Reply-To: <8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com> References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com> <826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com> <8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com> Message-ID: <8fa7fe790909170819w68bf6ff0v3dee3a4fd8b2f0fd@mail.gmail.com> May be butterworth is not the correct way to make it. Can it be like that? 17 Eyl?l 2009 17:17 tarihinde Kaan AK??T yazd?: > Thanks Ivo, The problem is after filtering I got the same signal. I wanted > to clear the frequencies higher then 5 Hz. 
My code is similar to yours but > the output doesn't seem logical: > > #!/usr/bin/python > # -*- coding: utf-8 -*- > import time, array, datetime > import matplotlib > import matplotlib.pyplot as plt > from scipy import * > from scipy.signal import butter, lfilter > > input = 0 > timing = [] > voltage = [] > values = [] > > input = 0 > infile = open('measurements/' + "data.csv", "r") > for line in infile.readlines(): > values = line.split(',') > voltage.insert(input,float(values[0])) > timing.insert(input,float(values[1])) > input = input + 1 > plt.xlabel('Time steps') > plt.ylabel('Voltage') > plt.title('Measurments from Oscilloscope') > plt.grid(True) > plt.plot(voltage,timing) > [b,a]=butter(2,5) > voltagelowp=lfilter(b,a,voltage) > plt.plot(voltagelowp,timing) > plt.show() > plt.close() > > 2009/9/17 Ivo Maljevic > > Does the following: >> >> plt.plot(timing, voltage) >> plt.show() >> >> work before filtering? (Suggestion, you may want to put time on x-axis, >> see the line above) >> >> I just wrote a small test program, and it works: >> >> >> #!/usr/bin/python >> >> # import >> import pylab as plt >> from scipy import * >> from scipy.signal import butter, lfilter >> >> t=linspace(0,1,100) >> v=sin(2*pi*t) >> >> (b,a)=butter(2,5,btype='low') >> v=lfilter(b,a,v) >> plt.plot(t,v) >> plt.show() >> >> >> 2009/9/17 Kaan AK??T >> >>> Hi all, >>> >>> This is my first time that I use Scipy. I have two arrays both of them >>> one dimensional. One of them is filled with voltage values and one of them >>> is filled with time values. I want to apply low pass filter to clear the >>> noise from the data. Can you please help me? I already wrote something but >>> it is not working: >>> >>> (b,a)=butter(2,5,btype='low') >>> voltage=lfilter(b,a,voltage) >>> plt.plot(voltage,timing) >>> >>> Best regards, >>> Kaan >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivo.maljevic at gmail.com Thu Sep 17 11:28:55 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Thu, 17 Sep 2009 11:28:55 -0400 Subject: [SciPy-User] Low pass filter In-Reply-To: <8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com> References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com> <826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com> <8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com> Message-ID: <826c64da0909170828w434c2325h3e2fe898eb662551@mail.gmail.com> Kaan, If you have two consecutive plot calls, you need to call figure in between, or you will plot over the first one. See the modified code below (blue). Also, put the time on the proper place (red). I cannot comment on the frequency comment of the signal you are trying to LPF. Notes: 1. Your butterworth filter is too short: N=2? 2. Your Wn should be normalized and < 1, I think. 
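To make note 2 concrete, a sketch assuming (for illustration only) a 1 kHz sampling rate, which the thread never states:

from scipy.signal import butter

fs = 1000.0                # sampling rate in Hz; an assumption
cutoff = 5.0               # desired cutoff in Hz
wn = cutoff / (fs / 2.0)   # normalized by the Nyquist frequency -> 0.01
b, a = butter(4, wn, btype='low')
# By contrast, butter(2, 5) asks for a cutoff of 5 times the Nyquist
# frequency, which is not a meaningful digital filter.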
Ivo #!/usr/bin/python # -*- coding: utf-8 -*- import time, array, datetime import matplotlib import matplotlib.pyplot as plt from scipy import * from scipy.signal import butter, lfilter input = 0 timing = [] voltage = [] values = [] input = 0 infile = open('measurements/' + "data.csv", "r") for line in infile.readlines(): values = line.split(',') voltage.insert(input,float( > > values[0])) > timing.insert(input,float(values[1])) > input = input + 1 > plt.xlabel('Time steps') > plt.ylabel('Voltage') > plt.title('Measurments from Oscilloscope') > plt.grid(True) > plt.plot(timing, voltage) > plt.figure() [b,a]=butter(2,5) voltagelowp=lfilter(b,a, > > voltage) > plt.plot(timing, voltagelowp) > plt.show() > plt.close() > 2009/9/17 Kaan AK??T > Thanks Ivo, The problem is after filtering I got the same signal. I wanted > to clear the frequencies higher then 5 Hz. My code is similar to yours but > the output doesn't seem logical: > > #!/usr/bin/python > # -*- coding: utf-8 -*- > import time, array, datetime > import matplotlib > import matplotlib.pyplot as plt > from scipy import * > from scipy.signal import butter, lfilter > > input = 0 > timing = [] > voltage = [] > values = [] > > input = 0 > infile = open('measurements/' + "data.csv", "r") > for line in infile.readlines(): > values = line.split(',') > voltage.insert(input,float(values[0])) > timing.insert(input,float(values[1])) > input = input + 1 > plt.xlabel('Time steps') > plt.ylabel('Voltage') > plt.title('Measurments from Oscilloscope') > plt.grid(True) > plt.plot(timing, voltage) > plt.figure() > [b,a]=butter(2,5) > voltagelowp=lfilter(b,a,voltage) > plt.plot(timing, voltagelowp) > plt.show() > plt.close() > > 2009/9/17 Ivo Maljevic > > Does the following: >> >> plt.plot(timing, voltage) >> plt.show() >> >> work before filtering? (Suggestion, you may want to put time on x-axis, >> see the line above) >> >> I just wrote a small test program, and it works: >> >> >> #!/usr/bin/python >> >> # import >> import pylab as plt >> from scipy import * >> from scipy.signal import butter, lfilter >> >> t=linspace(0,1,100) >> v=sin(2*pi*t) >> >> (b,a)=butter(2,5,btype='low') >> v=lfilter(b,a,v) >> plt.plot(t,v) >> plt.show() >> >> >> 2009/9/17 Kaan AK??T >> >>> Hi all, >>> >>> This is my first time that I use Scipy. I have two arrays both of them >>> one dimensional. One of them is filled with voltage values and one of them >>> is filled with time values. I want to apply low pass filter to clear the >>> noise from the data. Can you please help me? I already wrote something but >>> it is not working: >>> >>> (b,a)=butter(2,5,btype='low') >>> voltage=lfilter(b,a,voltage) >>> plt.plot(voltage,timing) >>> >>> Best regards, >>> Kaan >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
From ivo.maljevic at gmail.com  Thu Sep 17 11:46:04 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Thu, 17 Sep 2009 11:46:04 -0400
Subject: [SciPy-User] Low pass filter
In-Reply-To: <8fa7fe790909170819w68bf6ff0v3dee3a4fd8b2f0fd@mail.gmail.com>
References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com>
	<826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com>
	<8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com>
	<8fa7fe790909170819w68bf6ff0v3dee3a4fd8b2f0fd@mail.gmail.com>
Message-ID: <826c64da0909170846u4e423ddev28dee4247279d2d4@mail.gmail.com>

Kaan,

Maybe this little example will help you better understand how this
filtering business works (run the attached file).

Ivo

2009/9/17 Kaan AKŞİT

> Maybe Butterworth is not the correct way to do it. Could that be the case?
>
> On 17 September 2009 at 17:17, Kaan AKŞİT wrote:
>
>> Thanks Ivo. The problem is that after filtering I get the same signal. I
>> wanted to clear the frequencies higher than 5 Hz.
>> [...]
-------------- next part --------------
A non-text attachment was scrubbed...
Name: tst.py
Type: text/x-python
Size: 598 bytes
Desc: not available
URL:

From ivo.maljevic at gmail.com  Thu Sep 17 11:33:41 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Thu, 17 Sep 2009 11:33:41 -0400
Subject: [SciPy-User] Low pass filter
In-Reply-To: <826c64da0909170828w434c2325h3e2fe898eb662551@mail.gmail.com>
References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com>
	<826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com>
	<8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com>
	<826c64da0909170828w434c2325h3e2fe898eb662551@mail.gmail.com>
Message-ID: <826c64da0909170833h79e1edfen1c2721ca13dd5974@mail.gmail.com>

Do you know what is the sampling rate of your data? Your Wn = 5/(f_s/2),
where f_s is the data sampling rate.

Hope this helps.

Ivo

2009/9/17 Ivo Maljevic

> Kaan,
> If you have two consecutive plot calls, you need to call figure() in
> between, or you will plot over the first one.
> [...]
From cmac at mit.edu  Thu Sep 17 12:13:26 2009
From: cmac at mit.edu (Christopher MacMinn)
Date: Thu, 17 Sep 2009 12:13:26 -0400
Subject: [SciPy-User] Working with AVI video files in python
Message-ID: <95da30590909170913k3afd08e3rd385b6ae008c4097@mail.gmail.com>

Hi everyone -

I have some videos (AVI files, MJPG encoding) that I would like to split
into frames so that I can analyze them in scipy/numpy.

Any suggestions of a good python package for doing this? I'm looking for
something along the lines of the MATLAB aviread or mmreader commands.

Thanks!
- Chris MacMinn

From lev at columbia.edu  Thu Sep 17 13:03:48 2009
From: lev at columbia.edu (Lev Givon)
Date: Thu, 17 Sep 2009 13:03:48 -0400
Subject: [SciPy-User] Working with AVI video files in python
In-Reply-To: <95da30590909170913k3afd08e3rd385b6ae008c4097@mail.gmail.com>
References: <95da30590909170913k3afd08e3rd385b6ae008c4097@mail.gmail.com>
Message-ID: <20090917170348.GA18671@localhost.columbia.edu>

Received from Christopher MacMinn on Thu, Sep 17, 2009 at 12:13:26PM EDT:

> I have some videos (AVI files, MJPG encoding) that I would like to split
> into frames so that I can analyze them in scipy/numpy.
> [...]

You may wish to try pyffmpeg: http://code.google.com/p/pyffmpeg/

I recall that there were some bugs in the last stable release of the
software (which is rather old), but it's possible that some of them were
resolved in the pyffmpeg2-alpha-candidate branch in svn.

L.G.
From william.ratcliff at gmail.com  Thu Sep 17 18:13:05 2009
From: william.ratcliff at gmail.com (william ratcliff)
Date: Thu, 17 Sep 2009 18:13:05 -0400
Subject: [SciPy-User] Automatic Peak Detection
Message-ID: <827183970909171513i199f02bbgf65debe39ab500a7@mail.gmail.com>

Hi,

I was wondering what other people have done in the way of automatic peak
detection. If I have a set of 1D data sets where I know the number of
peaks, what I've done is to use a Savitzky-Golay (sgolay) filter, which
preserves peak widths, to take first and second derivatives to locate the
peaks. I then try to find the widths of the peaks (going left and right
from the center of each peak) and feed this into my fitting routines (a
stripped-down sketch of the approach is below). However, this method seems
to be rather susceptible to noise. Has anyone else taken a stab at this?
Any thoughts from an ML or pattern-matching perspective on how to do this
in a robust manner?

Cheers,
William
(For now, I'm trying to do this in a model-free manner; in most real
cases I'm dealing with Gaussian and Lorentzian peaks.)
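P.S. The stripped-down sketch mentioned above -- plain finite differences
stand in for my actual sgolay derivative code, so treat it as illustrative
only:

import numpy as np

def crude_peaks(y):
    # first and second derivatives of the (already smoothed) signal
    d1 = np.gradient(y)
    d2 = np.gradient(d1)
    peaks = []
    for i in range(1, len(y) - 1):
        # a peak center: first derivative changes sign and curvature is negative
        if d1[i - 1] > 0 and d1[i + 1] < 0 and d2[i] < 0:
            peaks.append(i)
    return peaks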
From karl.young at ucsf.edu  Thu Sep 17 18:43:47 2009
From: karl.young at ucsf.edu (Karl Young)
Date: Thu, 17 Sep 2009 15:43:47 -0700
Subject: [SciPy-User] Automatic Peak Detection
In-Reply-To: <827183970909171513i199f02bbgf65debe39ab500a7@mail.gmail.com>
References: <827183970909171513i199f02bbgf65debe39ab500a7@mail.gmail.com>
Message-ID: <4AB2BBA3.1070304@ucsf.edu>

Hi William,

You say you know the number of peaks; do you also know where they are
(i.e. the frequencies)? If so, I've had some luck FFT'ing and estimating
the decay constants for the lines (that seemed slightly more
noise-resistant than doing it in the frequency domain), but I am then, of
course, assuming a model (i.e. to get the decay constant(s) I have to
assume the lines are Lorentzian or Gaussian, or some mix like a Voigt
lineshape). My colleagues and I have also played around with trying this
in a nonparametric way, e.g. using splines to fit the lines (and using
heuristic ways of initially estimating the width), though I'm not sure how
edifying that would be in your case. If you're interested, I could send a
couple of refs. re. trying to do this in NMR (your context as well?),
though I think it's still pretty generally a black art, depending on the
signal-to-noise ratio.

-- Karl

From rob.clewley at gmail.com  Thu Sep 17 18:48:16 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Thu, 17 Sep 2009 18:48:16 -0400
Subject: [SciPy-User] Automatic Peak Detection
In-Reply-To: <827183970909171513i199f02bbgf65debe39ab500a7@mail.gmail.com>
References: <827183970909171513i199f02bbgf65debe39ab500a7@mail.gmail.com>
Message-ID:

Hi Bill,

In response to a similar question a couple of years ago, I developed some
feature-based detection routines that are robust to noise, adaptable, and
modular. These are now part of a PyDSTool module but can easily be made
stand-alone. Depending on the kind of data you have, you can set up a
hierarchy of features to detect loosely-defined patterns in data. The
pre-defined peak detection class can be configured in various ways, and by
default uses a local quadratic fit to estimate the likely true peak from a
discrete set of sample data. A simple example would go like this:

from PyDSTool import *
from PyDSTool.Toolbox.neuro_data import *

# load your noisy 1d data, and make it into a Trajectory (a.k.a. curve) object
traj = numeric_to_traj([vs], 'test_traj', ['x'], ts, discrete=False)

# create the feature from a pre-defined peak (a.k.a. spike) detection
# feature class, with a struct-like object holding the params
is_spike = get_spike_data('one_spike', pars=args(
    height_tol=2000., thresh_pc=0.15, fit_width_max=20,
    weight=0, noise_tol=300, tlo=260, width_tol=ts[-1],
    coord='x', eventtol=1e-2, verbose_level=2, debug=True))

# call the feature with the trajectory and get a True/False result
print is_spike(traj)

# try detecting another spike around t = 268
is_spike.pars.tlo = 268
print is_spike(traj)

With the verbose diagnostic mode on, plotting the data and calling
is_spike twice results in the attached figure, which shows the data points
used in each fit, the local quadratic function, and its peak.

# fetch the estimated peak by accessing the is_spike.results attribute,
# which contains various diagnostic information, but in particular:
print is_spike.results.spike_time, is_spike.results.spike_val
# prints 264.498661974, 10477.4263709

If you have any questions, let me know. I use this code all the time to
fit complex data to neural models whose parameterizations are incomplete,
and where I want to concentrate on capturing qualitative aspects of the
data rather than the detailed quantitative parts.

-Rob

On Thu, Sep 17, 2009 at 6:13 PM, william ratcliff wrote:
> Hi,
> I was wondering what other people have done in the way of automatic peak
> detection.
> [...]

--
Robert H. Clewley, Ph.D.
Assistant Professor
Neuroscience Institute and Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA
tel: 404-413-6420 fax: 404-413-6403
http://www2.gsu.edu/~matrhc

-------------- next part --------------
A non-text attachment was scrubbed...
Name: peak_fitting.png
Type: image/png
Size: 29429 bytes
Desc: not available
URL:
From pnorthug at gmail.com  Thu Sep 17 18:51:38 2009
From: pnorthug at gmail.com (Paul)
Date: Thu, 17 Sep 2009 22:51:38 +0000 (UTC)
Subject: [SciPy-User] 2d convolution with 'full' in one dimension and 'valid' in another
Message-ID:

I have an n x m matrix A and an n x k matrix B, where k >> m. I would like
to compute the 'full' convolution of A and B along the 2nd dimension and
get a result that is n x (m + k - 1). If I select mode='full' in
scipy.signal.convolve, I get a result that is 'full' in both dimensions,
i.e. (n + n - 1) x (m + k - 1).

Currently, I do this:

C = np.array([sp.signal.correlate(A[i], B[i], mode='full') for i in range(n)])

I do this a lot with fixed A and varying B. I was wondering if there is a
faster way. Perhaps I should be using FFTs instead.

Thanks.
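P.S. For the archives: the FFT route I'm thinking of would look roughly
like this (an untested sketch; note it computes convolution proper, so for
the correlation variant in my loop above you would first flip B along
axis 1, i.e. B[:, ::-1]):

import numpy as np

n_fft = m + k - 1                           # length of a 'full' 1d convolution
A_f = np.fft.rfft(A, n_fft, axis=1)         # zero-padded FFT of each row of A
B_f = np.fft.rfft(B, n_fft, axis=1)
C = np.fft.irfft(A_f * B_f, n_fft, axis=1)  # row-wise full convolution, n x (m+k-1)

Since A is fixed, A_f can be computed once and reused across different B's.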
When I configure it as below: f_s = 5000000000 [b,a]=butter(4,5./(f_s/2)) It warns me about bad coefficient: /usr/lib/python2.6/site-packages/scipy/signal/filter_design.py:221: BadCoefficients: Badly conditionned filter coefficients (numerator): the results may be meaningless "results may be meaningless", BadCoefficients) /usr/lib/python2.6/site-packages/scipy/signal/filter_design.py:221: BadCoefficients: Badly conditionned filter coefficients (numerator): the results may be meaningless "results may be meaningless", BadCoefficients) Best regards, Kaan 17 Eyl?l 2009 17:33 tarihinde Ivo Maljevic yazd?: > Do you know what is the sampling rate of your data? Your Wn = 5/(f_s/2), > where f_s is the data sampling rate. > Also, I meant "I cannot comment on the frequency *content* of the signal > you are trying to LPF". > > Hope this helps. > > Ivo > > > 2009/9/17 Ivo Maljevic > >> Kaan, >> If you have two consecutive plot calls, you need to call figure in >> between, or you will plot over the first one. See the modified code below >> (blue). Also, put the time on the proper place (red). >> I cannot comment on the frequency comment of the signal you are trying to >> LPF. >> >> Notes: >> >> 1. Your butterworth filter is too short: N=2? >> 2. Your Wn should be normalized and < 1, I think. >> >> Ivo >> >> >> #!/usr/bin/python >> # -*- coding: utf-8 -*- >> import time, array, datetime >> import matplotlib >> import matplotlib.pyplot as plt >> from scipy import * >> from scipy.signal import butter, lfilter >> >> input = 0 >> timing = [] >> voltage = [] >> values = [] >> >> input = 0 >> infile = open('measurements/' + "data.csv", "r") >> for line in infile.readlines(): >> values = line.split(',') >> voltage.insert(input,float( >> >>> values[0])) >>> timing.insert(input,float(values[1])) >>> input = input + 1 >>> plt.xlabel('Time steps') >>> plt.ylabel('Voltage') >>> plt.title('Measurments from Oscilloscope') >>> plt.grid(True) >>> plt.plot(timing, voltage) >>> >> >> plt.figure() >> >> [b,a]=butter(2,5) >> voltagelowp=lfilter(b,a, >> >>> voltage) >>> plt.plot(timing, voltagelowp) >>> plt.show() >>> plt.close() >>> >> >> >> >> 2009/9/17 Kaan AK??T >> >>> Thanks Ivo, The problem is after filtering I got the same signal. I >>> wanted to clear the frequencies higher then 5 Hz. My code is similar to >>> yours but the output doesn't seem logical: >>> >>> #!/usr/bin/python >>> # -*- coding: utf-8 -*- >>> import time, array, datetime >>> import matplotlib >>> import matplotlib.pyplot as plt >>> from scipy import * >>> from scipy.signal import butter, lfilter >>> >>> input = 0 >>> timing = [] >>> voltage = [] >>> values = [] >>> >>> input = 0 >>> infile = open('measurements/' + "data.csv", "r") >>> for line in infile.readlines(): >>> values = line.split(',') >>> voltage.insert(input,float(values[0])) >>> timing.insert(input,float(values[1])) >>> input = input + 1 >>> plt.xlabel('Time steps') >>> plt.ylabel('Voltage') >>> plt.title('Measurments from Oscilloscope') >>> plt.grid(True) >>> plt.plot(timing, voltage) >>> >> >> plt.figure() >> >> >>> [b,a]=butter(2,5) >>> voltagelowp=lfilter(b,a,voltage) >>> plt.plot(timing, voltagelowp) >>> >>> plt.show() >>> plt.close() >>> >>> 2009/9/17 Ivo Maljevic >>> >>> Does the following: >>>> >>>> plt.plot(timing, voltage) >>>> plt.show() >>>> >>>> work before filtering? 
From kunguz at gmail.com  Fri Sep 18 03:27:26 2009
From: kunguz at gmail.com (Kaan AKŞİT)
Date: Fri, 18 Sep 2009 09:27:26 +0200
Subject: [SciPy-User] Low pass filter
In-Reply-To: <826c64da0909170833h79e1edfen1c2721ca13dd5974@mail.gmail.com>
References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com>
	<826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com>
	<8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com>
	<826c64da0909170828w434c2325h3e2fe898eb662551@mail.gmail.com>
	<826c64da0909170833h79e1edfen1c2721ca13dd5974@mail.gmail.com>
Message-ID: <8fa7fe790909180027qc68f82dr73bbd8876f07e3ce@mail.gmail.com>

Dear Ivo,

Thank you very much for the examples and the debugging. I had this problem
with the tuning of Wn. My sample rate is pretty high, f_s = 5000000000.
When I configure it as below:

f_s = 5000000000
[b,a]=butter(4,5./(f_s/2))

it warns me about bad coefficients:

/usr/lib/python2.6/site-packages/scipy/signal/filter_design.py:221:
BadCoefficients: Badly conditionned filter coefficients (numerator): the
results may be meaningless
  "results may be meaningless", BadCoefficients)
/usr/lib/python2.6/site-packages/scipy/signal/filter_design.py:221:
BadCoefficients: Badly conditionned filter coefficients (numerator): the
results may be meaningless
  "results may be meaningless", BadCoefficients)

Best regards,
Kaan

On 17 September 2009 at 17:33, Ivo Maljevic wrote:

> Do you know what is the sampling rate of your data? Your Wn = 5/(f_s/2),
> where f_s is the data sampling rate.
> [...]

From ivo.maljevic at gmail.com  Fri Sep 18 08:00:21 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Fri, 18 Sep 2009 08:00:21 -0400
Subject: [SciPy-User] Low pass filter
In-Reply-To: <8fa7fe790909180027qc68f82dr73bbd8876f07e3ce@mail.gmail.com>
References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com>
	<826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com>
	<8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com>
	<826c64da0909170828w434c2325h3e2fe898eb662551@mail.gmail.com>
	<826c64da0909170833h79e1edfen1c2721ca13dd5974@mail.gmail.com>
	<8fa7fe790909180027qc68f82dr73bbd8876f07e3ce@mail.gmail.com>
Message-ID: <826c64da0909180500r6b46dad6lfdcde83b7f1622e4@mail.gmail.com>

Kaan,

If your signal is sampled at 5 Gsamples/second, why do you want to filter
out everything above 5 Hz? Make sure you understand what you are doing.
Your filter order N=4 is still low, but that is a secondary issue; the
real problem is that you cannot make a filter with a cutoff that narrow
relative to the sampling rate.

Ivo
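P.S. If you really do need a 5 Hz cutoff, one workaround is to reduce the
sample rate first and design the filter at the lower rate. A sketch of the
idea (untested; the decimation factor R and the Wn threshold are my own
assumptions, and the staged loop assumes your record is long enough to
survive it):

import numpy as np
from scipy.signal import butter, lfilter

R = 1000
f_s = 5e9
v = np.asarray(voltage, dtype=float)
while 5.0 / (f_s / 2.0) < 1e-3:         # decimate until 5 Hz is a sane Wn
    v = v[:len(v) - len(v) % R]         # drop the tail so it divides evenly
    v = v.reshape(-1, R).mean(axis=1)   # block averaging: crude LPF + downsample
    f_s = f_s / R
b, a = butter(4, 5.0 / (f_s / 2.0))
v_filt = lfilter(b, a, v)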
From cmac at mit.edu  Fri Sep 18 09:02:54 2009
From: cmac at mit.edu (Christopher MacMinn)
Date: Fri, 18 Sep 2009 09:02:54 -0400
Subject: [SciPy-User] Working with AVI video files in python
Message-ID: <95da30590909180602o6c88f728k335d4253e7f7c787@mail.gmail.com>

>> I have some videos (AVI files, MJPG encoding) that I would like to split
>> into frames so that I can analyze them in scipy/numpy.

> You may wish to try pyffmpeg: http://code.google.com/p/pyffmpeg/

This is the kind of thing I was looking for -- thanks!

Best, Chris

From jagan_cbe2003 at yahoo.co.in  Fri Sep 18 09:40:40 2009
From: jagan_cbe2003 at yahoo.co.in (jagan prabhu)
Date: Fri, 18 Sep 2009 06:40:40 -0700 (PDT)
Subject: [SciPy-User] criteria's to get the exit mode '0
Message-ID: <698936.51411.qm@web8318.mail.in.yahoo.com>

Hi,

I am using the scipy optimization routines 'fmin_slsqp' and
'fmin_l_bfgs_b'. In both cases, exit mode '0' represents "optimization
terminated successfully / convergence is achieved".

What are the criteria to get exit mode '0'?

I ask because if I change my initial parameters by a very small increment
or decrement, I get a huge difference in my optimized function value and
optimized parameter values. So I would like to know: how does the
optimization routine determine the optimized parameters and the optimized
function value?

Thank you in advance.

Regards,
Jagan
From ivo.maljevic at gmail.com  Fri Sep 18 10:33:57 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Fri, 18 Sep 2009 10:33:57 -0400
Subject: [SciPy-User] Low pass filter
In-Reply-To: <8fa7fe790909180027qc68f82dr73bbd8876f07e3ce@mail.gmail.com>
References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com>
	<826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com>
	<8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com>
	<826c64da0909170828w434c2325h3e2fe898eb662551@mail.gmail.com>
	<826c64da0909170833h79e1edfen1c2721ca13dd5974@mail.gmail.com>
	<8fa7fe790909180027qc68f82dr73bbd8876f07e3ce@mail.gmail.com>
Message-ID: <826c64da0909180733k462644c1p6b15fb4276679bc6@mail.gmail.com>

Just to add: if your sample rate really is that high, then the frequency
content may go up to f_s/2, which is 2.5 GHz. If you want to filter out
anything above 5 Hz, that is virtually the same as taking only the mean
value of your voltage signal. BUT, I am pretty sure that either your
sampling rate is not that high, or your cutoff frequency is much higher
than 5 Hz.

2009/9/18 Kaan AKŞİT

> Dear Ivo,
>
> Thank you very much for the examples and the debugging. I had this
> problem with the tuning of Wn. My sample rate is pretty high,
> f_s = 5000000000.
> [...]
From kunguz at gmail.com  Fri Sep 18 14:17:21 2009
From: kunguz at gmail.com (Kaan AKŞİT)
Date: Fri, 18 Sep 2009 20:17:21 +0200
Subject: [SciPy-User] Low pass filter
In-Reply-To: <826c64da0909180733k462644c1p6b15fb4276679bc6@mail.gmail.com>
References: <8fa7fe790909170657r2c41d686m7bd6e0ba1c230c3e@mail.gmail.com>
	<826c64da0909170750x55ed9c5ax6ad155bf474c1d27@mail.gmail.com>
	<8fa7fe790909170817o525a5847l45a1af81f37faedc@mail.gmail.com>
	<826c64da0909170828w434c2325h3e2fe898eb662551@mail.gmail.com>
	<826c64da0909170833h79e1edfen1c2721ca13dd5974@mail.gmail.com>
	<8fa7fe790909180027qc68f82dr73bbd8876f07e3ce@mail.gmail.com>
	<826c64da0909180733k462644c1p6b15fb4276679bc6@mail.gmail.com>
Message-ID: <8fa7fe790909181117y62cd2871ndb9180b1d13affe5@mail.gmail.com>

Dear Ivo,

Thanks to your help, my problem is solved. I now use a lower sampling
rate, which allows a lower filter order, and I adjusted Wp accordingly.
Thank you very much for your replies.

Best regards,
Kaan
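P.S. For the archives, a quick way to sanity-check the design before
filtering (the rates here are placeholders, not my actual numbers):

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import butter, freqz

f_s = 1000.0                          # placeholder reduced sampling rate
b, a = butter(4, 5.0 / (f_s / 2.0))   # 5 Hz cutoff -> Wn = 0.01
w, h = freqz(b, a)                    # frequency response of the filter
plt.plot(w / np.pi * (f_s / 2.0), abs(h))   # response vs. frequency in Hz
plt.xlabel('Frequency [Hz]')
plt.ylabel('|H(f)|')
plt.show()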
From robert.kern at gmail.com  Fri Sep 18 14:47:56 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 18 Sep 2009 13:47:56 -0500
Subject: [SciPy-User] criteria's to get the exit mode '0
In-Reply-To: <698936.51411.qm@web8318.mail.in.yahoo.com>
References: <698936.51411.qm@web8318.mail.in.yahoo.com>
Message-ID: <3d375d730909181147jfb498c1ydd269177e74260e0@mail.gmail.com>

On Fri, Sep 18, 2009 at 08:40, jagan prabhu wrote:

> I am using the scipy optimization routines 'fmin_slsqp' and
> 'fmin_l_bfgs_b'. In both cases, exit mode '0' represents "optimization
> terminated successfully / convergence is achieved".
>
> What are the criteria to get exit mode '0'?
> [...]

It's slightly different for each routine, but basically, it stops when the
derivatives at the test point are close enough to zero and the derivatives
nearby show that you are at a minimum rather than a maximum or a saddle
point.

These are all local minimizers, meaning that they can get trapped in
so-called "local minima", where there are little "valleys" in the function
which are not the deepest. You want the deepest valley of them all, the
global minimum, but the fmin routines cannot guarantee that you will find
it. They basically require that you start with an initial guess that is
close enough to the global minimum that it manages to avoid all of the
local minima.
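You can see this with a toy double-well function (a sketch; both runs
below report successful convergence, but only one of them finds the global
minimum):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def f(x):
    # two valleys, near x = -1 and x = +1; the one near -1 is deeper
    return (x[0]**2 - 1.0)**2 + 0.2 * x[0]

x1, f1, d1 = fmin_l_bfgs_b(f, [ 0.5], approx_grad=True)
x2, f2, d2 = fmin_l_bfgs_b(f, [-0.5], approx_grad=True)
print x1, f1, d1['warnflag']   # ~ +0.99, f ~ +0.198, warnflag 0 (local minimum)
print x2, f2, d2['warnflag']   # ~ -1.01, f ~ -0.202, warnflag 0 (global minimum)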
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From msarahan at gmail.com  Fri Sep 18 16:54:42 2009
From: msarahan at gmail.com (Mike Sarahan)
Date: Fri, 18 Sep 2009 13:54:42 -0700
Subject: [SciPy-User] Working with AVI video files in python
In-Reply-To: <95da30590909180602o6c88f728k335d4253e7f7c787@mail.gmail.com>
References: <95da30590909180602o6c88f728k335d4253e7f7c787@mail.gmail.com>
Message-ID: <8275939c0909181354s39f2a652i507716ae0c1c5cd1@mail.gmail.com>

OpenCV might also be of interest: http://opencv.willowgarage.com/wiki/
FWIW, it's in the Ubuntu repositories.

If you decide to play with it, you might try the ctypes-opencv wrapper. I
prefer it over the SWIG-based wrapper that comes with OpenCV by default --
I find it a little less kludgy: http://code.google.com/p/ctypes-opencv/

-Mike
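P.S. Frame grabbing would look something like this (untested; the function
names below are from OpenCV's C API, which ctypes-opencv mirrors -- check
the wrapper's docs before trusting them, and 'movie.avi' is a placeholder
path):

from opencv import cvCaptureFromFile, cvQueryFrame

capture = cvCaptureFromFile('movie.avi')
while True:
    frame = cvQueryFrame(capture)   # next decoded frame, or None at end of file
    if frame is None:
        break
    # ... convert the frame to a numpy array and analyze it here ...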
From super.inframan at gmail.com  Fri Sep 18 18:22:03 2009
From: super.inframan at gmail.com (Gustaf Nilsson)
Date: Fri, 18 Sep 2009 23:22:03 +0100
Subject: [SciPy-User] Can I corner pin transform an image with scipy?
Message-ID:

Hi,

I'm working on a (Python-)scriptable image processing tool, and have found
good use of scipy already doing Gaussian blurs and convolves. One function
I hope to implement is a corner pin transform. Is this something I can use
scipy for? I've tried to google it but come up with nothing. Maybe you
scientific fellas have a different name for it? (I'm just a nerdy computer
graphics guy.)

Here's a picture showing what I'm trying to do:
http://help.adobe.com/en_US/AfterEffects/9.0/images/ae_01pp.png
As you can see, I want to be able to give each corner of the picture
arbitrary coordinates so that it is skewed like in the example.

Can I do this with scipy?

Cheers,
Gustaf

From sccolbert at gmail.com  Fri Sep 18 20:36:42 2009
From: sccolbert at gmail.com (Chris Colbert)
Date: Sat, 19 Sep 2009 02:36:42 +0200
Subject: [SciPy-User] Can I corner pin transform an image with scipy?
Message-ID: <7f014ea60909181736n6bb759e0j456c855dff75edd1@mail.gmail.com>

It seems to me that this could be accomplished with successive
interpolations.

On Sat, Sep 19, 2009 at 12:22 AM, Gustaf Nilsson wrote:
> Hi
> I'm working on a (Python-)scriptable image processing tool...
> [...]

From jkington at wisc.edu  Sat Sep 19 00:13:15 2009
From: jkington at wisc.edu (Joe Kington)
Date: Fri, 18 Sep 2009 23:13:15 -0500
Subject: [SciPy-User] Can I corner pin transform an image with scipy?
Message-ID:

What you're looking for is called a projective transformation, or a
perspective projection in other terms. It's essentially a more general
version of an affine transformation.

I believe PIL has a built-in function to do what you want. See the QUAD
option of Image.transform().

Hope that helps!
-Joe

On Fri, Sep 18, 2009 at 5:22 PM, Gustaf Nilsson wrote:
> [...]

From jkington at wisc.edu  Sat Sep 19 00:27:02 2009
From: jkington at wisc.edu (Joe Kington)
Date: Fri, 18 Sep 2009 23:27:02 -0500
Subject: [SciPy-User] Can I corner pin transform an image with scipy?
Message-ID:

Hmm... So maybe the QUAD option in Image.transform() isn't what you
want... After a bit of experimenting and googling, it seems to not
actually be a perspective transformation... This may not matter for what
you're doing, but if you want straight lines in your original image to be
straight lines in the final image, you'll need a strict projective
transformation.

PIL apparently has a somewhat undocumented option in im.transform that
will do what you want once you know the 8 coefficients (a,b,c,d,e,f,g,h)
that describe the transformation:

im2 = im.transform(im.size, Image.PERSPECTIVE, (a,b,c,d,e,f,g,h), Image.BILINEAR)

You can fairly easily use numpy/scipy to find the coefficients you'll
need. You just need to set up a "G" matrix using the x,y coordinates of
the corners of the original image and the new positions of the 4 corners,
and invert it for the coefficients.

For a perspective transformation, starting from the original four corners
(u0,v0) ... (u3,v3) and mapping these four points to their new positions
(x0,y0) ... (x3,y3), we have:

G = [[u0, v0, 1,  0,  0, 0, -u0*x0, -v0*x0],
     [u1, v1, 1,  0,  0, 0, -u1*x1, -v1*x1],
     [u2, v2, 1,  0,  0, 0, -u2*x2, -v2*x2],
     [u3, v3, 1,  0,  0, 0, -u3*x3, -v3*x3],
     [ 0,  0, 0, u0, v0, 1, -u0*y0, -v0*y0],
     [ 0,  0, 0, u1, v1, 1, -u1*y1, -v1*y1],
     [ 0,  0, 0, u2, v2, 1, -u2*y2, -v2*y2],
     [ 0,  0, 0, u3, v3, 1, -u3*y3, -v3*y3]]

and

d = [x0, x1, x2, x3, y0, y1, y2, y3].transpose()

G and d are both composed of things we know: the starting corners and the
final corners.
We want to solve G*m = d, where

m = [a, b, c, d, e, f, g, h].transpose()

(these are the coefficients we want to solve for), so:

m = numpy.linalg.solve(G, d)

Hope that's somewhat clear, anyway (and hopefully free of typos). The QUAD
option will basically do what you want, but if you really need straight
lines to stay straight, you may need to use this method... I'll admit I'm
a bit confused here... I may be pointing you down a needlessly convoluted
route.

Hope it helps, anyway.
-Joe

On Fri, Sep 18, 2009 at 11:13 PM, Joe Kington wrote:
> What you're looking for is called a projective transformation...
> [...]
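P.S. Put together as a function, the above would look roughly like this (a
sketch following the recipe above, not tested against PIL -- depending on
PIL's convention for PERSPECTIVE you may need to swap orig_pts and
new_pts):

import numpy as np

def find_coeffs(orig_pts, new_pts):
    """orig_pts, new_pts: lists of four (x, y) corner tuples."""
    G = []
    for (u, v), (x, y) in zip(orig_pts, new_pts):
        G.append([u, v, 1, 0, 0, 0, -u * x, -v * x])
    for (u, v), (x, y) in zip(orig_pts, new_pts):
        G.append([0, 0, 0, u, v, 1, -u * y, -v * y])
    d = np.array([x for x, y in new_pts] + [y for x, y in new_pts], dtype=float)
    m = np.linalg.solve(np.array(G, dtype=float), d)
    return tuple(m)   # (a, b, c, d, e, f, g, h) for Image.PERSPECTIVE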
For example, it installs these linked against my libs:

> /Users/jkyle/Projects/macports/mports/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/cblas.so:
>     /Users/jkyle/Projects/macports/mports/lib/liblapack.dylib (compatibility version 0.0.0, current version 0.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/libptf77blas.dylib (compatibility version 0.0.0, current version 0.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/libptcblas.dylib (compatibility version 0.0.0, current version 0.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/libatlas.dylib (compatibility version 0.0.0, current version 0.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/gcc43/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/gcc43/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
>     /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 124.1.1)
>
> /Users/jkyle/Projects/macports/mports/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/fblas.so:
>     /Users/jkyle/Projects/macports/mports/lib/liblapack.dylib (compatibility version 0.0.0, current version 0.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/libptf77blas.dylib (compatibility version 0.0.0, current version 0.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/libptcblas.dylib (compatibility version 0.0.0, current version 0.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/libatlas.dylib (compatibility version 0.0.0, current version 0.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/gcc43/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/gcc43/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
>     /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 124.1.1)

And it installs these linked against the system libs:

> /Users/jkyle/Projects/macports/mports/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/lib/blas/cblas.so:
>     /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
>     /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 124.1.1)
>
> /Users/jkyle/Projects/macports/mports/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/lib/blas/fblas.so:
>     /Users/jkyle/Projects/macports/mports/lib/gcc43/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
>     /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
>     /Users/jkyle/Projects/macports/mports/lib/gcc43/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
>     /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 124.1.1)

Lastly, I'm noticing that scipy is building with mixed compilers. I've set
both CC and CXX in my build environment to the 4.3 compilers. However, in
the build logs I see that scipy is using gcc-4.2 for some things.
For example:

> creating build/temp.macosx-10.6-i386-2.6/scipy/cluster
> creating build/temp.macosx-10.6-i386-2.6/scipy/cluster/src
> compile options: '-I/Users/jkyle/Projects/macports/mports/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include -I/Users/jkyle/Projects/macports/mports/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include -I/Users/jkyle/Projects/macports/mports/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c'
> gcc-mp-4.3: scipy/cluster/src/vq_module.c
> gcc-mp-4.3: scipy/cluster/src/vq.c
> /Users/jkyle/Projects/macports/mports/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include/numpy/__multiarray_api.h:969: warning: '_import_array' defined but not used
> /usr/bin/gcc-4.2 -L/Users/jkyle/Projects/macports/mports/lib -bundle -undefined dynamic_lookup build/temp.macosx-10.6-i386-2.6/scipy/cluster/src/vq_module.o build/temp.macosx-10.6-i386-2.6/scipy/cluster/src/vq.o -Lbuild/temp.macosx-10.6-i386-2.6 -o build/lib.macosx-10.6-i386-2.6/scipy/cluster/_vq.so
> building 'scipy.cluster._hierarchy_wrap' extension
> compiling C sources
> C compiler: /Users/jkyle/Projects/macports/mports/bin/gcc-mp-4.3 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes

Here, the configured C compiler is gcc-mp-4.3, but the link step is
actually run with /usr/bin/gcc-4.2.

Any tips are very welcome.

Cheers,
-james

From zachary.pincus at yale.edu  Sat Sep 19 17:15:43 2009
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Sat, 19 Sep 2009 17:15:43 -0400
Subject: [SciPy-User] Can I corner pin transform an image with scipy?
In-Reply-To:
References:
Message-ID: <61B2E436-F893-4A49-ACE4-55A48ACB8404@yale.edu>

Hi Gustaf,

Look at the routines in scipy.ndimage (this is the top-level namespace
containing everything), or in scipy.ndimage.interpolation (which contains
just the interpolation routines).

Specifically, there are affine transforms (which won't necessarily be
useful here, unless you can do it with something like two successive skew
operations, as Chris thought might be possible), as well as two more
general tools: geometric_transform, which takes an image and a callable
that, given a point in the output image, calculates the corresponding point
in the input image; and (essentially equivalent, but faster and simpler
once you grok the inputs) map_coordinates, which takes an input image and
an array that provides the mapping from output to input coordinates.

So, if you can calculate the reverse transform you need (like with the math
Joe sent over), you can use ndimage to do the mapping. (I use
map_coordinates all the time for nonlinear "warping" transformations too --
it's very general and useful.)

Zach

On Sep 18, 2009, at 6:22 PM, Gustaf Nilsson wrote:

> Hi
> I'm working on a (Python-)scriptable image processing tool, and have
> found good use of scipy already doing gaussian blurs and convolves.
> One function I hope to implement is a corner-pin transform. Is this
> something I can use scipy for? I've tried to google it but come up
> with nothing. Maybe you scientific fellas have a different name for
> it? (I'm just a nerdy computer graphics guy)
>
> Here's a picture showing what I'm trying to do:
> http://help.adobe.com/en_US/AfterEffects/9.0/images/ae_01pp.png
> As you can see, I want to be able to give each corner of the picture
> arbitrary coordinates so that it is skewed like in the example.
>
> Can I do this with scipy?
>
> Cheers
> Gustaf

From tpk at kraussfamily.org  Sat Sep 19 23:04:18 2009
From: tpk at kraussfamily.org (Tom K.)
Date: Sat, 19 Sep 2009 20:04:18 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] 2d convolution with 'full' in one dimension and 'valid' in another
In-Reply-To:
References:
Message-ID: <25527399.post@talk.nabble.com>

Paul-405 wrote:
>
> I have an n x m matrix A and an n x k matrix B, where k >> m. I would
> like to compute the 'full' convolution of A and B along the 2nd dimension
> and get a result that is n x (m + k - 1).
>
> If I select mode='full' in scipy.signal.convolve, I get a result that is
> 'full' in both dimensions, i.e. (n + n - 1) x (m + k - 1). Currently, I
> do this:
>
> C = np.array([sp.signal.correlate(A[i], B[i], mode='full') for i in range(n)])
>
> I do this a lot with fixed A and varying B. I was wondering if there is a
> faster way. Perhaps I should be using fft's instead.
>

Paul,

What you describe does not sound to me like a 'valid' convolution in the
other dimension - it sounds like no convolution at all. You want a 1D
convolution along only one dimension of 2 ND arrays, not an ND convolution
of 2 ND arrays, right?

One thing that may be faster is pre-allocating the output array and
assigning into it in a loop:

def convolve1(x, y):
    n1 = x.shape[-1]
    n2 = y.shape[-1]
    z = np.empty((x.shape[0], n1+n2-1), dtype=x.dtype)
    for i in xrange(x.shape[0]):
        z[i] = np.convolve(x[i], y[i])
    return z

This was about 2.5x faster than the one-liner that you provided, for a
simple example (n=1000, m=100, k=1000).

Anyone else have thoughts about fast ways to do 1D convolution along only
one dimension of 2 ND arrays?

--
View this message in context: http://www.nabble.com/2d-convolution-with-%27full%27-in-one-dimension-and-%27valid%27-in-another-tp25500595p25527399.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From josef.pktd at gmail.com  Sun Sep 20 07:50:39 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 20 Sep 2009 07:50:39 -0400
Subject: [SciPy-User] scipy.stats.models.mixed examples ?
Message-ID: <1cd32cbb0909200450y6d2a940fha8e9f21c38f343bd@mail.gmail.com>

Has anyone used scipy.stats.models.mixed, and does anyone have usage
examples for it? An archive search came up empty.

I wanted to see if it can be updated for integration back into statsmodels
from the sandbox. But I have a hard time seeing how it works (especially
because of the internal use of formula). There is only a brief (incomplete)
example in the file and not a single test for it. The results look
reasonable and the implementation seems to follow the reference pretty
closely. During GSoC, there wasn't enough time to clean it up and verify
it.

class Mixed(object):

    """
    Model for
    EM implementation of (repeated measures)
    mixed effects model.

    \'Maximum Likelihood Computations with Repeated Measures:
    Application of the EM Algorithm\'

    Nan Laird; Nicholas Lange; Daniel Stram

    Journal of the American Statistical Association,
    Vol. 82, No. 397. (Mar., 1987), pp. 97-105.
    """

Skimming through the article, it could be useful for some applications,
although for panel data, as far as I have seen, it wouldn't allow for time
random effects, since no correlation between units is allowed.
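(For orientation: the model in that reference is y_i = X_i*beta + Z_i*b_i
+ e_i, with the random effects b_i drawn independently for each unit i --
which is exactly why correlation across units, such as a shared time
effect, can't be expressed. A minimal sketch simulating data with that
structure; all names, sizes, and scales here are made up:)

import numpy as np

rng = np.random.RandomState(0)
n_units, n_obs = 50, 10                 # units and observations per unit (made up)
beta = np.array([1.0, -2.0])            # fixed effects

ys = []
for i in range(n_units):
    X_i = np.column_stack([np.ones(n_obs), rng.randn(n_obs)])  # fixed-effects design
    Z_i = np.ones((n_obs, 1))           # random-intercept design
    b_i = 0.5 * rng.randn(1)            # unit effect, independent across units
    e_i = 0.1 * rng.randn(n_obs)        # within-unit noise
    ys.append(np.dot(X_i, beta) + np.dot(Z_i, b_i) + e_i)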
Thanks,

Josef

From jsseabold at gmail.com  Sun Sep 20 10:34:33 2009
From: jsseabold at gmail.com (Skipper Seabold)
Date: Sun, 20 Sep 2009 10:34:33 -0400
Subject: [SciPy-User] scipy.stats.models.mixed examples ?
In-Reply-To: <1cd32cbb0909200450y6d2a940fha8e9f21c38f343bd@mail.gmail.com>
References: <1cd32cbb0909200450y6d2a940fha8e9f21c38f343bd@mail.gmail.com>
Message-ID:

On Sun, Sep 20, 2009 at 7:50 AM, wrote:
> Has anyone used scipy.stats.models.mixed, and does anyone have usage
> examples for it? An archive search came up empty.
> [...]

Not what you asked for, but here are a few references that I collected; I
found the R material the most helpful in crafting examples to get started:

http://stat.ethz.ch/R-manual/R-patched/library/nlme/html/lme.html
http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf

I have found Kuhnert and Venables a good hands-on intro for a number of
models as well: http://cran.r-project.org/other-docs.html

Skipper

From xavier.gnata at gmail.com  Sun Sep 20 12:27:15 2009
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Sun, 20 Sep 2009 18:27:15 +0200
Subject: [SciPy-User] Bresenham algorithm?
In-Reply-To: <698936.51411.qm@web8318.mail.in.yahoo.com>
References: <698936.51411.qm@web8318.mail.in.yahoo.com>
Message-ID: <4AB657E3.8080109@gmail.com>

Hi,

I would like to sum the pixels of a large 2D array along line segments.
As long as the segments are horizontal or vertical, it is easy :)

What's a line segment? ;) Well... the definition is based on the Bresenham
algorithm. Does scipy provide the Bresenham algorithm? For sure it is
simple to implement this algo in Python, but it is slow :(

Xavier

From zachary.pincus at yale.edu  Sun Sep 20 13:47:06 2009
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Sun, 20 Sep 2009 13:47:06 -0400
Subject: [SciPy-User] Bresenham algorithm?
In-Reply-To: <4AB657E3.8080109@gmail.com>
References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com>
Message-ID: <1C4A2479-D17D-429C-A402-7BCD00D5820D@yale.edu>

> Does scipy provide the Bresenham algorithm ?

Not that I know of... but would it be acceptable to interpolate values?
Then you could just calculate floating-point x,y points evenly spaced along
the line segment, and use scipy.ndimage.map_coordinates to sample the image
(with spline interpolation of any desired order) at those points.

In some cases, this is probably an acceptable solution... in others, not
so much.
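For instance, a minimal sketch of that approach (the image contents and
endpoints here are made up):

import numpy as np
import scipy.ndimage as ndimage

image = np.random.rand(100, 100)            # made-up image
(r0, c0), (r1, c1) = (10, 5), (80, 90)      # made-up endpoints
n = int(np.hypot(r1 - r0, c1 - c0)) + 1     # roughly one sample per pixel
rows = np.linspace(r0, r1, n)
cols = np.linspace(c0, c1, n)
# order=1 gives bilinear interpolation at the floating-point sample points
samples = ndimage.map_coordinates(image, np.vstack([rows, cols]), order=1)
line_sum = samples.sum()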
Zach

From xavier.gnata at gmail.com  Sun Sep 20 14:38:27 2009
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Sun, 20 Sep 2009 20:38:27 +0200
Subject: [SciPy-User] Bresenham algorithm?
In-Reply-To: <1C4A2479-D17D-429C-A402-7BCD00D5820D@yale.edu>
References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com> <1C4A2479-D17D-429C-A402-7BCD00D5820D@yale.edu>
Message-ID: <4AB676A3.5020904@gmail.com>

Zachary Pincus wrote:
>> Does scipy provide the Bresenham algorithm ?
>
> Not that I know of... but would it be acceptable to interpolate
> values? Then you could just calculate floating-point x,y points evenly
> spaced along the line segment, and use scipy.ndimage.map_coordinates
> to sample the image (with spline interpolation of any desired order)
> at those points.
>
> In some cases, this is probably an acceptable solution... in others,
> not so much.
>
> Zach

Oh yes it is. I just wanted to know whether this trivial algorithm
(floats->int or Bresenham) has been implemented in scipy. Whatever is fast
is fine in my use case ;)

Xavier

From gruben at bigpond.net.au  Sun Sep 20 19:10:31 2009
From: gruben at bigpond.net.au (Gary Ruben)
Date: Mon, 21 Sep 2009 09:10:31 +1000
Subject: [SciPy-User] Bresenham algorithm?
In-Reply-To: <4AB657E3.8080109@gmail.com>
References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com>
Message-ID: <4AB6B667.1050608@bigpond.net.au>

I'm not sure I understand exactly what you mean, but perhaps the
scipy.ndimage module can help. If I understand you, you can label the
connected regions using

v, nv = sn.label(image, np.ones((3,3), 'i'))

then count the pixels in each connected region with something like

lv = dict(zip(range(1,nv+1), [np.where(v==i,1,0).sum()
                              for i in range(1,nv+1)]))

Hope this helps,
Gary

Xavier Gnata wrote:
> Hi,
>
> I would like to sum the pixels of a large 2D array along line segments.
> As long as the segments are horizontal or vertical, it is easy :)
>
> What's a line segment? ;) Well... the definition is based on the
> Bresenham algorithm.
> Does scipy provide the Bresenham algorithm ?
> For sure it is simple to implement this algo in python but it is slow :(
>
> Xavier

From robert.kern at gmail.com  Sun Sep 20 20:31:16 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 20 Sep 2009 19:31:16 -0500
Subject: [SciPy-User] Bresenham algorithm?
In-Reply-To: <4AB6B667.1050608@bigpond.net.au>
References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com> <4AB6B667.1050608@bigpond.net.au>
Message-ID: <3d375d730909201731u9f1a3bdx552caad880320706@mail.gmail.com>

On Sun, Sep 20, 2009 at 18:10, Gary Ruben wrote:
> I'm not sure I understand exactly what you mean, but perhaps the
> scipy.ndimage module can help. If I understand you, you can label the
> connected regions using
>
> v, nv = sn.label(image, np.ones((3,3), 'i'))
>
> then count the pixels in each connected region with something like
>
> lv = dict(zip(range(1,nv+1), [np.where(v==i,1,0).sum()
>                               for i in range(1,nv+1)]))

He doesn't have the connected region. He has two endpoints and needs to
find the pixels intersected by the line segment between the two endpoints.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From dwf at cs.toronto.edu  Sun Sep 20 21:30:27 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sun, 20 Sep 2009 21:30:27 -0400
Subject: [SciPy-User] Bresenham algorithm?
In-Reply-To: <4AB657E3.8080109@gmail.com>
References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com>
Message-ID: <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu>

On 20-Sep-09, at 12:27 PM, Xavier Gnata wrote:

> Does scipy provide the Bresenham algorithm ?
> For sure it is simple to implement this algo in python but it is slow :(

It doesn't, that I know of, but it would be easy enough to use Cython to
speed it up. This is precisely the sort of thing Cython is good for:
iterative algorithms that cannot easily be vectorized. It'll be
particularly fast since Bresenham's algorithm is all in integer arithmetic
(note that in the pure Python version you're still dealing with Python
ints).

Here's an implementation I found here:
http://mail.python.org/pipermail/python-list/1999-July/007163.html

def bresenham(x,y,x2,y2):
    """Bresenham line algorithm"""
    steep = 0
    coords = []
    dx = abs(x2 - x)
    if (x2 - x) > 0: sx = 1
    else: sx = -1
    dy = abs(y2 - y)
    if (y2 - y) > 0: sy = 1
    else: sy = -1
    if dy > dx:
        steep = 1
        x,y = y,x
        dx,dy = dy,dx
        sx,sy = sy,sx
    d = (2 * dy) - dx
    for i in range(0,dx):
        if steep: coords.append((y,x))
        else: coords.append((x,y))
        while d >= 0:
            y = y + sy
            d = d - (2 * dx)
        x = x + sx
        d = d + (2 * dy)
    return coords  # added by me

Now here's the version I've marked up with Cython:

cdef extern from "math.h":
    int abs(int i)

def bresenham(int x, int y, int x2, int y2):
    cdef int steep = 0
    cdef int dx = abs(x2 - x)
    cdef int dy = abs(y2 - y)
    cdef int sx, sy, d, i
    coords = []
    if (x2 - x) > 0: sx = 1
    else: sx = -1
    if (y2 - y) > 0: sy = 1
    else: sy = -1
    if dy > dx:
        steep = 1
        x,y = y,x
        dx,dy = dy,dx
        sx,sy = sy,sx
    d = (2 * dy) - dx
    for i in range(0,dx):
        if steep:
            coords.append((y,x))
        else:
            coords.append((x,y))
        while d >= 0:
            y = y + sy
            d = d - (2 * dx)
        x = x + sx
        d = d + (2 * dy)
    return coords

And here is the speed comparison:

In [1]: from bresenham import bresenham as bresenham_py

In [2]: from bresenham_cython import bresenham as bresenham_cy

In [3]: a = bresenham_py(0, 0, 12900, 10500)

In [4]: b = bresenham_cy(0, 0, 12900, 10500)

In [5]: a == b  # Check that they produce the same results
Out[5]: True

In [6]: timeit bresenham_py(0, 0, 12900, 10500)
100 loops, best of 3: 12.6 ms per loop

In [7]: timeit bresenham_cy(0, 0, 12900, 10500)  # python was already pretty fast
100 loops, best of 3: 2.27 ms per loop

So, with that minimal effort and the compilation step, we've already got a
5x speedup. Note that my Cythonized version still uses a Python list. It
could easily be modified so that you could pass in your array and do the
summing inside this function, bypassing the need for any Python API calls
at all and gaining further speedups.

If I comment out the lines involving 'coords' (the list/tuple
manipulation):

In [7]: timeit bresenham_cy(0, 0, 12900, 10050)
10000 loops, best of 3: 28.7 us per loop

which is almost an additional hundred-fold speedup.

For information on how to pass ndarray arguments into a Cython function,
see:

    http://wiki.cython.org/tutorials/numpy

David

From seb.haase at gmail.com  Mon Sep 21 02:42:44 2009
From: seb.haase at gmail.com (Sebastian Haase)
Date: Sun, 20 Sep 2009 22:42:44 -0800
Subject: [SciPy-User] Bresenham algorithm?
In-Reply-To: <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu>
References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com> <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu>
Message-ID:

On Sun, Sep 20, 2009 at 5:30 PM, David Warde-Farley wrote:
> On 20-Sep-09, at 12:27 PM, Xavier Gnata wrote:
>
>> Does scipy provide the Bresenham algorithm ?
>> For sure it is simple to implement this algo in python but it is slow :(
>
> It doesn't, that I know of, but it would be easy enough to use Cython
> to speed it up. This is precisely the sort of thing Cython is good for:
> iterative algorithms that cannot easily be vectorized.
> [... full implementations and timings snipped ...]
>
> If I comment out the lines involving 'coords' (the list/tuple
> manipulation):
>
> In [7]: timeit bresenham_cy(0, 0, 12900, 10050)
> 10000 loops, best of 3: 28.7 us per loop
>
> which is almost an additional hundred-fold speedup.
>
> For information on how to pass ndarray arguments into a Cython
> function, see:
>
>     http://wiki.cython.org/tutorials/numpy
>
> David

If you comment out the list/tuples handling (assuming you want to plug in
numpy arrays here), you have to know the number of points up front, to be
able to pre-allocate the array correctly. Is this easy? (Rounding errors?)

Cheers,
- Sebastian Haase

From dwf at cs.toronto.edu  Mon Sep 21 03:23:33 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 21 Sep 2009 03:23:33 -0400
Subject: [SciPy-User] Bresenham algorithm?
In-Reply-To:
References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com> <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu>
Message-ID:

On 21-Sep-09, at 2:42 AM, Sebastian Haase wrote:

> If you comment out the list/tuples handling (assuming you want to plug
> in numpy arrays here), you have to know the number of points up front,
> to be able to pre-allocate the array correctly.
> Is this easy? (Rounding errors?)

Yes; notice the loop termination condition: you'd simply need to allocate
two arrays, each 'dx' long. But in his case, he'd be better off taking the
image array as an argument and accessing each pixel as he goes, in order
to sum them. Assuming the image is reasonably scaled and he uses an
appropriate-precision accumulator, that seems like the best way to
approach the problem, rather than allocating space for indices only to use
them once (a more general-purpose implementation, of course, would have to
accumulate/return indices).

David

From Emmanuel.Lambert at intec.ugent.be  Mon Sep 21 10:46:34 2009
From: Emmanuel.Lambert at intec.ugent.be (Emmanuel Lambert)
Date: Mon, 21 Sep 2009 16:46:34 +0200
Subject: [SciPy-User] scipy-weave unit tests fail on KeyError
Message-ID: <1253544394.3851.62.camel@emmanuel-ubuntu>

Hi,

I compiled SciPy and NumPy on a machine with Scientific Linux. We detected
a problem with Weave, and after investigation it turns out that some of
the unit tests delivered with scipy.weave fail as well!

Below is a list of tests that fail in, for example, the test_c_spec file.
They all raise a KeyError. This is with SciPy 0.7.1 on Python 2.6. I also
downloaded the latest Weave code again from the SVN repository, but the
problem is not resolved.

Any idea on how to tackle this problem? There are no posts that help me
any further. I don't have this problem with the standard scipy package
that is available for Ubuntu 9.04 (apparently the weave version number is
the same).

It looks like the compilation works fine; see the sample stdout below as
well. What could cause this?

Thanks for any help.
Emmanuel

******************* SAMPLE OF STDOUT ******************

-------------------- >> begin captured stdout << ---------------------
running build_ext
running build_src
building extension "sc_d133102ab45193e072f8dbb5a1f6848513" sources
customize UnixCCompiler
customize UnixCCompiler using build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
building 'sc_d133102ab45193e072f8dbb5a1f6848513' extension
compiling C++ sources
C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -fPIC

compile options: '-I/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave -I/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/scxx -I/user/home/gent/vsc401/vsc40157/numpy-runtime/numpy/core/include -I/apps/gent/gengar/harpertown/software/Python/2.6.2-gimkl-0.5.0/include/python2.6 -c'
g++: /user/home/gent/vsc401/vsc40157/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f6848513.cpp
g++ -pthread -shared /tmp/vsc40157/python26_intermediate/compiler_c1b5f1b73f1ce7d0c836cdad4c7c5ded/user/home/gent/vsc401/vsc40157/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f6848513.o /tmp/vsc40157/python26_intermediate/compiler_c1b5f1b73f1ce7d0c836cdad4c7c5ded/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/scxx/weave_imp.o -o /user/home/gent/vsc401/vsc40157/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f6848513.so
running scons
--------------------- >> end captured stdout << ----------------------

********************** TESTS THAT FAIL ***********************

-bash-3.2$ python ./test_c_spec.py
E..........EE.................EEEE......E..........EE.................EEEE..............

======================================================================
ERROR: test_call_function (test_c_spec.CallableConverter)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/tests/test_c_spec.py", line 296, in test_call_function
    compiler=self.compiler,force=1)
  File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/inline_tools.py", line 301, in inline
    function_catalog.add_function(code,func,module_dir)
  File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/catalog.py", line 648, in add_function
    self.cache[code] = self.get_functions(code)
  File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/catalog.py", line 615, in get_functions
    function_list = self.get_cataloged_functions(code)
  File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/catalog.py", line 529, in get_cataloged_functions
    if cat is not None and code in cat:
  File "/apps/gent/gengar/harpertown/software/Python/2.6.2-gimkl-0.5.0/lib/python2.6/shelve.py", line 110, in __contains__
    return key in self.dict
  File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/io/dumbdbm_patched.py", line 73, in __getitem__
    pos, siz = self._index[key]     # may raise KeyError
KeyError: 0

The other thirteen errors end in the identical KeyError: 0 traceback
(through inline_tools.py, catalog.py, shelve.py, and
scipy/io/dumbdbm_patched.py line 73); they are raised from:

ERROR: test_file_to_py (test_c_spec.FileConverter)
ERROR: test_py_to_file (test_c_spec.FileConverter)
ERROR: test_convert_to_dict (test_c_spec.SequenceConverter)
ERROR: test_convert_to_list (test_c_spec.SequenceConverter)
ERROR: test_convert_to_string (test_c_spec.SequenceConverter)
ERROR: test_convert_to_tuple (test_c_spec.SequenceConverter)
ERROR: test_call_function (test_c_spec.TestCallableConverterUnix)
ERROR: test_file_to_py (test_c_spec.TestFileConverterUnix)
ERROR: test_py_to_file (test_c_spec.TestFileConverterUnix)
ERROR: test_convert_to_dict (test_c_spec.TestSequenceConverterUnix)
ERROR: test_convert_to_list (test_c_spec.TestSequenceConverterUnix)
ERROR: test_convert_to_string (test_c_spec.TestSequenceConverterUnix)
ERROR: test_convert_to_tuple (test_c_spec.TestSequenceConverterUnix)

----------------------------------------------------------------------
Ran 88 tests in 32.581s

FAILED (errors=14)

From gokhansever at gmail.com  Mon Sep 21 13:45:44 2009
From: gokhansever at gmail.com (Gökhan Sever)
Date: Mon, 21 Sep 2009 12:45:44 -0500
Subject: [SciPy-User] Simple pattern recognition
In-Reply-To: <49d6b3500909161722r6f74cce6j515b756c2b0b78c5@mail.gmail.com>
References: <49d6b3500909161722r6f74cce6j515b756c2b0b78c5@mail.gmail.com>
Message-ID: <49d6b3500909211045g2913d62ey539171b0668ae7c3@mail.gmail.com>

I asked this question at
http://stackoverflow.com/questions/1449139/simple-object-recognition
and got lots of nice feedback, and finally I have managed to implement
what I wanted.
What I was looking for is named "connected component labelling or
analysis" for my "connected component extraction".

I have put the code (lab2.py) and the image (particles.png) under:
http://code.google.com/p/ccnworks/source/browse/#svn/trunk/AtSc450/labs

What do you think of improving that code and adding it into scipy's
ndimage library (like a connected_components())?

Comments and suggestions are welcome :)

On Wed, Sep 16, 2009 at 7:22 PM, Gustaf... On Wed, Sep 16, 2009 at 7:22 PM, Gökhan Sever wrote:

> Hello all,
>
> I want to be able to count predefined simple rectangle shapes on an
> image, as shown in this one:
> http://img7.imageshack.us/img7/2327/particles.png
>
> In my case this means counting all the blue pixels (they are ice/snow
> flake shadows in reality) in any one of the columns.
>
> What is the way to automate this task, and which library or technique
> should I study to tackle it?
>
> Thanks.
>
> --
> Gökhan

--
Gökhan

From aleck at marlboro.edu  Mon Sep 21 13:57:12 2009
From: aleck at marlboro.edu (Alec Koumjian)
Date: Mon, 21 Sep 2009 13:57:12 -0400
Subject: [SciPy-User] Use in GPL project
Message-ID: <61a4c0ba0909211057v2fb90704t8857c467be9f14d2@mail.gmail.com>

I'm a tad confused about licensing. If I use NumPy or SciPy modules in
another project, can I release that project as GPLv3? Is it possible for
the scipy modules to keep their BSD license and only release my own code
as GPL?

From zachary.pincus at yale.edu  Mon Sep 21 13:57:53 2009
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 21 Sep 2009 13:57:53 -0400
Subject: [SciPy-User] Simple pattern recognition
In-Reply-To: <49d6b3500909211045g2913d62ey539171b0668ae7c3@mail.gmail.com>
References: <49d6b3500909161722r6f74cce6j515b756c2b0b78c5@mail.gmail.com> <49d6b3500909211045g2913d62ey539171b0668ae7c3@mail.gmail.com>
Message-ID:

I believe that pretty generic connected-component finding is already
available with scipy.ndimage.label, as David suggested at the beginning of
the thread...

This function takes a binary array (e.g. zeros where the background is,
non-zero where the foreground is) and outputs an array where each
connected component of non-background pixels has a unique non-zero "label"
value.

ndimage.find_objects will then give slices (e.g. bounding boxes) for each
labeled object (or a subset of them, as specified). There are also a ton
of statistics you can calculate based on the labeled objects -- look at
the entire ndimage.measurements namespace.

Zach

On Sep 21, 2009, at 1:45 PM, Gökhan Sever wrote:

> I asked this question at
> http://stackoverflow.com/questions/1449139/simple-object-recognition
> and got lots of nice feedback, and finally I have managed to implement
> what I wanted.
>
> What I was looking for is named "connected component labelling or
> analysis" for my "connected component extraction".
>
> I have put the code (lab2.py) and the image (particles.png) under:
> http://code.google.com/p/ccnworks/source/browse/#svn/trunk/AtSc450/labs
>
> What do you think of improving that code and adding it into scipy's
> ndimage library (like a connected_components())?
> Comments and suggestions are welcome :)
>
> On Wed, Sep 16, 2009 at 7:22 PM, Gökhan Sever wrote:
> [...]

From gokhansever at gmail.com  Mon Sep 21 14:04:26 2009
From: gokhansever at gmail.com (Gökhan Sever)
Date: Mon, 21 Sep 2009 13:04:26 -0500
Subject: [SciPy-User] Simple pattern recognition
In-Reply-To:
References: <49d6b3500909161722r6f74cce6j515b756c2b0b78c5@mail.gmail.com>
Message-ID: <49d6b3500909211104m2ad0646fo6ca8a2d74735e9bc@mail.gmail.com>

ndimage.label works differently from what I have done here.

Later, using find_objects, you can get slices on a row or column basis,
but it isn't possible to construct a dynamic structure to find objects
that span both axes.

Could you look at the stackoverflow article once again and comment back?

Thanks.

On Mon, Sep 21, 2009 at 12:57 PM, Zachary Pincus wrote:

> I believe that pretty generic connected-component finding is already
> available with scipy.ndimage.label, as David suggested at the beginning
> of the thread...
>
> This function takes a binary array (e.g. zeros where the background is,
> non-zero where the foreground is) and outputs an array where each
> connected component of non-background pixels has a unique non-zero
> "label" value.
>
> ndimage.find_objects will then give slices (e.g. bounding boxes) for
> each labeled object (or a subset of them, as specified). There are also
> a ton of statistics you can calculate based on the labeled objects --
> look at the entire ndimage.measurements namespace.
>
> Zach
>
> On Sep 21, 2009, at 1:45 PM, Gökhan Sever wrote:
> [...]
--
Gökhan

From robert.kern at gmail.com  Mon Sep 21 14:19:51 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 Sep 2009 13:19:51 -0500
Subject: [SciPy-User] Use in GPL project
In-Reply-To: <61a4c0ba0909211057v2fb90704t8857c467be9f14d2@mail.gmail.com>
References: <61a4c0ba0909211057v2fb90704t8857c467be9f14d2@mail.gmail.com>
Message-ID: <3d375d730909211119pddb1a01ld7b6ec8ec2b66dcc@mail.gmail.com>

On Mon, Sep 21, 2009 at 12:57, Alec Koumjian wrote:
> I'm a tad confused about licensing. If I use NumPy or SciPy modules in
> another project, can I release that project as GPLv3? Is it possible for
> the scipy modules to keep their BSD license and only release my own code
> as GPL?

Yes, on both counts. The BSD license is GPL-compatible.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From dwf at cs.toronto.edu  Mon Sep 21 14:36:05 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 21 Sep 2009 14:36:05 -0400
Subject: [SciPy-User] Simple pattern recognition
In-Reply-To: <49d6b3500909211104m2ad0646fo6ca8a2d74735e9bc@mail.gmail.com>
References: <49d6b3500909161722r6f74cce6j515b756c2b0b78c5@mail.gmail.com> <49d6b3500909211045g2913d62ey539171b0668ae7c3@mail.gmail.com> <49d6b3500909211104m2ad0646fo6ca8a2d74735e9bc@mail.gmail.com>
Message-ID: <3F97EAA3-72C2-43D2-A060-580B8127191C@cs.toronto.edu>

I think Zachary is right, ndimage does what you want:

In [48]: image = array(
[[0,0,0,1,1,0,0],
 [0,0,0,1,1,1,0],
 [0,0,0,1,0,0,0],
 [0,0,0,0,0,0,0],
 [0,1,0,0,0,0,0],
 [0,1,1,0,0,0,0],
 [0,0,0,0,1,1,0],
 [0,0,0,0,1,1,1]])

In [57]: import scipy.ndimage as ndimage

In [58]: labels, num_found = ndimage.label(image)

In [59]: object_slices = ndimage.find_objects(labels)

In [60]: image[object_slices[0]]
Out[60]:
array([[1, 1, 0],
       [1, 1, 1],
       [1, 0, 0]])

In [61]: image[object_slices[1]]
Out[61]:
array([[1, 0],
       [1, 1]])

In [62]: image[object_slices[2]]
Out[62]:
array([[1, 1, 0],
       [1, 1, 1]])

David

On 21-Sep-09, at 2:04 PM, Gökhan Sever wrote:

> ndimage.label works differently from what I have done here.
>
> Later, using find_objects, you can get slices on a row or column basis,
> but it isn't possible to construct a dynamic structure to find objects
> that span both axes.
>
> Could you look at the stackoverflow article once again and comment back?
> [...]
There are >> also >> a ton of statistics you can calculate based on the labeled objects -- >> look at the entire ndimage.measurements namespace. >> >> Zach >> >> On Sep 21, 2009, at 1:45 PM, G?khan Sever wrote: >> >>> I asked this question at >> http://stackoverflow.com/questions/1449139/simple-object-recognition >>> and get lots of nice feedback, and finally I have managed to >>> implement what I wanted. >>> >>> What I was looking for is named "connected component labelling or >>> analysis" for my "connected component extraction" >>> >>> I have put the code (lab2.py) and the image (particles.png) under: >>> http://code.google.com/p/ccnworks/source/browse/#svn/trunk/AtSc450/ >>> labs >>> >>> What do you think of improving that code and adding into scipy's >>> ndimage library (like connected_components()) ? >>> >>> Comments and suggestions are welcome :) >>> >>> >>> On Wed, Sep 16, 2009 at 7:22 PM, G?khan Sever >>> wrote: >>> Hello all, >>> >>> I want to be able to count predefined simple rectangle shapes on an >>> image as shown like in this one: >> http://img7.imageshack.us/img7/2327/particles.png >>> >>> Which is in my case to count all the blue pixels (they are ice-snow >>> flake shadows in reality) in one of the column. >>> >>> What is the way to automate this task, which library or technique >>> should I study to tackle it. >>> >>> Thanks. >>> >>> -- >>> G?khan >>> >>> >>> >>> -- >>> G?khan >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > G?khan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From bsouthey at gmail.com Mon Sep 21 14:54:51 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 21 Sep 2009 13:54:51 -0500 Subject: [SciPy-User] Use in GPL project In-Reply-To: <3d375d730909211119pddb1a01ld7b6ec8ec2b66dcc@mail.gmail.com> References: <61a4c0ba0909211057v2fb90704t8857c467be9f14d2@mail.gmail.com> <3d375d730909211119pddb1a01ld7b6ec8ec2b66dcc@mail.gmail.com> Message-ID: <4AB7CBFB.4090704@gmail.com> On 09/21/2009 01:19 PM, Robert Kern wrote: > On Mon, Sep 21, 2009 at 12:57, Alec Koumjian wrote: > >> I'm a tad confused about licensing. If I use Numpy or Scipy modules in >> another project, can I release that project as GPLv3? Is it possible for >> the scipy modules to keep their BSD license and only release my own code as >> GPL? >> > Yes, on both counts. The BSD license is GPL-compatible. > > See the publications by the The Software Freedom Law Center (http://www.softwarefreedom.org/) on licenses. 
In particular: 'Maintaining Permissive-Licensed Files in a GPL-Licensed Project: Guidelines for Developers' http://www.softwarefreedom.org/resources/2007/gpl-non-gpl-collaboration.html 'A Legal Issues Primer for Open Source and Free Software Projects' http://www.softwarefreedom.org/resources/2008/foss-primer.html 'A Practical Guide to GPL Compliance' http://www.softwarefreedom.org/resources/2008/compliance-guide.html Bruce From gokhansever at gmail.com Mon Sep 21 15:14:27 2009 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Mon, 21 Sep 2009 14:14:27 -0500 Subject: [SciPy-User] [Numpy-discussion] Simple pattern recognition In-Reply-To: <3F97EAA3-72C2-43D2-A060-580B8127191C@cs.toronto.edu> References: <49d6b3500909161722r6f74cce6j515b756c2b0b78c5@mail.gmail.com> <49d6b3500909211045g2913d62ey539171b0668ae7c3@mail.gmail.com> <49d6b3500909211104m2ad0646fo6ca8a2d74735e9bc@mail.gmail.com> <3F97EAA3-72C2-43D2-A060-580B8127191C@cs.toronto.edu> Message-ID: <49d6b3500909211214o279fbb1fhdbda0d6ba8f84377@mail.gmail.com> Ahh my blindness and apologies :) The nice feeling of reinventing the wheel... Probably I forgot to reshape the image data in the first place before applying into ndimage.label(). However, this was a nice example to understand recursion, and get to know some basics of computer vision and few libraries (OpenCV, pygraph) during my research. Thanks again for all kind replies. On Mon, Sep 21, 2009 at 1:36 PM, David Warde-Farley wrote: > I think Zachary is right, ndimage does what you want: > > In [48]: image = array( > [[0,0,0,1,1,0,0], > [0,0,0,1,1,1,0], > [0,0,0,1,0,0,0], > [0,0,0,0,0,0,0], > [0,1,0,0,0,0,0], > [0,1,1,0,0,0,0], > [0,0,0,0,1,1,0], > [0,0,0,0,1,1,1]]) > > In [57]: import scipy.ndimage as ndimage > > In [58]: labels, num_found = ndimage.label(image) > > In [59]: object_slices = ndimage.find_objects(labels) > > In [60]: image[object_slices[0]] > Out[60]: > array([[1, 1, 0], > [1, 1, 1], > [1, 0, 0]]) > > In [61]: image[object_slices[1]] > Out[61]: > array([[1, 0], > [1, 1]]) > > In [62]: image[object_slices[2]] > Out[62]: > array([[1, 1, 0], > [1, 1, 1]]) > > David > > On 21-Sep-09, at 2:04 PM, G?khan Sever wrote: > > > ndimage.label works differently than what I have done here. > > > > Later using find_objects you can get slices for row or column basis. > > Not > > possible to construct a dynamical structure to find objects that are > > in the > > in both axis. > > > > Could you look at the stackoverflow article once again and comment > > back? > > > > Thanks. > > > > On Mon, Sep 21, 2009 at 12:57 PM, Zachary Pincus < > zachary.pincus at yale.edu > > >wrote: > > > >> I believe that pretty generic connected-component finding is already > >> available with scipy.ndimage.label, as David suggested at the > >> beginning of the thread... > >> > >> This function takes a binary array (e.g. zeros where the background > >> is, non-zero where foreground is) and outputs an array where each > >> connected component of non-background pixels has a unique non-zero > >> "label" value. > >> > >> ndimage.find_objects will then give slices (e.g. bounding boxes) for > >> each labeled object (or a subset of them as specified). There are > >> also > >> a ton of statistics you can calculate based on the labeled objects -- > >> look at the entire ndimage.measurements namespace. 
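A short illustration of those measurement helpers; hedged in that a real image such as particles.png would first need to be thresholded down to a binary array:

import numpy as np
from scipy import ndimage

binary = np.array([[0, 1, 1, 0, 0],
                   [0, 1, 1, 0, 1],
                   [0, 0, 0, 0, 1]])

labels, n = ndimage.label(binary)
idx = range(1, n + 1)
sizes = ndimage.sum(binary, labels, idx)               # pixel count per object
centers = ndimage.center_of_mass(binary, labels, idx)
print sizes     # object areas: 4.0 and 2.0
print centers   # centroids: (0.5, 1.5) and (1.5, 4.0)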
> >> > >> Zach > >> > >> On Sep 21, 2009, at 1:45 PM, G?khan Sever wrote: > >> > >>> I asked this question at > >> http://stackoverflow.com/questions/1449139/simple-object-recognition > >>> and get lots of nice feedback, and finally I have managed to > >>> implement what I wanted. > >>> > >>> What I was looking for is named "connected component labelling or > >>> analysis" for my "connected component extraction" > >>> > >>> I have put the code (lab2.py) and the image (particles.png) under: > >>> http://code.google.com/p/ccnworks/source/browse/#svn/trunk/AtSc450/ > >>> labs > >>> > >>> What do you think of improving that code and adding into scipy's > >>> ndimage library (like connected_components()) ? > >>> > >>> Comments and suggestions are welcome :) > >>> > >>> > >>> On Wed, Sep 16, 2009 at 7:22 PM, G?khan Sever > >>> wrote: > >>> Hello all, > >>> > >>> I want to be able to count predefined simple rectangle shapes on an > >>> image as shown like in this one: > >> http://img7.imageshack.us/img7/2327/particles.png > >>> > >>> Which is in my case to count all the blue pixels (they are ice-snow > >>> flake shadows in reality) in one of the column. > >>> > >>> What is the way to automate this task, which library or technique > >>> should I study to tackle it. > >>> > >>> Thanks. > >>> > >>> -- > >>> G?khan > >>> > >>> > >>> > >>> -- > >>> G?khan > >>> _______________________________________________ > >>> SciPy-User mailing list > >>> SciPy-User at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > > > > > -- > > G?khan > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Mon Sep 21 15:41:58 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 21 Sep 2009 15:41:58 -0400 Subject: [SciPy-User] [Numpy-discussion] Simple pattern recognition In-Reply-To: <49d6b3500909211214o279fbb1fhdbda0d6ba8f84377@mail.gmail.com> References: <49d6b3500909161722r6f74cce6j515b756c2b0b78c5@mail.gmail.com> <49d6b3500909211045g2913d62ey539171b0668ae7c3@mail.gmail.com> <49d6b3500909211104m2ad0646fo6ca8a2d74735e9bc@mail.gmail.com> <3F97EAA3-72C2-43D2-A060-580B8127191C@cs.toronto.edu> <49d6b3500909211214o279fbb1fhdbda0d6ba8f84377@mail.gmail.com> Message-ID: <133633BA-A606-4D1F-8680-A1EABD0F8F8C@yale.edu> No worries! I think I've written connected-component finding code several times over in different guises. Definitely a good exercise. On Sep 21, 2009, at 3:14 PM, G?khan Sever wrote: > Ahh my blindness and apologies :) > > The nice feeling of reinventing the wheel... > > Probably I forgot to reshape the image data in the first place > before applying into ndimage.label(). > > However, this was a nice example to understand recursion, and get to > know some basics of computer vision and few libraries (OpenCV, > pygraph) during my research. > > Thanks again for all kind replies. 
> > On Mon, Sep 21, 2009 at 1:36 PM, David Warde-Farley > wrote: > I think Zachary is right, ndimage does what you want: > > In [48]: image = array( > [[0,0,0,1,1,0,0], > [0,0,0,1,1,1,0], > [0,0,0,1,0,0,0], > [0,0,0,0,0,0,0], > [0,1,0,0,0,0,0], > [0,1,1,0,0,0,0], > [0,0,0,0,1,1,0], > [0,0,0,0,1,1,1]]) > > In [57]: import scipy.ndimage as ndimage > > In [58]: labels, num_found = ndimage.label(image) > > In [59]: object_slices = ndimage.find_objects(labels) > > In [60]: image[object_slices[0]] > Out[60]: > array([[1, 1, 0], > [1, 1, 1], > [1, 0, 0]]) > > In [61]: image[object_slices[1]] > Out[61]: > array([[1, 0], > [1, 1]]) > > In [62]: image[object_slices[2]] > Out[62]: > array([[1, 1, 0], > [1, 1, 1]]) > > David > > On 21-Sep-09, at 2:04 PM, G?khan Sever wrote: > > > ndimage.label works differently than what I have done here. > > > > Later using find_objects you can get slices for row or column basis. > > Not > > possible to construct a dynamical structure to find objects that are > > in the > > in both axis. > > > > Could you look at the stackoverflow article once again and comment > > back? > > > > Thanks. > > > > On Mon, Sep 21, 2009 at 12:57 PM, Zachary Pincus > >wrote: > > > >> I believe that pretty generic connected-component finding is > already > >> available with scipy.ndimage.label, as David suggested at the > >> beginning of the thread... > >> > >> This function takes a binary array (e.g. zeros where the background > >> is, non-zero where foreground is) and outputs an array where each > >> connected component of non-background pixels has a unique non-zero > >> "label" value. > >> > >> ndimage.find_objects will then give slices (e.g. bounding boxes) > for > >> each labeled object (or a subset of them as specified). There are > >> also > >> a ton of statistics you can calculate based on the labeled > objects -- > >> look at the entire ndimage.measurements namespace. > >> > >> Zach > >> > >> On Sep 21, 2009, at 1:45 PM, G?khan Sever wrote: > >> > >>> I asked this question at > >> http://stackoverflow.com/questions/1449139/simple-object- > recognition > >>> and get lots of nice feedback, and finally I have managed to > >>> implement what I wanted. > >>> > >>> What I was looking for is named "connected component labelling or > >>> analysis" for my "connected component extraction" > >>> > >>> I have put the code (lab2.py) and the image (particles.png) under: > >>> http://code.google.com/p/ccnworks/source/browse/#svn/trunk/ > AtSc450/ > >>> labs > >>> > >>> What do you think of improving that code and adding into scipy's > >>> ndimage library (like connected_components()) ? > >>> > >>> Comments and suggestions are welcome :) > >>> > >>> > >>> On Wed, Sep 16, 2009 at 7:22 PM, G?khan Sever > >>> wrote: > >>> Hello all, > >>> > >>> I want to be able to count predefined simple rectangle shapes on > an > >>> image as shown like in this one: > >> http://img7.imageshack.us/img7/2327/particles.png > >>> > >>> Which is in my case to count all the blue pixels (they are ice- > snow > >>> flake shadows in reality) in one of the column. > >>> > >>> What is the way to automate this task, which library or technique > >>> should I study to tackle it. > >>> > >>> Thanks. 
> >>> > >>> -- > >>> G?khan > >>> > >>> > >>> > >>> -- > >>> G?khan > >>> _______________________________________________ > >>> SciPy-User mailing list > >>> SciPy-User at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > > > > > -- > > G?khan > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > -- > G?khan > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From aisaac at american.edu Mon Sep 21 16:24:52 2009 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 21 Sep 2009 16:24:52 -0400 Subject: [SciPy-User] Use in GPL project In-Reply-To: <61a4c0ba0909211057v2fb90704t8857c467be9f14d2@mail.gmail.com> References: <61a4c0ba0909211057v2fb90704t8857c467be9f14d2@mail.gmail.com> Message-ID: <4AB7E114.30002@american.edu> On 9/21/2009 1:57 PM, Alec Koumjian wrote: > If I use Numpy or Scipy modules in another project, can I release that > project as GPLv3? Yes: GPL projects can use code from BSD projects, but BSD projects cannot use code from GPL projects. (So please be sure the GPL is important to you.) Alan Isaac From josephsmidt at gmail.com Mon Sep 21 20:20:22 2009 From: josephsmidt at gmail.com (Joseph Smidt) Date: Mon, 21 Sep 2009 17:20:22 -0700 Subject: [SciPy-User] How Do You Integrate Legendre Polynomials or high order? Message-ID: <142682e10909211720h13238120n3985ea3c224f30de@mail.gmail.com> Hello, I need to integrate the integral \int_-1^1 dx/2 P_l1(x)*P_l2(x)*P_l3(x) for 0 < l1, l2, l3 < 1000. In case my write up is confusing I believe this website makes it more clear: http://en.wikipedia.org/wiki/3-jm_symbol#Other_properties Here is my python code for one value of this integral: ------------------------------------------------------------------------------ from pylab import * from scipy.integrate import quad from scipy.special import legendre a = quad(lambda x: 0.5*legendre(500)(x)*legendre(100)(x)*legendre(2)(x), -1, 1) print a[0] ------------------------------------------------------------------------------ Anyways, Scipy can't seem to integrate this this way. Does anyone have any ideas how to calculate this integral in Python? Thanks. Joseph Smidt -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 4129 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-3269 From xavier.gnata at gmail.com Mon Sep 21 21:08:45 2009 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Tue, 22 Sep 2009 03:08:45 +0200 Subject: [SciPy-User] Bresenham algorithm? In-Reply-To: References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com> <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu> Message-ID: <4AB8239D.8050409@gmail.com> David Warde-Farley wrote: > On 21-Sep-09, at 2:42 AM, Sebastian Haase wrote: > > >> If you comment out the list/tuples handling - assuming you want to >> plug in numpy arrays here, you have to know the number of points up >> front. 
>> (To be able to pre-allocate the array correctly) >> Is this easy ? (rounding errors ?) >> > > Yes, notice that the loop termination condition. You'd simply need to > allocate two arrays, each 'dx' long. But in his case, he'd be better > off taking the image array as an argument and accessing each pixel as > he goes in order to sum them. Assuming the image is reasonably scaled > and he uses an appropriate precision accumulator, that seems like the > best way to approach the problem rather than allocating space for > indices only to use them once (a more general purpose implementation, > of course, would have to accumulate/return indices). > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Your answers are perfect because they also provide me with a nice cython example. Thanks! From fredrik.johansson at gmail.com Mon Sep 21 21:51:34 2009 From: fredrik.johansson at gmail.com (Fredrik Johansson) Date: Tue, 22 Sep 2009 03:51:34 +0200 Subject: [SciPy-User] How Do You Integrate Legendre Polynomials or high order? In-Reply-To: <142682e10909211720h13238120n3985ea3c224f30de@mail.gmail.com> References: <142682e10909211720h13238120n3985ea3c224f30de@mail.gmail.com> Message-ID: <3d0cebfb0909211851t4d197f92n697f8f3f11aa7978@mail.gmail.com> On Tue, Sep 22, 2009 at 2:20 AM, Joseph Smidt wrote: > Hello, > > ? I need to integrate the integral ?\int_-1^1 dx/2 > P_l1(x)*P_l2(x)*P_l3(x) for 0 < l1, l2, l3 < 1000. ?In case my write > up is confusing I believe this website makes it more clear: > http://en.wikipedia.org/wiki/3-jm_symbol#Other_properties > > Here is my python code for one value of this integral: > > ------------------------------------------------------------------------------ > > from pylab import * > from scipy.integrate import quad > from scipy.special import legendre > > a = quad(lambda x: 0.5*legendre(500)(x)*legendre(100)(x)*legendre(2)(x), -1, 1) > > print a[0] > > ------------------------------------------------------------------------------ > > Anyways, Scipy can't seem to integrate this this way. ?Does anyone > have any ideas how to calculate this integral in Python? ?Thanks. > > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?Joseph Smidt Hi, You should probably use scipy.special.lpn instead; the legendre() function isn't good for high-degree evaluation. Since these are extremely oscillatory integrands, chances are you'll have to customize the settings for quad a bit too. I'm not familiar enough with scipy.integrate to say what the best settings are. The quad or quadgl functions in mpmath (http://code.google.com/p/mpmath/) can do these integrals to full precision if you pass something like maxdegree=10, but this may be too slow depending on your needs. A custom algorithm may be a good idea here if you need to compute lots of these integrals. One way would be to use Gauss-Legendre quadrature of high degree. Since all integrands are polynomials of degree less than 3000, a 1500-point Gaussian quadrature will give the exact value (up to roundoff) for all integrals. So you could precompute a 1000 x 1500 table of P_n(x) for n = 0...999 and x ranging over the Gauss nodes (using scipy.special.lpn to evaluate at all n in one go for each x). Then to calculate any one of the integrals, just dot the Gauss quadrature weights with the three rows of index l1, l2, l3. Certainly other approaches are also possible. 
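A rough sketch of that precomputed-table scheme, under the assumption that scipy.special.p_roots is the right call for the Gauss-Legendre nodes and weights; as noted above, a 1500-point rule is exact (up to roundoff) for these degree < 3000 products:

import numpy as np
from scipy.special import p_roots, lpn

nmax, npts = 1000, 1500
x, w = p_roots(npts)                  # Gauss-Legendre nodes and weights on [-1, 1]

table = np.empty((nmax + 1, npts))    # table[n, j] = P_n(x[j])
for j in range(npts):
    table[:, j] = lpn(nmax, x[j])[0]  # lpn returns all degrees 0..nmax at one node

def triple(l1, l2, l3):
    # (1/2) * integral_{-1}^{1} P_l1(x) * P_l2(x) * P_l3(x) dx
    return 0.5 * np.dot(w, table[l1] * table[l2] * table[l3])

print triple(500, 100, 2)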
Fredrik From sturla at molden.no Tue Sep 22 02:48:43 2009 From: sturla at molden.no (Sturla Molden) Date: Tue, 22 Sep 2009 08:48:43 +0200 Subject: [SciPy-User] Bresenham algorithm? In-Reply-To: <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu> References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com> <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu> Message-ID: <4AB8734B.6020001@molden.no> David Warde-Farley skrev:Here's an implementation I found here: http://mail.python.org/pipermail/python-list/1999-July/007163.html > def bresenham(x,y,x2,y2): > """Brensenham line algorithm""" > steep = 0 > coords = [] > dx = abs(x2 - x) > if (x2 - x) > 0: sx = 1 > else: sx = -1 > dy = abs(y2 - y) > if (y2 - y) > 0: sy = 1 > else: sy = -1 > if dy > dx: > steep = 1 > x,y = y,x > dx,dy = dy,dx > sx,sy = sy,sx > d = (2 * dy) - dx > for i in range(0,dx): > if steep: coords.append((y,x)) > else: coords.append((x,y)) > while d >= 0: > y = y + sy > d = d - (2 * dx) > x = x + sx > d = d + (2 * dy) > We're on the NumPy mailing list here. Do this istead ;-) import numpy as np cimport numpy as np cimport cython cdef extern from "math.h": int abs(int i) @cython.boundscheck(False) @cython.wraparound(False) def bresenham(int x, int y, int x2, int y2): cdef np.ndarray[np.int32_t, ndim=2, mode="c"] coords cdef int steep = 0 cdef int dx = abs(x2 - x) cdef int dy = abs(y2 - y) cdef int sx, sy, d, i coords = np.zeros(dx, dtype=np.int32) if (x2 - x) > 0: sx = 1 else: sx = -1 if (y2 - y) > 0: sy = 1 else: sy = -1 if dy > dx: steep = 1 x,y = y,x dx,dy = dy,dx sx,sy = sy,sx d = (2 * dy) - dx for i in range(dx): if steep: coords[i,0] = y coords[i,1] = x else: coords[i,0] = x coords[i,1] = y while d >= 0: y = y + sy d = d - (2 * dx) x = x + sx d = d + (2 * dy) return coords Regards, Sturla From sturla at molden.no Tue Sep 22 02:55:17 2009 From: sturla at molden.no (Sturla Molden) Date: Tue, 22 Sep 2009 08:55:17 +0200 Subject: [SciPy-User] Bresenham algorithm? In-Reply-To: <4AB8734B.6020001@molden.no> References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com> <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu> <4AB8734B.6020001@molden.no> Message-ID: <4AB874D5.7080309@molden.no> Sturla Molden skrev: > @cython.boundscheck(False) > @cython.wraparound(False) > def bresenham(int x, int y, int x2, int y2): > > cdef np.ndarray[np.int32_t, ndim=2, mode="c"] coords > > cdef int steep = 0 > cdef int dx = abs(x2 - x) > cdef int dy = abs(y2 - y) > cdef int sx, sy, d, i > > coords = np.zeros(dx, dtype=np.int32) > Oops... coords = np.zeros((int(dx),2), dtype=np.int32) S.M From dwf at cs.toronto.edu Tue Sep 22 03:10:17 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 22 Sep 2009 03:10:17 -0400 Subject: [SciPy-User] Bresenham algorithm? In-Reply-To: <4AB8734B.6020001@molden.no> References: <698936.51411.qm@web8318.mail.in.yahoo.com> <4AB657E3.8080109@gmail.com> <66220CC3-8486-4F45-B240-A094551910B3@cs.toronto.edu> <4AB8734B.6020001@molden.no> Message-ID: On 22-Sep-09, at 2:48 AM, Sturla Molden wrote: > > We're on the NumPy mailing list here. Do this istead ;-) Fair enough, though I think for his purposes he's better off taking his ndarray (the image) as an argument and summing as he goes, rather than allocating a new array/list at all. 
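Along those lines, a hedged sketch of the summing variant: the same stepping as the listing above, but accumulating pixel values instead of appending coordinates. The img[x, y] indexing convention is an assumption, not something fixed by the thread:

def bresenham_sum(img, x, y, x2, y2):
    """Sum img values along the Bresenham line, no coordinate list."""
    total = 0
    steep = 0
    dx = abs(x2 - x)
    if (x2 - x) > 0: sx = 1
    else: sx = -1
    dy = abs(y2 - y)
    if (y2 - y) > 0: sy = 1
    else: sy = -1
    if dy > dx:
        steep = 1
        x, y = y, x
        dx, dy = dy, dx
        sx, sy = sy, sx
    d = (2 * dy) - dx
    for i in range(dx):
        if steep: total += img[y, x]   # x and y were swapped above
        else: total += img[x, y]
        while d >= 0:
            y = y + sy
            d = d - (2 * dx)
        x = x + sx
        d = d + (2 * dy)
    return total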
:) David From magnusp at astro.su.se Tue Sep 22 09:25:03 2009 From: magnusp at astro.su.se (magnus_p) Date: Tue, 22 Sep 2009 06:25:03 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] 2D Autocorrelation Message-ID: <25530720.post@talk.nabble.com> I am trying to do a Autocorrelation summation of a bunch of images. i.e iF( F(x)^2 ) is added for all images (F=fourier transform, iF =inverse F). The images contain 1 or 2 point-sources (i.e binary system), and shape (64,64). what I tried first was: def addAutocorr(self, cube): a = zeros((self.CUBESIZE,self.CUBESIZE)) for frame in cube: a += ifft2(fft2(frame)*fft2(frame)) return a But it doesn't seems like it is correct, the source is not centered in the middle in all frames and, not in the final frame. Do I have to shift all the frames before adding them together? Then I tried the scipy.ndimage.convolve / scipy.signal.fftconvolve with no luck... Anyone that have some advice? -- View this message in context: http://www.nabble.com/2D-Autocorrelation-tp25530720p25530720.html Sent from the Scipy-User mailing list archive at Nabble.com. From sturla at molden.no Tue Sep 22 09:33:17 2009 From: sturla at molden.no (Sturla Molden) Date: Tue, 22 Sep 2009 15:33:17 +0200 Subject: [SciPy-User] [SciPy-user] 2D Autocorrelation In-Reply-To: <25530720.post@talk.nabble.com> References: <25530720.post@talk.nabble.com> Message-ID: <4AB8D21D.7050602@molden.no> magnus_p skrev: > def addAutocorr(self, cube): > a = zeros((self.CUBESIZE,self.CUBESIZE)) > for frame in cube: > a += ifft2(fft2(frame)*fft2(frame)) > return a > This is a auto-convolution, not an auto-correlation. Try this instead: a += ifft2(fft2(frame)*fft2(fliplr(flipud(frame)))) And also do something to control edge effects (e.g. pad with zeros and crop). From kgdunn at gmail.com Tue Sep 22 10:59:55 2009 From: kgdunn at gmail.com (Kevin Dunn) Date: Tue, 22 Sep 2009 10:59:55 -0400 Subject: [SciPy-User] Bug/Error with chi-squared distribution and df<1 Message-ID: Hi there, I'm not an expert on distributions, but as far as I can tell, the chi2 distribution is defined for degrees of freedom >0. I'm getting "nan" results however when df is less than one (but still greater than 0). The chi2 CDF values agree between R, MATLAB and Scipy when the the degrees of freedom are >= 1. For example: * R: pchisq(0.95, 1) * MATLAB: chi2cdf(0.95, 1) * SciPy: scipy.stats.chi2.cdf(0.95, 1) However, changing the last argument to 0.5 returns a NaN in SciPy, but gives a result (0.8392259) in R and MATLAB. I'm suspecting there is something wrong with my SciPy installation, because the example included with SciPy, (found using scipy.stats.chi2? inside iPython) calls the chi2.cdf function with df=0.9, yet when I run that example as follows: from scipy.stats import chi2 import numpy as np numargs = chi2.numargs [ df ] = [0.9,]*numargs rv = chi2(df) x = np.linspace(0,np.minimum(rv.dist.b,3)) prb = chi2.cdf(x,df) My result for prb is as follows; which I don't think would have been used as an example if this is the expected output. array([ 0., NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN]) Is my SciPy install messed up? I'm using MacOSX and downloaded version 0.7.1 from SourceForge this morning. I just tried it in Ubuntu Linux (version 0.7.0 though), and get the same results. 
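For reference, the failing and working code paths can be compared directly; a hedged sketch, with the incomplete-gamma identity taken from the replies further down the thread:

from scipy import special, stats

print stats.chi2.cdf(0.95, 0.5)           # nan on the affected builds
print special.chdtr(0.5, 0.95)            # same underlying code path, also nan
print special.gammainc(0.5/2., 0.95/2.)   # ~0.8392259, matching R and MATLAB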
Thanks, Kevin From josef.pktd at gmail.com Tue Sep 22 11:22:42 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 22 Sep 2009 11:22:42 -0400 Subject: [SciPy-User] Bug/Error with chi-squared distribution and df<1 In-Reply-To: References: Message-ID: <1cd32cbb0909220822m73d32d2fi9b61c5ab81ee2fbb@mail.gmail.com> On Tue, Sep 22, 2009 at 10:59 AM, Kevin Dunn wrote: > Hi there, > > I'm not an expert on distributions, but as far as I can tell, the chi2 > distribution is defined for degrees of freedom >0. ?I'm getting "nan" > results however when df is less than one (but still greater than 0). > The chi2 CDF values agree between R, MATLAB and Scipy when the the > degrees of freedom are >= 1. ?For example: > * R: pchisq(0.95, 1) > * MATLAB: chi2cdf(0.95, 1) > * SciPy: scipy.stats.chi2.cdf(0.95, 1) > > However, changing the last argument to 0.5 returns a NaN in SciPy, but > gives a result (0.8392259) in R and MATLAB. >>> stats.chi2.veccdf(0.95, 0.5) array(0.83922587961194761) > > I'm suspecting there is something wrong with my SciPy installation, > because the example included with SciPy, (found using > scipy.stats.chi2? inside iPython) calls the chi2.cdf function with > df=0.9, yet when I run that example as follows: Where is this example? maybe it is an autogenerated example with wrong numbers? > > from scipy.stats import chi2 > import numpy as np > numargs = chi2.numargs > [ df ] = [0.9,]*numargs > rv = chi2(df) > x = np.linspace(0,np.minimum(rv.dist.b,3)) > prb = chi2.cdf(x,df) > > My result for prb is as follows; which I don't think would have been > used as an example if this is the expected output. > array([ ?0., ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, > ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, > ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, > ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, > ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN]) > > Is my SciPy install messed up? ?I'm using MacOSX and downloaded > version 0.7.1 from SourceForge this morning. ?I just tried it in > Ubuntu Linux (version 0.7.0 though), and get the same results. > > Thanks, > Kevin > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > chi2 uses scipy.special class chi2_gen def _cdf(self, x, df): return special.chdtr(df, x) which obviously cannot handle df<1 , which I don't know if it would ever show up in the usual statistical tests. pdf doesn't seem to have any problems >>> stats.chi2.pdf(np.arange(5),0.99) array([ Inf, 0.24042373, 0.10275665, 0.05078513, 0.02663761]) >>> stats.chi2.pdf(np.arange(5),0.5) array([ Inf, 0.14067411, 0.05073346, 0.02270277, 0.01109756]) >>> stats.chi2.pdf(np.arange(5),0.25) array([ Inf, 0.07382471, 0.02441481, 0.01038547, 0.00489731]) so numerical integration should also work >>> stats.chi2.veccdf(np.arange(1,5),0.25) array([ 0.92605422, 0.96918041, 0.98539847, 0.99265551]) >>> stats.chi2.veccdf(np.linspace(0.01,10.0,11),0.25) array([ 0.54726537, 0.92671456, 0.96937499, 0.98547096, 0.99268483, 0.99618434, 0.99796201, 0.99889274, 0.99939058, 0.99966116, 0.99981006]) >>> stats.chi2.veccdf(np.linspace(0.01,10.0,11),0.5) array([ 0.29308089, 0.84774539, 0.93248332, 0.96674206, 0.98278044, 0.99081467, 0.99500114, 0.99723983, 0.9984591 , 0.99913231, 0.99950797]) since pdf at zero is inf, I don't know how good the numbers are if the cdf is calculated for points close to zero e.g. 
>>> stats.chi2.veccdf(1e-8,0.5) array(0.0092772960765327081) Since I don't think df<1 is a common case, I wouldn't want to switch to numerical integration by default. But veccdf although only a private function should work although it bypasses some of the checking and broadcasting. Somehow this sounds familiar, I need to check whether this or a similar case is already on record. Hope that helps, Josef From josef.pktd at gmail.com Tue Sep 22 11:45:01 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 22 Sep 2009 11:45:01 -0400 Subject: [SciPy-User] Bug/Error with chi-squared distribution and df<1 In-Reply-To: <1cd32cbb0909220822m73d32d2fi9b61c5ab81ee2fbb@mail.gmail.com> References: <1cd32cbb0909220822m73d32d2fi9b61c5ab81ee2fbb@mail.gmail.com> Message-ID: <1cd32cbb0909220845h41a55f86kbd434a022f526c30@mail.gmail.com> On Tue, Sep 22, 2009 at 11:22 AM, wrote: > On Tue, Sep 22, 2009 at 10:59 AM, Kevin Dunn wrote: >> Hi there, >> >> I'm not an expert on distributions, but as far as I can tell, the chi2 >> distribution is defined for degrees of freedom >0. ?I'm getting "nan" >> results however when df is less than one (but still greater than 0). >> The chi2 CDF values agree between R, MATLAB and Scipy when the the >> degrees of freedom are >= 1. ?For example: >> * R: pchisq(0.95, 1) >> * MATLAB: chi2cdf(0.95, 1) >> * SciPy: scipy.stats.chi2.cdf(0.95, 1) >> > > >> However, changing the last argument to 0.5 returns a NaN in SciPy, but >> gives a result (0.8392259) in R and MATLAB. > >>>> stats.chi2.veccdf(0.95, 0.5) > array(0.83922587961194761) > >> >> I'm suspecting there is something wrong with my SciPy installation, >> because the example included with SciPy, (found using >> scipy.stats.chi2? inside iPython) calls the chi2.cdf function with >> df=0.9, yet when I run that example as follows: > > Where is this example? maybe it is an autogenerated example with wrong numbers? > >> >> from scipy.stats import chi2 >> import numpy as np >> numargs = chi2.numargs >> [ df ] = [0.9,]*numargs >> rv = chi2(df) >> x = np.linspace(0,np.minimum(rv.dist.b,3)) >> prb = chi2.cdf(x,df) >> >> My result for prb is as follows; which I don't think would have been >> used as an example if this is the expected output. >> array([ ?0., ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN]) >> >> Is my SciPy install messed up? ?I'm using MacOSX and downloaded >> version 0.7.1 from SourceForge this morning. ?I just tried it in >> Ubuntu Linux (version 0.7.0 though), and get the same results. >> >> Thanks, >> Kevin >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > chi2 uses scipy.special > > class chi2_gen > > def _cdf(self, x, df): > ? ? ? ?return special.chdtr(df, x) > > which obviously cannot handle df<1 , which I don't know if it would > ever show up in the usual statistical tests. > > pdf doesn't seem to have any problems > >>>> stats.chi2.pdf(np.arange(5),0.99) > array([ ? ? ? ?Inf, ?0.24042373, ?0.10275665, ?0.05078513, ?0.02663761]) >>>> stats.chi2.pdf(np.arange(5),0.5) > array([ ? ? ? ?Inf, ?0.14067411, ?0.05073346, ?0.02270277, ?0.01109756]) >>>> stats.chi2.pdf(np.arange(5),0.25) > array([ ? ? ? 
?Inf, ?0.07382471, ?0.02441481, ?0.01038547, ?0.00489731]) > > so numerical integration should also work > >>>> stats.chi2.veccdf(np.arange(1,5),0.25) > array([ 0.92605422, ?0.96918041, ?0.98539847, ?0.99265551]) >>>> stats.chi2.veccdf(np.linspace(0.01,10.0,11),0.25) > array([ 0.54726537, ?0.92671456, ?0.96937499, ?0.98547096, ?0.99268483, > ? ? ? ?0.99618434, ?0.99796201, ?0.99889274, ?0.99939058, ?0.99966116, > ? ? ? ?0.99981006]) >>>> stats.chi2.veccdf(np.linspace(0.01,10.0,11),0.5) > array([ 0.29308089, ?0.84774539, ?0.93248332, ?0.96674206, ?0.98278044, > ? ? ? ?0.99081467, ?0.99500114, ?0.99723983, ?0.9984591 , ?0.99913231, > ? ? ? ?0.99950797]) > > since pdf at zero is inf, I don't know how good the numbers are if the > cdf is calculated for points close to zero > e.g. > >>>> stats.chi2.veccdf(1e-8,0.5) > array(0.0092772960765327081) > > > Since I don't think df<1 is a common case, I wouldn't want to switch > to numerical integration by default. But veccdf although only a > private function should work although it bypasses some of the checking > and broadcasting. > > Somehow this sounds familiar, I need to check whether this or a > similar case is already on record. I didn't find anything on the scipy trac nor on the mailinglist, since I'm subscribed to them. If you think that df<1 is an important case you could file a ticket for scipy.special. I have no idea how difficult it would be to extend the chi squared related functions for this or whether there exists another workaround. The only similar story was for the negative binomial for n<1, http://projects.scipy.org/scipy/ticket/978 . In that case scipy.special.nbdtr didn't allow the extension for n<1. Josef > > Hope that helps, > > Josef > From kgdunn at gmail.com Tue Sep 22 14:04:37 2009 From: kgdunn at gmail.com (Kevin Dunn) Date: Tue, 22 Sep 2009 14:04:37 -0400 Subject: [SciPy-User] Bug/Error with chi-squared distribution and df<1 In-Reply-To: References: Message-ID: Thanks for the prompt reply Josef; please see my comments below. > On Tue, Sep 22, 2009 at 10:59 AM, Kevin Dunn wrote: >> Hi there, >> >> I'm not an expert on distributions, but as far as I can tell, the chi2 >> distribution is defined for degrees of freedom >0. ?I'm getting "nan" >> results however when df is less than one (but still greater than 0). >> The chi2 CDF values agree between R, MATLAB and Scipy when the the >> degrees of freedom are >= 1. ?For example: >> * R: pchisq(0.95, 1) >> * MATLAB: chi2cdf(0.95, 1) >> * SciPy: scipy.stats.chi2.cdf(0.95, 1) >> > > >> However, changing the last argument to 0.5 returns a NaN in SciPy, but >> gives a result (0.8392259) in R and MATLAB. > >>>> stats.chi2.veccdf(0.95, 0.5) > array(0.83922587961194761) > >> >> I'm suspecting there is something wrong with my SciPy installation, >> because the example included with SciPy, (found using >> scipy.stats.chi2? inside iPython) calls the chi2.cdf function with >> df=0.9, yet when I run that example as follows: > > Where is this example? maybe it is an autogenerated example with wrong numbers? The example was found when typing "scipy.stats.chi2?" in ipython after importing scipy.stats. The code below was taken from the example, only I omitted the matplotlib plotting commands. 
>> from scipy.stats import chi2 >> import numpy as np >> numargs = chi2.numargs >> [ df ] = [0.9,]*numargs >> rv = chi2(df) >> x = np.linspace(0,np.minimum(rv.dist.b,3)) >> prb = chi2.cdf(x,df) >> >> My result for prb is as follows; which I don't think would have been >> used as an example if this is the expected output. >> array([ ?0., ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN]) >> >> Is my SciPy install messed up? ?I'm using MacOSX and downloaded >> version 0.7.1 from SourceForge this morning. ?I just tried it in >> Ubuntu Linux (version 0.7.0 though), and get the same results. >> > > chi2 uses scipy.special > > class chi2_gen > > def _cdf(self, x, df): > ? ? ? ?return special.chdtr(df, x) > > which obviously cannot handle df<1 , which I don't know if it would > ever show up in the usual statistical tests. I don't believe it is obvious why special.chdtr can't handle values df < 1. Please see my additional comments below. > > pdf doesn't seem to have any problems > >>>> stats.chi2.pdf(np.arange(5),0.99) > array([ ? ? ? ?Inf, ?0.24042373, ?0.10275665, ?0.05078513, ?0.02663761]) >>>> stats.chi2.pdf(np.arange(5),0.5) > array([ ? ? ? ?Inf, ?0.14067411, ?0.05073346, ?0.02270277, ?0.01109756]) >>>> stats.chi2.pdf(np.arange(5),0.25) > array([ ? ? ? ?Inf, ?0.07382471, ?0.02441481, ?0.01038547, ?0.00489731]) > > so numerical integration should also work > >>>> stats.chi2.veccdf(np.arange(1,5),0.25) > array([ 0.92605422, ?0.96918041, ?0.98539847, ?0.99265551]) >>>> stats.chi2.veccdf(np.linspace(0.01,10.0,11),0.25) > array([ 0.54726537, ?0.92671456, ?0.96937499, ?0.98547096, ?0.99268483, > ? ? ? ?0.99618434, ?0.99796201, ?0.99889274, ?0.99939058, ?0.99966116, > ? ? ? ?0.99981006]) >>>> stats.chi2.veccdf(np.linspace(0.01,10.0,11),0.5) > array([ 0.29308089, ?0.84774539, ?0.93248332, ?0.96674206, ?0.98278044, > ? ? ? ?0.99081467, ?0.99500114, ?0.99723983, ?0.9984591 , ?0.99913231, > ? ? ? ?0.99950797]) > > since pdf at zero is inf, I don't know how good the numbers are if the > cdf is calculated for points close to zero > e.g. > >>>> stats.chi2.veccdf(1e-8,0.5) > array(0.0092772960765327081) > > > Since I don't think df<1 is a common case, I wouldn't want to switch > to numerical integration by default. But veccdf although only a > private function should work although it bypasses some of the checking > and broadcasting. Thanks for pointing out the numerical integration approach; 'll give it a try. As for df < 1, please see my other comments below. > > Somehow this sounds familiar, I need to check whether this or a > similar case is already on record. > > Hope that helps, > > Josef And regarding your other reply: > I didn't find anything on the scipy trac nor on the mailinglist, since > I'm subscribed to them. If you think that df<1 is an important case > you could file a ticket for scipy.special. I have no idea how > difficult it would be to extend the chi squared related functions for > this or whether there exists another workaround. 
I just downloaded the latest SVN code and found the code that does the work is /scipy/special/cephes/chdtr.c In that code it seems that when the degrees of freedom are less than one, the code is (artificially) forced to return a NaN: if (df < 1.0) { mtherr( "chdtrc", DOMAIN ); return(NPY_NAN); } return( igamc( df/2.0, x/2.0 ) ); This seems a somewhat artificial constraint to me, since the gamma function can accept values of 0 < df < 1. Does someone else on this list know why that constraint is there for df<1? As for a practical usage case, I'm computing CDF values for a variable by matching moments, and the value for the degrees of freedom term is a computed value. This value is always non-integer, and can sometimes be less than one. Additional justification for df < 1 can be sought by looking at plots of the chi-squared distribution (e.g. http://en.wikipedia.org/wiki/Chi-square_distribution) when plotting with a degrees of freedom parameter (they call it "k" in the Wikipedia article). There's no reason why df can't be less than one, anymore than there's reason for df being non-integer. I'll give it a go compiling SciPy with removing those constraints in chdtr.c and see how it works. I've never done this before, but I'm busy installing the NumPy/SciPy installation requirements at the moment, and I'll report how it went. If things works out, I'll file a trac report. I've also cc'd the scipy-dev list on this reply. Thanks for your help, Kevin > The only similar story was ?for the negative binomial for n<1, > http://projects.scipy.org/scipy/ticket/978 . In that case > scipy.special.nbdtr didn't allow the extension for n<1. > > Josef From josef.pktd at gmail.com Tue Sep 22 14:27:48 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 22 Sep 2009 14:27:48 -0400 Subject: [SciPy-User] [SciPy-dev] Bug/Error with chi-squared distribution and df<1 In-Reply-To: References: Message-ID: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> On Tue, Sep 22, 2009 at 2:04 PM, Kevin Dunn wrote: > Thanks for the prompt reply Josef; please see my comments below. > >> On Tue, Sep 22, 2009 at 10:59 AM, Kevin Dunn wrote: >>> Hi there, >>> >>> I'm not an expert on distributions, but as far as I can tell, the chi2 >>> distribution is defined for degrees of freedom >0. ?I'm getting "nan" >>> results however when df is less than one (but still greater than 0). >>> The chi2 CDF values agree between R, MATLAB and Scipy when the the >>> degrees of freedom are >= 1. ?For example: >>> * R: pchisq(0.95, 1) >>> * MATLAB: chi2cdf(0.95, 1) >>> * SciPy: scipy.stats.chi2.cdf(0.95, 1) >>> >> >> >>> However, changing the last argument to 0.5 returns a NaN in SciPy, but >>> gives a result (0.8392259) in R and MATLAB. >> >>>>> stats.chi2.veccdf(0.95, 0.5) >> array(0.83922587961194761) >> >>> >>> I'm suspecting there is something wrong with my SciPy installation, >>> because the example included with SciPy, (found using >>> scipy.stats.chi2? inside iPython) calls the chi2.cdf function with >>> df=0.9, yet when I run that example as follows: >> >> Where is this example? maybe it is an autogenerated example with wrong numbers? > > The example was found when typing "scipy.stats.chi2?" in ipython after > importing scipy.stats. ?The code below was taken from the example, > only I omitted the matplotlib plotting commands. 
> >>> from scipy.stats import chi2 >>> import numpy as np >>> numargs = chi2.numargs >>> [ df ] = [0.9,]*numargs >>> rv = chi2(df) >>> x = np.linspace(0,np.minimum(rv.dist.b,3)) >>> prb = chi2.cdf(x,df) Yes, this is a generic template with autofilled names, obviously not tested if the numbers make sense in all cases. >>> >>> My result for prb is as follows; which I don't think would have been >>> used as an example if this is the expected output. >>> array([ ?0., ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >>> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >>> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >>> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, >>> ? ? ? ?NaN, ?NaN, ?NaN, ?NaN, ?NaN, ?NaN]) >>> >>> Is my SciPy install messed up? ?I'm using MacOSX and downloaded >>> version 0.7.1 from SourceForge this morning. ?I just tried it in >>> Ubuntu Linux (version 0.7.0 though), and get the same results. >>> >> >> chi2 uses scipy.special >> >> class chi2_gen >> >> def _cdf(self, x, df): >> ? ? ? ?return special.chdtr(df, x) >> >> which obviously cannot handle df<1 , which I don't know if it would >> ever show up in the usual statistical tests. > > I don't believe it is obvious why special.chdtr can't handle values df > < 1. Please see my additional comments below. > >> >> pdf doesn't seem to have any problems >> >>>>> stats.chi2.pdf(np.arange(5),0.99) >> array([ ? ? ? ?Inf, ?0.24042373, ?0.10275665, ?0.05078513, ?0.02663761]) >>>>> stats.chi2.pdf(np.arange(5),0.5) >> array([ ? ? ? ?Inf, ?0.14067411, ?0.05073346, ?0.02270277, ?0.01109756]) >>>>> stats.chi2.pdf(np.arange(5),0.25) >> array([ ? ? ? ?Inf, ?0.07382471, ?0.02441481, ?0.01038547, ?0.00489731]) >> >> so numerical integration should also work >> >>>>> stats.chi2.veccdf(np.arange(1,5),0.25) >> array([ 0.92605422, ?0.96918041, ?0.98539847, ?0.99265551]) >>>>> stats.chi2.veccdf(np.linspace(0.01,10.0,11),0.25) >> array([ 0.54726537, ?0.92671456, ?0.96937499, ?0.98547096, ?0.99268483, >> ? ? ? ?0.99618434, ?0.99796201, ?0.99889274, ?0.99939058, ?0.99966116, >> ? ? ? ?0.99981006]) >>>>> stats.chi2.veccdf(np.linspace(0.01,10.0,11),0.5) >> array([ 0.29308089, ?0.84774539, ?0.93248332, ?0.96674206, ?0.98278044, >> ? ? ? ?0.99081467, ?0.99500114, ?0.99723983, ?0.9984591 , ?0.99913231, >> ? ? ? ?0.99950797]) >> >> since pdf at zero is inf, I don't know how good the numbers are if the >> cdf is calculated for points close to zero >> e.g. >> >>>>> stats.chi2.veccdf(1e-8,0.5) >> array(0.0092772960765327081) >> >> >> Since I don't think df<1 is a common case, I wouldn't want to switch >> to numerical integration by default. But veccdf although only a >> private function should work although it bypasses some of the checking >> and broadcasting. > > Thanks for pointing out the numerical integration approach; 'll give > it a try. ?As for df < 1, please see my other comments below. > >> >> Somehow this sounds familiar, I need to check whether this or a >> similar case is already on record. >> >> Hope that helps, >> >> Josef > > And regarding your other reply: > >> I didn't find anything on the scipy trac nor on the mailinglist, since >> I'm subscribed to them. If you think that df<1 is an important case >> you could file a ticket for scipy.special. I have no idea how >> difficult it would be to extend the chi squared related functions for >> this or whether there exists another workaround. 
> > I just downloaded the latest SVN code and found the code that does the > work is /scipy/special/cephes/chdtr.c > > In that code it seems that when the degrees of freedom are less than > one, the code is (artificially) forced to return a NaN: > ? ? ? ?if (df < 1.0) > ? ? ? ?{ > ? ? ? ? ? ?mtherr( "chdtrc", DOMAIN ); > ? ? ? ? ? ?return(NPY_NAN); > ? ? ? ?} > ? ? ? return( igamc( df/2.0, x/2.0 ) ); > > This seems a somewhat artificial constraint to me, since the gamma > function can accept values of 0 < df < 1. ?Does someone else on this > list know why that constraint is there for df<1? I don't know the c code, but many of the statistical functions, have duplicate ways of calculation with scipy special. Taking the hint with incomplete gamma, the following looks good. This would mean until Pauli fixes scipy.special if it your fix works, we could also use gammainc directly. I don't know the differences between the various implementations well enough to see whether we buy some other problems with this >>> df=2;x=1.5;special.gammainc(df/2., x/2.) 0.52763344725898531 >>> df=0.5;x=1.5;special.gammainc(df/2., x/2.) 0.89993651328449831 >>> stats.chi2.cdf(x,df) nan >>> stats.chi2.cdf(1.5,2) 0.52763344725898531 >>> stats.chi2.veccdf(1.5,2) array(0.52763344725898531) >>> stats.chi2.veccdf(1.5,0.5) array(0.89993651328445579) >>> stats.chi2.veccdf(1.5,0.5) - special.gammainc(0.5/2., 1.5/2.) -4.2521541843143495e-014 I'm used to the chi square distribution as a statistical test distribution, but of course if it is treated just as a distribution that is matched to data, then these limitations (and nans) are not very useful. Thanks for looking into this. Josef > > As for a practical usage case, I'm computing CDF values for a variable > by matching moments, and the value for the degrees of freedom term is > a computed value. ?This value is always non-integer, and can sometimes > be less than one. > > Additional justification for df < 1 can be sought by looking at plots > of the chi-squared distribution (e.g. > http://en.wikipedia.org/wiki/Chi-square_distribution) when plotting > with a degrees of freedom parameter (they call it "k" in the Wikipedia > article). ?There's no reason why df can't be less than one, anymore > than there's reason for df being non-integer. > > I'll give it a go compiling SciPy with removing those constraints in > chdtr.c and see how it works. ?I've never done this before, but I'm > busy installing the NumPy/SciPy installation requirements at the > moment, and I'll report how it went. > > If things works out, I'll file a trac report. ?I've also cc'd the > scipy-dev list on this reply. > > Thanks for your help, > Kevin > >> The only similar story was ?for the negative binomial for n<1, >> http://projects.scipy.org/scipy/ticket/978 . In that case >> scipy.special.nbdtr didn't allow the extension for n<1. 
>> >> Josef > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From pgmdevlist at gmail.com Tue Sep 22 14:48:48 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 22 Sep 2009 14:48:48 -0400 Subject: [SciPy-User] [SciPy-dev] Bug/Error with chi-squared distribution and df<1 In-Reply-To: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> References: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> Message-ID: <9F8549D1-6F3F-410B-A32B-2778B5CE0BAF@gmail.com> On Sep 22, 2009, at 2:27 PM, josef.pktd at gmail.com wrote: >>>> I'm not an expert on distributions, but as far as I can tell, the >>>> chi2 >>>> distribution is defined for degrees of freedom >0. Mmh, could anybody point me to a *real* case where we would have less than 1 degree of freedom ? Check the definition of the X2 on wikipedia, for example: if k variables are iid scaled normal, their sum is X2 w/ k degrees of freedom (dfs). Naturally, k, the dfs are integers, and that makes quite sense to prevent the use of 0 < k < 1. Or returning NaNs instead of raising an exception... From bsouthey at gmail.com Tue Sep 22 15:02:21 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 22 Sep 2009 14:02:21 -0500 Subject: [SciPy-User] [SciPy-dev] Bug/Error with chi-squared distribution and df<1 In-Reply-To: <9F8549D1-6F3F-410B-A32B-2778B5CE0BAF@gmail.com> References: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> <9F8549D1-6F3F-410B-A32B-2778B5CE0BAF@gmail.com> Message-ID: <4AB91F3D.6080809@gmail.com> On 09/22/2009 01:48 PM, Pierre GM wrote: > On Sep 22, 2009, at 2:27 PM, josef.pktd at gmail.com wrote: > >>>>> I'm not an expert on distributions, but as far as I can tell, the >>>>> chi2 >>>>> distribution is defined for degrees of freedom>0. >>>>> > Mmh, could anybody point me to a *real* case where we would have less > than 1 degree of freedom ? > Check the definition of the X2 on wikipedia, for example: if k > variables are iid scaled normal, their sum is X2 w/ k degrees of > freedom (dfs). Naturally, k, the dfs are integers, and that makes > quite sense to prevent the use of 0< k< 1. Or returning NaNs instead > of raising an exception... > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > The easy and common one is when using a likelihood ratio test when one parameter is on a boundary. So when you use the LRT in a mixed effects model with one random term to test that variance of that term is zero then, you have to use a 50:50 mixture of chi-squared values at df=0 and df=1. The classic paper is: Self, S. and Liang, K-Y. (1987) Asymptotic Properties of Maximum Likelihood Estimators and Likelihood Ratio Tests under Nonstandard Conditions. Journal of the American Statistical Association 82, 605-610. 
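In code that boundary correction is one line, once the df=0 component is read as a point mass at zero; a hedged sketch:

from scipy import stats

def lrt_boundary_pvalue(lrt):
    # 50:50 mixture of chi2(df=0) and chi2(df=1): the df=0 half never
    # exceeds a positive threshold, so only the chi2(1) tail contributes
    return 0.5 * stats.chi2.sf(lrt, 1)

print lrt_boundary_pvalue(2.706)   # ~0.05 rather than the naive ~0.10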
Bruce From josef.pktd at gmail.com Tue Sep 22 15:02:56 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 22 Sep 2009 15:02:56 -0400 Subject: [SciPy-User] [SciPy-dev] Bug/Error with chi-squared distribution and df<1 In-Reply-To: <9F8549D1-6F3F-410B-A32B-2778B5CE0BAF@gmail.com> References: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> <9F8549D1-6F3F-410B-A32B-2778B5CE0BAF@gmail.com> Message-ID: <1cd32cbb0909221202k1478744fu46b7d047e0826eda@mail.gmail.com> On Tue, Sep 22, 2009 at 2:48 PM, Pierre GM wrote: > > On Sep 22, 2009, at 2:27 PM, josef.pktd at gmail.com wrote: >>>>> I'm not an expert on distributions, but as far as I can tell, the >>>>> chi2 >>>>> distribution is defined for degrees of freedom >0. > > Mmh, could anybody point me to a *real* case where we would have less > than 1 degree of freedom ? > Check the definition of the X2 on wikipedia, for example: if k > variables are iid scaled normal, their sum is X2 w/ k degrees of > freedom (dfs). Naturally, k, the dfs are integers, and that makes > quite sense to prevent the use of 0 < k < 1. Or returning NaNs instead > of raising an exception... > If you forget about the definition in the context of statistical tests, then it is just another one-parameter distribution that might fit some data. There are quite a few distributions that were developed for a narrower statistical usage, but can easily be used just for fitting, e.g. I was estimating the parameters of a t distribution, and it is much easier to do when the integer constraint is ignored. The same as for the cdf also applies to the isf/ppf of the chi2, see below. I cannot verify it currently with the generic methods because of the nans in the cdf. Josef >>> stats.chi2.ppf(0.5,2) 1.3862943611198906 >>> q=0.5;df=2;special.gammainccinv(df/2., q)*2 1.3862943611198906 >>> stats.chi2.ppf(0.5,2.5) 1.8738477677808791 >>> q=0.5;df=2.5;special.gammainccinv(df/2., q)*2 1.8738477677808791 >>> stats.chi2.ppf(0.5,0.5) nan >>> q=0.5;df=0.5;special.gammainccinv(df/2., q)*2 0.087347604705746817 >>> special.chdtri(df, q) nan >>> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Sep 22 15:40:49 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 22 Sep 2009 15:40:49 -0400 Subject: [SciPy-User] [SciPy-dev] Bug/Error with chi-squared distribution and df<1 In-Reply-To: <4AB91F3D.6080809@gmail.com> References: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> <9F8549D1-6F3F-410B-A32B-2778B5CE0BAF@gmail.com> <4AB91F3D.6080809@gmail.com> Message-ID: <1cd32cbb0909221240o1c5cb01u3cd8050cb4ba963b@mail.gmail.com> On Tue, Sep 22, 2009 at 3:02 PM, Bruce Southey wrote: > On 09/22/2009 01:48 PM, Pierre GM wrote: >> On Sep 22, 2009, at 2:27 PM, josef.pktd at gmail.com wrote: >> >>>>>> I'm not an expert on distributions, but as far as I can tell, the >>>>>> chi2 >>>>>> distribution is defined for degrees of freedom>0. >>>>>> >> Mmh, could anybody point me to a *real* case where we would have less >> than 1 degree of freedom ? >> Check the definition of the X2 on wikipedia, for example: if k >> variables are iid scaled normal, their sum is X2 w/ k degrees of >> freedom (dfs). Naturally, k, the dfs are integers, and that makes >> quite sense to prevent the use of 0< ?k< ?1. Or returning NaNs instead >> of raising an exception... 
>> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > The easy and common one is when using a likelihood ratio test when one > parameter is on a boundary. So when you use the LRT in a mixed effects > model with one random term to test that variance of that term is zero > then, you have to use a 50:50 mixture of chi-squared values at df=0 and > df=1. looking at some numerical examples and expressions for mean and variance, it looks like the chi-square distribution with df=0 is a degenerate distribution with a masspoint at zero. Josef > > The classic paper is: > Self, S. and Liang, K-Y. (1987) Asymptotic Properties of Maximum > Likelihood Estimators > and Likelihood Ratio Tests under Nonstandard Conditions. Journal of the > American > Statistical Association 82, 605-610. > > Bruce > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Tue Sep 22 15:47:30 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 22 Sep 2009 22:47:30 +0300 Subject: [SciPy-User] [SciPy-dev] Bug/Error with chi-squared distribution and df<1 In-Reply-To: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> References: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> Message-ID: <1253648849.3968.2.camel@idol> ti, 2009-09-22 kello 14:27 -0400, josef.pktd at gmail.com kirjoitti: [clip] > Taking the hint with incomplete gamma, the following looks good. This > would mean until Pauli fixes scipy.special if it your fix works, we > could also use gammainc directly. I don't know the differences between > the various implementations well enough to see whether we buy some > other problems with this The C function `igam` called by `chdtr` is the same as what is exposed as `gammainc`. -- Pauli Virtanen From josef.pktd at gmail.com Tue Sep 22 15:58:50 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 22 Sep 2009 15:58:50 -0400 Subject: [SciPy-User] [SciPy-dev] Bug/Error with chi-squared distribution and df<1 In-Reply-To: <1253648849.3968.2.camel@idol> References: <1cd32cbb0909221127s379c06ccoc375ad005c8a87fd@mail.gmail.com> <1253648849.3968.2.camel@idol> Message-ID: <1cd32cbb0909221258g217549d8v68d89eb82abb0493@mail.gmail.com> On Tue, Sep 22, 2009 at 3:47 PM, Pauli Virtanen wrote: > ti, 2009-09-22 kello 14:27 -0400, josef.pktd at gmail.com kirjoitti: > [clip] >> Taking the hint with incomplete gamma, the following looks good. This >> would mean until Pauli fixes scipy.special if it your fix works, we >> could also use gammainc directly. I don't know the differences between >> the various implementations well enough to see whether we buy some >> other problems with this > > The C function `igam` called by `chdtr` is the same as what is exposed > as `gammainc`. > > -- > Pauli Virtanen > Thanks, then it looks like that there is no reason to enforce the df>=1 constraint. instead we could restrict df>0, the value for df=0 is wrong (should be 1) but convergence to zero looks reasonable, no idea about exact numbers Josef >>> df=1e-8;x=0.5;special.gammainc(df/2., x/2.) 0.99999999477858692 >>> df=1e-8;x=1e-6;special.gammainc(df/2., x/2.) 0.99999993034278956 >>> df=1e-8;x=1e-12;special.gammainc(df/2., x/2.) 0.99999986126524631 >>> >>> df=0;x=1e-12;special.gammainc(df/2., x/2.) 0.0 >>> df=-1;x=1;special.gammainc(df/2., x/2.) 
0.0 > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From thomas.robitaille at gmail.com Tue Sep 22 17:47:38 2009 From: thomas.robitaille at gmail.com (Thomas Robitaille) Date: Tue, 22 Sep 2009 14:47:38 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] scipy.weave crashes for pure inline Message-ID: <25530916.post@talk.nabble.com> Hi, I'm using a recent svn revision of scipy (5925). After installing it I went to scipy/weave/examples and ran 'python array3d.py'. I get the following error message (below). Can other people reproduce this problem? If not, maybe it's some local installation issue. Thanks, Thomas --- numpy: [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]] Pure Inline: Traceback (most recent call last): File "array3d.py", line 105, in main() File "array3d.py", line 98, in main pure_inline(arr) File "array3d.py", line 57, in pure_inline weave.inline(code, ['arr']) File "/Users/tom/Library/Python/2.6/site-packages/scipy/weave/inline_tools.py", line 324, in inline results = attempt_function_call(code,local_dict,global_dict) File "/Users/tom/Library/Python/2.6/site-packages/scipy/weave/inline_tools.py", line 392, in attempt_function_call function_list = function_catalog.get_functions(code,module_dir) File "/Users/tom/Library/Python/2.6/site-packages/scipy/weave/catalog.py", line 615, in get_functions function_list = self.get_cataloged_functions(code) File "/Users/tom/Library/Python/2.6/site-packages/scipy/weave/catalog.py", line 529, in get_cataloged_functions if cat is not None and code in cat: File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/shelve.py", line 110, in __contains__ return key in self.dict File "/Users/tom/Library/Python/2.6/site-packages/scipy/io/dumbdbm_patched.py", line 73, in __getitem__ pos, siz = self._index[key] # may raise KeyError KeyError: 0 -- View this message in context: http://www.nabble.com/scipy.weave-crashes-for-pure-inline-tp25530916p25530916.html Sent from the Scipy-User mailing list archive at Nabble.com. From jelmer.oosthoek at tno.nl Wed Sep 23 08:40:21 2009 From: jelmer.oosthoek at tno.nl (Oosthoek, J.H.P. (Jelmer)) Date: Wed, 23 Sep 2009 14:40:21 +0200 Subject: [SciPy-User] interpolation question Message-ID: <7997E92D-6869-402B-B100-24D846A0090E@mimectl> Hi, Is it possible using scipy to fill the gaps in one list using another list? I have two lists, one with time values and one with meter values. Both lists are the same length and their indexes are linked (timelist[i] and depthlist[i] belong to the same location). The timevalue list has gaps (nodata values). The shape of both lists is very similar. What I would like to do is to use the depthlist shape to fill in the gaps of the timelist. Is there a method within scipy which does the trick? I first tried simple regression but that didn't result in a smooth line. I would like to be able to use the values bordering each gap as control points. Thanks in advance, Jelmer Oosthoek This e-mail and its contents are subject to the DISCLAIMER at http://www.tno.nl/disclaimer/email.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zachary.pincus at yale.edu Wed Sep 23 09:53:09 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 23 Sep 2009 09:53:09 -0400 Subject: [SciPy-User] interpolation question In-Reply-To: <7997E92D-6869-402B-B100-24D846A0090E@mimectl> References: <7997E92D-6869-402B-B100-24D846A0090E@mimectl> Message-ID: > Is it possible using scipy to fill the gaps in one list using > another list? > > I have two lists, one with time values and one with meter values. > Both lists are the same length and their indexes are linked > (timelist[i] and depthlist[i] belong to the same location). The > timevalue list has gaps (nodata values). > > The shape of both lists is very similar. What I would like to do is > to use the depthlist shape to fill in the gaps of the timelist. Is > there a method within scipy which does the trick? I first tried > simple regression but that didn't result in a smooth line. I would > like to be able to use the values bordering each gap as control > points. First: do these (depth, time) pairs, sorted by depth, describe a proper function (e.g. no depth with two associated times, etc.)? If not, you'll need to figure out a parametric representation of the function: e.g. use the indices (i, depth) and (i, time) to parameterized a 2D curve. Assuming the former case, the first thing you'll need to do is make three lists: depth_in with the depth values which have associated times, time_in with those time values, and depth_out, with the depth values that have no associated times. times_out = numpy.interp(depth_out, depth_in, time_in) will then linearly interpolate the time values for the depths in the depth_out list. You can also use the spline interpolation routines in scipy.interpolate. There are some convenient wrapper classes, but the ones I use most often are the raw fitpack routines: smoothing = upper bound on sum of squared distances between the fit curve and the (depth, time) input points. Can be zero, but then the spline might be prone to ringing. order = order of spline interpolation: 0 is nearest-neighbor, 1 is linear, 3 is recommended in general. tck = scipy.interpolate.splrep(depth_in, time_in, k=order, s=smoothing) times_out = scipy.interpolate.splev(depth_out, tck) You'll probably want to look at what times the spline provides for the depths provided in depth_in, to gauge if the curve is being over- smoothed. In the parametric case, you'll want to either do two rounds of linear interpolation for the (i, depth) and (i, time) curves, or use scipy.interpolate.splprep. Also there is scipy.interpolate.Rbf which uses radial basis functions to interpolate scattered data points. This might be a very smooth way of interpolating your data as well. Zach From josef.pktd at gmail.com Wed Sep 23 10:17:02 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 23 Sep 2009 10:17:02 -0400 Subject: [SciPy-User] interpolation question In-Reply-To: References: <7997E92D-6869-402B-B100-24D846A0090E@mimectl> Message-ID: <1cd32cbb0909230717k7cb7bb19tb801b516709c1915@mail.gmail.com> On Wed, Sep 23, 2009 at 9:53 AM, Zachary Pincus wrote: >> Is it possible using scipy to fill the gaps in one list using >> another list? >> >> I have two lists, one with time values and one with meter values. >> Both lists are the same length and their indexes are linked >> (timelist[i] and depthlist[i] belong to the same location). The >> timevalue list has gaps (nodata values). >> >> The shape of both lists is very similar. 
What I would like to do is >> to use the depthlist shape to fill in the gaps of the timelist. Is >> there a method within scipy which does the trick? I first tried >> simple regression but that didn't result in a smooth line. I would >> like to be able to use the values bordering each gap as control >> points. > > > First: do these (depth, time) pairs, sorted by depth, describe a > proper function (e.g. no depth with two associated times, etc.)? If > not, you'll need to figure out a parametric representation of the > function: e.g. use the indices (i, depth) and (i, time) to > parameterized a 2D curve. > > Assuming the former case, the first thing you'll need to do is make > three lists: depth_in with the depth values which have associated > times, time_in with those time values, and depth_out, with the depth > values that have no associated times. > > times_out = numpy.interp(depth_out, depth_in, time_in) will then > linearly interpolate the time values for the depths in the depth_out > list. > > You can also use the spline interpolation routines in > scipy.interpolate. There are some convenient wrapper classes, but the > ones I use most often are the raw fitpack routines: > > smoothing = upper bound on sum of squared distances between the fit > curve and the (depth, time) input points. Can be zero, but then the > spline might be prone to ringing. > order = order of spline interpolation: 0 is nearest-neighbor, 1 is > linear, 3 is recommended in general. > tck = scipy.interpolate.splrep(depth_in, time_in, k=order, s=smoothing) > times_out = scipy.interpolate.splev(depth_out, tck) > > You'll probably want to look at what times the spline provides for the > depths provided in depth_in, to gauge if the curve is being over- > smoothed. > > In the parametric case, you'll want to either do two rounds of linear > interpolation for the (i, depth) and (i, time) curves, or use > scipy.interpolate.splprep. > > Also there is scipy.interpolate.Rbf which uses radial basis functions > to interpolate scattered data points. This might be a very smooth way > of interpolating your data as well. > > Zach > If you want to use the regression approach instead of interpolation, then I would try kernel regression, which would locally be much smoother than linear regression, but might in the outcome be very similar to scipy.interpolate.Rbf. But, except for some toy examples, I haven't used this. Josef > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From jelmer.oosthoek at tno.nl Wed Sep 23 12:10:41 2009 From: jelmer.oosthoek at tno.nl (Oosthoek, J.H.P. (Jelmer)) Date: Wed, 23 Sep 2009 18:10:41 +0200 Subject: [SciPy-User] interpolation question In-Reply-To: <1cd32cbb0909230717k7cb7bb19tb801b516709c1915@mail.gmail.com> References: <7997E92D-6869-402B-B100-24D846A0090E@mimectl> , <1cd32cbb0909230717k7cb7bb19tb801b516709c1915@mail.gmail.com> Message-ID: <7F71638F-379F-46D5-9A71-6BF2302D2D2D@mimectl> Dear Zach and Josef, Thanks for your help! I think this will be more than enough for me to get this working :) Thanks! Jelmer Van: josef.pktd at gmail.com Verzonden: wo 9/23/2009 16:17 Aan: SciPy Users List Onderwerp: Re: [SciPy-User] interpolation question On Wed, Sep 23, 2009 at 9:53 AM, Zachary Pincus wrote: >> Is it possible using scipy to fill the gaps in one list using >> another list? >> >> I have two lists, one with time values and one with meter values. 
>> Both lists are the same length and their indexes are linked >> (timelist[i] and depthlist[i] belong to the same location). The >> timevalue list has gaps (nodata values). >> >> The shape of both lists is very similar. What I would like to do is >> to use the depthlist shape to fill in the gaps of the timelist. Is >> there a method within scipy which does the trick? I first tried >> simple regression but that didn't result in a smooth line. I would >> like to be able to use the values bordering each gap as control >> points. > > > First: do these (depth, time) pairs, sorted by depth, describe a > proper function (e.g. no depth with two associated times, etc.)? If > not, you'll need to figure out a parametric representation of the > function: e.g. use the indices (i, depth) and (i, time) to > parameterized a 2D curve. > > Assuming the former case, the first thing you'll need to do is make > three lists: depth_in with the depth values which have associated > times, time_in with those time values, and depth_out, with the depth > values that have no associated times. > > times_out = numpy.interp(depth_out, depth_in, time_in) will then > linearly interpolate the time values for the depths in the depth_out > list. > > You can also use the spline interpolation routines in > scipy.interpolate. There are some convenient wrapper classes, but the > ones I use most often are the raw fitpack routines: > > smoothing = upper bound on sum of squared distances between the fit > curve and the (depth, time) input points. Can be zero, but then the > spline might be prone to ringing. > order = order of spline interpolation: 0 is nearest-neighbor, 1 is > linear, 3 is recommended in general. > tck = scipy.interpolate.splrep(depth_in, time_in, k=order, s=smoothing) > times_out = scipy.interpolate.splev(depth_out, tck) > > You'll probably want to look at what times the spline provides for the > depths provided in depth_in, to gauge if the curve is being over- > smoothed. > > In the parametric case, you'll want to either do two rounds of linear > interpolation for the (i, depth) and (i, time) curves, or use > scipy.interpolate.splprep. > > Also there is scipy.interpolate.Rbf which uses radial basis functions > to interpolate scattered data points. This might be a very smooth way > of interpolating your data as well. > > Zach > If you want to use the regression approach instead of interpolation, then I would try kernel regression, which would locally be much smoother than linear regression, but might in the outcome be very similar to scipy.interpolate.Rbf. But, except for some toy examples, I haven't used this. Josef > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user This e-mail and its contents are subject to the DISCLAIMER at http://www.tno.nl/disclaimer/email.html -------------- next part -------------- An HTML attachment was scrubbed... 
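Pulling the recipe from this thread together, here is a minimal end-to-end sketch of the linear route Zach describes above; the -999.0 nodata sentinel and the toy arrays are stand-ins, not part of the original data:

import numpy as np

timelist = np.array([0.0, 1.5, -999.0, 4.2, -999.0, 7.1])   # -999.0 marks nodata
depthlist = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])

good = timelist != -999.0               # depths with a known time
depth_in, time_in = depthlist[good], timelist[good]
depth_out = depthlist[~good]            # depths whose time is missing

# linear interpolation of the missing times, as suggested above
times_out = np.interp(depth_out, depth_in, time_in)
filled = timelist.copy()
filled[~good] = times_out

This assumes the (depth, time) pairs describe a proper function, as discussed above; otherwise the parametric treatment applies.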
URL: From josef.pktd at gmail.com Wed Sep 23 16:42:33 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 23 Sep 2009 16:42:33 -0400 Subject: [SciPy-User] interpolation question In-Reply-To: <7F71638F-379F-46D5-9A71-6BF2302D2D2D@mimectl> References: <7997E92D-6869-402B-B100-24D846A0090E@mimectl> <1cd32cbb0909230717k7cb7bb19tb801b516709c1915@mail.gmail.com> <7F71638F-379F-46D5-9A71-6BF2302D2D2D@mimectl> Message-ID: <1cd32cbb0909231342o2fa424f6ye22fed26c82cc9eb@mail.gmail.com> On Wed, Sep 23, 2009 at 12:10 PM, Oosthoek, J.H.P. (Jelmer) wrote: > Dear Zach and Josef, > > Thanks for your help! I think this will be more than enough for me to get > this working :) > > Thanks! > > Jelmer > ________________________________ > Van: josef.pktd at gmail.com > Verzonden: wo 9/23/2009 16:17 > Aan: SciPy Users List > Onderwerp: Re: [SciPy-User] interpolation question > > On Wed, Sep 23, 2009 at 9:53 AM, Zachary Pincus > wrote: >>> Is it possible using scipy to fill the gaps in one list using >>> another list? >>> >>> I have two lists, one with time values and one with meter values. >>> Both lists are the same length and their indexes are linked >>> (timelist[i] and depthlist[i] belong to the same location). The >>> timevalue list has gaps (nodata values). >>> >>> The shape of both lists is very similar. What I would like to do is >>> to use the depthlist shape to fill in the gaps of the timelist. Is >>> there a method within scipy which does the trick? I first tried >>> simple regression but that didn't result in a smooth line. I would >>> like to be able to use the values bordering each gap as control >>> points. >> >> >> First: do these (depth, time) pairs, sorted by depth, describe a >> proper function (e.g. no depth with two associated times, etc.)? If >> not, you'll need to figure out a parametric representation of the >> function: e.g. use the indices (i, depth) and (i, time) to >> parameterized a 2D curve. >> >> Assuming the former case, the first thing you'll need to do is make >> three lists: depth_in with the depth values which have associated >> times, time_in with those time values, and depth_out, with the depth >> values that have no associated times. >> >> times_out = numpy.interp(depth_out, depth_in, time_in) will then >> linearly interpolate the time values for the depths in the depth_out >> list. >> >> You can also use the spline interpolation routines in >> scipy.interpolate. There are some convenient wrapper classes, but the >> ones I use most often are the raw fitpack routines: >> >> smoothing = upper bound on sum of squared distances between the fit >> curve and the (depth, time) input points. Can be zero, but then the >> spline might be prone to ringing. >> order = order of spline interpolation: 0 is nearest-neighbor, 1 is >> linear, 3 is recommended in general. >> tck = scipy.interpolate.splrep(depth_in, time_in, k=order, s=smoothing) >> times_out = scipy.interpolate.splev(depth_out, tck) >> >> You'll probably want to look at what times the spline provides for the >> depths provided in depth_in, to gauge if the curve is being over- >> smoothed. >> >> In the parametric case, you'll want to either do two rounds of linear >> interpolation for the (i, depth) and (i, time) curves, or use >> scipy.interpolate.splprep. >> >> Also there is scipy.interpolate.Rbf which uses radial basis functions >> to interpolate scattered data points. This might be a very smooth way >> of interpolating your data as well. 
>> >> Zach >> > > If you want to use the regression approach instead of interpolation, then > I would try kernel regression, which would locally be much smoother than > linear regression, but might in the outcome be very similar to > scipy.interpolate.Rbf. > But, except for some toy examples, I haven't used this. If you are interested, you could try out the attached code. I wrote this some time ago and posted it to the mailing list, but I don't think it has seen any real use. Eventually, I would like to include something like this in scikits.statsmodels. Josef > > Josef > >> >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > This e-mail and its contents are subject to the DISCLAIMER at > http://www.tno.nl/disclaimer/email.html > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- '''Kernel Ridge Regression for local non-parametric regression''' import numpy as np from scipy import spatial as ssp from numpy.testing import assert_equal import matplotlib.pylab as plt def plt_closeall(n=10): '''close a number of open matplotlib windows''' for i in range(n): plt.close() def kernel_rbf(x,y,scale=1, **kwds): #scale = kwds.get('scale',1) dist = ssp.minkowski_distance_p(x[:,np.newaxis,:],y[np.newaxis,:,:],2) return np.exp(-0.5/scale*(dist)) def kernel_euclid(x,y,p=2, **kwds): return ssp.minkowski_distance(x[:,np.newaxis,:],y[np.newaxis,:,:],p) class GaussProcess(object): '''class to perform kernel ridge regression (gaussian process) Warning: this class is memory intensive, it creates nobs x nobs distance matrix and its inverse, where nobs is the number of rows (observations). See sparse version for larger number of observations Notes ----- Todo: * normalize multidimensional x array on demand, either by var or cov * add confidence band * automatic selection or proposal of smoothing parameters Reference --------- Rasmussen, C.E. and C.K.I. Williams, 2006, Gaussian Processes for Machine Learning, the MIT Press, www.GaussianProcess.org/gpal, chapter 2 ''' def __init__(self, x,y=None, kernel=kernel_rbf, scale=0.5, ridgecoeff = 1e-10, **kwds ): ''' Parameters ---------- x : 2d array (N,K) data array of explanatory variables, columns represent variables rows represent observations y : 2d array (N,1) (optional) endogenous variable that should be fitted or predicted can alternatively be specified as parameter to fit method kernel : function, default: kernel_rbf kernel: (x1,x2)->kernel matrix is a function that takes as parameter two column arrays and return the kernel or distance matrix scale : float (optional) smoothing parameter for the rbf kernel ridgecoeff : float (optional) coefficient that is multiplied with the identity matrix in the ridge regression Notes ----- After initialization, kernel matrix is calculated and if y is given as parameter then also the linear regression parameter and the fitted or estimated y values, yest, are calculated. yest is available as an attribute in this case. Both scale and the ridge coefficient smooth the fitted curve. 
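Example
-------
A short usage sketch (for illustration only; it assumes the
module-level kernel_rbf and a 2d x, as in example1/example2 below):

>>> import numpy as np
>>> x = np.linspace(0, 5, 50)[:, np.newaxis]
>>> y = np.sin(x)
>>> gp = GaussProcess(x[::2, :], y[::2, :], kernel=kernel_rbf,
...                   scale=0.5, ridgecoeff=1e-4)
>>> yhat = gp.predict(x)   # fitted values at all 50 points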
''' self.x = x self.kernel = kernel self.scale = scale self.ridgecoeff = ridgecoeff self.distxsample = kernel(x,x,scale=scale) self.Kinv = np.linalg.inv(self.distxsample + np.eye(*self.distxsample.shape)*ridgecoeff) if not y is None: self.y = y self.yest = self.fit(y) def fit(self,y): '''fit the training explanatory variables to a sample ouput variable''' self.parest = np.dot(self.Kinv,y) yhat = np.dot(self.distxsample,self.parest) return yhat ## print ds33.shape ## ds33_2 = kernel(x,x[::k,:],scale=scale) ## dsinv = np.linalg.inv(ds33+np.eye(*distxsample.shape)*ridgecoeff) ## B = np.dot(dsinv,y[::k,:]) def predict(self,x): '''predict new y values for a given array of explanatory variables''' self.xpredict = x distxpredict = self.kernel(x,self.x,scale=self.scale) self.ypredict = np.dot(distxpredict,self.parest) return self.ypredict def plot(self, y, plt=plt ): '''some basic plots''' #todo return proper graph handles plt.figure(); plt.plot(self.x,self.y,'bo-',self.x,self.yest,'r.-') plt.title('sample (training) points') plt.figure() plt.plot(self.xpredict,y,'bo-',self.xpredict,self.ypredict,'r.-') plt.title('all points') def example1(): m,k = 500,4 upper = 6 scale=10 xs1a = np.linspace(1,upper,m)[:,np.newaxis] xs1 = xs1a*np.ones((1,4)) + 1/(1.0+np.exp(np.random.randn(m,k))) xs1 /= np.std(xs1[::k,:],0) # normalize scale, could use cov to normalize y1true = np.sum(np.sin(xs1)+np.sqrt(xs1),1)[:,np.newaxis] y1 = y1true + 0.250 * np.random.randn(m,1) stride = 2 #use only some points as trainig points e.g 2 means every 2nd gp1 = GaussProcess(xs1[::stride,:],y1[::stride,:], kernel=kernel_euclid, ridgecoeff=1e-10) yhatr1 = gp1.predict(xs1) plt.figure() plt.plot(y1true, y1,'bo',y1true, yhatr1,'r.') plt.title('euclid kernel: true y versus noisy y and estimated y') plt.figure() plt.plot(y1,'bo-',y1true,'go-',yhatr1,'r.-') plt.title('euclid kernel: true (green), noisy (blue) and estimated (red) '+ 'observations') gp2 = GaussProcess(xs1[::stride,:],y1[::stride,:], kernel=kernel_rbf, scale=scale, ridgecoeff=1e-1) yhatr2 = gp2.predict(xs1) plt.figure() plt.plot(y1true, y1,'bo',y1true, yhatr2,'r.') plt.title('rbf kernel: true versus noisy (blue) and estimated (red) observations') plt.figure() plt.plot(y1,'bo-',y1true,'go-',yhatr2,'r.-') plt.title('rbf kernel: true (green), noisy (blue) and estimated (red) '+ 'observations') #gp2.plot(y1) def example2(m=100, scale=0.01, stride=2): #m,k = 100,1 upper = 6 xs1 = np.linspace(1,upper,m)[:,np.newaxis] y1true = np.sum(np.sin(xs1**2),1)[:,np.newaxis]/xs1 y1 = y1true + 0.05*np.random.randn(m,1) ridgecoeff = 1e-10 #stride = 2 #use only some points as trainig points e.g 2 means every 2nd gp1 = GaussProcess(xs1[::stride,:],y1[::stride,:], kernel=kernel_euclid, ridgecoeff=1e-10) yhatr1 = gp1.predict(xs1) plt.figure() plt.plot(y1true, y1,'bo',y1true, yhatr1,'r.') plt.title('euclid kernel: true versus noisy (blue) and estimated (red) observations') plt.figure() plt.plot(y1,'bo-',y1true,'go-',yhatr1,'r.-') plt.title('euclid kernel: true (green), noisy (blue) and estimated (red) '+ 'observations') gp2 = GaussProcess(xs1[::stride,:],y1[::stride,:], kernel=kernel_rbf, scale=scale, ridgecoeff=1e-2) yhatr2 = gp2.predict(xs1) plt.figure() plt.plot(y1true, y1,'bo',y1true, yhatr2,'r.') plt.title('rbf kernel: true versus noisy (blue) and estimated (red) observations') plt.figure() plt.plot(y1,'bo-',y1true,'go-',yhatr2,'r.-') plt.title('rbf kernel: true (green), noisy (blue) and estimated (red) '+ 'observations') #gp2.plot(y1) if __name__ == '__main__': example2() #example2(m=1000, 
scale=0.01) #example2(m=100, scale=0.5) # oversmoothing #example2(m=2000, scale=0.005) # this looks good for rbf, zoom in #example2(m=200, scale=0.01,stride=4) example1() plt.show() #plt_closeall() # use this to close the open figure windows -------------- next part -------------- import numpy as np import matplotlib.pyplot as plt from kernridgeregress_class import GaussProcess, kernel_euclid m,k = 50,4 upper = 6 scale=10 xs1 = np.linspace(1,upper,m)[:,np.newaxis] #xs1 = xs1a*np.ones((1,4)) + 1/(1.0+np.exp(np.random.randn(m,k))) #xs1 /= np.std(xs1[::k,:],0) # normalize scale, could use cov to normalize y1true = np.sum(np.sin(xs1)+np.sqrt(xs1),1)[:,np.newaxis] y1 = y1true + 0.010 * np.random.randn(m,1) stride = 3 #use only some points as trainig points e.g 2 means every 2nd gp1 = GaussProcess(xs1[::stride,:],y1[::stride,:], kernel=kernel_euclid, ridgecoeff=1e-10) yhatr1 = gp1.predict(xs1) plt.figure() plt.plot(y1true, y1,'bo',y1true, yhatr1,'r.') plt.title('euclid kernel: true y versus noisy y and estimated y') plt.figure() plt.plot(y1,'bo-',y1true,'go-',yhatr1,'r.-') plt.title('euclid kernel: true (green), noisy (blue) and estimated (red) '+ 'observations') From dan.stowell at elec.qmul.ac.uk Thu Sep 24 07:38:53 2009 From: dan.stowell at elec.qmul.ac.uk (Dan Stowell) Date: Thu, 24 Sep 2009 12:38:53 +0100 Subject: [SciPy-User] SciPy.stats.kde.gaussian_kde estimation of information-theoretic measures Message-ID: <4ABB5A4D.6080606@elec.qmul.ac.uk> Hi - I'd like to use SciPy.stats.kde.gaussian_kde to estimate Kullback-Leibler divergence. In other words, given KDE estimates of two different distributions p(x) and q(x) I'd like to evaluate things like integral of { p(x) log( p(x)/q(x) ) } Is this possible using gaussian_kde? The method kde.integrate_kde(other_kde) gets halfway there. Or if not, are there other modules that can do this kind of thing? Thanks for any suggestions Dan -- Dan Stowell Centre for Digital Music School of Electronic Engineering and Computer Science Queen Mary, University of London Mile End Road, London E1 4NS http://www.elec.qmul.ac.uk/department/staff/research/dans.htm http://www.mcld.co.uk/ From lorenzo.isella at gmail.com Thu Sep 24 08:17:08 2009 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 24 Sep 2009 14:17:08 +0200 Subject: [SciPy-User] Smart Hashing of Integer Numbers Message-ID: <4ABB6344.40305@gmail.com> Dear All, This is my problem: I have a couple of integer numbers (which are entries of a numpy array) and I would like to combine them unambiguously into a single (possibly short) integer number. There are two requirements (1) then function f(A,B)=C must be injective (2) it would be very pleasant to be able to decompose unambiguously C into A and B. I can skip condition (2) if nothing easy to implement comes to mind. I tried something like hash((A,B)), which certainly respects (1), but the result can be a 19-digit long integer, which may not be the easiest thing to read on certain platforms. An example taken from some of my data: In [1]: hash((1159,9)) Out[1]: 3712118181786491231 What I need is an algorithm which does not output huge integer numbers unless the numbers to combine are large themselves. Any idea is appreciated (even how to simply juxtapose A and B, like A=1159 and B =9 ---> C=11599 if there is nothing better). 
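One textbook construction that meets both requirements for non-negative integers is the Cantor pairing function, a bijection between pairs of non-negative integers and the non-negative integers with a closed-form inverse. A minimal sketch (plain Python, not scipy-specific; the float square root is fine for moderately sized inputs):

def pair(a, b):
    # Cantor pairing: bijective on pairs of non-negative integers
    return (a + b) * (a + b + 1) // 2 + b

def unpair(c):
    # closed-form inverse via the triangular-number root
    w = int(((8 * c + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    b = c - t
    return w - b, b

pair(1159, 9) gives 682705, and unpair(682705) recovers (1159, 9).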
Many thanks Lorenzo From harald.schilly at gmail.com Thu Sep 24 08:22:51 2009 From: harald.schilly at gmail.com (Harald Schilly) Date: Thu, 24 Sep 2009 14:22:51 +0200 Subject: [SciPy-User] Smart Hashing of Integer Numbers In-Reply-To: <4ABB6344.40305@gmail.com> References: <4ABB6344.40305@gmail.com> Message-ID: <20548feb0909240522m310560c5ga8cfc0c116796e0c@mail.gmail.com> On Thu, Sep 24, 2009 at 14:17, Lorenzo Isella wrote: > Dear All, > This is my problem: I have a couple of integer numbers (which are > entries of a numpy array) and I would like to combine them unambiguously > into a single (possibly short) integer number. Note, a hash code isn't unambiguous at all. Just as an idea, do this: C=2^A*3^B and to do the reverse, factor it. H From tioguerra at gmail.com Thu Sep 24 08:35:29 2009 From: tioguerra at gmail.com (Rodrigo Guerra) Date: Thu, 24 Sep 2009 21:35:29 +0900 Subject: [SciPy-User] building svn rev 5926 on snow leopard 10.6.1 In-Reply-To: <817c9f950909170005p6a0020d3xa4f27709770a1e34@mail.gmail.com> References: <817c9f950909170005p6a0020d3xa4f27709770a1e34@mail.gmail.com> Message-ID: <817c9f950909240535j1f7c3046qb2faa3458e2b52b@mail.gmail.com> On Thu, Sep 17, 2009 at 4:05 PM, Rodrigo Guerra wrote: > /System/Library/Frameworks/vecLib.framework/Headers/clapack.h:380: > error: expected declaration specifiers before > 'AVAILABLE_MAC_OS_X_VERSION_10_6_AND_LATER' Update: I managed to compile SciPy on Snow Leopard by using the 10.5 SDK: ~$ export CFLAGS='-isysroot /Developer/SDKs/MacOSX10.5.sdk' ~$ python setup.py build ~$ sudo python setup.py install But the original problem must still be there. Since this seems to be a pretty unusual problem, I guess my system is messed up, perhaps missing something somewhere. I got the same kind of error message trying to compile MPlayer. From robince at gmail.com Thu Sep 24 08:41:51 2009 From: robince at gmail.com (Robin) Date: Thu, 24 Sep 2009 13:41:51 +0100 Subject: [SciPy-User] Smart Hashing of Integer Numbers In-Reply-To: <4ABB6344.40305@gmail.com> References: <4ABB6344.40305@gmail.com> Message-ID: On Thu, Sep 24, 2009 at 1:17 PM, Lorenzo Isella wrote: > Dear All, > This is my problem: I have a couple of integer numbers (which are > entries of a numpy array) and I would like to combine them unambiguously > into a single (possibly short) integer number. > There are two requirements > (1) the function f(A,B)=C must be injective > (2) it would be very pleasant to be able to decompose unambiguously C > into A and B If you treat your pair of numbers A,B as a length 2 word with base m = (maximum possible value of A or B) + 1, then you can get what you want by converting to decimal and back. e.g. C = A*m + B Here are some (probably slightly iffy) functions I have to do this (be careful with the dimensions of what you pass in): def base2dec(x,b): """Convert a numerical vector to its decimal value in a given base b.""" xs = x.shape z = b**np.arange((xs[1]-1),-0.5,-1) y = np.asarray(np.dot(x, z)) return y def dec2base(x, b, digits): """Convert decimal value to a row of (digits) values representing it in a given base b.""" xs = x.shape if xs[1] != 1: raise ValueError, "Input x must be a column vector!"
power = np.ones((xs[0],1)) * (b ** np.c_[digits-1:-0.5:-1,].T) x = np.tile(x,(1,digits)) y = np.floor( np.remainder(x, b*power) / power ) return y Cheers Robin From josef.pktd at gmail.com Thu Sep 24 09:19:08 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 24 Sep 2009 09:19:08 -0400 Subject: [SciPy-User] Smart Hashing of Integer Numbers In-Reply-To: References: <4ABB6344.40305@gmail.com> Message-ID: <1cd32cbb0909240619r3dcc963fr5c90e66784235925@mail.gmail.com> On Thu, Sep 24, 2009 at 8:41 AM, Robin wrote: > On Thu, Sep 24, 2009 at 1:17 PM, Lorenzo Isella > wrote: >> Dear All, >> This is my problem: I have a couple of integer numbers (which are >> entries of a numpy array) and I would like to combine them unambiguously >> into a single (possibly short) integer number. >> There are two requirements >> (1) then function f(A,B)=C must be injective >> (2) it would be very pleasant to be able to decompose unambiguously C >> into A and B > > If you treat your pair of numbers A,B as a length 2 word with base m = > maximum possible value of A or B, then you can get what you want by > converting to decimal and back. > > eg C = A^m + B > > Here are some (probably slightly iffy) functions I have to this (be > careful with the dimensions of what you pass in): > > def base2dec(x,b): > ? ?"""Convert a numerical vector to its decimal value in a given base b.""" > ? ?xs = x.shape > ? ?z = b**np.arange((xs[1]-1),-0.5,-1) > ? ?y = np.asarray(np.dot(x, z)) > ? ?return y > > def dec2base(x, b, digits): > ? ?"""Convert decimal value to a row of (digits) values representing it in a > ? ?given base b.""" > ? ?xs = x.shape > ? ?if xs[1] != 1: > ? ? ? ?raise ValueError, "Input x must be a column vector!" > ? ?power = np.ones((xs[0],1)) * (b ** np.c_[digits-1:-0.5:-1,].T) > ? ?x = np.tile(x,(1,digits)) > ? ?y = np.floor( np.remainder(x, b*power) / power ) > ? ?return y These look like useful functions, I always have to think about the conversion and what I come up with is never very general. I would like to include something like this as helper functions in scipy.stats or scikits.statsmodels for combining arrays of integer labels. Josef > > Cheers > > Robin > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Thu Sep 24 09:49:41 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 24 Sep 2009 09:49:41 -0400 Subject: [SciPy-User] SciPy.stats.kde.gaussian_kde estimation of information-theoretic measures In-Reply-To: <4ABB5A4D.6080606@elec.qmul.ac.uk> References: <4ABB5A4D.6080606@elec.qmul.ac.uk> Message-ID: <1cd32cbb0909240649o779e25b4lfce768aff5f5d0f4@mail.gmail.com> On Thu, Sep 24, 2009 at 7:38 AM, Dan Stowell wrote: > Hi - > > I'd like to use SciPy.stats.kde.gaussian_kde to estimate > Kullback-Leibler divergence. In other words, given KDE estimates of two > different distributions p(x) and q(x) I'd like to evaluate things like > > ? ?integral of { ?p(x) log( p(x)/q(x) ) ?} > > Is this possible using gaussian_kde? The method > kde.integrate_kde(other_kde) gets halfway there. Or if not, are there > other modules that can do this kind of thing? > > Thanks for any suggestions > Dan I never managed to figure out what integrate_kde and integrate_gaussian in stats.kde are good for. So if you find any hints or use cases, I would be very glad to hear them. Both functions are pure python and take the sum over the observed points. 
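For the original question, one estimator that needs nothing beyond evaluating the two fitted KDEs is the Monte Carlo (resubstitution) estimate of the KL divergence; this is a minimal sketch under that standard approach, not an application of integrate_kde:

import numpy as np
from scipy import stats

def kl_divergence_mc(xp, xq):
    # KL(p||q) is the expectation under p of log p(x) - log q(x);
    # approximate it by averaging over the samples that p was fitted on
    p = stats.gaussian_kde(xp)
    q = stats.gaussian_kde(xq)
    return np.mean(np.log(p(xp)) - np.log(q(xp)))

xp = np.random.randn(500)          # sample from p
xq = 0.5 + np.random.randn(500)    # sample from q
kl = kl_divergence_mc(xp, xq)

The estimator is biased for finite samples, but it is a common baseline when the dimension is too high for numerical integration.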
So, it might be possible to extend them to other cases. But I'm just guessing, since I never tried to figure out the theory for these functions without any use case. What is the dimension of your x? If it's small enough, numerical integration might also work. scipy.maxentropy might also have useful functions for this. But it is also a package that I haven't yet looked at in detail. And without sufficient background, I didn't understand much when looking at it. I hope someone has a better answer; I would also be interested in it. Josef > > -- > Dan Stowell > Centre for Digital Music > School of Electronic Engineering and Computer Science > Queen Mary, University of London > Mile End Road, London E1 4NS > http://www.elec.qmul.ac.uk/department/staff/research/dans.htm > http://www.mcld.co.uk/ > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From vanforeest at gmail.com Thu Sep 24 13:19:46 2009 From: vanforeest at gmail.com (nicky van foreest) Date: Thu, 24 Sep 2009 19:19:46 +0200 Subject: [SciPy-User] Smart Hashing of Integer Numbers In-Reply-To: <1cd32cbb0909240619r3dcc963fr5c90e66784235925@mail.gmail.com> References: <4ABB6344.40305@gmail.com> <1cd32cbb0909240619r3dcc963fr5c90e66784235925@mail.gmail.com> Message-ID: Hi, I also struggled with a similar problem for a while. I wanted to use arrays as indices to rows of a sparse matrix. For this purpose I needed the smallest possible unique integer for each new array. My solution is like this (with help of the scipy community of course). Cast the array to a dictionary key (details below), and use this key in a dict to map it to a number equal to the length of the dict. This has two nice properties: the mapping is a bijection, and the size of the target int is minimal. In some more detail: yourArray = [2,3] key = array(yourArray, dtype=int8) # I only needed very small ints in the array key.flags.writeable = False index = dict() # add keys like this: if key not in index: index[key] = len(index) Beware of the trick with setting the flags. Francesc Alted suggested this solution to me, but Robert Kern had some objections against it (see the mailing list) which were just a bit too difficult for me to grasp. Hope this helps. bye Nicky 2009/9/24 : > On Thu, Sep 24, 2009 at 8:41 AM, Robin wrote: >> On Thu, Sep 24, 2009 at 1:17 PM, Lorenzo Isella >> wrote: >>> Dear All, >>> This is my problem: I have a couple of integer numbers (which are >>> entries of a numpy array) and I would like to combine them unambiguously >>> into a single (possibly short) integer number. >>> There are two requirements >>> (1) the function f(A,B)=C must be injective >>> (2) it would be very pleasant to be able to decompose unambiguously C >>> into A and B >> >> If you treat your pair of numbers A,B as a length 2 word with base m = >> (maximum possible value of A or B) + 1, then you can get what you want by >> converting to decimal and back. >> >> e.g. C = A*m + B >> >> Here are some (probably slightly iffy) functions I have to do this (be >> careful with the dimensions of what you pass in): >> >> def base2dec(x,b): >> """Convert a numerical vector to its decimal value in a given base b.""" >> xs = x.shape >> z = b**np.arange((xs[1]-1),-0.5,-1) >> y = np.asarray(np.dot(x, z)) >> return y >> >> def dec2base(x, b, digits): >> """Convert decimal value to a row of (digits) values representing it in a >> given base b.""" >> xs = x.shape >> if xs[1] != 1:
>> raise ValueError, "Input x must be a column vector!" >> power = np.ones((xs[0],1)) * (b ** np.c_[digits-1:-0.5:-1,].T) >> x = np.tile(x,(1,digits)) >> y = np.floor( np.remainder(x, b*power) / power ) >> return y > > These look like useful functions, I always have to think about the > conversion, and what I come up with is never very general. > I would like to include something like this as helper functions in > scipy.stats or scikits.statsmodels for combining arrays of integer > labels. > > Josef > > >> >> Cheers >> >> Robin >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From stefan at sun.ac.za Thu Sep 24 13:43:45 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 24 Sep 2009 19:43:45 +0200 Subject: [SciPy-User] ANN: Image Processing SciKit Message-ID: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> Hi all, After a short sprint at SciPy 2009, we've put together the infrastructure for an Image Processing SciKit. The source code [1] and documentation [2] are available online. With the infrastructure in place, the next focus will be on getting contributions (listed at [3]) merged. If you have code for generally useful image processing algorithms available, please consider contributing. Feel free to join further discussions on the scikit mailing list [4]. Kind regards Stéfan [1] http://github.com/stefanv/scikits.image [2] http://stefanv.github.com/scikits.image [3] http://conference.scipy.org/sprints [4] http://groups.google.com/group/scikits-image From robert.kern at gmail.com Thu Sep 24 14:56:47 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Sep 2009 13:56:47 -0500 Subject: [SciPy-User] SciPy.stats.kde.gaussian_kde estimation of information-theoretic measures In-Reply-To: <1cd32cbb0909240649o779e25b4lfce768aff5f5d0f4@mail.gmail.com> References: <4ABB5A4D.6080606@elec.qmul.ac.uk> <1cd32cbb0909240649o779e25b4lfce768aff5f5d0f4@mail.gmail.com> Message-ID: <3d375d730909241156h41596905jc64fdeac2ac09e63@mail.gmail.com> On Thu, Sep 24, 2009 at 08:49, wrote: > I never managed to figure out what integrate_kde and > integrate_gaussian in stats.kde are good for. So if you find any hints > or use cases, I would be very glad to hear them. I told you the use case I implemented them for the last time you asked. Was something unclear about my explanation? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Thu Sep 24 15:00:34 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 24 Sep 2009 15:00:34 -0400 Subject: [SciPy-User] SciPy.stats.kde.gaussian_kde estimation of information-theoretic measures In-Reply-To: <3d375d730909241156h41596905jc64fdeac2ac09e63@mail.gmail.com> References: <4ABB5A4D.6080606@elec.qmul.ac.uk> <1cd32cbb0909240649o779e25b4lfce768aff5f5d0f4@mail.gmail.com> <3d375d730909241156h41596905jc64fdeac2ac09e63@mail.gmail.com> Message-ID: <1cd32cbb0909241200p4d1020b5y68af7364df5f551@mail.gmail.com> On Thu, Sep 24, 2009 at 2:56 PM, Robert Kern wrote: > On Thu, Sep 24, 2009 at 08:49,
wrote: > >> I never managed to figure out what integrate_kde and >> integrate_gaussian in stats.kde are good for. So if you find any hints >> or use cases, I would be very glad to hear them. > > I told you the use case I implemented them for the last time you > asked. Was something unclear about my explanation? But you didn't have the reference, and the description was too vague for me to spend time hunting it down. Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Thu Sep 24 15:04:59 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 24 Sep 2009 15:04:59 -0400 Subject: [SciPy-User] SciPy.stats.kde.gaussian_kde estimation of information-theoretic measures In-Reply-To: <1cd32cbb0909241200p4d1020b5y68af7364df5f551@mail.gmail.com> References: <4ABB5A4D.6080606@elec.qmul.ac.uk> <1cd32cbb0909240649o779e25b4lfce768aff5f5d0f4@mail.gmail.com> <3d375d730909241156h41596905jc64fdeac2ac09e63@mail.gmail.com> <1cd32cbb0909241200p4d1020b5y68af7364df5f551@mail.gmail.com> Message-ID: <1cd32cbb0909241204k678e5616g97503e6b53dc7cbb@mail.gmail.com> On Thu, Sep 24, 2009 at 3:00 PM, wrote: > On Thu, Sep 24, 2009 at 2:56 PM, Robert Kern wrote: >> On Thu, Sep 24, 2009 at 08:49, wrote: >> >>> I never managed to figure out what integrate_kde and >>> integrate_gaussian in stats.kde are good for. So if you find any hints >>> or use cases, I would be very glad to hear them. >> >> I told you the use case I implemented them for the last time you >> asked. Was something unclear about my explanation? > > But you didn't have the reference, and the description was too vague > for me to spend time hunting it down. What I mean is that I didn't try to find any references or use cases for it. Since I was mainly going through various parts of the scipy code, I didn't want to spend hours with Google just to figure out what each function might be good for. > > Josef > > >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma that is made terrible by our own mad attempt to interpret it as >> though it had an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From robert.kern at gmail.com Thu Sep 24 15:15:09 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Sep 2009 14:15:09 -0500 Subject: [SciPy-User] SciPy.stats.kde.gaussian_kde estimation of information-theoretic measures In-Reply-To: <1cd32cbb0909241204k678e5616g97503e6b53dc7cbb@mail.gmail.com> References: <4ABB5A4D.6080606@elec.qmul.ac.uk> <1cd32cbb0909240649o779e25b4lfce768aff5f5d0f4@mail.gmail.com> <3d375d730909241156h41596905jc64fdeac2ac09e63@mail.gmail.com> <1cd32cbb0909241200p4d1020b5y68af7364df5f551@mail.gmail.com> <1cd32cbb0909241204k678e5616g97503e6b53dc7cbb@mail.gmail.com> Message-ID: <3d375d730909241215p6b27f69am6608e0bc33c6ecd6@mail.gmail.com> On Thu, Sep 24, 2009 at 14:04, wrote: > On Thu, Sep 24, 2009 at 3:00 PM, wrote: >> On Thu, Sep 24, 2009 at 2:56 PM, Robert Kern wrote: >>> On Thu, Sep 24, 2009 at 08:49,
wrote: >>> >>>> I never managed to figure out what integrate_kde and >>>> integrate_gaussian in stats.kde are good for. So if you find any hints >>>> or use cases, I would be very glad to hear them. >>> >>> I told you the use case I implemented them for the last time you >>> asked. Was something unclear about my explanation? >> >> But you didn't have the reference, and the description was to vague >> for me to spend time hunting it down. > > What I mean is that I didn't try to find any references or use cases > for it. Since I was mainly going through various parts of the scipy > code, I didn't wan't to spend hours with Google just to figure out > what each function might be good for. Ah, you have a strange definition of "use case", then. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Sep 24 15:34:43 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Sep 2009 14:34:43 -0500 Subject: [SciPy-User] Smart Hashing of Integer Numbers In-Reply-To: References: <4ABB6344.40305@gmail.com> <1cd32cbb0909240619r3dcc963fr5c90e66784235925@mail.gmail.com> Message-ID: <3d375d730909241234p540f37o304ffe2a329ea127@mail.gmail.com> On Thu, Sep 24, 2009 at 12:19, nicky van foreest wrote: > Hi, > > i also struggled with a similar problem for a while. I wanted to use > arrays as indices to rows of a sparse matrix. For this purpose I > needed the smallest possible unique integer for each new array. > > My solution is like this (with help of the scipy community of course). > Cast the array to a dictionary key (details below), and use this key > in a dict to map it to a number equal to the length of the dict. This > has two nice properties: ?the mapping is a bijection, and the size of > the target int is minimal. > > In some more detail: > > yourArray = [2,3] > > key = array(yourArray, dtype=int8) # i only needed very small ints in the array > key.flags.writeable = False > > index = dict() > > # add keys like this: > > if key not in index: > ? ? index[key] = len(index) > > > Beware about the trick with setting the flags. Francesc Alted > suggested me this solution, but Robert Kern had some objections > against this (see the mailing list) which were just a bit too > difficult for me to grasp. Well, that code certainly doesn't work: In [1]: key = array([1, 2], dtype=uint8) In [2]: key.flags.writeable = False In [3]: hash(key) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /Users/rkern/ in () TypeError: unhashable type: 'numpy.ndarray' In [4]: index = {} In [5]: key in index --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /Users/rkern/ in () TypeError: unhashable type: 'numpy.ndarray' In [6]: index[key] = len(index) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /Users/rkern/ in () TypeError: unhashable type: 'numpy.ndarray' Did you actually mean something else? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From vanforeest at gmail.com Thu Sep 24 16:56:15 2009 From: vanforeest at gmail.com (nicky van foreest) Date: Thu, 24 Sep 2009 22:56:15 +0200 Subject: [SciPy-User] Smart Hashing of Integer Numbers In-Reply-To: <3d375d730909241234p540f37o304ffe2a329ea127@mail.gmail.com> References: <4ABB6344.40305@gmail.com> <1cd32cbb0909240619r3dcc963fr5c90e66784235925@mail.gmail.com> <3d375d730909241234p540f37o304ffe2a329ea127@mail.gmail.com> Message-ID: Sorry...and thanks again Robert. I forgot to add the .data to the key. This really works. #!/usr/bin/env python from numpy import * index = {} ar = [ [2,3], [5,4]] for a in ar: key = array(a, dtype=uint8) key.flags.writeable = False index[key.data] = len(index) for data in index: dum = frombuffer(data,dtype=uint8) print dum From robert.kern at gmail.com Thu Sep 24 19:06:07 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Sep 2009 18:06:07 -0500 Subject: [SciPy-User] Smart Hashing of Integer Numbers In-Reply-To: References: <4ABB6344.40305@gmail.com> <1cd32cbb0909240619r3dcc963fr5c90e66784235925@mail.gmail.com> <3d375d730909241234p540f37o304ffe2a329ea127@mail.gmail.com> Message-ID: <3d375d730909241606t649be954rab192488b29cefc8@mail.gmail.com> On Thu, Sep 24, 2009 at 15:56, nicky van foreest wrote: > Sorry...and thanks again Robert. ?I forgot to add the .data to the key. > > This really works. > > #!/usr/bin/env python > > from numpy import * > > index = {} > > ar = [ [2,3], [5,4]] > > for a in ar: > ? ?key = array(a, dtype=uint8) > ? ?key.flags.writeable = False > ? ?index[key.data] = len(index) > > for data in index: > ? ?dum = frombuffer(data,dtype=uint8) > ? ?print dum For this purpose, you might as well use key.tostring() instead of mucking about with setting the writeable flag and the .data buffer. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nicoletti at consorzio-innova.it Fri Sep 25 04:20:00 2009 From: nicoletti at consorzio-innova.it (nicoletti) Date: Fri, 25 Sep 2009 10:20:00 +0200 Subject: [SciPy-User] SmoothBivariateSpline returns always zeros! Message-ID: <4ABC7D30.5080804@consorzio-innova.it> Dear all, I have tried to use the class SmoothBivariateSpline, but it seems to me that it returns always zeros. I attach the output and the script file. {{{ >python -u "bspline.py" /usr/lib/python2.5/site-packages/scipy/interpolate/fitpack.py:763: DeprecationWarning: integer argument expected, got float tx,ty,nxest,nyest,wrk,lwrk1,lwrk2) /usr/lib/python2.5/site-packages/scipy/interpolate/fitpack2.py:439: UserWarning: ier=1368 warnings.warn(message) [[-2.89772727 -2.86668909 -2.83510787 ..., 0.65215576 0.67305161 0.69318182] [-2.87406809 -2.84319518 -2.81177699 ..., 0.67025301 0.69120171 0.71138701] [-2.85038278 -2.81967401 -2.78841772 ..., 0.68848547 0.70948813 0.72972964] ..., [-0.72972964 -0.70948813 -0.68848547 ..., 2.78841772 2.81967401 2.85038278] [-0.71138701 -0.69120171 -0.67025301 ..., 2.81177699 2.84319518 2.87406809] [-0.69318182 -0.67305161 -0.65215576 ..., 2.83510787 2.86668909 2.89772727]] [[ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] ..., [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 
0.]] >Exit code: 1 Time: 1,221 }}} *File python script bspline.py:* from numpy import * import scipy from enthought.chaco.shell import * from scipy import ndimage import numpy from scipy import interpolate x= numpy.linspace(-2,2,80) y = numpy.linspace(-2,2,80) z = x+ y xi= numpy.linspace(-1,1,100) yi = numpy.linspace(-2,2,100) tck = interpolate.bisplrep(x,y,z) res1 = interpolate.bisplev(xi,yi,tck) interp_ = interpolate.SmoothBivariateSpline(x,y,z,kx=5,ky=5) res2 = interp_(xi,yi) print res1, res2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jagan_cbe2003 at yahoo.co.in Fri Sep 25 10:08:26 2009 From: jagan_cbe2003 at yahoo.co.in (jagan prabhu) Date: Fri, 25 Sep 2009 07:08:26 -0700 (PDT) Subject: [SciPy-User] criteria to get the exit mode '0' in slsqp In-Reply-To: <3d375d730909181147jfb498c1ydd269177e74260e0@mail.gmail.com> Message-ID: <336972.80138.qm@web8319.mail.in.yahoo.com> Hi all, Thank you, it was a good answer. One step further, and more specific: what are the criteria to get exit mode '0' in the case of 'fmin_slsqp'? ------ Jagan --- On Sat, 19/9/09, Robert Kern wrote: From: Robert Kern Subject: Re: [SciPy-User] criteria to get the exit mode '0 To: "SciPy Users List" Date: Saturday, 19 September, 2009, 12:17 AM On Fri, Sep 18, 2009 at 08:40, jagan prabhu wrote: > > Hi, > > I am using the scipy optimization routines 'fmin_slsqp' and 'fmin_l_bfgs_b'; in both cases exit mode '0' means the optimization terminated successfully / convergence was achieved. > > What are the criteria to get exit mode '0'? > > Because if I change my initial parameters by a very small increment or decrement, I get a huge difference in my optimized function value & optimized parameter values. > > So I would like to know: > how does the optimization routine determine the optimized parameters and the optimized function value? It's slightly different for each routine, but basically, it stops when the derivatives at the test point are close enough to zero and the derivatives nearby show that you are at a minimum rather than a maximum or a saddle point. These are all local minimizers, meaning that they can get trapped in so-called "local minima" where there are little "valleys" in the function which are not the deepest. You want the deepest valley of them all, or the global minimum, but the fmin routines cannot guarantee that you will find it. They basically require that you start with an initial guess that is close enough to the global minimum that it manages to avoid all of the local minima. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From super.inframan at gmail.com Fri Sep 25 11:18:28 2009 From: super.inframan at gmail.com (Gustaf Nilsson) Date: Fri, 25 Sep 2009 16:18:28 +0100 Subject: [SciPy-User] ANN: Image Processing SciKit In-Reply-To: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> Message-ID: Interesting.
Maybe I'm missing something, but it would be nice to have some sort of presentation on the webpage showing how it works, without the need to download and install it? Gusty 2009/9/24 Stéfan van der Walt > Hi all, > > After a short sprint at SciPy 2009, we've put together the > infrastructure for an Image Processing SciKit. The source code [1] > and documentation [2] are available online. With the infrastructure in > place, the next focus will be on getting contributions (listed at [3]) > merged. > > If you have code for generally useful image processing algorithms > available, please consider contributing. Feel free to join further > discussions on the scikit mailing list [4]. > > Kind regards > Stéfan > > [1] http://github.com/stefanv/scikits.image > [2] http://stefanv.github.com/scikits.image > [3] http://conference.scipy.org/sprints > [4] http://groups.google.com/group/scikits-image > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Sep 25 11:21:28 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Sep 2009 10:21:28 -0500 Subject: [SciPy-User] criteria to get the exit mode '0' in slsqp In-Reply-To: <336972.80138.qm@web8319.mail.in.yahoo.com> References: <3d375d730909181147jfb498c1ydd269177e74260e0@mail.gmail.com> <336972.80138.qm@web8319.mail.in.yahoo.com> Message-ID: <3d375d730909250821p58a73d97r8a4da5ddb0736aab@mail.gmail.com> On Fri, Sep 25, 2009 at 09:08, jagan prabhu wrote: > > Hi all, > > Thank you, it was a good answer. One step further, and more specific: > > what are the criteria to get exit mode '0' in the case of 'fmin_slsqp'? I don't know off-hand. You will have to read the code. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dwf at cs.toronto.edu Fri Sep 25 19:42:27 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 25 Sep 2009 19:42:27 -0400 Subject: [SciPy-User] ANN: Image Processing SciKit In-Reply-To: References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> Message-ID: On 25-Sep-09, at 11:18 AM, Gustaf Nilsson wrote: > Maybe I'm missing something, but it would be nice to have some sort of > presentation on the webpage showing how it works, without the need to > download and install it? Really this is just a toolbox of functions; individual functions' documentation has usage examples, e.g. http://stefanv.github.com/scikits.image/api/scikits.image.transform.hough_transform.html Although, it would be really helpful to have the output of those plot commands. John Hunter's sampledoc tutorial ( http://matplotlib.sourceforge.net/sampledoc/ ) contains instructions on how to do the requisite Sphinx twiddling to get matplotlib plots plotted in the Sphinx output, it's just a matter of someone actually *doing* it. This is exactly the kind of low-hanging fruit a SciPy/scikits/open source newcomer (or long-time user, first-time contributor) could do to get their feet wet, by the way :) It's basically a matter of a) forking the project on GitHub, b) following the instructions at the sampledoc tutorial to make plots work, c) committing and pushing to your own github branch and pinging Stefan to go look/update the live docs.
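For anyone picking this up, the Sphinx twiddling is small. A minimal sketch, assuming a matplotlib recent enough to ship matplotlib.sphinxext (otherwise the sampledoc tutorial shows how to copy the extension files into the project):

# in the Sphinx conf.py of the docs
extensions = [
    'sphinx.ext.autodoc',
    'matplotlib.sphinxext.plot_directive',  # renders plots into the built pages
]

After that, a .. plot:: directive (with the :include-source: option if the code should be shown) in a docstring or .rst page gets its figure rendered into the HTML output.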
From dwf at cs.toronto.edu Fri Sep 25 19:42:27 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 25 Sep 2009 19:42:27 -0400 Subject: [SciPy-User] ANN: Image Processing SciKit In-Reply-To: References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> Message-ID:
On 25-Sep-09, at 11:18 AM, Gustaf Nilsson wrote: > Maybe I'm missing something, but it would be nice to have some sort of > presentation on the webpage showing how it works, without the need to > download and install it?
Really this is just a toolbox of functions; the individual functions' documentation has usage examples, e.g. http://stefanv.github.com/scikits.image/api/scikits.image.transform.hough_transform.html Although, it would be really helpful to have the output of those plot commands. John Hunter's sampledoc tutorial ( http://matplotlib.sourceforge.net/sampledoc/ ) contains instructions on how to do the requisite Sphinx twiddling to get matplotlib plots rendered in the Sphinx output; it's just a matter of someone actually *doing* it. This is exactly the kind of low-hanging fruit a SciPy/scikits/open source newcomer (or long-time user, first-time contributor) could do to get their feet wet, by the way :) It's basically a matter of a) forking the project on GitHub, b) following the instructions at the sampledoc tutorial to make plots work, c) committing and pushing to your own github branch and pinging Stefan to go look/update the live docs. I know Stefan's really busy with other obligations right now, as am I, but consider this an open invitation to help out if you have some time to spare. David
From washakie at gmail.com Sat Sep 26 07:34:24 2009 From: washakie at gmail.com (John [H2O]) Date: Sat, 26 Sep 2009 04:34:24 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] 2d interpolation, non-regular lat/lon grid - help with delauney/natgrid?? In-Reply-To: <6a17e9ee0908130937r6d09d406jc047f37442c340a9@mail.gmail.com> References: <24909685.post@talk.nabble.com> <24918109.post@talk.nabble.com> <3d375d730908111058m2c0fc5daw16fe9add8936d4ec@mail.gmail.com> <24943646.post@talk.nabble.com> <3d375d730908121611p1b8eb33cof3dc3ba831b8e7b1@mail.gmail.com> <24950836.post@talk.nabble.com> <6a17e9ee0908130220i24cad4cfx5ad556f5751bac07@mail.gmail.com> <24952162.post@talk.nabble.com> <6a17e9ee0908130548k6413dedfiaaee91e6410d296f@mail.gmail.com> <24954551.post@talk.nabble.com> <3d375d730908130927x65ed0d6fh2db206fa5afa0ad7@mail.gmail.com> <6a17e9ee0908130937r6d09d406jc047f37442c340a9@mail.gmail.com> Message-ID: <25624444.post@talk.nabble.com>
Raising the issue again, because I am still having problems. To explain my situation once more, I have a set of non-regular data spanning the N Pole. I want to interpolate it to a regular lat/lon grid. Here is my current approach:
# data_lon,data_lat are lat/lon pairs, irregularly spaced
# m is a npstere basemap instance
x,y = m(data_lon,data_lat)
reg_lon = np.arange(lon.min(),lon.max()+dres,dres)
nx = reg_lon.size
reg_lat = np.arange(lat.min(),lat.max()+dres,dres)
ny = reg_lat.size
grid_lon,grid_lat = m.makegrid(nx,ny)
# find the projected co-ordinates for the grid
grid_x, grid_y = m(grid_lon, grid_lat)
print "Using Triangulation"
# triangulate data
tri = delaunay.Triangulation(x,y)
# interpolate data
interp = tri.nn_interpolator(z)
Z0 = interp(grid_x, grid_y)
This works fine; however, I note that grid_lon, grid_lat are no longer equal to reg_lon, reg_lat. So it seems that my Z0 is not spaced regularly as reg_lon,reg_lat but rather according to grid_lon,grid_lat. This is fine for mapping in a projected space (i.e. using the basemap instance), but how can I 'reverse transform' the data back so that it has reg_lon,reg_lat as its coordinates? Thanks again! -- View this message in context: http://www.nabble.com/2d-interpolation%2C-non-regular-lat-lon-grid-tp24909685p25624444.html Sent from the Scipy-User mailing list archive at Nabble.com.
From tpk at kraussfamily.org Sat Sep 26 08:35:20 2009 From: tpk at kraussfamily.org (Tom K.) Date: Sat, 26 Sep 2009 05:35:20 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ANN: Image Processing SciKit In-Reply-To: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> Message-ID: <25624836.post@talk.nabble.com>
Stéfan van der Walt wrote: > > After a short sprint at SciPy 2009, we've put together the > infrastructure for an Image Processing SciKit.... > Stéfan - I tried to browse to "image" from the main "scikits" link at the scipy.org page - http://scikits.appspot.com/scikits. Seems this page needs to either be updated, or link to the "new" official list of scikits (is there one? :-). -- View this message in context: http://www.nabble.com/ANN%3A-Image-Processing-SciKit-tp25599559p25624836.html Sent from the Scipy-User mailing list archive at Nabble.com.
From tpk at kraussfamily.org Sat Sep 26 08:37:31 2009 From: tpk at kraussfamily.org (Tom K.)
Date: Sat, 26 Sep 2009 05:37:31 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] ANN: Image Processing SciKit In-Reply-To: <25624836.post@talk.nabble.com> References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> <25624836.post@talk.nabble.com> Message-ID: <25624853.post@talk.nabble.com>
Tom K. wrote: > > > > Stéfan van der Walt wrote: >> >> After a short sprint at SciPy 2009, we've put together the >> infrastructure for an Image Processing SciKit.... >> > > Stéfan - I tried to browse to "image" from the main "scikits" link at the > scipy.org page - http://scikits.appspot.com/scikits. Seems this page > needs to either be updated, or link to the "new" official list of scikits > (is there one? :-). >
Also missing from here: http://projects.scipy.org/scikits -- View this message in context: http://www.nabble.com/ANN%3A-Image-Processing-SciKit-tp25599559p25624853.html Sent from the Scipy-User mailing list archive at Nabble.com.
From silva at lma.cnrs-mrs.fr Sat Sep 26 09:09:42 2009 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Sat, 26 Sep 2009 15:09:42 +0200 Subject: [SciPy-User] Scikits.samplerate Message-ID: <1253970582.6928.7.camel@localhost.localdomain>
I had a glance at the samplerate scikit. I think the reference provided in the description is not appropriate. These HTML pages are much more explicit: http://www-ccrma.stanford.edu/~jos/resample/ or the PDF document http://www-ccrma.stanford.edu/~jos/resample/resample.pdf David, could you correct this? -- Fabrice Silva Laboratory of Mechanics and Acoustics - CNRS 31 chemin Joseph Aiguier, 13402 Marseille, France.
From robert.kern at gmail.com Sat Sep 26 14:03:25 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Sep 2009 13:03:25 -0500 Subject: [SciPy-User] [SciPy-user] 2d interpolation, non-regular lat/lon grid - help with delauney/natgrid?? In-Reply-To: <25624444.post@talk.nabble.com> References: <24909685.post@talk.nabble.com> <3d375d730908121611p1b8eb33cof3dc3ba831b8e7b1@mail.gmail.com> <24950836.post@talk.nabble.com> <6a17e9ee0908130220i24cad4cfx5ad556f5751bac07@mail.gmail.com> <24952162.post@talk.nabble.com> <6a17e9ee0908130548k6413dedfiaaee91e6410d296f@mail.gmail.com> <24954551.post@talk.nabble.com> <3d375d730908130927x65ed0d6fh2db206fa5afa0ad7@mail.gmail.com> <6a17e9ee0908130937r6d09d406jc047f37442c340a9@mail.gmail.com> <25624444.post@talk.nabble.com> Message-ID: <3d375d730909261103i44099ce8lf6077356f290b64c@mail.gmail.com>
On Sat, Sep 26, 2009 at 06:34, John [H2O] wrote: > > Raising the issue again, because I am still having problems. > > To explain my situation once more, I have a set of non-regular data spanning > the N Pole. I want to interpolate it to a regular lat/lon grid. > > Here is my current approach: > # data_lon,data_lat are lat/lon pairs, irregularly spaced > # m is a npstere basemap instance > x,y = m(data_lon,data_lat) > reg_lon = np.arange(lon.min(),lon.max()+dres,dres) > nx = reg_lon.size > reg_lat = np.arange(lat.min(),lat.max()+dres,dres) > ny = reg_lat.size > grid_lon,grid_lat = m.makegrid(nx,ny) > > # find the projected co-ordinates for the grid > grid_x, grid_y = m(grid_lon, grid_lat) > print "Using Triangulation" > # triangulate data > tri = delaunay.Triangulation(x,y) > # interpolate data > interp = tri.nn_interpolator(z) > Z0 = interp(grid_x, grid_y) > > This works fine; however, I note that grid_lon, grid_lat are no longer equal > to reg_lon, reg_lat. So it seems that my Z0 is not spaced regularly as > reg_lon,reg_lat but rather according to grid_lon,grid_lat. > > This is fine for mapping in a projected space (i.e. using the basemap > instance), but how can I 'reverse transform' the data back so that it has > reg_lon,reg_lat as its coordinates?
Just transform reg_lon,reg_lat to the projection space and interpolate using that. Z0 = interp(*m(reg_lon, reg_lat)) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From washakie at gmail.com Sat Sep 26 14:55:57 2009 From: washakie at gmail.com (John [H2O]) Date: Sat, 26 Sep 2009 11:55:57 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] 2d interpolation, non-regular lat/lon grid - help with delauney/natgrid?? In-Reply-To: <3d375d730909261103i44099ce8lf6077356f290b64c@mail.gmail.com> References: <24909685.post@talk.nabble.com> <24918109.post@talk.nabble.com> <3d375d730908111058m2c0fc5daw16fe9add8936d4ec@mail.gmail.com> <24943646.post@talk.nabble.com> <3d375d730908121611p1b8eb33cof3dc3ba831b8e7b1@mail.gmail.com> <24950836.post@talk.nabble.com> <6a17e9ee0908130220i24cad4cfx5ad556f5751bac07@mail.gmail.com> <24952162.post@talk.nabble.com> <6a17e9ee0908130548k6413dedfiaaee91e6410d296f@mail.gmail.com> <24954551.post@talk.nabble.com> <3d375d730908130927x65ed0d6fh2db206fa5afa0ad7@mail.gmail.com> <6a17e9ee0908130937r6d09d406jc047f37442c340a9@mail.gmail.com> <25624444.post@talk.nabble.com> <3d375d730909261103i44099ce8lf6077356f290b64c@mail.gmail.com> Message-ID: <25627893.post@talk.nabble.com>
Robert Kern-2 wrote: > > Just transform reg_lon,reg_lat to the projection space and interpolate > using that. > > Z0 = interp(*m(reg_lon, reg_lat)) >
I get the following error now:
Traceback (most recent call last):
  File "./irregular_interp.py", line 541, in
    main()
  File "./irregular_to_regulargrid.py", line 78, in main
    triangulate=method_options[1])
  File "./irregular_to_regulargrid.py", line 474, in grid_points
    newx,newy,Z = regrid(x,y,z,m,dres=dres,method=method,triangulate=triangulate)
  File "./irregular_to_regulargrid.py", line 336, in regrid
    Z0 = interp(*m(newx,newy))
  File "/dist/site-packages/mpl_toolkits/basemap/__init__.py", line 823, in __call__
    return self.projtran(x,y,inverse=inverse)
  File "/dist/site-packages/mpl_toolkits/basemap/proj.py", line 241, in __call__
    outx,outy = self._proj4(x, y, inverse=inverse)
  File "/dist/site-packages/mpl_toolkits/basemap/pyproj.py", line 193, in __call__
    _Proj._fwd(self, inx, iny, radians=radians, errcheck=errcheck)
  File "_proj.pyx", line 56, in _proj.Proj._fwd (src/_proj.c:876)
RuntimeError: Buffer lengths not the same
-- View this message in context: http://www.nabble.com/2d-interpolation%2C-non-regular-lat-lon-grid-tp24909685p25627893.html Sent from the Scipy-User mailing list archive at Nabble.com.
From robert.kern at gmail.com Sat Sep 26 15:00:51 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Sep 2009 14:00:51 -0500 Subject: [SciPy-User] [SciPy-user] 2d interpolation, non-regular lat/lon grid - help with delauney/natgrid??
In-Reply-To: <25627893.post@talk.nabble.com> References: <24909685.post@talk.nabble.com> <6a17e9ee0908130220i24cad4cfx5ad556f5751bac07@mail.gmail.com> <24952162.post@talk.nabble.com> <6a17e9ee0908130548k6413dedfiaaee91e6410d296f@mail.gmail.com> <24954551.post@talk.nabble.com> <3d375d730908130927x65ed0d6fh2db206fa5afa0ad7@mail.gmail.com> <6a17e9ee0908130937r6d09d406jc047f37442c340a9@mail.gmail.com> <25624444.post@talk.nabble.com> <3d375d730909261103i44099ce8lf6077356f290b64c@mail.gmail.com> <25627893.post@talk.nabble.com> Message-ID: <3d375d730909261200n2f2a4fbnc4ab72cc56f7c10e@mail.gmail.com>
On Sat, Sep 26, 2009 at 13:55, John [H2O] wrote: > > Robert Kern-2 wrote: >> >> Just transform reg_lon,reg_lat to the projection space and interpolate >> using that. >> >> Z0 = interp(*m(reg_lon, reg_lat)) >> > > I get the following error now: > > Traceback (most recent call last): > File "./irregular_interp.py", line 541, in > main() > File "./irregular_to_regulargrid.py", line 78, in main > triangulate=method_options[1]) > File "./irregular_to_regulargrid.py", line 474, in grid_points > newx,newy,Z = regrid(x,y,z,m,dres=dres,method=method,triangulate=triangulate) > File "./irregular_to_regulargrid.py", line 336, in regrid > Z0 = interp(*m(newx,newy)) > File "/dist/site-packages/mpl_toolkits/basemap/__init__.py", line 823, in __call__ > return self.projtran(x,y,inverse=inverse) > File "/dist/site-packages/mpl_toolkits/basemap/proj.py", line 241, in __call__ > outx,outy = self._proj4(x, y, inverse=inverse) > File "/dist/site-packages/mpl_toolkits/basemap/pyproj.py", line 193, in __call__ > _Proj._fwd(self, inx, iny, radians=radians, errcheck=errcheck) > File "_proj.pyx", line 56, in _proj.Proj._fwd (src/_proj.c:876) > RuntimeError: Buffer lengths not the same
We've covered this before. Go back to Scott Sinclair's first reply for how to properly construct the lat/lon grid. You need one element in each lat/lon array for each grid point: len(reg_lat) == len(reg_lon) == nx*ny. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
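To make that concrete, a sketch of the grid construction Robert is describing; reg_lon, reg_lat, m and interp are assumed from John's earlier snippet, and np.meshgrid expands the two 1-D axes into one lon/lat pair per grid point before projecting:

import numpy as np

# one (lon, lat) pair per grid point, so the projected buffers match
lon2d, lat2d = np.meshgrid(reg_lon, reg_lat)   # both have shape (ny, nx)
grid_x, grid_y = m(lon2d.ravel(), lat2d.ravel())
Z0 = interp(grid_x, grid_y).reshape(lon2d.shape)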
From washakie at gmail.com Sat Sep 26 15:22:16 2009 From: washakie at gmail.com (John [H2O]) Date: Sat, 26 Sep 2009 12:22:16 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] 2d interpolation, non-regular lat/lon grid - help with delauney/natgrid?? In-Reply-To: <3d375d730909261200n2f2a4fbnc4ab72cc56f7c10e@mail.gmail.com> References: <24909685.post@talk.nabble.com> <24918109.post@talk.nabble.com> <3d375d730908111058m2c0fc5daw16fe9add8936d4ec@mail.gmail.com> <24943646.post@talk.nabble.com> <3d375d730908121611p1b8eb33cof3dc3ba831b8e7b1@mail.gmail.com> <24950836.post@talk.nabble.com> <6a17e9ee0908130220i24cad4cfx5ad556f5751bac07@mail.gmail.com> <24952162.post@talk.nabble.com> <6a17e9ee0908130548k6413dedfiaaee91e6410d296f@mail.gmail.com> <24954551.post@talk.nabble.com> <3d375d730908130927x65ed0d6fh2db206fa5afa0ad7@mail.gmail.com> <6a17e9ee0908130937r6d09d406jc047f37442c340a9@mail.gmail.com> <25624444.post@talk.nabble.com> <3d375d730909261103i44099ce8lf6077356f290b64c@mail.gmail.com> <25627893.post@talk.nabble.com> <3d375d730909261200n2f2a4fbnc4ab72cc56f7c10e@mail.gmail.com> Message-ID: <25628088.post@talk.nabble.com>
Yes, sorry. I jumped to the list too soon. I caught that now, but it seems I am again back to the C++ error:
Using Triangulation
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
Frustrating. I guess the memory usage is the catch. Any suggestions on how to deal with this? Is there an alternative interpolation function I could use? -- View this message in context: http://www.nabble.com/2d-interpolation%2C-non-regular-lat-lon-grid-tp24909685p25628088.html Sent from the Scipy-User mailing list archive at Nabble.com.
From atulskulkarni at gmail.com Sat Sep 26 17:54:46 2009 From: atulskulkarni at gmail.com (Atul Kulkarni) Date: Sat, 26 Sep 2009 17:54:46 -0400 Subject: [SciPy-User] Error while installing scipy from source and with --prefix option. Message-ID: <92b284af0909261454q7d8c0dbdk87dce10ee98b8adb@mail.gmail.com>
Hi All, I am trying to install scipy without super-user permissions and hence am using the --prefix option to install at a location where I can use it. I have installed numpy the same way. But I get this error.
$ python setup.py install --prefix=/home/atul/
Traceback (most recent call last):
  File "setup.py", line 160, in
    setup_package()
  File "setup.py", line 127, in setup_package
    from numpy.distutils.core import setup
ImportError: No module named numpy.distutils.core
Am I doing something wrong? Does SciPy need numpy installed in the main installation? Please help. -- Regards, Atul Kulkarni www.d.umn.edu/~kulka053 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From robert.kern at gmail.com Sat Sep 26 17:58:08 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Sep 2009 16:58:08 -0500 Subject: [SciPy-User] Error while installing scipy from source and with --prefix option. In-Reply-To: <92b284af0909261454q7d8c0dbdk87dce10ee98b8adb@mail.gmail.com> References: <92b284af0909261454q7d8c0dbdk87dce10ee98b8adb@mail.gmail.com> Message-ID: <3d375d730909261458g220c9e9atb925ead25a724c49@mail.gmail.com>
On Sat, Sep 26, 2009 at 16:54, Atul Kulkarni wrote: > Hi All, > > I am trying to install scipy without super-user permissions and hence am using > the --prefix option to install at a location where I can use it. I have > installed numpy the same way. But I get this error. > > $ python setup.py install --prefix=/home/atul/ > Traceback (most recent call last): > File "setup.py", line 160, in > setup_package() > File "setup.py", line 127, in setup_package > from numpy.distutils.core import setup > ImportError: No module named numpy.distutils.core > > Am I doing something wrong? Does SciPy need numpy installed in the main > installation? Please help.
Are you sure that your installation of numpy works? Can you import numpy and numpy.distutils.core? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From atulskulkarni at gmail.com Sat Sep 26 18:01:43 2009 From: atulskulkarni at gmail.com (Atul Kulkarni) Date: Sat, 26 Sep 2009 18:01:43 -0400 Subject: [SciPy-User] Error while installing scipy from source and with --prefix option. In-Reply-To: <3d375d730909261458g220c9e9atb925ead25a724c49@mail.gmail.com> References: <92b284af0909261454q7d8c0dbdk87dce10ee98b8adb@mail.gmail.com> <3d375d730909261458g220c9e9atb925ead25a724c49@mail.gmail.com> Message-ID: <92b284af0909261501h417a4f1y417d26a6c0772d35@mail.gmail.com>
I just checked. No, it does not. I installed it the same way; is there anything special I need to do to include that in my regular installation?
On Sat, Sep 26, 2009 at 5:58 PM, Robert Kern wrote: > On Sat, Sep 26, 2009 at 16:54, Atul Kulkarni > wrote: > > Hi All, > > > > I am trying to install scipy without super-user permissions and hence am using > > the --prefix option to install at a location where I can use it. I have > > installed numpy the same way. But I get this error. > > > > $ python setup.py install --prefix=/home/atul/ > > Traceback (most recent call last): > > File "setup.py", line 160, in > > setup_package() > > File "setup.py", line 127, in setup_package > > from numpy.distutils.core import setup > > ImportError: No module named numpy.distutils.core > > > > Am I doing something wrong? Does SciPy need numpy installed in the main > > installation? Please help. > > Are you sure that your installation of numpy works? Can you import > numpy and numpy.distutils.core? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Regards, Atul Kulkarni www.d.umn.edu/~kulka053 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From robert.kern at gmail.com Sat Sep 26 18:03:39 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Sep 2009 17:03:39 -0500 Subject: [SciPy-User] Error while installing scipy from source and with --prefix option. In-Reply-To: <92b284af0909261501h417a4f1y417d26a6c0772d35@mail.gmail.com> References: <92b284af0909261454q7d8c0dbdk87dce10ee98b8adb@mail.gmail.com> <3d375d730909261458g220c9e9atb925ead25a724c49@mail.gmail.com> <92b284af0909261501h417a4f1y417d26a6c0772d35@mail.gmail.com> Message-ID: <3d375d730909261503i3f343040r3d08c0c376b5908@mail.gmail.com>
On Sat, Sep 26, 2009 at 17:01, Atul Kulkarni wrote: > I just checked. No, it does not. I installed it the same way; is there anything > special I need to do to include that in my regular installation?
Make an environment variable PYTHONPATH that points to the directory where you installed numpy. E.g. export PYTHONPATH=/home/atul/lib/python2.5/site-packages/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
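A quick way to verify the fix from inside the interpreter, assuming the same hypothetical prefix as above (sys.path.insert is the in-process equivalent of setting PYTHONPATH):

import sys
sys.path.insert(0, '/home/atul/lib/python2.5/site-packages/')
import numpy.distutils.core   # should now import cleanly
print numpy.__version__, numpy.__file__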
From jeremy at jeremysanders.net Sun Sep 27 11:15:13 2009 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Sun, 27 Sep 2009 16:15:13 +0100 Subject: [SciPy-User] [ANN] Veusz 1.5 Message-ID:
Veusz 1.5
---------
Velvet Ember Under Sky Zenith
-----------------------------
http://home.gna.org/veusz/
Veusz is Copyright (C) 2003-2009 Jeremy Sanders. Licenced under the GPL (version 2 or greater). Veusz is a Qt4-based scientific plotting package. It is written in Python, using PyQt4 for display and user interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF output. The user interface aims to be simple, consistent and powerful. Veusz provides a GUI, command line, embedding and scripting interface (based on Python) to its plotting facilities. It also allows for manipulation and editing of datasets.
Changes in 1.5:
* EMF export (requires pyemf and a PyQt snapshot)
* Character encodings supported in data import
* Rewritten stylesheet handling. The user can now set defaults in the document for all settings. This is now under the Edit->Default Styles dialog.
* A default stylesheet can be loaded for all new documents (set in the preferences dialog)
* Linked datasets saved in documents now use relative filename paths (with absolute paths as fallback)
* Axes can now have text labels of points plotted along them (choose "labels" as axis mode)
* Dataset points can be scaled to different sizes according to another dataset (this is the "Scale markers" option for point plotters)
More minor changes:
* Custom delimiter support in the CSV data importer
* Add SetDataText and support text in GetData in the command API
* \dot and \bar added to the LaTeX renderer
* Option to change the icon sizes displayed
* Rearrange toolbar icons and create data and widget operation toolbars
* Zoom button remembers previous usage
* Conversion from 1D->2D datasets more robust
* Expression datasets can now be a constant value
* Uses colors from the theme better and allows the user to change some UI colors in preferences
* Fix contours if coordinates can be infinite (e.g. log scaling with zero value)
* nan/inf are no longer ignored when the ignore-text option is selected in the import dialog
* Several other minor UI changes and bugfixes
Important note:
* As the way defaults are used has been rewritten, default values are no longer saved on a per-user basis but in a stylesheet saved with the document. You cannot currently set defaults on a widget-name basis.
Features of package:
* X-Y plots (with errorbars)
* Line and function plots
* Contour plots
* Images (with colour mappings and colorbars)
* Stepped plots (for histograms)
* Bar graphs
* Plotting dates
* Fitting functions to data
* Stacked plots and arrays of plots
* Plot keys
* Plot labels
* Shapes and arrows on plots
* LaTeX-like formatting for text
* EPS/PDF/PNG/SVG/EMF export
* Scripting interface
* Dataset creation/manipulation
* Embed Veusz within other programs
* Text, CSV and FITS importing
Requirements:
Python (2.4 or greater required) http://www.python.org/
Qt >= 4.3 (free edition) http://www.trolltech.com/products/qt/
PyQt >= 4.3 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/pyqt/ http://www.riverbankcomputing.co.uk/sip/
numpy >= 1.0 http://numpy.scipy.org/
Optional:
Microsoft Core Fonts (recommended for nice output) http://corefonts.sourceforge.net/
PyFITS >= 1.1 (optional, for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits
pyemf >= 2.0.0 (optional, for EMF export) http://pyemf.sourceforge.net/
For EMF export, PyQt-x11-gpl-4.6-snapshot-20090906 or better is required, to fix a bug in the C++ wrapping.
For documentation on using Veusz, see the "Documents" directory. The manual is in PDF, HTML and text format (generated from DocBook).
Issues with the current version:
* Due to Qt, hatched regions sometimes look rather poor when exported to PostScript, PDF or SVG.
* Clipping of data does not work in the SVG export, as Qt currently does not support this.
* Due to a bug in Qt, some long lines, or using log scales, can lead to very slow plot times under X11. This problem is seen with dashed/dotted lines. It is fixed by upgrading to Qt 4.5.1 (the Veusz binary version includes this Qt version). Switching off antialiasing in the options may help this.
If you enjoy using Veusz, I would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the SVN repository. Jeremy Sanders
From gary.pajer at gmail.com Sun Sep 27 12:32:28 2009 From: gary.pajer at gmail.com (Gary Pajer) Date: Sun, 27 Sep 2009 12:32:28 -0400 Subject: [SciPy-User] superpack installation Message-ID: <88fe22a0909270932i63a10443x7fbfec9962897ddd@mail.gmail.com>
It's been a while since I updated scipy. Today was the day. WinXP. I had numpy 1.1.1, so I upgraded to 1.3. I had matplotlib 0.91, so I upgraded to 0.99. Then I went to the scipy download page, and saw only a superpack installer ... no standalone scipy installer. Well, I don't need PyMC, and I already had updated numpy and matplotlib ... I ended up running the superpack, but I ended up with numpy 1.1.1 and matplotlib 0.91. I do have a mix of easy_installed eggs and installer-installed packages on my system ... not sure that was the issue ... at any rate it took a little (~15 minutes) time to straighten everything out. Question: is there a WinXP installer that will install *only* scipy? thanks, gary -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gustaf at laserpanda.com Sun Sep 27 13:44:53 2009 From: gustaf at laserpanda.com (Gustaf Nilsson) Date: Sun, 27 Sep 2009 19:44:53 +0200 Subject: [SciPy-User] embarrassingly basic question Message-ID:
Hi (this might even be a numpy question, not scipy) - how do I do conditionals on numpy arrays? What I want to do is: if any value in the array is lower than x, then make it zero; if the value is greater than x, then make it 1. I tried googling the answer, but don't think I used the right keywords. cheers Gusty -- -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gokhansever at gmail.com Sun Sep 27 13:53:04 2009 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 27 Sep 2009 12:53:04 -0500 Subject: [SciPy-User] embarrassingly basic question In-Reply-To: References: Message-ID: <49d6b3500909271053s7ba51a74k78d892e3e0cb7955@mail.gmail.com>
On Sun, Sep 27, 2009 at 12:44 PM, Gustaf Nilsson wrote: > Hi > (this might even be a numpy question, not scipy) - > how do I do conditionals on numpy arrays? > What I want to do is: if any value in the array is lower than x, then make > it zero; if the value is greater than x, then make it 1. > > I tried googling the answer, but don't think I used the right keywords. > > cheers > Gusty >
Robert Kern must be sleeping :)
I[1]: a = arange(10)
I[2]: a[a<5] = 0
I[3]: a
O[3]: array([0, 0, 0, 0, 0, 5, 6, 7, 8, 9])
I[4]: a[a>5] = 1
I[5]: a
O[5]: array([0, 0, 0, 0, 0, 5, 1, 1, 1, 1])
> -- > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Gökhan -------------- next part -------------- An HTML attachment was scrubbed... URL:
From zachary.pincus at yale.edu Sun Sep 27 14:18:16 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 27 Sep 2009 14:18:16 -0400 Subject: [SciPy-User] embarrassingly basic question In-Reply-To: <49d6b3500909271053s7ba51a74k78d892e3e0cb7955@mail.gmail.com> References: <49d6b3500909271053s7ba51a74k78d892e3e0cb7955@mail.gmail.com> Message-ID: <2B4BBB39-C692-4734-B0F0-98CF29EA1BCF@yale.edu>
numpy.where is also useful for this case:
In : numpy.where(numpy.arange(10) < 5, 0, 1)
Out: array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
Note that there's a lot going on in this example, including broadcasting the scalar 0 and 1 values to 1D arrays... where can take any array with a "compatible" shape as the second or third argument:
In : numpy.where(numpy.arange(10) < 5, numpy.arange(20, 30), numpy.arange(60, 70))
Out: array([20, 21, 22, 23, 24, 65, 66, 67, 68, 69])
On Sep 27, 2009, at 1:53 PM, Gökhan Sever wrote: >> On Sun, Sep 27, 2009 at 12:44 PM, Gustaf Nilsson >> wrote: >> Hi >> (this might even be a numpy question, not scipy) - >> how do I do conditionals on numpy arrays? >> What I want to do is: if any value in the array is lower than x, >> then make it zero; if the value is greater than x, then make it 1. >> >> I tried googling the answer, but don't think I used the right keywords. >> >> cheers >> Gusty >> >> Robert Kern must be sleeping :) >> >> I[1]: a = arange(10) >> >> I[2]: a[a<5] = 0 >> >> I[3]: a >> O[3]: array([0, 0, 0, 0, 0, 5, 6, 7, 8, 9]) >> >> I[4]: a[a>5] = 1 >> >> I[5]: a >> O[5]: array([0, 0, 0, 0, 0, 5, 1, 1, 1, 1]) >> >> -- >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> -- >> Gökhan >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user
From zachary.pincus at yale.edu Sun Sep 27 14:27:27 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 27 Sep 2009 14:27:27 -0400 Subject: [SciPy-User] embarrassingly basic question In-Reply-To: <2B4BBB39-C692-4734-B0F0-98CF29EA1BCF@yale.edu> References: <49d6b3500909271053s7ba51a74k78d892e3e0cb7955@mail.gmail.com> <2B4BBB39-C692-4734-B0F0-98CF29EA1BCF@yale.edu> Message-ID: <0666F757-C56A-4A14-A028-33187B82F5BC@yale.edu>
Hmm, so the original question was, "if the value in the array is lower than x, then make it zero; if the value is greater than x, then make it 1", which requires that if the value of the array equals x, it should be unchanged... Gökhan's answer does this correctly and mine below does not. However, this is a bit of an unusual request; more often one would want to make everything zero if it's < x and 1 if >= x (or <= and >, respectively). In this case, the answers below work, but there's an even simpler special-case answer for zeros and ones:
a >= x
gives a boolean array with zeros where a < x and ones where a >= x. (All basic logic operations work here.) Zach
On Sep 27, 2009, at 2:18 PM, Zachary Pincus wrote: > numpy.where is also useful for this case: > > In : numpy.where(numpy.arange(10) < 5, 0, 1) > Out: array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) > > Note that there's a lot going on in this example, including > broadcasting the scalar 0 and 1 values to 1D arrays... where can take > any array with a "compatible" shape as the second or third argument: > In : numpy.where(numpy.arange(10) < 5, numpy.arange(20, 30), > numpy.arange(60, 70)) > Out: array([20, 21, 22, 23, 24, 65, 66, 67, 68, 69])
From gokhansever at gmail.com Sun Sep 27 14:37:44 2009 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 27 Sep 2009 13:37:44 -0500 Subject: [SciPy-User] embarrassingly basic question In-Reply-To: <0666F757-C56A-4A14-A028-33187B82F5BC@yale.edu> References: <49d6b3500909271053s7ba51a74k78d892e3e0cb7955@mail.gmail.com> <2B4BBB39-C692-4734-B0F0-98CF29EA1BCF@yale.edu> <0666F757-C56A-4A14-A028-33187B82F5BC@yale.edu> Message-ID: <49d6b3500909271137m5e29a569rdada44a0b85c80ed@mail.gmail.com>
On Sun, Sep 27, 2009 at 1:27 PM, Zachary Pincus wrote: > Hmm, so the original question was, "if the value in the array is lower > than x, then make it zero; if the value is greater than x, then make > it 1", which requires that if the value of the array equals x, it > should be unchanged... Gökhan's answer does this correctly and mine below > does not. > > However, this is a bit of an unusual request; more often one would > want to make everything zero if it's < x and 1 if >= x (or <= and >, > respectively). In this case, the answers below work, but there's an > even simpler special-case answer for zeros and ones: > > a >= x > > gives a boolean array with zeros where a < x and ones where a >= x. > (All basic logic operations work here.) > > Zach >
I saw that in my posting, too; leaving the comparison point intact. np.where seems a more elegant approach to me if a greater-than or less-than comparison is what Gustaf was asking for. It is a one-liner in the end. Element-wise functionality of numpy is really so powerful and practical. The same approach doesn't work on regular Python sequences, or could they be forced to work similar to numpy's arrays?
> On Sep 27, 2009, at 2:18 PM, Zachary Pincus wrote: > > numpy.where is also useful for this case: > > > > In : numpy.where(numpy.arange(10) < 5, 0, 1) > > Out: array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) > > > > Note that there's a lot going on in this example, including > > broadcasting the scalar 0 and 1 values to 1D arrays... where can take > > any array with a "compatible" shape as the second or third argument: > > In : numpy.where(numpy.arange(10) < 5, numpy.arange(20, 30), > > numpy.arange(60, 70)) > > Out: array([20, 21, 22, 23, 24, 65, 66, 67, 68, 69]) > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Gökhan -------------- next part -------------- An HTML attachment was scrubbed... URL:
From amenity at enthought.com Sun Sep 27 14:39:32 2009 From: amenity at enthought.com (Amenity Applewhite) Date: Sun, 27 Sep 2009 13:39:32 -0500 Subject: [SciPy-User] EPD Webinar October 2nd: How do I...process signals with EPD? (wait list for non-subscribers!) Message-ID: <8590D535-A490-4127-9C60-F7ECBDBCA45E@enthought.com>
Friday, October 2nd, 1pm CDT How do I...process signals with EPD?
Hello! We wanted to let you know that next week we'll host another installment of our popular EPD webinar series. Although only EPD Basic or above subscribers are guaranteed seats at EPD webinars, we invite non-subscribers to add their names to the waiting list for each event. If there are available seats, you will be notified by next Thursday and given access to the webinar. Links to the waiting lists and upcoming topics are available here. These events feature detailed demonstrations of powerful Python techniques that Enthought developers use to enhance our applications or development process. Participants are often invited to participate in the demonstration, and are welcome to join the interactive VOIP discussion later in the session. This is a great opportunity to learn new methods and interact with our expert developers. If you have topics you'd like to see addressed during the webinar, feel free to let us know at media at enthought.com. How do I...process signals with EPD?
One of the useful tools in the Enthought Python Distribution (EPD) is the signal processing module of SciPy. In this webinar we will demonstrate how to analyze and process signals using the Fast Fourier Transform (FFT) and the tools in scipy.signal. Topics to be covered include designing and applying time-domain and frequency-domain filters, down-sampling data, and dealing with data streams by processing chunks at a time while handling edge effects. Once again, to add your name to the wait-list, visit our site. We hope to see you there! Thanks, Enthought Media -------------- next part -------------- An HTML attachment was scrubbed... URL:
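As a taste of the scipy.signal tools the webinar covers, a small self-contained sketch (synthetic data, nothing from the webinar materials themselves): design a low-pass Butterworth filter and apply it with zero phase distortion:

import numpy as np
from scipy import signal

t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(500)  # noisy 5 Hz tone
b, a = signal.butter(4, 0.125)   # 4th-order low-pass, cutoff at 0.125 x Nyquist
y = signal.filtfilt(b, a, x)     # forward-backward filtering: zero phase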
From josef.pktd at gmail.com Sun Sep 27 16:06:26 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 27 Sep 2009 16:06:26 -0400 Subject: [SciPy-User] superpack installation In-Reply-To: <88fe22a0909270932i63a10443x7fbfec9962897ddd@mail.gmail.com> References: <88fe22a0909270932i63a10443x7fbfec9962897ddd@mail.gmail.com> Message-ID: <1cd32cbb0909271306i2b6fd228w5780d4a26efd0df5@mail.gmail.com>
On Sun, Sep 27, 2009 at 12:32 PM, Gary Pajer wrote: > It's been a while since I updated scipy. Today was the day. WinXP. > > I had numpy 1.1.1, so I upgraded to 1.3. > I had matplotlib 0.91, so I upgraded to 0.99. > > Then I went to the scipy download page, and saw only a superpack installer > ... no standalone scipy installer. Well, I don't need PyMC, and I already > had updated numpy and matplotlib ... I ended up running the superpack, but > I ended up with numpy 1.1.1 and matplotlib 0.91. I do have a mix of > easy_installed eggs and installer-installed packages on my system ... not > sure that was the issue ... at any rate it took a little (~15 minutes) time > to straighten everything out. > > Question: is there a WinXP installer that will install *only* scipy? > > thanks, > gary
Which superpack installer did you use? The ones for scipy at http://sourceforge.net/projects/scipy/files/ are (supposed to be) installing only scipy; "superpack" only refers to the three different versions of SSE support. What I don't know is whether the installer would initiate a numpy install if it doesn't find an already installed compatible version of numpy. I never had any problems with the superpacks on WindowsXP, so there might be something else going on if numpy got reinstalled. Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >
From gary.pajer at gmail.com Sun Sep 27 16:19:54 2009 From: gary.pajer at gmail.com (Gary Pajer) Date: Sun, 27 Sep 2009 16:19:54 -0400 Subject: [SciPy-User] superpack installation In-Reply-To: <1cd32cbb0909271306i2b6fd228w5780d4a26efd0df5@mail.gmail.com> References: <88fe22a0909270932i63a10443x7fbfec9962897ddd@mail.gmail.com> <1cd32cbb0909271306i2b6fd228w5780d4a26efd0df5@mail.gmail.com> Message-ID: <88fe22a0909271319q36184f46h91336ecdefc153ba@mail.gmail.com>
On Sun, Sep 27, 2009 at 4:06 PM, wrote: > On Sun, Sep 27, 2009 at 12:32 PM, Gary Pajer wrote: > > It's been a while since I updated scipy. Today was the day. WinXP. > > > > I had numpy 1.1.1, so I upgraded to 1.3. > > I had matplotlib 0.91, so I upgraded to 0.99. > > > > Then I went to the scipy download page, and saw only a superpack installer > > ... no standalone scipy installer. Well, I don't need PyMC, and I already > > had updated numpy and matplotlib ... I ended up running the superpack, but > > I ended up with numpy 1.1.1 and matplotlib 0.91. I do have a mix of > > easy_installed eggs and installer-installed packages on my system ... not > > sure that was the issue ... at any rate it took a little (~15 minutes) time > > to straighten everything out. > > > > Question: is there a WinXP installer that will install *only* scipy? > > Which superpack installer did you use? The ones for scipy at > http://sourceforge.net/projects/scipy/files/ are (supposed to be) installing > only scipy; "superpack" only refers to the three different versions of SSE support. > > What I don't know is whether the installer would initiate a numpy install > if it doesn't find an already installed compatible version of numpy. > > I never had any problems with the superpacks on WindowsXP, so there > might be something else going on if numpy got reinstalled. > > Josef
Hmm. I must have misread something somewhere. A google search result said something (or so I thought) about superpack and numpy and matplotlib and PyMC. I did download from sourceforge. I now think that my problem was somehow idiosyncratic to my installation. Sorry for the noise. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL:
From super.inframan at gmail.com Sun Sep 27 16:54:00 2009 From: super.inframan at gmail.com (Gustaf Nilsson) Date: Sun, 27 Sep 2009 21:54:00 +0100 Subject: [SciPy-User] embarrassingly basic question In-Reply-To: <49d6b3500909271137m5e29a569rdada44a0b85c80ed@mail.gmail.com> References: <49d6b3500909271053s7ba51a74k78d892e3e0cb7955@mail.gmail.com> <2B4BBB39-C692-4734-B0F0-98CF29EA1BCF@yale.edu> <0666F757-C56A-4A14-A028-33187B82F5BC@yale.edu> <49d6b3500909271137m5e29a569rdada44a0b85c80ed@mail.gmail.com> Message-ID:
Hi, yeah, I realise now the flaw pointed out in my question. numpy.where seems to do exactly what I need! Thanks! Gusty
On Sun, Sep 27, 2009 at 7:37 PM, Gökhan Sever wrote: > > On Sun, Sep 27, 2009 at 1:27 PM, Zachary Pincus wrote: >> Hmm, so the original question was, "if the value in the array is lower >> than x, then make it zero; if the value is greater than x, then make >> it 1", which requires that if the value of the array equals x, it >> should be unchanged... Gökhan's answer does this correctly and mine below >> does not. >> >> However, this is a bit of an unusual request; more often one would >> want to make everything zero if it's < x and 1 if >= x (or <= and >, >> respectively). In this case, the answers below work, but there's an >> even simpler special-case answer for zeros and ones: >> >> a >= x >> >> gives a boolean array with zeros where a < x and ones where a >= x. >> (All basic logic operations work here.) >> >> Zach >> > I saw that in my posting, too; leaving the comparison point intact. > > np.where seems a more elegant approach to me if a greater-than or less-than > comparison is what Gustaf was asking for. It is a one-liner in the end. > Element-wise functionality of numpy is really so powerful and practical. The > same approach doesn't work on regular Python sequences, or could they be > forced to work similar to numpy's arrays? > >> On Sep 27, 2009, at 2:18 PM, Zachary Pincus wrote: >> > numpy.where is also useful for this case: >> > >> > In : numpy.where(numpy.arange(10) < 5, 0, 1) >> > Out: array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) >> > >> > Note that there's a lot going on in this example, including >> > broadcasting the scalar 0 and 1 values to 1D arrays... where can take >> > any array with a "compatible" shape as the second or third argument: >> > In : numpy.where(numpy.arange(10) < 5, numpy.arange(20, 30), >> > numpy.arange(60, 70)) >> > Out: array([20, 21, 22, 23, 24, 65, 66, 67, 68, 69]) >> > >> > On Sep 27, 2009, at 1:53 PM, Gökhan Sever wrote: >> > >> >> On Sun, Sep 27, 2009 at 12:44 PM, Gustaf Nilsson < gustaf at laserpanda.com >> >>> wrote: >> >> Hi >> >> (this might even be a numpy question, not scipy) - >> >> how do I do conditionals on numpy arrays? >> >> What I want to do is: if any value in the array is lower than x, >> >> then make it zero; if the value is greater than x, then make it 1. >> >> >> >> I tried googling the answer, but don't think I used the right keywords. >> >> >> >> cheers >> >> Gusty >> >> >> >> Robert Kern must be sleeping :) >> >> >> >> I[1]: a = arange(10) >> >> >> >> I[2]: a[a<5] = 0 >> >> >> >> I[3]: a >> >> O[3]: array([0, 0, 0, 0, 0, 5, 6, 7, 8, 9]) >> >> >> >> I[4]: a[a>5] = 1 >> >> >> >> I[5]: a >> >> O[5]: array([0, 0, 0, 0, 0, 5, 1, 1, 1, 1]) >> >> >> >> --
>> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> -- >> >> Gökhan >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user
From jsseabold at gmail.com Sun Sep 27 17:07:33 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Sun, 27 Sep 2009 17:07:33 -0400 Subject: [SciPy-User] SciPy.stats.kde.gaussian_kde estimation of information-theoretic measures In-Reply-To: <4ABB5A4D.6080606@elec.qmul.ac.uk> References: <4ABB5A4D.6080606@elec.qmul.ac.uk> Message-ID:
On Thu, Sep 24, 2009 at 7:38 AM, Dan Stowell wrote: > Hi - > > I'd like to use SciPy.stats.kde.gaussian_kde to estimate > Kullback-Leibler divergence. In other words, given KDE estimates of two > different distributions p(x) and q(x) I'd like to evaluate things like > > integral of { p(x) log( p(x)/q(x) ) } > > Is this possible using gaussian_kde? The method > kde.integrate_kde(other_kde) gets halfway there. Or if not, are there > other modules that can do this kind of thing? >
You should be able to use this for relative entropy. I would be interested to hear about your experience, as I've just started to study this recently.
In [1]: from scipy import stats
In [2]: stats.entropy?
Type: function
Base Class:
String Form:
Namespace: Interactive
File: /usr/local/lib/python2.6/dist-packages/scipy/stats/distributions.py
Definition: stats.entropy(pk, qk=None)
Docstring:
S = entropy(pk,qk=None)
calculate the entropy of a distribution given the p_k values
S = -sum(pk * log(pk), axis=0)
If qk is not None, then compute a relative entropy
S = sum(pk * log(pk / qk), axis=0)
Routine will normalize pk and qk if they don't sum to 1
Skipper
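stats.entropy wants the two densities evaluated on a common grid; for the gaussian_kde case Dan describes, a crude Monte-Carlo sketch of the same integral (made-up samples, an approximation, and not an existing scipy routine):

import numpy as np
from scipy import stats

xp = np.random.normal(0.0, 1.0, 1000)   # samples defining p
xq = np.random.normal(0.5, 1.5, 1000)   # samples defining q
p = stats.kde.gaussian_kde(xp)
q = stats.kde.gaussian_kde(xq)
s = p.resample(10000)[0]                # draw from the estimate of p
print np.mean(np.log(p(s) / q(s)))      # E_p[ log(p/q) ] ~ KL(p||q)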
From dwf at cs.toronto.edu Sun Sep 27 17:32:15 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 27 Sep 2009 17:32:15 -0400 Subject: [SciPy-User] embarrassingly basic question In-Reply-To: References: Message-ID: <20090927213214.GA9543@rodimus>
On Sun, Sep 27, 2009 at 07:44:53PM +0200, Gustaf Nilsson wrote: > Hi > (this might even be a numpy question, not scipy)
Just a meta-note: you shouldn't be too worried about posting NumPy questions here. In my opinion, at least, basic NumPy queries are fair game for Scipy-user, as it's kind of a de facto catchall for all things NumPy/SciPy. Though for extremely technical questions related to NumPy specifically, NumPy-discussion will likely serve you better. David
From cournape at gmail.com Sun Sep 27 22:53:22 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 28 Sep 2009 11:53:22 +0900 Subject: [SciPy-User] superpack installation In-Reply-To: <88fe22a0909270932i63a10443x7fbfec9962897ddd@mail.gmail.com> References: <88fe22a0909270932i63a10443x7fbfec9962897ddd@mail.gmail.com> Message-ID: <5b8d13220909271953m4ef1acfl74bd30e4645144da@mail.gmail.com>
On Mon, Sep 28, 2009 at 1:32 AM, Gary Pajer wrote: > It's been a while since I updated scipy. Today was the day. WinXP. > > I had numpy 1.1.1, so I upgraded to 1.3. > I had matplotlib 0.91, so I upgraded to 0.99. > > Then I went to the scipy download page, and saw only a superpack installer > ... no standalone scipy installer.
The superpack on the scipy webpage has nothing to do with the superpack from the PyMC guys - it only includes scipy, and never installs anything else. cheers, David
From boris.burle at univ-provence.fr Mon Sep 28 04:12:36 2009 From: boris.burle at univ-provence.fr (=?ISO-8859-1?Q?Bor=EDs_BURLE?=) Date: Mon, 28 Sep 2009 10:12:36 +0200 Subject: [SciPy-User] Extracting common triplet from coordinate lists Message-ID: <4AC06FF4.4080402@univ-provence.fr>
Dear all, I have three sets of Cartesian coordinates and I would like to extract the positions that are identical in the three sets. Here is a toy example:
x1 = [1,2,3,4]
y1 = [4,3,2,1]
z1 = [3,4,1,2]
x2 = [7,1,2,3]
y2 = [5,4,3,2]
z2 = [6,3,2,1]
x3 = [7,6,1,3]
y3 = [5,4,4,2]
z3 = [6,9,3,1]
In this example, I would like to extract the triplets (1,4,3) and (3,2,1), since they are common to the three sets. As you can notice, the position of the common triplet is variable. In this example the data are in lists, but any code working on arrays would be OK for me too!! Do you have any idea on how to do that? Thanks in advance, Boris -- Borís BURLE Laboratoire de Neurobiologie de la Cognition CNRS et Université de Provence tel: (+33) 4 88 57 68 79 fax: (+33) 4 88 57 68 72 web page: http://www.up.univ-mrs.fr/lnc/ACT/act-fr.html
From dwf at cs.toronto.edu Mon Sep 28 05:57:47 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 28 Sep 2009 05:57:47 -0400 Subject: [SciPy-User] Extracting common triplet from coordinate lists In-Reply-To: <4AC06FF4.4080402@univ-provence.fr> References: <4AC06FF4.4080402@univ-provence.fr> Message-ID:
On 28-Sep-09, at 4:12 AM, Borís BURLE wrote: > In this example, I would like to extract the triplets (1,4,3) and > (3,2,1), since they are common to the three sets. As you can notice, the > position of the common triplet is variable. In this example the data > are in lists, but any code working on arrays would be OK for me too!!
Easiest way I can think of:
In [36]: set(zip(x1, y1, z1)) & set(zip(x2, y2, z2)) & set(zip(x3, y3, z3))
Out[36]: set([(1, 4, 3), (3, 2, 1)])
David
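David's one-liner generalizes directly; a small sketch of the same idea wrapped as a function that also accepts numpy arrays (a hypothetical helper for the toy lists above, not a scipy routine):

import numpy as np

def common_points(*point_sets):
    # each argument is an (x, y, z) triple of equal-length sequences
    sets = [set(map(tuple, np.column_stack(p))) for p in point_sets]
    return reduce(set.intersection, sets)

print common_points((x1, y1, z1), (x2, y2, z2), (x3, y3, z3))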
From baker.alexander at gmail.com Mon Sep 28 07:23:38 2009 From: baker.alexander at gmail.com (alexander baker) Date: Mon, 28 Sep 2009 12:23:38 +0100 Subject: [SciPy-User] Extracting common triplet from coordinate lists In-Reply-To: References: <4AC06FF4.4080402@univ-provence.fr> Message-ID: <270620220909280423k6bfd29e6g68447b8e7e8c0157@mail.gmail.com>
You should be able to use orthogonality between the same vectors in different matrices to detect zero values in the arccos(A * transpose(B)) matrix, and distinct permutations thereof. (Make sure you norm each vector first in A and B.) Alex Baker Mobile: 07788 872118 Blog: www.alexfb.com -- All science is either physics or stamp collecting.
2009/9/28 David Warde-Farley > On 28-Sep-09, at 4:12 AM, Borís BURLE wrote: > > > In this example, I would like to extract the triplets (1,4,3) and > > (3,2,1), since they are common to the three sets. As you can notice, > > the > > position of the common triplet is variable. In this example the data > > are in > > lists, but any code working on arrays would be OK for me too!! > > Easiest way I can think of: > > In [36]: set(zip(x1, y1, z1)) & set(zip(x2, y2, z2)) & set(zip(x3, y3, > z3)) > Out[36]: set([(1, 4, 3), (3, 2, 1)]) > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From boris.burle at univ-provence.fr Mon Sep 28 14:07:36 2009 From: boris.burle at univ-provence.fr (=?ISO-8859-1?Q?Bor=EDs_BURLE?=) Date: Mon, 28 Sep 2009 20:07:36 +0200 Subject: [SciPy-User] Extracting common triplet from coordinate lists In-Reply-To: References: <4AC06FF4.4080402@univ-provence.fr> Message-ID: <4AC0FB68.5080400@univ-provence.fr>
Thanks very much, that was very helpful!!! B.
David Warde-Farley a écrit : > On 28-Sep-09, at 4:12 AM, Borís BURLE wrote: > >> In this example, I would like to extract the triplets (1,4,3) and >> (3,2,1), since they are common to the three sets. As you can notice, the >> position of the common triplet is variable. In this example the data are in >> lists, but any code working on arrays would be OK for me too!! > > Easiest way I can think of: > > In [36]: set(zip(x1, y1, z1)) & set(zip(x2, y2, z2)) & set(zip(x3, y3, > z3)) > Out[36]: set([(1, 4, 3), (3, 2, 1)]) > > David
-- Borís BURLE Laboratoire de Neurobiologie de la Cognition CNRS et Université de Provence tel: (+33) 4 88 57 68 79 fax: (+33) 4 88 57 68 72 web page: http://www.up.univ-mrs.fr/lnc/ACT/act-fr.html
From dwf at cs.toronto.edu Mon Sep 28 16:58:49 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 28 Sep 2009 16:58:49 -0400 Subject: [SciPy-User] Extracting common triplet from coordinate lists In-Reply-To: <270620220909280423k6bfd29e6g68447b8e7e8c0157@mail.gmail.com> References: <4AC06FF4.4080402@univ-provence.fr> <270620220909280423k6bfd29e6g68447b8e7e8c0157@mail.gmail.com> Message-ID: <5BDBD021-6EDE-4B8D-AED3-5F74A25C4E5D@cs.toronto.edu>
On 28-Sep-09, at 7:23 AM, alexander baker wrote: > You should be able to use orthogonality between the same vectors in > different matrices to detect zero values in the arccos(A * transpose(B)) > matrix, and distinct permutations thereof. (Make sure you norm each > vector first in A and B.)
Except that you don't necessarily want to treat all the points as unit vectors. I think in his application he's hoping to isolate [1,2,3] and [2,4,6] as two separate points. David
From alexandre.santos at ochipepe.org Tue Sep 29 11:06:57 2009 From: alexandre.santos at ochipepe.org (Alexandre Santos) Date: Tue, 29 Sep 2009 17:06:57 +0200 Subject: [SciPy-User] How to create multi-page tiff files with python tools? Message-ID:
Hello, My data is encoded as multi-page TIFF files. Because conditions are randomized, I need to sort the stack frames before proceeding with the analysis. While trying to do this with Python tools, I seem to have hit a block: I can't find a way of creating multi-page TIFF files with PIL, and saw nothing related to that in SciPy. Is there really no way of doing this in Python, leaving me no option but to fall back into Matlab? Cheers, Alexandre Santos
From seb.haase at gmail.com Tue Sep 29 11:14:40 2009 From: seb.haase at gmail.com (Sebastian Haase) Date: Tue, 29 Sep 2009 17:14:40 +0200 Subject: [SciPy-User] How to create multi-page tiff files with python tools? In-Reply-To: References: Message-ID:
Cheers, Alexandre Santos From seb.haase at gmail.com Tue Sep 29 11:14:40 2009 From: seb.haase at gmail.com (Sebastian Haase) Date: Tue, 29 Sep 2009 17:14:40 +0200 Subject: [SciPy-User] How to create multi-page tiff files with python tools? In-Reply-To: References: Message-ID: Hi, I have a working solution for this based on PIL. You can either refer to the PIL archive and look for my submitted patch and/or download my Priithon package (a python "all inclusive" package geared to image analysis) that includes the patched PIL (1.1.6) and look in useful.py for saveImg() ( saves 3D and 4D arrays into) multipage tif or saveImg_8() or saveTiffMultipage or saveTiffMultipageFromSeq Cheers, Sebastian Haase On Tue, Sep 29, 2009 at 5:06 PM, Alexandre Santos wrote: > Hello, > > My data is encoded as multi-page tiff files. Because conditions are > randomized, I need to sort the stack frames before proceeding with the > analysis. > > While trying to do this with python tools, I seem to have hit a block: > I can't find a way of creating multi-page tiff files with PIL, and saw > nothing related to that in SciPy. > > Is there really no way of doing this in python, leaving me no option > but to fall back into Matlab? > > Cheers, > Alexandre Santos > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From dwf at cs.toronto.edu Tue Sep 29 11:16:04 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 29 Sep 2009 11:16:04 -0400 Subject: [SciPy-User] How to create multi-page tiff files with python tools? In-Reply-To: References: Message-ID: <8125C6E1-86D8-4D8D-B474-66727FB91179@cs.toronto.edu> Do you need to create them or just read them? You can read them by opening with PIL and using the '.seek()' method to switch between frames. David On 29-Sep-09, at 11:06 AM, Alexandre Santos wrote: > Hello, > > My data is encoded as multi-page tiff files. Because conditions are > randomized, I need to sort the stack frames before proceeding with the > analysis. > > While trying to do this with python tools, I seem to have hit a block: > I can't find a way of creating multi-page tiff files with PIL, and saw > nothing related to that in SciPy. > > Is there really no way of doing this in python, leaving me no option > but to fall back into Matlab? > > Cheers, > Alexandre Santos > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From ochipepe at gmail.com Tue Sep 29 11:34:58 2009 From: ochipepe at gmail.com (Alexandre Santos) Date: Tue, 29 Sep 2009 17:34:58 +0200 Subject: [SciPy-User] How to create multi-page tiff files with python tools? In-Reply-To: <8125C6E1-86D8-4D8D-B474-66727FB91179@cs.toronto.edu> References: <8125C6E1-86D8-4D8D-B474-66727FB91179@cs.toronto.edu> Message-ID: 2009/9/29 David Warde-Farley : > Do you need to create them or just read them? You can read them by > opening with PIL and using the '.seek()' method to switch between > frames. I would like to create ordered stacks out of the randomized ones, so I would need the ability to create them. Alex > > David > > On 29-Sep-09, at 11:06 AM, Alexandre Santos wrote: > >> Hello, >> >> My data is encoded as multi-page tiff files. Because conditions are >> randomized, I need to sort the stack frames before proceeding with the >> analysis. 
From seb.haase at gmail.com Tue Sep 29 11:41:41 2009
From: seb.haase at gmail.com (Sebastian Haase)
Date: Tue, 29 Sep 2009 17:41:41 +0200
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: 
References: <8125C6E1-86D8-4D8D-B474-66727FB91179@cs.toronto.edu>
Message-ID: 

On Tue, Sep 29, 2009 at 5:34 PM, Alexandre Santos wrote:
> 2009/9/29 David Warde-Farley :
>> Do you need to create them or just read them? You can read them by
>> opening with PIL and using the '.seek()' method to switch between
>> frames.
>
> I would like to create ordered stacks out of the randomized ones, so I
> would need the ability to create them.
>
But you could then probably just create the ordered version "in
memory" as an ndim=3 ndarray - no need to actually save them back to
disk...

-Sebastian Haase

From ochipepe at gmail.com Tue Sep 29 11:46:35 2009
From: ochipepe at gmail.com (Alexandre Santos)
Date: Tue, 29 Sep 2009 17:46:35 +0200
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: 
References: <8125C6E1-86D8-4D8D-B474-66727FB91179@cs.toronto.edu>
Message-ID: 

2009/9/29 Sebastian Haase :
> On Tue, Sep 29, 2009 at 5:34 PM, Alexandre Santos wrote:
>> 2009/9/29 David Warde-Farley :
>>> Do you need to create them or just read them? You can read them by
>>> opening with PIL and using the '.seek()' method to switch between
>>> frames.
>>
>> I would like to create ordered stacks out of the randomized ones, so I
>> would need the ability to create them.
>>
> But you could then probably just create the ordered version "in
> memory" as an ndim=3 ndarray - no need to actually save them back to
> disk...

The problem is that the ordered stacks need to be analyzed by other
programs (matlab scripts and imagej), so I really would like to store
them ordered.

NB: I'm browsing through your package, and will probably come back
with questions...

>
> -Sebastian Haase
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From michele.petrazzo at unipex.it Tue Sep 29 12:06:10 2009
From: michele.petrazzo at unipex.it (Michele Petrazzo)
Date: Tue, 29 Sep 2009 16:06:10 +0000 (UTC)
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
References: 
Message-ID: 

Alexandre Santos <alexandre.santos at ochipepe.org> writes:
> Hello,

Hello,

> Is there really no way of doing this in python, leaving me no option
> but to fall back into Matlab?
>

Some time ago I created freeimagepy, a freeimage binding made with
ctypes. So, if you need it, it's released on sf.net under the LGPL
license. Of course, if you need help, I'm here.
> Cheers,
> Alexandre Santos
>

Michele

From ralf.gommers at googlemail.com Tue Sep 29 14:24:45 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 29 Sep 2009 14:24:45 -0400
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: 
References: 
Message-ID: 

On Tue, Sep 29, 2009 at 11:14 AM, Sebastian Haase wrote:

> Hi,
> I have a working solution for this based on PIL.
> You can either refer to the PIL archive and look for my submitted patch
> and/or
> download my Priithon package (a python "all inclusive" package geared
> to image analysis) that includes the patched PIL (1.1.6) and look in
> useful.py for
> saveImg() (saves 3D and 4D arrays into a multipage tif)
> or
> saveImg_8()
> or
> saveTiffMultipage
> or
> saveTiffMultipageFromSeq
>

Hi Sebastian, this is very useful functionality for me as well.

The question I have is if your patched PIL includes fixes for 16-bit
images. Right now I'm using a patched PIL kindly provided to me by
Zachary Pincus that fixes 16-bit issues. I saw that some improvements
for 16-bit were included in PIL trunk but not his patches. Your patch is
included it seems, so I could also run PIL trunk if someone can confirm
that 16-bit TIF images work. I'd prefer Priithon though because then I
could stop asking my users to compile PIL themselves...

Thanks,
Ralf

> Cheers,
>
> Sebastian Haase
>

From martin.enlund at gmail.com Tue Sep 29 16:52:27 2009
From: martin.enlund at gmail.com (Martin Enlund)
Date: Tue, 29 Sep 2009 22:52:27 +0200
Subject: [SciPy-User] scikits.timeseries, element-wise allocation with double boolean expressions
Message-ID: <887c5c2c0909291352h45b45187o6e0c507183e2c571@mail.gmail.com>

Hi there. I am sure this is simple to do, but since I am failing all
the time I turn to you!

I am trying something like this (dest, a, b, c, d are all the simplest
possible timeseries):
dest[:]=0
dest[a > b and c > d] = 1

I've tried something like this, to no avail:
c = ts.time_series(zip(a > b, c > d), dtype=[('ab', float), ('cd', float)],
start_date=a.start_date)

Error:
d = self.filled(True).all(axis=axis).view(type(self))
TypeError: cannot perform reduce with flexible type

From dwf at cs.toronto.edu Tue Sep 29 20:59:16 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Tue, 29 Sep 2009 20:59:16 -0400
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: 
References: 
Message-ID: <29AB395E-0D54-497A-ADB3-DD3C78F3693D@cs.toronto.edu>

On 29-Sep-09, at 2:24 PM, Ralf Gommers wrote:

> Hi Sebastian, this is very useful functionality for me as well.
>
> The question I have is if your patched PIL includes fixes for 16-bit
> images.
> Right now I'm using a patched PIL kindly provided to me by Zachary
> Pincus
> that fixes 16-bit issues. I saw that some improvements for 16-bit were
> included in PIL trunk but not his patches. Your patch is included it
> seems,
> so I could also run PIL trunk if someone can confirm that 16-bit TIF
> images
> work. I'd prefer Priithon though because then I could stop asking my
> users
> to compile PIL themselves...

I've been following this discussion somewhat and I wanted to point out
that (as far as I can remember) image I/O free of PIL dependence was
one of the stated goals of the image scikit. I'm not sure much progress
has been made on that front yet.
It seems that common requirements not being met by PIL are
a) full support for multipage TIFF (loading, creating, saving)
b) 16-bit multipage TIFF

Rather than monkeypatching PIL four ways from Sunday, maybe it would be
best to direct efforts towards building a PIL-free alternative?
Incorporation of very specific code from PIL shouldn't be an issue given
that PIL is quite liberally licensed.

David

(P.S. I'm CCing the scikits-image list as well, should you want to join
it, etc.)

From mattknox.ca at gmail.com Tue Sep 29 21:44:53 2009
From: mattknox.ca at gmail.com (Matt Knox)
Date: Wed, 30 Sep 2009 01:44:53 +0000 (UTC)
Subject: [SciPy-User] scikits.timeseries, element-wise allocation with double boolean expressions
References: <887c5c2c0909291352h45b45187o6e0c507183e2c571@mail.gmail.com>
Message-ID: 

Martin Enlund <martin.enlund at gmail.com> writes:
>
> Hi there. I am sure this is simple to do, but since I am failing all
> the time I turn to you!
>
> I am trying something like this (dest, a, b, c, d are all the simplest
> possible timeseries):
> dest[:]=0
> dest[a > b and c > d] = 1

I think you are just looking for the & operator. So instead of

>>> dest[a > b and c > d] = 1

you would have

>>> dest[(a > b) & (c > d)] = 1

I forget what the operator precedence is here; best to use brackets to
be explicit. For "or" it is |

Note that this is not anything unique to the timeseries scikit but
applies to indexing with boolean arrays in numpy in general.

- Matt
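Matt's point, shown with plain numpy arrays (made-up data; the same
indexing applies to timeseries objects, which behave like masked arrays
here):

    import numpy as np

    a = np.array([1, 5, 2, 7])
    b = np.array([4, 3, 2, 1])
    c = np.array([9, 8, 7, 6])
    d = np.array([5, 9, 1, 2])

    dest = np.zeros(4)
    # & is element-wise "and"; the parentheses matter because & binds
    # more tightly than the comparison operators.
    dest[(a > b) & (c > d)] = 1
    print dest  # [ 0.  0.  0.  1.]

Python's "and" fails here because it tries to reduce each boolean array
to a single truth value, which is ambiguous for arrays.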
From ralf.gommers at googlemail.com Tue Sep 29 22:11:13 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 29 Sep 2009 22:11:13 -0400
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: <29AB395E-0D54-497A-ADB3-DD3C78F3693D@cs.toronto.edu>
References: <29AB395E-0D54-497A-ADB3-DD3C78F3693D@cs.toronto.edu>
Message-ID: 

On Tue, Sep 29, 2009 at 8:59 PM, David Warde-Farley wrote:

> I've been following this discussion somewhat and I wanted to point out
> that (as far as I can remember) image I/O free of PIL dependence was
> one of the stated goals of the image scikit. I'm not sure much progress
> has been made on that front yet.
>
> It seems that common requirements not being met by PIL are
> a) full support for multipage TIFF (loading, creating, saving)
> b) 16-bit multipage TIFF
>
> Rather than monkeypatching PIL four ways from Sunday, maybe it would be
> best to direct efforts towards building a PIL-free alternative?
> Incorporation of very specific code from PIL shouldn't be an issue
> given that PIL is quite liberally licensed.
>

That would be great. I don't know much about PIL internals but I am up
for contributing tests and documentation if such an effort is made.

Cheers,
Ralf

>
> David
>
> (P.S. I'm CCing the scikits-image list as well, should you want to join
> it, etc.)
>

From robfalck at gmail.com Tue Sep 29 23:04:29 2009
From: robfalck at gmail.com (Rob Falck)
Date: Tue, 29 Sep 2009 23:04:29 -0400
Subject: [SciPy-User] criteria's to get the exit mode '0' in slsqp
In-Reply-To: <336972.80138.qm@web8319.mail.in.yahoo.com>
References: <3d375d730909181147jfb498c1ydd269177e74260e0@mail.gmail.com> <336972.80138.qm@web8319.mail.in.yahoo.com>
Message-ID: 

The argument 'acc' controls the convergence tolerance for fmin_slsqp.
From briefly scanning the Fortran routine, I believe Exit 0 results when
two successive iterations yield objective function values such that
abs(f-f0) < acc, the constraint violations are less than acc, and the
derivative of the norm < acc.

As Robert said, starting out in two slightly different places can put
you on two different 'hills' with extrema quite far apart. If that's a
problem for you then you may need to scale your problem, but scaling of
optimization problems is a black art which fmin_slsqp itself doesn't
deal with.

I think the convergence criterion is tested on line 439 of
http://projects.scipy.org/scipy/attachment/ticket/565/slsqp_optmz.f

On Fri, Sep 25, 2009 at 10:08 AM, jagan prabhu wrote:

> Hi all,
>
> Thank you, it was a good answer. one step ahead and specific...
>
> What are the criteria to get the exit mode '0' in case of 'fmin_slsqp'?
>
> ------
> Jagan
>

-- 
- Rob Falck

From boris.burle at univ-provence.fr Wed Sep 30 01:47:07 2009
From: boris.burle at univ-provence.fr (Borís BURLE)
Date: Wed, 30 Sep 2009 07:47:07 +0200
Subject: [SciPy-User] [Fwd: Extracting common triplet from coordinate lists]
Message-ID: <4AC2F0DB.7080307@univ-provence.fr>

Dear all,

Some days ago, I posted a question (reproduced below) to extract
triplets. David's response (In [36]: set(zip(x1, y1, z1)) & set(zip(x2,
y2, z2)) & set(zip(x3, y3, z3))) perfectly did the job: the solution is
very fast, and amazingly simple (python always surprises me...).

David's solution gave me the values of the triplets, which is what I
needed. Now I realize that I would also need to extract the position of
the triplets. So, in the example below, I'd like to extract [0,1,2] and
[2,3,3]. A solution giving only the position in the "x" list/array
would be ok.

Thanks in advance for your help,
	B.

-------- Original message --------
Dear all,

I have three sets of Cartesian coordinates and I would like to extract
the positions that are identical in the three sets. Here is a toy
example:

x1 = [1,2,3,4]
y1 = [4,3,2,1]
z1 = [3,4,1,2]

x2 = [7,1,2,3]
y2 = [5,4,3,2]
z2 = [6,3,2,1]

x3 = [7,6,1,3]
y3 = [5,4,4,2]
z3 = [6,9,3,1]

In this example, I would like to extract the triplets (1,4,3) and
(3,2,1) since they are common to the three sets. As you can notice, the
position of the common triplet is variable. In this example, data are in
list, but any code working on arrays would be ok for me too !!

Do you have any idea on how to do that?

Thanks in advance,
	Boris

-- 
Borís BURLE
Laboratoire de Neurobiologie de la Cognition
CNRS et Université de Provence
tel: (+33) 4 88 57 68 79
fax: (+33) 4 88 57 68 72
web page: http://www.up.univ-mrs.fr/lnc/ACT/act-fr.html

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Borís BURLE
Laboratoire de Neurobiologie de la Cognition
CNRS et Université de Provence
tel: (+33) 4 88 57 68 79
fax: (+33) 4 88 57 68 72
web page: http://www.up.univ-mrs.fr/lnc/ACT/act-fr.html
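One straightforward way to get those positions, building on the set
intersection (a sketch; list.index() scans linearly, so for large lists
a dict mapping triplet to index would be faster):

    x1 = [1, 2, 3, 4]; y1 = [4, 3, 2, 1]; z1 = [3, 4, 1, 2]
    x2 = [7, 1, 2, 3]; y2 = [5, 4, 3, 2]; z2 = [6, 3, 2, 1]
    x3 = [7, 6, 1, 3]; y3 = [5, 4, 4, 2]; z3 = [6, 9, 3, 1]

    t1 = zip(x1, y1, z1)
    t2 = zip(x2, y2, z2)
    t3 = zip(x3, y3, z3)

    for triplet in set(t1) & set(t2) & set(t3):
        # index() returns the position of the triplet in each list
        print triplet, [t1.index(triplet), t2.index(triplet), t3.index(triplet)]
    # (1, 4, 3) [0, 1, 2]
    # (3, 2, 1) [2, 3, 3]

This prints [0, 1, 2] and [2, 3, 3], matching the positions asked for
above (the set iteration order is arbitrary).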
From seb.haase at gmail.com Wed Sep 30 03:29:27 2009
From: seb.haase at gmail.com (Sebastian Haase)
Date: Wed, 30 Sep 2009 09:29:27 +0200
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: 
References: 
Message-ID: 

On Tue, Sep 29, 2009 at 8:24 PM, Ralf Gommers wrote:
>
> Hi Sebastian, this is very useful functionality for me as well.
>
> The question I have is if your patched PIL includes fixes for 16-bit
> images.
> Right now I'm using a patched PIL kindly provided to me by Zachary
> Pincus
> that fixes 16-bit issues. I saw that some improvements for 16-bit were
> included in PIL trunk but not his patches. Your patch is included it
> seems,
> so I could also run PIL trunk if someone can confirm that 16-bit TIF
> images
> work. I'd prefer Priithon though because then I could stop asking my
> users
> to compile PIL themselves...
>
> Thanks,
> Ralf
>

I'm working with EMCCD microscopy images where 16-bit-int data is very
important. I think the patches in Priithon and Zach's should be
identical or at least equivalent. I think the upcoming 1.1.7 PIL will
also include all of these patches.

There was a discussion about "forking" out the basic image I/O from
PIL, but generally people are against the idea of "working against"
the main PIL. The upcoming 1.1.7 also seems to alleviate many concerns
for the time being.

Priithon is of course more than just a patched PIL. It includes among
others: numpy, scipy, wxPython, pyOpenGL, numexpr, matplotlib and SWIG.
Also there are useful functions that I collected over the years, such
as an OpenGL-based 2d viewer which includes sliders to scroll through
higher dimensional nd-data (both single channel (grey) and
multi-channel (color overlay) are supported).

For more questions, there is a mailing list and a (somewhat oldish)
handbook...

Thanks for asking,
Sebastian Haase

From nicoletti at consorzio-innova.it Wed Sep 30 04:09:54 2009
From: nicoletti at consorzio-innova.it (Marco Nicoletti)
Date: Wed, 30 Sep 2009 10:09:54 +0200
Subject: [SciPy-User] Forced derivative interpolation??
Message-ID: <7025675161964DF7ACE12FF253F15D32@innova.locale>

Dear all,

I want to implement a spline interpolation forcing the condition on the
first or second derivative. In other words, I have a vector of position
(p), velocity (v) and acceleration (a) values; I want to interpolate
the position (p) vector imposing the conditions on the velocity and
acceleration values.

The classes UnivariateSpline() and interp1d() in the scipy.interpolate
package don't take derivatives as parameters (they export a method to
evaluate derivatives).

Any suggestions?

Thanks very much and have a nice day!

Marco Nicoletti
From ralf.gommers at googlemail.com Wed Sep 30 10:02:03 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Wed, 30 Sep 2009 10:02:03 -0400
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: 
References: 
Message-ID: 

On Wed, Sep 30, 2009 at 3:29 AM, Sebastian Haase wrote:
>
> I'm working with EMCCD microscopy images where 16-bit-int data is very
> important. I think the patches in Priithon and Zach's should be
> identical or at least equivalent. I think the upcoming 1.1.7 PIL will
> also include all of these patches.

Thanks, I will give it a try, and if I have any more questions I'll ask
on the Priithon list.

> There was a discussion about "forking" out the basic image I/O from
> PIL, but generally people are against the idea of "working against"
> the main PIL. The upcoming 1.1.7 also seems to alleviate many concerns
> for the time being.

You wouldn't have to call it "working against", and could make a
serious effort to contribute changes back. Looking from the outside,
PIL is still barely breathing. I just went and checked their bitbucket
repo, and 1.1.7 was tagged two months ago. Still no formal release, nor
any code committed in the last 4 months.......

Cheers,
Ralf

From seb.haase at gmail.com Wed Sep 30 10:17:40 2009
From: seb.haase at gmail.com (Sebastian Haase)
Date: Wed, 30 Sep 2009 16:17:40 +0200
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: 
References: 
Message-ID: 

On Wed, Sep 30, 2009 at 4:02 PM, Ralf Gommers wrote:
>
> You wouldn't have to call it "working against", and could make a
> serious effort to contribute changes back. Looking from the outside,
> PIL is still barely breathing. I just went and checked their bitbucket
> repo, and 1.1.7 was tagged two months ago. Still no formal release,
> nor any code committed in the last 4 months.......
>

I think the problem is that Fredrik Lundh is the only one who has
permission to add/change the code base.
I still find it very suspicious that somewhere on the PIL website it
states that you can pay (a lot of money) for a "special license" to
get early access to the development version - so even if you are
providing (free) patches via the mailing list, you would have to pay
to get access to the patched version !?
A couple of months ago I asked for an explanation but didn't get a
reply.

However, don't forget: I think many people are using PIL and it just
works for them - that could also be the reason why there is no noise
....

Cheers,
Sebastian Haase

From ralf.gommers at googlemail.com Wed Sep 30 10:55:21 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Wed, 30 Sep 2009 10:55:21 -0400
Subject: [SciPy-User] How to create multi-page tiff files with python tools?
In-Reply-To: 
References: 
Message-ID: 

On Wed, Sep 30, 2009 at 10:17 AM, Sebastian Haase wrote:
>
> I think the problem is that Fredrik Lundh is the only one who has
> permission to add/change the code base.
> I still find it very suspicious that somewhere on the PIL website it
> states that you can pay (a lot of money) for a "special license" to
> get early access to the development version - so even if you are
> providing (free) patches via the mailing list, you would have to pay
> to get access to the patched version !?
> A couple of months ago I asked for an explanation but didn't get a
> reply.
>

Yeah that is very odd. An attempt to put the I/O part of PIL in a scikit
may be enough of a push to improve that situation. The only other
important Python library I can think of that was this inert is
setuptools, and look what happened there.

> However, don't forget: I think many people are using PIL and it just
> works for them - that could also be the reason why there is no noise
> ....

Or because scientists don't like to make noise :) Also, PIL works for a
lot of common formats, but wouldn't it be nice to have support for more
image formats? I saw there is support for Andor cameras in Priithon,
then there are the custom formats of Princeton Instruments, PCO, etc.

Cheers,
Ralf

> Cheers,
> Sebastian Haase
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From ggellner at uoguelph.ca Wed Sep 30 10:59:58 2009
From: ggellner at uoguelph.ca (Gabriel Gellner)
Date: Wed, 30 Sep 2009 10:59:58 -0400
Subject: [SciPy-User] Toronto Job posting: Scipy programmer needed
In-Reply-To: 
References: 
Message-ID: 

A full time position in Toronto, Canada. See the attached pdf if you
are interested.

Gabriel

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 09.10.2009 Optical Software Designer MASc.pdf
Type: application/pdf
Size: 72047 bytes
Desc: not available

From henrylindsaysmith at gmail.com Wed Sep 30 12:22:01 2009
From: henrylindsaysmith at gmail.com (ninjasmith)
Date: Wed, 30 Sep 2009 09:22:01 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] memory blowout in my script with multiple iterations
Message-ID: <25684455.post@talk.nabble.com>

Hi, my first post here; hope this is the right place for this kind of
question, as I'm sure there must be a fairly simple answer to this.

I have a script that is iterating over a whole bunch of wav files:
reading them into an array, performing some LPC analysis on them, and
saving a bunch of specgrams and different wavs generated from them. As
my script iterates through the files the Python memory usage creeps up
and up until the machine starts paging.

I've been through the script and del'ed all the major objects I create,
but this doesn't seem to make any difference.

Any thoughts on how to tackle this problem?

From robert.kern at gmail.com Wed Sep 30 12:24:46 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 30 Sep 2009 11:24:46 -0500
Subject: [SciPy-User] [SciPy-user] memory blowout in my script with multiple iterations
In-Reply-To: <25684455.post@talk.nabble.com>
References: <25684455.post@talk.nabble.com>
Message-ID: <3d375d730909300924s5d241760x103116a60288639b@mail.gmail.com>

On Wed, Sep 30, 2009 at 11:22, ninjasmith wrote:
>
> I have a script that is iterating over a whole bunch of wav files:
> reading them into an array, performing some LPC analysis on them, and
> saving a bunch of specgrams and different wavs generated from them. As
> my script iterates through the files the Python memory usage creeps up
> and up until the machine starts paging.
>
> I've been through the script and del'ed all the major objects I create,
> but this doesn't seem to make any difference.
>
> Any thoughts on how to tackle this problem?

Not without being able to see the code. Try paring down your script to
the minimal program that exhibits the large memory usage.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco
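One frequent culprit in exactly this pattern, a loop that saves many
plots, is figures that are never released. Whether that is what is
happening here depends on the script, but if the specgrams come from
matplotlib, the usual fix looks like this (the file list and reader are
invented for illustration):

    import matplotlib.pyplot as plt

    for wavname in wav_files:       # wav_files: hypothetical list of paths
        data = read_wav(wavname)    # read_wav: placeholder for the real reader
        fig = plt.figure()
        plt.specgram(data, Fs=44100)
        fig.savefig(wavname + '.png')
        plt.close(fig)  # without this, every figure stays alive and
                        # memory grows on each iteration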
From jkington at wisc.edu Wed Sep 30 14:18:53 2009
From: jkington at wisc.edu (Joe Kington)
Date: Wed, 30 Sep 2009 13:18:53 -0500
Subject: [SciPy-User] Forced derivative interpolation??
In-Reply-To: <7025675161964DF7ACE12FF253F15D32@innova.locale>
References: <7025675161964DF7ACE12FF253F15D32@innova.locale>
Message-ID: 

Hi Marco,

Exactly what sort of constraints are you wanting to apply? i.e. Do you
want a specific velocity or acceleration everywhere (and get a least
squares fit to the other parameters)? Do you want to minimize the
acceleration or velocity while still fitting the data? Does the
interpolation need to fit the data exactly at each point where you have
data, or the best fit between your constraints and the data values?

Basically, as far as I know, there isn't a pre-built function in scipy
to do what you want, but it's not hard to write code to do it.

If you can describe what you need in a bit more detail, I'm pretty sure
I can point you in the right direction.

-Joe

On Wed, Sep 30, 2009 at 3:09 AM, Marco Nicoletti
<nicoletti at consorzio-innova.it> wrote:

> Dear all,
>
> I want to implement a spline interpolation forcing the condition on the
> first or second derivative. In other words, I have a vector of position
> (p), velocity (v) and acceleration (a) values; I want to interpolate
> the position (p) vector imposing the conditions on the velocity and
> acceleration values.
>
> The classes UnivariateSpline() and interp1d() in the scipy.interpolate
> package don't take derivatives as parameters (they export a method to
> evaluate derivatives).
>
> Any suggestions?
>
> Thanks very much and have a nice day!
>
> Marco Nicoletti
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From gustaf at laserpanda.com Wed Sep 30 14:58:46 2009
From: gustaf at laserpanda.com (Gustaf Nilsson)
Date: Wed, 30 Sep 2009 20:58:46 +0200
Subject: [SciPy-User] memoryError when i have plenty of available ram
Message-ID: 

Hiya

I know someone just started a memory thread, but I didn't wanna hijack
it.. My image processing app that I'm working on seems to crash with
"memoryError" when it hits about 1.1gb of mem usage (same on two
computers; has 2/4gb ram, xp 32bit).

I'm working with 12mpixel images at 32bit floating point, so each block
of memory used in different operations is about 140mb (if that helps).

Is it actually because it runs out of memory or can the error mean
something else?

cheers
Gusty

From peridot.faceted at gmail.com Wed Sep 30 15:12:40 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 30 Sep 2009 15:12:40 -0400
Subject: [SciPy-User] Forced derivative interpolation??
In-Reply-To: <7025675161964DF7ACE12FF253F15D32@innova.locale>
References: <7025675161964DF7ACE12FF253F15D32@innova.locale>
Message-ID: 

2009/9/30 Marco Nicoletti :
> Dear all,
>
> I want to implement a spline interpolation forcing the condition on the
> first or second derivative. In other words, I have a vector of position
> (p), velocity (v) and acceleration (a) values; I want to interpolate
> the position (p) vector imposing the conditions on the velocity and
> acceleration values.
>
> The classes UnivariateSpline() and interp1d() in the scipy.interpolate
> package don't take derivatives as parameters (they export a method to
> evaluate derivatives).
>
> Any suggestions?

If I have correctly understood your question, what you want to do is
produce an interpolating spline with not just specified point values
but specified derivative values at the given points. Scipy has at
least two different pieces of code that might help.

The first is, in recent versions of scipy,
scipy.interpolate.PiecewisePolynomial. This allows you to fit a
piecewise polynomial through a set of points, specifying derivatives at
each point. It doesn't allow you to impose a spline-like constraint
that higher derivatives must be continuous at the points. Its
evaluation is also implemented in pure python, so it won't be terribly
fast.

A second option, useful if you need fast evaluation, is to abuse
scipy's spline functions. scipy.interpolate.splrep doesn't take
derivatives, but what it returns is a triple t, c, k. Given a t, c, k,
you can then call splev, splint, splder, etcetera to get nice fast
evaluation in compiled code. So what you can do is fabricate your own
t, c, and k values. t is the list of knots, c is some sort of
coefficients, and k is the order of the spline. The brute-force way I
found to get these splines to produce the derivatives I wanted required
me to repeat values in the t array. But once you've fixed the t array,
the result is linear in the c values, so a little trial and error will
give you formulas to produce any curve you need.

Good luck,
Anne

> Thanks very much and have a nice day!
>
> Marco Nicoletti
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
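A minimal sketch of Anne's first option, PiecewisePolynomial (call
signature as in the scipy of this period; the data are made up, and
each entry of yi is the value at that point followed by as many
derivatives as you want to impose):

    import numpy as np
    from scipy.interpolate import PiecewisePolynomial

    t = np.array([0.0, 1.0, 2.0, 3.0])   # sample times (made-up data)
    p = np.array([0.0, 2.0, 1.0, 3.0])   # positions
    v = np.array([1.0, 0.0, -1.0, 0.0])  # velocities to impose

    # yi[i] = [position, velocity] at t[i]; add accelerations as a
    # third entry to constrain the second derivative as well.
    yi = [[pi, vi] for pi, vi in zip(p, v)]
    interp = PiecewisePolynomial(t, yi)

    ts = np.linspace(0.0, 3.0, 7)
    print interp(ts)  # interpolated positions honoring p and v

In later SciPy releases the same functionality is available as
scipy.interpolate.BPoly.from_derivatives.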
From dwf at cs.toronto.edu Wed Sep 30 16:51:59 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Wed, 30 Sep 2009 16:51:59 -0400
Subject: [SciPy-User] scipy.reddit.com
Message-ID: <2F5DD634-70CB-4A5B-ADD8-F3FFCDF41A1B@cs.toronto.edu>

In the spirit of the 'advice' site, and given that we're thinking of
moving scipy.org to more static content (once I have some free time on
my hands again, which should be soon!), I set up a 'subreddit' on
reddit.com for Python-in-Science related links. I even came up with a
somewhat spiffy logo for it.

Think of it as a communal, collaboratively filtered (via up/down votes,
using the arrows next to each submission) bookmarks folder/news
site/etc. I'd encourage people to use it and add to it if they feel it
might be of use to the community.

The address is http://scipy.reddit.com/ , or equivalently
http://www.reddit.com/r/scipy

David

From bruce at clearscienceinc.com Wed Sep 30 16:44:29 2009
From: bruce at clearscienceinc.com (Bruce Ford)
Date: Wed, 30 Sep 2009 16:44:29 -0400
Subject: [SciPy-User] numpy.squeeze not squeezing
Message-ID: 

Squeeze doesn't seem to be squeezing. What am I missing?

An array extracted from a NetCDF3 file using NetCDF4 is shaped:
(248,1,181,360)

I want it to be shaped (248,181,360)

out = np.squeeze(in)
print out.shape

yields ()

Am I missing a step? Any assistance would be appreciated!

Bruce
---------------------------------------
Bruce W. Ford
Clear Science, Inc.
bruce at clearscienceinc.com

From robert.kern at gmail.com Wed Sep 30 17:16:39 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 30 Sep 2009 16:16:39 -0500
Subject: [SciPy-User] numpy.squeeze not squeezing
In-Reply-To: 
References: 
Message-ID: <3d375d730909301416w4eb6dbbei9b47a22bd141b91a@mail.gmail.com>

On Wed, Sep 30, 2009 at 15:44, Bruce Ford wrote:
> Squeeze doesn't seem to be squeezing. What am I missing?
>
> An array extracted from a NetCDF3 file using NetCDF4 is shaped:
> (248,1,181,360)
>
> I want it to be shaped (248,181,360)
>
> out = np.squeeze(in)
> print out.shape
>
> yields ()
>
> Am I missing a step?

It works for me:

In [1]: x = np.empty((248,1,181,360))

In [2]: np.squeeze(x).shape
Out[2]: (248, 181, 360)

Can you give us a minimal, self-contained script that demonstrates the
problem? Being self-contained will probably be impossible, but even
seeing such a minimal script will be helpful even if we can't run it
with your data file.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From bruce at clearscienceinc.com Wed Sep 30 18:03:25 2009
From: bruce at clearscienceinc.com (Bruce Ford)
Date: Wed, 30 Sep 2009 18:03:25 -0400
Subject: [SciPy-User] numpy.squeeze not squeezing
In-Reply-To: <3d375d730909301416w4eb6dbbei9b47a22bd141b91a@mail.gmail.com>
References: <3d375d730909301416w4eb6dbbei9b47a22bd141b91a@mail.gmail.com>
Message-ID: 

Robert, thanks for responding. Your response makes me think
something in my array is preventing squeeze from working correctly.
> Here's a cleaned up version of my script: > > > #!/usr/local/env python > from mpl_toolkits.basemap import Basemap > import numpy as np #used to preform simple math functions on data > from netCDF4 import Dataset > #decide which file to open > year = 1995 > month = "%02d" % 5 > > #Set up file names > filename = "/data/ww3/NetCDF/3_hourly/ww3."+str(year)+str(month)+ ".nc" > opennc = Dataset(filename, mode="r") > > swh = opennc.variables['sig_wav_ht'] > print swh.shape ?#gives (248,1,181,360) > swh1 = np.squeeze(swh) > > print 'SWH shape: ', swh1.shape ?#gives () print type(swh1) I'm not sure that swh1 is actually an ndarray. It might be a different class that masquerades as a numpy array. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bruce at clearscienceinc.com Wed Sep 30 18:15:22 2009 From: bruce at clearscienceinc.com (Bruce Ford) Date: Wed, 30 Sep 2009 18:15:22 -0400 Subject: [SciPy-User] numpy.squeeze not squeezing In-Reply-To: <3d375d730909301505q221473d6i450fcb23b14cfd58@mail.gmail.com> References: <3d375d730909301416w4eb6dbbei9b47a22bd141b91a@mail.gmail.com> <3d375d730909301505q221473d6i450fcb23b14cfd58@mail.gmail.com> Message-ID: print type(swh1) #gave print type(swh) #gave --------------------------------------- Bruce W. Ford Clear Science, Inc. bruce at clearscienceinc.com bruce.w.ford.ctr at navy.smil.mil http://www.ClearScienceInc.com Phone/Fax: 904-379-9704 8241 Parkridge Circle N. Jacksonville, FL 32211 Skype: bruce.w.ford Google Talk: fordbw at gmail.com On Wed, Sep 30, 2009 at 6:05 PM, Robert Kern wrote: > On Wed, Sep 30, 2009 at 17:03, Bruce Ford wrote: >> Robert, thanks for responding. ?Your response makes me things >> something in my array is preventing squeeze form working correctly. >> Here's a cleaned up version of my script: >> >> >> #!/usr/local/env python >> from mpl_toolkits.basemap import Basemap >> import numpy as np #used to preform simple math functions on data >> from netCDF4 import Dataset >> #decide which file to open >> year = 1995 >> month = "%02d" % 5 >> >> #Set up file names >> filename = "/data/ww3/NetCDF/3_hourly/ww3."+str(year)+str(month)+ ".nc" >> opennc = Dataset(filename, mode="r") >> >> swh = opennc.variables['sig_wav_ht'] >> print swh.shape ?#gives (248,1,181,360) >> swh1 = np.squeeze(swh) >> >> print 'SWH shape: ', swh1.shape ?#gives () > > print type(swh1) > > I'm not sure that swh1 is actually an ndarray. It might be a different > class that masquerades as a numpy array. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ?-- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed Sep 30 18:21:26 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Sep 2009 17:21:26 -0500 Subject: [SciPy-User] numpy.squeeze not squeezing In-Reply-To: References: <3d375d730909301416w4eb6dbbei9b47a22bd141b91a@mail.gmail.com> <3d375d730909301505q221473d6i450fcb23b14cfd58@mail.gmail.com> Message-ID: <3d375d730909301521o7a4652bm6b947a4ba76af726@mail.gmail.com> 2009/9/30 Bruce Ford : > print type(swh1) ?#gave > > print type(swh) ?#gave Ah, yes. The latter is what I meant. 
Yup, my diagnosis is correct. np.squeeze() is interpreting swh as a scalar (or rank 0 array) with dtype=object rather than an array. You will have to get a real ndarray from the Variable. I am not familiar with the netcdf4 API, so you will have to refer to its documentation on how to do that. It won't be as simple as np.asarray(swh), I am afraid. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Wed Sep 30 21:25:50 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 1 Oct 2009 10:25:50 +0900 Subject: [SciPy-User] memoryError when i have plenty of available ram In-Reply-To: References: Message-ID: <5b8d13220909301825ta758091pa25f269b6f91ca45@mail.gmail.com> On Thu, Oct 1, 2009 at 3:58 AM, Gustaf Nilsson wrote: > Hiya > > I know someone just started a memory thread, but i didnt wanna hijack it.. > My image processing app that im working on seems to crash with "memoryError" > when it hits about 1.1gb of mem usage (same on two computers; has 2/4gb ram, > xp 32bit) If possible, a small script which reproduces the problem would be helpful. Keep in mind that on windows, by default, your python script cannot use more than 2 Gb anyway, even if you have 4Gb of memory. David From rmay31 at gmail.com Wed Sep 30 21:25:42 2009 From: rmay31 at gmail.com (Ryan May) Date: Wed, 30 Sep 2009 20:25:42 -0500 Subject: [SciPy-User] numpy.squeeze not squeezing In-Reply-To: <3d375d730909301521o7a4652bm6b947a4ba76af726@mail.gmail.com> References: <3d375d730909301416w4eb6dbbei9b47a22bd141b91a@mail.gmail.com> <3d375d730909301505q221473d6i450fcb23b14cfd58@mail.gmail.com> <3d375d730909301521o7a4652bm6b947a4ba76af726@mail.gmail.com> Message-ID: On Wed, Sep 30, 2009 at 5:21 PM, Robert Kern wrote: > 2009/9/30 Bruce Ford : >> print type(swh1) ?#gave >> >> print type(swh) ?#gave > > Ah, yes. The latter is what I meant. > > Yup, my diagnosis is correct. np.squeeze() is interpreting swh as a > scalar (or rank 0 array) with dtype=object rather than an array. You > will have to get a real ndarray from the Variable. I am not familiar > with the netcdf4 API, so you will have to refer to its documentation > on how to do that. It won't be as simple as np.asarray(swh), I am > afraid. If it's anything like the other NetCDF bindings, it's just: swh_arr = swh[:] Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from Norman, Oklahoma, United States From d.l.goldsmith at gmail.com Wed Sep 30 23:00:14 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 30 Sep 2009 20:00:14 -0700 Subject: [SciPy-User] memoryError when i have plenty of available ram In-Reply-To: <5b8d13220909301825ta758091pa25f269b6f91ca45@mail.gmail.com> References: <5b8d13220909301825ta758091pa25f269b6f91ca45@mail.gmail.com> Message-ID: <45d1ab480909302000w629e1f6eudada779a4839d7e0@mail.gmail.com> On Wed, Sep 30, 2009 at 6:25 PM, David Cournapeau wrote: > On Thu, Oct 1, 2009 at 3:58 AM, Gustaf Nilsson > wrote: > > Hiya > > > > I know someone just started a memory thread, but i didnt wanna hijack > it.. > > My image processing app that im working on seems to crash with > "memoryError" > > when it hits about 1.1gb of mem usage (same on two computers; has 2/4gb > ram, > > xp 32bit) > > If possible, a small script which reproduces the problem would be helpful. 
>
> Keep in mind that on windows, by default, your python script cannot
> use more than 2 Gb anyway, even if you have 4Gb of memory.
>

Interesting. Is this true in Vista? Windows 7? You say "by default": is
there a trivial "workaround" (other than dividing up your memory usage,
e.g., arrays, into blocks smaller than 2Gb)?

DG

>
> David
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From david at ar.media.kyoto-u.ac.jp Wed Sep 30 23:01:43 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 01 Oct 2009 12:01:43 +0900
Subject: [SciPy-User] memoryError when i have plenty of available ram
In-Reply-To: <45d1ab480909302000w629e1f6eudada779a4839d7e0@mail.gmail.com>
References: <5b8d13220909301825ta758091pa25f269b6f91ca45@mail.gmail.com> <45d1ab480909302000w629e1f6eudada779a4839d7e0@mail.gmail.com>
Message-ID: <4AC41B97.6080509@ar.media.kyoto-u.ac.jp>

David Goldsmith wrote:
> On Wed, Sep 30, 2009 at 6:25 PM, David Cournapeau wrote:
>
>     If possible, a small script which reproduces the problem would be
>     helpful.
>
>     Keep in mind that on windows, by default, your python script cannot
>     use more than 2 Gb anyway, even if you have 4Gb of memory.
>
> Interesting. Is this true in Vista? Windows 7?

It is true for (at least) most OSes, actually, and a limitation of
32-bit addressing. The only workaround is to use several processes.

The origin is that a process cannot 'see' more than 4 Gb in 32 bits,
and part of it has to be reserved for the kernel - windows and linux
by default limit the virtual addressing to 2 Gb per process in the
userland. There are options to split between 3 Gb user / 1 Gb kernel,
or the contrary, in linux, and similar in windows. There is a pretty
good explanation of the gory details for linux here:

http://kerneltrap.org/node/2450

(I would be surprised if the windows kernel was fundamentally different
- except for the fork thing of course). The true solution is to use a
64-bit OS.

cheers,

David

From jlconlin at gmail.com Wed Sep 30 23:59:02 2009
From: jlconlin at gmail.com (Jeremy Conlin)
Date: Wed, 30 Sep 2009 21:59:02 -0600
Subject: [SciPy-User] Anyone have an example of using arpack (scipy.sparse.linalg.eigen)?
Message-ID: <2588da420909302059y63abab76u4be547fdfda81b33@mail.gmail.com>