From chiaracaronna at hotmail.com Thu Feb 1 11:46:26 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Thu, 01 Feb 2007 16:46:26 +0000 Subject: [SciPy-user] error importing scipy Message-ID: Hello, since a couple of days I got an error when I try to run "from scipy import *" here is the error message.... " Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line 227, in ? import decomp File "/usr/local/lib/python2.4/site-packages/scipy/linalg/decomp.py", line 21, in ? from blas import get_blas_funcs File "/usr/local/lib/python2.4/site-packages/scipy/linalg/blas.py", line 14, in ? from scipy.linalg import fblas ImportError: /usr/local/lib/python2.4/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_" does anyone have a clue?! thanks, Chiara _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From robert.kern at gmail.com Thu Feb 1 11:56:14 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 10:56:14 -0600 Subject: [SciPy-user] error importing scipy In-Reply-To: References: Message-ID: <45C21BAE.7040004@gmail.com> Chiara Caronna wrote: > Hello, > since a couple of days I got an error when I try to run "from scipy import > *" > here is the error message.... > > > " Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", > line 8, in ? > from basic import * > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line > 227, in ? > import decomp > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/decomp.py", line > 21, in ? 
> from blas import get_blas_funcs > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/blas.py", line > 14, in ? > from scipy.linalg import fblas > ImportError: /usr/local/lib/python2.4/site-packages/scipy/linalg/fblas.so: > undefined symbol: srotmg_" > > does anyone have a clue?! Your scipy was not linked to a BLAS library correctly. What BLAS library were you trying to link against? Where are its files located on your system? Did you use a site.cfg file to configure scipy? Can you show us the output of $ cd ~/src/scipy # or whatever $ python setup.py configure -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From chiaracaronna at hotmail.com Thu Feb 1 12:31:44 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Thu, 01 Feb 2007 17:31:44 +0000 Subject: [SciPy-user] error importing scipy In-Reply-To: <45C21BAE.7040004@gmail.com> Message-ID: >From: Robert Kern >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] error importing scipy >Date: Thu, 01 Feb 2007 10:56:14 -0600 > >Chiara Caronna wrote: > > Hello, > > since a couple of days I got an error when I try to run "from scipy >import > > *" > > here is the error message.... > > > > > > " Traceback (most recent call last): > > File "", line 1, in ? > > File >"/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", > > line 8, in ? > > from basic import * > > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", >line > > 227, in ? > > import decomp > > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/decomp.py", >line > > 21, in ? > > from blas import get_blas_funcs > > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/blas.py", >line > > 14, in ? 
> > from scipy.linalg import fblas > > ImportError: >/usr/local/lib/python2.4/site-packages/scipy/linalg/fblas.so: > > undefined symbol: srotmg_" > > > > does anyone have a clue?! > >Your scipy was not linked to a BLAS library correctly. What BLAS library >were >you trying to link against? '_' ... I have never tried to do that... at least I am not aware of... >Where are its files located on your system? I don't know; how can I find that out? >Did you use a site.cfg file to configure scipy? I am not sure I understood the question... I downloaded the file scipy-0.5.2.tar.gz, extracted it and followed the instructions... so I guess the answer is "no, I didn't" >Can you show us the output of > > $ cd ~/src/scipy # or whatever > $ python setup.py configure I did this: cd /usr/local/lib/python2.4/site-packages/scipy python setup.py configure here is the output: non-existing path in 'cluster': 'src/vq_wrap.cpp' Appending scipy.cluster configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.cluster') mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /usr/lib fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /usr/local/lib libraries rfftw,fftw not found in /usr/lib fftw2 not found NOT AVAILABLE dfftw_info: libraries drfftw,dfftw not found in /usr/local/lib libraries drfftw,dfftw not found in /usr/lib dfftw not found NOT AVAILABLE djbfft_info: NOT AVAILABLE could not resolve pattern in 'fftpack': 'dfftpack/*.f' non-existing path in 'fftpack': 'fftpack.pyf' non-existing path in 'fftpack': 'src/zfft.c' non-existing path in 'fftpack': 'src/drfft.c' non-existing path in 'fftpack': 'src/zrfft.c' non-existing path in 'fftpack': 'src/zfftnd.c' non-existing path in 'fftpack': 'convolve.pyf' non-existing path in 'fftpack': 'src/convolve.c' Appending scipy.fftpack configuration to
scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.fftpack') blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/local/lib Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] language = c Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler invalid command 'configure' Could not locate executable ifort Could not locate executable ifc Could not locate executable ifort Could not locate executable efort Could not locate executable efc Could not locate executable ifort Could not locate executable efort Could not locate executable efc customize IntelFCompiler invalid command 'configure' customize LaheyFCompiler invalid command 'configure' customize PGroupFCompiler invalid command 'configure' customize AbsoftFCompiler invalid command 'configure' customize NAGFCompiler invalid command 'configure' customize VastFCompiler invalid command 'configure' customize CompaqFCompiler invalid command 'configure' customize IntelItaniumFCompiler invalid command 'configure' customize IntelEM64TFCompiler invalid command 'configure' customize Gnu95FCompiler invalid command 'configure' customize G95FCompiler invalid command 'configure' customize GnuFCompiler invalid command 'configure' customize Gnu95FCompiler invalid command 'configure' customize GnuFCompiler customize GnuFCompiler using config FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"?.?.?\\""')] could not resolve pattern in 'integrate': 'linpack_lite/*.f' could not resolve pattern in 'integrate': 'mach/*.f' could not resolve pattern in 'integrate': 'quadpack/*.f' could not resolve pattern in 'integrate': 'odepack/*.f' non-existing path in 'integrate': '_quadpackmodule.c' non-existing 
path in 'integrate': '_odepackmodule.c' non-existing path in 'integrate': 'vode.pyf' Appending scipy.integrate configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.integrate') could not resolve pattern in 'interpolate': 'fitpack/*.f' non-existing path in 'interpolate': '_fitpackmodule.c' non-existing path in 'interpolate': 'fitpack.pyf' Appending scipy.interpolate configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.interpolate') non-existing path in 'io': 'numpyiomodule.c' Appending scipy.io configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.io') ATLAS version ?.?.? non-existing path in 'lib/blas': 'fblas.pyf.src' non-existing path in 'lib/blas': 'fblaswrap.f.src' could not resolve pattern in 'lib/blas': 'fblas_l?.pyf.src' non-existing path in 'lib/blas': 'cblas.pyf.src' could not resolve pattern in 'lib/blas': 'cblas_l?.pyf.src' Appending scipy.lib.blas configuration to scipy.lib Ignoring attempt to set 'name' (from 'scipy.lib' to 'scipy.lib.blas') lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'lapack', 'blas'] library_dirs = ['/usr/lib'] language = f77 customize GnuFCompiler invalid command 'configure' customize IntelFCompiler invalid command 'configure' customize LaheyFCompiler invalid command 'configure' customize PGroupFCompiler invalid command 'configure' customize AbsoftFCompiler invalid command 'configure' customize NAGFCompiler invalid command 'configure' customize VastFCompiler invalid command 'configure' customize CompaqFCompiler invalid command 'configure' customize IntelItaniumFCompiler invalid command 'configure' customize IntelEM64TFCompiler invalid command 'configure' 
customize Gnu95FCompiler invalid command 'configure' customize G95FCompiler invalid command 'configure' customize GnuFCompiler invalid command 'configure' customize Gnu95FCompiler invalid command 'configure' customize GnuFCompiler customize GnuFCompiler using config FOUND: libraries = ['lapack', 'lapack', 'blas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('ATLAS_INFO', '"\\"?.?.?\\""')] ATLAS version ?.?.? non-existing path in 'lib/lapack': 'flapack.pyf.src' could not resolve pattern in 'lib/lapack': 'flapack_*.pyf.src' non-existing path in 'lib/lapack': 'clapack.pyf.src' non-existing path in 'lib/lapack': 'calc_lwork.f' non-existing path in 'lib/lapack': 'atlas_version.c' Appending scipy.lib.lapack configuration to scipy.lib Ignoring attempt to set 'name' (from 'scipy.lib' to 'scipy.lib.lapack') Appending scipy.lib configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.lib') ATLAS version ?.?.? non-existing path in 'linalg': 'src/fblaswrap.f' non-existing path in 'linalg': 'generic_fblas.pyf' non-existing path in 'linalg': 'generic_fblas1.pyf' non-existing path in 'linalg': 'generic_fblas2.pyf' non-existing path in 'linalg': 'generic_fblas3.pyf' non-existing path in 'linalg': 'generic_cblas.pyf' non-existing path in 'linalg': 'generic_cblas1.pyf' non-existing path in 'linalg': 'generic_flapack.pyf' non-existing path in 'linalg': 'flapack_user_routines.pyf' non-existing path in 'linalg': 'generic_clapack.pyf' non-existing path in 'linalg': 'src/det.f' non-existing path in 'linalg': 'src/lu.f' non-existing path in 'linalg': 'src/calc_lwork.f' non-existing path in 'linalg': 'atlas_version.c' non-existing path in 'linalg': 'iterative/STOPTEST2.f.src' non-existing path in 'linalg': 'iterative/getbreak.f.src' non-existing path in 'linalg': 'iterative/BiCGREVCOM.f.src' non-existing path in 'linalg': 'iterative/BiCGSTABREVCOM.f.src' non-existing path in 'linalg': 'iterative/CGREVCOM.f.src' non-existing path in 'linalg': 
'iterative/CGSREVCOM.f.src' non-existing path in 'linalg': 'iterative/GMRESREVCOM.f.src' non-existing path in 'linalg': 'iterative/QMRREVCOM.f.src' non-existing path in 'linalg': 'iterative/_iterative.pyf.src' Appending scipy.linalg configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.linalg') could not resolve pattern in 'linsolve': 'SuperLU/SRC/*.c' non-existing path in 'linsolve': '_zsuperlumodule.c' non-existing path in 'linsolve': '_superlu_utils.c' non-existing path in 'linsolve': '_superluobject.c' non-existing path in 'linsolve': '_dsuperlumodule.c' non-existing path in 'linsolve': '_superlu_utils.c' non-existing path in 'linsolve': '_superluobject.c' non-existing path in 'linsolve': '_csuperlumodule.c' non-existing path in 'linsolve': '_superlu_utils.c' non-existing path in 'linsolve': '_superluobject.c' non-existing path in 'linsolve': '_ssuperlumodule.c' non-existing path in 'linsolve': '_superlu_utils.c' non-existing path in 'linsolve': '_superluobject.c' umfpack_info: libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /usr/local/lib/python2.4/site-packages/numpy/distutils/system_info.py:401: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. 
warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE non-existing path in 'linsolve/umfpack': 'umfpack.i' non-existing path in 'linsolve/umfpack': 'umfpack.i' Appending scipy.linsolve.umfpack configuration to scipy.linsolve Ignoring attempt to set 'name' (from 'scipy.linsolve' to 'scipy.linsolve.umfpack') Appending scipy.linsolve configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.linsolve') non-existing path in 'maxentropy': 'doc' Appending scipy.maxentropy configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.maxentropy') Appending scipy.misc configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.misc') non-existing path in 'odr': 'odrpack/d_odr.f' non-existing path in 'odr': 'odrpack/d_mprec.f' non-existing path in 'odr': 'odrpack/dlunoc.f' non-existing path in 'odr': 'odrpack/d_lpk.f' non-existing path in 'odr': '__odrpack.c' Appending scipy.odr configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.odr') could not resolve pattern in 'optimize': 'minpack/*f' non-existing path in 'optimize': '_minpackmodule.c' could not resolve pattern in 'optimize': 'Zeros/*.c' non-existing path in 'optimize': 'zeros.c' non-existing path in 'optimize': 'lbfgsb/lbfgsb.pyf' non-existing path in 'optimize': 'lbfgsb/routines.f' non-existing path in 'optimize': 'tnc/moduleTNC.c' non-existing path in 'optimize': 'tnc/tnc.c' non-existing path in 'optimize': 'cobyla/cobyla.pyf' non-existing path in 'optimize': 'cobyla/cobyla2.f' non-existing path in 'optimize': 'cobyla/trstlp.f' non-existing path in 'optimize': 'minpack2/minpack2.pyf' non-existing path in 'optimize': 'minpack2/dcsrch.f' non-existing path in 'optimize': 'minpack2/dcstep.f' Appending scipy.optimize configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.optimize') Appending scipy.sandbox configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.sandbox') non-existing path in 
'signal': 'sigtoolsmodule.c' non-existing path in 'signal': 'firfilter.c' non-existing path in 'signal': 'medianfilter.c' non-existing path in 'signal': 'sigtools.h' non-existing path in 'signal': 'splinemodule.c' non-existing path in 'signal': 'S_bspline_util.c' non-existing path in 'signal': 'D_bspline_util.c' non-existing path in 'signal': 'C_bspline_util.c' non-existing path in 'signal': 'Z_bspline_util.c' non-existing path in 'signal': 'bspline_util.c' Appending scipy.signal configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.signal') non-existing path in 'sparse': 'sparsetools/spblas.f.src' non-existing path in 'sparse': 'sparsetools/spconv.f.src' non-existing path in 'sparse': 'sparsetools/sparsetools.pyf.src' Appending scipy.sparse configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.sparse') could not resolve pattern in 'special': 'c_misc/*.c' could not resolve pattern in 'special': 'cephes/*.c' could not resolve pattern in 'special': 'mach/*.f' could not resolve pattern in 'special': 'amos/*.f' could not resolve pattern in 'special': 'toms/*.f' could not resolve pattern in 'special': 'cdflib/*.f' could not resolve pattern in 'special': 'specfun/*.f' non-existing path in 'special': '_cephesmodule.c' non-existing path in 'special': 'amos_wrappers.c' non-existing path in 'special': 'specfun_wrappers.c' non-existing path in 'special': 'toms_wrappers.c' non-existing path in 'special': 'cdf_wrappers.c' non-existing path in 'special': 'ufunc_extras.c' non-existing path in 'special': 'specfun.pyf' Appending scipy.special configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.special') could not resolve pattern in 'stats': 'statlib/*.f' non-existing path in 'stats': 'statlib.pyf' non-existing path in 'stats': 'futil.f' non-existing path in 'stats': 'mvn.pyf' non-existing path in 'stats': 'mvndst.f' Appending scipy.stats configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 
'scipy.stats') non-existing path in 'ndimage': 'src/nd_image.c' non-existing path in 'ndimage': 'src/ni_filters.c' non-existing path in 'ndimage': 'src/ni_fourier.c' non-existing path in 'ndimage': 'src/ni_interpolation.c' non-existing path in 'ndimage': 'src/ni_measure.c' non-existing path in 'ndimage': 'src/ni_morphology.c' non-existing path in 'ndimage': 'src/ni_support.c' non-existing path in 'ndimage': 'src' Appending scipy.ndimage configuration to scipy Ignoring attempt to set 'name' (from 'scipy' to 'scipy.ndimage') Warning: Assuming default configuration (stsci/convolve/{setup_convolve,setup}.py was not found) Traceback (most recent call last): File "setup.py", line 31, in ? setup(**configuration(top_path='').todict()) File "setup.py", line 23, in configuration config.add_subpackage('stsci') File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 765, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 748, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 695, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "/usr/local/lib/python2.4/site-packages/scipy/stsci/setup.py", line 5, in configuration config.add_subpackage('convolve') File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 765, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 741, in get_subpackage caller_level = caller_level+1) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 541, in __init__ raise ValueError("%r is not a directory" % (package_path,)) ValueError: 'stsci/convolve' is not a directory > >-- >Robert Kern > >"I have come to believe that the whole world is an enigma, a harmless >enigma > that is made terrible by our own mad attempt to interpret it as though 
it >had > an underlying truth." > -- Umberto Eco >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Don't just search. Find. Check out the new MSN Search! http://search.msn.com/ From robert.kern at gmail.com Thu Feb 1 12:41:46 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 11:41:46 -0600 Subject: [SciPy-user] error importing scipy In-Reply-To: References: Message-ID: <45C2265A.4040101@gmail.com> Chiara Caronna wrote: >> From: Robert Kern >> Your scipy was not linked to a BLAS library correctly. What BLAS library >> were >> you trying to link against? > > '_' ... I have never tried to do that... at least I am not aware of... Okay, what kind of system are you on? If Linux, what distribution? What LAPACK and BLAS packages do you have installed? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From chiaracaronna at hotmail.com Thu Feb 1 12:49:34 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Thu, 01 Feb 2007 17:49:34 +0000 Subject: [SciPy-user] error importing scipy In-Reply-To: <45C2265A.4040101@gmail.com> Message-ID: >From: Robert Kern >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] error importing scipy >Date: Thu, 01 Feb 2007 11:41:46 -0600 > >Chiara Caronna wrote: > >> From: Robert Kern > > >> Your scipy was not linked to a BLAS library correctly. What BLAS >library > >> were > >> you trying to link against? > > > > '_' ... I have never tried to do that... at least I am not aware of... > >Okay, what kind of system are you on? If Linux, what distribution? I have Linux, Suse 8.4 (quite old, I know, but I can't change it, it's my office pc...) 
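A quick way to narrow down an undefined-symbol failure like the srotmg_ one above is to ask each candidate library directly whether it exports the symbol. A minimal sketch using only the Python standard library; the BLAS path in the comment is an assumption and should be adjusted to wherever the local BLAS actually lives:

```python
import ctypes
import ctypes.util

def has_symbol(library, symbol):
    """Return True if the shared library at `library` exports `symbol`."""
    try:
        lib = ctypes.CDLL(library)
        getattr(lib, symbol)  # raises AttributeError if the symbol is absent
        return True
    except (OSError, AttributeError):
        return False

# Illustrative check (path is an assumption): does the system BLAS provide
# the routine that scipy's fblas.so failed to resolve?
#   has_symbol("/usr/lib/libblas.so", "srotmg_")

# Sanity check against a library that is essentially always present:
libm = ctypes.util.find_library("m")
print(has_symbol(libm, "cos"))        # libm exports cos
print(has_symbol(libm, "srotmg_"))    # libm is not a BLAS
```

If the BLAS that scipy was linked against does not export srotmg_, the fix is to rebuild scipy against a complete BLAS rather than to patch the import.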
>What LAPACK and BLAS packages do you have installed? I have never installed them... how can I check which I have? > >-- >Robert Kern > >"I have come to believe that the whole world is an enigma, a harmless >enigma > that is made terrible by our own mad attempt to interpret it as though it >had > an underlying truth." > -- Umberto Eco >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user From rowen at cesmail.net Thu Feb 1 13:21:58 2007 From: rowen at cesmail.net (Russell E. Owen) Date: Thu, 01 Feb 2007 10:21:58 -0800 Subject: [SciPy-user] What is the best way to install on a Mac? References: <723eb6930701310617v42a4a098s5582ef6dff01e8d4@mail.gmail.com> <45C0C5F5.2000100@unc.edu> <20070131223800.GA20121@avicenna.cc.columbia.edu> Message-ID: In article <20070131223800.GA20121 at avicenna.cc.columbia.edu>, Lev Givon wrote: > ...It might be desirable to encourage a matplotlib person to add some > information regarding the installation of that package, as I suspect > that many folks in need of installation wisdom think in terms of the > tetrad ipython/numpy/scipy/matplotlib. My instructions for building a universal matplotlib for MacOS X from source are here (google for "build matplotlib for mac"): http://www.astro.washington.edu/owen/BuildingMatplotlibForMac.html I would be happy to move this to a wiki if anyone can suggest an appropriate one. Having good instructions for building complex packages like matplotlib is very important. It should be kept current and readily available. That said, building matplotlib is tricky enough that few users will want to bother. And they shouldn't have to. Most people will be much happier with downloadable packages.
-- Russell From pgreisen at gmail.com Thu Feb 1 14:12:11 2007 From: pgreisen at gmail.com (Per Jr. Greisen) Date: Thu, 1 Feb 2007 20:12:11 +0100 Subject: [SciPy-user] error importing scipy In-Reply-To: References: <45C2265A.4040101@gmail.com> Message-ID: Hi, Maybe this will work: on SuSE machines there should be a YaST tool to configure new programs. From YaST it should be possible to get the BLAS and LAPACK libraries. (Regarding SuSE 8.4, maybe you should try to upgrade it.) On 2/1/07, Chiara Caronna wrote: > > > > > >From: Robert Kern > >Reply-To: SciPy Users List > >To: SciPy Users List > >Subject: Re: [SciPy-user] error importing scipy > >Date: Thu, 01 Feb 2007 11:41:46 -0600 > > > >Chiara Caronna wrote: > > >> From: Robert Kern > > > > >> Your scipy was not linked to a BLAS library correctly. What BLAS > >library > > >> were > > >> you trying to link against? > > > > > > '_' ... I have never tried to do that... at least I am not aware of... > > > >Okay, what kind of system are you on? If Linux, what distribution? > > I have Linux, Suse 8.4 (quite old, I know, but I can't change it, it's my > office pc...) > > >What LAPACK and BLAS packages do you have installed? > I have never installed them... how can I check which I have? > > > > > > >-- > >Robert Kern > > > >"I have come to believe that the whole world is an enigma, a harmless > >enigma > > that is made terrible by our own mad attempt to interpret it as though > it > >had > > an underlying truth." > > -- Umberto Eco > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > > _________________________________________________________________ > Express yourself instantly with MSN Messenger! Download today it's FREE!
> http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Best regards Per Jr. Greisen "If you make something idiot-proof, the universe creates a better idiot." From robert.kern at gmail.com Thu Feb 1 14:53:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 13:53:07 -0600 Subject: [SciPy-user] error importing scipy In-Reply-To: References: Message-ID: <45C24523.8090706@gmail.com> Chiara Caronna wrote: >> From: Robert Kern >> Reply-To: SciPy Users List >> To: SciPy Users List >> Subject: Re: [SciPy-user] error importing scipy >> Date: Thu, 01 Feb 2007 11:41:46 -0600 >> >> Chiara Caronna wrote: >>>> From: Robert Kern >>>> Your scipy was not linked to a BLAS library correctly. What BLAS >> library >>>> were >>>> you trying to link against? >>> '_' ... I have never tried to do that... at least I am not aware of... >> Okay, what kind of system are you on? If Linux, what distribution? > > I have Linux, Suse 8.4 (quite old, I know, but I can't change it, it's my > office pc...) > >> What LAPACK and BLAS packages do you have installed? > I have never installed them... how can I check which I have? Apparently you have since the configuration process has found some such libraries in /usr/lib/. That means that they were most likely installed from SuSE packages. You might need to talk to the person that administers your computer to find out what packages they installed. SuSE has often been problematic, especially with regards to its LAPACK and BLAS libraries. I'm afraid that I don't know much of the details, like which versions of which specific packages are known to work. Other people with SuSE experience will have to chime in here.
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From anand at soe.ucsc.edu Thu Feb 1 14:56:28 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Thu, 01 Feb 2007 11:56:28 -0800 Subject: [SciPy-user] numpy installer not finding ATLAS in spite of correct environment variable Message-ID: <45C245EC.7080305@cse.ucsc.edu> Hi all, I'm trying to install numpy on a machine in which /usr/local/atlas/lib/Linux_HAMMER64SSE2_4 contains: libatlas.a libcblas.a liblapack.a libptcblas.a libtstatlas.a Makefile Make.inc I've set the relevant environment variables as follows: setenv ATLAS /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ setenv BLAS /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ setenv LAPACK /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ setenv BLAS_SRC /usr/local/atlas/src/blas setenv LAPACK_SRC /usr/local/atlas/src/lapack However, numpy's installer does this: atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ libraries lapack,blas not found in /usr/local/lib libraries lapack,blas not found in /usr/lib/sse2 libraries lapack,blas not found in /usr/lib NOT AVAILABLE atlas_blas_info: libraries lapack,blas not found in /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ libraries lapack,blas not found in /usr/local/lib libraries lapack,blas not found in /usr/lib/sse2 libraries lapack,blas not found in /usr/lib NOT AVAILABLE /projects/mangellab/anand/numpy-1.0.1/numpy/distutils/system_info.py:1301: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. 
warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ libraries blas not found in /usr/local/lib libraries blas not found in /usr/lib NOT AVAILABLE /projects/mangellab/anand/numpy-1.0.1/numpy/distutils/system_info.py:1310: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE /projects/mangellab/anand/numpy-1.0.1/numpy/distutils/system_info.py:1313: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) NOT AVAILABLE lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ libraries lapack_atlas not found in /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ libraries lapack,blas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack,blas not found in /usr/lib/sse2 libraries lapack_atlas not found in /usr/lib/sse2 libraries lapack,blas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries lapack,blas not found in /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ libraries lapack_atlas not found in /usr/local/atlas/lib/Linux_HAMMER64SSE2_4/ libraries lapack,blas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack,blas not found in /usr/lib/sse2 libraries 
lapack_atlas not found in /usr/lib/sse2 libraries lapack,blas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /projects/mangellab/anand/numpy-1.0.1/numpy/distutils/system_info.py:1210: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) lapack_info: FOUND: libraries = ['lapack'] library_dirs = ['/usr/local/atlas/lib/Linux_HAMMER64SSE2_4/'] language = f77 /projects/mangellab/anand/numpy-1.0.1/numpy/distutils/system_info.py:1234: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) /projects/mangellab/anand/numpy-1.0.1/numpy/distutils/system_info.py:1237: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) NOT AVAILABLE Then, even though it apparently found lapack, it builds lapack_lite: gcc: numpy/linalg/dlamch.c gcc: numpy/linalg/lapack_litemodule.c gcc: numpy/linalg/zlapack_lite.c gcc: numpy/linalg/dlapack_lite.c gcc: numpy/linalg/blas_lite.c gcc: numpy/linalg/f2c_lite.c What am I doing wrong? 
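When the ATLAS search directories are not being picked up from environment variables, numpy.distutils can also read them from a site.cfg file placed next to numpy's setup.py. The following is only a sketch, not a tested configuration: the directory is taken from the message above, the include path is an assumption, and the library list presumes a complete ATLAS build (including the libf77blas.a mentioned in the reply below), with key names as in numpy's site.cfg.example of that era:

```ini
; site.cfg -- a sketch, not a tested configuration
[atlas]
library_dirs = /usr/local/atlas/lib/Linux_HAMMER64SSE2_4
include_dirs = /usr/local/atlas/include
atlas_libs = lapack, f77blas, cblas, atlas
```

With this in place, rerunning `python setup.py config` should report the ATLAS libraries as FOUND instead of NOT AVAILABLE, provided the listed libraries actually exist in that directory.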
Thanks much, Anand From robert.kern at gmail.com Thu Feb 1 14:59:36 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 13:59:36 -0600 Subject: [SciPy-user] numpy installer not finding ATLAS in spite of correct environment variable In-Reply-To: <45C245EC.7080305@cse.ucsc.edu> References: <45C245EC.7080305@cse.ucsc.edu> Message-ID: <45C246A8.8050707@gmail.com> Anand Patil wrote: > Hi all, > > I'm trying to install numpy on a machine in which > /usr/local/atlas/lib/Linux_HAMMER64SSE2_4 contains: > > libatlas.a libcblas.a liblapack.a libptcblas.a libtstatlas.a > Makefile Make.inc You are missing libf77blas.a which should have the FORTRAN-compatible symbols that numpy.linalg uses. Make sure that the ATLAS configuration process recognizes your FORTRAN compiler (probably g77). Then it should build that library. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From chiaracaronna at hotmail.com Thu Feb 1 15:07:48 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Thu, 01 Feb 2007 20:07:48 +0000 Subject: [SciPy-user] error importing scipy In-Reply-To: <45C24523.8090706@gmail.com> Message-ID: Ok, thank you anyway for your time! >From: Robert Kern >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] error importing scipy >Date: Thu, 01 Feb 2007 13:53:07 -0600 > >Chiara Caronna wrote: > >> From: Robert Kern > >> Reply-To: SciPy Users List > >> To: SciPy Users List > >> Subject: Re: [SciPy-user] error importing scipy > >> Date: Thu, 01 Feb 2007 11:41:46 -0600 > >> > >> Chiara Caronna wrote: > >>>> From: Robert Kern > >>>> Your scipy was not linked to a BLAS library correctly. What BLAS > >> library > >>>> were > >>>> you trying to link against? > >>> '_' ... I have never tried to do that... at least I am not aware of... > >> Okay, what kind of system are you on?
If Linux, what distribution? > > > > I have Linux, Suse 8.4 (quite old, I know, but I can't change it, it's >my > > office pc...) > > > >> What LAPACK and BLAS packages do you have installed? > > I have never installed them... how can I check which I have? > >Apparently you have since the configuration process has found some such >libraries in /usr/lib/. That means that they were most likely installed >from >SuSE packages. You might need to talk to the person that administers your >computer to find out what packages they installed. > >SuSE has often been problematic, especially with regards to its LAPACK and >BLAS >libraries. I'm afraid that I don't know much of the details, like which >versions >of which specific packages are known to work. Other people with SuSE >experience >will have to chime in here. > >-- >Robert Kern > >"I have come to believe that the whole world is an enigma, a harmless >enigma > that is made terrible by our own mad attempt to interpret it as though it >had > an underlying truth." > -- Umberto Eco >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Don't just search. Find. Check out the new MSN Search! http://search.msn.click-url.com/go/onm00200636ave/direct/01/ From suhlhorn at gmail.com Thu Feb 1 15:33:41 2007 From: suhlhorn at gmail.com (Stephen Uhlhorn) Date: Thu, 1 Feb 2007 15:33:41 -0500 Subject: [SciPy-user] NameError on scipy import Message-ID: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> I built numpy/scipy on my OS X 10.4 machine following the instructions at the scipy.org page. I installed ActiveState python 2.4 and had no build errors thru the entire process including numpy 1.0.1 and scipy 0.5.2. 
The post-install numpy test passed without error, but importing scipy gave this error:

>>> import scipy
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/__init__.py", line 33, in ?
    del lib
NameError: name 'lib' is not defined

Should I update to the latest svn version? Any suggestions?

Thanks-
-stephen

From robert.kern at gmail.com Thu Feb 1 15:41:20 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 14:41:20 -0600 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> Message-ID: <45C25070.2010004@gmail.com> Stephen Uhlhorn wrote: > I built numpy/scipy on my OS X 10.4 machine following the instructions > at the scipy.org page. > > I installed ActiveState python 2.4 and had no build errors thru the > entire process including numpy 1.0.1 and scipy 0.5.2. The post-install > numpy test passed without error, but importing scipy gave this error: > >>>> import scipy > Traceback (most recent call last): > File "<stdin>", line 1, in ? > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/__init__.py", > line 33, in ? > del lib > NameError: name 'lib' is not defined > > Should I update to the latest svn version? Any suggestions? Double-check that the scipy that is trying to be imported is the same as the one you thought you installed. There is no line "del lib" in scipy 0.5.2's __init__.py. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From eike.welk at gmx.net Thu Feb 1 15:46:51 2007 From: eike.welk at gmx.net (Eike Welk) Date: Thu, 01 Feb 2007 21:46:51 +0100 Subject: [SciPy-user] error importing scipy In-Reply-To: <45C24523.8090706@gmail.com> References: <45C24523.8090706@gmail.com> Message-ID: <200702012146.52456.eike.welk@gmx.net> On Thursday 01 February 2007 20:53, Robert Kern wrote: > SuSE has often been problematic, especially with regards to its > LAPACK and BLAS libraries. I'm afraid that I don't know much of the > details, like which versions of which specific packages are known > to work. Other people with SuSE experience will have to chime in > here. Yes, SuSE ships broken packages, also for SuSE 10.2. I use community-contributed packages from: http://repos.opensuse.org/science/ But these are not helpful for Chiara because there are none for SuSE 8.4. So, is there a way to compile NumPy and SciPy without using BLAS and LAPACK at all? Regards, Eike. From suhlhorn at gmail.com Thu Feb 1 15:54:13 2007 From: suhlhorn at gmail.com (Stephen Uhlhorn) Date: Thu, 1 Feb 2007 15:54:13 -0500 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <45C25070.2010004@gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> Message-ID: <47edc1a0702011254y5b60c32cw4b460c05fa2616c6@mail.gmail.com> I do have another installation via fink... many problems. How do I find out which scipy my python is importing? On 2/1/07, Robert Kern wrote: > Stephen Uhlhorn wrote: > > I built numpy/scipy on my OS X 10.4 machine following the instructions > > at the scipy.org page. > > > > I installed ActiveState python 2.4 and had no build errors thru the > > entire process including numpy 1.0.1 and scipy 0.5.2. The post-install > > numpy test passed without error, but importing scipy gave this error: > > > >>>> import scipy > > Traceback (most recent call last): > > File "<stdin>", line 1, in ?
> > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/__init__.py", > > line 33, in ? > > del lib > > NameError: name 'lib' is not defined > > > > Should I update to the latest svn version? Any suggestions? > > Double-check that the scipy that is trying to be imported is the same as the one > you thought you installed. There is no line "del lib" in scipy 0.5.2's __init__.py. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Thu Feb 1 15:56:55 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 14:56:55 -0600 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <47edc1a0702011254y5b60c32cw4b460c05fa2616c6@mail.gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011254y5b60c32cw4b460c05fa2616c6@mail.gmail.com> Message-ID: <45C25417.5010000@gmail.com> Stephen Uhlhorn wrote: > I do have another installation via fink... many problems. > > How do I find out which scipy my python is importing? The traceback showed you the filename: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/__init__.py -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Thu Feb 1 15:58:20 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 14:58:20 -0600 Subject: [SciPy-user] error importing scipy In-Reply-To: <200702012146.52456.eike.welk@gmx.net> References: <45C24523.8090706@gmail.com> <200702012146.52456.eike.welk@gmx.net> Message-ID: <45C2546C.5020208@gmail.com> Eike Welk wrote: > So, is there a way to compile NumPy and SciPy without using BLAS and > LAPACK at all? Depends on what you mean by that. Strictly speaking, no; at least parts of BLAS and LAPACK must be used somewhere along the line. numpy includes implementations for the parts it needs if an already installed BLAS and LAPACK cannot be found. However, you can build BLAS and LAPACK yourself relatively easily if you don't care about optimization. http://www.scipy.org/Installing_SciPy/BuildingGeneral#head-e618da78f29d5a85f680cc47e574a84951c8dffb -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From suhlhorn at gmail.com Thu Feb 1 16:09:52 2007 From: suhlhorn at gmail.com (Stephen Uhlhorn) Date: Thu, 1 Feb 2007 16:09:52 -0500 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <45C25417.5010000@gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011254y5b60c32cw4b460c05fa2616c6@mail.gmail.com> <45C25417.5010000@gmail.com> Message-ID: <47edc1a0702011309w3179bc43j6b9b86e1dd11782@mail.gmail.com> The one in the path below is definitely the version I built. My working numpy is installed under the same tree. The other (fink) version is under /sw/lib/python/site-packages and I changed my path so that the python called is completely outside the fink tree. Should I update scipy via svn? 
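A quick way to settle the question raised above (which scipy will python import?) without tripping scipy's import-time errors is to ask the import machinery where it would load the module from. The sketch below is stdlib-only and targets modern Python; on the Python 2.4 used in this thread, imp.find_module served the same purpose. The module names passed in are just example arguments.

```python
import importlib.util

def module_path(name):
    """Return the file a module would be imported from, or None if not found.

    find_spec only locates the module; it does not execute it, so this is
    safe to run even when actually importing the module crashes.
    """
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

print(module_path("os"))     # location of the stdlib os module
print(module_path("scipy"))  # the scipy package that would be imported, or None
```

If the printed scipy path points into an unexpected tree (e.g. a fink /sw/lib/... directory instead of the framework site-packages), the wrong installation is shadowing the one you built.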
On 2/1/07, Robert Kern wrote: > Stephen Uhlhorn wrote: > > I do have another installation via fink... many problems. > > > > How do I find out which scipy my python is importing? > > The traceback showed you the filename: > > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/__init__.py > > -- > Robert Kern From robert.kern at gmail.com Thu Feb 1 16:13:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 15:13:41 -0600 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <47edc1a0702011309w3179bc43j6b9b86e1dd11782@mail.gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011254y5b60c32cw4b460c05fa2616c6@mail.gmail.com> <45C25417.5010000@gmail.com> <47edc1a0702011309w3179bc43j6b9b86e1dd11782@mail.gmail.com> Message-ID: <45C25805.1030003@gmail.com> Stephen Uhlhorn wrote: > The one in the path below is definitely the version I built. My > working numpy is installed under the same tree. The other (fink) > version is under /sw/lib/python/site-packages and I changed my path so > that the python called is completely outside the fink tree. > > Should I update scipy via svn? I would double-check that the file /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/__init__.py is the same as from the source that you built. That file in the scipy-0.5.2.tar.gz that I just downloaded does not have the line that is causing the error. Most likely, you should delete the directory /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ and re-install from the source that you have (provided that it does not also have the erroneous "del lib" line). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From suhlhorn at gmail.com Thu Feb 1 16:13:37 2007 From: suhlhorn at gmail.com (Stephen Uhlhorn) Date: Thu, 1 Feb 2007 16:13:37 -0500 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <45C25070.2010004@gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> Message-ID: <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> I just checked the file and 'del lib' is in __init__.py. My scipy is from the 0.5.2 tarball. On 2/1/07, Robert Kern wrote: > Double-check that the scipy that is trying to be imported is the same as the one > you thought you installed. There is no line "del lib" in scipy 0.5.2's __init__.py. > > -- > Robert Kern > From robert.kern at gmail.com Thu Feb 1 16:17:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 15:17:07 -0600 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> Message-ID: <45C258D3.2030907@gmail.com> Stephen Uhlhorn wrote: > I just checked the file and 'del lib' is in __init__.py. My scipy is > from the 0.5.2 tarball. But is that line in the __init__.py that resides in the tarball that you have rather than the one that is installed? It does not exist in the tarball that I just downloaded. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From eike.welk at gmx.net Thu Feb 1 16:23:47 2007 From: eike.welk at gmx.net (Eike Welk) Date: Thu, 01 Feb 2007 22:23:47 +0100 Subject: [SciPy-user] error importing scipy In-Reply-To: <45C2546C.5020208@gmail.com> References: <200702012146.52456.eike.welk@gmx.net> <45C2546C.5020208@gmail.com> Message-ID: <200702012223.47275.eike.welk@gmx.net> Thank you! Eike. From suhlhorn at gmail.com Thu Feb 1 18:11:49 2007 From: suhlhorn at gmail.com (Stephen Uhlhorn) Date: Thu, 1 Feb 2007 18:11:49 -0500 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <45C258D3.2030907@gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> <45C258D3.2030907@gmail.com> Message-ID: <47edc1a0702011511g290bb0dk1a1f95506a605fc9@mail.gmail.com> Strange... I cleaned out my build directory and the scipy site-package dir to make sure the files matched up... now it won't compile:

/usr/bin/ld: can't locate file for: -lcc_dynamic
collect2: ld returned 1 exit status
/usr/bin/ld: can't locate file for: -lcc_dynamic
collect2: ld returned 1 exit status
error: Command "/usr/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/temp.darwin-8.8.0-Power_Macintosh-2.4/build/src.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o build/temp.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o build/temp.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o build/temp.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o build/temp.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o build/temp.darwin-8.8.0-Power_Macintosh-2.4/build/src.darwin-8.8.0-Power_Macintosh-2.4/fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple-darwin7.9.0/3.4.4 -Lbuild/temp.darwin-8.8.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/lib.darwin-8.8.0-Power_Macintosh-2.4/scipy/fftpack/_fftpack.so -lSystemStubs" failed with exit status 1

I tried making the changes to Lib/fftpack/setup.py according to the instructions, but no go... any ideas? -stephen On 2/1/07, Robert Kern wrote: > Stephen Uhlhorn wrote: > > I just checked the file and 'del lib' is in __init__.py. My scipy is > > from the 0.5.2 tarball. > > But is that line in the __init__.py that resides in the tarball that you have > rather than the one that is installed? It does not exist in the tarball that I > just downloaded. > > -- > Robert Kern From robert.kern at gmail.com Thu Feb 1 18:18:44 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 17:18:44 -0600 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <47edc1a0702011511g290bb0dk1a1f95506a605fc9@mail.gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> <45C258D3.2030907@gmail.com> <47edc1a0702011511g290bb0dk1a1f95506a605fc9@mail.gmail.com> Message-ID: <45C27554.4020408@gmail.com> Stephen Uhlhorn wrote: > Strange... I cleaned out my build directory and the scipy site-package > dir to make sure the files matched up...
now it won't compile: > > /usr/bin/ld: can't locate file for: -lcc_dynamic > collect2: ld returned 1 exit status > /usr/bin/ld: can't locate file for: -lcc_dynamic > collect2: ld returned 1 exit status > error: Command "/usr/local/bin/g77 -g -Wall -undefined dynamic_lookup > -bundle build/temp.darwin-8.8. > 0-Power_Macintosh-2.4/build/src.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o > build/ You will need to use gfortran for a Universal build of Python on Tiger. g77 does not build Universal binaries. See the instructions I give here: http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025368.html > temp.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o > build/temp.darwin-8.8.0-Power_Macintosh > -2.4/Lib/fftpack/src/drfft.o > build/temp.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o > bui > ld/temp.darwin-8.8.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o > build/temp.darwin-8.8.0-Power_Maci > ntosh-2.4/build/src.darwin-8.8.0-Power_Macintosh-2.4/fortranobject.o > -L/usr/local/lib -L/usr/local/l > ib/gcc/powerpc-apple-darwin7.9.0/3.4.4 > -Lbuild/temp.darwin-8.8.0-Power_Macintosh-2.4 -ldfftpack -lff > tw3 -lg2c -lcc_dynamic -o > build/lib.darwin-8.8.0-Power_Macintosh-2.4/scipy/fftpack/_fftpack.so > -lSys > temStubs" failed with exit status 1 > > I tried making the changes to Lib/fftpack.setup.py according to the > instructions, but no go... any ideas? What instructions? (This has nothing to do with your problem, but I am unaware of instructions which tell you to modify Lib/fftpack/setup.py and would like to nail down any false information). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From suhlhorn at gmail.com Thu Feb 1 18:30:58 2007 From: suhlhorn at gmail.com (Stephen Uhlhorn) Date: Thu, 1 Feb 2007 18:30:58 -0500 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <45C27554.4020408@gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> <45C258D3.2030907@gmail.com> <47edc1a0702011511g290bb0dk1a1f95506a605fc9@mail.gmail.com> <45C27554.4020408@gmail.com> Message-ID: <47edc1a0702011530m832edc4s942de3367f5e507a@mail.gmail.com> > What instructions? (This has nothing to do with your problem, but I am unaware > of instructions which tell you to modify Lib/fftpack/setup.py and would like to > nail down any false information). Just followed the advice from here: http://www.scipy.org/Installing_SciPy/Mac_OS_X#head-bf65e6cbe5d205c7e8fd574139a9e4db7b3d12e3 Perhaps it's outdated? From robert.kern at gmail.com Thu Feb 1 18:40:12 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 01 Feb 2007 17:40:12 -0600 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <47edc1a0702011530m832edc4s942de3367f5e507a@mail.gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> <45C258D3.2030907@gmail.com> <47edc1a0702011511g290bb0dk1a1f95506a605fc9@mail.gmail.com> <45C27554.4020408@gmail.com> <47edc1a0702011530m832edc4s942de3367f5e507a@mail.gmail.com> Message-ID: <45C27A5C.5010801@gmail.com> Stephen Uhlhorn wrote: >> What instructions? (This has nothing to do with your problem, but I am unaware >> of instructions which tell you to modify Lib/fftpack/setup.py and would like to >> nail down any false information). > > Just followed the advice from here: > > http://www.scipy.org/Installing_SciPy/Mac_OS_X#head-bf65e6cbe5d205c7e8fd574139a9e4db7b3d12e3 > > Perhaps it's outdated? Yes, apparently.
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu Feb 1 22:23:27 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 1 Feb 2007 22:23:27 -0500 Subject: [SciPy-user] Release 0.6.1 of pyaudio, renamed pyaudiolab In-Reply-To: <45C0A076.2090805@ar.media.kyoto-u.ac.jp> References: <45C0A076.2090805@ar.media.kyoto-u.ac.jp> Message-ID: On Wed, 31 Jan 2007, David Cournapeau apparently wrote: > With pyaudiolab, you should be able to read and write most > common audio files from and to numpy arrays. The > underlying IO operations are done using libsndfile from > Erik Castro Lopo (http://www.mega-nerd.com/libsndfile/) I think it is worth mentioning (on this list) that pyaudiolab uses the SciPy license and libsndfile is LGPL. Cheers, Alan Isaac From david at ar.media.kyoto-u.ac.jp Thu Feb 1 22:50:42 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 02 Feb 2007 12:50:42 +0900 Subject: [SciPy-user] Release 0.6.1 of pyaudio, renamed pyaudiolab In-Reply-To: References: <45C0A076.2090805@ar.media.kyoto-u.ac.jp> Message-ID: <45C2B512.9070003@ar.media.kyoto-u.ac.jp> Alan G Isaac wrote: > On Wed, 31 Jan 2007, David Cournapeau apparently wrote: >> With pyaudiolab, you should be able to read and write most >> common audio files from and to numpy arrays. The >> underlying IO operations are done using libsndfile from >> Erik Castro Lopo (http://www.mega-nerd.com/libsndfile/) > > I think it is worth mentioning (on this list) that > pyaudiolab uses the SciPy license and libsndfile is LGPL. Indeed, I forgot to mention this fact in the announcement. It is mentioned somewhere in the source, but it should be done better. It is the only reason that pyaudiolab is not part of scipy. 
Your post made me realize that I actually didn't look at how to apply the license correctly, which is not good at all (that's the first project I started from scratch). I will change that. David From nwagner at iam.uni-stuttgart.de Fri Feb 2 02:45:02 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 02 Feb 2007 08:45:02 +0100 Subject: [SciPy-user] asanyarray Message-ID: <45C2EBFE.3060902@iam.uni-stuttgart.de> Hi, I am confused about the behaviour of asanyarray:

Python 2.4.1 (#1, Oct 13 2006, 16:51:58)
[GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from scipy import *
>>> A = io.mmread('mhd416a.mtx.gz')
>>> A
<416x416 sparse matrix of type '' with 8562 stored elements in COOrdinate format>
>>> shape(A)
(416, 416)
>>> A=asanyarray(A)
>>> A
array(<416x416 sparse matrix of type '' with 8562 stored elements in COOrdinate format>, dtype=object)
>>> shape(A)
()

Help on function asanyarray in module numpy.core.numeric:

asanyarray(a, dtype=None, order=None)
    Returns a as an array, but will pass subclasses through.

Why is the shape altered by asanyarray? Nils From robert.kern at gmail.com Fri Feb 2 02:56:39 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Feb 2007 01:56:39 -0600 Subject: [SciPy-user] asanyarray In-Reply-To: <45C2EBFE.3060902@iam.uni-stuttgart.de> References: <45C2EBFE.3060902@iam.uni-stuttgart.de> Message-ID: <45C2EEB7.9010305@gmail.com> Nils Wagner wrote: > Hi, > > I am confused about the behaviour of asanyarray > > Python 2.4.1 (#1, Oct 13 2006, 16:51:58) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information.
>>>> from scipy import * >>>> A = io.mmread('mhd416a.mtx.gz') >>>> A > <416x416 sparse matrix of type '' > with 8562 stored elements in COOrdinate format> >>>> shape(A) > (416, 416) >>>> A=asanyarray(A) >>>> A > array(<416x416 sparse matrix of type '' > with 8562 stored elements in COOrdinate format>, dtype=object) >>>> shape(A) > () > > Help on function asanyarray in module numpy.core.numeric: > > asanyarray(a, dtype=None, order=None) > Returns a as an array, but will pass subclasses through. > > Why is the shape altered by asanyarray ? Because sparse arrays are not instances of a subclass of numpy.ndarray. Thus, asanyarray(A) is interpreted as a 0-dim object array. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From chiaracaronna at hotmail.com Fri Feb 2 03:14:15 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Fri, 02 Feb 2007 08:14:15 +0000 Subject: [SciPy-user] error importing scipy In-Reply-To: Message-ID: >From: "Per Jr. Greisen" >Reply-To: SciPy Users List >To: "SciPy Users List" >Subject: Re: [SciPy-user] error importing scipy >Date: Thu, 1 Feb 2007 20:12:11 +0100 > >Hi, > >Maybe this work . on suse machines there should be a yast to configure new >programs. From yast it should be possible to get BLAS and LAPACK libraries Hi, I checked that out, and apparently blas and lapack are already installed... ? > >(regarding suse 8.4 maybe you should try to upgrade it) (I really would like, but I can't...) > >On 2/1/07, Chiara Caronna wrote: >> >> >> >> >> >From: Robert Kern >> >Reply-To: SciPy Users List >> >To: SciPy Users List >> >Subject: Re: [SciPy-user] error importing scipy >> >Date: Thu, 01 Feb 2007 11:41:46 -0600 >> > >> >Chiara Caronna wrote: >> > >> From: Robert Kern >> > >> > >> Your scipy was not linked to a BLAS library correctly. 
What BLAS >> >library >> > >> were >> > >> you trying to link against? >> > > >> > > '_' ... I have never tried to do that... at least I am not aware >>of... >> > >> >Okay, what kind of system are you on? If Linux, what distribution? >> >>I have Linux, Suse 8.4 (quite old, I know, but I can't change it, it's my >>office pc...) >> >> >What LAPACK and BLAS packages do you have installed? >>I have never installed them... how can I check which I have? >> >> >> >> > >> >-- >> >Robert Kern >> > >> >"I have come to believe that the whole world is an enigma, a harmless >> >enigma >> > that is made terrible by our own mad attempt to interpret it as though >>it >> >had >> > an underlying truth." >> > -- Umberto Eco >> >_______________________________________________ >> >SciPy-user mailing list >> >SciPy-user at scipy.org >> >http://projects.scipy.org/mailman/listinfo/scipy-user >> >>_________________________________________________________________ >>Express yourself instantly with MSN Messenger! Download today it's FREE! >>http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.org >>http://projects.scipy.org/mailman/listinfo/scipy-user >> > > > >-- >Best regards >Per Jr. Greisen > >"If you make something idiot-proof, the universe creates a better idiot." >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! 
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From nwagner at iam.uni-stuttgart.de Fri Feb 2 03:22:26 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 02 Feb 2007 09:22:26 +0100 Subject: [SciPy-user] arpack.speigs.ARPACK_gen_eigs Message-ID: <45C2F4C2.5090307@iam.uni-stuttgart.de> Hi, The shift must be real in arpack.speigs.ARPACK_gen_eigs. For what reason? How can I compute the eigenvalues enclosed by the green circle? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: arpack.png Type: image/png Size: 26196 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_ARPACK_gen_eigs.py Type: text/x-python Size: 883 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Fri Feb 2 03:27:24 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 02 Feb 2007 09:27:24 +0100 Subject: [SciPy-user] error importing scipy In-Reply-To: References: Message-ID: <45C2F5EC.8030501@iam.uni-stuttgart.de> Chiara Caronna wrote: > > >> From: "Per Jr. Greisen" >> Reply-To: SciPy Users List >> To: "SciPy Users List" >> Subject: Re: [SciPy-user] error importing scipy >> Date: Thu, 1 Feb 2007 20:12:11 +0100 >> >> Hi, >> >> Maybe this work . on suse machines there should be a yast to configure new >> programs. From yast it should be possible to get BLAS and LAPACK libraries >> > > Hi, I checked that out, and apparently blas and lapack are already > installed... > ? > The problem is that the libraries shipped by SUSE are incomplete! Please remove the rpm's and compile BLAS and LAPACK from scratch. http://www.scipy.org/Installing_SciPy/BuildingGeneral#head-e618da78f29d5a85f680cc47e574a84951c8dffb Nils >> (regarding suse 8.4 maybe you should try to upgrade it) >> > > (I really would like, but I can't...)
> >> On 2/1/07, Chiara Caronna wrote: >> >>> >>> >>> >>>> From: Robert Kern >>>> Reply-To: SciPy Users List >>>> To: SciPy Users List >>>> Subject: Re: [SciPy-user] error importing scipy >>>> Date: Thu, 01 Feb 2007 11:41:46 -0600 >>>> >>>> Chiara Caronna wrote: >>>> >>>>>> From: Robert Kern >>>>>> >>>>>> Your scipy was not linked to a BLAS library correctly. What BLAS >>>>>> >>>> library >>>> >>>>>> were >>>>>> you trying to link against? >>>>>> >>>>> '_' ... I have never tried to do that... at least I am not aware >>>>> >>> of... >>> >>>> Okay, what kind of system are you on? If Linux, what distribution? >>>> >>> I have Linux, Suse 8.4 (quite old, I know, but I can't change it, it's my >>> office pc...) >>> >>> >>>> What LAPACK and BLAS packages do you have installed? >>>> >>> I have never installed them... how can I check which I have? >>> >>> >>> >>> >>>> -- >>>> Robert Kern >>>> >>>> "I have come to believe that the whole world is an enigma, a harmless >>>> enigma >>>> that is made terrible by our own mad attempt to interpret it as though >>>> >>> it >>> >>>> had >>>> an underlying truth." >>>> -- Umberto Eco >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-user >>>> >>> _________________________________________________________________ >>> Express yourself instantly with MSN Messenger! Download today it's FREE! >>> http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> -- >> Best regards >> Per Jr. Greisen >> >> "If you make something idiot-proof, the universe creates a better idiot." 
>> > > > >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _________________________________________________________________ > Express yourself instantly with MSN Messenger! Download today it's FREE! > http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From suhlhorn at gmail.com Fri Feb 2 13:57:24 2007 From: suhlhorn at gmail.com (Stephen Uhlhorn) Date: Fri, 2 Feb 2007 13:57:24 -0500 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <45C27554.4020408@gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> <45C258D3.2030907@gmail.com> <47edc1a0702011511g290bb0dk1a1f95506a605fc9@mail.gmail.com> <45C27554.4020408@gmail.com> Message-ID: <47edc1a0702021057g64013cb1nccac4a250e474ae2@mail.gmail.com> I followed your directions below up to the point of installing matplotlib (my real purpose). After getting numpy/scipy built correctly (now they are), can I use the matplotlib binary .egg installation with the wxpython library? -stephen On 2/1/07, Robert Kern wrote: > You will need to use gfortran for a Universal build of Python on Tiger. g77 does > not build Universal binaries. 
See the instructions I give here: > > http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025368.html From robert.kern at gmail.com Fri Feb 2 14:25:43 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 02 Feb 2007 13:25:43 -0600 Subject: [SciPy-user] NameError on scipy import In-Reply-To: <47edc1a0702021057g64013cb1nccac4a250e474ae2@mail.gmail.com> References: <47edc1a0702011233j5e280f52i40ac4237aca764c8@mail.gmail.com> <45C25070.2010004@gmail.com> <47edc1a0702011313v6c359d5aude097f23389758a0@mail.gmail.com> <45C258D3.2030907@gmail.com> <47edc1a0702011511g290bb0dk1a1f95506a605fc9@mail.gmail.com> <45C27554.4020408@gmail.com> <47edc1a0702021057g64013cb1nccac4a250e474ae2@mail.gmail.com> Message-ID: <45C39037.5040303@gmail.com> Stephen Uhlhorn wrote: > I followed your directions below up to the point of installing > matplotlib (my real purpose). After getting numpy/scipy built > correctly (now they are), can I use the matplotlib binary .egg > installation with the wxpython library? I don't know. I don't pay attention to the matplotlib binary releases. Try it and see. If it doesn't work, then an egg is easy enough to uninstall. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From griffitts_lists at comcast.net Sat Feb 3 17:11:08 2007 From: griffitts_lists at comcast.net (Jonathan Griffitts) Date: Sat, 3 Feb 2007 15:11:08 -0700 Subject: [SciPy-user] Windows binary 0.5.2 requires SSE2 Message-ID: Hi! I have installed SciPy 0.5.2 for Python 2.5 on several Windows computers, using the precompiled binary scipy-0.5.2.win32-py2.5.exe. This works fine on some, but on others it crashes Python with an Illegal Instruction exception. The crash is easy to find by running the test() suite or by attempting to make any use of scipy.integrate.quad. 
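Before installing an SSE2-built binary, one way to sanity-check a machine is to ask the OS whether the CPU advertises SSE2 at all. This is a hedged sketch, not something from the thread: it reads /proc/cpuinfo, so it only works on Linux, and the helper name is invented for illustration.

```python
def cpu_has_sse2(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU flag list mentions sse2 (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # The 'flags' line lists every instruction-set extension.
                if line.lower().startswith("flags"):
                    return "sse2" in line.lower().split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

print(cpu_has_sse2())
```

On a non-Linux box the function simply reports False rather than guessing; on Windows a tool like CPU-Z would be the usual route instead.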
Digging into it, I see that the exception comes from _quadpack.pyd, and it dies at a MOVSD instruction. I believe MOVSD is an SSE2 instruction that is only implemented on the more recent CPUs from both Intel and AMD. Has anyone out there compiled win32 binaries for Python 2.5 and older CPUs without SSE2? If so, could I beg for a copy? If not, I'll dig into getting set up to compile SciPy myself. Thanks, -- Jonathan Griffitts AnyWare Engineering Boulder, CO, USA From hasslerjc at comcast.net Sat Feb 3 17:47:04 2007 From: hasslerjc at comcast.net (John Hassler) Date: Sat, 03 Feb 2007 17:47:04 -0500 Subject: [SciPy-user] Windows binary 0.5.2 requires SSE2 In-Reply-To: References: Message-ID: <45C510E8.3060602@comcast.net> An HTML attachment was scrubbed... URL: From v-nijs at kellogg.northwestern.edu Sun Feb 4 18:06:29 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Sun, 04 Feb 2007 17:06:29 -0600 Subject: [SciPy-user] QME-Dev wxSciPY workbench 0.0.9.24 released - updated and corrected for Python 2.4 In-Reply-To: <20070128135932.6628.qmail@web27407.mail.ukl.yahoo.com> Message-ID: Has anyone been able to run the workbench on a Mac (OS X 10.4 PPC, Python 2.4, wxPython 2.8)? When I try to run it, it seems to start but then quits before anything appears on screen, without any error messages. Vincent On 1/28/07 7:59 AM, "Robert VERGNES" wrote: > Version Alpha 0.0.9.2.4 updated today. > > New ZIP file for download. > > Correction made to the plotting/Graph issue with Python 2.4 > + Example files added for review of workbench. > > https://sourceforge.net/project/showfiles.php?group_id=181979 > > Note: wxPython 2.8 is still necessary to run the workbench. > > > > Discover a new way to get answers to all your questions! Benefit from the knowledge, opinions and experience of other users on Yahoo! Questions/Réponses > .
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From edschofield at gmail.com Sun Feb 4 19:40:15 2007 From: edschofield at gmail.com (Ed Schofield) Date: Mon, 5 Feb 2007 01:40:15 +0100 Subject: [SciPy-user] RHEL 4 SciPy Install Failure In-Reply-To: References: Message-ID: <1b5a37350702041640r23d5be9enbd498f6bda97e923@mail.gmail.com> On 1/18/07, Chad Kidder wrote: > > I was trying to install SciPy today after a successful numpy install and > it went > well until it got to the fortran compilation of dfftpack. Below is an > exerpt of > the error. Anyone know what's going on? We've improved the CPU detection code in NumPy SVN so SciPy compiles on Core2 CPUs. If you want a simple patch to get it working, without moving to NumPy SVN, apply this: http://projects.scipy.org/scipy/numpy/changeset/3538 -- Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Mon Feb 5 02:00:19 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 5 Feb 2007 08:00:19 +0100 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <45AD641F.3080406@ru.nl> References: <20070111225611.GD4479@clipper.ens.fr> <45AD641F.3080406@ru.nl> Message-ID: <20070205070019.GB9953@clipper.ens.fr> Hi Stef, Sorry for taking so long to answer your mail, I am a bit overwhelmed currently. On Wed, Jan 17, 2007 at 12:47:43AM +0100, Stef Mientki wrote: > apparently not everybody is allowed to edit the pages, Just create an account and log in. > I saw yesterday the lecture of Eric Jones, Travis Oliphant, > despite the not-overwhelming image quality, I think it's real information > for newbies. > And if the lecture is too long, at least the handouts are great !! The lecture is pretty good as it presents python first and numpy/scipy afterward. 
It is quite outdated, though (it refers to Numeric and to a plotting library other than matplotlib), so I am not too keen on linking it from the "Getting_Started" page, as it might confuse beginners. The goal of the "Getting_Started" page is not to provide a tutorial, but to provide links to the minimal amount of reading needed to start hacking in python, and to provide info that is not in tutorials (like what modules to use, for instance). > Another interesting point for newbies might be an overview of > variable types, e.g. what's the difference between a list and a tuple? I > have put my personal notes on my website, maybe it's worth > including something like this I think this should go in a mini python tutorial, just like the first part of Travis and Eric's talk. If you want to create a page on the scipy wiki that is a mini python tutorial and add this data to it, I think it would be great. Such a page, combined with Dave Kuhlman's course, would be great to get people started. Regards, Gaël From chiaracaronna at hotmail.com Mon Feb 5 10:38:13 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Mon, 05 Feb 2007 15:38:13 +0000 Subject: [SciPy-user] newbie: easy question (I hope) In-Reply-To: <45C2F5EC.8030501@iam.uni-stuttgart.de> Message-ID: Hi, I have some old code where I used numarray, and there is this command (where c is a 1d array): c= unpack(format, string) ccdimage=N.array(c,shape=[nrows,ncol]) If I use array from numpy I get this error: "shape is an invalid keyword argument for this function" I need to transform c into an array of shape [nrows,ncol]; how can I do it with numpy? _________________________________________________________________ Don't just search. Find. Check out the new MSN Search!
http://search.msn.com/ From gael.varoquaux at normalesup.org Mon Feb 5 11:54:16 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 5 Feb 2007 17:54:16 +0100 Subject: [SciPy-user] newbie: easy question (I hope) In-Reply-To: References: <45C2F5EC.8030501@iam.uni-stuttgart.de> Message-ID: <20070205165415.GE17139@clipper.ens.fr> On Mon, Feb 05, 2007 at 03:38:13PM +0000, Chiara Caronna wrote: > I have an old code, where I used numarray, and there is this command: > where c is a 1d array > c= unpack(format, string) > ccdimage=N.array(c,shape=[nrows,ncol]) > if I use array from numpy I got this error: > "shape is an invalid keyword argument for this function" > I need to transform c in an array of shape [nrows,ncol], how can I do it > with numpy? c is an array, right, not a list. Then you can do something like c = unpack(format, string) ccdimage = N.reshape(c, (nrows, ncol)) If c is a list you can convert it to a 1D array using N.array(c). HTH, Gaël From chiaracaronna at hotmail.com Mon Feb 5 11:55:35 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Mon, 05 Feb 2007 16:55:35 +0000 Subject: [SciPy-user] newbie: easy question (I hope) In-Reply-To: <20070205165415.GE17139@clipper.ens.fr> Message-ID: It works, thank you! Chiara >From: Gael Varoquaux >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] newbie: easy question (I hope) >Date: Mon, 5 Feb 2007 17:54:16 +0100 > >On Mon, Feb 05, 2007 at 03:38:13PM +0000, Chiara Caronna wrote: > > I have an old code, where I used numarray, and there is this command: > > where c is a 1d array > > > c= unpack(format, string) > > ccdimage=N.array(c,shape=[nrows,ncol]) > > > if I use array from numpy I got this error: > > > "shape is an invalid keyword argument for this function" > > I need to transform c in an array of shape [nrows,ncol], how can I do it > > with numpy? > >c is an array, right, not a list.
> >Then you can do something like > >c = unpack(format, string) >ccdimage = N.reshape(c, (nrows, ncol)) > >If c is a list you can convert it to a 1D array using N.array(c). > >HTH, > >Gaël >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From oliphant at ee.byu.edu Mon Feb 5 14:20:59 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 05 Feb 2007 12:20:59 -0700 Subject: [SciPy-user] newbie: easy question (I hope) In-Reply-To: References: Message-ID: <45C7839B.7080904@ee.byu.edu> Chiara Caronna wrote: >Hi, >I have an old code, where I used numarray, and there is this command: >where c is a 1d array > >c= unpack(format, string) >ccdimage=N.array(c,shape=[nrows,ncol]) > >if I use array from numpy I got this error: > >"shape is an invalid keyword argument for this function" >I need to transform c in an array of shape [nrows,ncol], how can I do it >with numpy? > > What exactly are you asking? If you want to use numarray-compatible syntax then use the numpy.numarray module like this import numpy.numarray as N This is a compatibility layer and is not recommended except for porting older code. Otherwise you change the shape of an array using the reshape method or setting the shape attribute of an ndarray.
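As a concrete sketch of the reshape route discussed above (toy sizes standing in for nrows/ncol, and arange standing in for the data that unpack(format, string) would produce):

```python
import numpy as N

nrows, ncol = 3, 4
c = N.arange(nrows * ncol, dtype=float)  # stands in for the unpacked 1-d data

# Option 1: the reshape function/method (returns a reshaped view when possible).
ccdimage = N.reshape(c, (nrows, ncol))

# Option 2: set the shape attribute in place.
d = c.copy()
d.shape = (nrows, ncol)

print(ccdimage.shape)  # (3, 4)
print(d.shape)         # (3, 4)
```

Both options leave the underlying data untouched; only the view of it as rows and columns changes.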
-Travis From chiaracaronna at hotmail.com Mon Feb 5 14:25:49 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Mon, 05 Feb 2007 19:25:49 +0000 Subject: [SciPy-user] newbie: easy question (I hope) In-Reply-To: <45C7839B.7080904@ee.byu.edu> Message-ID: >From: Travis Oliphant >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] newbie: easy question (I hope) >Date: Mon, 05 Feb 2007 12:20:59 -0700 > >Chiara Caronna wrote: > > >Hi, > >I have an old code, where I used numarray, and there is this command: > >where c is a 1d array > > > >c= unpack(format, string) > >ccdimage=N.array(c,shape=[nrows,ncol]) > > > >if I use array from numpy I got this error: > > > >"shape is an invalid keyword argument for this function" > >I need to transform c in an array of shape [nrows,ncol], how can I do it > >with numpy? > > > > >What exactly are you asking? > >If you want to use numarray-compatibile syntax then use the >numpy.numarray module like this > >import numpy.numarray as N > >This is a compatibility-layer and is not recommended except for porting >older code. > >Otherwise you change the shape of an array using the reshape method or >setting the shape attribute of an ndarray. > Actually I wanted to remove all the numarray syntax from my code... they are not so much and I prefer to have a "cleaner" code. I used the reshape method, thanks Chiara >-Travis > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! 
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From ewald.zietsman at gmail.com Tue Feb 6 03:02:26 2007 From: ewald.zietsman at gmail.com (Ewald Zietsman) Date: Tue, 6 Feb 2007 10:02:26 +0200 Subject: [SciPy-user] scipy.optimize.leastsq error estimates Message-ID: Hi all, I want to fit a sinusoid of the form A*cos(2*pi*f*t) + B*sin(2*pi*f*t) to irregularly spaced data so that I can get a wave of the form C*cos(2*pi*f*t + phi) where C**2 = A**2 + B**2 and phi = arctan(-B/A). I have implemented this using the leastsq function, but I would like to also know the variances ( or standard errors ) of A, B and f. Is there a way I can get the variance-covariance matrix out from leastsq? Or at least get a good estimate of the standard errors of my unknowns? -Ewald Zietsman -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckkart at hoc.net Tue Feb 6 03:06:50 2007 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 06 Feb 2007 17:06:50 +0900 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: References: Message-ID: <45C8371A.9040608@hoc.net> Ewald Zietsman wrote: > Hi all, > > I want to fit a sinusoid of the form A*cos(2*pi*f*t) + B*sin(2*pi*f*t) > to irregularly spaced data so that I can get a wave of the form > C*cos(2*pi*f*t + phi) where C**2 = A**2 + B**2 and phi = arctan(-B/A). I > have implemented this using the leastsq function, but I would like to > also know the variances ( or standard errors ) of A, B and f. Is there a > way I can get the variance-covariance matrix out from leastsq? Or at > least get a good estimate of the standard errors of my unknowns? When setting full_output=True leastsq will return the covariance matrix: cov_x -- uses the fjac and ipvt optional outputs to construct an estimate of the covariance matrix of the solution. None if a singular matrix encountered (indicates infinite covariance in some direction).
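A minimal sketch of this full_output route, with synthetic data standing in for the irregularly spaced measurements. Note one step that the thread does not spell out: cov_x as returned must be multiplied by the residual variance to become the variance-covariance matrix of the parameters (the standard recipe; everything here, including the data and starting guess, is invented for illustration).

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic stand-in for irregularly spaced measurements of
# A*cos(2*pi*f*t) + B*sin(2*pi*f*t).
rng = np.random.RandomState(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))
A_true, B_true, f_true = 2.0, -1.0, 0.5
y = (A_true * np.cos(2 * np.pi * f_true * t)
     + B_true * np.sin(2 * np.pi * f_true * t)
     + 0.05 * rng.randn(t.size))

def residuals(p, t, y):
    A, B, f = p
    return y - (A * np.cos(2 * np.pi * f * t) + B * np.sin(2 * np.pi * f * t))

p0 = [1.8, -0.9, 0.49]  # starting guess near the truth
p, cov_x, infodict, mesg, ier = leastsq(residuals, p0, args=(t, y),
                                        full_output=True)

# Scale cov_x by the residual variance to get the parameter covariance,
# then read the standard errors off its diagonal.
dof = t.size - len(p0)
s_sq = (residuals(p, t, y) ** 2).sum() / dof
pcov = cov_x * s_sq
stderr = np.sqrt(np.diag(pcov))
print(p)
print(stderr)
```

The standard errors of C and phi would then follow from A, B and pcov by error propagation, which is left out here.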
However I recommend using scipy.sandbox.odr instead, which returns confidence intervals for all parameters. Christian From nwagner at iam.uni-stuttgart.de Tue Feb 6 03:14:08 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 Feb 2007 09:14:08 +0100 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45C8371A.9040608@hoc.net> References: <45C8371A.9040608@hoc.net> Message-ID: <45C838D0.6060404@iam.uni-stuttgart.de> Christian Kristukat wrote: > Ewald Zietsman wrote: > >> Hi all, >> >> I want to fit a sinusoid of the form A*cos(2*pi*f*t) + B*sin(2*pi*f*t) >> to irregularly spaced data so that I can get a wave of the form >> C*cos(2*pi*f*t + phi) where C**2 = A**2 + B**2 and phi = arctan(-B/A). I >> have implemented this using the leastsq function, but I would like to >> also know the variances ( or standard errors ) of A, B and f. Is there a >> way I can get the variance-covariance matrix out from leastsq? Or at >> least get a good estimate of the standard errors of my unknowns? >> > > When setting full_output=True leastsq will return the covariance matrix: > > cov_x -- uses the fjac and ipvt optional outputs to construct an > estimate of the covariance matrix of the solution. > None if a singular matrix encountered (indicates > infinite covariance in some direction). > > However I recommend using scipy.sandbox.odr instead, which returns confidence > intervals for all parameters. > > Christian > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > AFAIK odr is directly available through scipy.odr. So I guess the odr directory in the sandbox is obsolete. Is that correct ?
Nils From ckkart at hoc.net Tue Feb 6 03:37:46 2007 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 06 Feb 2007 17:37:46 +0900 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45C838D0.6060404@iam.uni-stuttgart.de> References: <45C8371A.9040608@hoc.net> <45C838D0.6060404@iam.uni-stuttgart.de> Message-ID: <45C83E5A.90101@hoc.net> Nils Wagner wrote: > > AFAIK odr is directly available through scipy.odr. > So I guess the odr directory in the sandbox is obsolete. Is that correct ? > At least not in scipy 0.5.2.dev2299. Might be different in svn. Christian From nwagner at iam.uni-stuttgart.de Tue Feb 6 03:47:00 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 Feb 2007 09:47:00 +0100 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45C83E5A.90101@hoc.net> References: <45C8371A.9040608@hoc.net> <45C838D0.6060404@iam.uni-stuttgart.de> <45C83E5A.90101@hoc.net> Message-ID: <45C84084.9080305@iam.uni-stuttgart.de> Christian Kristukat wrote: > Nils Wagner wrote: > >> >> AFAIK odr is directly available through scipy.odr. >> So I guess the odr directory in the sandbox is obsolete. Is that correct ? >> >> > > At least not in scipy 0.5.2.dev2299. Might be different in svn. > > Christian > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I am using >>> scipy.__version__ '0.5.3.dev2667' Nils From robert.kern at gmail.com Tue Feb 6 04:47:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 06 Feb 2007 03:47:17 -0600 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45C838D0.6060404@iam.uni-stuttgart.de> References: <45C8371A.9040608@hoc.net> <45C838D0.6060404@iam.uni-stuttgart.de> Message-ID: <45C84EA5.8080307@gmail.com> Nils Wagner wrote: > AFAIK odr is directly available through scipy.odr. > So I guess the odr directory in the sandbox is obsolete.
Is that correct ? There is no more odr/ directory in the sandbox since it got moved into the main package. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From joshuafr at gmail.com Tue Feb 6 05:42:09 2007 From: joshuafr at gmail.com (Joshua Petterson) Date: Tue, 6 Feb 2007 11:42:09 +0100 Subject: [SciPy-user] errors with scipy.stats Message-ID: Hi, I'm running into trouble with scipy.stats; can someone explain this to me?: ############################# |datas|[135]>import scipy.stats |datas|[136]>a.dtype Out [136]:dtype('float64') |datas|[137]>a.fill_value() Out [137]:-9.9999999999999995e-21 |datas|[138]>a.shape Out [138]:(744,) |datas|[144]>a.max() Out [144]:36.0 |datas|[146]>a.min() Out [146]:0.0 |datas|[147]>a.mean() Out [147]:array(3.20519835841) |datas|[148]>stats.mean(a) Out [148]:1750.47849462 |datas|[152]>his = stats.histogram2(a, [0,10,20,30,40,50]) |datas|[153]>his Out [153]:array([692, 27, 10, 2, 0, 13]) ######################### So mean and histogram2 give wrong results! Thanks for your help! -------------- next part -------------- An HTML attachment was scrubbed... URL: From v-nijs at kellogg.northwestern.edu Tue Feb 6 13:00:29 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Tue, 06 Feb 2007 12:00:29 -0600 Subject: [SciPy-user] Installing umfpack on mac? In-Reply-To: Message-ID: Are there any installation instructions for installing umfpack on mac? I tried installing suitesparse via macports but this seems to be broken. Vincent -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Tue Feb 6 13:24:46 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 06 Feb 2007 12:24:46 -0600 Subject: [SciPy-user] errors with scipy.stats In-Reply-To: References: Message-ID: <45C8C7EE.8010401@gmail.com> Joshua Petterson wrote: > Hi, > I'm running into trouble with scipy.stats; can someone explain this to me?: > ############################# > |datas|[135]>import scipy.stats > > |datas|[136]>a.dtype > Out [136]:dtype('float64') > > |datas|[137]>a.fill_value() > Out [137]:-9.9999999999999995e-21 It appears that a is a masked array, correct? Not all of the functions in scipy.stats support masked arrays. The methods on masked arrays do support masked arrays, of course. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Tue Feb 6 13:32:20 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 6 Feb 2007 13:32:20 -0500 Subject: [SciPy-user] errors with scipy.stats In-Reply-To: <45C8C7EE.8010401@gmail.com> References: <45C8C7EE.8010401@gmail.com> Message-ID: <200702061332.20674.pgmdevlist@gmail.com> On Tuesday 06 February 2007 13:24:46 Robert Kern wrote: > It appears that a is a masked array, correct? Not all of the functions in > scipy.stats support masked arrays. The methods on masked arrays do support > masked arrays, of course. If the data is 1D or can be ravelled safely, you can try to compress it before applying stats methods. At least you get rid of your masked data... From david.warde.farley at utoronto.ca Tue Feb 6 16:37:18 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Tue, 6 Feb 2007 16:37:18 -0500 Subject: [SciPy-user] find(M) equivalent for sparse matrices? Message-ID: Hi there, I'm porting some Matlab code to Numpy and having some trouble.
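Pierre's compress-first advice above can be sketched like this, with hypothetical data using a sentinel fill value along the lines of the earlier session (the values here are invented; `compressed()` is the masked-array method that returns only the unmasked entries as a plain ndarray):

```python
import numpy as np
import numpy.ma as ma

# Hypothetical data: real values plus a sentinel that should be masked out.
raw = np.array([1.0, 2.0, -1e20, 3.0, -1e20, 4.0])
a = ma.masked_values(raw, -1e20)

# Masked-array methods honour the mask:
print(a.mean())          # 2.5

# Plain functions that are not mask-aware may see the raw fill values,
# so hand them only the unmasked data:
valid = a.compressed()   # plain 1-D ndarray of the unmasked values
print(valid.mean())      # 2.5, now safe for non-mask-aware code
```

This mirrors the thread's symptom: a mask-aware mean agrees before and after compressing, while a mask-unaware routine fed the raw array would average in the huge fill values.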
Basically, in the matlab code we have a line that does a find() on a sparse matrix and grabs out the rows, columns and values of the non-zero element positions. It then modifies each element in a manner dependent on its row and column, and then stores all the elements back into their original spots. I can't figure out a way to do this, as comparisons don't seem to be implemented for the sparse matrix classes. I suppose I should mention I'm using Python 2.4, scipy 0.5.1, numpy 1.0. Anyone got an idea of how to do this? By the way, what I'm doing can be expressed as left and right multiplying it by a (the same on both sides) diagonal matrix, but this appears to take a lot longer than it should when I write it that way (with the diagonal stored as a sparse). David From emsellem at obs.univ-lyon1.fr Wed Feb 7 08:07:34 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Wed, 07 Feb 2007 14:07:34 +0100 Subject: [SciPy-user] back with a problem in scipy installation... In-Reply-To: <45BF808B.1090807@gmail.com> References: 45B46D75.5030300@obs.univ-lyon1.fr <45B4883A.2050606@obs.univ-lyon1.fr> <45B4F692.6080904@gmail.com> <45BF6B41.304@obs.univ-lyon1.fr> <45BF808B.1090807@gmail.com> Message-ID: <45C9CF16.1060607@obs.univ-lyon1.fr> An HTML attachment was scrubbed... URL: From sgarcia at olfac.univ-lyon1.fr Wed Feb 7 08:49:17 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Wed, 07 Feb 2007 14:49:17 +0100 Subject: [SciPy-user] FFT speed ? Message-ID: <45C9D8DD.9040101@olfac.univ-lyon1.fr> Hello list, I have a problem with fft speed. Could someone try this: from scipy import * import time a = rand((186888));t1=time.time();fft(a);t2=time.time();t2-t1 # for me about 0.67791390419006348 s. on centrino 1.6GHz a = rand((186890));t1=time.time();fft(a);t2=time.time();t2-t1 # for me about 1.622053861618042 s. on centrino 1.6GHz a = rand((186889));t1=time.time();fft(a);t2=time.time();t2-t1 # the computation is infinite !!!!
(no time to wait) Question: why is there a problem especially for that size (186889) of fft? It is not the only size with that problem. Thanks, Sam -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Universite Claude Bernard LYON 1 CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE Tél : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From nwagner at iam.uni-stuttgart.de Wed Feb 7 08:59:00 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 07 Feb 2007 14:59:00 +0100 Subject: [SciPy-user] FFT speed ? In-Reply-To: <45C9D8DD.9040101@olfac.univ-lyon1.fr> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> Message-ID: <45C9DB24.6080805@iam.uni-stuttgart.de> Samuel GARCIA wrote: > Hello list, > > I have a problem with fft speed.
> could someone try this : > > from scipy import * > import time > > a = rand((186888));t1=time.time();fft(a);t2=time.time();t2-t1 > > # for me about 0.67791390419006348 s. on centrino 1.6GHz > > a = rand((186890));t1=time.time();fft(a);t2=time.time();t2-t1 > > # for me about 1.622053861618042 s. on centrino 1.6GHz > > a = rand((186889));t1=time.time();fft(a);t2=time.time();t2-t1 > > # the computation is infinit !!!! (no time to wait) > > > Question : why is there a problem specialy for that size (186889) of fft ? > It is not the only size with that problem. I guess this is due to the design of fft which is made for input lengths of 2**x. If you try e.g. 262144=2**18 which is even larger than 186889 you'll see that it's pretty fast. Christian From sgarcia at olfac.univ-lyon1.fr Wed Feb 7 09:04:07 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Wed, 07 Feb 2007 15:04:07 +0100 Subject: [SciPy-user] FFT speed ? In-Reply-To: <45C9DB24.6080805@iam.uni-stuttgart.de> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DB24.6080805@iam.uni-stuttgart.de> Message-ID: <45C9DC57.6030603@olfac.univ-lyon1.fr> Yes I know this. But 186888 186889 and 186890 are not power of 2 and the computation time is very very different just for a difference of size of only one point. What is the reason ? And how to deal with that ? (I realy need to compute fft with a random nomber of point) Nils Wagner wrote: > Samuel GARCIA wrote: > >> Hello list, >> >> I have a problem with fft speed. >> could someone try this : >> >> from scipy import * >> import time >> >> a = rand((186888));t1=time.time();fft(a);t2=time.time();t2-t1 >> >> # for me about 0.67791390419006348 s. on centrino 1.6GHz >> >> a = rand((186890));t1=time.time();fft(a);t2=time.time();t2-t1 >> >> # for me about 1.622053861618042 s. on centrino 1.6GHz >> >> a = rand((186889));t1=time.time();fft(a);t2=time.time();t2-t1 >> >> # the computation is infinit !!!! 
(no time to wait) >> >> >> Question : why is there a problem specialy for that size (186889) of fft ? >> It is not the only size with that problem. >> >> >> Thank >> >> Sam >> >> >> >> > What you can find with help (fft) is ... > > "This is most efficient for n a power of two" > > Nils > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Universite Claude Bernard LYON 1 CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgarcia at olfac.univ-lyon1.fr Wed Feb 7 09:07:48 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Wed, 07 Feb 2007 15:07:48 +0100 Subject: [SciPy-user] FFT speed ? In-Reply-To: <45C9DBEF.8090205@hoc.net> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DBEF.8090205@hoc.net> Message-ID: <45C9DD34.1060807@olfac.univ-lyon1.fr> Yes it is pretty fast. The difference of time is not a problem for the end user (between 0.2 and 1 s). But specialy the value of 186889 is really a problem, the computation is infinit. Did you try tis value especialy ? Christian Kristukat wrote: > Samuel GARCIA wrote: > >> Hello list, >> >> I have a problem with fft speed. >> could someone try this : >> >> from scipy import * >> import time >> >> a = rand((186888));t1=time.time();fft(a);t2=time.time();t2-t1 >> >> # for me about 0.67791390419006348 s. on centrino 1.6GHz >> >> a = rand((186890));t1=time.time();fft(a);t2=time.time();t2-t1 >> >> # for me about 1.622053861618042 s. on centrino 1.6GHz >> >> a = rand((186889));t1=time.time();fft(a);t2=time.time();t2-t1 >> >> # the computation is infinit !!!! 
(no time to wait) >> >> >> Question : why is there a problem specialy for that size (186889) of fft ? >> It is not the only size with that problem. >> > > I guess this is due to the design of fft which is made for input lengths of > 2**x. If you try e.g. 262144=2**18 which is even larger than 186889 you'll see > that it's pretty fast. > > Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Universite Claude Bernard LYON 1 CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Wed Feb 7 09:13:05 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 07 Feb 2007 15:13:05 +0100 Subject: [SciPy-user] FFT speed ? In-Reply-To: <45C9DD34.1060807@olfac.univ-lyon1.fr> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DBEF.8090205@hoc.net> <45C9DD34.1060807@olfac.univ-lyon1.fr> Message-ID: <45C9DE71.1070302@iam.uni-stuttgart.de> Samuel GARCIA wrote: > Yes it is pretty fast. > The difference of time is not a problem for the end user (between 0.2 > and 1 s). > But specialy the value of 186889 is really a problem, the computation > is infinit. > Did you try tis value especialy ? > You may try a = rand((186889)) t1=time.time() fft(a,pow(2,18)) t2=time.time() print t2-t1 Nils From sgarcia at olfac.univ-lyon1.fr Wed Feb 7 09:18:19 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Wed, 07 Feb 2007 15:18:19 +0100 Subject: [SciPy-user] FFT speed ? 
In-Reply-To: <45C9DE71.1070302@iam.uni-stuttgart.de> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DBEF.8090205@hoc.net> <45C9DD34.1060807@olfac.univ-lyon1.fr> <45C9DE71.1070302@iam.uni-stuttgart.de> Message-ID: <45C9DFAB.2050303@olfac.univ-lyon1.fr> yes I was thinking of doing something like that but fft(a,pow(2,18)).shape is of course 262144 (2**18) and when I use ifft after that The length of my signal have changed I have an interpolated signal ... new problem for me ... Nils Wagner wrote: > Samuel GARCIA wrote: > >> Yes it is pretty fast. >> The difference of time is not a problem for the end user (between 0.2 >> and 1 s). >> But specialy the value of 186889 is really a problem, the computation >> is infinit. >> Did you try tis value especialy ? >> >> > You may try > > a = rand((186889)) > t1=time.time() > fft(a,pow(2,18)) > t2=time.time() > print t2-t1 > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Universite Claude Bernard LYON 1 CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Wed Feb 7 09:39:21 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 7 Feb 2007 16:39:21 +0200 Subject: [SciPy-user] FFT speed ? 
In-Reply-To: <45C9DFAB.2050303@olfac.univ-lyon1.fr> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DBEF.8090205@hoc.net> <45C9DD34.1060807@olfac.univ-lyon1.fr> <45C9DE71.1070302@iam.uni-stuttgart.de> <45C9DFAB.2050303@olfac.univ-lyon1.fr> Message-ID: <20070207143921.GE6274@mentat.za.net> On Wed, Feb 07, 2007 at 03:18:19PM +0100, Samuel GARCIA wrote: > yes I was thinking of doing something like that but > fft(a,pow(2,18)).shape is of course 262144 (2**18) > and when I use ifft after that The length of my signal have changed I have an > interpolated signal ... new problem for me ... An interpolated signal? In [19]: N.real(N.fft.ifft(N.fft.fft(N.arange(11),16))) Out[19]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., -0., 0., 0., 0., 0.]) Padding in the Fourier-domain, on the other hand: In [20]: N.real(N.fft.ifft(N.fft.fft(N.arange(11)),16)) Out[20]: array([ 0. , 2.10035464, 3.04455839, 1.35780382, 3.23688237, 3.12291155, 2.35790278, 4.12133152, 3.4375 , 3.29213721, 5.00323313, 3.69035768, 4.32561763, 5.98165513, 3.3443057 , 6.58344845]) Cheers St?fan From paul.ray at nrl.navy.mil Wed Feb 7 09:42:46 2007 From: paul.ray at nrl.navy.mil (Paul Ray) Date: Wed, 7 Feb 2007 09:42:46 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 42, Issue 10 In-Reply-To: References: Message-ID: <15276EE7-5A45-4D31-82DB-C4FF4D634F2A@nrl.navy.mil> On Feb 7, 2007, at 9:04 AM, scipy-user-request at scipy.org wrote: > I have a problem with fft speed. The speed of most FFT algorithms depends greatly on the largest prime factor of N. Here are the factors of your example numbers: >factor 186888 186888: 2 2 2 3 13 599 >factor 186890 186890: 2 5 11 1699 >factor 186889 186889: 186889 Note that 186889 is prime! This is the worst case situation! Numbers which are powers of 2 can be FFT'ed in order N*Log_2(N) time, while prime numbers take order N**2 time. This is what you are seeing. You might do a survey of FFT algorithms to see which ones perform best on prime Ns. 
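[Editorial note] Paul's factor listings above can be reproduced without the GNU `factor` utility; here is a small trial-division sketch (not part of the original thread, but plenty fast for sizes of this order):

```python
def prime_factors(n):
    """Return the prime factorisation of n (ascending) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains above sqrt of the original is prime
        factors.append(n)
    return factors

print(prime_factors(186888))  # [2, 2, 2, 3, 13, 599]
print(prime_factors(186890))  # [2, 5, 11, 1699]
print(prime_factors(186889))  # [186889] -- prime, hence the FFT worst case
```

The largest prime factor (599, 1699, and 186889 respectively) is what governs the cost of a mixed-radix FFT, which matches the timings reported in the thread.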
Many you will find, work ONLY on power of 2 Ns! You should make sure that your SciPy is using FFTW3 for the FFT engine for sure, since I think it does about as well as possible on prime Ns. You can read about FFTW at http://fftw.org I rarely use FFT in SciPy/NumPy since there are many confusing statements in the instructions about FFTW2/FFTW3 support which don't make sense. The API changed from FFTW2 to FFTW3 and it is really important to get this right. I think FFTW2 support should be removed and someone should do a check to make sure the FFTW3 support is implemented correctly. Cheers, -- Paul From sgarcia at olfac.univ-lyon1.fr Wed Feb 7 10:05:18 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Wed, 07 Feb 2007 16:05:18 +0100 Subject: [SciPy-user] FFT speed ? In-Reply-To: <20070207143921.GE6274@mentat.za.net> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DBEF.8090205@hoc.net> <45C9DD34.1060807@olfac.univ-lyon1.fr> <45C9DE71.1070302@iam.uni-stuttgart.de> <45C9DFAB.2050303@olfac.univ-lyon1.fr> <20070207143921.GE6274@mentat.za.net> Message-ID: <45C9EAAE.9010308@olfac.univ-lyon1.fr> Ok sorry. I answered without trying ! Thank a lot. I will solve that way. Sam Stefan van der Walt wrote: > On Wed, Feb 07, 2007 at 03:18:19PM +0100, Samuel GARCIA wrote: > >> yes I was thinking of doing something like that but >> fft(a,pow(2,18)).shape is of course 262144 (2**18) >> and when I use ifft after that The length of my signal have changed I have an >> interpolated signal ... new problem for me ... >> > > An interpolated signal? > > In [19]: N.real(N.fft.ifft(N.fft.fft(N.arange(11),16))) > Out[19]: > array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., > -0., 0., 0., 0., 0.]) > > Padding in the Fourier-domain, on the other hand: > > In [20]: N.real(N.fft.ifft(N.fft.fft(N.arange(11)),16)) > Out[20]: > array([ 0. 
, 2.10035464, 3.04455839, 1.35780382, 3.23688237, > 3.12291155, 2.35790278, 4.12133152, 3.4375 , 3.29213721, > 5.00323313, 3.69035768, 4.32561763, 5.98165513, 3.3443057 , > 6.58344845]) > > > Cheers > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Universite Claude Bernard LYON 1 CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From emsellem at obs.univ-lyon1.fr Wed Feb 7 10:20:29 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Wed, 07 Feb 2007 16:20:29 +0100 Subject: [SciPy-user] back with a problem in scipy installation... Message-ID: <45C9EE3D.1020907@obs.univ-lyon1.fr> Hi again (sorry for the second post: the first one was scrubbed), I now went through a full install of atlas, lapack, blas, numpy, scipy again. I still have some failure in scipy (none in numpy except for the umfpack which I haven't installed) you can find below. I also include the explicit site.cfg that I had to twiggle in order for scipy to find atlas (no Environment variable or other way did the work for some reason): it looks weird if I compare it to "advised" site.cfg but this is the only option that recognised atlas properly. any help is welcome as always. thanks again Eric ======================= below is a summary of: 1/ the failure in scipy 2/ my site.cfg used for building numpy and scipy *************************************************************************************************** ******** Output of scipy.test() = (starting with some failure due to the lack of umfpack and then..): .................... 
Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) ................................ Found 4 tests for scipy.io.recaster Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) ................... ..zswap:n=4 ..zswap:n=3 ..................Residual: 1.05006950608e-07 ......................................................................................... ====================================================================== FAIL: check_syevr (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "/usr/local/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/local/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal 
header='Arrays are not almost equal') File "/usr/local/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ---------------------------------------------------------------------- Ran 1619 tests in 5.764s FAILED (failures=2) *************************************************************************************************** ******** My site.cfg used for both numpy and scipy ******** it finds atlas, but that's all. If I uncomment the lines on lapack_opt and blas_opt, it finds the [DEFAULT] library_dirs = /usr/local/lib/atlas include_dirs = /usr/local/lib/atlas [atlas] library_dirs = /usr/local/lib/atlas atlas_libs = lapack, f77blas, cblas, atlas # [blas_opt] # library_dirs = /usr/local/lib/atlas # blas_libs = fblas, f77blas, cblas # [lapack_opt] # library_dirs = /usr/local/lib/atlas # lapack_libs = lapack [blas] library_dirs = /usr/local/lib/atlas libraries = f77blas, cblas, atlas # [lapack] library_dirs = /usr/local/lib/atlas libraries = lapack, f77blas, cblas, atlas [fftw] libraries = fftw3 -- ==================================================================== Eric Emsellem emsellem at obs.univ-lyon1.fr Centre de Recherche Astrophysique de Lyon 9 av. 
Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ==================================================================== From edschofield at gmail.com Wed Feb 7 10:53:35 2007 From: edschofield at gmail.com (Ed Schofield) Date: Wed, 7 Feb 2007 16:53:35 +0100 Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo In-Reply-To: <200701311116.22016.sransom@nrao.edu> References: <1170227383.12321.11.camel@nadav.envision.co.il> <200701311116.22016.sransom@nrao.edu> Message-ID: <1b5a37350702070753q6e47ad86l9446192288fed75e@mail.gmail.com> On Wednesday 31 January 2007 02:09, Nadav Horesh wrote: > >python numpy/distutils/cpuinfo.py: > > > >CPU information: getNCPUs=4 has_mmx has_sse has_sse2 is_64bit > > is_Intel is_XEON is_Xeon is_i686 > > Well, > Crossing our systems' info it looks to me that the best and straight > forward way to identify Intel's imitation to amd64 arch is: > > def _is_Nocona(self): > return self.is_64bit() and self.is_Intel() and self.is_i686() > > Did not test it but it fits and looks logical. > > Nadav. On 1/31/07, Scott Ransom wrote: > > This works for me on a 64-bit Debian Core2 Duo system. > > eiger:~$ uname -a > Linux eiger 2.6.18-3-amd64 #1 SMP Sun Dec 10 19:57:44 CET 2006 x86_64 > GNU/Linux > > eiger:~$ python cpuinfo.py > CPU information: getNCPUs=2 has_mmx has_sse has_sse2 is_64bit is_Intel > is_Nocona is_i686 I can also confirm that this works for Core2. I've checked it in to SVN without the redundant is_Intel() check. I've also added a separate entry for Core2 processors, which will be supported with an explicit -march flag in GCC 4.3. -- Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From sransom at nrao.edu Wed Feb 7 11:12:36 2007 From: sransom at nrao.edu (Scott Ransom) Date: Wed, 7 Feb 2007 11:12:36 -0500 Subject: [SciPy-user] FFT speed ? 
In-Reply-To: <20070207143921.GE6274@mentat.za.net> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DBEF.8090205@hoc.net> <45C9DD34.1060807@olfac.univ-lyon1.fr> <45C9DE71.1070302@iam.uni-stuttgart.de> <45C9DFAB.2050303@olfac.univ-lyon1.fr> <20070207143921.GE6274@mentat.za.net> Message-ID: <20070207161236.GB19902@ssh.cv.nrao.edu> On Wed, Feb 07, 2007 at 04:39:21PM +0200, Stefan van der Walt wrote: > On Wed, Feb 07, 2007 at 03:18:19PM +0100, Samuel GARCIA wrote: > > yes I was thinking of doing something like that but > > fft(a,pow(2,18)).shape is of course 262144 (2**18) > > and when I use ifft after that The length of my signal have changed I have an > > interpolated signal ... new problem for me ... > > An interpolated signal? Padding a time series gives you an interpolated (actually, the term often used is "oversampled" Fourier spectrum). Scott > In [19]: N.real(N.fft.ifft(N.fft.fft(N.arange(11),16))) > Out[19]: > array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., > -0., 0., 0., 0., 0.]) > > Padding in the Fourier-domain, on the other hand: > > In [20]: N.real(N.fft.ifft(N.fft.fft(N.arange(11)),16)) > Out[20]: > array([ 0. , 2.10035464, 3.04455839, 1.35780382, 3.23688237, > 3.12291155, 2.35790278, 4.12133152, 3.4375 , 3.29213721, > 5.00323313, 3.69035768, 4.32561763, 5.98165513, 3.3443057 , > 6.58344845]) > > > Cheers > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From cimrman3 at ntc.zcu.cz Wed Feb 7 11:44:18 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 07 Feb 2007 17:44:18 +0100 Subject: [SciPy-user] Installing umfpack on mac? 
In-Reply-To: References: Message-ID: <45CA01E2.70200@ntc.zcu.cz> Vincent Nijs wrote: > Are there any installation instructions for installing umfpack on mac? I > tried install suitesparse via macports but this seems to be broken. Try checking UMFPACK/Doc/UserGuide.pdf, but there are only generic Unix/Windows instructions, IMHO. r. From peridot.faceted at gmail.com Wed Feb 7 12:13:12 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 7 Feb 2007 12:13:12 -0500 Subject: [SciPy-user] FFT speed ? In-Reply-To: <45C9DC57.6030603@olfac.univ-lyon1.fr> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DB24.6080805@iam.uni-stuttgart.de> <45C9DC57.6030603@olfac.univ-lyon1.fr> Message-ID: On 07/02/07, Samuel GARCIA wrote: > > Yes I know this. > But 186888 186889 and 186890 are not power of 2 and the computation time is > very very different just for a difference of size of only one point. > What is the reason ? > And how to deal with that ? (I realy need to compute fft with a random > nomber of point) Many problems can be solved by padding. But once in a while one comes up which needs a particular number of points, and it's not always a power of two. Can FFTW (or any of the FFT packages numpy/scipy can use) compute an FFT of size 186889 in a reasonable time? I know there are algorithms for large prime factors, and for small prime factors, and that you can combine the two (though perhaps primes of moderate size are a problem). Anne M. Archibald From peridot.faceted at gmail.com Wed Feb 7 12:20:13 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 7 Feb 2007 12:20:13 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 42, Issue 10 In-Reply-To: <15276EE7-5A45-4D31-82DB-C4FF4D634F2A@nrl.navy.mil> References: <15276EE7-5A45-4D31-82DB-C4FF4D634F2A@nrl.navy.mil> Message-ID: On 07/02/07, Paul Ray wrote: > > On Feb 7, 2007, at 9:04 AM, scipy-user-request at scipy.org wrote: > > > I have a problem with fft speed. 
> > The speed of most FFT algorithms depends greatly on the largest prime > factor of N. > > Here are the factors of your example numbers: > >factor 186888 > 186888: 2 2 2 3 13 599 > > >factor 186890 > 186890: 2 5 11 1699 > > >factor 186889 > 186889: 186889 > > Note that 186889 is prime! This is the worst case situation! > Numbers which are powers of 2 can be FFT'ed in order N*Log_2(N) time, > while prime numbers take order N**2 time. This is what you are seeing. > You might do a survey of FFT algorithms to see which ones perform > best on prime Ns. Many you will find, work ONLY on power of 2 Ns! > You should make sure that your SciPy is using FFTW3 for the FFT > engine for sure, since I think it does about as well as possible on > prime Ns. You can read about FFTW at http://fftw.org There are at least two algorithms to efficiently compute prime-length FFTs (Rader's conversion and the chirp-z transform). How does one determine which FFT package one is actually using? (I normally use the scipy and numpy that are packaged in ubuntu edgy, but even if you compile it yourself it may not be obvious whether it found the packages you have installed.) Anne M. Archibald From stefan at sun.ac.za Wed Feb 7 12:34:32 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 7 Feb 2007 19:34:32 +0200 Subject: [SciPy-user] SciPy-user Digest, Vol 42, Issue 10 In-Reply-To: References: <15276EE7-5A45-4D31-82DB-C4FF4D634F2A@nrl.navy.mil> Message-ID: <20070207173432.GG6274@mentat.za.net> On Wed, Feb 07, 2007 at 12:20:13PM -0500, Anne Archibald wrote: > On 07/02/07, Paul Ray wrote: > > > > On Feb 7, 2007, at 9:04 AM, scipy-user-request at scipy.org wrote: > > > > > I have a problem with fft speed. > > > > The speed of most FFT algorithms depends greatly on the largest prime > > factor of N. 
> > > > Here are the factors of your example numbers: > > >factor 186888 > > 186888: 2 2 2 3 13 599 > > > > >factor 186890 > > 186890: 2 5 11 1699 > > > > >factor 186889 > > 186889: 186889 > > > > Note that 186889 is prime! This is the worst case situation! > > Numbers which are powers of 2 can be FFT'ed in order N*Log_2(N) time, > > while prime numbers take order N**2 time. This is what you are seeing. > > You might do a survey of FFT algorithms to see which ones perform > > best on prime Ns. Many you will find, work ONLY on power of 2 Ns! > > You should make sure that your SciPy is using FFTW3 for the FFT > > engine for sure, since I think it does about as well as possible on > > prime Ns. You can read about FFTW at http://fftw.org > > There are at least two algorithms to efficiently compute prime-length > FFTs (Rader's conversion and the chirp-z transform). Here is a rough implementation (I don't guarantee anything). Cheers St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: chirpz.py Type: text/x-python Size: 1135 bytes Desc: not available URL: From sransom at nrao.edu Wed Feb 7 12:35:43 2007 From: sransom at nrao.edu (Scott Ransom) Date: Wed, 7 Feb 2007 12:35:43 -0500 Subject: [SciPy-user] FFT speed ? In-Reply-To: References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DB24.6080805@iam.uni-stuttgart.de> <45C9DC57.6030603@olfac.univ-lyon1.fr> Message-ID: <20070207173543.GA19870@ssh.cv.nrao.edu> On Wed, Feb 07, 2007 at 12:13:12PM -0500, Anne Archibald wrote: > On 07/02/07, Samuel GARCIA wrote: > > > > Yes I know this. > > But 186888 186889 and 186890 are not power of 2 and the computation time is > > very very different just for a difference of size of only one point. > > What is the reason ? > > And how to deal with that ? (I realy need to compute fft with a random > > nomber of point) > > Many problems can be solved by padding. 
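[Editorial note] Stefan's chirpz.py attachment was scrubbed from the archive. As an independent reconstruction of the same idea (a sketch, not the original attachment): Bluestein's chirp-z identity rewrites a length-N DFT — N arbitrary, including prime — as a convolution, which can then be zero-padded to a power of two, giving O(N log N) overall.

```python
import numpy as np

def bluestein_fft(x):
    """Exact DFT of arbitrary (e.g. prime) length via Bluestein's chirp-z trick."""
    x = np.asarray(x, dtype=complex)
    n = x.size
    k = np.arange(n)
    # Chirp exp(-i*pi*k^2/n); k*k is reduced mod 2n (the period of the exponent)
    # to avoid precision loss for large k.
    chirp = np.exp(-1j * np.pi * (k * k % (2 * n)) / n)
    m = 1 << (2 * n - 1).bit_length()          # power-of-two length >= 2n-1
    a = np.zeros(m, dtype=complex)
    a[:n] = x * chirp
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(chirp)
    b[m - n + 1:] = np.conj(chirp[1:][::-1])   # wrap-around for circular convolution
    # Circular convolution via two power-of-two FFTs, then undo the chirp.
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))[:n] * chirp
```

For example, `bluestein_fft(np.random.rand(186889))` agrees with a direct DFT while using only power-of-two transforms internally; this is essentially the technique FFTW applies to large prime sizes.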
But once in a while one comes > up which needs a particular number of points, and it's not always a > power of two. > > Can FFTW (or any of the FFT packages numpy/scipy can use) compute an > FFT of size 186889 in a reasonable time? I know there are algorithms > for large prime factors, and for small prime factors, and that you can > combine the two (though perhaps primes of moderate size are a > problem). I know that FFTW uses O(NlogN) algorithms for any N, even large prime factors. It is quite likely that for large prime N, FFTPACK (which is what numpy uses) goes to a standard DFT algorithm (O(N^2)). One important thing to remember is that the constant in front of the NlogN is highly dependent on the algorithm. That is why even though FFTW v3 uses NlogN algorithms for all N, some N (like powers of 2 and those composed of only small prime factors) are _much_ faster than those for other N. But the bottom line is that no matter what the constant, for large N, O(NlogN) is _much_ faster than O(N^2). Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From griffitts_lists at comcast.net Wed Feb 7 14:00:50 2007 From: griffitts_lists at comcast.net (Jonathan Griffitts) Date: Wed, 7 Feb 2007 12:00:50 -0700 Subject: [SciPy-user] Windows binary 0.5.2 requires SSE2 In-Reply-To: <45C510E8.3060602@comcast.net> References: <45C510E8.3060602@comcast.net> Message-ID: In message <45C510E8.3060602 at comcast.net>, John Hassler wrote >Jonathan Griffitts wrote: > Hi! > I have installed SciPy 0.5.2 for Python 2.5 on several Windows > computers, using the precompiled binary scipy-0.5.2.win32-py2.5.exe. > This works fine on some, but on others it crashes Python with an > Illegal > Instruction exception.? The crash is easy to find by running the > test() > suite or by attempting to make any use of scipy.integrate.quad. 
> Digging into it, I see that the exception comes from _quadpack.pyd, > and > it dies at a MOVSD instruction.? I believe MOVSD is an SSE2 > instruction > that is only implemented on the more recent CPUs from both Intel and > AMD. > >It's a problem with the Athlon.? It also happened with a previous >version of SciPy this summer.? This is not just an Athlon issue. It also fails on a Pentium 3 processor. To reiterate, this binary (scipy-0.5.2.win32-py2.5) uses SSE2 instructions, which are available only on the newer CPUs from both AMD and Intel. -- Jonathan Griffitts AnyWare Engineering Boulder, CO, USA From peridot.faceted at gmail.com Wed Feb 7 14:15:23 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 7 Feb 2007 14:15:23 -0500 Subject: [SciPy-user] FFT speed ? In-Reply-To: <20070207173543.GA19870@ssh.cv.nrao.edu> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DB24.6080805@iam.uni-stuttgart.de> <45C9DC57.6030603@olfac.univ-lyon1.fr> <20070207173543.GA19870@ssh.cv.nrao.edu> Message-ID: On 07/02/07, Scott Ransom wrote: > On Wed, Feb 07, 2007 at 12:13:12PM -0500, Anne Archibald wrote: > > Can FFTW (or any of the FFT packages numpy/scipy can use) compute an > > FFT of size 186889 in a reasonable time? I know there are algorithms > > for large prime factors, and for small prime factors, and that you can > > combine the two (though perhaps primes of moderate size are a > > problem). > > I know that FFTW uses O(NlogN) algorithms for any N, even large > prime factors. It is quite likely that for large prime N, FFTPACK > (which is what numpy uses) goes to a standard DFT algorithm > (O(N^2)). Indeed, for numbers in the vicinity of 10000, some take 500 times as long as others (on my machine), so I suspect that it is falling back to a DFT. > One important thing to remember is that the constant in front of > the NlogN is highly dependent on the algorithm. 
That is why even > though FFTW v3 uses NlogN algorithms for all N, some N (like > powers of 2 and those composed of only small prime factors) are > _much_ faster than those for other N. But the bottom line is that > no matter what the constant, for large N, O(NlogN) is _much_ faster > than O(N^2). Of course. But when you say the constant varies, do you mean by a factor of ten? a hundred? a thousand? On my machine, scipy.show_config reports that it can find FFTW2 but not FFTW3; does that mean it is actually *using* FFTW2? How does one tell? Does scipy provide any access to the special features of FFTW's interface? (wisdom, for efficiently computing many FFTs on the same array, for example) Anne M. Archibald From rowen at cesmail.net Wed Feb 7 14:27:35 2007 From: rowen at cesmail.net (Russell E. Owen) Date: Wed, 07 Feb 2007 11:27:35 -0800 Subject: [SciPy-user] Hints for easy install of scipy on RHEL 4? Message-ID: We installed scipy on RHEL 4 using the "built in" blas/lapack. Much of it works but some components that we need fail, apparently because RedHat's blas/lapack is incomplete. So...can anyone point me to an existing RPM of a complete blas/lapack (or a complete Atlas) that we can just use? Google showed lots of RPMs but they didn't appear to be for RedHat Enterprise Linux and I'm not unix-savvy enough to know what alternatives we can get away with. -- Russell From pebarrett at gmail.com Wed Feb 7 14:56:26 2007 From: pebarrett at gmail.com (Paul Barrett) Date: Wed, 7 Feb 2007 14:56:26 -0500 Subject: [SciPy-user] Hints for easy install of scipy on RHEL 4? In-Reply-To: References: Message-ID: <40e64fa20702071156s20437821m3a2336b3a3a2719c@mail.gmail.com> Russell, Is RHEL 4 necessary? I had the same problems building scipy, so I decided to save myself some grief. I installed FC 6 instead, while I wait for RHEL 5. If you have to have RHEL as opposed to FC, then I suggest you upgrade to RHEL 5beta2, which is based on FC 6/7. Just my $.02 worth. 
-- Paul On 2/7/07, Russell E. Owen wrote: > We installed scipy on RHEL 4 using the "built in" blas/lapack. Much of > it works but some components that we need fail, apparently because > RedHat's blas/lapack is incomplete. > > So...can anyone point me to an existing RPM of a complete blas/lapack > (or a complete Atlas) that we can just use? Google showed lots of RPMs > but they didn't appear to be for RedHat Enterprise Linux and I'm not > unix-savvy enough to know what alternatives we can get away with. > > -- Russell > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From a.h.jaffe at gmail.com Wed Feb 7 15:48:06 2007 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Wed, 07 Feb 2007 20:48:06 +0000 Subject: [SciPy-user] FFT speed ? In-Reply-To: <20070207173543.GA19870@ssh.cv.nrao.edu> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DB24.6080805@iam.uni-stuttgart.de> <45C9DC57.6030603@olfac.univ-lyon1.fr> <20070207173543.GA19870@ssh.cv.nrao.edu> Message-ID: Scott Ransom wrote: > On Wed, Feb 07, 2007 at 12:13:12PM -0500, Anne Archibald wrote: >> On 07/02/07, Samuel GARCIA wrote: >>> Yes I know this. >>> But 186888 186889 and 186890 are not power of 2 and the computation time is >>> very very different just for a difference of size of only one point. >>> What is the reason ? >>> And how to deal with that ? (I realy need to compute fft with a random >>> nomber of point) It's probably worth pointing out that 186889 is, in fact, prime, which is certainly the worst case for any algorithm. Andrew From david at ar.media.kyoto-u.ac.jp Wed Feb 7 19:12:14 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 08 Feb 2007 09:12:14 +0900 Subject: [SciPy-user] FFT speed ? 
In-Reply-To: References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DB24.6080805@iam.uni-stuttgart.de> <45C9DC57.6030603@olfac.univ-lyon1.fr> <20070207173543.GA19870@ssh.cv.nrao.edu> Message-ID: <45CA6ADE.3060400@ar.media.kyoto-u.ac.jp> Andrew Jaffe wrote: > Scott Ransom wrote: >> On Wed, Feb 07, 2007 at 12:13:12PM -0500, Anne Archibald wrote: >>> On 07/02/07, Samuel GARCIA wrote: >>>> Yes I know this. >>>> But 186888 186889 and 186890 are not power of 2 and the computation time is >>>> very very different just for a difference of size of only one point. >>>> What is the reason ? >>>> And how to deal with that ? (I realy need to compute fft with a random >>>> nomber of point) > > > It's probably worth pointing out that 186889 is, in fact, prime, which > is certainly the worst case for any algorithm. Indeed, this is an important point. There are several things to check, and there are some problems with FFT as implemented now in numpy/scipy when used with fftw (it is suboptimal by several factors). - First, is the numpy/scipy installed fft really using fftw ? - Also, if fftw is used, it is important to remember that the first time you are using a certain size, fftw has to compute a plan, which may take time. I tested on my machine (Pentium 4, 3.2 Ghz) fftw (in C) for the given size, each result being the best shot on 100 iterations (source attached): testing cached for size 186888 computing plan...done ! cycle is 1.357218e+10, 1.357218e+08 per execution on average, min is 1.323569e+08 testing cached for size 186889 computing plan...done ! cycle is 6.188211e+10, 6.188211e+08 per execution on average, min is 6.016043e+08 testing cached for size 186890 computing plan...done ! cycle is 1.623674e+10, 1.623674e+08 per execution on average, min is 1.595604e+08 testing cached for size 131072 computing plan...done ! cycle is 1.730136e+10, 1.730136e+08 per execution on average, min is 1.682170e+08 testing cached for size 262144 computing plan...done ! 
cycle is 3.018974e+10, 3.018974e+08 per execution on average, min is 2.978997e+08 cycle being the number of CPU cycles taken by the 100 iterations. The fact that 2 ** 17 is slower than 186888 is weird, though. May be due to some weird cache effects, I don't know. For reference, on my laptop (pentium M, thus much better memory performances and FPU performances overall): testing cached for size 186888 computing plan...done ! cycle is 1.152835e+10, 1.152835e+08 per execution on average, min is 1.070749e+08 testing cached for size 186889 computing plan...done ! cycle is 3.845418e+10, 3.845418e+08 per execution on average, min is 3.654266e+08 testing cached for size 186890 computing plan...done ! cycle is 1.389736e+10, 1.389736e+08 per execution on average, min is 1.314722e+08 testing cached for size 131072 computing plan...done ! cycle is 5.142997e+09, 5.142997e+07 per execution on average, min is 4.860954e+07 testing cached for size 262144 computing plan...done ! cycle is 1.347407e+10, 1.347407e+08 per execution on average, min is 1.287470e+08 Which is more logical... This once again shows how bad a Pentium 4 is for FPU performances on a per cycle point of view :) For speed issues, 0 padding is the easiest solution I can think of, David -------------- next part -------------- A non-text attachment was scrubbed... Name: test.c Type: text/x-csrc Size: 2684 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Wed Feb 7 20:18:58 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 08 Feb 2007 10:18:58 +0900 Subject: [SciPy-user] FFT speed ? 
In-Reply-To: <45CA6ADE.3060400@ar.media.kyoto-u.ac.jp> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DB24.6080805@iam.uni-stuttgart.de> <45C9DC57.6030603@olfac.univ-lyon1.fr> <20070207173543.GA19870@ssh.cv.nrao.edu> <45CA6ADE.3060400@ar.media.kyoto-u.ac.jp> Message-ID: <45CA7A82.40507@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Andrew Jaffe wrote: >> >> >> It's probably worth pointing out that 186889 is, in fact, prime, >> which is certainly the worst case for any algorithm. I went a bit further to see why it stuck on 186889. First, it only happens with numpy.fft.fft, not with scipy.fftpack. So fftw has nothing to do with it. It seems like the fft used in numpy is really slow for prime number (eg N^2, which becomes quite big when your number is 186889...). One thing which could be done at least is to enable SIGINT to be sent to the function to abort it (It takes around 15 minutes to complete on my workstsation). I guess the question is: is there any other implementation of fft usable for prime number in numpy ? David From sgarcia at olfac.univ-lyon1.fr Thu Feb 8 04:06:07 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Thu, 08 Feb 2007 10:06:07 +0100 Subject: [SciPy-user] FFT speed ? In-Reply-To: <20070207161236.GB19902@ssh.cv.nrao.edu> References: <45C9D8DD.9040101@olfac.univ-lyon1.fr> <45C9DBEF.8090205@hoc.net> <45C9DD34.1060807@olfac.univ-lyon1.fr> <45C9DE71.1070302@iam.uni-stuttgart.de> <45C9DFAB.2050303@olfac.univ-lyon1.fr> <20070207143921.GE6274@mentat.za.net> <20070207161236.GB19902@ssh.cv.nrao.edu> Message-ID: <45CAE7FF.30406@olfac.univ-lyon1.fr> Thanks all, for fast and efficient answers. Zeros padding was really my solution. 
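[Editorial note] Samuel's resolution — pad, transform, and truncate after the inverse — can be checked end-to-end. A sketch with present-day numpy (whose FFT backend, unlike the 2007-era FFTPACK discussed above, already handles prime lengths in O(N log N); the padding recipe itself is unchanged):

```python
import numpy as np

n = 186889                        # the awkward prime length from the thread
a = np.random.rand(n)

m = 1 << (n - 1).bit_length()     # next power of two: 2**18 = 262144
A = np.fft.fft(a, m)              # fft zero-pads a to length m before transforming

# The inverse has length m; the original signal is simply the first n samples.
recovered = np.fft.ifft(A)[:n].real
print(np.allclose(recovered, a))  # True
```

As Stefan's examples show, this transparency holds only when the padding happens at the forward transform; padding at the inverse-transform stage instead interpolates and does not return the original samples.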
Sam Scott Ransom wrote: > On Wed, Feb 07, 2007 at 04:39:21PM +0200, Stefan van der Walt wrote: > >> On Wed, Feb 07, 2007 at 03:18:19PM +0100, Samuel GARCIA wrote: >> >>> yes I was thinking of doing something like that but >>> fft(a,pow(2,18)).shape is of course 262144 (2**18) >>> and when I use ifft after that the length of my signal has changed; I have an >>> interpolated signal ... new problem for me ... >>> >> An interpolated signal? >> > > Padding a time series gives you an interpolated (actually, the > term often used is "oversampled") Fourier spectrum. > > Scott > > >> In [19]: N.real(N.fft.ifft(N.fft.fft(N.arange(11),16))) >> Out[19]: >> array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., >> -0., 0., 0., 0., 0.]) >> >> Padding in the Fourier-domain, on the other hand: >> >> In [20]: N.real(N.fft.ifft(N.fft.fft(N.arange(11)),16)) >> Out[20]: >> array([ 0. , 2.10035464, 3.04455839, 1.35780382, 3.23688237, >> 3.12291155, 2.35790278, 4.12133152, 3.4375 , 3.29213721, >> 5.00323313, 3.69035768, 4.32561763, 5.98165513, 3.3443057 , >> 6.58344845]) >> >> >> Cheers >> Stéfan >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Universite Claude Bernard LYON 1 CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE Tél : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From edschofield at gmail.com Thu Feb 8 05:38:41 2007 From: edschofield at gmail.com (Ed Schofield) Date: Thu, 8 Feb 2007 11:38:41 +0100 Subject: [SciPy-user] Windows binary 0.5.2 requires SSE2 In-Reply-To: References: <45C510E8.3060602@comcast.net> Message-ID: <1b5a37350702080238q6ab7b505r21940797f3bb006d@mail.gmail.com> On 2/7/07, Jonathan Griffitts wrote: > > In message <45C510E8.3060602 at comcast.net>, John Hassler > wrote > >Jonathan Griffitts wrote: > > Hi! > > I have installed SciPy 0.5.2 for Python 2.5 on several Windows > > computers, using the precompiled binary scipy-0.5.2.win32-py2.5.exe. > > This works fine on some, but on others it crashes Python with an > > Illegal > > Instruction exception. The crash is easy to find by running the > > test() > > suite or by attempting to make any use of scipy.integrate.quad. > > Digging into it, I see that the exception comes from _quadpack.pyd, > > and > > it dies at a MOVSD instruction. I believe MOVSD is an SSE2 > > instruction > > that is only implemented on the more recent CPUs from both Intel and > > AMD. > > > >It's a problem with the Athlon. It also happened with a previous > >version of SciPy this summer. > > This is not just an Athlon issue. It also fails on a Pentium 3 > processor. > > To reiterate, this binary (scipy-0.5.2.win32-py2.5) uses SSE2 > instructions, which are available only on the newer CPUs from both AMD > and Intel. Thanks for the info. We linked the 0.5.0 and (I think) 0.4.9 Win32 binaries to the ATLAS library labelled "ATLAS-P2" that doesn't require SSE2 instructions. Sorry that we've reverted to the SSE2 builds in the last release or two. Rebuilding takes time, and it may be more productive for us to wait for the next release, when we'll try to get it right. Meanwhile I've changed the release notes on the SourceForge page to reflect the SSE2 requirement. I'll also explain the situation on the SciPy.org Download page (as soon as I can log in ;) ...
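Given the crash reports above, it may help to check for SSE2 support before installing the binary. A best-effort sketch, not part of any SciPy release: feature code 10 is PF_XMMI64_INSTRUCTIONS_AVAILABLE, the documented Windows SSE2 check, and the /proc/cpuinfo fallback only works on Linux:

```python
import ctypes
import sys

def has_sse2():
    """Best-effort SSE2 detection; returns False when unsure."""
    if sys.platform == "win32":
        PF_XMMI64_INSTRUCTIONS_AVAILABLE = 10  # Windows SSE2 feature code
        return bool(ctypes.windll.kernel32.IsProcessorFeaturePresent(
            PF_XMMI64_INSTRUCTIONS_AVAILABLE))
    try:
        with open("/proc/cpuinfo") as f:  # Linux fallback
            return "sse2" in f.read()
    except IOError:
        return False  # unknown platform: report no SSE2 rather than guess

print(has_sse2())
```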
-- Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From pecontal at obs.univ-lyon1.fr Thu Feb 8 11:42:57 2007 From: pecontal at obs.univ-lyon1.fr (Emmanuel Pecontal) Date: Thu, 8 Feb 2007 17:42:57 +0100 Subject: [SciPy-user] scipy print using ipython Message-ID: <200702081742.57873.pecontal@obs.univ-lyon1.fr> Hello, I am using the fmin_l_bfgs_b function in scipy. This function has a parameter for controlling the iteration printing during the minimization. In fact the printing is done by the fortran routine via a command like: write (6,1002) iter,f,sbgnrm which is a print on the screen. For some reason I don't understand, when I use the routine in ipython, the print is done only in bunches of lines every few minutes... just as if it were buffered before printing. Does someone know the reason for this behaviour? Is it a python, ipython or scipy problem? Cheers Emmanuel -- Emmanuel Pécontal CRAL - Observatoire de Lyon 9, Av. Charles Andre F-69561 Saint Genis Laval Cedex tel (33) (0)4.78.86.83.76 - fax (33) (0)4.78.86.83.86 email : pecontal at obs.univ-lyon1.fr ~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.vergnes at yahoo.fr Thu Feb 8 12:21:05 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Thu, 8 Feb 2007 18:21:05 +0100 (CET) Subject: [SciPy-user] RE : Re: QME-Dev wxSciPY workbench 0.0.9.24 released - updated and corrected for Python 2.4 In-Reply-To: Message-ID: <20070208172105.4581.qmail@web27402.mail.ukl.yahoo.com> For PPC, I don't know; I've never tried and I don't have a PPC. I can try on Mac OS X x86 only... next week I will make a test. Vincent Nijs wrote: Has anyone been able to run workbench on a mac, os x 10.4 ppc, Python 2.4, wxPython 2.8? When I try to run it, it seems to start but then quits before anything appears on screen without any error messages. Vincent On 1/28/07 7:59 AM, "Robert VERGNES" wrote: Version Alpha 0.0.9.2.4 updated today.
New ZIP file for download. Correction made to the plotting/Graph issue with Python 2.4 + Example files added for review of workbench. https://sourceforge.net/project/showfiles.php?group_id=181979 Note: wxPython 2.8 is still necessary to run the workbench. --------------------------------- Découvrez une nouvelle façon d'obtenir des réponses à toutes vos questions ! Profitez des connaissances, des opinions et des expériences des internautes sur Yahoo! Questions/Réponses. --------------------------------- _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From rowen at cesmail.net Fri Feb 9 15:45:23 2007 From: rowen at cesmail.net (Russell E. Owen) Date: Fri, 09 Feb 2007 12:45:23 -0800 Subject: [SciPy-user] Hints for easy install of scipy on RHEL 4? References: <40e64fa20702071156s20437821m3a2336b3a3a2719c@mail.gmail.com> Message-ID: In article <40e64fa20702071156s20437821m3a2336b3a3a2719c at mail.gmail.com>, "Paul Barrett" wrote: (in response to my question about how to most easily obtain a scipy-friendly blas/lapack for RHEL 4, because the one that comes with RHEL 4 is not) > Is RHEL 4 necessary? I had the same problems building scipy, so I > decided to save myself some grief. I installed FC 6 instead, while I > wait for RHEL 5. If you have to have RHEL as opposed to FC, then I > suggest you upgrade to RHEL 5beta2, which is based on FC 6/7. That is very interesting.
So if we can hold off for RHEL 5 then the problem magically goes away? I'll see if we can do that. (This is for a large department's worth of linux boxes. I don't manage them, but do have permission to install software in a space for shared software. Thus I typically can't install RPMs and can only advise when it comes to OS versions.) Regards, -- Russell From Glen.Mabey at swri.org Fri Feb 9 15:57:14 2007 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Fri, 9 Feb 2007 14:57:14 -0600 Subject: [SciPy-user] parallel usage of fftw Message-ID: <20070209205714.GK16082@swri16wm.electro.swri.edu> Hello, I've read at http://www.fftw.org/fftw3_doc/Usage-of-Multi_002dthreaded-FFTW.html#Usage-of-Multi_002dthreaded-FFTW that getting fftw to use multiple threads to speed up performance on SMP machines is quite easy to initiate. Has anyone tried this? I haven't ever used fftw without scipy, but if there were some way to easily implement a python interface such that fftw_init_threads(), fftw_plan_with_nthreads(), and fftw_cleanup_threads() could be called, it would be really cool. I'm having a bit of a time trying to figure out how fftpack is married to fftw ... maybe I should just try calling those functions with ctypes? Any input would be greatly appreciated. Thanks, Glen From krlong at sandia.gov Fri Feb 9 16:22:01 2007 From: krlong at sandia.gov (Kevin Long) Date: Fri, 9 Feb 2007 15:22:01 -0600 Subject: [SciPy-user] problem with _num.seterr when importing scipy Message-ID: <200702091522.01125.krlong@sandia.gov> Hello, I'm getting an error message about "_num.seterr" when importing scipy. Output is below. python Python 2.4.4 (#1, Feb 9 2007, 14:45:36) [GCC 3.4.6] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line 37, in ?
_num.seterr(all='ignore') TypeError: seterr() got an unexpected keyword argument 'all' >>> I've googled the error message, but found no other reports of it. To get past this problem, I commented out line 37 in __init__.py and pressed bravely (or foolishly) onwards. Things seem to work OK after that, but I'm nervous about commenting something out of your code. This is on SuSE 10.1, but I'm not using the versions of python, gcc, blas, or lapack that were bundled with the system. The python version is 2.4.4. Lapack is the distribution from netlib. The BLAS is the version shipped with netlib's lapack, with srotmg.f, srotm.f, drotm.f, and drotmg.f added (from netlib's blas.tar.gz) to provide the complete BLAS needed by scipy. Everything was built with gcc 3.4.6. Any ideas? Thank you, Kevin Long -- ----------------------------------------------------------------------------- Dr. Kevin Long Computational Science and Mathematics Research Department Sandia National Laboratories MS 9217 krlong at sandia.gov Livermore, CA 94551 (925)-294-4910 ----------------------------------------------------------------------------- From robert.kern at gmail.com Fri Feb 9 16:24:14 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 09 Feb 2007 13:24:14 -0800 Subject: [SciPy-user] problem with _num.seterr when importing scipy In-Reply-To: <200702091522.01125.krlong@sandia.gov> References: <200702091522.01125.krlong@sandia.gov> Message-ID: <45CCE67E.1090508@gmail.com> Kevin Long wrote: > Hello, > > I'm getting an error message about "_num.seterr" when importing scipy. > Output is below. > > python > Python 2.4.4 (#1, Feb 9 2007, 14:45:36) > [GCC 3.4.6] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> import scipy > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line 37, > in ?
> _num.seterr(all='ignore') > TypeError: seterr() got an unexpected keyword argument 'all' >>>> What versions of numpy and scipy do you have installed? E.g.: >>> import numpy >>> print numpy.__version__ 1.0.2.dev3521 >>> import scipy >>> print scipy.__version__ 0.5.3.dev2620 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From krlong at sandia.gov Fri Feb 9 16:31:11 2007 From: krlong at sandia.gov (Kevin Long) Date: Fri, 9 Feb 2007 15:31:11 -0600 Subject: [SciPy-user] problem with _num.seterr when importing scipy In-Reply-To: <45CCE67E.1090508@gmail.com> References: <200702091522.01125.krlong@sandia.gov> <45CCE67E.1090508@gmail.com> Message-ID: <200702091531.11770.krlong@sandia.gov> Hi Robert, Numpy 1.0b5 and scipy 0.5.2. - kevin On Friday 09 February 2007 15:24, Robert Kern wrote: > Kevin Long wrote: > > Hello, > > > > I'm getting an error message about "_num.seterr" when importing scipy. > > Output is below. > > > > python > > Python 2.4.4 (#1, Feb 9 2007, 14:45:36) > > [GCC 3.4.6] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > > >>>> import scipy > > > > Traceback (most recent call last): > > File "", line 1, in ? > > File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line > > 37, in ? > > _num.seterr(all='ignore') > > TypeError: seterr() got an unexpected keyword argument 'all' > > What versions of numpy and scipy do you have installed? 
E.g.: > >>> import numpy > >>> print numpy.__version__ > > 1.0.2.dev3521 > > >>> import scipy > >>> print scipy.__version__ > > 0.5.3.dev2620 From anand at soe.ucsc.edu Fri Feb 9 17:12:34 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Fri, 09 Feb 2007 14:12:34 -0800 Subject: [SciPy-user] Vectorization question Message-ID: <45CCF1D2.1050404@cse.ucsc.edu> Hi all, I want to make array A from array B like so: A[t, j, k] = \sum_i B[t, j, i] B[t, i, k] That is, for each t A[t,] = dot(B[t,], B[t,]) There's no loopless way to do this in numpy, right? Thanks much, Anand From oliphant at ee.byu.edu Fri Feb 9 17:20:22 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 09 Feb 2007 15:20:22 -0700 Subject: [SciPy-user] Vectorization question In-Reply-To: <45CCF1D2.1050404@cse.ucsc.edu> References: <45CCF1D2.1050404@cse.ucsc.edu> Message-ID: <45CCF3A6.3010402@ee.byu.edu> Anand Patil wrote: >Hi all, > >I want to make array A from array B like so: > >A[t, j, k] = \sum_i B[t, j, i] B[t, i, k] > >That is, for each t > >A[t,] = dot(B[t,], B[t,]) > >There's no loopless way to do this in numpy, right? > > You should be able to do this just using A = dot(B,B) Because the dot function returns the sum of products over the last dimension of the first argument and the second-to-last dimension of the second argument. -Travis From jbattat at cfa.harvard.edu Fri Feb 9 17:44:56 2007 From: jbattat at cfa.harvard.edu (James Battat) Date: Fri, 9 Feb 2007 17:44:56 -0500 (EST) Subject: [SciPy-user] new behaviour of c_[] in scipy 0.5.2 Message-ID: Hi, In the past: (scipy 0.4.8, numpy 0.9.6) >>> print scipy.c_[1,2,3] [1,2,3] but now: (scipy 0.5.2, numpy 1.0.1) >>> print scipy.c_[1,2,3] [[1,2,3]] A nested array! This breaks my old code because: >>> array = scipy.c_[1,2,3] >>> print array[1] IndexError: index is out of bounds Is the current behaviour expected? Thanks for your help, James ************************** Harvard University Dept. 
of Astronomy 60 Garden Street MS-10 Cambridge, MA 02138 phone 617.496.0742 lab 617.495.3267 email jbattat at cfa.harvard.edu ************************** From brad.malone at gmail.com Fri Feb 9 17:57:29 2007 From: brad.malone at gmail.com (Brad Malone) Date: Fri, 9 Feb 2007 14:57:29 -0800 Subject: [SciPy-user] Error on trying to install SciPy Message-ID: Hi everyone, sorry to be a bother but I'm getting really frustrated with trying to install SciPy. When I build or install this is what it tells me before it exits. Anyone have a clue why this would be? I REALLY appreciate any help. IPO link: can not find "(" ifort: error: problem during multi-file optimization compilation (code 1) error: Command "/auto/opt/intel/fc/9.0/bin/ifort -shared -nofor_main -tpp6 -xM -arch SSE build/temp.linux-i686-2.4/build/src.linux-i686-2.4 /Lib/fftpack/_fftpackmodule.o build/temp.linux-i686-2.4/Lib/fftpack/src/zfft.o build/temp.linux-i686-2.4/Lib/fftpack/src/drfft.o build/temp.linux-i686-2.4/Lib/fftpack/src/zrfft.o build/temp.linux-i686-2.4/Lib/fftpack/src/zfftnd.o build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o -L/opt/lib -Lbuild/temp.linux-i686-2.4 -ldfftpack -lrfftw -lfftw -o build/lib.linux-i686-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1 Best, Brad -------------- next part -------------- An HTML attachment was scrubbed... URL: From anand at soe.ucsc.edu Fri Feb 9 21:00:03 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Fri, 09 Feb 2007 18:00:03 -0800 Subject: [SciPy-user] Vectorization question Message-ID: <45CD2723.5040201@cse.ucsc.edu> > Anand Patil wrote:
>> Hi all,
>>
>> I want to make array A from array B like so:
>>
>> A[t, j, k] = \sum_i B[t, j, i] B[t, i, k]
>>
>> That is, for each t
>>
>> A[t,] = dot(B[t,], B[t,])
>>
>> There's no loopless way to do this in numpy, right?
> You should be able to do this just using
>
> A = dot(B,B)
>
> Because the dot function returns the sum of products over the last
> dimension of the first argument and the second-to-last dimension of the
> second argument.
>
> -Travis

The output array I'm looking for is rank-3, but as I understand them dot and tensordot can only ever return even-rank arrays: In [1]: from numpy import zeros, dot In [2]: B=zeros((4,3,3)) In [3]: dot(B,B).shape Out[3]: (4, 3, 4, 3) In [4]: Thanks, Anand From pebarrett at gmail.com Sat Feb 10 07:42:03 2007 From: pebarrett at gmail.com (Paul Barrett) Date: Sat, 10 Feb 2007 07:42:03 -0500 Subject: [SciPy-user] Hints for easy install of scipy on RHEL 4? In-Reply-To: References: <40e64fa20702071156s20437821m3a2336b3a3a2719c@mail.gmail.com> Message-ID: <40e64fa20702100442p747cc1c0k355a145d383d5b8e@mail.gmail.com> On 2/9/07, Russell E. Owen wrote: > In article > <40e64fa20702071156s20437821m3a2336b3a3a2719c at mail.gmail.com>, > "Paul Barrett" wrote: > > (in response to my question about how to most easily obtain a > scipy-friendly blas/lapack for RHEL 4, because the one that comes with > RHEL 4 is not) > > > Is RHEL 4 necessary? I had the same problems building scipy, so I > > decided to save myself some grief. I installed FC 6 instead, while I > > wait for RHEL 5. If you have to have RHEL as opposed to FC, then I > > suggest you upgrade to RHEL 5beta2, which is based on FC 6/7. > > That is very interesting. So if we can hold off for RHEL 5 then the > problem magically goes away? I'll see if we can do that. > > (This is for a large department's worth of linux boxes. I don't manage > them, but do have permission to install software in a space for shared > software. Thus I typically can't install RPMs and can only advise when > it comes to OS versions.) Yes, we are in the same situation. We are beginning to investigate RHEL 5b2 now, which is available for download.
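Returning to the vectorization thread: the rank-3 batched product Anand describes (A[t] = dot(B[t], B[t])) can be written without a Python loop using broadcasting. This is a sketch, not code from the thread, and it materializes a temporary of shape (T, J, I, K), so it trades memory for speed:

```python
import numpy as np

T, n = 4, 3
rng = np.random.RandomState(0)
B = rng.rand(T, n, n)

# A[t, j, k] = sum_i B[t, j, i] * B[t, i, k], with no Python loop:
# B[:, :, :, None] has shape (T, J, I, 1); B[:, None, :, :] has (T, 1, I, K).
A = (B[:, :, :, None] * B[:, None, :, :]).sum(axis=2)

# Check against the explicit per-t dot products.
ref = np.array([np.dot(B[t], B[t]) for t in range(T)])
print(np.allclose(A, ref))  # True
```

In NumPy versions newer than those in this thread, the same contraction can be written directly as np.einsum('tij,tjk->tik', B, B), or simply B @ B, both of which avoid the large temporary.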
RHEL 5 final was supposed to be available at the beginning of the year, but they are apparently putting some final touches on the smartcard and virtualization software, which many enterprises need. Given how stable FC 6 has been, I'm sure RHEL 5b2 is fine if you do not need these features. -- Paul From gnata at obs.univ-lyon1.fr Sat Feb 10 11:51:10 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Sat, 10 Feb 2007 17:51:10 +0100 Subject: [SciPy-user] scipy print using ipython In-Reply-To: <200702081742.57873.pecontal@obs.univ-lyon1.fr> References: <200702081742.57873.pecontal@obs.univ-lyon1.fr> Message-ID: <45CDF7FE.2090801@obs.univ-lyon1.fr> Hello, On my box I cannot reproduce this behavior with the following code: x0 = rand(100000) xopt = fmin_l_bfgs_b(rosen, x0, fprime=rosen_der,iprint=1) Cheers, Xavier ps : iprint = "any positive value" also works well on my box. > Hello, > > I am using the fmin_l_bfgs_b function in scipy. This function has a > parameter > > for controlling the iteration printing during the minimization. In fact > the printing > > is done by the fortran routine via a command like: > > write (6,1002) iter,f,sbgnrm > > which is a print on the screen. > > For some reason I don't understand, when I use the routine in ipython, > the print > > is done only in bunches of lines every few minutes... just as if it > were buffered > > before printing. Does someone know the reason for this behaviour? Is > it a python, > > ipython or scipy problem? > > Cheers > > Emmanuel > > -- > > Emmanuel Pécontal > > CRAL - Observatoire de Lyon > > 9, Av.
Charles Andre > > F-69561 Saint Genis Laval Cedex > > tel (33) (0)4.78.86.83.76 - fax (33) (0)4.78.86.83.86 > > email : pecontal at obs.univ-lyon1.fr > > ~ > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles André 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From strang at nmr.mgh.harvard.edu Sat Feb 10 19:47:00 2007 From: strang at nmr.mgh.harvard.edu (Gary Strangman) Date: Sat, 10 Feb 2007 19:47:00 -0500 (EST) Subject: [SciPy-user] Hilbert-Huang? Message-ID: Hi all, Does anyone know of (or have) a python implementation of the Hilbert-Huang transform out there somewhere? Numpy, Numeric, numarray, scipy, doesn't matter to me. 'fraid my searches have come up empty ... -best Gary From robert.kern at gmail.com Sat Feb 10 21:26:48 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Feb 2007 20:26:48 -0600 Subject: [SciPy-user] Hilbert-Huang? In-Reply-To: References: Message-ID: <45CE7EE8.708@gmail.com> Gary Strangman wrote: > Hi all, > > Does anyone know of (or have) a python implementation of the Hilbert-Huang > transform out there somewhere? Numpy, Numeric, numarray, scipy, doesn't > matter to me. 'fraid my searches have come up empty ... As it appears to be patented, you probably won't find one. http://techtransfer.gsfc.nasa.gov/HHT/#patents -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From zunzun at zunzun.com Sun Feb 11 07:15:51 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 11 Feb 2007 07:15:51 -0500 Subject: [SciPy-user] Hilbert-Huang? In-Reply-To: <45CE7EE8.708@gmail.com> References: <45CE7EE8.708@gmail.com> Message-ID: <20070211121551.GA10312@zunzun.com> From topengineer at gmail.com Sun Feb 11 10:01:51 2007 From: topengineer at gmail.com (Hui Chang Moon) Date: Mon, 12 Feb 2007 00:01:51 +0900 Subject: [SciPy-user] Doesn't Scipy have the Fermi-Dirac distribution? Message-ID: <296323b50702110701k110cd1f9k10fa49576cda83b1@mail.gmail.com> I am a graduate student. I'm currently writing a semiconductor simulation code with numpy and scipy. In my field, the Fermi-Dirac distribution function is frequently used. I would like to know whether scipy has the Fermi-Dirac distribution function or not. Have a nice day~! ^o^ -------------- next part -------------- An HTML attachment was scrubbed... URL: From strang at nmr.mgh.harvard.edu Sun Feb 11 10:16:03 2007 From: strang at nmr.mgh.harvard.edu (Gary Strangman) Date: Sun, 11 Feb 2007 10:16:03 -0500 (EST) Subject: [SciPy-user] Hilbert-Huang? In-Reply-To: <45CE7EE8.708@gmail.com> References: <45CE7EE8.708@gmail.com> Message-ID: Yeah, I figured HHT was unavailable, but thought I'd ask anyway. Anyone know of a python version of (the related and I believe not patented) empirical mode decomposition? Gary On Sat, 10 Feb 2007, Robert Kern wrote: > Gary Strangman wrote: >> Hi all, >> >> Does anyone know of (or have) a python implementation of the Hilbert-Huang >> transform out there somewhere? Numpy, Numeric, numarray, scipy, doesn't >> matter to me. 'fraid my searches have come up empty ... > > As it appears to be patented, you probably won't find one.
> > http://techtransfer.gsfc.nasa.gov/HHT/#patents > > From lev at columbia.edu Sun Feb 11 15:13:08 2007 From: lev at columbia.edu (Lev Givon) Date: Sun, 11 Feb 2007 15:13:08 -0500 Subject: [SciPy-user] Doesn't Scipy have the Fermi-Dirac distribution? In-Reply-To: <296323b50702110701k110cd1f9k10fa49576cda83b1@mail.gmail.com> References: <296323b50702110701k110cd1f9k10fa49576cda83b1@mail.gmail.com> Message-ID: <20070211201308.GH2273@avicenna.cc.columbia.edu> Received from Hui Chang Moon on Sun, Feb 11, 2007 at 10:01:51AM EST: > I am a graduate school student. > Nowadays I'm writing a semiconductor simulation code with numpy and scipy. > In my field, the Fermi-Dirac distribution function is frequently used. > I hope to know if the scipy has the Fermi-Dirac distribution function or > not. > > Have a nice day~! ^o^ As of version 0.5.2, I don't believe so. It is worth noting that one can create a random variable with the Fermi-Dirac distribution as its PDF by extending the scipy.stats.rv_continuous class as described in $PYTHONHOME/site-packages/scipy/stats/distribution.py L.G. From sinclaird at ukzn.ac.za Mon Feb 12 01:02:17 2007 From: sinclaird at ukzn.ac.za (Scott Sinclair) Date: Mon, 12 Feb 2007 08:02:17 +0200 Subject: [SciPy-user] Hilbert-Huang? In-Reply-To: References: <45CE7EE8.708@gmail.com> Message-ID: <45D01EC8.F934.009F.0@ukzn.ac.za> >>> Gary Strangman 2/11/2007 17:16 >>> Anyone know of a python version of (the related and I believe not patented) empirical mode decomposition? >>> No Python implementation (to my knowledge) but you can find a MATLAB implementation here http://perso.ens-lyon.fr/patrick.flandrin/emd.html It shouldn't be too hard to translate, the algorithm is pretty simple. Cheers, Scott Please find our Email Disclaimer here: http://www.ukzn.ac.za/disclaimer/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant at ee.byu.edu Mon Feb 12 12:09:43 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 12 Feb 2007 10:09:43 -0700 Subject: [SciPy-user] Vectorization question In-Reply-To: <45CD2723.5040201@cse.ucsc.edu> References: <45CD2723.5040201@cse.ucsc.edu> Message-ID: <45D09F57.5080104@ee.byu.edu> Anand Patil wrote:
>> Anand Patil wrote:
>>> Hi all,
>>>
>>> I want to make array A from array B like so:
>>>
>>> A[t, j, k] = \sum_i B[t, j, i] B[t, i, k]
>>>
>>> That is, for each t
>>>
>>> A[t,] = dot(B[t,], B[t,])
>>>
>>> There's no loopless way to do this in numpy, right?
>>
>> You should be able to do this just using
>>
>> A = dot(B,B)
>>
>> Because the dot function returns the sum of products over the last
>> dimension of the first argument and the second-to-last dimension of the
>> second argument.
>>
>> -Travis
>
> The output array I'm looking for is rank-3, but as I understand them dot and tensordot can only ever return even-rank arrays:

I see the difference now. Yes, you are right, you are wanting to do a sum of products without an outer-product. You would have to extract the "diagonal" of the result. -Travis From zunzun at zunzun.com Tue Feb 13 03:09:16 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Tue, 13 Feb 2007 03:09:16 -0500 Subject: [SciPy-user] Fit statistics for sum of squared relative error Message-ID: <20070213080915.GA5684@zunzun.com> I've been doing fits to lowest sum of squared relative error for a while now. These are useful when a data set exhibits increasing heteroscedasticity as the values of the independent variable increase, i.e., data scatter proportional to distance along the x axis. You can test this at http://zunzun.com by selecting a fitting target of "Lowest sum of squared relative errors" when fitting, and this is also in the Python Equations package at http://sf.net/projects/pythonequations.
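A concrete sketch of the SSQREL idea described above: with scipy.optimize.leastsq, the only change from ordinary least squares is dividing each residual by the observed value, since leastsq minimizes the sum of squares of whatever the residual function returns. The straight-line data here are hypothetical, generated only for illustration:

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical heteroscedastic data: scatter grows in proportion to y.
rng = np.random.RandomState(0)
x = np.linspace(1.0, 10.0, 50)
y = (2.0 * x + 1.0) * (1.0 + 0.05 * rng.randn(50))

def rel_residuals(p, x, y):
    model = p[0] * x + p[1]
    return (y - model) / y  # relative error instead of absolute error

p, ier = leastsq(rel_residuals, [1.0, 0.0], args=(x, y))
print(p)  # close to the true parameters [2.0, 1.0]
```

Using absolute residuals instead would let the large-y points dominate the fit, which is exactly what the relative-error target is meant to avoid.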
Having investigated fit statistics for some time now, it seems *everything* is geared toward absolute error, with not the slightest drop that I can find regarding relative error. These statistics are needed when performing SSQREL rather than SSQABS fitting. Can I safely use existing routines for covariance matrices and parameter standard errors simply by substituting dy(relative)/dx and relative error whenever dy(absolute)/dx and absolute error are used? I apologize, but this is over my head and I would like to report the fit statistics properly. James Phillips http://zunzun.com P.S. I don't yet have much in the way of fit statistics on the web site; this is what I'm currently working on. I found much of what I need in the BSD-style licensed MPFIT.py at http://cars9.uchicago.edu/software/python/mpfit.html written by Mark Rivers. From nwagner at iam.uni-stuttgart.de Tue Feb 13 05:14:14 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 13 Feb 2007 11:14:14 +0100 Subject: [SciPy-user] Mittag-Leffler function Message-ID: <45D18F76.7080108@iam.uni-stuttgart.de> Hi, has someone implemented the Mittag-Leffler function with the aid of scipy ? A Matlab code is available here http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=8738&objectType=FILE Nils References: http://mathworld.wolfram.com/Mittag-LefflerFunction.html http://en.wikipedia.org/wiki/Mittag-Leffler_function From ryanlists at gmail.com Tue Feb 13 14:57:46 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 13 Feb 2007 13:57:46 -0600 Subject: [SciPy-user] problem with signal.residue Message-ID: I think I have found a small bug in signal.residue and may have found a simple solution. The problem seems to come from polydiv requiring that the numerator polynomial be of degree at most 1 less than the denominator.
If I have a denominator of s^2+3*s+2, the numerator must have an s coefficient (even if that coefficient is 0) for signal.residue to work:

In [75]: a
Out[75]: array([1, 3, 2])

In [76]: signal.residue([1],a)
---------------------------------------------------------------------------
exceptions.ValueError                          Traceback (most recent call last)

C:\Python24\

C:\Python24\Lib\site-packages\scipy\signal\signaltools.py in residue(b, a, tol, rtype)
   1054
   1055     b,a = map(asarray,(b,a))
-> 1056     k,b = polydiv(b,a)
   1057     p = roots(a)
   1058     r = p*0.0

C:\Python24\Lib\site-packages\numpy\lib\polynomial.py in polydiv(u, v)
    399     n = len(v) - 1
    400     scale = 1. / v[0]
--> 401     q = NX.zeros((m-n+1,), float)
    402     r = u.copy()
    403     for k in range(0, m-n+1):

ValueError: negative dimensions are not allowed

In [77]: signal.residue([0,1],a)
Out[77]:
(array([ 1.+0.j, -1.+0.j]),
 array([-1.+0.j, -2.+0.j]),
 array([], dtype=float64))

I think the simple solution is to replace line 1056 with these four lines:

if len(b) < len(a):
    k = []
else:
    k,b = polydiv(b,a)

where the last line above is the old line 1056. Basically, specify that there is no k term if the len of b is less than the len of a. Is this too simple? What do I do to actually submit this if it is the right solution? References: Message-ID: <20070214010607.GA5728@localhost> On Tue, Feb 13, 2007 at 01:57:46PM -0600, Ryan Krauss wrote: > I think I have found a small bug in signal.residue and may have found > a simple solution. The problem seems to come from polydiv requiring > that the numerator polynomial be of degree at most 1 less than the > denominator.
If I have a denominator of s^2+3*s+2, the numerator must > have an s coefficient (even if that coefficient is 0) for > signal.residue to work: > > In [75]: a > Out[75]: array([1, 3, 2]) > > In [76]: signal.residue([1],a) > --------------------------------------------------------------------------- > exceptions.ValueError Traceback (most recent call last) [snip] > In [77]: signal.residue([0,1],a) > Out[77]: > (array([ 1.+0.j, -1.+0.j]), > array([-1.+0.j, -2.+0.j]), > array([], dtype=float64)) > > > I think the simple solution is to replace line 1056 with these four lines: > if len(b) < len(a): > k=[] > else: > k,b = polydiv(b,a) > > where the last line above is the old line 1056. Basically, specify > that there is no k term if the len of b is less than the len of a. > > Is this too simple? What do I do to actually submit this if it is the > right solution? I think you are right. This seems to be a bug. Please register and open a ticket at http://projects.scipy.org/scipy/scipy and state the problem and the specified solution. Thanks. Kumar -- Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 From nwagner at iam.uni-stuttgart.de Wed Feb 14 03:02:34 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 09:02:34 +0100 Subject: [SciPy-user] Array manipulation Message-ID: <45D2C21A.3040809@iam.uni-stuttgart.de> Hi, I would like to remove the i-th column and row from a two-dimensional array A. The remaining array should be kept and stored in B A = random.rand(n,n) This task is very easy if i is the first or last row/column. In that case one can use B = A[1:,1:] or B=A[:-1,:-1] But, what is the best way to get B if 0 < i < n-1 ?
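One way to answer Nils's question in a single expression is to apply numpy.delete once per axis, removing the i-th row and then the i-th column (a small sketch):

```python
import numpy as np

n, i = 4, 2
A = np.arange(n * n).reshape(n, n)

# Remove row i (axis 0), then column i (axis 1).
B = np.delete(np.delete(A, i, axis=0), i, axis=1)
print(B)
# [[ 0  1  3]
#  [ 4  5  7]
#  [12 13 15]]
```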
Nils From joris at ster.kuleuven.ac.be Wed Feb 14 03:34:40 2007 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Wed, 14 Feb 2007 09:34:40 +0100 Subject: [SciPy-user] Array manipulation Message-ID: <1171442080.45d2c9a00bd8a@webmail.ster.kuleuven.be> Hi Nils, It's not a one-liner, but it may serve you: In [40]: n = 4 In [41]: i = 2 In [42]: In [42]: A = arange(n*n).reshape(n,n) In [43]: A Out[43]: array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) In [44]: I = range(n) In [45]: I.remove(i) In [46]: B = A[ix_(I,I)] In [47]: B Out[47]: array([[ 0, 1, 3], [ 4, 5, 7], [12, 13, 15]]) Cheers, Joris On Wednesday 14 February 2007 09:02, Nils Wagner wrote: >Hi, > >I would like to remove the i-th column and row from a two-dimensional >array A. The remaining array >should be kept and stored in B > >A = random.rand(n,n) > >This task is very easy if i is the first or last row/column. In that >case one can use > >B = A[1:,1:] > >or > >B=A[:-1,:-1] > >But, what is the best way to get B if 0 < i < n-1 ? > >Nils Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From stefan at sun.ac.za Wed Feb 14 04:03:53 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 14 Feb 2007 11:03:53 +0200 Subject: [SciPy-user] Array manipulation In-Reply-To: <45D2C21A.3040809@iam.uni-stuttgart.de> References: <45D2C21A.3040809@iam.uni-stuttgart.de> Message-ID: <20070214090353.GV6150@mentat.za.net> On Wed, Feb 14, 2007 at 09:02:34AM +0100, Nils Wagner wrote: > Hi, > > I would like to remove the i-th column and row from a two-dimensional > array A. The remaining array > should be kept and stored in B > > A = random.rand(n,n) > > This task is very easy if i is the first or last row/column. In that > case one can use > > B = A[1:,1:] > > or > > B=A[:-1,:-1] > > But, what is the best way to get B if 0 < i < n-1 ? 
numpy.delete(x, i, 1) Cheers St?fan From nwagner at iam.uni-stuttgart.de Wed Feb 14 05:13:01 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 11:13:01 +0100 Subject: [SciPy-user] Array manipulation In-Reply-To: <20070214090353.GV6150@mentat.za.net> References: <45D2C21A.3040809@iam.uni-stuttgart.de> <20070214090353.GV6150@mentat.za.net> Message-ID: <45D2E0AD.7050409@iam.uni-stuttgart.de> Stefan van der Walt wrote: > On Wed, Feb 14, 2007 at 09:02:34AM +0100, Nils Wagner wrote: > >> Hi, >> >> I would like to remove the i-th column and row from a two-dimensional >> array A. The remaining array >> should be kept and stored in B >> >> A = random.rand(n,n) >> >> This task is very easy if i is the first or last row/column. In that >> case one can use >> >> B = A[1:,1:] >> >> or >> >> B=A[:-1,:-1] >> >> But, what is the best way to get B if 0 < i < n-1 ? >> > > numpy.delete(x, i, 1) > > Cheers > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi Joris and Stefan, Thank you for your replies. It's nice to see that there are different ways to realize this task. BTW, "insert" is missing in http://www.scipy.org/Numpy_Example_List. I forgot my password to access the wiki page. Joris, please can you add an example for insert. I saw that you have just edited that page. Thanks in advance. Cheers Nils From chiaracaronna at hotmail.com Wed Feb 14 05:22:15 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Wed, 14 Feb 2007 10:22:15 +0000 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45C84EA5.8080307@gmail.com> Message-ID: Hi, I am also interested in having errors from the fit, and I tried to import the module scipy.odr as you said, but I got this errors: File "/usr/local/lib/python2.4/site-packages/scipy/odr/__init__.py", line 49, in ? 
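[Editor's note: numpy.delete removes entries along one axis at a time, so dropping both the i-th row and the i-th column takes two calls. A quick sketch (the helper name is the editor's, not a numpy API):]

```python
import numpy as np

def drop_row_col(A, i):
    """Remove the i-th row and the i-th column from a 2-D array."""
    B = np.delete(A, i, axis=0)    # drop row i
    return np.delete(B, i, axis=1)  # then drop column i

A = np.arange(16).reshape(4, 4)
B = drop_row_col(A, 2)   # same result as Joris's A[ix_(I, I)] with I = [0, 1, 3]
```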
import odrpack File "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py", line 103, in ? from scipy.sandbox.odr import __odrpack ImportError: No module named odr And the same is if I try to import sandbox.odr import scipy.sandbox.odr Traceback (most recent call last): File "", line 1, in ? ImportError: No module named odr Where am I wrong? Thank you, Chiara >From: Robert Kern >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates >Date: Tue, 06 Feb 2007 03:47:17 -0600 > >Nils Wagner wrote: > > > AFAIK odr is directly available through scipy.odr. > > So I guess the odr directory in the sandbox is obsolete. Is that correct >? > >There is no more odr/ directory in the sandbox since it got moved into the >main >package. > >-- >Robert Kern > >"I have come to believe that the whole world is an enigma, a harmless >enigma > that is made terrible by our own mad attempt to interpret it as though it >had > an underlying truth." > -- Umberto Eco >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From nwagner at iam.uni-stuttgart.de Wed Feb 14 05:24:56 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 11:24:56 +0100 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: References: Message-ID: <45D2E378.5070401@iam.uni-stuttgart.de> Chiara Caronna wrote: > Hi, > I am also interested in having errors from the fit, and I tried to import > the module scipy.odr as you said, but I got this errors: > > File "/usr/local/lib/python2.4/site-packages/scipy/odr/__init__.py", line > 49, in ? 
> import odrpack > File "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py", line > 103, in ? > from scipy.sandbox.odr import __odrpack > ImportError: No module named odr > > > And the same is if I try to import sandbox.odr > > import scipy.sandbox.odr > > Traceback (most recent call last): > File "", line 1, in ? > ImportError: No module named odr > > Where am I wrong? > Thank you, > Chiara > > > > > > >> From: Robert Kern >> Reply-To: SciPy Users List >> To: SciPy Users List >> Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates >> Date: Tue, 06 Feb 2007 03:47:17 -0600 >> >> Nils Wagner wrote: >> >> >>> AFAIK odr is directly available through scipy.odr. >>> So I guess the odr directory in the sandbox is obsolete. Is that correct >>> >> ? >> >> There is no more odr/ directory in the sandbox since it got moved into the >> main >> package. >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma >> that is made terrible by our own mad attempt to interpret it as though it >> had >> an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _________________________________________________________________ > Express yourself instantly with MSN Messenger! Download today it's FREE! > http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > odr is in the main tree. Python 2.4 (#1, Oct 13 2006, 16:43:49) [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.__version__ '0.5.3.dev2704' >>> import scipy.odr works fine for me. 
Nils From joris at ster.kuleuven.ac.be Wed Feb 14 05:36:55 2007 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Wed, 14 Feb 2007 11:36:55 +0100 Subject: [SciPy-user] Array manipulation Message-ID: <1171449415.45d2e64700ec8@webmail.ster.kuleuven.be> On Wednesday 14 February 2007 11:13, Nils Wagner wrote: >BTW, "insert" is missing in http://www.scipy.org/Numpy_Example_List. >I forgot my password to access the wiki page. Joris, please can you >add an example for insert. I saw that you have just edited that page. Done. Ciao, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From nwagner at iam.uni-stuttgart.de Wed Feb 14 05:48:01 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 11:48:01 +0100 Subject: [SciPy-user] Experiences with LAPACK 3.1 and scipy Message-ID: <45D2E8E1.4000305@iam.uni-stuttgart.de> Hi all, has anyone tested LAPACK 3.1 in connection with scipy? I am curious about it. Version 3.1 has many improvements: http://www.netlib.org/lapack/lapack-3.1.0.changes I am mainly interested in the new MRRR algorithm and the Hessenberg QR algorithm with the small-bulge multi-shift QR algorithm together with aggressive early deflation. Are there wrappers for these routines? Nils From chiaracaronna at hotmail.com Wed Feb 14 07:30:24 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Wed, 14 Feb 2007 12:30:24 +0000 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45D2E378.5070401@iam.uni-stuttgart.de> Message-ID: ok I have scipy 0.5.2... I guess this is the problem... How can I get the 0.5.3 version? It seems that 0.5.2 is the last version available on scipy.org...
>From: Nils Wagner >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates >Date: Wed, 14 Feb 2007 11:24:56 +0100 > >Chiara Caronna wrote: > > Hi, > > I am also interested in having errors from the fit, and I tried to >import > > the module scipy.odr as you said, but I got this errors: > > > > File "/usr/local/lib/python2.4/site-packages/scipy/odr/__init__.py", >line > > 49, in ? > > import odrpack > > File "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py", >line > > 103, in ? > > from scipy.sandbox.odr import __odrpack > > ImportError: No module named odr > > > > > > And the same is if I try to import sandbox.odr > > > > import scipy.sandbox.odr > > > > Traceback (most recent call last): > > File "", line 1, in ? > > ImportError: No module named odr > > > > Where am I wrong? > > Thank you, > > Chiara > > > > > > > > > > > > > >> From: Robert Kern > >> Reply-To: SciPy Users List > >> To: SciPy Users List > >> Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates > >> Date: Tue, 06 Feb 2007 03:47:17 -0600 > >> > >> Nils Wagner wrote: > >> > >> > >>> AFAIK odr is directly available through scipy.odr. > >>> So I guess the odr directory in the sandbox is obsolete. Is that >correct > >>> > >> ? > >> > >> There is no more odr/ directory in the sandbox since it got moved into >the > >> main > >> package. > >> > >> -- > >> Robert Kern > >> > >> "I have come to believe that the whole world is an enigma, a harmless > >> enigma > >> that is made terrible by our own mad attempt to interpret it as though >it > >> had > >> an underlying truth." > >> -- Umberto Eco > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > >> > > > > _________________________________________________________________ > > Express yourself instantly with MSN Messenger! Download today it's FREE! 
> > http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > >odr is in the main tree. > >Python 2.4 (#1, Oct 13 2006, 16:43:49) >[GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2 >Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > >>> scipy.__version__ >'0.5.3.dev2704' > >>> import scipy.odr > >works fine for me. > >Nils > > > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From ckkart at hoc.net Wed Feb 14 07:37:49 2007 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 14 Feb 2007 21:37:49 +0900 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: References: Message-ID: <45D3029D.5060808@hoc.net> Chiara Caronna wrote: > ok I have scipy 0.5.2... I guess this is the problem... > How can I get the 0.5.3 version, It seems that 0.5.2 is the last version > available on scipy.org... I guess Nils is using the svn version. So make a checkout as described here http://www.scipy.org/Download and edit the file Lib/sandbox/setup.py to make sure that the odr module will be built. Christian From nwagner at iam.uni-stuttgart.de Wed Feb 14 07:40:58 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 13:40:58 +0100 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: References: Message-ID: <45D3035A.3040700@iam.uni-stuttgart.de> Chiara Caronna wrote: > ok I have scipy 0.5.2... I guess this is the problem... 
> How can I get the 0.5.3 version, It seems that 0.5.2 is the last version > available on scipy.org... > > You can get the latest version via svn co http://svn.scipy.org/svn/scipy/trunk scipy If you use 0.5.2 look into the directory scipy/Lib/sandbox and create a file called enabled_packages.txt which should contain odr in the first line (a new line for each package) Afterwards you have to reinstall scipy. Assuming that 0.5.2 has odr in the sandbox you can import odr from the sandbox afterwards. If you use the svn version you can import odr from the main tree. import scipy.odr Anyway there is an open ticket wrt odr http://projects.scipy.org/scipy/scipy/ticket/357 Nils >> From: Nils Wagner >> Reply-To: SciPy Users List >> To: SciPy Users List >> Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates >> Date: Wed, 14 Feb 2007 11:24:56 +0100 >> >> Chiara Caronna wrote: >> >>> Hi, >>> I am also interested in having errors from the fit, and I tried to >>> >> import >> >>> the module scipy.odr as you said, but I got this errors: >>> >>> File "/usr/local/lib/python2.4/site-packages/scipy/odr/__init__.py", >>> >> line >> >>> 49, in ? >>> import odrpack >>> File "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py", >>> >> line >> >>> 103, in ? >>> from scipy.sandbox.odr import __odrpack >>> ImportError: No module named odr >>> >>> >>> And the same is if I try to import sandbox.odr >>> >>> import scipy.sandbox.odr >>> >>> Traceback (most recent call last): >>> File "", line 1, in ? >>> ImportError: No module named odr >>> >>> Where am I wrong? >>> Thank you, >>> Chiara >>> >>> >>> >>> >>> >>> >>> >>>> From: Robert Kern >>>> Reply-To: SciPy Users List >>>> To: SciPy Users List >>>> Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates >>>> Date: Tue, 06 Feb 2007 03:47:17 -0600 >>>> >>>> Nils Wagner wrote: >>>> >>>> >>>> >>>>> AFAIK odr is directly available through scipy.odr. >>>>> So I guess the odr directory in the sandbox is obsolete. 
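[Editor's note: Nils's recipe for a 0.5.2 source tree condenses to a few commands. The SRC path below is an assumption; point it at wherever the scipy source was unpacked. Note this only helps if Lib/sandbox actually contains an odr/ directory.]

```shell
# Sketch of the enabled_packages.txt recipe (SRC is an assumed path --
# adjust it to your own unpacked scipy 0.5.2 source tree).
SRC="${SRC:-$HOME/src/scipy}"
mkdir -p "$SRC/Lib/sandbox"                               # already present in a real tree
printf 'odr\n' > "$SRC/Lib/sandbox/enabled_packages.txt"  # one package name per line
cat "$SRC/Lib/sandbox/enabled_packages.txt"
# then rebuild and reinstall so the listed sandbox packages get built:
# (cd "$SRC" && python setup.py install)
```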
Is that >>>>> >> correct >> >>>> ? >>>> >>>> There is no more odr/ directory in the sandbox since it got moved into >>>> >> the >> >>>> main >>>> package. >>>> >>>> -- >>>> Robert Kern >>>> >>>> "I have come to believe that the whole world is an enigma, a harmless >>>> enigma >>>> that is made terrible by our own mad attempt to interpret it as though >>>> >> it >> >>>> had >>>> an underlying truth." >>>> -- Umberto Eco >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>> _________________________________________________________________ >>> Express yourself instantly with MSN Messenger! Download today it's FREE! >>> http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> odr is in the main tree. >> >> Python 2.4 (#1, Oct 13 2006, 16:43:49) >> [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>>>> import scipy >>>>> scipy.__version__ >>>>> >> '0.5.3.dev2704' >> >>>>> import scipy.odr >>>>> >> works fine for me. >> >> Nils >> >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _________________________________________________________________ > Express yourself instantly with MSN Messenger! Download today it's FREE! 
> http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Wed Feb 14 07:53:58 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 13:53:58 +0100 Subject: [SciPy-user] Wriitng a list to an ascii file Message-ID: <45D30666.9000603@iam.uni-stuttgart.de> Hi, I have a list >>> data[0] [1, 3.98, 3.131435898787629, 146.25144654434166] >>> data[1] [2, 4.0014113157358633, -0.23777483140261779, 169.32304332115922] >>> data[2] [3, 4.0000070483264265, -0.0011815833829018629, 167.64445564164987] >>> data[3] [4, 4.0000000001757536, -2.946728727692971e-08, 167.63609488306861] where the first entry is an integer and the remaining entries are floats. How can I write this list to an ascii file without destroying the integer type of the first entry? Nils io.write_array() yields 1.000000000000000e+00 3.980000000000000e+00 3.131435898787629e+00 1.462514465443417e+02 2.000000000000000e+00 4.001411315735863e+00 -2.377748314026178e-01 1.693230433211592e+02 3.000000000000000e+00 4.000007048326427e+00 -1.181583382901863e-03 1.676444556416499e+02 4.000000000000000e+00 4.000000000175754e+00 -2.946728727692971e-08 1.676360948830686e+02 but I would prefer 1 3.980000000000000e+00 3.131435898787629e+00 1.462514465443417e+02 2 4.001411315735863e+00 -2.377748314026178e-01 1.693230433211592e+02 3 4.000007048326427e+00 -1.181583382901863e-03 1.676444556416499e+02 4 4.000000000175754e+00 -2.946728727692971e-08 1.676360948830686e+02 From chiaracaronna at hotmail.com Wed Feb 14 07:54:30 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Wed, 14 Feb 2007 12:54:30 +0000 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45D3035A.3040700@iam.uni-stuttgart.de> Message-ID: >If you use 0.5.2 > >look into the directory scipy/Lib/sandbox >and 
create a file called enabled_packages.txt which should contain > >odr > >in the first line (a new line for each package) I did what you said, but when reinstalling scipy I got this error: File "Lib/sandbox/setup.py", line 22, in configuration config.add_subpackage(p) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 765, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 741, in get_subpackage caller_level = caller_level+1) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 541, in __init__ raise ValueError("%r is not a directory" % (package_path,)) ValueError: 'Lib/sandbox/odr' is not a directory >Afterwards you have to reinstall scipy. Assuming that 0.5.2 has odr in >the sandbox >you can import odr from the sandbox afterwards. >If you use the svn version you can import odr from the main tree. > >import scipy.odr > >Anyway there is an open ticket wrt odr >http://projects.scipy.org/scipy/scipy/ticket/357 > >Nils > > > >> From: Nils Wagner > >> Reply-To: SciPy Users List > >> To: SciPy Users List > >> Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates > >> Date: Wed, 14 Feb 2007 11:24:56 +0100 > >> > >> Chiara Caronna wrote: > >> > >>> Hi, > >>> I am also interested in having errors from the fit, and I tried to > >>> > >> import > >> > >>> the module scipy.odr as you said, but I got this errors: > >>> > >>> File "/usr/local/lib/python2.4/site-packages/scipy/odr/__init__.py", > >>> > >> line > >> > >>> 49, in ? > >>> import odrpack > >>> File "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py", > >>> > >> line > >> > >>> 103, in ? > >>> from scipy.sandbox.odr import __odrpack > >>> ImportError: No module named odr > >>> > >>> > >>> And the same is if I try to import sandbox.odr > >>> > >>> import scipy.sandbox.odr > >>> > >>> Traceback (most recent call last): > >>> File "", line 1, in ? 
> >>> ImportError: No module named odr > >>> > >>> Where am I wrong? > >>> Thank you, > >>> Chiara > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>>> From: Robert Kern > >>>> Reply-To: SciPy Users List > >>>> To: SciPy Users List > >>>> Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates > >>>> Date: Tue, 06 Feb 2007 03:47:17 -0600 > >>>> > >>>> Nils Wagner wrote: > >>>> > >>>> > >>>> > >>>>> AFAIK odr is directly available through scipy.odr. > >>>>> So I guess the odr directory in the sandbox is obsolete. Is that > >>>>> > >> correct > >> > >>>> ? > >>>> > >>>> There is no more odr/ directory in the sandbox since it got moved >into > >>>> > >> the > >> > >>>> main > >>>> package. > >>>> > >>>> -- > >>>> Robert Kern > >>>> > >>>> "I have come to believe that the whole world is an enigma, a harmless > >>>> enigma > >>>> that is made terrible by our own mad attempt to interpret it as >though > >>>> > >> it > >> > >>>> had > >>>> an underlying truth." > >>>> -- Umberto Eco > >>>> _______________________________________________ > >>>> SciPy-user mailing list > >>>> SciPy-user at scipy.org > >>>> http://projects.scipy.org/mailman/listinfo/scipy-user > >>>> > >>>> > >>> _________________________________________________________________ > >>> Express yourself instantly with MSN Messenger! Download today it's >FREE! > >>> http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ > >>> > >>> _______________________________________________ > >>> SciPy-user mailing list > >>> SciPy-user at scipy.org > >>> http://projects.scipy.org/mailman/listinfo/scipy-user > >>> > >>> > >> odr is in the main tree. > >> > >> Python 2.4 (#1, Oct 13 2006, 16:43:49) > >> [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2 > >> Type "help", "copyright", "credits" or "license" for more information. > >> > >>>>> import scipy > >>>>> scipy.__version__ > >>>>> > >> '0.5.3.dev2704' > >> > >>>>> import scipy.odr > >>>>> > >> works fine for me. 
> >> > >> Nils > >> > >> > >> > >> > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > >> > > > > _________________________________________________________________ > > Express yourself instantly with MSN Messenger! Download today it's FREE! > > http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From nwagner at iam.uni-stuttgart.de Wed Feb 14 08:01:19 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 14:01:19 +0100 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: References: Message-ID: <45D3081F.9090708@iam.uni-stuttgart.de> Chiara Caronna wrote: > > >> If you use 0.5.2 >> >> look into the directory scipy/Lib/sandbox >> and create a file called enabled_packages.txt which should contain >> >> odr >> >> in the first line (a new line for each package) >> > > I did what you said, but when reinstalling scipy I got this error: > > File "Lib/sandbox/setup.py", line 22, in configuration > config.add_subpackage(p) > File > "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line > 765, in add_subpackage > caller_level = 2) > File > "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line > 741, in get_subpackage > caller_level = caller_level+1) > File > "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line > 
541, in __init__ > raise ValueError("%r is not a directory" % (package_path,)) > ValueError: 'Lib/sandbox/odr' is not a directory > > > What is the output of ls -l in your sandbox directory I have Lib/sandbox> ls -l total 17 drwxr-xr-x 5 root root 272 2006-09-03 11:30 ann drwxr-xr-x 6 root root 376 2006-11-21 08:45 arpack drwxr-xr-x 4 root root 184 2006-11-06 08:48 arraysetops drwxr-xr-x 3 root root 232 2006-06-27 08:32 buildgrid drwxr-xr-x 5 root root 344 2006-12-15 15:35 cdavid drwxr-xr-x 3 root root 192 2006-06-22 09:18 constants drwxr-xr-x 3 root root 248 2006-03-15 08:29 cow drwxr-xr-x 4 root root 544 2006-10-02 08:19 delaunay -rw-r--r-- 1 root root 40 2007-02-08 13:11 enabled_packages.txt drwxr-xr-x 5 root root 272 2006-03-15 08:29 exmplpackage drwxr-xr-x 6 root root 280 2006-04-24 08:41 fdfpack drwxr-xr-x 3 root root 600 2006-09-03 11:30 ga drwxr-xr-x 3 root root 360 2006-09-03 11:30 gplt drwxr-xr-x 3 root root 352 2006-09-03 11:30 image -rw-r--r-- 1 root root 0 2006-02-28 08:50 __init__.py drwxr-xr-x 4 root root 520 2007-02-12 08:33 maskedarray drwxr-xr-x 6 root root 688 2007-02-12 08:33 models drwxr-xr-x 5 root root 232 2006-11-23 08:29 montecarlo drwxr-xr-x 3 root root 320 2006-07-10 08:09 netcdf drwxr-xr-x 4 root root 264 2007-01-11 10:57 newoptimize drwxr-xr-x 4 root root 432 2006-11-17 09:02 numexpr drwxr-xr-x 3 root root 72 2006-10-05 08:30 oliphant drwxr-xr-x 3 root root 456 2006-09-03 11:30 plt drwxr-xr-x 6 root root 904 2006-12-07 09:05 pyem drwxr-xr-x 13 root root 592 2006-08-16 16:21 pysparse drwxr-xr-x 4 root root 232 2007-02-08 13:11 rbf drwxr-xr-x 3 root root 184 2006-09-03 11:30 rkern -rw-r--r-- 1 root root 2732 2007-02-08 13:07 setup.py -rw-r--r-- 1 root root 1028 2007-02-08 13:11 setup.pyc drwxr-xr-x 5 root root 360 2007-02-09 08:42 spline drwxr-xr-x 3 root root 192 2006-10-10 08:22 stats drwxr-xr-x 5 root root 464 2006-09-11 08:44 svm drwxr-xr-x 9 root root 576 2007-02-13 08:27 timeseries drwxr-xr-x 3 root root 72 2006-12-07 08:52 wavelet 
drwxr-xr-x 6 root root 1864 2007-01-23 08:35 xplt Note that I am using the svn version. So odr is not present in the sandbox. How about 0.5.2? Do you have a directory odr in the sandbox? Nils From pgreisen at gmail.com Wed Feb 14 08:04:33 2007 From: pgreisen at gmail.com (Per Jr. Greisen) Date: Wed, 14 Feb 2007 14:04:33 +0100 Subject: [SciPy-user] Writing a list to an ascii file In-Reply-To: <45D30666.9000603@iam.uni-stuttgart.de> References: <45D30666.9000603@iam.uni-stuttgart.de> Message-ID: Why don't you split the data between integers and floats? On 2/14/07, Nils Wagner wrote: > > Hi, > > I have a list > > >>> data[0] > [1, 3.98, 3.131435898787629, 146.25144654434166] > >>> data[1] > [2, 4.0014113157358633, -0.23777483140261779, 169.32304332115922] > >>> data[2] > [3, 4.0000070483264265, -0.0011815833829018629, 167.64445564164987] > >>> data[3] > [4, 4.0000000001757536, -2.946728727692971e-08, 167.63609488306861] > > where the first entry is an integer and the remaining entries are floats. > > How can I write this list to an ascii file without destroying the > integer type of the first entry?
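[Editor's note: Per's suggestion — handle the integer column separately from the float columns — works with a per-column format string instead of io.write_array. A minimal sketch; the function name is the editor's, not a scipy API:]

```python
def write_mixed(rows, fname):
    """Write rows whose first entry is an int and the rest are floats,
    keeping the first column as a plain integer in the output."""
    with open(fname, "w") as f:
        for row in rows:
            cols = ["%d" % row[0]] + ["%.15e" % x for x in row[1:]]
            f.write(" ".join(cols) + "\n")

data = [[1, 3.98, 3.131435898787629, 146.25144654434166],
        [2, 4.0014113157358633, -0.23777483140261779, 169.32304332115922]]
write_mixed(data, "out.txt")
# out.txt now starts with:
# 1 3.980000000000000e+00 3.131435898787629e+00 1.462514465443417e+02
```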
> > Nils > > io.write_array() yields > > 1.000000000000000e+00 3.980000000000000e+00 3.131435898787629e+00 > 1.462514465443417e+02 > 2.000000000000000e+00 4.001411315735863e+00 -2.377748314026178e-01 > 1.693230433211592e+02 > 3.000000000000000e+00 4.000007048326427e+00 -1.181583382901863e-03 > 1.676444556416499e+02 > 4.000000000000000e+00 4.000000000175754e+00 -2.946728727692971e-08 > 1.676360948830686e+02 > > but I would prefer > > 1 3.980000000000000e+00 3.131435898787629e+00 1.462514465443417e+02 > 2 4.001411315735863e+00 -2.377748314026178e-01 1.693230433211592e+02 > 3 4.000007048326427e+00 -1.181583382901863e-03 1.676444556416499e+02 > 4 4.000000000175754e+00 -2.946728727692971e-08 1.676360948830686e+02 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Best regards Per Jr. Greisen "If you make something idiot-proof, the universe creates a better idiot." -------------- next part -------------- An HTML attachment was scrubbed... URL: From chiaracaronna at hotmail.com Wed Feb 14 08:09:13 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Wed, 14 Feb 2007 13:09:13 +0000 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45D3081F.9090708@iam.uni-stuttgart.de> Message-ID: Ah. No, there is not an odr directory... :( Here is the output: drwxr-xr-x 28 root root 4096 2007-02-14 13:46 . drwxr-xr-x 22 root root 4096 2007-01-17 09:37 .. 
-rw-r--r-- 1 500 1000 0 2006-01-05 04:35 __init__.py drwxr-xr-x 4 root root 4096 2006-12-08 05:05 ann drwxr-xr-x 4 root root 4096 2006-12-08 05:05 arpack drwxr-xr-x 3 root root 4096 2006-12-08 05:05 arraysetops drwxr-xr-x 2 root root 4096 2006-12-08 05:05 buildgrid drwxr-xr-x 2 root root 4096 2006-12-08 05:05 constants drwxr-xr-x 2 root root 4096 2006-12-08 05:05 cow drwxr-xr-x 3 root root 4096 2006-12-08 05:05 delaunay -rw-r--r-- 1 root root 4 2007-02-14 13:46 enabled_packages.txt drwxr-xr-x 4 root root 4096 2006-12-08 05:05 exmplpackage drwxr-xr-x 5 root root 4096 2006-12-08 05:05 fdfpack drwxr-xr-x 2 root root 4096 2006-12-08 05:05 ga drwxr-xr-x 2 root root 4096 2006-12-08 05:05 gplt drwxr-xr-x 2 root root 4096 2006-12-08 05:05 image drwxr-xr-x 5 root root 4096 2006-12-08 05:05 models drwxr-xr-x 4 root root 4096 2006-12-08 05:05 montecarlo drwxr-xr-x 2 root root 4096 2006-12-08 05:05 netcdf drwxr-xr-x 2 root root 4096 2006-12-08 05:05 newoptimize drwxr-xr-x 3 root root 4096 2006-12-08 05:05 numexpr drwxr-xr-x 2 root root 4096 2006-12-08 05:05 plt drwxr-xr-x 5 root root 4096 2006-12-08 05:05 pyem drwxr-xr-x 12 root root 4096 2006-12-08 05:05 pysparse drwxr-xr-x 2 root root 4096 2006-12-08 05:05 rkern -rw-r--r-- 1 500 1000 2656 2006-12-02 04:24 setup.py -rw-r--r-- 1 root root 1028 2007-01-17 09:37 setup.pyc drwxr-xr-x 4 root root 4096 2006-12-08 05:05 spline drwxr-xr-x 2 root root 4096 2006-12-08 05:05 stats drwxr-xr-x 4 root root 4096 2006-12-08 05:05 svm drwxr-xr-x 3 root root 4096 2006-12-08 05:05 umfpack drwxr-xr-x 5 root root 4096 2006-12-08 05:05 xplt >From: Nils Wagner >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates >Date: Wed, 14 Feb 2007 14:01:19 +0100 > >Chiara Caronna wrote: > > > > > >> If you use 0.5.2 > >> > >> look into the directory scipy/Lib/sandbox > >> and create a file called enabled_packages.txt which should contain > >> > >> odr > >> > >> in the first line (a new line for 
each package) > >> > > > > I did what you said, but when reinstalling scipy I got this error: > > > > File "Lib/sandbox/setup.py", line 22, in configuration > > config.add_subpackage(p) > > File > > "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", >line > > 765, in add_subpackage > > caller_level = 2) > > File > > "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", >line > > 741, in get_subpackage > > caller_level = caller_level+1) > > File > > "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", >line > > 541, in __init__ > > raise ValueError("%r is not a directory" % (package_path,)) > > ValueError: 'Lib/sandbox/odr' is not a directory > > > > > > > >What is the output of ls -l in your sandbox directory >I have >Lib/sandbox> ls -l > total 17 >drwxr-xr-x 5 root root 272 2006-09-03 11:30 ann >drwxr-xr-x 6 root root 376 2006-11-21 08:45 arpack >drwxr-xr-x 4 root root 184 2006-11-06 08:48 arraysetops >drwxr-xr-x 3 root root 232 2006-06-27 08:32 buildgrid >drwxr-xr-x 5 root root 344 2006-12-15 15:35 cdavid >drwxr-xr-x 3 root root 192 2006-06-22 09:18 constants >drwxr-xr-x 3 root root 248 2006-03-15 08:29 cow >drwxr-xr-x 4 root root 544 2006-10-02 08:19 delaunay >-rw-r--r-- 1 root root 40 2007-02-08 13:11 enabled_packages.txt >drwxr-xr-x 5 root root 272 2006-03-15 08:29 exmplpackage >drwxr-xr-x 6 root root 280 2006-04-24 08:41 fdfpack >drwxr-xr-x 3 root root 600 2006-09-03 11:30 ga >drwxr-xr-x 3 root root 360 2006-09-03 11:30 gplt >drwxr-xr-x 3 root root 352 2006-09-03 11:30 image >-rw-r--r-- 1 root root 0 2006-02-28 08:50 __init__.py >drwxr-xr-x 4 root root 520 2007-02-12 08:33 maskedarray >drwxr-xr-x 6 root root 688 2007-02-12 08:33 models >drwxr-xr-x 5 root root 232 2006-11-23 08:29 montecarlo >drwxr-xr-x 3 root root 320 2006-07-10 08:09 netcdf >drwxr-xr-x 4 root root 264 2007-01-11 10:57 newoptimize >drwxr-xr-x 4 root root 432 2006-11-17 09:02 numexpr >drwxr-xr-x 3 root root 72 2006-10-05 08:30 oliphant 
>drwxr-xr-x 3 root root 456 2006-09-03 11:30 plt >drwxr-xr-x 6 root root 904 2006-12-07 09:05 pyem >drwxr-xr-x 13 root root 592 2006-08-16 16:21 pysparse >drwxr-xr-x 4 root root 232 2007-02-08 13:11 rbf >drwxr-xr-x 3 root root 184 2006-09-03 11:30 rkern >-rw-r--r-- 1 root root 2732 2007-02-08 13:07 setup.py >-rw-r--r-- 1 root root 1028 2007-02-08 13:11 setup.pyc >drwxr-xr-x 5 root root 360 2007-02-09 08:42 spline >drwxr-xr-x 3 root root 192 2006-10-10 08:22 stats >drwxr-xr-x 5 root root 464 2006-09-11 08:44 svm >drwxr-xr-x 9 root root 576 2007-02-13 08:27 timeseries >drwxr-xr-x 3 root root 72 2006-12-07 08:52 wavelet >drwxr-xr-x 6 root root 1864 2007-01-23 08:35 xplt > >Note that I am using the svn version. So odr is not present in the sandbox. >How about 0.5.2 ? >Do you have a directory odr in the sandbox ? > >Nils > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ FREE pop-up blocking with the new MSN Toolbar - get it now! http://toolbar.msn.click-url.com/go/onm00200415ave/direct/01/ From nwagner at iam.uni-stuttgart.de Wed Feb 14 08:13:38 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 14:13:38 +0100 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: References: Message-ID: <45D30B02.10208@iam.uni-stuttgart.de> Chiara Caronna wrote: > Ah. No, there is not an odr directory... :( > > Here is the output: > > > drwxr-xr-x 28 root root 4096 2007-02-14 13:46 . > drwxr-xr-x 22 root root 4096 2007-01-17 09:37 .. 
> -rw-r--r-- 1 500 1000 0 2006-01-05 04:35 __init__.py > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 ann > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 arpack > drwxr-xr-x 3 root root 4096 2006-12-08 05:05 arraysetops > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 buildgrid > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 constants > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 cow > drwxr-xr-x 3 root root 4096 2006-12-08 05:05 delaunay > -rw-r--r-- 1 root root 4 2007-02-14 13:46 > enabled_packages.txt > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 exmplpackage > drwxr-xr-x 5 root root 4096 2006-12-08 05:05 fdfpack > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 ga > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 gplt > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 image > drwxr-xr-x 5 root root 4096 2006-12-08 05:05 models > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 montecarlo > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 netcdf > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 newoptimize > drwxr-xr-x 3 root root 4096 2006-12-08 05:05 numexpr > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 plt > drwxr-xr-x 5 root root 4096 2006-12-08 05:05 pyem > drwxr-xr-x 12 root root 4096 2006-12-08 05:05 pysparse > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 rkern > -rw-r--r-- 1 500 1000 2656 2006-12-02 04:24 setup.py > -rw-r--r-- 1 root root 1028 2007-01-17 09:37 setup.pyc > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 spline > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 stats > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 svm > drwxr-xr-x 3 root root 4096 2006-12-08 05:05 umfpack > drwxr-xr-x 5 root root 4096 2006-12-08 05:05 xplt > > > >> From: Nils Wagner >> Reply-To: SciPy Users List >> To: SciPy Users List >> Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates >> Date: Wed, 14 Feb 2007 14:01:19 +0100 >> >> Chiara Caronna wrote: >> >>> >>>> If you use 0.5.2 >>>> >>>> look into the directory scipy/Lib/sandbox >>>> and create a file called enabled_packages.txt 
which should contain >>>> >>>> odr >>>> >>>> in the first line (a new line for each package) >>>> >>>> >>> I did what you said, but when reinstalling scipy I got this error: >>> >>> File "Lib/sandbox/setup.py", line 22, in configuration >>> config.add_subpackage(p) >>> File >>> "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", >>> >> line >> >>> 765, in add_subpackage >>> caller_level = 2) >>> File >>> "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", >>> >> line >> >>> 741, in get_subpackage >>> caller_level = caller_level+1) >>> File >>> "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", >>> >> line >> >>> 541, in __init__ >>> raise ValueError("%r is not a directory" % (package_path,)) >>> ValueError: 'Lib/sandbox/odr' is not a directory >>> >>> >>> >>> >> What is the output of ls -l in your sandbox directory >> I have >> Lib/sandbox> ls -l >> total 17 >> drwxr-xr-x 5 root root 272 2006-09-03 11:30 ann >> drwxr-xr-x 6 root root 376 2006-11-21 08:45 arpack >> drwxr-xr-x 4 root root 184 2006-11-06 08:48 arraysetops >> drwxr-xr-x 3 root root 232 2006-06-27 08:32 buildgrid >> drwxr-xr-x 5 root root 344 2006-12-15 15:35 cdavid >> drwxr-xr-x 3 root root 192 2006-06-22 09:18 constants >> drwxr-xr-x 3 root root 248 2006-03-15 08:29 cow >> drwxr-xr-x 4 root root 544 2006-10-02 08:19 delaunay >> -rw-r--r-- 1 root root 40 2007-02-08 13:11 enabled_packages.txt >> drwxr-xr-x 5 root root 272 2006-03-15 08:29 exmplpackage >> drwxr-xr-x 6 root root 280 2006-04-24 08:41 fdfpack >> drwxr-xr-x 3 root root 600 2006-09-03 11:30 ga >> drwxr-xr-x 3 root root 360 2006-09-03 11:30 gplt >> drwxr-xr-x 3 root root 352 2006-09-03 11:30 image >> -rw-r--r-- 1 root root 0 2006-02-28 08:50 __init__.py >> drwxr-xr-x 4 root root 520 2007-02-12 08:33 maskedarray >> drwxr-xr-x 6 root root 688 2007-02-12 08:33 models >> drwxr-xr-x 5 root root 232 2006-11-23 08:29 montecarlo >> drwxr-xr-x 3 root root 320 2006-07-10 08:09 netcdf >> 
drwxr-xr-x 4 root root 264 2007-01-11 10:57 newoptimize >> drwxr-xr-x 4 root root 432 2006-11-17 09:02 numexpr >> drwxr-xr-x 3 root root 72 2006-10-05 08:30 oliphant >> drwxr-xr-x 3 root root 456 2006-09-03 11:30 plt >> drwxr-xr-x 6 root root 904 2006-12-07 09:05 pyem >> drwxr-xr-x 13 root root 592 2006-08-16 16:21 pysparse >> drwxr-xr-x 4 root root 232 2007-02-08 13:11 rbf >> drwxr-xr-x 3 root root 184 2006-09-03 11:30 rkern >> -rw-r--r-- 1 root root 2732 2007-02-08 13:07 setup.py >> -rw-r--r-- 1 root root 1028 2007-02-08 13:11 setup.pyc >> drwxr-xr-x 5 root root 360 2007-02-09 08:42 spline >> drwxr-xr-x 3 root root 192 2006-10-10 08:22 stats >> drwxr-xr-x 5 root root 464 2006-09-11 08:44 svm >> drwxr-xr-x 9 root root 576 2007-02-13 08:27 timeseries >> drwxr-xr-x 3 root root 72 2006-12-07 08:52 wavelet >> drwxr-xr-x 6 root root 1864 2007-01-23 08:35 xplt >> >> Note that I am using the svn version. So odr is not present in the sandbox. >> How about 0.5.2 ? >> Do you have a directory odr in the sandbox ? >> >> Nils >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _________________________________________________________________ > FREE pop-up blocking with the new MSN Toolbar - get it now! > http://toolbar.msn.click-url.com/go/onm00200415ave/direct/01/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > And in scipy/Lib ? 
I have drwxr-xr-x 6 root root 280 2007-01-24 13:58 cluster drwxr-xr-x 7 root root 536 2007-01-11 09:16 fftpack drwxr-xr-x 2 root root 80 2006-03-20 11:42 image -rw-r--r-- 1 root root 2356 2007-01-11 09:16 __init__.py drwxr-xr-x 8 root root 656 2007-02-12 09:54 integrate drwxr-xr-x 5 root root 456 2007-02-12 09:54 interpolate drwxr-xr-x 6 root root 672 2007-02-12 09:54 io drwxr-xr-x 5 root root 232 2007-01-11 09:16 lib drwxr-xr-x 6 root root 1024 2007-01-25 09:35 linalg drwxr-xr-x 5 root root 552 2007-01-25 09:35 linsolve drwxr-xr-x 2 root root 80 2006-01-19 10:27 maxent drwxr-xr-x 5 root root 296 2007-01-11 09:16 maxentropy drwxr-xr-x 3 root root 392 2007-01-11 09:16 misc drwxr-xr-x 2 root root 80 2006-01-19 10:27 montecarlo drwxr-xr-x 5 root root 456 2007-02-12 09:54 ndimage drwxr-xr-x 2 root root 80 2006-03-17 10:51 nd_image drwxr-xr-x 5 root root 360 2007-01-24 14:00 odr drwxr-xr-x 10 root root 712 2007-01-29 10:06 optimize drwxr-xr-x 35 root root 1032 2007-02-12 09:54 sandbox -rw-r--r-- 1 root root 679 2005-12-01 10:57 scipy_version.pyc -rw-r--r-- 1 root root 1126 2006-11-28 16:59 setup.py -rw-r--r-- 1 root root 1141 2006-11-28 17:02 setup.pyc drwxr-xr-x 5 root root 744 2007-01-30 15:29 signal drwxr-xr-x 5 root root 272 2007-01-15 09:33 sparse drwxr-xr-x 12 root root 1056 2007-01-26 11:21 special drwxr-xr-x 5 root root 568 2007-01-11 09:16 stats drwxr-xr-x 5 root root 208 2006-08-04 09:02 stsci drwxr-xr-x 3 root root 232 2006-10-02 08:38 tests -rw-r--r-- 1 root root 485 2006-12-08 09:21 version.py -rw-r--r-- 1 root root 580 2006-12-08 09:23 version.pyc drwxr-xr-x 8 root root 1272 2007-01-22 09:42 weave Also look into your setup.py file in scipy/Lib def configuration(parent_package='',top_path=None): from numpy.distutils.misc_util import Configuration config = Configuration('scipy',parent_package,top_path) config.add_subpackage('cluster') config.add_subpackage('fftpack') config.add_subpackage('integrate') config.add_subpackage('interpolate') 
config.add_subpackage('io') config.add_subpackage('lib') config.add_subpackage('linalg') config.add_subpackage('linsolve') config.add_subpackage('maxentropy') config.add_subpackage('misc') config.add_subpackage('odr') # This should be enabled !! config.add_subpackage('optimize') config.add_subpackage('sandbox') config.add_subpackage('signal') config.add_subpackage('sparse') config.add_subpackage('special') config.add_subpackage('stats') #config.add_subpackage('ndimage') config.add_subpackage('stsci') config.add_subpackage('weave') config.make_svn_version_py() # installs __svn_version__.py config.make_config_py() return config if __name__ == '__main__': from numpy.distutils.core import setup setup(**configuration(top_path='').todict()) Nils From chiaracaronna at hotmail.com Wed Feb 14 08:19:49 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Wed, 14 Feb 2007 13:19:49 +0000 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: <45D30B02.10208@iam.uni-stuttgart.de> Message-ID: Ok, I have an odr directory in Lib/scipy and the file setup.py looks ok: def configuration(parent_package='',top_path=None): from numpy.distutils.misc_util import Configuration config = Configuration('scipy',parent_package,top_path) config.add_subpackage('cluster') config.add_subpackage('fftpack') config.add_subpackage('integrate') config.add_subpackage('interpolate') config.add_subpackage('io') config.add_subpackage('lib') config.add_subpackage('linalg') config.add_subpackage('linsolve') config.add_subpackage('maxentropy') config.add_subpackage('misc') config.add_subpackage('odr') config.add_subpackage('optimize') config.add_subpackage('sandbox') config.add_subpackage('signal') config.add_subpackage('sparse') config.add_subpackage('special') config.add_subpackage('stats') config.add_subpackage('ndimage') config.add_subpackage('stsci') config.add_subpackage('weave') config.make_svn_version_py() # installs __svn_version__.py config.make_config_py() return config 
if __name__ == '__main__': from numpy.distutils.core import setup setup(**configuration(top_path='').todict()) so.... why it doesn't work?! >From: Nils Wagner >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates >Date: Wed, 14 Feb 2007 14:13:38 +0100 > >Chiara Caronna wrote: > > Ah. No, there is not an odr directory... :( > > > > Here is the output: > > > > > > drwxr-xr-x 28 root root 4096 2007-02-14 13:46 . > > drwxr-xr-x 22 root root 4096 2007-01-17 09:37 .. > > -rw-r--r-- 1 500 1000 0 2006-01-05 04:35 __init__.py > > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 ann > > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 arpack > > drwxr-xr-x 3 root root 4096 2006-12-08 05:05 arraysetops > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 buildgrid > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 constants > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 cow > > drwxr-xr-x 3 root root 4096 2006-12-08 05:05 delaunay > > -rw-r--r-- 1 root root 4 2007-02-14 13:46 > > enabled_packages.txt > > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 exmplpackage > > drwxr-xr-x 5 root root 4096 2006-12-08 05:05 fdfpack > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 ga > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 gplt > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 image > > drwxr-xr-x 5 root root 4096 2006-12-08 05:05 models > > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 montecarlo > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 netcdf > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 newoptimize > > drwxr-xr-x 3 root root 4096 2006-12-08 05:05 numexpr > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 plt > > drwxr-xr-x 5 root root 4096 2006-12-08 05:05 pyem > > drwxr-xr-x 12 root root 4096 2006-12-08 05:05 pysparse > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 rkern > > -rw-r--r-- 1 500 1000 2656 2006-12-02 04:24 setup.py > > -rw-r--r-- 1 root root 1028 2007-01-17 09:37 setup.pyc > > drwxr-xr-x 4 root root 4096 
2006-12-08 05:05 spline > > drwxr-xr-x 2 root root 4096 2006-12-08 05:05 stats > > drwxr-xr-x 4 root root 4096 2006-12-08 05:05 svm > > drwxr-xr-x 3 root root 4096 2006-12-08 05:05 umfpack > > drwxr-xr-x 5 root root 4096 2006-12-08 05:05 xplt > > > > > > > >> From: Nils Wagner > >> Reply-To: SciPy Users List > >> To: SciPy Users List > >> Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates > >> Date: Wed, 14 Feb 2007 14:01:19 +0100 > >> > >> Chiara Caronna wrote: > >> > >>> > >>>> If you use 0.5.2 > >>>> > >>>> look into the directory scipy/Lib/sandbox > >>>> and create a file called enabled_packages.txt which should contain > >>>> > >>>> odr > >>>> > >>>> in the first line (a new line for each package) > >>>> > >>>> > >>> I did what you said, but when reinstalling scipy I got this error: > >>> > >>> File "Lib/sandbox/setup.py", line 22, in configuration > >>> config.add_subpackage(p) > >>> File > >>> "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", > >>> > >> line > >> > >>> 765, in add_subpackage > >>> caller_level = 2) > >>> File > >>> "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", > >>> > >> line > >> > >>> 741, in get_subpackage > >>> caller_level = caller_level+1) > >>> File > >>> "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", > >>> > >> line > >> > >>> 541, in __init__ > >>> raise ValueError("%r is not a directory" % (package_path,)) > >>> ValueError: 'Lib/sandbox/odr' is not a directory > >>> > >>> > >>> > >>> > >> What is the output of ls -l in your sandbox directory > >> I have > >> Lib/sandbox> ls -l > >> total 17 > >> drwxr-xr-x 5 root root 272 2006-09-03 11:30 ann > >> drwxr-xr-x 6 root root 376 2006-11-21 08:45 arpack > >> drwxr-xr-x 4 root root 184 2006-11-06 08:48 arraysetops > >> drwxr-xr-x 3 root root 232 2006-06-27 08:32 buildgrid > >> drwxr-xr-x 5 root root 344 2006-12-15 15:35 cdavid > >> drwxr-xr-x 3 root root 192 2006-06-22 09:18 constants > >> drwxr-xr-x 3 
root root 248 2006-03-15 08:29 cow > >> drwxr-xr-x 4 root root 544 2006-10-02 08:19 delaunay > >> -rw-r--r-- 1 root root 40 2007-02-08 13:11 enabled_packages.txt > >> drwxr-xr-x 5 root root 272 2006-03-15 08:29 exmplpackage > >> drwxr-xr-x 6 root root 280 2006-04-24 08:41 fdfpack > >> drwxr-xr-x 3 root root 600 2006-09-03 11:30 ga > >> drwxr-xr-x 3 root root 360 2006-09-03 11:30 gplt > >> drwxr-xr-x 3 root root 352 2006-09-03 11:30 image > >> -rw-r--r-- 1 root root 0 2006-02-28 08:50 __init__.py > >> drwxr-xr-x 4 root root 520 2007-02-12 08:33 maskedarray > >> drwxr-xr-x 6 root root 688 2007-02-12 08:33 models > >> drwxr-xr-x 5 root root 232 2006-11-23 08:29 montecarlo > >> drwxr-xr-x 3 root root 320 2006-07-10 08:09 netcdf > >> drwxr-xr-x 4 root root 264 2007-01-11 10:57 newoptimize > >> drwxr-xr-x 4 root root 432 2006-11-17 09:02 numexpr > >> drwxr-xr-x 3 root root 72 2006-10-05 08:30 oliphant > >> drwxr-xr-x 3 root root 456 2006-09-03 11:30 plt > >> drwxr-xr-x 6 root root 904 2006-12-07 09:05 pyem > >> drwxr-xr-x 13 root root 592 2006-08-16 16:21 pysparse > >> drwxr-xr-x 4 root root 232 2007-02-08 13:11 rbf > >> drwxr-xr-x 3 root root 184 2006-09-03 11:30 rkern > >> -rw-r--r-- 1 root root 2732 2007-02-08 13:07 setup.py > >> -rw-r--r-- 1 root root 1028 2007-02-08 13:11 setup.pyc > >> drwxr-xr-x 5 root root 360 2007-02-09 08:42 spline > >> drwxr-xr-x 3 root root 192 2006-10-10 08:22 stats > >> drwxr-xr-x 5 root root 464 2006-09-11 08:44 svm > >> drwxr-xr-x 9 root root 576 2007-02-13 08:27 timeseries > >> drwxr-xr-x 3 root root 72 2006-12-07 08:52 wavelet > >> drwxr-xr-x 6 root root 1864 2007-01-23 08:35 xplt > >> > >> Note that I am using the svn version. So odr is not present in the >sandbox. > >> How about 0.5.2 ? > >> Do you have a directory odr in the sandbox ? 
> >> > >> Nils > >> > >> > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > >> > > > > _________________________________________________________________ > > FREE pop-up blocking with the new MSN Toolbar - get it now! > > http://toolbar.msn.click-url.com/go/onm00200415ave/direct/01/ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > >And in scipy/Lib ? I have > > drwxr-xr-x 6 root root 280 2007-01-24 13:58 cluster >drwxr-xr-x 7 root root 536 2007-01-11 09:16 fftpack >drwxr-xr-x 2 root root 80 2006-03-20 11:42 image >-rw-r--r-- 1 root root 2356 2007-01-11 09:16 __init__.py >drwxr-xr-x 8 root root 656 2007-02-12 09:54 integrate >drwxr-xr-x 5 root root 456 2007-02-12 09:54 interpolate >drwxr-xr-x 6 root root 672 2007-02-12 09:54 io >drwxr-xr-x 5 root root 232 2007-01-11 09:16 lib >drwxr-xr-x 6 root root 1024 2007-01-25 09:35 linalg >drwxr-xr-x 5 root root 552 2007-01-25 09:35 linsolve >drwxr-xr-x 2 root root 80 2006-01-19 10:27 maxent >drwxr-xr-x 5 root root 296 2007-01-11 09:16 maxentropy >drwxr-xr-x 3 root root 392 2007-01-11 09:16 misc >drwxr-xr-x 2 root root 80 2006-01-19 10:27 montecarlo >drwxr-xr-x 5 root root 456 2007-02-12 09:54 ndimage >drwxr-xr-x 2 root root 80 2006-03-17 10:51 nd_image >drwxr-xr-x 5 root root 360 2007-01-24 14:00 odr >drwxr-xr-x 10 root root 712 2007-01-29 10:06 optimize >drwxr-xr-x 35 root root 1032 2007-02-12 09:54 sandbox >-rw-r--r-- 1 root root 679 2005-12-01 10:57 scipy_version.pyc >-rw-r--r-- 1 root root 1126 2006-11-28 16:59 setup.py >-rw-r--r-- 1 root root 1141 2006-11-28 17:02 setup.pyc >drwxr-xr-x 5 root root 744 2007-01-30 15:29 signal >drwxr-xr-x 5 root root 272 2007-01-15 09:33 sparse >drwxr-xr-x 12 root root 1056 2007-01-26 11:21 special >drwxr-xr-x 5 root root 568 2007-01-11 09:16 stats 
>drwxr-xr-x 5 root root 208 2006-08-04 09:02 stsci >drwxr-xr-x 3 root root 232 2006-10-02 08:38 tests >-rw-r--r-- 1 root root 485 2006-12-08 09:21 version.py >-rw-r--r-- 1 root root 580 2006-12-08 09:23 version.pyc >drwxr-xr-x 8 root root 1272 2007-01-22 09:42 weave > >Also look into your setup.py file in scipy/Lib > >def configuration(parent_package='',top_path=None): > from numpy.distutils.misc_util import Configuration > config = Configuration('scipy',parent_package,top_path) > config.add_subpackage('cluster') > config.add_subpackage('fftpack') > config.add_subpackage('integrate') > config.add_subpackage('interpolate') > config.add_subpackage('io') > config.add_subpackage('lib') > config.add_subpackage('linalg') > config.add_subpackage('linsolve') > config.add_subpackage('maxentropy') > config.add_subpackage('misc') > config.add_subpackage('odr') # This should be enabled !! > config.add_subpackage('optimize') > config.add_subpackage('sandbox') > config.add_subpackage('signal') > config.add_subpackage('sparse') > config.add_subpackage('special') > config.add_subpackage('stats') > #config.add_subpackage('ndimage') > config.add_subpackage('stsci') > config.add_subpackage('weave') > config.make_svn_version_py() # installs __svn_version__.py > config.make_config_py() > return config > >if __name__ == '__main__': > from numpy.distutils.core import setup > setup(**configuration(top_path='').todict()) > >Nils > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! 
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From nwagner at iam.uni-stuttgart.de Wed Feb 14 08:27:30 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Feb 2007 14:27:30 +0100 Subject: [SciPy-user] scipy.optimize.leastsq error estimates In-Reply-To: References: Message-ID: <45D30E42.2040408@iam.uni-stuttgart.de> Chiara Caronna wrote: > Ok, I have an odr directory in Lib/scipy and the file setup.py looks ok: > > def configuration(parent_package='',top_path=None): > from numpy.distutils.misc_util import Configuration > config = Configuration('scipy',parent_package,top_path) > config.add_subpackage('cluster') > config.add_subpackage('fftpack') > config.add_subpackage('integrate') > config.add_subpackage('interpolate') > config.add_subpackage('io') > config.add_subpackage('lib') > config.add_subpackage('linalg') > config.add_subpackage('linsolve') > config.add_subpackage('maxentropy') > config.add_subpackage('misc') > config.add_subpackage('odr') > config.add_subpackage('optimize') > config.add_subpackage('sandbox') > config.add_subpackage('signal') > config.add_subpackage('sparse') > config.add_subpackage('special') > config.add_subpackage('stats') > config.add_subpackage('ndimage') > config.add_subpackage('stsci') > config.add_subpackage('weave') > config.make_svn_version_py() # installs __svn_version__.py > config.make_config_py() > return config > > if __name__ == '__main__': > from numpy.distutils.core import setup > setup(**configuration(top_path='').todict()) > > so.... why it doesn't work?! > > Sorry I am running out of ideas ? It works for me Python 2.4.1 (#1, Oct 13 2006, 16:51:58) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy.odr
>>> import scipy
>>> scipy.__version__
'0.5.3.dev2707'

Nils

From chiaracaronna at hotmail.com  Wed Feb 14 08:34:51 2007
From: chiaracaronna at hotmail.com (Chiara Caronna)
Date: Wed, 14 Feb 2007 13:34:51 +0000
Subject: [SciPy-user] scipy.optimize.leastsq error estimates
In-Reply-To: <45D30E42.2040408@iam.uni-stuttgart.de>
Message-ID: 

Ok, the error I got is this:

>>> import scipy.odr
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/local/lib/python2.4/site-packages/scipy/odr/__init__.py", line 49, in ?
    import odrpack
  File "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py", line 103, in ?
    from scipy.sandbox.odr import __odrpack
ImportError: No module named odr

can you check your file
"/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py" at line 103?
I think the problem is that it tries to import something from
scipy.sandbox.odr that doesn't exist...

>From: Nils Wagner
>Reply-To: SciPy Users List
>To: SciPy Users List
>Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates
>Date: Wed, 14 Feb 2007 14:27:30 +0100
>
>Chiara Caronna wrote:
> > Ok, I have an odr directory in Lib/scipy and the file setup.py looks ok:
> >
> > def configuration(parent_package='',top_path=None):
> >     from numpy.distutils.misc_util import Configuration
> >     config = Configuration('scipy',parent_package,top_path)
> >     config.add_subpackage('cluster')
> >     config.add_subpackage('fftpack')
> >     config.add_subpackage('integrate')
> >     config.add_subpackage('interpolate')
> >     config.add_subpackage('io')
> >     config.add_subpackage('lib')
> >     config.add_subpackage('linalg')
> >     config.add_subpackage('linsolve')
> >     config.add_subpackage('maxentropy')
> >     config.add_subpackage('misc')
> >     config.add_subpackage('odr')
> >     config.add_subpackage('optimize')
> >     config.add_subpackage('sandbox')
> >     config.add_subpackage('signal')
> >     config.add_subpackage('sparse')
> >     config.add_subpackage('special')
> >     config.add_subpackage('stats')
> >     config.add_subpackage('ndimage')
> >     config.add_subpackage('stsci')
> >     config.add_subpackage('weave')
> >     config.make_svn_version_py() # installs __svn_version__.py
> >     config.make_config_py()
> >     return config
> >
> > if __name__ == '__main__':
> >     from numpy.distutils.core import setup
> >     setup(**configuration(top_path='').todict())
> >
> > so.... why it doesn't work?!
>
>Sorry I am running out of ideas ?
>It works for me
>
>Python 2.4.1 (#1, Oct 13 2006, 16:51:58)
>[GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2
>Type "help", "copyright", "credits" or "license" for more information.
> >>> import scipy.odr
> >>> import scipy
> >>> scipy.__version__
>'0.5.3.dev2707'
>
>Nils
>
>_______________________________________________
>SciPy-user mailing list
>SciPy-user at scipy.org
>http://projects.scipy.org/mailman/listinfo/scipy-user

_________________________________________________________________
Express yourself instantly with MSN Messenger! Download today it's FREE!
http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/

From nwagner at iam.uni-stuttgart.de  Wed Feb 14 08:42:51 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 14 Feb 2007 14:42:51 +0100
Subject: [SciPy-user] scipy.optimize.leastsq error estimates
In-Reply-To: 
References: 
Message-ID: <45D311DB.2040903@iam.uni-stuttgart.de>

Chiara Caronna wrote:
> Ok, the error I got is this:
>
> >>> import scipy.odr
>
> Traceback (most recent call last):
>   File "", line 1, in ?
>   File "/usr/local/lib/python2.4/site-packages/scipy/odr/__init__.py", line 49, in ?
>     import odrpack
>   File "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py", line 103, in ?
>     from scipy.sandbox.odr import __odrpack
> ImportError: No module named odr
>
> can you check your file
> "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py" at line 103?
> I think the problem is that it tries to import something from
> scipy.sandbox.odr that doesn't exist...
> >
I have

from scipy.odr import __odrpack

What is the output of

scipy.__version__

on your machine ?

I suggest to remove the build directory and scipy below site-packages with

rm -rf build
rm -rf scipy

Afterwards reinstall everything from scratch with

python setup.py install

HTH

Nils

From chiaracaronna at hotmail.com  Wed Feb 14 08:48:16 2007
From: chiaracaronna at hotmail.com (Chiara Caronna)
Date: Wed, 14 Feb 2007 13:48:16 +0000
Subject: [SciPy-user] scipy.optimize.leastsq error estimates
In-Reply-To: <45D311DB.2040903@iam.uni-stuttgart.de>
Message-ID: 

>From: Nils Wagner
>Reply-To: SciPy Users List
>To: SciPy Users List
>Subject: Re: [SciPy-user] scipy.optimize.leastsq error estimates
>Date: Wed, 14 Feb 2007 14:42:51 +0100
>
>Chiara Caronna wrote:
> > Ok, the error I got is this:
> >
> >>>> import scipy.odr
> >
> > Traceback (most recent call last):
> >   File "", line 1, in ?
> >   File "/usr/local/lib/python2.4/site-packages/scipy/odr/__init__.py", line 49, in ?
> >     import odrpack
> >   File "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py", line 103, in ?
> >     from scipy.sandbox.odr import __odrpack
> > ImportError: No module named odr
> >
> > can you check your file
> > "/usr/local/lib/python2.4/site-packages/scipy/odr/odrpack.py" at line 103?
> > I think the problem is that it tries to import something from
> > scipy.sandbox.odr that doesn't exist...
>
>I have
>from scipy.odr import __odrpack
>
I changed my "from scipy.sandbox.odr import __odrpack" into
"from scipy.odr import __odrpack" and now it works... I'll see if this
works fine now; otherwise I'll try to reinstall everything. Thanks a lot
for your help!
Chiara

>What is the output of
>
>scipy.__version__
>
>on your machine ?
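The fix described above — pointing odrpack's internal import at the module's post-sandbox location — generalizes to a small try/except fallback. A hedged sketch (the helper name `import_first_available` is invented for illustration; the demonstration uses stdlib `json` as a stand-in so the snippet runs whether or not scipy is installed):

```python
import importlib

def import_first_available(names):
    """Return the first module from `names` that can be imported.

    Mirrors the situation in this thread: a compiled extension that
    moved from scipy.sandbox.odr to scipy.odr between releases.
    """
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (names,))

# The old sandbox location fails, so the fallback is used:
mod = import_first_available(["scipy.sandbox.odr", "json"])
print(mod.__name__)  # json
```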
>
>I suggest to remove the build directory and scipy below site-packages with
>
>rm -rf build
>rm -rf scipy
>
>Afterwards reinstall everything from scratch with
>
>python setup.py install
>
>HTH
>
>Nils
>
>
>_______________________________________________
>SciPy-user mailing list
>SciPy-user at scipy.org
>http://projects.scipy.org/mailman/listinfo/scipy-user

_________________________________________________________________
FREE pop-up blocking with the new MSN Toolbar - get it now!
http://toolbar.msn.click-url.com/go/onm00200415ave/direct/01/

From nwagner at iam.uni-stuttgart.de  Wed Feb 14 08:58:14 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 14 Feb 2007 14:58:14 +0100
Subject: [SciPy-user] scipy.optimize.leastsq error estimates
In-Reply-To: 
References: 
Message-ID: <45D31576.2060100@iam.uni-stuttgart.de>

> I changed my "from scipy.sandbox.odr import __odrpack" into "from scipy.odr
> import __odrpack" and now it works... I'll see if this works fine now;
> otherwise I'll try to reinstall everything. Thanks a lot for your help!
>
You're welcome !

Nils

From lorenzo.isella at gmail.com  Wed Feb 14 10:47:58 2007
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Wed, 14 Feb 2007 16:47:58 +0100
Subject: [SciPy-user] Newbie Question about Scipy
Message-ID: 

Dear All,
I am pretty new to Python, but it has such a good reputation that I
decided to give it a try.
I am slightly puzzled about the syntax modulename.function.
I am going through the SciPy tutorial by Oliphant (btw, is there
anywhere online a free updated version of a document of this kind?).
To use SciPy I normally do the following:

~$ ipython
Python 2.4.4 (#2, Jan 13 2007, 17:50:26)
Type "copyright", "credits" or "license" for more information.

IPython 0.7.3 -- An enhanced Interactive Python.
?       -> Introduction to IPython's features.
%magic  -> Information about IPython's 'magic' % functions.
help    -> Python's own help system.
object? -> Details about 'object'. ?object also works, ??
prints more.

In [1]: from scipy import *

However, statements like the ones in the guide:

In [12]: from integrate import quad
---------------------------------------------------------------------------
exceptions.ImportError    Traceback (most recent call last)

/home/iselllo/

ImportError: No module named integrate

do not work.
So I have to use: scipy.integrate. Similarly, the function gamma is
not recognized, but special.gamma is.
How is this chosen by the system? Then: once I have imported everything
from scipy, is importing the gamma function explicitly a necessity at all?
****************************************************************************
On a more general ground, how does Python compare with e.g. MatLab or
Octave for scientific computing? Which are the advantages and
drawbacks (sorry if this is not the right forum).

Kind Regards

Lorenzo

From akumar at iitm.ac.in  Wed Feb 14 11:16:08 2007
From: akumar at iitm.ac.in (Kumar Appaiah)
Date: Wed, 14 Feb 2007 21:46:08 +0530
Subject: [SciPy-user] Newbie Question about Scipy
In-Reply-To: 
References: 
Message-ID: <20070214161608.GA12622@localhost>

On Wed, Feb 14, 2007 at 04:47:58PM +0100, Lorenzo Isella wrote:
> However, statements like the ones in the guide:
>
> In [12]: from integrate import quad
> ---------------------------------------------------------------------------
> exceptions.ImportError    Traceback (most recent call last)
>
> /home/iselllo/
>
> ImportError: No module named integrate
>
> do not work.
> So I have to use: scipy.integrate. Similarly, the function gamma is
> not recognized, but special.gamma is.

How about

from scipy.integrate import quad
from scipy.special import gamma

That works. But most of us non-experts don't bother importing things in
bits and pieces, so I am somehow used to special., signal. etc. But of
course, it does make a difference if you have to use the function over
and over again.
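Kumar's two import styles — the fully qualified attribute lookup versus binding a single name — can be sketched with a stdlib stand-in (math instead of scipy.special, so the snippet runs without scipy installed):

```python
import math                   # like `import scipy`
from math import gamma        # like `from scipy.special import gamma`

# Both routes reach the very same function object:
print(gamma is math.gamma)                           # True
# Gamma(1/2) equals sqrt(pi):
print(abs(gamma(0.5) - math.sqrt(math.pi)) < 1e-12)  # True
```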
And here's another cheap Python trick:

    import scipy
    g = scipy.special.gamma
    print g == scipy.special.gamma  # should be True
    print g(0.5)                    # prints sqrt(pi)

So, g is special.gamma.

HTH.

Kumar
-- 
Kumar Appaiah,
462, Jamuna Hostel,
Indian Institute of Technology Madras,
Chennai - 600 036

From ryanlists at gmail.com  Wed Feb 14 11:07:32 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Wed, 14 Feb 2007 10:07:32 -0600
Subject: [SciPy-user] Newbie Question about Scipy
In-Reply-To: 
References: 
Message-ID: 

Hey Lorenzo,

Your question has to do with a feature of Python called namespaces. This
can be a little vague and weird to the newcomer, but it is a great
strength of Python. It is also very necessary because there are so many
people writing so many modules for Python. Namespaces allow two
different modules to have functions with the same name without
conflicting with one another. So, if module1 and module2 both have
functions called sqrt, module1.sqrt and module2.sqrt can be two
different functions. Using "from module1 import *" will load that
module's sqrt into the global namespace. This may seem weird, but trust
me, it is a good thing. If you googled for "python namespaces" you might
get a better explanation. There is an in-depth discussion of the concept
in the book "Learning Python" by Mark Lutz - most libraries should have
it.

The question about how python (or better scipy/numpy/matplotlib/ipython)
compares with Matlab or Octave will start a war if asked in the wrong
places. Most people on this list will tell you stories similar to mine:
I switched from Matlab to Python midway through my Ph.D. work and it was
one of the best decisions I ever made. I find writing Python code so
much faster and easier that I enjoy it quite a bit. I think the only
risk is that if you are closely attached to some Matlab toolbox that
doesn't yet exist in Python/Scipy and friends, you will need to go
through the learning process of writing some things yourself.
Fortunately, Python and the number of existing modules make this process
not so bad.

Ryan

On 2/14/07, Lorenzo Isella wrote:
> Dear All,
> I am pretty new to Python, but it has such a good reputation that I
> decided to give it a try.
> I am slightly puzzled about the syntax modulename.function.
> I am going through the SciPy tutorial by Oliphant (btw, is there
> anywhere online a free updated version of a document of this kind?).
> To use SciPy I normally do the following:
>
> ~$ ipython
> Python 2.4.4 (#2, Jan 13 2007, 17:50:26)
> Type "copyright", "credits" or "license" for more information.
>
> IPython 0.7.3 -- An enhanced Interactive Python.
> ?       -> Introduction to IPython's features.
> %magic  -> Information about IPython's 'magic' % functions.
> help    -> Python's own help system.
> object? -> Details about 'object'. ?object also works, ?? prints more.
>
> In [1]: from scipy import *
>
> However, statements like the ones in the guide:
>
> In [12]: from integrate import quad
> ---------------------------------------------------------------------------
> exceptions.ImportError                    Traceback (most recent call last)
>
> /home/iselllo/
>
> ImportError: No module named integrate
>
> do not work.
> So I have to use: scipy.integrate. Similarly, the function gamma is
> not recognized, but special.gamma is.
> How is this chosen by the system? Then: once I have import everything
> from scipy, is importing explicitly the gamma function a necessity at
> all?
> ****************************************************************************
> On a more general ground, how does Python compare with e.g. MatLab or
> Octave for scientific computing? Which are the advantages and
> drawbacks (sorry if this is not the right forum).
> Kind Regards
>
> Lorenzo
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From bryanv at enthought.com  Wed Feb 14 11:11:50 2007
From: bryanv at enthought.com (Bryan Van de Ven)
Date: Wed, 14 Feb 2007 10:11:50 -0600
Subject: [SciPy-user] Newbie Question about Scipy
In-Reply-To: 
References: 
Message-ID: <45D334C6.9060305@enthought.com>

Lorenzo,

Here is some Python documentation about modules and submodules, with
some examples:

http://docs.python.org/tut/node8.html#SECTION008400000000000000000

"from scipy import *" imports the integrate module, which means you can
use "integrate.quad(...)". If you just want to be able to do "quad(...)",
then you need either "from scipy.integrate import quad" or "from
scipy.integrate import *".

As for Python vs Matlab, I will mention one area of comparison that I am
most familiar with. If you have any need for building out an interactive
GUI data analysis tool, even an extremely modest one, Python is a clear
win. Python has available Chaco2, Matplotlib and VTK/TVTK (and others)
for displaying 2D and 3D data, and then your choice of toolkits, WX,
GTK, Qt (and others), for building GUI applications large or small. By
contrast, building even the simplest of interactive GUI applications in
Matlab is a nightmare.

Lorenzo Isella wrote:
> Dear All,
> I am pretty new to Python, but it has such a good reputation that I
> decided to give it a try.
> I am slightly puzzled about the syntax modulename.function.
> I am going through the SciPy tutorial by Oliphant (btw, is there
> anywhere online a free updated version of a document of this kind?).
> To use SciPy I normally do the following:
>
> ~$ ipython
> Python 2.4.4 (#2, Jan 13 2007, 17:50:26)
> Type "copyright", "credits" or "license" for more information.
>
> IPython 0.7.3 -- An enhanced Interactive Python.
> ?       -> Introduction to IPython's features.
> %magic  -> Information about IPython's 'magic' % functions.
> help    -> Python's own help system.
> object? -> Details about 'object'. ?object also works, ?? prints more.
>
> In [1]: from scipy import *
>
> However, statements like the ones in the guide:
>
> In [12]: from integrate import quad
> ---------------------------------------------------------------------------
> exceptions.ImportError                    Traceback (most recent call last)
>
> /home/iselllo/
>
> ImportError: No module named integrate
>
> do not work.
> So I have to use: scipy.integrate. Similarly, the function gamma is
> not recognized, but special.gamma is.
> How is this chosen by the system? Then: once I have import everything
> from scipy, is importing explicitly the gamma function a necessity at
> all?
> ****************************************************************************
> On a more general ground, how does Python compare with e.g. MatLab or
> Octave for scientific computing? Which are the advantages and
> drawbacks (sorry if this is not the right forum).
> Kind Regards
>
> Lorenzo
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From sgarcia at olfac.univ-lyon1.fr  Wed Feb 14 12:10:03 2007
From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA)
Date: Wed, 14 Feb 2007 18:10:03 +0100
Subject: [SciPy-user] str(numpy.inf)
Message-ID: <45D3426B.1050600@olfac.univ-lyon1.fr>

Hi list,

I have this problem:

str(numpy.inf) under Linux gives 'inf'
str(numpy.inf) under win32 gives '1.#INF'

It is a problem for me because, with a GUI LineEdit for example, an end
user who has to enter a float can type inf under Linux but not under
win32.

Any idea how to solve this?

Thanks a lot.
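A sketch of the kind of input normalisation this calls for: accept both
spellings when parsing, whatever platform produced them. The
'1.#INF'/'1.#IND'/'1.#QNAN' strings are the MSVC runtime's spellings, and
parse_float is a made-up helper name for illustration, not a numpy or GUI
toolkit API:

```python
def parse_float(text):
    """Parse a float, accepting both the glibc spellings ('inf', 'nan')
    and the old MSVC spellings ('1.#INF', '-1.#INF', '1.#IND', '1.#QNAN')."""
    t = text.strip().lower()
    sign = -1.0 if t.startswith('-') else 1.0
    t = t.lstrip('+-')
    if t in ('inf', 'infinity') or t.startswith('1.#inf'):
        return sign * float('inf')
    if t == 'nan' or t.startswith('1.#ind') or t.startswith('1.#qnan'):
        return float('nan')
    # Anything else is an ordinary number; let float() reject garbage.
    return float(text)

print(parse_float('1.#INF'))   # inf
print(parse_float('-1.#INF'))  # -inf
print(parse_float('inf'))      # inf
```

Going the other way (displaying a value), formatting with a check such as
"inf" for an infinite value before falling back to str() sidesteps the
C-library representation entirely.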
Sam

-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Samuel Garcia
Universite Claude Bernard LYON 1
CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels
50, avenue Tony Garnier
69366 LYON Cedex 07
FRANCE
Tél : 04 37 28 74 64
Fax : 04 37 28 76 01
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From robert.kern at gmail.com  Wed Feb 14 12:36:01 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 14 Feb 2007 11:36:01 -0600
Subject: [SciPy-user] str(numpy.inf)
In-Reply-To: <45D3426B.1050600@olfac.univ-lyon1.fr>
References: <45D3426B.1050600@olfac.univ-lyon1.fr>
Message-ID: <45D34881.3070601@gmail.com>

Samuel GARCIA wrote:
> Hi list,
>
> I have this problem:
>
> str(numpy.inf) under Linux gives 'inf'
> str(numpy.inf) under win32 gives '1.#INF'
>
> It is a problem for me because, with a GUI LineEdit for example, an end
> user who has to enter a float can type inf under Linux but not under
> win32.
>
> Any idea how to solve this?

No. We (and Python) defer to the C library for such things, and they
each use different representations.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From sgarcia at olfac.univ-lyon1.fr  Wed Feb 14 12:51:07 2007
From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA)
Date: Wed, 14 Feb 2007 18:51:07 +0100
Subject: [SciPy-user] str(numpy.inf)
In-Reply-To: <45D34881.3070601@gmail.com>
References: <45D3426B.1050600@olfac.univ-lyon1.fr> <45D34881.3070601@gmail.com>
Message-ID: <45D34C0B.3010405@olfac.univ-lyon1.fr>

Bad luck. I will write a patch in my GUI to detect the platform and
recognize inf, -inf and nan in the line-edit box. If anyone has a
suggestion, I'd welcome it, because dealing with platform specificities
is boring.
Thank you,

Sam

Robert Kern wrote:
> Samuel GARCIA wrote:
>> Hi list,
>>
>> I have this problem:
>>
>> str(numpy.inf) under Linux gives 'inf'
>> str(numpy.inf) under win32 gives '1.#INF'
>>
>> It is a problem for me because, with a GUI LineEdit for example, an end
>> user who has to enter a float can type inf under Linux but not under
>> win32.
>>
>> Any idea how to solve this?
>
> No. We (and Python) defer to the C library for such things, and they
> each use different representations.
>

-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Samuel Garcia
Universite Claude Bernard LYON 1
CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels
50, avenue Tony Garnier
69366 LYON Cedex 07
FRANCE
Tél : 04 37 28 74 64
Fax : 04 37 28 76 01
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ryanlists at gmail.com  Wed Feb 14 15:36:17 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Wed, 14 Feb 2007 14:36:17 -0600
Subject: [SciPy-user] problem with signal.residue
In-Reply-To: <20070214010607.GA5728@localhost>
References: <20070214010607.GA5728@localhost>
Message-ID: 

I will add this to my ticket, but there is also a problem with
signal.residue when the first denominator coefficient isn't 1. I think
I fixed it, and I tested my fix against Maxima's partfrac command and it
looks right. Basically, I added the line

    rscale = a[0]

near the beginning of the function and then changed the last line to

    return r/rscale, p, k

This is (nearly) identical to the approach in residuez, so that and the
agreement with Maxima give me a good feeling about it.

The .wxm file I am attaching is a wxMaxima script for the test, but it
requires a fairly recent version of wxMaxima (I am using 0.7.1).

Ryan

On 2/13/07, Kumar Appaiah wrote:
> On Tue, Feb 13, 2007 at 01:57:46PM -0600, Ryan Krauss wrote:
> > I think I have found a small bug in signal.residue and may have found
> > a simple solution.
> > The problem seems to come from polydiv requiring that the numerator
> > coefficient array be no more than one entry shorter than the
> > denominator's. If I have a denominator of s^2+3*s+2, the numerator must
> > have an s coefficient (even if that coefficient is 0) for
> > signal.residue to work:
> >
> > In [75]: a
> > Out[75]: array([1, 3, 2])
> >
> > In [76]: signal.residue([1],a)
> > ---------------------------------------------------------------------------
> > exceptions.ValueError                     Traceback (most recent call last)
> [snip]
> > In [77]: signal.residue([0,1],a)
> > Out[77]:
> > (array([ 1.+0.j, -1.+0.j]),
> >  array([-1.+0.j, -2.+0.j]),
> >  array([], dtype=float64))
> >
> > I think the simple solution is to replace line 1056 with these four lines:
> >
> > if len(b) < len(a):
> >     k = []
> > else:
> >     k,b = polydiv(b,a)
> >
> > where the last line above is the old line 1056. Basically, specify
> > that there is no k term if the len of b is less than the len of a.
> >
> > Is this too simple? What do I do to actually submit this if it is the
> > right solution?
>
> I think you are right. This seems to be a bug.
> Please register and open a ticket at
> http://projects.scipy.org/scipy/scipy and state the problem and the
> specified solution.
>
> Thanks.
>
> Kumar
> --
> Kumar Appaiah,
> 462, Jamuna Hostel,
> Indian Institute of Technology Madras,
> Chennai - 600 036
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
A non-text attachment was scrubbed...
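Taken together, the two fixes discussed in this thread (skip polydiv when
there is no direct polynomial term, and normalise the residues by the
leading denominator coefficient) can be sketched as a stand-alone routine.
simple_residue below is an illustrative reimplementation for distinct
poles only, with a made-up name; it is not the actual scipy source:

```python
import numpy as np

def simple_residue(b, a):
    """Partial-fraction expansion of B(s)/A(s), distinct poles only."""
    b = np.atleast_1d(np.asarray(b, dtype=float))
    a = np.atleast_1d(np.asarray(a, dtype=float))
    # Fix 1: only divide out a direct term when the numerator is long
    # enough; otherwise polydiv() would complain.
    if len(b) < len(a):
        k = np.array([])
    else:
        k, b = np.polydiv(b, a)
    # Fix 2: scale residues by the leading denominator coefficient.
    rscale = a[0]
    p = np.roots(a)
    # Residue at each (simple) pole: B(p_k) / prod_{j != k} (p_k - p_j),
    # then divided by a[0] since roots() works with the monic polynomial.
    r = np.array([np.polyval(b, pk) /
                  np.prod([pk - pj for pj in p if pj != pk])
                  for pk in p])
    return r / rscale, p, k

# 1/(s^2 + 3s + 2) has residues +1 at s = -1 and -1 at s = -2.
r, p, k = simple_residue([1.0], [1.0, 3.0, 2.0])
```

Doubling every denominator coefficient leaves the poles unchanged but
halves the residues, which is exactly what the rscale normalisation is for.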
Name: python_residue_test.wxm Type: application/octet-stream Size: 999 bytes Desc: not available URL: -------------- next part -------------- from scipy import * from pylab import figure, cla, clf, plot, subplot, show, ylabel, xlabel, xlim, ylim, semilogx, legend, title, savefig, yticks, grid, rcParams from IPython.Debugger import Pdb import copy, os, sys #Test 1 k=100.0 b=10.0 signal.residue([1.0],[b,k,0.0]) #Test2 num=array([1.0,3,-1,10,1,7]) den=array([15.0,3115.5,173311.5,1197300.0,2115000.0]) signal.residue(num,den) -------------- next part -------------- # Author: Travis Oliphant # signaltools.py -- 2002 import types import sigtools from scipy import special, linalg from scipy.fftpack import fft, ifft, ifftshift, fft2, ifft2 from numpy import polyadd, polymul, polydiv, polysub, \ roots, poly, polyval, polyder, cast, asarray, isscalar, atleast_1d, \ ones, sin, linspace, real, extract, real_if_close, zeros, array, arange, \ where, sqrt, rank, newaxis, argmax, product, cos, pi, exp, \ ravel, size, less_equal, sum, r_, iscomplexobj, take, \ argsort, allclose, expand_dims, unique, prod, sort, reshape, c_, \ transpose, dot, any, minimum, maximum, mean import numpy from scipy.fftpack import fftn, ifftn from scipy.misc import factorial from IPython.Debugger import Pdb _modedict = {'valid':0, 'same':1, 'full':2} _boundarydict = {'fill':0, 'pad':0, 'wrap':2, 'circular':2, 'symm':1, 'symmetric':1, 'reflect':4} def _valfrommode(mode): try: val = _modedict[mode] except KeyError: if mode not in [0,1,2]: raise ValueError, "Acceptable mode flags are 'valid' (0), 'same' (1), or 'full' (2)." val = mode return val def _bvalfromboundary(boundary): try: val = _boundarydict[boundary] << 2 except KeyError: if val not in [0,1,2] : raise ValueError, "Acceptable boundary flags are 'fill', 'wrap' (or 'circular'), \n and 'symm' (or 'symmetric')." val = boundary << 2 return val def correlate(in1, in2, mode='full'): """Cross-correlate two N-dimensional arrays. 
Description: Cross-correlate in1 and in2 with the output size determined by mode. Inputs: in1 -- an N-dimensional array. in2 -- an array with the same number of dimensions as in1. mode -- a flag indicating the size of the output 'valid' (0): The output consists only of those elements that do not rely on the zero-padding. 'same' (1): The output is the same size as the largest input centered with respect to the 'full' output. 'full' (2): The output is the full discrete linear cross-correlation of the inputs. (Default) Outputs: (out,) out -- an N-dimensional array containing a subset of the discrete linear cross-correlation of in1 with in2. """ # Code is faster if kernel is smallest array. volume = asarray(in1) kernel = asarray(in2) if rank(volume) == rank(kernel) == 0: return volume*kernel if (product(kernel.shape,axis=0) > product(volume.shape,axis=0)): temp = kernel kernel = volume volume = temp del temp val = _valfrommode(mode) return sigtools._correlateND(volume, kernel, val) def _centered(arr, newsize): # Return the center newsize portion of the array. newsize = asarray(newsize) currsize = array(arr.shape) startind = (currsize - newsize) / 2 endind = startind + newsize myslice = [slice(startind[k], endind[k]) for k in range(len(endind))] return arr[tuple(myslice)] def fftconvolve(in1, in2, mode="full"): """Convolve two N-dimensional arrays using FFT. See convolve. """ s1 = array(in1.shape) s2 = array(in2.shape) if (s1.dtype.char in ['D','F']) or (s2.dtype.char in ['D', 'F']): cmplx=1 else: cmplx=0 size = s1+s2-1 IN1 = fftn(in1,size) IN1 *= fftn(in2,size) ret = ifftn(IN1) del IN1 if not cmplx: ret = real(ret) if mode == "full": return ret elif mode == "same": if product(s1,axis=0) > product(s2,axis=0): osize = s1 else: osize = s2 return _centered(ret,osize) elif mode == "valid": return _centered(ret,abs(s2-s1)+1) def convolve(in1, in2, mode='full'): """Convolve two N-dimensional arrays. Description: Convolve in1 and in2 with output size determined by mode. 
Inputs: in1 -- an N-dimensional array. in2 -- an array with the same number of dimensions as in1. mode -- a flag indicating the size of the output 'valid' (0): The output consists only of those elements that are computed by scaling the larger array with all the values of the smaller array. 'same' (1): The output is the same size as the largest input centered with respect to the 'full' output. 'full' (2): The output is the full discrete linear convolution of the inputs. (Default) Outputs: (out,) out -- an N-dimensional array containing a subset of the discrete linear convolution of in1 with in2. """ volume = asarray(in1) kernel = asarray(in2) if rank(volume) == rank(kernel) == 0: return volume*kernel if (product(kernel.shape,axis=0) > product(volume.shape,axis=0)): temp = kernel kernel = volume volume = temp del temp slice_obj = [slice(None,None,-1)]*len(kernel.shape) val = _valfrommode(mode) return sigtools._correlateND(volume,kernel[slice_obj],val) def order_filter(a, domain, order): """Perform an order filter on an N-dimensional array. Description: Perform an order filter on the array in. The domain argument acts as a mask centered over each pixel. The non-zero elements of domain are used to select elements surrounding each input pixel which are placed in a list. The list is sorted, and the output for that pixel is the element corresponding to rank in the sorted list. Inputs: in -- an N-dimensional input array. domain -- a mask array with the same number of dimensions as in. Each dimension should have an odd number of elements. rank -- an non-negative integer which selects the element from the sorted list (0 corresponds to the largest element, 1 is the next largest element, etc.) Output: (out,) out -- the results of the order filter in an array with the same shape as in. """ domain = asarray(domain) size = domain.shape for k in range(len(size)): if (size[k] % 2) != 1: raise ValueError, "Each dimension of domain argument should have an odd number of elements." 
return sigtools._orderfilterND(a, domain, rank) def medfilt(volume,kernel_size=None): """Perform a median filter on an N-dimensional array. Description: Apply a median filter to the input array using a local window-size given by kernel_size. Inputs: in -- An N-dimensional input array. kernel_size -- A scalar or an N-length list giving the size of the median filter window in each dimension. Elements of kernel_size should be odd. If kernel_size is a scalar, then this scalar is used as the size in each dimension. Outputs: (out,) out -- An array the same size as input containing the median filtered result. """ volume = asarray(volume) if kernel_size is None: kernel_size = [3] * len(volume.shape) kernel_size = asarray(kernel_size) if len(kernel_size.shape) == 0: kernel_size = [kernel_size.item()] * len(volume.shape) kernel_size = asarray(kernel_size) for k in range(len(volume.shape)): if (kernel_size[k] % 2) != 1: raise ValueError, "Each element of kernel_size should be odd." domain = ones(kernel_size) numels = product(kernel_size,axis=0) order = int(numels/2) return sigtools._order_filterND(volume,domain,order) def wiener(im,mysize=None,noise=None): """Perform a Wiener filter on an N-dimensional array. Description: Apply a Wiener filter to the N-dimensional array in. Inputs: in -- an N-dimensional array. kernel_size -- A scalar or an N-length list giving the size of the median filter window in each dimension. Elements of kernel_size should be odd. If kernel_size is a scalar, then this scalar is used as the size in each dimension. noise -- The noise-power to use. If None, then noise is estimated as the average of the local variance of the input. Outputs: (out,) out -- Wiener filtered result with the same shape as in. 
""" im = asarray(im) if mysize is None: mysize = [3] * len(im.shape) mysize = asarray(mysize); # Estimate the local mean lMean = correlate(im,ones(mysize),1) / product(mysize,axis=0) # Estimate the local variance lVar = correlate(im**2,ones(mysize),1) / product(mysize,axis=0) - lMean**2 # Estimate the noise power if needed. if noise==None: noise = mean(ravel(lVar),axis=0) res = (im - lMean) res *= (1-noise / lVar) res += lMean out = where(lVar < noise, lMean, res) return out def convolve2d(in1, in2, mode='full', boundary='fill', fillvalue=0): """Convolve two 2-dimensional arrays. Description: Convolve in1 and in2 with output size determined by mode and boundary conditions determined by boundary and fillvalue. Inputs: in1 -- a 2-dimensional array. in2 -- a 2-dimensional array. mode -- a flag indicating the size of the output 'valid' (0): The output consists only of those elements that do not rely on the zero-padding. 'same' (1): The output is the same size as the input centered with respect to the 'full' output. 'full' (2): The output is the full discrete linear convolution of the inputs. (*Default*) boundary -- a flag indicating how to handle boundaries 'fill' : pad input arrays with fillvalue. (*Default*) 'wrap' : circular boundary conditions. 'symm' : symmetrical boundary conditions. fillvalue -- value to fill pad input arrays with (*Default* = 0) Outputs: (out,) out -- a 2-dimensional array containing a subset of the discrete linear convolution of in1 with in2. """ val = _valfrommode(mode) bval = _bvalfromboundary(boundary) return sigtools._convolve2d(in1,in2,1,val,bval,fillvalue) def correlate2d(in1, in2, mode='full', boundary='fill', fillvalue=0): """Cross-correlate two 2-dimensional arrays. Description: Cross correlate in1 and in2 with output size determined by mode and boundary conditions determined by boundary and fillvalue. Inputs: in1 -- a 2-dimensional array. in2 -- a 2-dimensional array. 
mode -- a flag indicating the size of the output 'valid' (0): The output consists only of those elements that do not rely on the zero-padding. 'same' (1): The output is the same size as the input centered with respect to the 'full' output. 'full' (2): The output is the full discrete linear convolution of the inputs. (*Default*) boundary -- a flag indicating how to handle boundaries 'fill' : pad input arrays with fillvalue. (*Default*) 'wrap' : circular boundary conditions. 'symm' : symmetrical boundary conditions. fillvalue -- value to fill pad input arrays with (*Default* = 0) Outputs: (out,) out -- a 2-dimensional array containing a subset of the discrete linear cross-correlation of in1 with in2. """ val = _valfrommode(mode) bval = _bvalfromboundary(boundary) return sigtools._convolve2d(in1, in2, 0,val,bval,fillvalue) def medfilt2d(input, kernel_size=3): """Median filter two 2-dimensional arrays. Description: Apply a median filter to the input array using a local window-size given by kernel_size (must be odd). Inputs: in -- An 2 dimensional input array. kernel_size -- A scalar or an length-2 list giving the size of the median filter window in each dimension. Elements of kernel_size should be odd. If kernel_size is a scalar, then this scalar is used as the size in each dimension. Outputs: (out,) out -- An array the same size as input containing the median filtered result. """ image = asarray(input) if kernel_size is None: kernel_size = [3] * 2 kernel_size = asarray(kernel_size) if len(kernel_size.shape) == 0: kernel_size = [kernel_size.item()] * 2 kernel_size = asarray(kernel_size) for size in kernel_size: if (size % 2) != 1: raise ValueError, "Each element of kernel_size should be odd." return sigtools._medfilt2d(image, kernel_size) def remez(numtaps, bands, desired, weight=None, Hz=1, type='bandpass', maxiter=25, grid_density=16): """Calculate the minimax optimal filter using Remez exchange algorithm. 
Description: Calculate the filter-coefficients for the finite impulse response (FIR) filter whose transfer function minimizes the maximum error between the desired gain and the realized gain in the specified bands using the remez exchange algorithm. Inputs: numtaps -- The desired number of taps in the filter. bands -- A montonic sequence containing the band edges. All elements must be non-negative and less than 1/2 the sampling frequency as given by Hz. desired -- A sequency half the size of bands containing the desired gain in each of the specified bands weight -- A relative weighting to give to each band region. type --- The type of filter: 'bandpass' : flat response in bands. 'differentiator' : frequency proportional response in bands. Outputs: (out,) out -- A rank-1 array containing the coefficients of the optimal (in a minimax sense) filter. """ # Convert type try: tnum = {'bandpass':1, 'differentiator':2}[type] except KeyError: raise ValueError, "Type must be 'bandpass', or 'differentiator'" # Convert weight if weight is None: weight = [1] * len(desired) bands = asarray(bands).copy() return sigtools._remez(numtaps, bands, desired, weight, tnum, Hz, maxiter, grid_density) def lfilter(b, a, x, axis=-1, zi=None): """Filter data along one-dimension with an IIR or FIR filter. Description Filter a data sequence, x, using a digital filter. This works for many fundamental data types (including Object type). The filter is a direct form II transposed implementation of the standard difference equation (see "Algorithm"). Inputs: b -- The numerator coefficient vector in a 1-D sequence. a -- The denominator coefficient vector in a 1-D sequence. If a[0] is not 1, then both a and b are normalized by a[0]. x -- An N-dimensional input array. axis -- The axis of the input data array along which to apply the linear filter. The filter is applied to each subarray along this axis (*Default* = -1) zi -- Initial conditions for the filter delays. 
It is a vector (or array of vectors for an N-dimensional input) of length max(len(a),len(b)). If zi=None or is not given then initial rest is assumed. SEE signal.lfiltic for more information. Outputs: (y, {zf}) y -- The output of the digital filter. zf -- If zi is None, this is not returned, otherwise, zf holds the final filter delay values. Algorithm: The filter function is implemented as a direct II transposed structure. This means that the filter implements y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + b[nb]*x[n-nb] - a[1]*y[n-1] + ... + a[na]*y[n-na] using the following difference equations: y[m] = b[0]*x[m] + z[0,m-1] z[0,m] = b[1]*x[m] + z[1,m-1] - a[1]*y[m] ... z[n-3,m] = b[n-2]*x[m] + z[n-2,m-1] - a[n-2]*y[m] z[n-2,m] = b[n-1]*x[m] - a[n-1]*y[m] where m is the output sample number and n=max(len(a),len(b)) is the model order. The rational transfer function describing this filter in the z-transform domain is -1 -nb b[0] + b[1]z + ... + b[nb] z Y(z) = ---------------------------------- X(z) -1 -na a[0] + a[1]z + ... + a[na] z """ if isscalar(a): a = [a] if zi is None: return sigtools._linear_filter(b, a, x, axis) else: return sigtools._linear_filter(b, a, x, axis, zi) def lfiltic(b,a,y,x=None): """Given a linear filter (b,a) and initial conditions on the output y and the input x, return the inital conditions on the state vector zi which is used by lfilter to generate the output given the input. If M=len(b)-1 and N=len(a)-1. Then, the initial conditions are given in the vectors x and y as x = {x[-1],x[-2],...,x[-M]} y = {y[-1],y[-2],...,y[-N]} If x is not given, its inital conditions are assumed zero. If either vector is too short, then zeros are added to achieve the proper length. The output vector zi contains zi = {z_0[-1], z_1[-1], ..., z_K-1[-1]} where K=max(M,N). 
""" N = size(a)-1 M = size(b)-1 K = max(M,N) y = asarray(y) zi = zeros(K,y.dtype.char) if x is None: x = zeros(M,y.dtype.char) else: x = asarray(x) L = size(x) if L < M: x = r_[x,zeros(M-L)] L = size(y) if L < N: y = r_[y,zeros(N-L)] for m in range(M): zi[m] = sum(b[m+1:]*x[:M-m],axis=0) for m in range(N): zi[m] -= sum(a[m+1:]*y[:N-m],axis=0) return zi def deconvolve(signal, divisor): """Deconvolves divisor out of signal. """ num = atleast_1d(signal) den = atleast_1d(divisor) N = len(num) D = len(den) if D > N: quot = []; rem = num; else: input = ones(N-D+1, float) input[1:] = 0 quot = lfilter(num, den, input) rem = num - convolve(den, quot, mode='full') return quot, rem def boxcar(M,sym=1): """The M-point boxcar window. """ return ones(M, float) def triang(M,sym=1): """The M-point triangular window. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M + 1 n = arange(1,int((M+1)/2)+1) if M % 2 == 0: w = (2*n-1.0)/M w = r_[w, w[::-1]] else: w = 2*n/(M+1.0) w = r_[w, w[-2::-1]] if not sym and not odd: w = w[:-1] return w def parzen(M,sym=1): """The M-point Parzen window. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 n = arange(-(M-1)/2.0,(M-1)/2.0+0.5,1.0) na = extract(n < -(M-1)/4.0, n) nb = extract(abs(n) <= (M-1)/4.0, n) wa = 2*(1-abs(na)/(M/2.0))**3.0 wb = 1-6*(abs(nb)/(M/2.0))**2.0 + 6*(abs(nb)/(M/2.0))**3.0 w = r_[wa,wb,wa[::-1]] if not sym and not odd: w = w[:-1] return w def bohman(M,sym=1): """The M-point Bohman window. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 fac = abs(linspace(-1,1,M)[1:-1]) w = (1 - fac)* cos(pi*fac) + 1.0/pi*sin(pi*fac) w = r_[0,w,0] if not sym and not odd: w = w[:-1] return w def blackman(M,sym=1): """The M-point Blackman window. 
""" if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 n = arange(0,M) w = 0.42-0.5*cos(2.0*pi*n/(M-1)) + 0.08*cos(4.0*pi*n/(M-1)) if not sym and not odd: w = w[:-1] return w def nuttall(M,sym=1): """A minimum 4-term Blackman-Harris window according to Nuttall. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 a = [0.3635819, 0.4891775, 0.1365995, 0.0106411] n = arange(0,M) fac = n*2*pi/(M-1.0) w = a[0] - a[1]*cos(fac) + a[2]*cos(2*fac) - a[3]*cos(3*fac) if not sym and not odd: w = w[:-1] return w def blackmanharris(M,sym=1): """The M-point minimum 4-term Blackman-Harris window. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 a = [0.35875, 0.48829, 0.14128, 0.01168]; n = arange(0,M) fac = n*2*pi/(M-1.0) w = a[0] - a[1]*cos(fac) + a[2]*cos(2*fac) - a[3]*cos(3*fac) if not sym and not odd: w = w[:-1] return w def flattop(M,sym=1): """The M-point Flat top window. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 a = [0.2156, 0.4160, 0.2781, 0.0836, 0.0069] n = arange(0,M) fac = n*2*pi/(M-1.0) w = a[0] - a[1]*cos(fac) + a[2]*cos(2*fac) - a[3]*cos(3*fac) + a[4]*cos(4*fac) if not sym and not odd: w = w[:-1] return w def bartlett(M,sym=1): """The M-point Bartlett window. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 n = arange(0,M) w = where(less_equal(n,(M-1)/2.0),2.0*n/(M-1),2.0-2.0*n/(M-1)) if not sym and not odd: w = w[:-1] return w def hanning(M,sym=1): """The M-point Hanning window. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 n = arange(0,M) w = 0.5-0.5*cos(2.0*pi*n/(M-1)) if not sym and not odd: w = w[:-1] return w hann = hanning def barthann(M,sym=1): """Return the M-point modified Bartlett-Hann window. 
""" if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 n = arange(0,M) fac = abs(n/(M-1.0)-0.5) w = 0.62 - 0.48*fac + 0.38*cos(2*pi*fac) if not sym and not odd: w = w[:-1] return w def hamming(M,sym=1): """The M-point Hamming window. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 n = arange(0,M) w = 0.54-0.46*cos(2.0*pi*n/(M-1)) if not sym and not odd: w = w[:-1] return w def kaiser(M,beta,sym=1): """Return a Kaiser window of length M with shape parameter beta. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 n = arange(0,M) alpha = (M-1)/2.0 w = special.i0(beta * sqrt(1-((n-alpha)/alpha)**2.0))/special.i0(beta) if not sym and not odd: w = w[:-1] return w def gaussian(M,std,sym=1): """Return a Gaussian window of length M with standard-deviation std. """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M + 1 n = arange(0,M)-(M-1.0)/2.0 sig2 = 2*std*std w = exp(-n**2 / sig2) if not sym and not odd: w = w[:-1] return w def general_gaussian(M,p,sig,sym=1): """Return a window with a generalized Gaussian shape. exp(-0.5*(x/sig)**(2*p)) half power point is at (2*log(2)))**(1/(2*p))*sig """ if M < 1: return array([]) if M == 1: return ones(1,'d') odd = M % 2 if not sym and not odd: M = M+1 n = arange(0,M)-(M-1.0)/2.0 w = exp(-0.5*(n/sig)**(2*p)) if not sym and not odd: w = w[:-1] return w def slepian(M,width,sym=1): """Return the M-point slepian window. """ if (M*width > 27.38): raise ValueError, "Cannot reliably obtain slepian sequences for"\ " M*width > 27.38." 
    if M < 1:
        return array([])
    if M == 1:
        return ones(1,'d')
    odd = M % 2
    if not sym and not odd:
        M = M+1
    twoF = width/2.0
    alpha = (M-1)/2.0
    m = arange(0,M)-alpha
    n = m[:,newaxis]
    k = m[newaxis,:]
    AF = twoF*special.sinc(twoF*(n-k))
    [lam,vec] = linalg.eig(AF)
    ind = argmax(abs(lam),axis=-1)
    w = abs(vec[:,ind])
    w = w / max(w)
    if not sym and not odd:
        w = w[:-1]
    return w

def hilbert(x, N=None):
    """Return the hilbert transform of x of length N.
    """
    x = asarray(x)
    if N is None:
        N = len(x)
    if N <= 0:
        raise ValueError, "N must be positive."
    if iscomplexobj(x):
        print "Warning: imaginary part of x ignored."
        x = real(x)
    Xf = fft(x,N,axis=0)
    h = zeros(N)
    if N % 2 == 0:
        h[0] = h[N/2] = 1
        h[1:N/2] = 2
    else:
        h[0] = 1
        h[1:(N+1)/2] = 2
    if len(x.shape) > 1:
        h = h[:, newaxis]
    x = ifft(Xf*h)
    return x

def hilbert2(x,N=None):
    """Return the '2-D' hilbert transform of x of length N.
    """
    x = asarray(x)
    if N is None:
        N = x.shape
    if len(N) < 2:
        if N <= 0:
            raise ValueError, "N must be positive."
        N = (N,N)
    if iscomplexobj(x):
        print "Warning: imaginary part of x ignored."
        x = real(x)
    Xf = fft2(x,N,axes=(0,1))
    h1 = zeros(N[0],'d')
    h2 = zeros(N[1],'d')
    for p in range(2):
        h = eval("h%d"%(p+1))
        N1 = N[p]
        if N1 % 2 == 0:
            h[0] = h[N1/2] = 1
            h[1:N1/2] = 2
        else:
            h[0] = 1
            h[1:(N1+1)/2] = 2
        exec("h%d = h" % (p+1), globals(), locals())
    h = h1[:,newaxis] * h2[newaxis,:]
    k = len(x.shape)
    while k > 2:
        h = h[:, newaxis]
        k -= 1
    x = ifft2(Xf*h,axes=(0,1))
    return x

def cmplx_sort(p):
    "sort roots based on magnitude."
    p = asarray(p)
    if iscomplexobj(p):
        indx = argsort(abs(p))
    else:
        indx = argsort(p)
    return take(p,indx,0), indx

def unique_roots(p,tol=1e-3,rtype='min'):
    """Determine the unique roots and their multiplicities in two lists

    Inputs:

      p -- The list of roots
      tol --- The tolerance for two roots to be considered equal.
      rtype --- How to determine the returned root from the close ones:
                'max': pick the maximum
                'min': pick the minimum
                'avg': average roots

    Outputs: (pout, mult)

      pout -- The list of sorted roots
      mult -- The multiplicity of each root
    """
    if rtype in ['max','maximum']:
        comproot = numpy.maximum
    elif rtype in ['min','minimum']:
        comproot = numpy.minimum
    elif rtype in ['avg','mean']:
        comproot = numpy.mean
    p = asarray(p)*1.0
    tol = abs(tol)
    p, indx = cmplx_sort(p)
    pout = []
    mult = []
    indx = -1
    curp = p[0] + 5*tol
    sameroots = []
    for k in range(len(p)):
        tr = p[k]
        if abs(tr-curp) < tol:
            sameroots.append(tr)
            curp = comproot(sameroots)
            pout[indx] = curp
            mult[indx] += 1
        else:
            pout.append(tr)
            curp = tr
            sameroots = [tr]
            indx += 1
            mult.append(1)
    return array(pout), array(mult)

def invres(r,p,k,tol=1e-3,rtype='avg'):
    """Compute b(s) and a(s) from partial fraction expansion: r,p,k

    If M = len(b) and N = len(a)

            b(s)     b[0] x**(M-1) + b[1] x**(M-2) + ... + b[M-1]
    H(s) = ------ = ----------------------------------------------
            a(s)     a[0] x**(N-1) + a[1] x**(N-2) + ... + a[N-1]

              r[0]       r[1]             r[-1]
          = -------- + -------- + ... + --------- + k(s)
            (s-p[0])   (s-p[1])         (s-p[-1])

    If there are any repeated roots (closer than tol), then the partial
    fraction expansion has terms like

          r[i]      r[i+1]
        -------- + ----------- + ...
                                   r[i+n-1]
                               + -----------
        (s-p[i])   (s-p[i])**2   (s-p[i])**n

    See also: residue, poly, polyval, unique_roots
    """
    extra = k
    p, indx = cmplx_sort(p)
    r = take(r,indx,0)
    pout, mult = unique_roots(p,tol=tol,rtype=rtype)
    p = []
    for k in range(len(pout)):
        p.extend([pout[k]]*mult[k])
    a = atleast_1d(poly(p))
    if len(extra) > 0:
        b = polymul(extra,a)
    else:
        b = [0]
    indx = 0
    for k in range(len(pout)):
        temp = []
        for l in range(len(pout)):
            if l != k:
                temp.extend([pout[l]]*mult[l])
        for m in range(mult[k]):
            t2 = temp[:]
            t2.extend([pout[k]]*(mult[k]-m-1))
            b = polyadd(b,r[indx]*poly(t2))
            indx += 1
    b = real_if_close(b)
    while allclose(b[0], 0, rtol=1e-14) and (b.shape[-1] > 1):
        b = b[1:]
    return b, a

def residue(b,a,tol=1e-3,rtype='avg'):
    """Compute partial-fraction expansion of b(s) / a(s).

    If M = len(b) and N = len(a)

            b(s)     b[0] s**(M-1) + b[1] s**(M-2) + ... + b[M-1]
    H(s) = ------ = ----------------------------------------------
            a(s)     a[0] s**(N-1) + a[1] s**(N-2) + ... + a[N-1]

              r[0]       r[1]             r[-1]
          = -------- + -------- + ... + --------- + k(s)
            (s-p[0])   (s-p[1])         (s-p[-1])

    If there are any repeated roots (closer than tol), then the partial
    fraction expansion has terms like

          r[i]      r[i+1]        r[i+n-1]
        -------- + ----------- + ... + -----------
        (s-p[i])   (s-p[i])**2         (s-p[i])**n

    See also: invres, poly, polyval, unique_roots
    """
    b,a = map(asarray,(b,a))
    rscale = a[0]
    k,b = polydiv(b,a)
    p = roots(a)
    r = p*0.0
    pout, mult = unique_roots(p,tol=tol,rtype=rtype)
    p = []
    for n in range(len(pout)):
        p.extend([pout[n]]*mult[n])
    p = asarray(p)
    # Compute the residue from the general formula
    indx = 0
    for n in range(len(pout)):
        bn = b.copy()
        pn = []
        for l in range(len(pout)):
            if l != n:
                pn.extend([pout[l]]*mult[l])
        an = atleast_1d(poly(pn))
        # bn(s) / an(s) is (s-po[n])**Nn * b(s) / a(s) where Nn is
        # multiplicity of pole at po[n]
        sig = mult[n]
        for m in range(sig,0,-1):
            if sig > m:
                # compute next derivative of bn(s) / an(s)
                term1 = polymul(polyder(bn,1),an)
                term2 = polymul(bn,polyder(an,1))
                bn = polysub(term1,term2)
                an = polymul(an,an)
            r[indx+m-1] = polyval(bn,pout[n]) / polyval(an,pout[n]) \
                          / factorial(sig-m)
        indx += sig
    return r/rscale, p, k

def residuez(b,a,tol=1e-3,rtype='avg'):
    """Compute partial-fraction expansion of b(z) / a(z).

    If M = len(b) and N = len(a)

            b(z)     b[0] + b[1] z**(-1) + ... + b[M-1] z**(-M+1)
    H(z) = ------ = ----------------------------------------------
            a(z)     a[0] + a[1] z**(-1) + ... + a[N-1] z**(-N+1)

                 r[0]                   r[-1]
         = --------------- + ... + ---------------- + k[0] + k[1]z**(-1) ...
           (1-p[0]z**(-1))         (1-p[-1]z**(-1))

    If there are any repeated roots (closer than tol), then the partial
    fraction expansion has terms like

             r[i]              r[i+1]                    r[i+n-1]
        --------------- + ------------------ + ... + ------------------
        (1-p[i]z**(-1))   (1-p[i]z**(-1))**2         (1-p[i]z**(-1))**n

    See also: invresz, poly, polyval, unique_roots
    """
    b,a = map(asarray,(b,a))
    gain = a[0]
    brev, arev = b[::-1],a[::-1]
    krev,brev = polydiv(brev,arev)
    if krev == []:
        k = []
    else:
        k = krev[::-1]
    b = brev[::-1]
    p = roots(a)
    r = p*0.0
    pout, mult = unique_roots(p,tol=tol,rtype=rtype)
    p = []
    for n in range(len(pout)):
        p.extend([pout[n]]*mult[n])
    p = asarray(p)
    # Compute the residue from the general formula (for discrete-time)
    #   the polynomial is in z**(-1) and the multiplication is by terms
    #   like this (1-p[i] z**(-1))**mult[i].  After differentiation,
    #   we must divide by (-p[i])**(m-k) as well as (m-k)!
    indx = 0
    for n in range(len(pout)):
        bn = brev.copy()
        pn = []
        for l in range(len(pout)):
            if l != n:
                pn.extend([pout[l]]*mult[l])
        an = atleast_1d(poly(pn))[::-1]
        # bn(z) / an(z) is (1-po[n] z**(-1))**Nn * b(z) / a(z) where Nn is
        # multiplicity of pole at po[n] and b(z) and a(z) are polynomials.
        sig = mult[n]
        for m in range(sig,0,-1):
            if sig > m:
                # compute next derivative of bn(s) / an(s)
                term1 = polymul(polyder(bn,1),an)
                term2 = polymul(bn,polyder(an,1))
                bn = polysub(term1,term2)
                an = polymul(an,an)
            r[indx+m-1] = polyval(bn,1.0/pout[n]) / polyval(an,1.0/pout[n]) \
                          / factorial(sig-m) / (-pout[n])**(sig-m)
        indx += sig
    return r/gain, p, k

def invresz(r,p,k,tol=1e-3,rtype='avg'):
    """Compute b(z) and a(z) from partial fraction expansion: r,p,k

    If M = len(b) and N = len(a)

            b(z)     b[0] + b[1] z**(-1) + ... + b[M-1] z**(-M+1)
    H(z) = ------ = ----------------------------------------------
            a(z)     a[0] + a[1] z**(-1) + ... + a[N-1] z**(-N+1)

                 r[0]                   r[-1]
         = --------------- + ... + ---------------- + k[0] + k[1]z**(-1) ...
           (1-p[0]z**(-1))         (1-p[-1]z**(-1))

    If there are any repeated roots (closer than tol), then the partial
    fraction expansion has terms like

             r[i]              r[i+1]                    r[i+n-1]
        --------------- + ------------------ + ... + ------------------
        (1-p[i]z**(-1))   (1-p[i]z**(-1))**2         (1-p[i]z**(-1))**n

    See also: residuez, poly, polyval, unique_roots
    """
    extra = asarray(k)
    p, indx = cmplx_sort(p)
    r = take(r,indx,0)
    pout, mult = unique_roots(p,tol=tol,rtype=rtype)
    p = []
    for k in range(len(pout)):
        p.extend([pout[k]]*mult[k])
    a = atleast_1d(poly(p))
    if len(extra) > 0:
        b = polymul(extra,a)
    else:
        b = [0]
    indx = 0
    brev = asarray(b)[::-1]
    for k in range(len(pout)):
        temp = []
        # Construct polynomial which does not include any of this root
        for l in range(len(pout)):
            if l != k:
                temp.extend([pout[l]]*mult[l])
        for m in range(mult[k]):
            t2 = temp[:]
            t2.extend([pout[k]]*(mult[k]-m-1))
            brev = polyadd(brev,(r[indx]*poly(t2))[::-1])
            indx += 1
    b = real_if_close(brev[::-1])
    return b, a

def get_window(window,Nx,fftbins=1):
    """Return a window of length Nx and type window.

    If fftbins is 1, create a "periodic" window ready to use with ifftshift
    and be multiplied by the result of an fft (SEE ALSO fftfreq).

    Window types:  boxcar, triang, blackman, hamming, hanning, bartlett,
                   parzen, bohman, blackmanharris, nuttall, barthann,
                   kaiser (needs beta), gaussian (needs std),
                   general_gaussian (needs power, width),
                   slepian (needs width)

    If the window requires no parameters, then it can be a string.
    If the window requires parameters, the window argument should be a tuple
    with the first argument the string name of the window, and the next
    arguments the needed parameters.
    If window is a floating point number, it is interpreted as the beta
    parameter of the kaiser window.
""" sym = not fftbins try: beta = float(window) except (TypeError, ValueError): args = () if isinstance(window, types.TupleType): winstr = window[0] if len(window) > 1: args = window[1:] elif isinstance(window, types.StringType): if window in ['kaiser', 'ksr', 'gaussian', 'gauss', 'gss', 'general gaussian', 'general_gaussian', 'general gauss', 'general_gauss', 'ggs']: raise ValueError, "That window needs a parameter -- pass a tuple" else: winstr = window if winstr in ['blackman', 'black', 'blk']: winfunc = blackman elif winstr in ['triangle', 'triang', 'tri']: winfunc = triang elif winstr in ['hamming', 'hamm', 'ham']: winfunc = hamming elif winstr in ['bartlett', 'bart', 'brt']: winfunc = bartlett elif winstr in ['hanning', 'hann', 'han']: winfunc = hanning elif winstr in ['blackmanharris', 'blackharr','bkh']: winfunc = blackmanharris elif winstr in ['parzen', 'parz', 'par']: winfunc = parzen elif winstr in ['bohman', 'bman', 'bmn']: winfunc = bohman elif winstr in ['nuttall', 'nutl', 'nut']: winfunc = nuttall elif winstr in ['barthann', 'brthan', 'bth']: winfunc = barthann elif winstr in ['flattop', 'flat', 'flt']: winfunc = flattop elif winstr in ['kaiser', 'ksr']: winfunc = kaiser elif winstr in ['gaussian', 'gauss', 'gss']: winfunc = gaussian elif winstr in ['general gaussian', 'general_gaussian', 'general gauss', 'general_gauss', 'ggs']: winfunc = general_gaussian elif winstr in ['boxcar', 'box', 'ones']: winfunc = boxcar elif winstr in ['slepian', 'slep', 'optimal', 'dss']: winfunc = slepian else: raise ValueError, "Unknown window type." params = (Nx,)+args + (sym,) else: winfunc = kaiser params = (Nx,beta,sym) return winfunc(*params) def resample(x,num,t=None,axis=0,window=None): """Resample to num samples using Fourier method along the given axis. The resampled signal starts at the same value of x but is sampled with a spacing of len(x) / num * (spacing of x). Because a Fourier method is used, the signal is assumed periodic. 
    Window controls a Fourier-domain window that tapers the Fourier spectrum
    before zero-padding to alleviate ringing in the resampled values for
    sampled signals you didn't intend to be interpreted as band-limited.

    If window is a string then use the named window.  If window is a float,
    then it represents a value of beta for a kaiser window.  If window is a
    tuple, then the first component is a string representing the window, and
    the next arguments are parameters for that window.

    Possible windows are:
        'blackman'       ('black',   'blk')
        'hamming'        ('hamm',    'ham')
        'bartlett'       ('bart',    'brt')
        'hanning'        ('hann',    'han')
        'kaiser'         ('ksr')             # requires parameter (beta)
        'gaussian'       ('gauss',   'gss')  # requires parameter (std.)
        'general gauss'  ('general', 'ggs')  # requires two parameters
                                             #     (power, width)

    The first sample of the returned vector is the same as the first sample
    of the input vector, and the spacing between samples is changed from dx
    to dx * len(x) / num.

    If t is not None, then it represents the old sample positions, and the
    new sample positions will be returned as well as the new samples.
    """
    x = asarray(x)
    X = fft(x,axis=axis)
    Nx = x.shape[axis]
    if window is not None:
        W = ifftshift(get_window(window,Nx))
        newshape = ones(len(x.shape))
        newshape[axis] = len(W)
        W = W.reshape(newshape)
        X = X*W
    sl = [slice(None)]*len(x.shape)
    newshape = list(x.shape)
    newshape[axis] = num
    N = int(numpy.minimum(num,Nx))
    Y = zeros(newshape,'D')
    sl[axis] = slice(0,(N+1)/2)
    Y[sl] = X[sl]
    sl[axis] = slice(-(N-1)/2,None)
    Y[sl] = X[sl]
    y = ifft(Y,axis=axis)*(float(num)/float(Nx))
    if x.dtype.char not in ['F','D']:
        y = y.real
    if t is None:
        return y
    else:
        new_t = arange(0,num)*(t[1]-t[0])*Nx/float(num) + t[0]
        return y, new_t

def detrend(data, axis=-1, type='linear', bp=0):
    """Remove linear trend along axis from data.

    If type is 'constant' then remove mean only.

    If bp is given, then it is a sequence of points at which to
    break a piecewise-linear fit to the data.
""" if type not in ['linear','l','constant','c']: raise ValueError, "Trend type must be linear or constant" data = asarray(data) dtype = data.dtype.char if dtype not in 'dfDF': dtype = 'd' if type in ['constant','c']: ret = data - expand_dims(mean(data,axis),axis) return ret else: dshape = data.shape N = dshape[axis] bp = sort(unique(r_[0,bp,N])) if any(bp > N): raise ValueError, "Breakpoints must be less than length of data along given axis." Nreg = len(bp) - 1 # Restructure data so that axis is along first dimension and # all other dimensions are collapsed into second dimension rnk = len(dshape) if axis < 0: axis = axis + rnk newdims = r_[axis,0:axis,axis+1:rnk] newdata = reshape(transpose(data,tuple(newdims)),(N,prod(dshape,axis=0)/N)) newdata = newdata.copy() # make sure we have a copy if newdata.dtype.char not in 'dfDF': newdata = newdata.astype(dtype) # Find leastsq fit and remove it for each piece for m in range(Nreg): Npts = bp[m+1] - bp[m] A = ones((Npts,2),dtype) A[:,0] = cast[dtype](arange(1,Npts+1)*1.0/Npts) sl = slice(bp[m],bp[m+1]) coef,resids,rank,s = linalg.lstsq(A,newdata[sl]) newdata[sl] = newdata[sl] - dot(A,coef) # Put data back in original shape. tdshape = take(dshape,newdims,0) ret = reshape(newdata,tuple(tdshape)) vals = range(1,rnk) olddims = vals[:axis] + [0] + vals[axis:] ret = transpose(ret,tuple(olddims)) return ret From tgrav at mac.com Wed Feb 14 21:16:50 2007 From: tgrav at mac.com (Tommy Grav) Date: Wed, 14 Feb 2007 21:16:50 -0500 Subject: [SciPy-user] lsq problem Message-ID: <86E3078D-C9B7-4516-B573-9C731D6FC45E@mac.com> I need to fit a gaussian profile to a set of points and would like to use scipy (or numpy) to do the least square fitting if possible. I am however unsure if the proper routines are available, so I thought I would ask to get some hints to get going in the right direction. 
The input are two 1-dimensional arrays x and flux, together with a function

def Gaussian(a,b,c,x1):
    return a*exp(-(pow(x1,2)/pow(c,2))) - c

I would like to find the values of (a,b,c) such that the difference between
the Gaussian and the fluxes is minimized. Would scipy.linalg.lstsq be the
right function to use, or is this problem not linear? (I know I could find
out this particular problem with a little research, but I am under a little
time pressure and I can not for the life of me remember my old math
classes.) If the problem is not linear, is there another function that can
be used, or do I have to code up my own lstsq function to solve the problem?

Thanks in advance for any hints to the answers.

Cheers
   Tommy

From amcmorl at gmail.com  Wed Feb 14 23:01:08 2007
From: amcmorl at gmail.com (Angus McMorland)
Date: Thu, 15 Feb 2007 17:01:08 +1300
Subject: [SciPy-user] lsq problem
In-Reply-To: <86E3078D-C9B7-4516-B573-9C731D6FC45E@mac.com>
References: <86E3078D-C9B7-4516-B573-9C731D6FC45E@mac.com>
Message-ID: 

Hi Tommy,

On 15/02/07, Tommy Grav wrote:
> I need to fit a Gaussian profile to a set of points and would like to
> use scipy (or numpy) to do the least-squares fitting if possible. I am
> however unsure if the proper routines are available, so I thought I
> would ask to get some hints to get going in the right direction.
>
> The input are two 1-dimensional arrays x and flux, together with a
> function
>
> def Gaussian(a,b,c,x1):
>     return a*exp(-(pow(x1,2)/pow(c,2))) - c
>
> I would like to find the values of (a,b,c) such that the difference
> between the Gaussian and the fluxes is minimized. Would
> scipy.linalg.lstsq be the right function to use, or is this problem
> not linear? (I know I could find out this particular problem with a
> little research, but I am under a little time pressure and I can not
> for the life of me remember my old math classes.)
> If the problem is not linear, is there another function that can be
> used, or do I have to code up my own lstsq function to solve the
> problem?
>
> Thanks in advance for any hints to the answers.

Using scipy.optimize.leastsq, this problem is pretty easy to solve. Check
the docstring for that function. Basically, you need to construct an error
function: I use the one below, but hopefully you can see how to adapt this
to your needs:

def erf(p, I, r):
    (A, k, c) = p
    return I - A * exp( -(r - c)**2 / k**2 )

then in your code:

p0 = (1,1,1) # starting guesses (anything vaguely close seems to be okay)
plsq = scipy.optimize.leastsq(erf, p0, args=(flux, x))
A = plsq[0][0]
k = plsq[0][1]
c = plsq[0][2]

I hope that helps,

A.
--
AJC McMorland, PhD Student
Physiology, University of Auckland

From gael.varoquaux at normalesup.org  Thu Feb 15 01:54:59 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 15 Feb 2007 07:54:59 +0100
Subject: [SciPy-user] lsq problem
In-Reply-To: <86E3078D-C9B7-4516-B573-9C731D6FC45E@mac.com>
References: <86E3078D-C9B7-4516-B573-9C731D6FC45E@mac.com>
Message-ID: <20070215065459.GA17713@clipper.ens.fr>

Hi Tommy,

Maybe the cookbook example at http://scipy.org/Cookbook/FittingData can
help you.

Gaël

On Wed, Feb 14, 2007 at 09:16:50PM -0500, Tommy Grav wrote:
> I need to fit a Gaussian profile to a set of points and would like to
> use scipy (or numpy) to do the least-squares fitting if possible. I am
> however unsure if the proper routines are available, so I thought I
> would ask to get some hints to get going in the right direction.

> The input are two 1-dimensional arrays x and flux, together with a
> function

> def Gaussian(a,b,c,x1):
>     return a*exp(-(pow(x1,2)/pow(c,2))) - c

> I would like to find the values of (a,b,c) such that the difference
> between the Gaussian and the fluxes is minimized. Would
> scipy.linalg.lstsq be the right function to use, or is this problem
> not linear?
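[Editor's note: Angus's leastsq recipe above can be assembled into a small
self-contained script. This is a sketch only; the synthetic data, the
parameter values, and the centred three-parameter Gaussian (amplitude a,
centre b, width c, following Angus's erf rather than Tommy's original
formula) are illustrative and not from the thread.]

```python
import numpy as np
from scipy.optimize import leastsq

def residuals(p, flux, x):
    # leastsq minimises the sum of squares of this vector over p.
    a, b, c = p
    return flux - a * np.exp(-(x - b)**2 / c**2)

# Synthetic "observed" data: a Gaussian with a=2, b=1, c=0.5 plus noise.
x = np.linspace(-2.0, 4.0, 200)
rng = np.random.RandomState(0)
flux = 2.0 * np.exp(-(x - 1.0)**2 / 0.5**2) + 0.01 * rng.randn(x.size)

p0 = (1.0, 0.0, 1.0)   # rough starting guesses
p_fit, ier = leastsq(residuals, p0, args=(flux, x))
```

Note that only the sign of c**2 matters, so the fitted width may come back
with either sign; take its absolute value if you need a positive width.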
(I know I could find out this particular problem > with a little research, but > I am under a little time pressure and I can not for the life of me > remember my old math > classes). If the problem is not linear, is there another function > that can be used or do I have > to code up my own lstsq function to solve the problem? > Thanks in advance for any hints to the answers. > Cheers > Tommy > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Gael Varoquaux, Groupe d'optique atomique, Laboratoire Charles Fabry de l'Institut d'Optique Campus Polytechnique, RD 128 91127 Palaiseau cedex FRANCE !!!! NEW Phone number !!!! Tel : 33 (0) 1 64 53 33 49 - Fax : 33 (0) 1 64 53 31 01 Labs: 33 (0) 1 64 53 33 63 - 33 (0) 1 64 53 33 62 From niklassaers at gmail.com Fri Feb 16 05:15:35 2007 From: niklassaers at gmail.com (Niklas Saers) Date: Fri, 16 Feb 2007 11:15:35 +0100 Subject: [SciPy-user] Scipy on Python 2.5 / OS X 10.4.8 Message-ID: Hi guys, has anyone successfully compiled Scipy for Python 2.5 under Mac OS X 10.4.8? I notice that the binaries on the webpage only support up to Python 2.4 so I would like to build them. 
However, running "python setup.py build" I get: g95 -L/sw/lib build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3- fat-2.5/Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.5/ Lib/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/ drfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o build/ temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o build/ temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ fortranobject.o -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -o build/ lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so /sw/lib/odcctools590/bin/ld: Undefined symbols: _PyArg_ParseTupleAndKeywords _PyCObject_AsVoidPtr _PyCObject_Type _PyComplex_Type _PyDict_SetItemString _PyErr_Clear _PyErr_Format _PyErr_NewException _PyErr_Occurred _PyErr_Print _PyErr_SetString _PyExc_ImportError _PyExc_RuntimeError _PyImport_ImportModule _PyInt_Type _PyModule_GetDict _PyNumber_Int _PyObject_GetAttrString _PySequence_Check _PySequence_GetItem _PyString_FromString _PyString_Type _PyType_IsSubtype _PyType_Type _Py_BuildValue _Py_InitModule4 __Py_NoneStruct _PyCObject_FromVoidPtr _PyDict_DelItemString _PyDict_GetItemString _PyDict_New _PyExc_AttributeError _PyExc_TypeError _PyExc_ValueError _PyMem_Free _PyObject_Str _PyObject_Type _PyString_AsString _PyString_ConcatAndDel _Py_FindMethod __PyObject_New _MAIN_ /sw/lib/odcctools590/bin/ld: Undefined symbols: _PyArg_ParseTupleAndKeywords _PyCObject_AsVoidPtr _PyCObject_Type _PyComplex_Type _PyDict_SetItemString _PyErr_Clear _PyErr_Format _PyErr_NewException _PyErr_Occurred _PyErr_Print _PyErr_SetString _PyExc_ImportError _PyExc_RuntimeError _PyImport_ImportModule _PyInt_Type _PyModule_GetDict _PyNumber_Int _PyObject_GetAttrString _PySequence_Check _PySequence_GetItem _PyString_FromString _PyString_Type _PyType_IsSubtype _PyType_Type _Py_BuildValue _Py_InitModule4 __Py_NoneStruct _PyCObject_FromVoidPtr _PyDict_DelItemString _PyDict_GetItemString _PyDict_New _PyExc_AttributeError _PyExc_TypeError 
_PyExc_ValueError
_PyMem_Free
_PyObject_Str
_PyObject_Type
_PyString_AsString
_PyString_ConcatAndDel
_Py_FindMethod
__PyObject_New
_MAIN_
error: Command "g95 -L/sw/lib build/temp.macosx-10.3-fat-2.5/build/
src.macosx-10.3-fat-2.5/Lib/fftpack/_fftpackmodule.o build/
temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o build/
temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o build/
temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o build/
temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o build/
temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/
fortranobject.o -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -o build/
lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with exit
status 1

I have no idea why it says 10.3 rather than 10.4, but perhaps this is
not the OS version? I use FFTW 2.1.5, after the website recommended the
2.1 branch over 3.0 for performance, and I'm going to be using FFTs a
lot.

Could anyone help me out? :-)

Cheers

Nik

From sgarcia at olfac.univ-lyon1.fr  Fri Feb 16 06:21:42 2007
From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA)
Date: Fri, 16 Feb 2007 12:21:42 +0100
Subject: [SciPy-user] str(numpy.inf)
In-Reply-To: <45D34881.3070601@gmail.com>
References: <45D3426B.1050600@olfac.univ-lyon1.fr>
	<45D34881.3070601@gmail.com>
Message-ID: <45D593C6.3060400@olfac.univ-lyon1.fr>

Because of that I also have a problem with pickle when I want to store a
float which is not finite. Am I the first with this problem under win32?

Sam

Robert Kern wrote:
> Samuel GARCIA wrote:
>
>> Hi list,
>>
>> I have this problem:
>> str(numpy.inf) under linux gives 'inf'
>> str(numpy.inf) under win32 gives '1.#INF'
>>
>> It is a problem for me because, with a GUI LineEdit for example, the
>> end user who has to enter a float can enter inf under linux but not
>> under win32.
>>
>> Any idea to solve this?
>>
>
> No. We (and Python) defer to the C library for such things, and they
> each use different representations.
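[Editor's note: a practical workaround for the platform-dependent spellings
Robert describes is to normalise the known MSVC renderings back to numpy's
constants before falling back to the ordinary parser. parse_float is an
illustrative name, not an API from the thread.]

```python
import numpy as np

def parse_float(text):
    # Map the old MSVC spellings of the IEEE special values ('1.#INF',
    # '1.#QNAN', '-1.#IND', ...) and the POSIX ones ('inf', 'nan') onto
    # numpy's constants, then fall back to float() for ordinary numbers.
    special = {'inf': np.inf, '+inf': np.inf, '1.#inf': np.inf,
               '-inf': -np.inf, '-1.#inf': -np.inf,
               'nan': np.nan, '1.#qnan': np.nan, '-1.#ind': np.nan}
    key = text.strip().lower()
    if key in special:
        return special[key]
    return float(text)
```

This makes a GUI field accept the same input on both platforms, and the
resulting values pickle as ordinary floats.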
> > --

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Samuel Garcia
Universite Claude Bernard LYON 1
CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels
50, avenue Tony Garnier
69366 LYON Cedex 07
FRANCE
Tél : 04 37 28 74 64
Fax : 04 37 28 76 01
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gary.pajer at gmail.com  Sat Feb 17 09:46:42 2007
From: gary.pajer at gmail.com (Gary Pajer)
Date: Sat, 17 Feb 2007 09:46:42 -0500
Subject: [SciPy-user] bose-einstein distribution ?
Message-ID: <88fe22a0702170646n57ec4e04p57c89fe835b4e4fb@mail.gmail.com>

Does the Bose-Einstein distribution exist in scipy.stats? (Perhaps as a
name I don't recognize, or a special case of another distribution?)

-gary

From meesters at uni-mainz.de  Sat Feb 17 19:28:26 2007
From: meesters at uni-mainz.de (Meesters, Christian)
Date: Sun, 18 Feb 2007 01:28:26 +0100
Subject: [SciPy-user] savitzky golay filtering
Message-ID: 

Hi,

I wanted to do Savitzky-Golay filtering on my data and came across this
piece of code:
http://www.dalkescientific.com/writings/NBN/data/savitzky_golay.py
Well, translating the necessary bits into current numpy/scipy code is
driving me crazy. Can somebody give me a hint on the "M = ..."-line,
please?
(Or is there a better way to do this filtering with scipy?)

TIA
Christian

From ckkart at hoc.net  Sat Feb 17 20:12:59 2007
From: ckkart at hoc.net (Christian Kristukat)
Date: Sun, 18 Feb 2007 10:12:59 +0900
Subject: [SciPy-user] savitzky golay filtering
In-Reply-To: 
References: 
Message-ID: <45D7A81B.9010006@hoc.net>

Hi,

Meesters, Christian wrote:
> Hi
>
> I wanted to do Savitzky-Golay filtering on my data and came across
> this piece of code:
> http://www.dalkescientific.com/writings/NBN/data/savitzky_golay.py
> Well, translating the necessary bits into current numpy/scipy code is
> driving me crazy. Can somebody give me a hint on the "M = ..."-line,
> please?

import numpy as N
..
..
M = N.dot(N.linalg.inv(N.dot(B.transpose(),B)),B.transpose())

> (Or is there a better way to do this filtering with scipy?)

I don't know.

Christian

From oliphant at ee.byu.edu  Sat Feb 17 20:47:38 2007
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Sat, 17 Feb 2007 18:47:38 -0700
Subject: [SciPy-user] savitzky golay filtering
In-Reply-To: <45D7A81B.9010006@hoc.net>
References: <45D7A81B.9010006@hoc.net>
Message-ID: <45D7B03A.5030603@ee.byu.edu>

Christian Kristukat wrote:
> Hi,
>
> Meesters, Christian wrote:
>
>> Hi
>>
>> I wanted to do Savitzky-Golay filtering on my data and came across
>> this piece of code:
>> http://www.dalkescientific.com/writings/NBN/data/savitzky_golay.py
>> Well, translating the necessary bits into current numpy/scipy code is
>> driving me crazy. Can somebody give me a hint on the "M = ..."-line,
>> please?
>>

You can always import from numpy.oldnumeric and
numpy.oldnumeric.linear_algebra

Or,

import numpy as N
B = N.mat(B)
M = (B.T*B).I * B.T
return M.A   # if you want an array returned.

But, probably what you really want is to replace the whole line with

M = N.linalg.pinv(B)

-Travis

From nwagner at iam.uni-stuttgart.de  Sun Feb 18 05:39:45 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sun, 18 Feb 2007 11:39:45 +0100
Subject: [SciPy-user] numpy/scipy on an Intel(R) Core(TM)2 CPU
Message-ID: 

Hi all,

I am going to install numpy/scipy (svn version) on an Intel(R) Core(TM)2
CPU, and I would like to build everything from scratch (including
BLAS/LAPACK/ATLAS).

Which compiler options/flags should I use for BLAS/LAPACK in that case?

Which fortran compiler, g77 or gfortran, is currently recommended to
build numpy/scipy?

BTW, I am using openSUSE 10.2.
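[Editor's note: the Savitzky-Golay exchange above (Travis's M = pinv(B)
line) can be turned into a complete smoothing filter. This is a sketch
under assumptions of my own: the function name savitzky_golay, the Vandermonde
design matrix, and the mirror padding at the edges are illustrative choices,
not from the original posts.]

```python
import numpy as np

def savitzky_golay(y, window, order):
    # Least-squares fit a polynomial of degree `order` in each sliding
    # window; the smoothed value is the fit evaluated at the window centre.
    # The normal-equations solve is done once via the pseudo-inverse,
    # exactly as Travis suggests: M = pinv(B).
    half = window // 2
    k = np.arange(-half, half + 1)
    B = np.vander(k, order + 1, increasing=True)   # columns 1, k, k**2, ...
    M = np.linalg.pinv(B)
    coeffs = M[0]            # row 0 of M evaluates the fit at k = 0
    # Mirror the signal at both ends so the output has the input's length.
    ypad = np.r_[y[half:0:-1], y, y[-2:-half-2:-1]]
    # Smoothing is then a single convolution with the fixed coefficients.
    return np.convolve(ypad, coeffs[::-1], mode='valid')
```

For input that is itself a polynomial of degree <= order, the interior
samples are reproduced exactly, which is a convenient sanity check.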
Nils From david at ar.media.kyoto-u.ac.jp Sun Feb 18 05:44:36 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 18 Feb 2007 19:44:36 +0900 Subject: [SciPy-user] numpy/scipy on an Intel(R) Core(TM)2 CPU In-Reply-To: References: Message-ID: <45D82E14.6050903@ar.media.kyoto-u.ac.jp> Nils Wagner wrote: > Hi all, > > I am going to install numpy/scipy (svn version) > on an Intel(R) Core(TM)2 CPU. And I would like to build > everything from scratch > (including BLAS/LAPACK/ATLAS). > > Which compiler options/flags should I use for BLAS/LAPACK > in that case ? > > Which fortran compiler g77/gfortran is > currently recommended to build numpy/scipy ? > > BTW, I am using openSUSE 10.2. > > Nils > For Atlas, you should use the flags set by the ATLAS build system. If you use recent ATLAS sources (3.7.* serie), there are some arch default for your CPU, I think. cheers, David From nwagner at iam.uni-stuttgart.de Sun Feb 18 09:37:09 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 18 Feb 2007 15:37:09 +0100 Subject: [SciPy-user] numpy/scipy on an Intel(R) Core(TM)2 CPU and CPU THROTTLING In-Reply-To: <45D82E14.6050903@ar.media.kyoto-u.ac.jp> References: <45D82E14.6050903@ar.media.kyoto-u.ac.jp> Message-ID: <45D86495.2070905@iam.uni-stuttgart.de> David Cournapeau wrote: > Nils Wagner wrote: > >> Hi all, >> >> I am going to install numpy/scipy (svn version) >> on an Intel(R) Core(TM)2 CPU. And I would like to build >> everything from scratch >> (including BLAS/LAPACK/ATLAS). >> >> Which compiler options/flags should I use for BLAS/LAPACK >> in that case ? >> >> Which fortran compiler g77/gfortran is >> currently recommended to build numpy/scipy ? >> >> BTW, I am using openSUSE 10.2. >> >> Nils >> >> > For Atlas, you should use the flags set by the ATLAS build system. If > you use recent ATLAS sources (3.7.* serie), there are some arch default > for your CPU, I think. 
> > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi again, I am confused about CPU THROTTLING. How can I switch off CPU throttling on SuSE Linux 10.2 ? I found a command "cpufreq-set" but I don't know how to proceed. ATLAS uses gfortran by default but I have used g77 to build BLAS/LAPACK. Can I replace gfortran in Make.inc by g77 ? I have used /usr/local/src/ATLAS/configure -Fa alg -fPIC -Si cputhrchk 0 to configure ATLAS (3.7.28) Nils http://www.kernel.org/pub/linux/utils/kernel/cpufreq/cpufreq-set.html From nvf at uwm.edu Sun Feb 18 13:53:12 2007 From: nvf at uwm.edu (Nick Fotopoulos) Date: Sun, 18 Feb 2007 12:53:12 -0600 Subject: [SciPy-user] Scipy on Python 2.5 / OS X 10.4.8 Message-ID: On 2/16/07, scipy-user-request at scipy.org wrote: > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 16 Feb 2007 11:15:35 +0100 > From: Niklas Saers > Subject: [SciPy-user] Scipy on Python 2.5 / OS X 10.4.8 > To: SciPy Users List > Message-ID: > Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed > > Hi guys, > has anyone successfully compiled Scipy for Python 2.5 under Mac OS X > 10.4.8? I notice that the binaries on the webpage only support up to > Python 2.4 so I would like to build them. 
However, running "python > setup.py build" I get: > > g95 -L/sw/lib build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3- > fat-2.5/Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.5/ > Lib/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/ > drfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o build/ > temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o build/ > temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ > fortranobject.o -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -o build/ > lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so > /sw/lib/odcctools590/bin/ld: Undefined symbols: > _PyArg_ParseTupleAndKeywords > _PyCObject_AsVoidPtr > _PyCObject_Type > _PyComplex_Type > _PyDict_SetItemString > _PyErr_Clear > _PyErr_Format > _PyErr_NewException > _PyErr_Occurred > _PyErr_Print > _PyErr_SetString > _PyExc_ImportError > _PyExc_RuntimeError > _PyImport_ImportModule > _PyInt_Type > _PyModule_GetDict > _PyNumber_Int > _PyObject_GetAttrString > _PySequence_Check > _PySequence_GetItem > _PyString_FromString > _PyString_Type > _PyType_IsSubtype > _PyType_Type > _Py_BuildValue > _Py_InitModule4 > __Py_NoneStruct > _PyCObject_FromVoidPtr > _PyDict_DelItemString > _PyDict_GetItemString > _PyDict_New > _PyExc_AttributeError > _PyExc_TypeError > _PyExc_ValueError > _PyMem_Free > _PyObject_Str > _PyObject_Type > _PyString_AsString > _PyString_ConcatAndDel > _Py_FindMethod > __PyObject_New > _MAIN_ > /sw/lib/odcctools590/bin/ld: Undefined symbols: > _PyArg_ParseTupleAndKeywords > _PyCObject_AsVoidPtr > _PyCObject_Type > _PyComplex_Type > _PyDict_SetItemString > _PyErr_Clear > _PyErr_Format > _PyErr_NewException > _PyErr_Occurred > _PyErr_Print > _PyErr_SetString > _PyExc_ImportError > _PyExc_RuntimeError > _PyImport_ImportModule > _PyInt_Type > _PyModule_GetDict > _PyNumber_Int > _PyObject_GetAttrString > _PySequence_Check > _PySequence_GetItem > _PyString_FromString > _PyString_Type > _PyType_IsSubtype > _PyType_Type > _Py_BuildValue > 
_Py_InitModule4 > __Py_NoneStruct > _PyCObject_FromVoidPtr > _PyDict_DelItemString > _PyDict_GetItemString > _PyDict_New > _PyExc_AttributeError > _PyExc_TypeError > _PyExc_ValueError > _PyMem_Free > _PyObject_Str > _PyObject_Type > _PyString_AsString > _PyString_ConcatAndDel > _Py_FindMethod > __PyObject_New > _MAIN_ > error: Command "g95 -L/sw/lib build/temp.macosx-10.3-fat-2.5/build/ > src.macosx-10.3-fat-2.5/Lib/fftpack/_fftpackmodule.o build/ > temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o build/ > temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o build/ > temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o build/ > temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o build/ > temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ > fortranobject.o -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -o build/ > lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with exit > status 1 > > I have no idea of why it says 10.3 rather than 10.4, but perhaps this > is not the OS version? I use FFTW 2.1.5 after the website recommended > the 2.1 branch for performance over 3.0, and I'm going to be using > FFTs a lot. > > Could anyone help me out? :-) I just wanted to chime in that I have had the same problem on a PPC Mac and on an Intel Mac. My workaround with the iBook (PPC) was to compile Python 2.5 from source. I'll probably do the same on my MBP (Intel), but that's rather inconvenient. It'd make me quite happy if someone could help Nik and me. As an aside, on my MBP, I am testing (for work) the Python 2.3 that shipped with OSX 10.4. Despite the numerous warnings I've found online not to use it, I have found that numpy, scipy, and matplotlib work just fine with 2.3, but I will have to do The Readline Fix in order to make interactive sessions at the command-line usable. Thanks, Nick From cygnusx1 at mac.com Sun Feb 18 17:22:08 2007 From: cygnusx1 at mac.com (Tom Bridgman) Date: Sun, 18 Feb 2007 17:22:08 -0500 Subject: [SciPy-user] More elegant solution for binning lookup?
Message-ID: I've browsed the numpy and scipy lists and available docs and haven't found an answer to this. SciPy v0.5.1, numpy v1.0rc3. I've got an array of samples through a region and a value 'R' where I want to find the proper bin in 'radius'. 'R' will generally not match any value in 'radius'. Here's the cleanest solution I found: >>> import scipy >>> import numpy >>> radius = scipy.arange(0.0, 1.5, 1.5/1000) >>> r=0.7274 >>> bin = numpy.nonzero(numpy.where(radius > r, 0, 1))[0][-1] >>> bin 484 >>> radius[484:486] array([ 0.726 , 0.7275]) But even this looks rather ugly and not very intuitive as to what I'm doing. Is there a function built-in to scipy or numpy for this? Thanks, Tom -- W.T. Bridgman, Ph.D. Physics & Astronomy From robert.kern at gmail.com Sun Feb 18 18:09:15 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 18 Feb 2007 17:09:15 -0600 Subject: [SciPy-user] Scipy on Python 2.5 / OS X 10.4.8 In-Reply-To: References: Message-ID: <45D8DC9B.9070005@gmail.com> Niklas Saers wrote: > Hi guys, > has anyone successfully compiled Scipy for Python 2.5 under Mac OS X > 10.4.8? I notice that the binaries on the webpage only support up to > Python 2.4 so I would like to build them. However, running "python > setup.py build" I get: > > g95 -L/sw/lib build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3- > fat-2.5/Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.5/ > Lib/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/ > drfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o build/ > temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o build/ > temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ > fortranobject.o -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -o build/ > lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so > /sw/lib/odcctools590/bin/ld: Undefined symbols: It looks like you have the LDFLAGS environment variable defined. 
That *overrides* all of the link flags, including the ones that distutils provides to link against the Python libraries. Don't use LDFLAGS. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Sun Feb 18 18:21:13 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 19 Feb 2007 01:21:13 +0200 Subject: [SciPy-user] More elegant solution for binning lookup? In-Reply-To: References: Message-ID: <20070218232113.GG8850@mentat.za.net> Hi Tom On Sun, Feb 18, 2007 at 05:22:08PM -0500, Tom Bridgman wrote: > I've got an array of samples through a region and a value 'R' where I > want to find the proper bin in 'radius'. 'R' will generally not > match any value in 'radius'. Would digitize do the job? digitize(x,bins) Return the index of the bin to which each value of x belongs. Each index i returned is such that bins[i-1] <= x < bins[i] if bins is monotonically increasing, or bins[i-1] > x >= bins[i] if bins is monotonically decreasing. Beyond the bounds of the bins 0 or len(bins) is returned as appropriate. Cheers Stéfan From david at ar.media.kyoto-u.ac.jp Sun Feb 18 22:35:11 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 19 Feb 2007 12:35:11 +0900 Subject: [SciPy-user] numpy/scipy on an Intel(R) Core(TM)2 CPU and CPU THROTTLING In-Reply-To: <45D86495.2070905@iam.uni-stuttgart.de> References: <45D82E14.6050903@ar.media.kyoto-u.ac.jp> <45D86495.2070905@iam.uni-stuttgart.de> Message-ID: <45D91AEF.8030809@ar.media.kyoto-u.ac.jp> Nils Wagner wrote: > David Cournapeau wrote: >> Nils Wagner wrote: >> >>> Hi all, >>> >>> I am going to install numpy/scipy (svn version) >>> on an Intel(R) Core(TM)2 CPU. And I would like to build >>> everything from scratch >>> (including BLAS/LAPACK/ATLAS).
>>> >>> Which compiler options/flags should I use for BLAS/LAPACK >>> in that case ? >>> >>> Which fortran compiler g77/gfortran is >>> currently recommended to build numpy/scipy ? >>> >>> BTW, I am using openSUSE 10.2. >>> >>> Nils >>> >>> >> For Atlas, you should use the flags set by the ATLAS build system. If >> you use recent ATLAS sources (3.7.* series), there are some arch defaults >> for your CPU, I think. >> >> cheers, >> >> David >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > Hi again, > > I am confused about CPU THROTTLING. > > How can I switch off CPU throttling on SuSE Linux 10.2 ? I don't know anything about Suse, but if the command line cpufreq-set works, you should try (as root generally) as written in the docs: cpufreq-selector -g performance cpufreq works by using so-called governors (hence the -g), and the performance one is more or less equivalent to disabling cpu throttling, as I understand it (note that I don't know much about those things; but if I do that on my laptop, it is working as expected). > > I found a command "cpufreq-set" but I don't know how to proceed. > > ATLAS uses gfortran by default but I have used g77 to build BLAS/LAPACK. ATLAS is written in C, so my understanding is that the fortran compiler is just used to build the Fortran interface to the C blas/lapack built by ATLAS. I think the key point is to always keep the same compiler (eg if you use gfortran, make sure you always use it for all the libraries used by numpy/scipy like umfpack, etc...). One way to check which one is "better" would be to check other fortran libraries you are using, I guess (eg, what is the Suse default: g77 or gfortran ?).
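The governor switch described above can be sketched as a pair of shell commands (a sketch, assuming the cpufrequtils package that provides cpufreq-set is installed; run as root, and repeat with -c <n> for each core on a multi-core machine):

```shell
# show the current governor and the available frequency range
cpufreq-info

# pin the governor to "performance", which effectively disables
# frequency throttling while ATLAS runs its timing searches
cpufreq-set -g performance
```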
cheers David From nwagner at iam.uni-stuttgart.de Mon Feb 19 02:33:43 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 19 Feb 2007 08:33:43 +0100 Subject: [SciPy-user] numpy/scipy on an Intel(R) Core(TM)2 CPU and CPU THROTTLING In-Reply-To: <45D91AEF.8030809@ar.media.kyoto-u.ac.jp> References: <45D82E14.6050903@ar.media.kyoto-u.ac.jp> <45D86495.2070905@iam.uni-stuttgart.de> <45D91AEF.8030809@ar.media.kyoto-u.ac.jp> Message-ID: <45D952D7.8040009@iam.uni-stuttgart.de> David Cournapeau wrote: > Nils Wagner wrote: > >> David Cournapeau wrote: >> >>> Nils Wagner wrote: >>> >>> >>>> Hi all, >>>> >>>> I am going to install numpy/scipy (svn version) >>>> on an Intel(R) Core(TM)2 CPU. And I would like to build >>>> everything from scratch >>>> (including BLAS/LAPACK/ATLAS). >>>> >>>> Which compiler options/flags should I use for BLAS/LAPACK >>>> in that case ? >>>> >>>> Which fortran compiler g77/gfortran is >>>> currently recommended to build numpy/scipy ? >>>> >>>> BTW, I am using openSUSE 10.2. >>>> >>>> Nils >>>> >>>> >>>> >>> For Atlas, you should use the flags set by the ATLAS build system. If >>> you use recent ATLAS sources (3.7.* series), there are some arch defaults >>> for your CPU, I think. >>> >>> cheers, >>> >>> David >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> Hi again, >> >> I am confused about CPU THROTTLING. >> >> How can I switch off CPU throttling on SuSE Linux 10.2 ?
>> > I don't know anything about Suse, but if the command line cpufreq-set > works, you should try (as root generally) as written in the docs: > > cpufreq-selector -g performance > > On SuSE Linux you have cpufreq-set -g performance > cpufreq works by using so-called governors (hence the -g), and the > performance one is more or less equivalent to disabling cpu throttling, > as I understand it (note that I don't know much about those things; but > if I do that on my laptop, it is working as expected). > >> I found a command "cpufreq-set" but I don't know how to proceed. >> >> ATLAS uses gfortran by default but I have used g77 to build BLAS/LAPACK. >> > ATLAS is written in C, so my understanding is that the fortran > compiler is just used to build the Fortran interface to the C > blas/lapack built by ATLAS. I think the key point is to always keep the > same compiler (eg if you use gfortran, make sure you always use it for > all the libraries used by numpy/scipy like umfpack, etc...). > > One way to check which one is "better" would be to check other fortran > libraries you are using, I guess (eg, what is the Suse default: g77 or > gfortran ?). > > In the end I have used g77. How do I build/install numpy/scipy with gfortran ? If I just use "python setup.py build" and "python setup.py install" to build numpy/scipy, g77 is called.
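For reference, numpy.distutils accepts an explicit Fortran compiler selection through its config_fc command; a sketch (gnu95 selects gfortran, gnu selects g77):

```shell
# see which Fortran compilers numpy.distutils detects on this machine
python setup.py config_fc --help-fcompiler

# build and install numpy/scipy with gfortran instead of g77
python setup.py config_fc --fcompiler=gnu95 build
python setup.py config_fc --fcompiler=gnu95 install
```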
Cheers, Nils > cheers > > David > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nvf at uwm.edu Mon Feb 19 18:52:43 2007 From: nvf at uwm.edu (Nick Fotopoulos) Date: Mon, 19 Feb 2007 17:52:43 -0600 Subject: [SciPy-user] Scipy on Python 2.5 / OS X 10.4.8 Message-ID: On 2/19/07, scipy-user-request at scipy.org wrote: > ------------------------------ > > Message: 3 > Date: Sun, 18 Feb 2007 17:09:15 -0600 > From: Robert Kern > Subject: Re: [SciPy-user] Scipy on Python 2.5 / OS X 10.4.8 > To: SciPy Users List > Message-ID: <45D8DC9B.9070005 at gmail.com> > Content-Type: text/plain; charset=UTF-8 > > Niklas Saers wrote: > > Hi guys, > > has anyone successfully compiled Scipy for Python 2.5 under Mac OS X > > 10.4.8? I notice that the binaries on the webpage only support up to > > Python 2.4 so I would like to build them. However, running "python > > setup.py build" I get: > > > > g95 -L/sw/lib build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3- > > fat-2.5/Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.5/ > > Lib/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/ > > drfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o build/ > > temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o build/ > > temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ > > fortranobject.o -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -o build/ > > lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so > > /sw/lib/odcctools590/bin/ld: Undefined symbols: > > It looks like you have the LDFLAGS environment variable defined. That > *overrides* all of the link flags, including the ones that distutils provides to > link against the Python libraries. Don't use LDFLAGS. Robert, At least for me, this is not the case. LDFLAGS is not set, but I get the same "ld: Undefined symbols" error, with both MacPython 2.4 and MacPython 2.5. 
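Whether the linker flags are being overridden can be checked from Python before starting a build; a minimal sketch (the helper name flag_report is just for illustration, and CFLAGS/FFLAGS are included because the build can pick those up too):

```python
import os

def flag_report(env=None):
    """Map each build-related variable to its value, or '<unset>'."""
    # distutils replaces its own link flags wholesale when LDFLAGS is
    # defined, so even an empty-but-present value can break the build
    if env is None:
        env = os.environ
    return {var: env.get(var, "<unset>")
            for var in ("LDFLAGS", "CFLAGS", "FFLAGS")}

for var, val in flag_report().items():
    print(var, "=", val)
```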
The failing command is different than Nik's, though. Here is my failing command with MacPython 2.4: error: Command "/usr/local/bin/gfortran -Wall -bundle build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/drfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zfftnd.o build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/fortranobject.o -L/opt/lscsoft/non-lsc/lib -L/usr/local/lib/gcc/i386-apple-darwin8.8.1/4.3.0 -Lbuild/temp.macosx-10.3-fat-2.4 -ldfftpack -lfftw3 -lgfortran -o build/lib.macosx-10.3-fat-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1 Numpy and scipy are fresh from SVN. gfortran is from hpc.sourceforge.net, OS is OSX 10.4, and architecture is Intel Any other ideas? Is there other useful information I could provide? My site.cfg and highlights of my python setup.py config are below. Many thanks, Nick ===================== [DEFAULT] search_static_first=true [fftw3] library_dirs = /opt/lscsoft/non-lsc/lib fftw3_libs = fftw3 include_dirs = /opt/lscsoft/non-lsc/include ===================== fftw3_info: FOUND: libraries = ['fftw3'] library_dirs = ['/opt/lscsoft/non-lsc/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/opt/lscsoft/non-lsc/include'] blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] mkl_info, djbfft_info, and umfpack_info are all NOT AVAILABLE, which seems fine by me. 
From meesters at uni-mainz.de Tue Feb 20 04:53:43 2007 From: meesters at uni-mainz.de (Christian Meesters) Date: Tue, 20 Feb 2007 10:53:43 +0100 Subject: [SciPy-user] savitzky golay filtering In-Reply-To: <45D7B03A.5030603@ee.byu.edu> References: <45D7A81B.9010006@hoc.net> <45D7B03A.5030603@ee.byu.edu> Message-ID: <200702201053.43561.meesters@uni-mainz.de> Thanks Travis & Christian! The filter not only looks as expected, the filtering also works like a charm on my data - without losing resolution. It looks like it was taken from a textbook, absolutely great! Cheers Christian From rhc28 at cornell.edu Tue Feb 20 15:02:37 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Tue, 20 Feb 2007 15:02:37 -0500 Subject: [SciPy-user] ANN: PyDSTool now compatible with numpy 1.0.1, scipy 0.5.2 and 64-bit CPUs. Message-ID: We are pleased to announce version 0.84 of PyDSTool, an open-source dynamical systems simulation, modeling, and analysis package. This long-overdue release is primarily intended to bring existing PyDSTool functionality up to date with the latest numpy and scipy releases (previous versions required scipy 0.3.2, numarray, numeric, etc). Also, PyDSTool is now compatible with 64-bit CPUs. While we have added a few new features and made several fixes, major improvements to functionality are in the pipeline for version 0.90. Please see http://pydstool.sourceforge.net for release notes and documentation, and http://sourceforge.net/projects/pydstool for downloading. As ever, please send us feedback if you have any problems with this new release or ideas and code contributions for future releases. Regards, Rob, Erik, and Drew. Center for Applied Mathematics, Cornell University. ****************** PyDSTool is an integrated simulation, modeling and analysis package for dynamical systems, written in Python (and partly in C). It is being developed at Cornell University, and the source code is available under the terms of the BSD license.
PyDSTool runs on Linux, Windows, and Macs, and aims to have a minimal number of package dependencies. From s.mientki at ru.nl Tue Feb 20 15:42:45 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Tue, 20 Feb 2007 21:42:45 +0100 Subject: [SciPy-user] ANN: PyDSTool now compatible with numpy 1.0.1, scipy 0.5.2 and 64-bit CPUs. In-Reply-To: References: Message-ID: <45DB5D45.9020108@ru.nl> Sounds GREAT ! thank you ! I just took a quick look, the comparison to SimuLink looks good, now if someone could make a comparison with Modelica ;-) cheers, Stef Mientki Rob Clewley wrote: > We are pleased to announce version 0.84 of PyDSTool, an open-source > dynamical systems simulation, modeling, and analysis package. > > This long-overdue release is primarily intended to bring existing > PyDSTool functionality up to date with the latest numpy and scipy releases > (previous versions required scipy 0.3.2, numarray, numeric, etc). > Also, PyDSTool is now compatible with 64-bit CPUs. > > While we have added a few new features and made several fixes, major > improvements to functionality are in the pipeline for version 0.90. > > Please see http://pydstool.sourceforge.net for release notes and documentation, > and http://sourceforge.net/projects/pydstool for downloading. As ever, please > send us feedback if you have any problems with this new release or ideas and > code contributions for future releases. > > Regards, > > Rob, Erik, and Drew. > Center for Applied Mathematics, > Cornell University. > > ****************** > > PyDSTool is an integrated simulation, modeling and analysis package > for dynamical systems, written in Python (and partly in C). It is > being developed at Cornell University, and the source code is > available under the terms of the BSD license. PyDSTool runs on Linux, > Windows, and Macs, and aims to have a minimal number of package > dependencies. 
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From nwagner at iam.uni-stuttgart.de Tue Feb 20 15:59:53 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 20 Feb 2007 21:59:53 +0100 Subject: [SciPy-user] ANN: PyDSTool now compatible with numpy 1.0.1, scipy 0.5.2 and 64-bit CPUs. In-Reply-To: <45DB5D45.9020108@ru.nl> References: <45DB5D45.9020108@ru.nl> Message-ID: On Tue, 20 Feb 2007 21:42:45 +0100 Stef Mientki wrote: > Sounds GREAT ! > thank you ! > I just took a quick look, > the comparison to SimuLink looks good, > now if someone could make a comparison with Modelica ;-) > > cheers, > Stef Mientki > > Rob Clewley wrote: >> We are pleased to announce version 0.84 of PyDSTool, an >>open-source >> dynamical systems simulation, modeling, and analysis >>package. >> >> This long-overdue release is primarily intended to bring >>existing >> PyDSTool functionality up to date with the latest numpy >>and scipy releases >> (previous versions required scipy 0.3.2, numarray, >>numeric, etc). >> Also, PyDSTool is now compatible with 64-bit CPUs. >> >> While we have added a few new features and made several >>fixes, major >> improvements to functionality are in the pipeline for >>version 0.90. >> >> Please see http://pydstool.sourceforge.net for release >>notes and documentation, >> and http://sourceforge.net/projects/pydstool for >>downloading. As ever, please >> send us feedback if you have any problems with this new >>release or ideas and >> code contributions for future releases. >> >> Regards, >> >> Rob, Erik, and Drew. >> Center for Applied Mathematics, >> Cornell University. >> >> ****************** >> >> PyDSTool is an integrated simulation, modeling and >>analysis package >> for dynamical systems, written in Python (and partly in >>C). 
It is >> being developed at Cornell University, and the source >>code is >> available under the terms of the BSD license. PyDSTool >>runs on Linux, >> Windows, and Macs, and aims to have a minimal number of >>package >> dependencies. >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Is there a way to get PyDSTool via cvs ? I cannot find any module at http://pydstool.cvs.sourceforge.net/pydstool/ Am I missing something ? Nils From rhc28 at cornell.edu Wed Feb 21 11:30:48 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Wed, 21 Feb 2007 11:30:48 -0500 Subject: [SciPy-user] ANN: PyDSTool now compatible with numpy 1.0.1, scipy 0.5.2 and 64-bit CPUs. In-Reply-To: References: <45DB5D45.9020108@ru.nl> Message-ID: There is no CVS repository at Sourceforge. We are moving to trac with our local SVN repository and you can already get the source from http://jay.cam.cornell.edu/svn. Rob From nwagner at iam.uni-stuttgart.de Wed Feb 21 11:36:17 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 21 Feb 2007 17:36:17 +0100 Subject: [SciPy-user] ANN: PyDSTool now compatible with numpy 1.0.1, scipy 0.5.2 and 64-bit CPUs. In-Reply-To: References: <45DB5D45.9020108@ru.nl> Message-ID: <45DC7501.5020304@iam.uni-stuttgart.de> Rob Clewley wrote: > There is no CVS repository at Sourceforge. We are moving to trac with > our local SVN repository and you can already get the source from > http://jay.cam.cornell.edu/svn. > > Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Many thanks! I assume it is svn co http://jay.cam.cornell.edu/svn/PyDSTool/trunk/PyDSTool/ Is that correct ? 
Nils From cclarke at chrisdev.com Wed Feb 21 13:51:18 2007 From: cclarke at chrisdev.com (Christopher Clarke) Date: Wed, 21 Feb 2007 14:51:18 -0400 Subject: [SciPy-user] Sort and N dimensional array by 1d array Message-ID: <433D0017-04DE-46D2-B665-8304CD9A4742@chrisdev.com> Hi, I am looking for an efficient way to put a two-dimensional array in the same order as a 1-d array. In the case of arrays key, b and c: key=array([1,3,6,0,4]) b = array([6,7,8,9,11]) c=array([[6,10],[7,11],[8,6],[2,5],[21,55]]) ixs=argsort(key) sb=take(b,ixs) However, when I do sc=take(c,ixs) sc.shape=(5,) (I've lost the second column). Does this mean I have to do take(c[:,0],ixs) and take(c[:,1],ixs) and concatenate the results? I know I am supposed to figure out how to specify the 2-d sorted indices for the 2-d array, but??? Regards Chris From jks at iki.fi Wed Feb 21 14:08:10 2007 From: jks at iki.fi (=?iso-8859-1?Q?Jouni_K=2E_Sepp=E4nen?=) Date: Wed, 21 Feb 2007 21:08:10 +0200 Subject: [SciPy-user] Sort and N dimensional array by 1d array References: <433D0017-04DE-46D2-B665-8304CD9A4742@chrisdev.com> Message-ID: Christopher Clarke writes: > However, when I do > sc=take(c,ixs) > sc.shape=(5,) (I've lost the second column) Does take(c,ixs,axis=0) do what you want? -- Jouni K. Seppänen http://www.iki.fi/jks From cclarke at chrisdev.com Wed Feb 21 14:18:52 2007 From: cclarke at chrisdev.com (Christopher Clarke) Date: Wed, 21 Feb 2007 15:18:52 -0400 Subject: [SciPy-user] Sort and N dimensional array by 1d array In-Reply-To: References: <433D0017-04DE-46D2-B665-8304CD9A4742@chrisdev.com> Message-ID: <694BF6C1-2BA2-468C-8800-C5FD83F630FF@chrisdev.com> Er, yes! Embarrassing: I just checked the code. For some reason I used axis=1 and was getting nonsense, so I kept thinking that there was something more elaborate. Thanks a lot Regards Chris On 21 Feb 2007, at 15:08, Jouni K.
Seppänen wrote: > Christopher Clarke writes: > >> However, when I do >> sc=take(c,ixs) >> sc.shape=(5,) (I've lost the second column) > > Does take(c,ixs,axis=0) do what you want? > > -- > Jouni K. Seppänen > http://www.iki.fi/jks > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From rhc28 at cornell.edu Wed Feb 21 14:55:11 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Wed, 21 Feb 2007 14:55:11 -0500 Subject: [SciPy-user] ANN: PyDSTool now compatible with numpy 1.0.1, scipy 0.5.2 and 64-bit CPUs. In-Reply-To: <45DC7501.5020304@iam.uni-stuttgart.de> References: <45DB5D45.9020108@ru.nl> <45DC7501.5020304@iam.uni-stuttgart.de> Message-ID: > svn co http://jay.cam.cornell.edu/svn/PyDSTool/trunk/PyDSTool/ > > Is that correct ? Yes, indeed it is. From fonnesbeck.mailing.lists at gmail.com Thu Feb 22 11:45:37 2007 From: fonnesbeck.mailing.lists at gmail.com (Christopher Fonnesbeck) Date: Thu, 22 Feb 2007 11:45:37 -0500 Subject: [SciPy-user] nan bug in distributions.norm.cdf Message-ID: <563dd7570702220845m58ab43adr9ace08b3a5b1097@mail.gmail.com> For some reason, perfectly valid normal random variates return a nan when passed to the normal cdf in the stats.distributions package: In [10]: from scipy.stats import distributions as d ... In [31]: d.norm.cdf(-0.73646593092) Out[31]: array(nan) In [32]: d.norm.cdf(-0.7) Out[32]: array(0.24196365222307303) In [33]: d.norm.cdf(-0.8) Out[33]: array(0.21185539858339669) Simply rounding this value makes it work. Not sure why this happens. Using a relatively recent svn build on OSX. cf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Thu Feb 22 12:15:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Feb 2007 11:15:29 -0600 Subject: [SciPy-user] nan bug in distributions.norm.cdf In-Reply-To: <563dd7570702220845m58ab43adr9ace08b3a5b1097@mail.gmail.com> References: <563dd7570702220845m58ab43adr9ace08b3a5b1097@mail.gmail.com> Message-ID: <45DDCFB1.4050103@gmail.com> Christopher Fonnesbeck wrote: > For some reason, perfectly valid normal random variates return a nan > when passed to the normal cdf in the stats.distributions package: > > In [10]: from scipy.stats import distributions as d > ... > In [31]: d.norm.cdf(-0.73646593092) > Out[31]: array(nan) > > In [32]: d.norm.cdf(-0.7) > Out[32]: array(0.24196365222307303 ) > > In [33]: d.norm.cdf(-0.8) > Out[33]: array(0.21185539858339669) > > Simply rounding this value makes it work. Not sure why this happens. > Using a relatively recent svn build on OSX. Current SVN on Intel OS X: Python 2.5 (r25:51918, Sep 19 2006, 08:49:13) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from scipy import stats >>> stats.norm.cdf(-0.73646593092) array(0.23072359685139249) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From bryanv at enthought.com Thu Feb 22 11:56:21 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Thu, 22 Feb 2007 10:56:21 -0600 Subject: [SciPy-user] nan bug in distributions.norm.cdf In-Reply-To: <563dd7570702220845m58ab43adr9ace08b3a5b1097@mail.gmail.com> References: <563dd7570702220845m58ab43adr9ace08b3a5b1097@mail.gmail.com> Message-ID: <45DDCB35.4020502@enthought.com> I just tried this on OSX (Intel MacBook, Darwin) and I don't see the problem: In [1]: from scipy.stats import distributions as d In [2]: d.norm.cdf(-0.73646593092) Out[2]: array(0.23072359685139249) In [3]: import scipy In [4]: scipy.__version__ Out[4]: '0.5.3.dev2409' What platform/version are you using? Christopher Fonnesbeck wrote: > For some reason, perfectly valid normal random variates return a nan > when passed to the normal cdf in the stats.distributions package: > > In [10]: from scipy.stats import distributions as d > ... > In [31]: d.norm.cdf(-0.73646593092) > Out[31]: array(nan) > > In [32]: d.norm.cdf(-0.7) > Out[32]: array(0.24196365222307303 ) > > In [33]: d.norm.cdf(-0.8) > Out[33]: array(0.21185539858339669) > > Simply rounding this value makes it work. Not sure why this happens. > Using a relatively recent svn build on OSX. > > cf > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From rshepard at appl-ecosys.com Thu Feb 22 13:17:15 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Thu, 22 Feb 2007 10:17:15 -0800 (PST) Subject: [SciPy-user] Need Advice With Arrays and Calculating Eigenvectors Message-ID: As a newcomer to python, NumPy, and SciPy I need to learn how to most efficiently manipulate data. 
Today's need is reducing a list of tuples to three smaller lists of tuples, creating a symmetrical matrix from each list, and calculating the Eigenvector of each of the three symmetrical matrices. The starting list contains 9 tuples. Each tuple has 30 items: a category name, a subcategory name, and 28 floats. These were selected from a database, and a tuple looks like this one: (u'soc', u'pro', 1.3196923076923075, 3.8109999999999999, 1.6943846153846154, 2.7393076923076922, 3.825538461538462, 5.0640769230769234, 3.609923076923077, 3.1429999999999998, 1.5936153846153849, 1.4893846153846153, 2.6563076923076929, 2.2156923076923074, 3.7973076923076921, 2.6884615384615387, 2.7008461538461543, 3.4992307692307687, 2.3813846153846154, 3.2199230769230769, 1.7726923076923078, 2.9855384615384613, 2.8829230769230771, 3.7862307692307695, 2.3791538461538462, 4.0949230769230773, 2.8703846153846153, 2.8296923076923073, 3.319230769230769, 1.8083076923076922) The three 'soc' subcategories need to have each of the 28 floats averaged and assigned to another tuple for 'soc'; same with the other two categories. That produces a list of three tuples, each with 29 items. Each of these 28 floats represents the average of a pair-wise comparison of values (in the non-numeric sense). So the first float above represents the cell (1,2), the second float represents the value of the cell (1,3) and so on. The diagonal of the matrix is 1. When I have these three symmetrical matrices, I want to call eigen() on each one to calculate the principal Eigenvector. I can think of indirect ways of doing all this, but I'm sure that there are much more efficient approaches known to those who have done this before. So, I'd like your suggestions and recommendations. Of course, if I've not clearly explained my needs, please ask. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.
| Accelerator(TM)
Voice: 503-667-4517 Fax: 503-667-8863

From car at melix.org Thu Feb 22 13:51:21 2007
From: car at melix.org (Charles-Antoine Robelin)
Date: Thu, 22 Feb 2007 10:51:21 -0800
Subject: [SciPy-user] io.read_array with strings
Message-ID: 

I have been using io.read_array successfully to read ASCII files containing integers and floats, and I would like to import strings into arrays as well.

I tried with io.read_array, but did not get it to work. If I create an array manually (e.g., numpy.array([['a1', 'd3', 'gg'],['wq', 'ty', 'e']])), the type (dtype) of its elements is '|S4', so I suspect numpy.arrays can handle strings.

However, io.read_array(, separator=',') on the following file:

a1, d3, gg
wq, ty, e

returns an array of floats with the correct shape, containing the numbers it could find (a1 -> 1.; d3 -> 3.) and 0. where no number could be found. I could not find how to force the type "strings," such as atype='' in the call of read_array.

Is importing strings possible with io.read_array, or with another function, without having to parse manually?

Thanks in advance.

From fonnesbeck.mailing.lists at gmail.com Thu Feb 22 15:17:57 2007
From: fonnesbeck.mailing.lists at gmail.com (Christopher Fonnesbeck)
Date: Thu, 22 Feb 2007 15:17:57 -0500
Subject: [SciPy-user] nan bug in distributions.norm.cdf
In-Reply-To: <45DDCB35.4020502@enthought.com>
References: <563dd7570702220845m58ab43adr9ace08b3a5b1097@mail.gmail.com> <45DDCB35.4020502@enthought.com>
Message-ID: <563dd7570702221217q3de21385v84fdf82904510588@mail.gmail.com>

On 2/22/07, Bryan Van de Ven wrote:
> I just tried this on OSX (Intel MacBook, Darwin) and I don't see the
> problem:
>
> In [1]: from scipy.stats import distributions as d
>
> In [2]: d.norm.cdf(-0.73646593092)
> Out[2]: array(0.23072359685139249)
>
> In [3]: import scipy
>
> In [4]: scipy.__version__
> Out[4]: '0.5.3.dev2409'
>
> What platform/version are you using?

In [61]: scipy.__version__
Out[61]: '0.5.3.dev2095'

on PPC.
It seems to occur every so often for a variety of innocuous-looking values. I will build anew from a more recent svn update and see if the problem goes away.

cf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fperez.net at gmail.com Thu Feb 22 15:28:33 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 22 Feb 2007 13:28:33 -0700
Subject: [SciPy-user] Fwd: [sage-devel] Fwd: (Summer Of Code) are you interested in numerical optimization (+solving non-smooth non-linear systems of equations)?
In-Reply-To: <85e81ba30702221200o6ff005d7n658e70098a1befd8@mail.gmail.com>
References: <45DD68B9.10801@ukr.net> <85e81ba30702221200o6ff005d7n658e70098a1befd8@mail.gmail.com>
Message-ID: 

[ Forwarded from the SAGE dev list, since this may well be right up someone's alley from the scipy crowd ]

---------- Forwarded message ----------
From: William Stein
Date: Feb 22, 2007 1:00 PM
Subject: [sage-devel] Fwd: (Summer Of Code) are you interested in numerical optimization (+solving non-smooth non-linear systems of equations)?
To: sage-devel at googlegroups.com, suvrit at cs.utexas.edu

Does anybody have any thoughts about this potentially excellent idea of a package that could help SAGE?

---------- Forwarded message ----------
From: dmitrey
Date: Feb 22, 2007 1:56 AM
Subject: (Summer Of Code) are you interested in numerical optimization (+solving non-smooth non-linear systems of equations)?
To: wstein at gmail.com

Hello William Stein! I found your email address at http://wiki.python.org/moin/SummerOfCode/Mentors

As far as I understood, there are very few constrained solvers for Python. In Google I failed to find anything but CVXOPT, which consists mostly of wrappers to the commercial mosek, and some LP/MIP wrappers to GNU C- or f- code (and some optimization routines from scipy, of course).

So I'm a last-year post-graduate student (institute of cybernetics, Ukraine national science academy, optimization department).
Our department has researched optimization methods for non-smooth (& noisy) functions since 1964 or so (under the leadership of academician Naum Z. Shor until 2002, when he passed away), and a parallel department under the leadership of academician I. Sergienko & dr. V. Shilo researches combinatorial optimization problems (something like matlab bintprog, MAXCUT, etc.; some weeks ago they published an article about their GRASP-based code that won a comparison vs CPLEX & some other commercial solvers). So all our software is open source & free, mostly fortran & C written.

3-4 months ago I began to write (in m-files) OpenOpt for MATLAB/Octave (the first version, 0.15, was released on November 25, 2006). It's equivalent to the commercial TOMLAB or GAMS (currently puny of course, but free (GNU GPL2)) and contains 4 global solvers that I connected to the OpenOpt environment from the MathWorks File Exchange, 2 of my own local nonsmooth solvers - ShorEllipsoid for nVars = 1...10 & ralg for nVars = 1...1000 - and nonSmoothSolve, a MATLAB fsolve equivalent for non-smooth & noisy functions. There is a good comparison in Examples/nonSmoothSolveEx.m, which shows that fsolve fails even on low-noise or mildly non-smooth functions, but nonSmoothSolve does not.

ralg & ShorEllipsoid are MATLAB equivalents to fminsearch; however, they can handle (as can nonSmoothSolve):

lb<=x<=ub; Ax<=b; Aeq*x=beq; c(x)<=0; h(x)=0;
%as far as I understood from CVXOPT documentation it can't handle these constraints - see http://www.ee.ucla.edu/~vandenbe/cvxopt/doc/e-nlcp.html

as well as gradients or subgradients df, dc, dh of f, c, h. MATLAB fminsearch can't handle anything mentioned above, and in 95% of cases it loses the comparison to ralg (but I can't say I tried too many examples). Just try, for example,

f(x)=sum(abs(x).^1.2.^(0:(length(x)-1))); x0 = cos(1:60).';

or Lemarechal.m from OpenOpt/test (convex, continuous, non-smooth). These and some more examples are in OpenOpt/Examples; see ooexample5.m, ooexample2.m, and others.
This directory also contains some pictures of convergence, automatically generated by the files. Also, OpenOpt performs auto scaling (but I have not tested it properly yet); providing patterns of f, c, h (when no (sub)gradient is provided by the user) can greatly speed up calculations; there is the possibility of parallel calculations while obtaining df/dx numerically (via MATLAB dfeval; Octave users must provide a similar func in prob.parallel.fun); and some more features.

So are you interested in a Python version of OpenOpt? If yes, I probably would be able to contact the other Kiev institute, the Institute of Systems Analysis, where a group under the leadership of academician Pshenichniy (who passed away some months ago) & dr. Nikitin develops smooth optimization algorithms, and their IP-based smooth solvers (constrained, of course; including 2nd-order solvers) are considered to be among the best. So I probably would be able to write for you a Python OpenOpt version (GNU GPL2) with an essential equivalent of MATLAB fmincon, as well as ralg, ShorEllipsoid, bintprog; drawing pictures of convergence, using patterns of dependences, parallel obtaining of (sub)gradients of f(), c(), h(); network problem solvers, etc.

If you have any questions, or can add any financial support, or want to look at my CV (I have about 1-1.5 years of Python experience & 3-4 years of optimization in MATLAB), or anything else - you can contact me via email or icq 275 - 976 - 670 (invisible). The more financial support can be obtained, the more time I can spend on the OpenOpt Python version, & the more ICYB optimization department workers can be involved in the OpenOpt for Python development (any salary amount is significant for the Ukrainian workers).
If I gain enough money, I propose:
- creating the same environment for Python as is done for MATLAB/Octave (1-1.5 months)
- writing the ralg() & ShorEllipsoid() solvers (unconstrained: ~1 week; constrained: +2-3 weeks)
- writing nonSmoothSolve(): ~1-2 weeks
- writing a MATLAB bintprog equivalent (f*x->min, A*x<=b, Aeq*x=beq) based on dr. Shilo's (& others') version of GRASP (works better than current CPLEX!): ~2 weeks
- writing a MATLAB fmincon equivalent (smooth constrained optimization, c(x)<=0, h(x)=0, linear constraints + 1st & 2nd derivatives) based on the works of dr. Nikitin & academician Pshenichniy (I can't estimate the time for now; I must contact him first)
--------
+ subsequent implementation of other solvers, their upgrades & maintenance.

I guess in the future you could easily connect the py-code to the SAGE project, for example as you did with the MAXIMA package.

Some links:
ftp where OpenOpt versions are stored: http://www.box.net/public/6bsuq765t4 (you had better download OpenOpt0.36.tar.bz2 from here)
my page at the MATLAB exchange area: http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=13115&objectType=file

Let me attach a graphic file generated by the OpenOpt ooexample3.m

WBR, Dmitrey

--
William Stein
Associate Professor of Mathematics
University of Washington

--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel at googlegroups.com
To unsubscribe from this group, send email to sage-devel-unsubscribe at googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/
-~----------~----~----~----~------~----~------~--~---

From v-nijs at kellogg.northwestern.edu Thu Feb 22 15:45:26 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Thu, 22 Feb 2007 14:45:26 -0600
Subject: [SciPy-user] io.read_array with strings
In-Reply-To: 
Message-ID: 

Charles,

You could read the array in using
Python's csv module. This will return an array of strings. Then loop through each column and convert it to an appropriate type using something like:

def csvconvert(col):
    try:
        return col.astype('i')
    except ValueError:
        try:
            return col.astype('f')
        except ValueError:
            return col

The issue is how to store the returned columns. A normal array can, as far as I know, only hold one data type. You could use a recarray or store the data in a dictionary. I have used both approaches in a data base class I posted on the cookbook page: http://www.scipy.org/Cookbook/dbase

Vincent

On 2/22/07 12:51 PM, "Charles-Antoine Robelin" wrote:

> I have been using io.read_array successfully to read ASCII
> files containing integers and floats, and I would like to
> import strings into arrays as well.
>
> I tried with io.read_array, but did not get it to work:
> If I create an array manually (i.e., numpy.array([['a1',
> 'd3', 'gg'],['wq', 'ty', 'e']])), the type (dtype) of its
> elements is '|S4', so I suspect numpy.arrays can handle
> strings.
>
> However, io.read_array(, separator=',') on the
> following file:
> a1, d3, gg
> wq, ty, e
> returns an array of floats with the correct shape,
> containing the numbers it could find (a1 -> 1.; d3 -> 3.)
> and 0. where no number could be found.
> I could not find how to force the type "strings," such as
> atype='' in the call of read_array.
>
> Is importing strings possible with io.read_array, or with
> another function, without having to parse manually?
>
> Thanks in advance.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
--

From jraddison at gmail.com Fri Feb 23 01:22:16 2007
From: jraddison at gmail.com (Jason Addison)
Date: Thu, 22 Feb 2007 20:22:16 -1000
Subject: [SciPy-user] Mac OS X 10.4 can not import from scipy.linalg
Message-ID: <4ae645d30702222222k6bd2c9d6k2309949b3c4e64f9@mail.gmail.com>

I installed the prebuilts:

MacPython
  python-2.4.4-macosx2006-10-18.dmg
  MacPython.mpkg

SciPy
  ScipySuperpack-Intel-10.4-py2.4.dmg
  PyMC-1.1-py2.4-macosx10.4.mpkg
  gfortranCompleteInstaller.mpkg
  matplotlib-0.87.7-py2.4-macosx10.4.mpkg
  numpy-1.0.2.dev3522-py2.4-macosx10.4.mpkg
  scipy-0.5.3.dev2630-py2.4-macosx10.4.mpkg

I'm using a MacBook Pro with Mac OS 10.4.8, with fink and the Mac Developer Tools installed. After installing, I tried the tutorial:

jra$ python
Python 2.4.4 (#1, Oct 18 2006, 10:34:39)
[GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import matrix
>>> from scipy.linalg import inv, det, eig
Traceback (most recent call last):
  File "", line 1, in ?
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ?
    from basic import *
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ?
    from lapack import get_lapack_funcs
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ?
    from scipy.linalg import flapack
ImportError: Inappropriate file type for dynamic loading
>>>

Searching for possibly relevant conflicting files I find stuff like:

lapack: clapack.h in vecLib.framework
blas: blas.hpp in ublas from boost numeric

Does anyone have any idea on how to fix this? Is it supposed to work? Do I have something else installed that is conflicting?
Thanks for your help ... jra

From giorgio.luciano at chimica.unige.it Fri Feb 23 03:57:45 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Fri, 23 Feb 2007 09:57:45 +0100
Subject: [SciPy-user] Clipboard more friendly than IDE
Message-ID: <45DEAC89.9020908@chimica.unige.it>

Sorry for the easy question; I've searched the web but found nothing "really" helpful. Is there some more user-friendly editor for Windows with a workspace to copy and paste into from gnumeric/excel, like the matlab workspace? I'm using orange, which is wonderful, but if you have a lot of arrays to copy and paste it is not as comfortable as a simple command line x=[(ctrl+v)]. Thanks in advance to all.

From robert.vergnes at yahoo.fr Fri Feb 23 09:40:24 2007
From: robert.vergnes at yahoo.fr (Robert VERGNES)
Date: Fri, 23 Feb 2007 15:40:24 +0100 (CET)
Subject: [SciPy-user] RE : Clipboard more friendly than IDE
In-Reply-To: <45DEAC89.9020908@chimica.unige.it>
Message-ID: <20070223144024.31833.qmail@web27408.mail.ukl.yahoo.com>

Hello,

You can look at: http://sourceforge.net/projects/qme-dev/ But it is in development. Could you give some info about orange?

Best Regards,
Robert

Giorgio Luciano a écrit : Sorry for the easy question; I've searched the web but found nothing "really" helpful. Is there some more user-friendly editor for Windows with a workspace to copy and paste into from gnumeric/excel, like the matlab workspace? I'm using orange, which is wonderful, but if you have a lot of arrays to copy and paste it is not as comfortable as a simple command line x=[(ctrl+v)]. Thanks in advance to all.
I guess that a workspace like matlab for numpy/scipy would make a lot of people switch from matlab more easily :)))

Giorgio
_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oliver.tomic at matforsk.no Fri Feb 23 09:50:43 2007
From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no)
Date: Fri, 23 Feb 2007 15:50:43 +0100
Subject: [SciPy-user] error in std?
Message-ID: 

Hi list,

we are in the process of switching our application from:

(OLD configuration)
Python 2.4.2
scipy 0.49
numpy 0.98

to:

(NEW configuration)
Python 2.5
scipy 0.52
numpy 1.0.1

When I do the following in the OLD configuration:

data = array([1,2,3,4,5,6])
stand = std(data)

the result for stand is 1.870828693386..., which is exactly what it should be. However, when I do exactly the same under the NEW configuration, the result for stand is 1.70782512766.

I am clueless. Has anyone experienced the same problem? Any help appreciated.

Oliver

From olivetti at itc.it Fri Feb 23 10:00:39 2007
From: olivetti at itc.it (Emanuele Olivetti)
Date: Fri, 23 Feb 2007 16:00:39 +0100
Subject: [SciPy-user] error in std?
In-Reply-To: 
References: 
Message-ID: <45DF0197.5080107@itc.it>

oliver.tomic at matforsk.no wrote:
> Hi list,
...
> the result for stand is 1.870828693386..., which is exactly what it should
> be.
>
> However, when I do exactly the same under the NEW configuration the result
> for stand is 1.70782512766.
>
> I am clueless. Has anyone experienced the same problem? Any help
> appreciated.
There was a post on Numpy-discussion about why std divides by N instead of N-1 as of some recent release:

http://projects.scipy.org/pipermail/numpy-discussion/2006-November/024821.html

but there was no answer. Let's see what happens this time ;)

Emanuele

From giorgio.luciano at chimica.unige.it Fri Feb 23 10:05:27 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Fri, 23 Feb 2007 16:05:27 +0100
Subject: [SciPy-user] error in std?
In-Reply-To: <45DF0197.5080107@itc.it>
References: <45DF0197.5080107@itc.it>
Message-ID: <45DF02B7.5090908@chimica.unige.it>

Yes, it seems there was no reply. For me it is better this way, since to get results consistent with Matlab I always had to change it manually ;)

From oliver.tomic at matforsk.no Fri Feb 23 10:13:50 2007
From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no)
Date: Fri, 23 Feb 2007 16:13:50 +0100
Subject: [SciPy-user] error in std?
In-Reply-To: <45DF02B7.5090908@chimica.unige.it>
Message-ID: 

Thank you guys!

I have to admit that I'd like to have it the old way, since that is consistent with the commercial software Unscrambler. But I guess everybody has their own opinion on this. :-)

Thanks again!
Oliver

scipy-user-bounces at scipy.org wrote on 23.02.2007 16:05:27:
> Yes, it seems there was no reply. For me it is better this way, since
> to get results consistent with Matlab I always had to change it manually ;)
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From lists at zopyx.com Fri Feb 23 14:56:38 2007
From: lists at zopyx.com (Andreas Jung)
Date: Fri, 23 Feb 2007 13:56:38 -0600
Subject: [SciPy-user] [CFP] Zope conference on Zope in Science
Message-ID: <4CD1FCE7AA6F2B571A65C013@suxmac2-local.local>

Dear Python & Zope Community,

the eighth Zope conference, organized by the German Zope User Group (DZUG), will be held this year at the Potsdam Institute for Climate Impact Research from 4
to 5 June 2007 (near Berlin). The topic of the conference will be Zope in the sciences. Proposals for talks and workshops can be submitted until 01.04.2007:

http://www.zope.de/redaktion/dzug/tagung/potsdam-2007/dzug-conference-2007-call-for-papers-zope-in-science

Both German and English proposals are highly welcome. You can find further information about the Zope conference here:

http://www.zope.de/8-dzug-tagung

Regards,
Andreas Jung
Assistant Chairman DZUG e.V.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 186 bytes
Desc: not available
URL: 

From jraddison at gmail.com Fri Feb 23 17:12:42 2007
From: jraddison at gmail.com (Jason Addison)
Date: Fri, 23 Feb 2007 12:12:42 -1000
Subject: [SciPy-user] how to uninstall Mac OS X ScipySuperpack Intel?
Message-ID: <4ae645d30702231412j7d1a3ce3seaee457e40839109@mail.gmail.com>

I installed ScipySuperpack-Intel and have had some trouble. I'm thinking about just giving up on it and compiling my own. Before I do, I'd like to clean up the packages that I installed:

PyMC-1.1-py2.4-macosx10.4.mpkg
gfortranCompleteInstaller.mpkg
matplotlib-0.87.7-py2.4-macosx10.4.mpkg
numpy-1.0.2.dev3522-py2.4-macosx10.4.mpkg
scipy-0.5.3.dev2630-py2.4-macosx10.4.mpkg

It looks like these install things deep into the filesystem, but I'm not sure what and where. Is there an easy way to uninstall? Is there a hard way? I didn't see any READMEs or such.

Thanks ... jra

From robert.kern at gmail.com Fri Feb 23 17:30:12 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 23 Feb 2007 16:30:12 -0600
Subject: [SciPy-user] how to uninstall Mac OS X ScipySuperpack Intel?
In-Reply-To: <4ae645d30702231412j7d1a3ce3seaee457e40839109@mail.gmail.com>
References: <4ae645d30702231412j7d1a3ce3seaee457e40839109@mail.gmail.com>
Message-ID: <45DF6AF4.9010404@gmail.com>

Jason Addison wrote:
> I installed ScipySuperpack-Intel and have had some trouble. I'm thinking
> about just giving up on it and compiling my own. Before I do, I'd like
> to clean up the packages that I installed:
>
> PyMC-1.1-py2.4-macosx10.4.mpkg
> gfortranCompleteInstaller.mpkg
> matplotlib-0.87.7-py2.4-macosx10.4.mpkg
> numpy-1.0.2.dev3522-py2.4-macosx10.4.mpkg
> scipy-0.5.3.dev2630-py2.4-macosx10.4.mpkg
>
> It looks like these install things deep into the filesystem, but I'm
> not sure what and where. Is there an easy way to uninstall? Is there a
> hard way? I didn't see any READMEs or such.

There's no easy way. Blame Apple for that.

There is a hard way, though. In /Library/Receipts, there are bundle directories with metadata about the packages that you have installed. You need to use the lsbom(1) program to extract the file names:

$ lsbom /Library/Receipts/py2app-purelib-0.2.5-py2.5-macosx10.4.pkg/Contents/Archive.bom
.	40775	501/80
./Py2App	40775	501/80
./Py2App/altgraph	40775	501/80
./Py2App/altgraph/Dot.py	100664	501/80	8425	1846799381
./Py2App/altgraph/Dot.pyc	100664	501/80	9409	2590730341
./Py2App/altgraph/Dot.pyo	100664	501/80	9409	2590730341
./Py2App/altgraph/Graph.py	100664	501/80	19562	2645573956
./Py2App/altgraph/Graph.pyc	100664	501/80	25554	1326377293
./Py2App/altgraph/Graph.pyo	100664	501/80	25554	1326377293
...

Note that the paths are relative. Look in the Contents/Info.plist file of the bundle for the IFPkgFlagDefaultLocation value to find the path it is relative to. The list of "files" also includes directories. Do not remove these until they are empty and you know that other packages don't have stuff in there.

Also note that those packages you listed are meta-packages. Inside each of these are individual packages.
You will only see receipts for the individual packages. PITA, frankly. This is one reason why I gave up on these Installer.app packages. They're just not suited for distributing Python libraries. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lanceboyle at qwest.net Fri Feb 23 18:42:30 2007 From: lanceboyle at qwest.net (Jerry) Date: Fri, 23 Feb 2007 16:42:30 -0700 Subject: [SciPy-user] how to uninstall Mac OS X ScipySuperpack Intel? In-Reply-To: <4ae645d30702231412j7d1a3ce3seaee457e40839109@mail.gmail.com> References: <4ae645d30702231412j7d1a3ce3seaee457e40839109@mail.gmail.com> Message-ID: <890D0024-F5FB-413D-B895-53119FD69223@qwest.net> Use Desinstaller. http://www.versiontracker.com/dyn/moreinfo/macosx/ 13955. It's not a PITA. Jerry On Feb 23, 2007, at 3:12 PM, Jason Addison wrote: > I installed ScipySuperpack-Intel and have had some trouble. I thinking > about just giving up on it and compiling my own. Before I do, I'd like > to clean up from the packages that I installed: > > PyMC-1.1-py2.4-macosx10.4.mpkg > gfortranCompleteInstaller.mpkg > matplotlib-0.87.7-py2.4-macosx10.4.mpkg > numpy-1.0.2.dev3522-py2.4-macosx10.4.mpkg > scipy-0.5.3.dev2630-py2.4-macosx10.4.mpkg > > It looks like these install things deep into the filesystem, but I'm > not sure what and where. Is there an easy way to uninstall? Is there a > hard way? I didn't see any READMEs or such. > > Thanks ... jra From pwang at enthought.com Sat Feb 24 01:22:11 2007 From: pwang at enthought.com (Peter Wang) Date: Sat, 24 Feb 2007 00:22:11 -0600 Subject: [SciPy-user] how to uninstall Mac OS X ScipySuperpack Intel? 
In-Reply-To: <890D0024-F5FB-413D-B895-53119FD69223@qwest.net>
References: <4ae645d30702231412j7d1a3ce3seaee457e40839109@mail.gmail.com> <890D0024-F5FB-413D-B895-53119FD69223@qwest.net>
Message-ID: <010048B3-61AB-48C1-9FD6-2308293DC2EC@enthought.com>

On Feb 23, 2007, at 5:42 PM, Jerry wrote:
> Use Desinstaller. http://www.versiontracker.com/dyn/moreinfo/macosx/
> 13955. It's not a PITA.
> Jerry

It took me a second to realize that the full URL is:
http://www.versiontracker.com/dyn/moreinfo/macosx/13955

I sat here thinking that "13955" was l33t sp34k for "LEGSS", as in, "this DesInstaller program has real legss [sic]".

-Peter

From topengineer at gmail.com Sat Feb 24 02:04:03 2007
From: topengineer at gmail.com (Hui Chang Moon)
Date: Sat, 24 Feb 2007 16:04:03 +0900
Subject: [SciPy-user] Does SciPy have a differentiation function?
Message-ID: <296323b50702232304m102b7558gea03f63016fe1406@mail.gmail.com>

Hello, Scipy-user Group members,

I want to know whether SciPy has a differentiation function. I can find the integration function (quad), but I can't find the differentiation function.

If anyone knows the differentiation function, please let me know.

Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nwagner at iam.uni-stuttgart.de Sat Feb 24 02:41:15 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sat, 24 Feb 2007 08:41:15 +0100
Subject: [SciPy-user] Does SciPy have a differentiation function?
In-Reply-To: 
Message-ID: 

On Sat, 24 Feb 2007 16:04:03 +0900 "Hui Chang Moon" wrote:
> Hello, Scipy-user Group members,
>
> I want to know whether SciPy has a differentiation function.
> I can find the integration function (quad), but I can't
> find the differentiation function.
>
> If anyone knows the differentiation function, please let me know.
>
> Thank you.
You might use:

from scipy import *
#
# Using Complex Variables to Estimate Derivatives of Real Functions
# William Squire, George Trapp
# SIAM Review, Vol. 40, No. 1 (Mar., 1998), pp. 110-112
#
def f(x):
    return sin(x)

def fp(x):
    """ First derivative of f """
    return cos(x)

print 'The derivative of f is fp'
eps = 1.e-8
print
print 'Analytical solution',fp(0.1)
print
print 'Numerical solution',f(0.1+1j*eps).imag/eps

Otherwise you can use interpolate:

splrep  -- find smoothing spline given (x,y) points on curve.
splprep -- find smoothing spline given parametrically defined curve.
splev   -- evaluate the spline or its derivatives.
splint  -- compute definite integral of a spline.
sproot  -- find the roots of a cubic spline.
spalde  -- compute all derivatives of a spline at given points.

Nils

From gruben at bigpond.net.au Sat Feb 24 02:53:43 2007
From: gruben at bigpond.net.au (Gary Ruben)
Date: Sat, 24 Feb 2007 18:53:43 +1100
Subject: [SciPy-user] Does SciPy have a differentiation function?
In-Reply-To: 
References: <296323b50702232304m102b7558gea03f63016fe1406@mail.gmail.com>
Message-ID: <45DFEF07.4000907@bigpond.net.au>

In [1]: from scipy import derivative

In [2]: derivative?
Type:           function
Base Class:     
String Form:    
Namespace:      Interactive
File:           c:\python24\lib\site-packages\scipy\misc\common.py
Definition:     derivative(func, x0, dx=1.0, n=1, args=(), order=3)
Docstring:
    Given a function, use a central difference formula with spacing dx to
    compute the nth derivative at x0.

    order is the number of points to use and must be odd.

    Warning: Decreasing the step size too small can result in round-off error.

Gary R.

> "Hui Chang Moon" wrote:
>> Hello, Scipy-user Group members,
>>
>> I want to know whether SciPy has a differentiation function.
>> I can find the integration function (quad), but I can't
>> find the differentiation function.
>>
>> If anyone knows the differentiation function, please let me know.
>>
>> Thank you.
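[Editor's sketch] The two numerical approaches in this thread — the Squire & Trapp complex-step trick from Nils's reply and a central difference like the one behind scipy's derivative — can be compared side by side in a few lines of plain modern Python; the test function and step sizes below are only illustrative:

```python
import cmath
import math

def f(z):
    # test function; cmath.sin accepts complex arguments
    return cmath.sin(z)

def complex_step(func, x0, eps=1e-20):
    # Squire & Trapp: f'(x) ~ Im(f(x + i*eps)) / eps.
    # There is no subtraction of nearly equal numbers, so eps can be tiny.
    return func(x0 + 1j * eps).imag / eps

def central_diff(func, x0, dx=1e-6):
    # ordinary second-order central difference
    return (func(x0 + dx).real - func(x0 - dx).real) / (2.0 * dx)

exact = math.cos(0.1)  # analytical derivative of sin at 0.1
print(abs(complex_step(f, 0.1) - exact))  # error near machine precision
print(abs(central_diff(f, 0.1) - exact))  # much larger error
```

The complex-step estimate stays accurate even with eps = 1e-20, while the central difference is limited by the usual step-size versus round-off trade-off — exactly the Warning in the derivative docstring.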
From lorenzo.isella at gmail.com Sat Feb 24 12:44:54 2007
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Sat, 24 Feb 2007 18:44:54 +0100
Subject: [SciPy-user] Optimization
Message-ID: <45E07996.3000307@gmail.com>

Dear All,

I am going through the Scipy manual and, as an exercise to learn Python and Scipy, I am trying to reproduce some nonlinear least-square optimization which I was able to carry out using another language. I am collecting some questions which hopefully will help me understand Python a bit better:

(1) What is the difference between from pylab import * and import pylab?

(2) The 2nd question may be a non-problem: in running some of the examples in the tutorial by Oliphant, I could not use the xplt module, no matter whether I tried import scipy.xplt, from scipy.xplt import *, and the like. I bumped into http://lists.debian.org/debian-science/2007/01/msg00007.html which may be the answer (I am running Debian on my box as well and I installed Scipy from the Debian repository). Can I install xplt by itself? BTW, I also visited http://www.scipy.org/Cookbook/xplt but even using from scipy.sandbox import xplt does not help.

(3) Is there a way to have arrays starting with index 1 rather than zero in Python? As you can guess, I do not have a strong C background.

(4) This is the main question: I am trying to fit some experimental data to a log-normal curve. I would like to follow the same steps as in the tutorial, but something seems to be going wrong. I cut and paste the code I am using and attach a .csv file so that one can reproduce my work step by step.

(5) Finally, if I load scipy and then write, e.g., z=10.3, how is z handled? Is it a floating point number? What if, for instance, I need to have a very large number of significant digits because they do matter for some computation I want to run? Can I have the equivalent of format long [Matlab statement], so that every non-integer number is by default treated with a certain precision?
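[Editor's sketch on question (5), not part of the original mail] A Python float literal such as z = 10.3 is always a C double (roughly 16 significant digits), so there is no switch like Matlab's format long that changes the underlying precision — only the display differs. For genuinely higher working precision, the standard decimal module is one option; the numbers below are only illustrations:

```python
from decimal import Decimal, getcontext

z = 10.3
print(type(z))   # a plain float, i.e. a C double
print(repr(z))   # repr prints just enough digits to round-trip the value

# The stored value is not exactly 10.3; Decimal exposes the exact double:
print(Decimal(z))

# For more working precision, do the arithmetic in Decimal:
getcontext().prec = 50
print(Decimal(1) / Decimal(7))  # 50 significant digits
```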
Here is the code:

#! /usr/bin/env python
from pylab import plot, show, ylim, yticks
from scipy import *
import pylab
# now I want to try reading some .csv file
data = pylab.load("120km-TPM.csv",delimiter=',')
vecdim=shape(data) # now I introduce a vector with the dimensions of data, the file I read
print "the dimensions of data are"
print vecdim
# now very careful! in Python the arrays start with index zero.
diam=data[0:vecdim[0],0]
# it means: slice the rows from the 1st one (0) to the last one
# (vecdim[0]) for the first column (0)
print "the dimensions of diam are"
print shape(diam)
#plot(diam,data[:,1])
#show()  # uncomment them to plot a distribution
# 1st problem: if I uncomment the previous two lines, I get a warning and, until I close
# the window, the script does not progress.
# now I try performing a least-square fitting
from scipy.optimize import leastsq
x=diam # just a list of diameters
y_meas=data[0:vecdim[0],1] # measured data, for example the 2nd column of the .csv file

def residuals(p, y, x):
    A1,mu1,myvar1 = p
    err = y-log(10.0)*A1/sqrt(2.0*pi)/log(myvar1)*exp(-((log(x/mu1))**2.0)/2.0/log(myvar1)/log(myvar1))
    return err

def peval(x, p):
    return log(10.0)*p[0]/sqrt(2.0*pi)/log(p[2])*exp(-((log(x/p[1]))**2.0)/2.0/log(p[2])/log(p[2]))

p0 = [50000.0,90.0, 1.59]
print array(p0)
# now I try actually solving the problem
print "ok up to here"
plsq = leastsq(residuals, p0, args=(y_meas, x))
print "ok up to here2"
print plsq[0]
print array([A, k, theta])
print "So far so good"

which produces this output on my box:

$ ./read-and-plot.py
the dimensions of data are
(104, 10)
the dimensions of diam are
(104,)
[  5.00000000e+04   9.00000000e+01   1.59000000e+00]
ok up to here
TypeError: array cannot be safely cast to required type
Traceback (most recent call last):
  File "./read-and-plot.py", line 51, in ?
    plsq = leastsq(residuals, p0, args=(y_meas, x))
  File "/usr/lib/python2.4/site-packages/scipy/optimize/minpack.py", line 266, in leastsq
    retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag)
minpack.error: Result from function call is not a proper array of floats.

Many thanks

Lorenzo
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 120km-TPM.csv
Type: text/csv
Size: 9696 bytes
Desc: not available
URL: 

From gael.varoquaux at normalesup.org Sat Feb 24 14:42:00 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 24 Feb 2007 20:42:00 +0100
Subject: [SciPy-user] Characteristic distance in a cloud of points
In-Reply-To: <45E07996.3000307@gmail.com>
References: <45E07996.3000307@gmail.com>
Message-ID: <20070224194200.GI7867@clipper.ens.fr>

I have a cloud of points (for instance given as an (n,3) shaped array, with columns formed by the x, y and z column vectors).

I would like to find the mean distance in this cloud of points. I do not need an exact value; I am just interested in a typical distance. I could do it in a brute force way:

++++++++++++++++++++++++++++++++++++++++++
from scipy import *
x = arange(1, 5)

points = c_[x, x, x]
diffs = abs(points[newaxis, :] - points[:, newaxis])
dists = sqrt(diffs[..., 0]**2 + diffs[..., 1]**2 + diffs[..., 2]**2).ravel()
dists = dists[dists>0]
mean(dists)
++++++++++++++++++++++++++++++++++++++++++

This is actually not as ugly and slow as I originally thought. Are there any better ways of doing this?
Thanks,
Gaël

From peridot.faceted at gmail.com Sat Feb 24 16:13:57 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sat, 24 Feb 2007 16:13:57 -0500
Subject: [SciPy-user] Carateristic distance in a cloud of points
In-Reply-To: <20070224194200.GI7867@clipper.ens.fr>
References: <45E07996.3000307@gmail.com> <20070224194200.GI7867@clipper.ens.fr>
Message-ID: 

On 24/02/07, Gael Varoquaux wrote:
> I have a cloud of points (for instance given as a (n,3) shaped array,
> with columns formed by the x, y and z column vectors).
>
> I would like to find the mean distance in this cloud of points. I do not
> need an exact value, I am just interested in a typical distance.

This is actually quite tricky, depending on what you mean by a "typical" distance - distances can have all sorts of distributions. Imagine for example a cloud that is actually two small clouds a long way apart, or a cloud with a few very distant outliers, or a Julia set (for which the distance behaves like a power law whose exponent is related to the fractal dimension)... well, you get the point.

> I could do it in a brute force way:

This can be tidied slightly:

> ++++++++++++++++++++++++++++++++++++++++++
> from scipy import *
> x = arange(1, 5)
>
> points = c_[x, x, x]
> diffs = abs(points[newaxis, :] - points[:, newaxis])

There's no need for an absolute value here.

> dists = sqrt(diffs[..., 0]**2 + diffs[..., 1]**2 + diffs[..., 2]**2).ravel()

sqrt(sum(diffs**2, axis=2)).ravel() will do the same.

> dists = dists[dists>0]
> mean(dists)
> ++++++++++++++++++++++++++++++++++++++++++

> Are there any better ways of doing this ?

Well, depending what you want from "typical distance" the median might do a better job (or not). Or you might be satisfied with a random sample of 100 points (say):

p = points[random.randint(shape(points)[0], size=100)]

and then use the above procedure.
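Putting Anne's two suggestions together — the tidier sum-of-squares distance and the random subsample — a sketch in NumPy might look like this. The function name, the `sample` default, and the choice of the median as the "typical" statistic are illustrative, not code from the thread:

```python
import numpy as np

def typical_distance(points, sample=100, seed=0):
    """Median pairwise distance, estimated on a random subsample."""
    rng = np.random.RandomState(seed)
    # Subsample so the O(n**2) pairwise computation stays cheap.
    if len(points) > sample:
        points = points[rng.randint(len(points), size=sample)]
    # Broadcasting builds all pairwise difference vectors at once.
    diffs = points[np.newaxis, :, :] - points[:, np.newaxis, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1)).ravel()
    # Drop the zero self-distances before taking the median.
    return np.median(dists[dists > 0])

x = np.arange(1.0, 5.0)
points = np.c_[x, x, x]   # four collinear points, spacing sqrt(3)
d = typical_distance(points)
```

For the four collinear points above, the nonzero pairwise separations are sqrt(3) times {1, 1, 1, 2, 2, 3} (each counted twice), so the median lands between the 1- and 2-spacings.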
Alternatively, if you're willing to be crude: lwh = ptp(points,axis=0) # size of the bounding box d = sqrt(sum((lwh/2)**2)) I end up using sqrt(sum(X**2,axis=Y)) rather often, I wonder if there's a tidy idiom for it? It's the L2 norm, after all... Anne From sidgalt at gmail.com Sat Feb 24 18:01:38 2007 From: sidgalt at gmail.com (Siddhartha Jain) Date: Sun, 25 Feb 2007 04:31:38 +0530 Subject: [SciPy-user] fmin_cobyla help-passing constraints and bounds Message-ID: <8a8343a10702241501i7cb29776k3a8303441d352465@mail.gmail.com> I am trying to solve an optimization problem with more than 500 variables. Will fmin_cobyla be able to solve such a large problem in a reasonable amount of time? If so, then can anyone help me as to how to pass the variable bounds and constraints to fmin_cobyla? Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sat Feb 24 19:03:03 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 25 Feb 2007 01:03:03 +0100 Subject: [SciPy-user] Carateristic distance in a cloud of points In-Reply-To: References: <45E07996.3000307@gmail.com> <20070224194200.GI7867@clipper.ens.fr> Message-ID: <20070225000303.GJ7867@clipper.ens.fr> Interesting remarks. You forced me to think a bit more about what I was trying to achieve. What I am trying to do is to find out the right size to use for symbols when used on a 3D cloud of points. I am not sure what the right "typical distance" should be used. If those symbols are arrows then it seems that should be smaller than the typical inter-point distance. I have in mind something like this: if you have n points, find out the distribution of distances, divide it by n**3, then take the value at 0.2 from the smallest. 
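Gael's "value at 20% from the bottom" idea can be sketched with `np.percentile`; the function name and default quantile here are my reading of his proposal, not code from the thread. As a side note, `np.linalg.norm` with an `axis` argument (NumPy >= 1.8) is a tidy idiom for the `sqrt(sum(X**2, axis=...))` pattern Anne mentions:

```python
import numpy as np

def symbol_size(points, q=20.0):
    # Brute-force pairwise distances; norm(..., axis=-1) is the tidy
    # spelling of sqrt(sum(diffs**2, axis=-1)).
    diffs = points[:, np.newaxis, :] - points[np.newaxis, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    d = d[d > 0]
    # Take the value q% from the bottom of the distance distribution,
    # so symbols come out smaller than most inter-point spacings.
    return np.percentile(d, q)

x = np.arange(1.0, 5.0)
size = symbol_size(np.c_[x, x, x])
```

For the four collinear test points, the 20th percentile falls on the smallest spacing, sqrt(3).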
I am having difficulties expressing my point, but the idea would be to consider that the typical distribution will increase as n**3 (which is not obvious, for instance if the points are along a plane) and take the lower tail of the distribution, as we are interested in having symbols smaller than the inter-point distance. Taking not the smallest value, but the value at "20%" from the bottom, helps getting rid of singular situations where a few points are very close but the major part is spread out.

The problem is that the "good" solution does depend on the problem, and there will never be a one-size-fits-all solution. I am interested in other suggestions.

Cheers,
Gaël

From gael.varoquaux at normalesup.org Sat Feb 24 19:10:39 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 25 Feb 2007 01:10:39 +0100
Subject: [SciPy-user] Carateristic distance in a cloud of points
In-Reply-To: <20070225000303.GJ7867@clipper.ens.fr>
References: <45E07996.3000307@gmail.com> <20070224194200.GI7867@clipper.ens.fr> <20070225000303.GJ7867@clipper.ens.fr>
Message-ID: <20070225001039.GK7867@clipper.ens.fr>

Correcting a stupid mistake. It bothered me to leave it, sorry for the noise.

Gaël

On Sun, Feb 25, 2007 at 01:03:03AM +0100, Gael Varoquaux wrote:
> Interesting remarks. You forced me to think a bit more about what I was
> trying to achieve.

> What I am trying to do is to find out the right size to use for symbols
> when used on a 3D cloud of points. I am not sure what the right "typical
> distance" should be used. If those symbols are arrows then it seems that
> should be smaller than the typical inter-point distance. I have in mind
> something like this:

> if you have n points, find out the distribution of distances, divide it
> by n**3, then take the value at 0.2 from the smallest.

> I am having difficulties expressing my point, but the idea would be to
> consider that the typical distribution will increase as n**3 (which is

n**(1/3.)
> not obvious, for instance if the points are along a plane) and take the > lower tail of the distribution, as we are interested in having symbols > smaller than the inter-point distance. Taking not the smallest value, but > the value at "20%" from the bottom helps getting rid of singular > situations where a few points are very close but the major part is spread > out. From anand at soe.ucsc.edu Sun Feb 25 13:32:55 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Sun, 25 Feb 2007 10:32:55 -0800 Subject: [SciPy-user] Carateristic distance in a cloud of points In-Reply-To: References: Message-ID: <45E1D657.8070806@cse.ucsc.edu> >Correcting a stupid mistake. It bothered me to leave it, sorry for the >noise. > >Ga?l > >On Sun, Feb 25, 2007 at 01:03:03AM +0100, Gael Varoquaux wrote: > > >>Interesting remarks. You forced me to think a bit more about what I was >>trying to achieve. >> >> > > > >>What I am trying to do is to find out the right size to use for symbols >>when used on a 3D cloud of points. I am not sure what the right "typical >>distance" should be used. If those symbols are arrows then it seems that >>should be smaller than the typical inter-point distance. I have in mind >>something like this: >> >> > > > >>if you have n points, find out the distribution of distances, divide it >>by n**3, then take the value at 0.2 from the smallest. >> >> > > > >>I am having diffculties expressing my point, but the idea would be to >>consider that the typical distribution will increase as n**3 (which is >> >> > n**(1/3.) > > >>not obvious, for instance if the points are along a plane) and take the >>lower tail of the distribution, as we are interested in having symbols >>smaller than the inter-point distance. Taking not the smallest value, but >>the value at "20%" from the bottom helps getting rid of singular >>situations where a few points are very close but the major part is spread >>out. 
Hi Gael,

Scaling relative to the cloud of points might be an easier way to go than scaling relative to the actual interpoint spacing. It would make sure your arrowheads are readable on the plot, even though some may appear oversized relative to their shafts. Also, points that are far apart in 3d space might appear close when viewed from particular angles, like 'optical doubles' in astronomy.

If you want to make your symbols roughly the right size relative to the whole cloud, you might like a widely used quick-and-dirty method from statistics:

from scipy import *
from numpy.linalg import eigh

points = 2.*randn(100,3)
C = cov(points.transpose())
D, V = eigh(C)   # eigendecomposition of the covariance matrix C, not of the points

Then the `error ellipse', the ellipsoid that kind of sort of tries to fit the point cloud, has major axes given by the columns of V with length equal to the sqrt of the corresponding elements of D. You could then calculate approximately how big or small the point cloud looks by projecting the major axes into your viewing plane.

Hope that helps... that would be a milestone for me, my first time actually helping someone else on a Python mailing list.

Cheers,
Anand

From rshepard at appl-ecosys.com Sun Feb 25 14:10:34 2007
From: rshepard at appl-ecosys.com (Rich Shepard)
Date: Sun, 25 Feb 2007 11:10:34 -0800 (PST)
Subject: [SciPy-user] Need Advice With Arrays and Calculating Eigenvectors
In-Reply-To: 
References: 
Message-ID: 

On Thu, 22 Feb 2007, Rich Shepard wrote:

> As a newcomer to python, NumPy, and SciPy I need to learn how to most
> efficiently manipulate data. Today's need is reducing a list of tuples to
> three smaller lists of tuples, creating a symmetrical matrix from each list,
> and calculating the Eigenvector of each of the three symmetrical matrices.

I have a function that does part of the above. I know that it's highly crude, inefficient, and not taking advantage of python functional coding features such as introspection. That's because I'm not yet sure how to code it better.
First, a python message when I invoke the application. In this module, I have 'from scipy import *' per Travis' book. What I see as the application loads is:

Overwriting info= from scipy.misc (was from numpy.lib.utils)

This doesn't seem to harm anything, but perhaps it needs fixing.

Second, here's the function (followed by the output of the print statements):

def weightcalc():
    # First: average for each position by category
    stmt1 = """select cat, pos, avg(pr1), avg(pr2), avg(pr3), avg(pr4),
        avg(pr5), avg(pr6), avg(pr7), avg(pr8), avg(pr9), avg(pr10), avg(pr11),
        avg(pr12), avg(pr13), avg(pr14), avg(pr15), avg(pr16), avg(pr17),
        avg(pr18), avg(pr19), avg(pr20), avg(pr21), avg(pr22), avg(pr23),
        avg(pr24), avg(pr25), avg(pr26), avg(pr27), avg(pr28) from voting
        group by cat, pos"""
    appData.cur.execute(stmt1)
    prefbar = appData.cur.fetchall()

    # Now, average for all positions within each category
    ec = []
    en = []
    ep = []
    nc = []
    nn = []
    np = []
    sc = []
    sn = []
    sp = []
    catEco = []
    catNat = []
    catSoc = []
    diag = identity(8, dtype=float)

    for item in prefbar:
        if item[0] == 'eco' and item[1] == 'con':
            ec.append(item[2:])
        if item[0] == 'eco' and item[1] == 'neu':
            en.append(item[2:])
        if item[0] == 'eco' and item[1] == 'pro':
            ep.append(item[2:])
        if item[0] == 'nat' and item[1] == 'con':
            nc.append(item[2:])
        if item[0] == 'nat' and item[1] == 'neu':
            nn.append(item[2:])
        if item[0] == 'nat' and item[1] == 'pro':
            np.append(item[2:])
        if item[0] == 'soc' and item[1] == 'con':
            sc.append(item[2:])
        if item[0] == 'soc' and item[1] == 'neu':
            sn.append(item[2:])
        if item[0] == 'soc' and item[1] == 'pro':
            sp.append(item[2:])

    # three lists, each of three tuples. Need to be converted to arrays and averaged.
catEco.append(ec + en + ep) print catEco, '\n' catNat.append(nc + nn + np) print catNat, '\n' catSoc.append(sc + sn + sp) print catSoc and here is the output of catEco: [[(2.4884848484848487, 3.3123939393939401, 3.144090909090909, 2.5676060606060607, 3.2095151515151517, 3.4157878787878788, 2.5132727272727275, 2.7514242424242425, 2.9628787878787879, 2.446939393939394, 2.7069393939393938, 3.1676666666666669, 2.8530303030303035, 2.6058484848484853, 3.0955454545454546, 2.6283939393939395, 2.4350606060606061, 3.2610303030303034, 2.3926969696969698, 2.4951212121212123, 2.5276666666666676, 2.668848484848485, 3.4265757575757578, 2.9714545454545456, 2.8431818181818187, 3.0674545454545461, 2.8712727272727272, 2.1262424242424243), (2.0477142857142856, 1.0064285714285715, 3.1869285714285711, 3.5895000000000001, 3.9467142857142861, 3.2696428571428569, 2.9104285714285716, 2.5850714285714282, 4.8555714285714293, 3.3554999999999997, 2.3430714285714282, 3.5795714285714282, 1.3627857142857143, 0.83778571428571436, 2.4744999999999999, 2.8067142857142855, 3.143642857142857, 2.4637857142857138, 3.7382142857142857, 3.2875000000000001, 2.1167857142857143, 3.5459285714285715, 3.5667142857142857, 3.1280714285714284, 3.580428571428572, 1.0882857142857143, 3.0217142857142858, 3.8292857142857142), (2.3360769230769227, 2.0547692307692311, 2.8591538461538457, 2.4986923076923073, 2.809769230769231, 2.2041538461538464, 3.9557692307692309, 3.1109230769230769, 1.8777692307692309, 1.6783846153846156, 2.4337692307692307, 1.8520769230769232, 4.0975384615384618, 3.3513846153846147, 1.9008461538461536, 2.9993846153846158, 1.8076923076923079, 2.6881538461538463, 2.453615384615385, 3.5579999999999998, 1.2396153846153848, 3.8225384615384614, 2.8304615384615386, 2.6258461538461537, 2.2387692307692308, 3.381615384615384, 2.8569999999999998, 2.9676153846153848)]] Where I am stuck is making catEco (and the other two lists) NumPy arrays, and calculating the average of the three values in the same position 
within each tuple. Also, I need no more than 2 decimal places for each value, but I don't know where to place a format specifier. Please suggest how to both improve the function's structure and produce a ndarray[] that is the average tuple values in each list. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Sun Feb 25 16:08:52 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Sun, 25 Feb 2007 13:08:52 -0800 (PST) Subject: [SciPy-user] Need Advice With Arrays and Calculating Eigenvectors In-Reply-To: References: Message-ID: On Sun, 25 Feb 2007, Rich Shepard wrote: > I have a function that does part of the above. I know that it's highly > crude, inefficient, and not taking advantage of python functional coding features > such as introspection. That's because I'm not yet sure how to code it > better. Would still appreciate suggestions for tightening it up. 
The latest version is this:

def weightcalc():
    # First: average for each position by category
    meanvotes = []
    stmt1 = """select cat, pos, avg(pr1), avg(pr2), avg(pr3), avg(pr4),
        avg(pr5), avg(pr6), avg(pr7), avg(pr8), avg(pr9), avg(pr10), avg(pr11),
        avg(pr12), avg(pr13), avg(pr14), avg(pr15), avg(pr16), avg(pr17),
        avg(pr18), avg(pr19), avg(pr20), avg(pr21), avg(pr22), avg(pr23),
        avg(pr24), avg(pr25), avg(pr26), avg(pr27), avg(pr28) from voting
        group by cat, pos"""
    appData.cur.execute(stmt1)
    prefbar = appData.cur.fetchall()
    # print prefbar

    # Now, average for all positions within each category
    ec = []
    en = []
    ep = []
    nc = []
    nn = []
    np = []
    sc = []
    sn = []
    sp = []
    catEco = []
    catNat = []
    catSoc = []
    diag = identity(8, dtype=float)

    for item in prefbar:
        if item[0] == 'eco' and item[1] == 'con':
            ec.append(item[2:])
        if item[0] == 'eco' and item[1] == 'neu':
            en.append(item[2:])
        if item[0] == 'eco' and item[1] == 'pro':
            ep.append(item[2:])
        if item[0] == 'nat' and item[1] == 'con':
            nc.append(item[2:])
        if item[0] == 'nat' and item[1] == 'neu':
            nn.append(item[2:])
        if item[0] == 'nat' and item[1] == 'pro':
            np.append(item[2:])
        if item[0] == 'soc' and item[1] == 'con':
            sc.append(item[2:])
        if item[0] == 'soc' and item[1] == 'neu':
            sn.append(item[2:])
        if item[0] == 'soc' and item[1] == 'pro':
            sp.append(item[2:])

    # three lists, each of three tuples. Need to be converted to arrays and averaged.
    catEco.append(ec + en + ep)
    catNat.append(nc + nn + np)
    catSoc.append(sc + sn + sp)

    # here are the numpy arrays
    aEco = array(catEco, dtype=float)
    aNat = array(catNat, dtype=float)
    aSoc = array(catSoc, dtype=float)

    # here are the numpy arrays of averages
    barEco = average(aEco, axis=1)
    barNat = average(aNat, axis=1)
    barSoc = average(aSoc, axis=1)

Got all this worked out by reading the book and trial-and-error. Next step is to convert each of barEco, barNat, and barSoc into symmetrical matrices with unit diagonals.
Each of these arrays holds the values to the right of the diagonal in the symmetrical matrices; the matching cells to the left of the diagonal are (1.0 / right cell value). Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Sun Feb 25 17:16:52 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Sun, 25 Feb 2007 14:16:52 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array Message-ID: I have this array (called barEco): [[ 2.29075869 2.12453058 3.06339111 2.88526612 3.32199956 2.96319486 3.12649018 2.81580625 3.23207315 2.493608 2.49459335 2.86643834 2.77111816 2.26500627 2.4902972 2.81149761 2.46213192 2.80432329 2.86150888 3.1135404 1.96135592 3.34577184 3.27458386 2.90845738 2.88745987 2.51245188 2.91666234 2.97438117]] and I want to convert it to: [[ -- 2.29075869 2.12453058 3.06339111 2.88526612 3.32199956 2.96319486 3.12649018] [ -- -- 2.81580625 3.23207315 2.493608 2.49459335 2.86643834 2.77111816] [ -- -- -- 2.26500627 2.4902972 2.81149761 2.46213192 2.80432329] [ -- -- -- -- 2.86150888 3.1135404 1.96135592 3.34577184] [ -- -- -- -- -- 3.27458386 2.90845738 2.88745987] [ -- -- -- -- -- -- 2.51245188 2.91666234] [ -- -- -- -- -- -- -- 2.97438117] [ -- -- -- -- -- -- -- -- ]] with 1.00 as the diagonal. It appears that the eye() function is the tool, but when I try foo = eye(barEco,8,8,1) print foo python responds Traceback (most recent call last): File "/data1/eikos/scopingPage.py", line 184, in OnCalcWeights inpWts = functions.weightcalc() File "/data1/eikos/functions.py", line 164, in weightcalc foo = eye(barEco,8,8,1) File "/usr/lib/python2.4/site-packages/numpy/lib/twodim_base.py", line 48, in eye m = equal(subtract.outer(arange(N), arange(M)),-k) TypeError: only length-1 arrays can be converted to Python scalars So, either I used eye() incorrectly, or that's not how to make the conversion. 
What should I be doing? Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Sun Feb 25 17:31:35 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Sun, 25 Feb 2007 14:31:35 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array In-Reply-To: References: Message-ID: On Sun, 25 Feb 2007, Rich Shepard wrote: > It appears that the eye() function is the tool, but when I try > > foo = eye(barEco,8,8,1) > print foo I've also tried triu() and mat(), but neither prints the results I need. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From williams at astro.ox.ac.uk Sun Feb 25 18:57:22 2007 From: williams at astro.ox.ac.uk (Michael Williams) Date: Sun, 25 Feb 2007 23:57:22 +0000 Subject: [SciPy-user] how to uninstall Mac OS X ScipySuperpack Intel? In-Reply-To: <45DF6AF4.9010404@gmail.com> References: <4ae645d30702231412j7d1a3ce3seaee457e40839109@mail.gmail.com> <45DF6AF4.9010404@gmail.com> Message-ID: <20070225235722.GD11570@astro.ox.ac.uk> On Fri, Feb 23, 2007 at 04:30:12PM -0600, Robert Kern wrote: > There's no easy way. Blame Apple for that. There is a hard way, > though. In /Library/Receipts, there are bundle directories with > metadata about the packages that you have installed. You need to use > the lsbom(1) program to extract the file names: You can also extract this list by double-clicking on the .[m]pkg file, launching Installer, then choosing Show Files from the File menu. 
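For the matrix Rich describes above — given values filling the strict upper triangle, reciprocals mirrored below, and 1.0 on the diagonal — `np.triu_indices` gives a compact construction. The helper name here is mine, not from the thread; it is a sketch of the pattern, not an endorsed solution:

```python
import numpy as np

def reciprocal_matrix(upper, n):
    """Build an n x n matrix: `upper` fills the strict upper triangle
    row by row, 1/value is mirrored below, and the diagonal is 1.0."""
    A = np.ones((n, n))
    iu = np.triu_indices(n, k=1)      # row/col indices above the diagonal
    A[iu] = upper
    A[iu[1], iu[0]] = 1.0 / np.asarray(upper)   # swapped indices = lower triangle
    return A

A = reciprocal_matrix([2.0, 4.0, 8.0], 3)
```

A handy property for checking the construction: every element times its transpose partner is 1 (a_ij * a_ji = 1), including the unit diagonal.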
-- Mike From ronald at ivec.org Sun Feb 25 20:49:22 2007 From: ronald at ivec.org (Ronald Jones) Date: Mon, 26 Feb 2007 10:49:22 +0900 Subject: [SciPy-user] Failed scipy 0.5.2 tests on IA64 Intel platform Message-ID: <1172454562.4656.4.camel@sambucca.ivec.org> Hello, I have recently installed numpy1.0.1/scipy0.5.2 on a IA64 machine using the intel compiler(v9.1.043) with the mkl (v8.1.014) libraries. The code compiles and runs. But six of the scipy.test(1) tests fail. Is this a known problem? There is also a problem with scipy.linsolve.umfpack. Is this significant? I tried the 19 Feb svn version of scipy and it failed 10 tests. The output of the scipy 0.5.2 tests are attached. regards, Ronald -- Ronald Jones iVEC, 'The hub of advanced computing in Western Australia' 26 Dick Perry Avenue, Technology Park Kensington WA 6151 Australia Phone: +61 8 6436 8633 Fax: +61 8 6436 8555 Email: ronald.jones at ivec.org WWW: http://www.ivec.org -------------- next part -------------- scipy.test(1) Found 4 tests for scipy.io.array_import Found 1 tests for scipy.cluster.vq Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 98 tests for scipy.stats.stats Found 53 tests for scipy.linalg.decomp Found 3 tests for scipy.integrate.quadrature Found 96 tests for scipy.sparse.sparse Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 6 tests for scipy.interpolate.fitpack Found 6 tests for scipy.interpolate Found 12 tests for scipy.io.mmio Found 10 tests for scipy.stats.morestats Found 4 tests for scipy.linalg.lapack Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.io.recaster Warning: FAILURE importing tests for /opt/scipy/0.5.2-forceIntel/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) 
Found 4 tests for scipy.optimize.zeros Found 28 tests for scipy.io.mio Found 41 tests for scipy.linalg.basic Found 2 tests for scipy.maxentropy.maxentropy Found 358 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** Found 42 tests for scipy.lib.lapack Warning: FAILURE importing tests for /opt/scipy/0.5.2-forceIntel/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Found 1 tests for scipy.optimize.cobyla Found 16 tests for scipy.lib.blas Found 1 tests for scipy.integrate Found 14 tests for scipy.linalg.blas Found 70 tests for scipy.stats.distributions Found 4 tests for scipy.fftpack.helper Found 4 tests for scipy.signal.signaltools Found 0 tests for __main__ Warning: 1000000 bytes requested, 20 bytes read. 
........caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..........F......................................................................................................................................................................................................F.........F.......................................................................................................................F........................................................................................................................................................................................................................................................................................................................................................./opt/scipy/0.5.2/lib/python2.4/site-packages/scipy/interpolate/fitpack2.py:457: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. 
warnings.warn(message) .................................................................................................................................................................................................................................................................................................F..............................................................................................................................................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .................................F............................................................................................................................... 
====================================================================== FAIL: affine transform 9 ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/scipy/0.5.2-forceIntel/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py", line 1849, in test_affine_transform09 AssertionError ====================================================================== FAIL: geometric transform 10 ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/scipy/0.5.2-forceIntel/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py", line 1575, in test_geometric_transform10 AssertionError ====================================================================== FAIL: geometric transform 22 ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/scipy/0.5.2-forceIntel/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py", line 1697, in test_geometric_transform22 AssertionError ====================================================================== FAIL: shift 9 ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/scipy/0.5.2-forceIntel/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py", line 2066, in test_shift09 AssertionError ====================================================================== FAIL: check_pbdv (scipy.special.tests.test_basic.test_cephes) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/scipy/0.5.2-forceIntel/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 367, in check_pbdv File "/opt/numpy/1.0.1//lib/python2.4/site-packages/numpy/testing/utils.py", line 137, in assert_equal assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), verbose) File 
"/opt/numpy/1.0.1//lib/python2.4/site-packages/numpy/testing/utils.py", line 143, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: item=1 ACTUAL: 1.0 DESIRED: 0.0 ====================================================================== FAIL: check_syev (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/scipy/0.5.2-forceIntel/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 15, in check_syev File "/opt/numpy/1.0.1//lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/opt/numpy/1.0.1//lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([ 0.12523627, 0.38202888, -0.27607083]) y: array([ 0.1252365 , 0.38202912, -0.27607253], dtype=float32) ---------------------------------------------------------------------- Ran 1596 tests in 3.633s FAILED (failures=6) Don't worry about a warning regarding the number of bytes read. Took 13 points. Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 Use minimum degree ordering on A'+A. Use minimum degree ordering on A'+A. Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 Use minimum degree ordering on A'+A. Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 Use minimum degree ordering on A'+A. Ties preclude use of exact statistic. Ties preclude use of exact statistic. **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. 
Notes:
* If atlas library is not found by numpy/distutils/system_info.py, then
  scipy uses flapack instead of clapack.
****************************************************************
Result may be inaccurate, approximate err = 1.292748962e-08
Result may be inaccurate, approximate err = 7.27595761418e-12
Residual: 1.05006926991e-07
****************************************************************
WARNING: cblas module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by numpy/distutils/system_info.py, then
  scipy uses fblas instead of cblas.
****************************************************************

From a.u.r.e.l.i.a.n at gmx.net Mon Feb 26 03:17:02 2007
From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert)
Date: Mon, 26 Feb 2007 09:17:02 +0100
Subject: [SciPy-user] Need Advice With Arrays and Calculating Eigenvectors
In-Reply-To: 
References: 
Message-ID: <200702260917.02628.a.u.r.e.l.i.a.n@gmx.net>

Hi,

here's your code as I would have written it. :-) My comments are those starting with four #'s. As a side remark, usually one uses four spaces for indentation. I hope I did not mix it up.

An alternative approach to what I did below would be to map the keys ('eco', 'nat', 'soc') to integers and use lists instead of dicts.

Johannes

def weightcalc():
    # First: average for each position by category
    meanvotes = [] #### Do you use this?

    #### not much can be done about this
    stmt1 = """select cat, pos, avg(pr1), avg(pr2), avg(pr3), avg(pr4),
        avg(pr5), avg(pr6), avg(pr7), avg(pr8), avg(pr9), avg(pr10), avg(pr11),
        avg(pr12), avg(pr13), avg(pr14), avg(pr15), avg(pr16), avg(pr17),
        avg(pr18), avg(pr19), avg(pr20), avg(pr21), avg(pr22), avg(pr23),
        avg(pr24), avg(pr25), avg(pr26), avg(pr27), avg(pr28) from voting
        group by cat, pos"""
    appData.cur.execute(stmt1)
    prefbar = appData.cur.fetchall()
    # print prefbar

    #### using a dict saves us from creating all the lists
    data = {}
    for item in prefbar:
        #### create dict entries on demand
        if item[0] not in data:
            data[item[0]] = {}
        if item[1] not in data[item[0]]:
            data[item[0]][item[1]] = []
        #### append to list
        data[item[0]][item[1]].append(item[2:])

    catarrays = {}
    averages = {}
    for key in ['eco', 'nat', 'soc']:
        catarrays[key] = []
        for subkey in ['con', 'neu', 'pro']:
            #### btw. I don't understand why you throw con, neu, pro in one list
            #### now after sorting them out in advance.
            try:
                catarrays[key].append(data[key][subkey])
            except KeyError:
                #### data[key][subkey] was not set
                print 'No data for %s,%s' % (key, subkey)
        #### convert to array
        catarrays[key] = array(catarrays[key])
        #### average
        averages[key] = average(catarrays[key], axis=1)

From lev at columbia.edu Mon Feb 26 05:46:56 2007
From: lev at columbia.edu (Lev Givon)
Date: Mon, 26 Feb 2007 05:46:56 -0500
Subject: [SciPy-user] question re odeint
Message-ID: <20070226104656.GA6348@localhost.cc.columbia.edu>

While using scipy.integrate.odeint recently, I noticed that it modifies the object passed as the system's initial value. Is this behavior intentional? It seems that it would be preferable for odeint to use a copy of the initial value rather than manipulating whatever is passed as the second parameter directly.

L.G.
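The defensive-copy behaviour Lev is asking for amounts to the integrator working on a private array. A minimal sketch of the pattern, using a plain Euler stepper for illustration (not SciPy's actual lsoda wrapper, and the fix later applied to odeint may differ in detail):

```python
import numpy as np

def euler_odeint(func, y0, t):
    # Copy the initial value so the caller's object is never mutated,
    # even though we step `y` in place internally.
    y = np.array(y0, dtype=float, copy=True)
    out = [y.copy()]
    for t0, t1 in zip(t[:-1], t[1:]):
        y += (t1 - t0) * np.asarray(func(y, t0))   # in-place Euler step
        out.append(y.copy())
    return np.array(out)

y0 = np.array([1.0])
traj = euler_odeint(lambda y, t: -y, y0, np.linspace(0.0, 1.0, 101))
```

After the call, `y0` still holds its original value, which is the point of the copy; without it, the in-place `y += ...` would silently overwrite the caller's array.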
From nwagner at iam.uni-stuttgart.de Mon Feb 26 06:38:20 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 26 Feb 2007 12:38:20 +0100 Subject: [SciPy-user] question re odeint In-Reply-To: <20070226104656.GA6348@localhost.cc.columbia.edu> References: <20070226104656.GA6348@localhost.cc.columbia.edu> Message-ID: <45E2C6AC.6090409@iam.uni-stuttgart.de> Lev Givon wrote: > While using scipy.integrate.odeint recently, I noticed that it > modifies the object passed as the system's initial value. Is this > behavior intentional? It seems that it would be preferable for odeint > to use a copy of the initial value rather than manipulating whatever > is passed as the second parameter directly. > > L.G. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Fixed in svn http://projects.scipy.org/scipy/scipy/changeset/2704 Nils From novin01 at gmail.com Mon Feb 26 06:15:43 2007 From: novin01 at gmail.com (Dave) Date: Mon, 26 Feb 2007 11:15:43 +0000 (UTC) Subject: [SciPy-user] Transforming 1-d array to 2-d array References: Message-ID: Rich Shepard appl-ecosys.com> writes: > > On Sun, 25 Feb 2007, Rich Shepard wrote: > > > It appears that the eye() function is the tool, but when I try > > > > foo = eye(barEco,8,8,1) > > print foo > > I've also tried triu() and mat(), but neither prints the results I need. > > Rich > eye simply creates an array with ones on the diagonal. To solve this problem I would create a zero array of the correct size and then index into the array. 
#Define array size N = 8 #Define the data to put in the upper-diagonal part of the array myData = rand(N*(N-1)/2) #Create an index to the upper-diagonal part idx = triu(ones([N,N])-eye(N)).nonzero() #Instantiate a zero array and populate the upper-triangular part with myData A = zeros([N,N],dtype=float) A[idx] = myData #Place ones on the diagonal A += eye(N) From mjakubik at ta3.sk Mon Feb 26 08:34:57 2007 From: mjakubik at ta3.sk (Marian Jakubik) Date: Mon, 26 Feb 2007 14:34:57 +0100 Subject: [SciPy-user] Fitting - Gauss-normal distribution Message-ID: <20070226143457.4a90b2db@jakubik.ta3.sk> Hi, I am a SciPy newbie solving this problem: I would like to fit data with gaussian normal distribution.... First, I generated data: list=normal(0.00714,0.0005,140) Then I plot this data: pylab.hist(list,20) And at the end, I'd like to plot a gauss fit in the graph, also.... Could anyone help me, please? Marian This is my code: from numpy import * from RandomArray import * import pylab as p list=normal(0.00714,0.0005,140) p.hist(list,20) p.show() From cimrman3 at ntc.zcu.cz Mon Feb 26 08:48:01 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 26 Feb 2007 14:48:01 +0100 Subject: [SciPy-user] ANN: new version of SFE (00.18.09) is available Message-ID: <45E2E511.4040405@ntc.zcu.cz> Hi, version 00.18.09 brings an improved problem definition syntax (regions are now per term, e.g. dw_poisson.Omega( ... )) and Navier-Stokes terms, see examples at http://ui505p06-mbs.ntc.zcu.cz/sfe cheers, r. 
From meesters at uni-mainz.de Mon Feb 26 08:54:27 2007 From: meesters at uni-mainz.de (Christian Meesters) Date: Mon, 26 Feb 2007 14:54:27 +0100 Subject: [SciPy-user] Fitting - Gauss-normal distribution In-Reply-To: <20070226143457.4a90b2db@jakubik.ta3.sk> References: <20070226143457.4a90b2db@jakubik.ta3.sk> Message-ID: <200702261454.28465.meesters@uni-mainz.de> Hi, you are welcome to use this code - not tested, since it is only roughly translated from a more complex function of mine: from numpy import argmax, exp, sqrt, log from scipy import std from scipy.optimize import leastsq def fit_gaussian(x_data, y_data): # ?_data should be numpy arrays # estimate the expectation value (index of the peak; assumes x is the channel index) expect = argmax(y_data) # find +/- 10 elements around the peak subxdata = x_data[expect-10:expect+11] subydata = y_data[expect-10:expect+11] #estimate the std sigma = std([inpt for inpt in subydata if inpt > 100.0])/len(subydata)**2 #really dirty hack!! #estimate the maximum maximum = max(y_data) #define starting parameters (as 'first guess') parameters0 = [sigma, expect, maximum] def __residuals(params, value, inpt): """ calculates the residuals """ sigma, expect, maximum = params err = value - (maximum * exp((-((inpt-expect)/sigma)**2)/2)) #the equation above allows for adding a constant return err def __peval(inpt, params): """ evaluates the function """ sigma, expect, maximum = params return (maximum * exp((-((inpt-expect)/sigma)**2)/2)) #calculate fit parameters plsq = leastsq(__residuals, parameters0, args=(subydata, subxdata)) #calculate 'full width half maximum' parameter for a gaussian fit fwhm = 2*sqrt(2*log(2))*plsq[0][0] return plsq[0], fwhm, subxdata, subydata, __peval(subxdata, plsq[0]) The code above is not really neat, but allows for a peak shifted along your 'y-axis'. The return value is a tuple of (sigma, mu, max), FWHM, subarray of x data, subarray of y data, and the fitted function as an array. 
See http://www.scipy.org/Wiki/Documentation?action=AttachFile&do=get&target=scipy_tutorial.pdf for more information - the description of leastsq is really good! HTH Christian On Monday 26 February 2007 14:34, Marian Jakubik wrote: > Hi, I am a SciPy newbie solving this problem: > > I would like to fit data with gaussian normal distribution.... First, I > generated data: > > list=normal(0.00714,0.0005,140) > > Then I plot this data: > > pylab.hist(list,20) > > And at the end, I'd like to plot a gauss fit in the graph, also.... > > Could anyone help me, please? > > Marian > > This is my code: > > from numpy import * > > from RandomArray import * > > import pylab as p > > list=normal(0.00714,0.0005,140) > > p.hist(list,20) > > p.show() > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From david.huard at gmail.com Mon Feb 26 10:04:16 2007 From: david.huard at gmail.com (David Huard) Date: Mon, 26 Feb 2007 10:04:16 -0500 Subject: [SciPy-user] Fitting - Gauss-normal distribution In-Reply-To: <200702261454.28465.meesters@uni-mainz.de> References: <20070226143457.4a90b2db@jakubik.ta3.sk> <200702261454.28465.meesters@uni-mainz.de> Message-ID: <91cf711d0702260704i2e52d4b1j48927d5dee4678dd@mail.gmail.com> Hi Marian, You could fit the normal using the method of moments: mu = list.mean() sigma = list.std() x = linspace(list.min(), list.max(), 100) pdf = scipy.stats.norm.pdf(x, mu, sigma) s = subplot(111) s.plot(x, pdf) s.hist(list, 20, normed=True) David 2007/2/26, Christian Meesters : > > Hi, > > you are welcome to use this code - not tested, since it is only rougly > translated from a more complex function of mine: > > from scipy import std > from scipy.optimize import leastsq > > def fit_gaussian(x_data, y_data): # ?_data should be numpy arrays > # estimate the expectation value > expect = y_data[argmax(x_data)] > # find +/- 10 elements around the peak > subxdata = 
x_data[expect-10:expect+11] > subydata = y_data[expect-10:expect+11] > #estimate the std > sigma = std([inpt for inpt subydata in if inpt > 100.0 > ])/len(subydata)**2 > #really dirty hack!! > #estimate the maximum > maximum = max(y_data) > #define starting paramters (as 'first guess') > parameters0 = [sigma, expect, maximum] > > def __residuals(params, value, inpt): > """ > calculates the resdiuals > """ > sigma, expect, maximum = params > err = value - (maximum * exp((-((inpt-expect)/sigma)**2)/2)) > #the equation above allows for adding a constant > return err > > def __peval(inpt, params): > """ > evaluates the function > """ > sigma, expect, maximum = params > return (maximum * exp((-((inpt-expect)/sigma)**2)/2)) > > #calculate fit paramters > plsq = leastsq(__residuals, parameters0, args=(subintensity, > subchannel)) > #calculate 'full width half maximum' parameter for a gaussian fit > fwhm = 2*sqrt(2*log(2))*plsq[0][0] > return plsq[0], fwhm, subxdata , subydata, __peval(subchannel, > plsq[0]) > > The code above is not really neat, but allows for a peak shifted along > your > 'y-axis'. The return value is a tuple of (simga,mu,max),FWHMsubarray of x > data, subarray of ydata, > and the fitted function as an array. > > See > > http://www.scipy.org/Wiki/Documentation?action=AttachFile&do=get&target=scipy_tutorial.pdf > for more information - the description of leastsq is really good! > > HTH > Christian > > > > > > > On Monday 26 February 2007 14:34, Marian Jakubik wrote: > > Hi, I am a SciPy newbie solving this problem: > > > > I would like to fit data with gaussian normal distribution.... First, I > > generated data: > > > > list=normal(0.00714,0.0005,140) > > > > Then I plot this data: > > > > pylab.hist(list,20) > > > > And at the end, I'd like to plot a gauss fit in the graph, also.... > > > > Could anyone help me, please? 
> > > > Marian > > > > This is my code: > > > > from numpy import * > > from RandomArray import * > > import pylab as p > > > > list=normal(0.00714,0.0005,140) > > > > p.hist(list,20) > > p.show() > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.warde.farley at utoronto.ca Mon Feb 26 13:46:22 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Mon, 26 Feb 2007 13:46:22 -0500 Subject: [SciPy-user] Fitting - Gauss-normal distribution In-Reply-To: <91cf711d0702260704i2e52d4b1j48927d5dee4678dd@mail.gmail.com> References: <20070226143457.4a90b2db@jakubik.ta3.sk> <200702261454.28465.meesters@uni-mainz.de> <91cf711d0702260704i2e52d4b1j48927d5dee4678dd@mail.gmail.com> Message-ID: <1172515582.19486.17.camel@rodimus> Hi, So, it should be noted that the method of moments estimators for a Gaussian distribution are also the maximum likelihood estimators, i.e. the ones that maximize p(data|parameters), as well as the best least square estimator (since taking the log of the density function gives you scaled squared distance from the mean). So optimizing iteratively is hardly necessary in this case. i.e. what David Huard wrote is probably what you're looking for, and the best you're going to be able to do if your goal is just to fit a Gaussian to the data. 
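David's point that the moment estimates are exactly the maximum-likelihood estimates can be checked numerically: perturbing either parameter away from the sample mean and (population) standard deviation can only lower the Gaussian log-likelihood. A small sketch using modern NumPy for illustration (the seed is arbitrary; the distribution parameters are the ones used earlier in the thread):

```python
import numpy as np

# Illustrative sample; same parameters as Marian's example earlier in the thread.
rng = np.random.default_rng(0)
data = rng.normal(0.00714, 0.0005, 140)

# Method-of-moments estimates (std with ddof=0, i.e. the ML variant):
mu, sigma = data.mean(), data.std()

def loglike(m, s):
    # Gaussian log-likelihood of the whole sample under N(m, s**2)
    return -0.5 * np.sum(((data - m) / s) ** 2) - data.size * np.log(s * np.sqrt(2 * np.pi))

# Moving either parameter off the moment estimates strictly lowers the
# likelihood -- they are the MLE.
best = loglike(mu, sigma)
for dm, ds in [(1e-4, 0.0), (-1e-4, 0.0), (0.0, 1e-4), (0.0, -1e-4)]:
    assert loglike(mu + dm, sigma + ds) < best
```

This is why no iterative optimizer is needed for a single Gaussian: the closed-form moment estimates already sit at the likelihood maximum.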
David On Mon, 2007-02-26 at 10:04 -0500, David Huard wrote: > Hi Marian, > > You could fit the normal using the method of moments: > > mu = list.mean() > sigma = list.std() > > x = linspace(list.min(), list.max(), 100) > pdf = scipy.stats.norm.pdf(x, mu, sigma) > s = subplot(111) > s.plot(x,y) > s.hist(list, 20, normed=True) > > > David > > 2007/2/26, Christian Meesters : > Hi, > > you are welcome to use this code - not tested, since it is > only rougly > translated from a more complex function of mine: > > from scipy import std > from scipy.optimize import leastsq > > def fit_gaussian(x_data, y_data): # ?_data should be numpy > arrays > # estimate the expectation value > expect = y_data[argmax(x_data)] > # find +/- 10 elements around the peak > subxdata = x_data[expect-10:expect+11] > subydata = y_data[expect-10:expect+11] > #estimate the std > sigma = std([inpt for inpt subydata in if inpt > > 100.0])/len(subydata)**2 > #really dirty hack!! > #estimate the maximum > maximum = max(y_data) > #define starting paramters (as 'first guess') > parameters0 = [sigma, expect, maximum] > > def __residuals(params, value, inpt): > """ > calculates the resdiuals > """ > sigma, expect, maximum = params > err = value - (maximum * > exp((-((inpt-expect)/sigma)**2)/2)) > #the equation above allows for adding a constant > return err > > def __peval(inpt, params): > """ > evaluates the function > """ > sigma, expect, maximum = params > return (maximum * > exp((-((inpt-expect)/sigma)**2)/2)) > > #calculate fit paramters > plsq = leastsq(__residuals, parameters0, > args=(subintensity, > subchannel)) > #calculate 'full width half maximum' parameter for a > gaussian fit > fwhm = 2*sqrt(2*log(2))*plsq[0][0] > return plsq[0], fwhm, subxdata , subydata, > __peval(subchannel, > plsq[0]) > > The code above is not really neat, but allows for a peak > shifted along your > 'y-axis'. 
The return value is a tuple of > (simga,mu,max),FWHMsubarray of x > data, subarray of ydata, > and the fitted function as an array. > > See > http://www.scipy.org/Wiki/Documentation?action=AttachFile&do=get&target=scipy_tutorial.pdf > for more information - the description of leastsq is really > good! > > HTH > Christian > > > > > > > On Monday 26 February 2007 14:34, Marian Jakubik wrote: > > Hi, I am a SciPy newbie solving this problem: > > > > I would like to fit data with gaussian normal > distribution.... First, I > > generated data: > > > > list=normal(0.00714,0.0005,140) > > > > Then I plot this data: > > > > pylab.hist (list,20) > > > > And at the end, I'd like to plot a gauss fit in the graph, > also.... > > > > Could anyone help me, please? > > > > Marian > > > > This is my code: > > > > from numpy import * > > from RandomArray import * > > import pylab as p > > > > list=normal(0.00714,0.0005,140) > > > > p.hist(list,20) > > p.show() > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From rshepard at appl-ecosys.com Mon Feb 26 15:39:06 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 26 Feb 2007 12:39:06 -0800 (PST) Subject: [SciPy-user] Need Advice With Arrays and Calculating Eigenvectors In-Reply-To: <200702260917.02628.a.u.r.e.l.i.a.n@gmx.net> References: <200702260917.02628.a.u.r.e.l.i.a.n@gmx.net> Message-ID: On Mon, 26 Feb 2007, Johannes Loehnert wrote: > here's your code as I would have written it. :-) My comments are those > starting with four #'s. Thank you, Johannes. 
> As a side remark, usually one uses four spaces for intendation. I hope I did > not mix it up. Yes, I know that's the standard for python code contributed to projects. I've used two space tabs for indenting C code for a couple of decades now, and I find it easier to read. Since this code is being used by us, we format for our convenience. > An alternative approach to what I did below would be to map the keys ('eco', > 'nat', 'soc') to integers and use lists instead of dicts. That can be done for this function. But, if dicts work, that's OK, too. > def weightcalc(): > # First: average for each position by category > meanvotes = [] #### Do you use this? No, I meant to take that out, and have. > #### not much can be done about this > stmt1 = """select cat, pos, avg(pr1), avg(pr2), avg(pr3), avg(pr4), > avg(pr5), avg(pr6), avg(pr7), avg(pr8), avg(pr9), avg(pr10), avg(pr11), > avg(pr12), avg(pr13), avg(pr14), avg(pr15), avg(pr16), avg(pr17), > avg(pr18), avg(pr19), avg(pr20), avg(pr21), avg(pr22), avg(pr23), > avg(pr24), avg(pr25), avg(pr26), avg(pr27), avg(pr28) from voting group by > cat, pos""" > appData.cur.execute(stmt1) > prefbar = appData.cur.fetchall() > # print prefbar This is the source of the data: a SQLite3 database table. > #### using a dict saves us from creating all the lists > data = {} > for item in prefbar: > #### create dict entries on demand > if item[0] not in data: > data[item[0]] = {} > if item[1] not in data[item[0]]: > data[item[0]][item[1]] = [] > > #### append to list > data[item[0]][item[1]].append(item[2:]) The above seems to do the opposite of what I need. 'prefbar' is a list of tuples, and the first two items of each tuple are strings. I want to remove those strings and have only the reals. Doesn't the above just copy prefbar to data? > catarrays = {} > averages = {} > for key in ['eco', 'nat', 'soc']: > catarrays[key] = [] > for subkey in ['con', 'neu', 'pro'] > #### btw. 
I don't understand why you throw con, neu, pro in one list > #### now after sorting them out in advance. Let me try to explain. I have 9 sets of data as records in the database. The sets are eco/con, eco/neu, eco/pro, nat/con, nat/neu, nat/pro, soc/con, soc/neu, and soc/pro. First, I need to average the 28 items in each of those 9 sets. Second, I need to average the three average values for each of the 28 items within the main sets of eco, nat, and soc. Results can be skewed if only a single, overall average of the 28 items is calculated in a single step. > try: > catarrays[key].append(data[key][subkey]) > except KeyError: > #### data[key][subkey] was not set > print 'No data for %s,%s'%(key, subkey) > #### convert to array > catarrays[key] = array(catarrays[key]) > #### average > averages[key] = average(catarrays(key), axis=1) This seems to be taking the averages in one step. We need them to be in two steps. Am I mis-reading this? Thanks, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Mon Feb 26 15:43:37 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 26 Feb 2007 12:43:37 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array In-Reply-To: References: Message-ID: On Mon, 26 Feb 2007, Dave wrote: > eye simply creates an array with ones on the diagonal. To solve this > problem I would create a zero array of the correct size and then index > into the array. Hi, Dave, What, then, is the difference between eye() and identity()? > #Define array size > N = 8 > > #Define the data to put in the upper-diagonal part of the array > myData = rand(N*(N-1)/2) Well, rather than random, these data are the averages of averages of each item in the list of tuples retrieved from the database table. 
> #Create an index to the upper-diagonal part > idx = triu(ones([8,8])-eye(8)).nonzero() > #Instantiate a zero array and populate the upper-triangular part with myData > A = zeros([N,N],dtype=float) > A[idx] = myData > > #Place ones on the diagonal > A += eye(N) I don't immediately understand what this is doing. I'll have to play with it in ipython. Ultimately, I need the averaged item values in the upper half of an 8x8 matrix, 1s along the diagonal, and 1/values in the lower half of the matrix. Probably not the simplest introduction to NumPy and SciPy, but it's what I need to do. Many thanks, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Mon Feb 26 16:21:46 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 26 Feb 2007 13:21:46 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array In-Reply-To: References: Message-ID: On Mon, 26 Feb 2007, Dave wrote: > #Define array size > N = 8 > #Define the data to put in the upper-diagonal part of the array > myData = rand(N*(N-1)/2) > #Create an index to the upper-diagonal part > idx = triu(ones([8,8])-eye(8)).nonzero() > #Instantiate a zero array and populate the upper-triangular part with myData > A = zeros([N,N],dtype=float) > A[idx] = myData > #Place ones on the diagonal > A += eye(N) Wow! It certainly does what I need. If only I understood why ... and how to create these solutions myself. I've read the brief descriptions of eye() and triu() in the book; where can I read how to apply them to solutions like this? Many thanks, Dave, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. 
| Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Mon Feb 26 16:44:42 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 26 Feb 2007 13:44:42 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array In-Reply-To: References: Message-ID: On Mon, 26 Feb 2007, Dave wrote: Dave, I tried applying this to one of my data arrays, but it fails at the next to last step. I'll put in my code and the results of print statements and ask what's different between your working example and my non-working data. > #Define array size > N = 8 > > #Define the data to put in the upper-diagonal part of the array > myData = rand(N*(N-1)/2) I have three arrays; the first one is named barEco and contains: [[ 2.29075869 2.12453058 3.06339111 2.88526612 3.32199956 2.96319486 3.12649018 2.81580625 3.23207315 2.493608 2.49459335 2.86643834 2.77111816 2.26500627 2.4902972 2.81149761 2.46213192 2.80432329 2.86150888 3.1135404 1.96135592 3.34577184 3.27458386 2.90845738 2.88745987 2.51245188 2.91666234 2.97438117]] I see that your myData array has only single brackets while my data have two. That seems to have resulted from manipulations where the database records were separated into individual lists and then averages computed > #Create an index to the upper-diagonal part > idx = triu(ones([8,8])-eye(8)).nonzero() idx = triu(ones([8,8])-eye(8)).nonzero() print idx (array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 6]), array([1, 2, 3, 4, 5, 6, 7, 2, 3, 4, 5, 6, 7, 3, 4, 5, 6, 7, 4, 5, 6, 7, 5, 6, 7, 6, 7, 7])) So this is OK. > #Instantiate a zero array and populate the upper-triangular part with myData > A = zeros([N,N],dtype=float) symEco = zeros([N,N],dtype=float) print symEco [[ 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 
0.]] > A[idx] = myData symEco[idx] = barEco print symEco Traceback (most recent call last): File "/data1/eikos/scopingPage.py", line 184, in OnCalcWeights inpWts = functions.weightcalc() File "/data1/eikos/functions.py", line 177, in weightcalc symEco[idx] = barEco ValueError: array is not broadcastable to correct shape Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From peridot.faceted at gmail.com Mon Feb 26 19:52:30 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 26 Feb 2007 19:52:30 -0500 Subject: [SciPy-user] Fitting - Gauss-normal distribution In-Reply-To: <1172515582.19486.17.camel@rodimus> References: <20070226143457.4a90b2db@jakubik.ta3.sk> <200702261454.28465.meesters@uni-mainz.de> <91cf711d0702260704i2e52d4b1j48927d5dee4678dd@mail.gmail.com> <1172515582.19486.17.camel@rodimus> Message-ID: On 26/02/07, David Warde-Farley wrote: > So, it should be noted that the method of moments estimators for a > Gaussian distribution are also the maximum likelihood estimators, i.e. > the ones that maximize p(data|parameters), as well as the best least > square estimator (since taking the log of the density function gives you > scaled squared distance from the mean). So optimizing iteratively is > hardly necessary in this case. Indeed, fitting a Gaussian is pretty easy. If you want to fit something more sophisticated (even just two Gaussians, for a bimodal distribution), the way to go is probably not to construct a histogram first. A good approach is to fit for a maximum-likelihood estimate. That is, if you have a pdf f(p1, p2, ..., pn, x) that has n parameters and gives the probability (density) for x given all those parameters, set up a nonlinear optimization for the product f(p1, ..., pn, x1)*...*f(p1, ..., pn, xm). You are likely to have better numerical behaviour if you instead minimize the negative logarithm of this. Anne M. 
Archibald From david at ar.media.kyoto-u.ac.jp Mon Feb 26 22:53:58 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 27 Feb 2007 12:53:58 +0900 Subject: [SciPy-user] Fitting - Gauss-normal distribution In-Reply-To: References: <20070226143457.4a90b2db@jakubik.ta3.sk> <200702261454.28465.meesters@uni-mainz.de> <91cf711d0702260704i2e52d4b1j48927d5dee4678dd@mail.gmail.com> <1172515582.19486.17.camel@rodimus> Message-ID: <45E3AB56.4060606@ar.media.kyoto-u.ac.jp> Anne Archibald wrote: > On 26/02/07, David Warde-Farley wrote: > > >> So, it should be noted that the method of moments estimators for a >> Gaussian distribution are also the maximum likelihood estimators, i.e. >> the ones that maximize p(data|parameters), as well as the best least >> square estimator (since taking the log of the density function gives you >> scaled squared distance from the mean). So optimizing iteratively is >> hardly necessary in this case. >> > > Indeed, fitting a Gaussian is pretty easy. If you want to fit > something more sphisticated (even just two Gaussians, for a bimodal > distribution), the way to go is probably not to constuct a histogram > first. A good approach is to fit for a maximum-likelihood estimate. > That is, if you have a pdf f(p1, p2, ..., pn, x) that has n parameters > and gives the probability (density) for x given all those parameters, > set up a nonlinear optimization for the product f(p1, ..., pn, > x1)*...*f(p1, ..., xm). > Note that one standard iterative algorithm to fit a mixture of Gaussian using maximum likelihood method (Expectation Maximization) is implemented in the scipy.sandbox.pyem. 
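Anne's recipe — write down the negative log of the likelihood product and hand it to an optimizer — can be sketched with nothing but the standard library. In this sketch a crude grid search stands in for a real optimizer (in practice one would pass `nll` to something like scipy.optimize.fmin), and the sample and its parameters are invented for illustration:

```python
import math
import random

# Invented sample: 500 draws from N(5, 2**2); the seed is arbitrary.
random.seed(42)
xs = [random.gauss(5.0, 2.0) for _ in range(500)]

def nll(mu, sigma):
    """Negative log of the likelihood product f(mu, sigma, x1)*...*f(mu, sigma, xm)."""
    return sum(0.5 * ((x - mu) / sigma) ** 2 + math.log(sigma * math.sqrt(2.0 * math.pi))
               for x in xs)

# Minimize over a coarse grid (mu in [3, 7], sigma in [1, 4], step 0.1);
# a real application would use a nonlinear optimizer on nll instead.
candidates = ((nll(m / 10.0, s / 10.0), m / 10.0, s / 10.0)
              for m in range(30, 71) for s in range(10, 41))
_, mu_hat, sigma_hat = min(candidates)
```

For a single Gaussian this just recovers the sample mean and standard deviation, but the same negative-log-likelihood setup carries over unchanged to mixtures and other pdfs where no closed form exists.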
David From rshepard at appl-ecosys.com Mon Feb 26 22:57:18 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 26 Feb 2007 19:57:18 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array In-Reply-To: References: Message-ID: On Mon, 26 Feb 2007, Rich Shepard wrote: > I have three arrays; the first one is named barEco and contains: > > [[ 2.29075869 2.12453058 3.06339111 2.88526612 3.32199956 2.96319486 > 3.12649018 2.81580625 3.23207315 2.493608 2.49459335 2.86643834 > 2.77111816 2.26500627 2.4902972 2.81149761 2.46213192 2.80432329 > 2.86150888 3.1135404 1.96135592 3.34577184 3.27458386 2.90845738 > 2.88745987 2.51245188 2.91666234 2.97438117]] > > I see that your myData array has only single brackets while my data have > two. That seems to have resulted from manipulations where the database > records were separated into individual lists and then averages computed Well, duh! I used an intermediate array, then extracted [0] from it, and everything works. I still don't fully understand the 'idx' variable, but I'm sure that will come with further reading, thinking about it, and playing with the python shell. The final step is to take the inverse of each of the values in the upper half and put it in the mirror position of the lower half. More reading required to figure that one out. Thanks again, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. 
| Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Mon Feb 26 23:35:14 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 26 Feb 2007 20:35:14 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array In-Reply-To: References: Message-ID: On Mon, 26 Feb 2007, Dave wrote: > #Define the data to put in the upper-diagonal part of the array > myData = rand(N*(N-1)/2) > > #Create an index to the upper-diagonal part > idx = triu(ones([8,8])-eye(8)).nonzero() > > #Instantiate a zero array and populate the upper-triangular part with myData > A = zeros([N,N],dtype=float) > A[idx] = myData > > #Place ones on the diagonal > A += eye(N) Dave, Now I need a bit more guidance, to complete the symmetrical matrices. Instead of 'myData' above, my starting arrays are barEco, barNat, and barSoc. They are the upper halves of the matrix 'A' above, with 1s in the diagonal. I created an index to the lower-diagonal part: udx = tril(ones([8,8])-eye(8)).nonzero() And used them to produce the bottom halves with 1s as the diagonal; for example: [[ 1. 0. 0. 0. 0. 0. 0. 0.] [ 0.4365366 1. 0. 0. 0. 0. 0. 0.] [ 0.47069221 0.32643563 1. 0. 0. 0. 0. 0.] [ 0.34658848 0.30102352 0.33747359 1. 0. 0. 0. 0.] [ 0.31984748 0.35513807 0.30939894 0.40102534 1. 0. 0. 0.] [ 0.40086694 0.348865 0.36086516 0.44149988 0.4015585 1. 0. 0.] [ 0.35568232 0.40615208 0.35659227 0.34946598 0.32117778 0.50985137 1. 0.] [ 0.2988847 0.30538231 0.34382488 0.34632516 0.39801757 0.34285765 0.33620439 1.]] Now, my question is how to combine the upper half ('A', above) with the lower half leaving only one diagonal with 1s. I've tried several ways, but they're all incorrect: they don't work and python complains. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. 
| Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From matthieu.brucher at gmail.com Tue Feb 27 02:36:05 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 27 Feb 2007 08:36:05 +0100 Subject: [SciPy-user] Proposal for more generic optimizers In-Reply-To: References: Message-ID: Hi, I'm migrating toward Python for some weeks, but I do not find the tools I had to develop for my PhD in SciPy at the moment. I can't, for instance, find an elegant way to save the set of parameters used in an optimization for the standard algorithms. What is more, I think they can be more generic. What I did in C++, and I'd like your opinion about porting it in Python, was to define a standard optimizer with no iteration loop - iterate was a pure virtual method called by an optimize method -. This iteration loop was then defined for standard optimizer or damped optimizer. Each time, the parameters tested could be saved. Then, the step that had to be taken was an instance of a class that used a gradient step, a Newton step, ... and the same was used for the stopping criterion. The function was a class that defined value, gradient, hessian, ... if needed. For instance, a simplified instruction could have been : Optimizer* optimizer = StandardOptimizer</* template parameters not relevant in Python */>(function, GradientStep(), SimpleCriterion(NbMaxIterations), step, saveParameters); optimizer->optimize(); optimizer->getOptimalParameters(); The "step" argument was a constant by which the computed step had to be multiplied, by default, it was 1. I know that this kind of writing is not as clear and lightweight as the current one, which is used by Matlab too. But perhaps giving more latitude to the user can be proposed with this system. If people want, I can try making a real Python example... Matthieu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From a.u.r.e.l.i.a.n at gmx.net Tue Feb 27 03:23:10 2007 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue, 27 Feb 2007 09:23:10 +0100 Subject: [SciPy-user] Need Advice With Arrays and Calculating Eigenvectors In-Reply-To: References: <200702260917.02628.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <200702270923.10266.a.u.r.e.l.i.a.n@gmx.net> Hi, > > #### using a dict saves us from creating all the lists > > data = {} > > > > for item in prefbar: > > #### create dict entries on demand > > if item[0] not in data: > > data[item[0]] = {} > > if item[1] not in data[item[0]]: > > data[item[0]][item[1]] = [] > > > > #### append to list > > data[item[0]][item[1]].append(item[2:]) > > The above seems to do the opposite of what I need. 'prefbar' is a list > of tuples, and the first two items of each tuple are strings. I want to > remove those strings and have only the reals. Doesn't the above just copy > prefbar to data? No. data is a dict containing the three main keys (eco, nat, soc). Each maps to a dict containing the subkeys (con, neu, pro). Each of those maps to a list of data. E.g. for item = [('eco', 'con', 1, 2, 3), ('eco', 'con', 4,5,6), ('eco', 'neu', 7,8,9), ('nat', 'neu', 10,11,12)] the result would be data == {'eco': {'con': [(1,2,3), (4,5,6)], 'neu': [(7,8,9)]}, 'nat': {'neu': [(10,11,12)]}}. So with data['eco']['con'] you get back [(1,2,3), (4,5,6)]. > > > catarrays = {} > > averages = {} > > for key in ['eco', 'nat', 'soc']: > > catarrays[key] = [] > > for subkey in ['con', 'neu', 'pro'] > > #### btw. I don't understand why you throw con, neu, pro in one > > list #### now after sorting them out in advance. > > Let me try to explain. I have 9 sets of data as records in the database. > The sets are eco/con, eco/neu, eco/pro, nat/con, nat/neu, nat/pro, soc/con, > soc/neu, and soc/pro. So you get only one data row for each category/subcategory combination? Or can there be multiple? > First, I need to average the 28 items in each of those 9 sets. 
> Second, I need to average the three average values for each of the 28 > items within the main sets of eco, nat, and soc. what do you mean by item? A float number? > > Results can be skewed if only a single, overall average of the 28 items > is calculated in a single step. > > > try: > > catarrays[key].append(data[key][subkey]) > > except KeyError: > > #### data[key][subkey] was not set > > print 'No data for %s,%s'%(key, subkey) > > #### convert to array > > catarrays[key] = array(catarrays[key]) > > #### average > > averages[key] = average(catarrays[key], axis=1) > > This seems to be taking the averages in one step. We need them to be in > two steps. Am I mis-reading this? Well, it ought to do exactly what your code did. averages['eco'] == barEco IIANM. Johannes From rshepard at appl-ecosys.com Tue Feb 27 08:52:48 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 05:52:48 -0800 (PST) Subject: [SciPy-user] Need Advice With Arrays and Calculating Eigenvectors In-Reply-To: <200702270923.10266.a.u.r.e.l.i.a.n@gmx.net> References: <200702260917.02628.a.u.r.e.l.i.a.n@gmx.net> <200702270923.10266.a.u.r.e.l.i.a.n@gmx.net> Message-ID: On Tue, 27 Feb 2007, Johannes Loehnert wrote: > No. data is a dict containing the three main keys (eco, nat, soc). Each > maps to a dict containing the subkeys (con, neu, pro). Each of those maps > to a list of data. E.g. for > > item = [('eco', 'con', 1, 2, 3), ('eco', 'con', 4,5,6), ('eco', 'neu', 7,8,9), > ('nat', 'neu', 10,11,12)] > > the result would be > > data == {'eco': {'con': [(1,2,3), (4,5,6)], > 'neu': [(7,8,9)]}, > 'nat': {'neu': [(10,11,12)]}}. > > So with data['eco']['con'] you get back [(1,2,3), (4,5,6)]. A-ha! Now I see it. Thanks very much, Johannes. > So you get only one data row for each category/subcategory combination? Or > can there be multiple? Multiple rows for each category/subcategory. >> First, I need to average the 28 items in each of those 9 sets. 
>> Second, I need to average the three average values for each of the 28 >> items within the main sets of eco, nat, and soc. > > what do you mean by item? A float number? Yes. The floats are averaged by subcategory, then the subcategories are averaged by category. > Well, it ought to do exactly what your code did. averages['eco'] == barEco > IIANM. I see now how to write the code so it works. I misunderstood the first message. Again, thank you very much. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Tue Feb 27 09:04:19 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 06:04:19 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array In-Reply-To: References: Message-ID: On Mon, 26 Feb 2007, Rich Shepard wrote: > Now I need a bit more guidance, to complete the symmetrical matrices. It occurred to me after I posted this message that I'm going along the wrong path. I don't need a lower index, just the lower triangular half of the array with zeros along the principal diagonal and the upper half. Then I can add them (cell-wise, not matrix addition) and get the filled array I need. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From aisaac at american.edu Tue Feb 27 10:06:12 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 27 Feb 2007 10:06:12 -0500 Subject: [SciPy-user] Proposal for more generic optimizers In-Reply-To: References: Message-ID: On Tue, 27 Feb 2007, Matthieu Brucher apparently wrote: > Optimizer* optimizer = StandardOptimizer<... /* template parameters not relevant > in Python */>(function, GradientStep(), SimpleCriterion(NbMaxIterations), > step, saveParameters); > optimizer->optimize(); > optimizer->getOptimalParameters(); Seems like a good approach. 
(Put step and saveParameters in a keyword dict.) Cheers, Alan Isaac From matthieu.brucher at gmail.com Tue Feb 27 11:13:46 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 27 Feb 2007 17:13:46 +0100 Subject: [SciPy-user] Proposal for more generic optimizers In-Reply-To: References: Message-ID: Thanks for the tip :) I wonder whether the saveParameters argument could be avoided by using a default list that does nothing - append does nothing -, and people could add a container defining an "append" method. That would save a test as well. Matthieu 2007/2/27, Alan G Isaac : > > On Tue, 27 Feb 2007, Matthieu Brucher apparently wrote: > > Optimizer* optimizer = StandardOptimizer<... /* template parameters not relevant > > in Python */>(function, GradientStep(), SimpleCriterion(NbMaxIterations), > > step, saveParameters); > > optimizer->optimize(); > > optimizer->getOptimalParameters(); > > Seems like a good approach. > (Put step and saveParameters in a keyword dict.) > > Cheers, > Alan Isaac > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From krish.subramaniam at gmail.com Tue Feb 27 12:23:17 2007 From: krish.subramaniam at gmail.com (Krish Subramaniam) Date: Tue, 27 Feb 2007 09:23:17 -0800 Subject: [SciPy-user] A NifTI-1 I/O library for Numpy wrapped through ctypes Message-ID: Hello Folks I am new here. Whipped up a quick tutorial "A NifTI-1 I/O library for Numpy wrapped through ctypes" for beginners since I am not an advanced user myself. I have to thank the Numpy developers for their wonderful job. This is a write-up on how to get a library wrapped in ctypes and "translate" the data to a Numpy N-D object. First ever draft. Would it find a way into your wiki or something? 
so that people who are in the same background as mine ( scientific, wrap an existing C module, seriously contemplate using Scipy instead of Matlab) would benefit. You can find it here : http://krish.caltech.edu/niftiio.pdf ( Should I have mailed the devel list instead ? ) Regards Krish Subramaniam P.S ( I think my previous mail didn't get through. If this is the 2nd mail you are receiving, please pardon ) From peridot.faceted at gmail.com Tue Feb 27 12:42:13 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 27 Feb 2007 12:42:13 -0500 Subject: [SciPy-user] A NifTI-1 I/O library for Numpy wrapped through ctypes In-Reply-To: References: Message-ID: On 27/02/07, Krish Subramaniam wrote: > Hello Folks > > I am new here. > > Whipped up a quick tutorial "A NifTI-1 I/O library for Numpy wrapped > through ctypes" for beginners since I am not an advanced user myself. > I have to thank the Numpy developers for their wonderful job. This is > a write-up on how to get a library wrapped in ctypes and "translate" > the data to a Numpy N-D object. First ever draft. > > Would it find a way into your wiki or something? so that people who > are in the same background as mine ( scientific, wrap an existing C > module, seriously contemplate using Scipy instead of Matlab) would > benefit. You can find it here : > > http://krish.caltech.edu/niftiio.pdf That's a handy document! If you want, you can just register for an account on the wiki (the usual two-minute thing) and attach it to some page (not sure what's appropriate just now). If you wanted to take a little longer, you could copy the text into a wiki page on its own, which would make it more searchable. Thanks for writing the document, Anne M. 
Archibald From krish.subramaniam at gmail.com Tue Feb 27 12:52:02 2007 From: krish.subramaniam at gmail.com (Krish Subramaniam) Date: Tue, 27 Feb 2007 09:52:02 -0800 Subject: [SciPy-user] A NifTI-1 I/O library for Numpy wrapped through ctypes In-Reply-To: References: Message-ID: Cool .. I can write it in the wiki markup language so that it's searchable.. I can include more code / examples and hopefully a few figures.. Thanks Krish On 2/27/07, Anne Archibald wrote: > On 27/02/07, Krish Subramaniam wrote: > > Hello Folks > > > > I am new here. > > > > Whipped up a quick tutorial "A NifTI-1 I/O library for Numpy wrapped > > through ctypes" for beginners since I am not an advanced user myself. > > I have to thank the Numpy developers for their wonderful job. This is > > a write-up on how to get a library wrapped in ctypes and "translate" > > the data to a Numpy N-D object. First ever draft. > > > > Would it find a way into your wiki or something? so that people who > > are in the same background as mine ( scientific, wrap an existing C > > module, seriously contemplate using Scipy instead of Matlab) would > > benefit. You can find it here : > > > > http://krish.caltech.edu/niftiio.pdf > > That's a handy document! > > If you want, you can just register for an account on the wiki (the > usual two-minute thing) and attach it to some page (not sure what's > appropriate just now). If you wanted to take a little longer, you > could copy the text into a wiki page on its own, which would make it > more searchable. > > Thanks for writing the document, > Anne M. 
Archibald > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rshepard at appl-ecosys.com Tue Feb 27 14:09:37 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 11:09:37 -0800 (PST) Subject: [SciPy-user] Combining Arrays Message-ID: I have two arrays: one is the upper triangular half with 1s in the diagonal, the other has the values for the lower triangular half. I also have an array that specifies how the values in the lower half array are to be assigned to rows and columns of the 2D array. However, as this is my first use of NumPy, I am stuck at how to put these together to form what I need. Here's the upper array (call it topArr): [[ 1. 2.29075869 2.12453058 3.06339111 2.88526612 3.32199956 2.96319486 3.12649018] [ 0. 1. 2.81580625 3.23207315 2.493608 2.49459335 2.86643834 2.77111816] [ 0. 0. 1. 2.26500627 2.4902972 2.81149761 2.46213192 2.80432329] [ 0. 0. 0. 1. 2.86150888 3.1135404 1.96135592 3.34577184] [ 0. 0. 0. 0. 1. 3.27458386 2.90845738 2.88745987] [ 0. 0. 0. 0. 0. 1. 2.51245188 2.91666234] [ 0. 0. 0. 0. 0. 0. 1. 2.97438117] [ 0. 0. 0. 0. 0. 0. 0. 1. ]] Here's the lower array (call it botArr): [ 0.4365366 0.47069221 0.32643563 0.34658848 0.30102352 0.33747359 0.31984748 0.35513807 0.30939894 0.40102534 0.40086694 0.348865 0.36086516 0.44149988 0.4015585 0.35568232 0.40615208 0.35659227 0.34946598 0.32117778 0.50985137 0.2988847 0.30538231 0.34382488 0.34632516 0.39801757 0.34285765 0.33620439] And, using 'udx = tril(ones([8,8])-eye(8)).nonzero()' this is udx: (array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7]), array([0, 0, 1, 0, 1, 2, 0, 1, 2, 3, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 6])) What I need is the array: [[ 1.00 2.29 2.12 3.06 ... ] [ 0.44 1.00 2.82 3.23 ... ] [ 0.47 0.22 1.00 2.26 ... ] ... ]] How do I do this, please? Rich -- Richard B. Shepard, Ph.D. 
| The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From as8ca at virginia.edu Tue Feb 27 14:21:29 2007 From: as8ca at virginia.edu (Alok Singhal) Date: Tue, 27 Feb 2007 14:21:29 -0500 Subject: [SciPy-user] Combining Arrays In-Reply-To: References: Message-ID: <20070227192129.GA3612@virginia.edu> Hi Rich, On 27/02/07: 11:09, Rich Shepard wrote: > I have two arrays: one is the upper triangular half with 1s in the > diagonal, the other has the values for the lower triangular half. I also > have an array that specifies how the values in the lower half array are to > be assigned to rows and columns of the 2D array. However, as this is my > first use of NumPy, I am stuck at how to put these together to form what I > need. [snip] > What I need is the array: > > [[ 1.00 2.29 2.12 3.06 ... ] > [ 0.44 1.00 2.82 3.23 ... ] > [ 0.47 0.22 1.00 2.26 ... ] > ... > ]] > > How do I do this, please? This works for me: new = zeros(topArr.shape, dtype=float) new[udx] = botArr new = new + topArr -Alok -- Alok Singhal * * Graduate Student, dept. of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From rshepard at appl-ecosys.com Tue Feb 27 14:53:31 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 11:53:31 -0800 (PST) Subject: [SciPy-user] Combining Arrays In-Reply-To: <20070227192129.GA3612@virginia.edu> References: <20070227192129.GA3612@virginia.edu> Message-ID: On Tue, 27 Feb 2007, Alok Singhal wrote: > This works for me: > > new = zeros(topArr.shape, dtype=float) > new[udx] = botArr > new = new + topArr Alok, I had the first two steps, but needed the last one. What I kept trying borked each time. Wonder if this is documented somewhere. I did not see it in the beta of the tutorial, the numpy example lists, or in Travis' book. But, it works like a charm. Thank you very much. Now to read how to use the eigen() function .... Rich -- Richard B. Shepard, Ph.D. 
| The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From bnuttall at uky.edu Tue Feb 27 15:09:47 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Tue, 27 Feb 2007 15:09:47 -0500 Subject: [SciPy-user] Using SciPy/NumPy optimization Message-ID: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> Folks, The usual confessions: I am relatively new to Python programming and SciPy. I have a problem I'm looking for some help in solving. I have a list of data pairs [[x1,y1], [x2,y2], ..., [xn,yn]] and am trying to find the best fit of that data to an equation: y = a*(1+b*c*y)^(-1/b) The parameters, b and c, are constrained: 1) 0<b<=5 2) -1<=c<=1 Parameter a is only weakly constrained in that x is usually >= max(x1, x2, ... , xn). My "best fit" goal is either to minimize the root mean square deviation (or consequently maximize the r-square value). Any suggestions? Thanks. Brandon C. Nuttall BNUTTALL at UKY.EDU Kentucky Geological Survey (859) 257-5500 University of Kentucky (859) 257-1147 (fax) 228 Mining & Mineral Resources Bldg http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 From rshepard at appl-ecosys.com Tue Feb 27 15:24:20 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 12:24:20 -0800 (PST) Subject: [SciPy-user] Proper Use of NumPy's eig() Message-ID: I start the module with from numpy import linalg and I need to find the principal Eigenvector of the matrix E. But, when I try eigE = eig(E) python responds "NameError: global name 'eig' is not defined." Do I need another module from NumPy, should I explicitly cast E to a matrix, or am I calling the function incorrectly? Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. 
| Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From robert.kern at gmail.com Tue Feb 27 15:25:59 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Feb 2007 14:25:59 -0600 Subject: [SciPy-user] Using SciPy/NumPy optimization In-Reply-To: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> Message-ID: <45E493D7.2070703@gmail.com> Brandon Nuttall wrote: > Folks, > > The usual confessions: I am relatively new to Python programming and SciPy. > I have a problem I'm looking for some help in solving. > > I have a list of data pairs [[x1,y1], [x2,y2], ..., [xn,yn]] and am trying > to find the best fit of that data to an equation: > > y = a*(1+b*c*y)^(-1/b) Presumably, you mean y = a*(1 + b*c*x) ** (-1.0/b) to correct a typo and use Python notation. > The parameters, b and c, are constrained: > 1) 0<b<=5 > 2) -1<=c<=1 > > Parameter a is only weakly constrained in that x is usually >= max(x1, x2, > ... , xn). > > My "best fit" goal is either to minimize the root mean square deviation (or > consequently maximize the r-square value). There are a number of constrained optimizers in scipy.optimize . scipy.optimize.fmin_tnc seems most appropriate for simple bounds like you have. In order to get a function to minimize that depends on data, I usually like to use a class: import numpy as np from scipy.optimize import fmin_tnc class LossFunction(object): def __init__(self, x, y): self.x = x self.y = y def __call__(self, abc): """ A function suitable for passing to the fmin() minimizers. 
""" a, b, c = abc y = a*(1.0 + b*c*self.x) ** (-1.0/b) dy = self.y - y return (dy*dy).sum() x = np.array([...]) y = np.array([...]) lf = LossFunction(x, y) abc0 = np.array([x.max(), 2.5, 0.0]) # or whatever retcode, nfeval, abc_optimal = fmin_tnc(lf, abc0, approx_grad=True, bounds=[(None, None), (0., 5.), (-1., 1.)]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bryanv at enthought.com Tue Feb 27 15:26:16 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Tue, 27 Feb 2007 14:26:16 -0600 Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: References: Message-ID: <45E493E8.8090003@enthought.com> eig is in the linalg module, so you need to use eigE = linalg.eig(E) Rich Shepard wrote: > I start the module with > from numpy import linalg > > and I need to find the principal Eigenvector of the matrix E. But, when I > try > > eigE = eig(E) > > python responds "NameError: global name 'eig' is not defined." > > Do I need another module from NumPy, should I explicitly cast E to a > matrix, or am I calling the function incorrectly? > > Rich > From david.warde.farley at utoronto.ca Tue Feb 27 15:27:10 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Tue, 27 Feb 2007 15:27:10 -0500 Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: References: Message-ID: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> If you're going to do it that way you'll have to call linalg.eig() David On 27-Feb-07, at 3:24 PM, Rich Shepard wrote: > I start the module with > from numpy import linalg > > and I need to find the principal Eigenvector of the matrix E. But, > when I > try > > eigE = eig(E) > > python responds "NameError: global name 'eig' is not defined." > > Do I need another module from NumPy, should I explicitly cast E > to a > matrix, or am I calling the function incorrectly? 
> > Rich > > -- > Richard B. Shepard, Ph.D. | The Environmental > Permitting > Applied Ecosystem Services, Inc. | Accelerator(TM) > Voice: 503-667-4517 Fax: > 503-667-8863 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Tue Feb 27 15:27:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Feb 2007 14:27:37 -0600 Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: References: Message-ID: <45E49439.4090603@gmail.com> Rich Shepard wrote: > I start the module with > from numpy import linalg > > and I need to find the principal Eigenvector of the matrix E. But, when I > try > > eigE = eig(E) > > python responds "NameError: global name 'eig' is not defined." > > Do I need another module from NumPy, should I explicitly cast E to a > matrix, or am I calling the function incorrectly? eigE = linalg.eig(E) Importing modules like that doesn't put their contents into the current namespace; rather it puts the module object itself into the namespace. http://docs.python.org/tut/node8.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jeremit0 at gmail.com Tue Feb 27 15:31:43 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Tue, 27 Feb 2007 15:31:43 -0500 Subject: [SciPy-user] no linalg module and failed check_integer test Message-ID: <3db594f70702271231s2b0f3fd3l79f1342b11d408d2@mail.gmail.com> I just installed scipy from svn and I am having some problems. The most vexing is that I can no longer load the linalg package, at least I can't do anything with it. For example: Python 2.5 (r25:51918, Sep 19 2006, 08:49:13) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy >>> scipy.pkgload('linalg') >>> help(scipy.linalg) Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'linalg' >>> A = scipy.ones((10,10)) >>> scipy.linalg.eig(A) Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'linalg' >>> I didn't notice any errors during the installation. I know this isn't a lot of information to go on, but does anyone know what the problem is? Secondly, and this might be related, when I run scipy.test(1,10) I get the error shown below. Could this be the cause of my linalg problem? Thanks in advance, Jeremy test_init (scipy.io.tests.test_npfile.test_npfile) ... ok test_parse_endian (scipy.io.tests.test_npfile.test_npfile) ... ok test_read_write_array (scipy.io.tests.test_npfile.test_npfile) ... ok test_read_write_raw (scipy.io.tests.test_npfile.test_npfile) ... ok test_remaining_bytes (scipy.io.tests.test_npfile.test_npfile) ... ok ====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/stats.py", line 190, in import scipy.special as special File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in from basic import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in from _cephes import * 
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, 2): Symbol not found: ___dso_handle Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so Expected in: dynamic lookup ---------------------------------------------------------------------- Ran 554 tests in 2.424s FAILED (errors=1) From rshepard at appl-ecosys.com Tue Feb 27 15:32:16 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 12:32:16 -0800 (PST) Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: <45E493E8.8090003@enthought.com> References: <45E493E8.8090003@enthought.com> Message-ID: On Tue, 27 Feb 2007, Bryan Van de Ven wrote: > eig in in the linalg module, so you need to use > eigE = linalg.eig(E) Ah, yes. Thank you very much. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Tue Feb 27 15:33:31 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 12:33:31 -0800 (PST) Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> Message-ID: On Tue, 27 Feb 2007, David Warde-Farley wrote: > If you're going to do it that way you'll have to call linalg.eig() David, Yes, it was an obvious oversight on my part. Do you recommend another way? Thanks, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. 
| Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From david.warde.farley at utoronto.ca Tue Feb 27 15:37:31 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Tue, 27 Feb 2007 15:37:31 -0500 Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> Message-ID: <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> Hah, it looks like 3 of us all jumped on the question at once. Sorry. :) If you're really keen on having short names for things you could use from scipy.linalg import eig or import scipy.linalg as L in the latter case you'd use L.eig(). I personally don't like cluttering up the namespace too much but I think that's my own obsessive streak. :) David On 27-Feb-07, at 3:33 PM, Rich Shepard wrote: > On Tue, 27 Feb 2007, David Warde-Farley wrote: > >> If you're going to do it that way you'll have to call linalg.eig() > > David, > > Yes, it was an obvious oversight on my part. Do you recommend > another way? > > Thanks, > > Rich > > -- > Richard B. Shepard, Ph.D. | The Environmental > Permitting > Applied Ecosystem Services, Inc. | Accelerator(TM) > Voice: 503-667-4517 Fax: > 503-667-8863 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Tue Feb 27 15:47:33 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Feb 2007 14:47:33 -0600 Subject: [SciPy-user] no linalg module and failed check_integer test In-Reply-To: <3db594f70702271231s2b0f3fd3l79f1342b11d408d2@mail.gmail.com> References: <3db594f70702271231s2b0f3fd3l79f1342b11d408d2@mail.gmail.com> Message-ID: <45E498E5.4040601@gmail.com> Jeremy Conlin wrote: > I just installed scipy from svn and I am having some problems. The > most vexing is that I can no longer load the linalg package, at least > I can't do anything with it. 
For example: > > Python 2.5 (r25:51918, Sep 19 2006, 08:49:13) > [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import scipy >>>> scipy.pkgload('linalg') >>>> help(scipy.linalg) > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'module' object has no attribute 'linalg' >>>> A = scipy.ones((10,10)) >>>> scipy.linalg.eig(A) > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'module' object has no attribute 'linalg' > > I didn't notice any errors during the installation. I know this isn't > a lot of information to go on, but does anyone know what the problem > is? Don't bother with pkgload. Just import scipy.linalg. > Secondly, and this might be related, when I run > > scipy.test(1,10) > > I get the error shown below. Could this be the cause of my linalg problem? > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, > 2): Symbol not found: ___dso_handle > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so > Expected in: dynamic lookup No, it's different. Exactly what versions of OS X (version number and Intel or PPC), gcc, gfortran (also where you got it from), Xcode do you have installed? You might need to install the latest version of cctools from here (AFAIK, the best place to get it): ftp://gcc.gnu.org/pub/gcc/infrastructure/cctools-590.36.dmg That said, I've never installed it on my MacBook, and I've never had this problem. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From lists.steve at arachnedesign.net Tue Feb 27 15:48:42 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Tue, 27 Feb 2007 15:48:42 -0500 Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> Message-ID: <5A3FFD61-82F5-4E71-BA7D-087E1D5D7B45@arachnedesign.net> > If you're really keen on having short names for things you could use > > from scipy.linalg import eig > > or > > import scipy.linalg as L > > in the latter case you'd use L.eig(). I personally don't like > cluttering up the namespace too much but I think that's my own > obsessive streak. :) And if you just want all of the functions in the scipy.linalg module to enter the global namespace, you can do: from scipy.linalg import * # ... eigE = eig(E) Typically, this approach isn't recommended when you're writing code, but if you're just exploring data in the shell and feel like the module prefixes are too much of a hassle for you to keep typing, this is your other option. -steve From jpeacock at mesoscopic.mines.edu Tue Feb 27 15:54:07 2007 From: jpeacock at mesoscopic.mines.edu (Jared Peacock) Date: Tue, 27 Feb 2007 13:54:07 -0700 Subject: [SciPy-user] Using mpfit Message-ID: <17892.39535.850287.24023@hipparchus.mines.edu> I'm trying to use mpfit but I can't seem to get it to work. I'm testing the code using this program, which is similar to the example from the mpfit text: import mpfit2 import Numeric x=Numeric.arange(1.,stop=100.) p0=[5.7, 2.2, 500., 1.5, 2000.] 
y=6.+3.5*x+200.*x**2+2.7*Numeric.sqrt(x)+2700.*Numeric.log(x) err=.1 def myfunct(p,fjac=None,x=None,y=None,err=None): f=p[0]+p[1]*x+p[2]*x**2+p[3]*Numeric.sqrt(x)+p[4]*Numeric.log(x) status=0 return [status,(y-f)/err] fa = {'x':x, 'y':y, 'err':err} m = mpfit2.mpfit(myfunct, xall=p0, functkw=fa) print 'status = ', m.status #if (m.status <= 0): print 'error message = ', m.errmsg print 'parameters = ', m.params But when I run it I get this error: m = mpfit2.mpfit(myfunct, xall=p0, functkw=fa) File "mpfit2.py", line 1007, in __init__ [self.status, fvec] = self.call(fcn, self.params, functkw) TypeError: unpack non-sequence I'm not sure what this means or how to fix it. Does anybody have any insight into this problem or into using mpfit? J. Peacock From rshepard at appl-ecosys.com Tue Feb 27 15:58:35 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 12:58:35 -0800 (PST) Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> Message-ID: On Tue, 27 Feb 2007, David Warde-Farley wrote: > If you're really keen on having short names for things you could use > from scipy.linalg import eig > or > import scipy.linalg as L > in the latter case you'd use L.eig(). I personally don't like cluttering up > the namespace too much but I think that's my own obsessive streak. :) David, I've no preference one way or the other. However, I don't see the answer I expected. My expectation is that all values would be in the range [0.00-1.00], but they're not. 
When I print eigE I see: (array([ 8.88174744e+00+0.j , 3.54286503e-01+2.48721395j, 3.54286503e-01-2.48721395j, -3.11162331e-01+1.00980412j, -3.11162331e-01-1.00980412j, -2.79755841e-01+0.46954619j, -2.79755841e-01-0.46954619j, -4.08484096e-01+0.j ]), array([[ 6.24249034e-01 +0.00000000e+00j, 4.46875199e-01 -2.63555328e-01j, 4.46875199e-01 +2.63555328e-01j, -1.42390973e-01 -5.61667808e-01j, -1.42390973e-01 +5.61667808e-01j, -6.46262210e-01 +0.00000000e+00j, -6.46262210e-01 -0.00000000e+00j, 3.21082397e-01 +0.00000000e+00j], [ 5.11335982e-01 +0.00000000e+00j, 5.74875531e-01 +0.00000000e+00j, 5.74875531e-01 -0.00000000e+00j, 6.00826858e-01 +0.00000000e+00j, 6.00826858e-01 -0.00000000e+00j, 3.96191087e-01 -2.86601017e-01j, 3.96191087e-01 +2.86601017e-01j, -1.11778945e-01 +0.00000000e+00j], [ 3.67333773e-01 +0.00000000e+00j, 2.17622233e-01 +2.64832461e-01j, 2.17622233e-01 -2.64832461e-01j, 5.23950814e-02 +3.20941855e-01j, 5.23950814e-02 -3.20941855e-01j, 9.79467401e-02 +4.26775994e-01j, 9.79467401e-02 -4.26775994e-01j, 3.25049690e-01 +0.00000000e+00j], [ 3.01189122e-01 +0.00000000e+00j, -1.99915129e-03 +3.51819621e-01j, -1.99915129e-03 -3.51819621e-01j, -3.31543315e-01 +4.14230032e-02j, -3.31543315e-01 -4.14230032e-02j, -2.40393098e-01 -1.72385172e-01j, -2.40393098e-01 +1.72385172e-01j, -5.55853156e-01 +0.00000000e+00j], [ 2.43449050e-01 +0.00000000e+00j, -2.39280748e-01 +1.81665666e-01j, -2.39280748e-01 -1.81665666e-01j, 7.36841917e-02 -2.29964404e-01j, 7.36841917e-02 +2.29964404e-01j, 1.64067922e-01 -9.40947682e-02j, 1.64067922e-01 +9.40947682e-02j, 5.32558922e-01 +0.00000000e+00j], [ 1.82948476e-01 +0.00000000e+00j, -1.73636742e-01 -5.63298529e-02j, -1.73636742e-01 +5.63298529e-02j, 1.02155185e-01 +6.19835455e-02j, 1.02155185e-01 -6.19835455e-02j, -5.20559632e-02 +1.19076414e-01j, -5.20559632e-02 -1.19076414e-01j, -3.31196888e-01 +0.00000000e+00j], [ 1.43655139e-01 +0.00000000e+00j, -8.17472085e-02 -1.40491017e-01j, -8.17472085e-02 +1.40491017e-01j, -3.15727963e-03 
+7.94710216e-02j, -3.15727963e-03 -7.94710216e-02j, 4.00006513e-03 -8.97730556e-02j, 4.00006513e-03 +8.97730556e-02j, 2.50876482e-01 +0.00000000e+00j], [ 9.91225725e-02 +0.00000000e+00j, 3.30357162e-02 -8.93890809e-02j, 3.30357162e-02 +8.93890809e-02j, -8.33994799e-02 +1.92423688e-03j, -8.33994799e-02 -1.92423688e-03j, 4.33406761e-02 +3.72302707e-02j, 4.33406761e-02 -3.72302707e-02j, -1.16327714e-01 +0.00000000e+00j]])) Since eig(E) "Return[s] all solutions (lamda, x) to the equation Ax = lamda x. The first element of the return tuple contains all the eigenvalues. The second element of the return tuple contains the eigenvectors in the columns (x[:,i] is the ith eigenvector)." I can't interpret the above. If the first tuple has all the Eigenvalues, how do I extract the principal Eigenvector from the rest? When I did this manually a couple of years ago, I used Octave to calculate the principal Eigenvector and the answer was easy for me to see. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From jeremit0 at gmail.com Tue Feb 27 16:02:31 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Tue, 27 Feb 2007 16:02:31 -0500 Subject: [SciPy-user] no linalg module and failed check_integer test In-Reply-To: <45E498E5.4040601@gmail.com> References: <3db594f70702271231s2b0f3fd3l79f1342b11d408d2@mail.gmail.com> <45E498E5.4040601@gmail.com> Message-ID: <3db594f70702271302t3ebc0785maa65a1211c00fd22@mail.gmail.com> On 2/27/07, Robert Kern wrote: > > I didn't notice any errors during the installation. I know this isn't > > a lot of information to go on, but does anyone know what the problem > > is? > > Don't bother with pkgload. Just import scipy.linalg. I tried that: Python 2.5 (r25:51918, Sep 19 2006, 08:49:13) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy
>>> import scipy.linalg
Traceback (most recent call last):
  File "", line 1, in
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in
    from basic import *
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in
    from lapack import get_lapack_funcs
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in
    from scipy.linalg import flapack
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/flapack.so, 2): Symbol not found: ___dso_handle
  Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/flapack.so
  Expected in: dynamic lookup
>>>

Sorry. I don't understand what all of those mean or I would try something.

> > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so,
> > 2): Symbol not found: ___dso_handle
> > Referenced from:
> > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so
> > Expected in: dynamic lookup
>
> No, it's different. Exactly what versions of OS X (version number and Intel or
> PPC), gcc, gfortran (also where you got it from), Xcode do you have installed?
> You might need to install the latest version of cctools from here (AFAIK, the
> best place to get it):
>
> ftp://gcc.gnu.org/pub/gcc/infrastructure/cctools-590.36.dmg
>
> That said, I've never installed it on my MacBook, and I've never had this problem.

I have the latest version of Xcode installed. I am running on a MacBook Pro
with the latest OS X version. I am running gcc version 4.0.1 and gfortran
version 4.3.0.
I got gfortran by following the instructions on the install scipy page for
Mac OS X; it links to this package:
http://prdownloads.sourceforge.net/hpc/gfortran-intel-bin.tar.gz?download

Thanks,
Jeremy

From bryanv at enthought.com  Tue Feb 27 16:11:13 2007
From: bryanv at enthought.com (Bryan Van de Ven)
Date: Tue, 27 Feb 2007 15:11:13 -0600
Subject: [SciPy-user] Proper Use of NumPy's eig()
In-Reply-To: 
References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca>
	<03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca>
Message-ID: <45E49E71.6040605@enthought.com>

If by principal eigenvector you mean the eigenvector corresponding to the
largest-magnitude eigenvalue, then you can try something like:

    a = array([[1,0],[0,-3]])
    evals, evects = eig(a)
    peig = evects[where(abs(evals)==max(abs(evals)))]

Rich Shepard wrote:
> On Tue, 27 Feb 2007, David Warde-Farley wrote:
>
>> If you're really keen on having short names for things you could use
>>     from scipy.linalg import eig
>> or
>>     import scipy.linalg as L
>> in the latter case you'd use L.eig(). I personally don't like cluttering up
>> the namespace too much but I think that's my own obsessive streak. :)
>
> David,
>
> I've no preference one way or the other.
>
> However, I don't see the answer I expected. My expectation is that all
> values would be in the range [0.00-1.00], but they're not.
> > When I print eigE I see: > > (array([ 8.88174744e+00+0.j , 3.54286503e-01+2.48721395j, > 3.54286503e-01-2.48721395j, -3.11162331e-01+1.00980412j, > -3.11162331e-01-1.00980412j, -2.79755841e-01+0.46954619j, > -2.79755841e-01-0.46954619j, -4.08484096e-01+0.j ]), > array([[ 6.24249034e-01 +0.00000000e+00j, > 4.46875199e-01 -2.63555328e-01j, > 4.46875199e-01 +2.63555328e-01j, > -1.42390973e-01 -5.61667808e-01j, > -1.42390973e-01 +5.61667808e-01j, > -6.46262210e-01 +0.00000000e+00j, > -6.46262210e-01 -0.00000000e+00j, > 3.21082397e-01 +0.00000000e+00j], > [ 5.11335982e-01 +0.00000000e+00j, > 5.74875531e-01 +0.00000000e+00j, > 5.74875531e-01 -0.00000000e+00j, > 6.00826858e-01 +0.00000000e+00j, > 6.00826858e-01 -0.00000000e+00j, > 3.96191087e-01 -2.86601017e-01j, > 3.96191087e-01 +2.86601017e-01j, > -1.11778945e-01 +0.00000000e+00j], > [ 3.67333773e-01 +0.00000000e+00j, > 2.17622233e-01 +2.64832461e-01j, > 2.17622233e-01 -2.64832461e-01j, > 5.23950814e-02 +3.20941855e-01j, > 5.23950814e-02 -3.20941855e-01j, > 9.79467401e-02 +4.26775994e-01j, > 9.79467401e-02 -4.26775994e-01j, > 3.25049690e-01 +0.00000000e+00j], > [ 3.01189122e-01 +0.00000000e+00j, > -1.99915129e-03 +3.51819621e-01j, > -1.99915129e-03 -3.51819621e-01j, > -3.31543315e-01 +4.14230032e-02j, > -3.31543315e-01 -4.14230032e-02j, > -2.40393098e-01 -1.72385172e-01j, > -2.40393098e-01 +1.72385172e-01j, > -5.55853156e-01 +0.00000000e+00j], > [ 2.43449050e-01 +0.00000000e+00j, > -2.39280748e-01 +1.81665666e-01j, > -2.39280748e-01 -1.81665666e-01j, > 7.36841917e-02 -2.29964404e-01j, > 7.36841917e-02 +2.29964404e-01j, > 1.64067922e-01 -9.40947682e-02j, > 1.64067922e-01 +9.40947682e-02j, > 5.32558922e-01 +0.00000000e+00j], > [ 1.82948476e-01 +0.00000000e+00j, > -1.73636742e-01 -5.63298529e-02j, > -1.73636742e-01 +5.63298529e-02j, > 1.02155185e-01 +6.19835455e-02j, > 1.02155185e-01 -6.19835455e-02j, > -5.20559632e-02 +1.19076414e-01j, > -5.20559632e-02 -1.19076414e-01j, > -3.31196888e-01 +0.00000000e+00j], > [ 
1.43655139e-01 +0.00000000e+00j, > -8.17472085e-02 -1.40491017e-01j, > -8.17472085e-02 +1.40491017e-01j, > -3.15727963e-03 +7.94710216e-02j, > -3.15727963e-03 -7.94710216e-02j, > 4.00006513e-03 -8.97730556e-02j, > 4.00006513e-03 +8.97730556e-02j, > 2.50876482e-01 +0.00000000e+00j], > [ 9.91225725e-02 +0.00000000e+00j, > 3.30357162e-02 -8.93890809e-02j, > 3.30357162e-02 +8.93890809e-02j, > -8.33994799e-02 +1.92423688e-03j, > -8.33994799e-02 -1.92423688e-03j, > 4.33406761e-02 +3.72302707e-02j, > 4.33406761e-02 -3.72302707e-02j, > -1.16327714e-01 +0.00000000e+00j]])) > > Since eig(E) "Return[s] all solutions (lamda, x) to the equation Ax = > lamda x. The first element of the return tuple contains all the eigenvalues. > The second element of the return tuple contains the eigenvectors in the > columns (x[:,i] is the ith eigenvector)." > > I can't interpret the above. If the first tuple has all the Eigenvalues, > how do I extract the principal Eigenvector from the rest? When I did this > manually a couple of years ago, I used Octave to calculate the principal > Eigenvector and the answer was easy for me to see. > > Rich > From robert.kern at gmail.com Tue Feb 27 16:11:21 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Feb 2007 15:11:21 -0600 Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> Message-ID: <45E49E79.6080905@gmail.com> Rich Shepard wrote: > However, I don't see the answer I expected. My expectation is that all > values would be in the range [0.00-1.00], but they're not. > > When I print eigE I see: > > (array([ 8.88174744e+00+0.j , 3.54286503e-01+2.48721395j, > 3.54286503e-01-2.48721395j, -3.11162331e-01+1.00980412j, > -3.11162331e-01-1.00980412j, -2.79755841e-01+0.46954619j, > -2.79755841e-01-0.46954619j, -4.08484096e-01+0.j ]), > array([[ 6.24249034e-01 +0.00000000e+00j, ... 
Without knowing your input, I can't see anything particularly wrong. Unless
E is real-symmetric (or complex-Hermitian), you are likely to end up with
complex eigenvalues.

> Since eig(E) "Return[s] all solutions (lamda, x) to the equation Ax =
> lamda x. The first element of the return tuple contains all the eigenvalues.
> The second element of the return tuple contains the eigenvectors in the
> columns (x[:,i] is the ith eigenvector)."
>
> I can't interpret the above. If the first tuple has all the Eigenvalues,
> how do I extract the principal Eigenvector from the rest? When I did this
> manually a couple of years ago, I used Octave to calculate the principal
> Eigenvector and the answer was easy for me to see.

I don't think that the notion of a principal eigenvector is well-defined if
the matrix is not symmetric. But if you do have a symmetric matrix:

    import numpy as np
    from scipy import linalg

    eigvals, eigvecs = linalg.eig(E)
    i = np.real_if_close(eigvals).argmax()
    principal_eigvec = eigvecs[:, i]

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Tue Feb 27 16:14:15 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 27 Feb 2007 15:14:15 -0600
Subject: [SciPy-user] no linalg module and failed check_integer test
In-Reply-To: <3db594f70702271302t3ebc0785maa65a1211c00fd22@mail.gmail.com>
References: <3db594f70702271231s2b0f3fd3l79f1342b11d408d2@mail.gmail.com>
	<45E498E5.4040601@gmail.com>
	<3db594f70702271302t3ebc0785maa65a1211c00fd22@mail.gmail.com>
Message-ID: <45E49F27.1090507@gmail.com>

Jeremy Conlin wrote:
> On 2/27/07, Robert Kern wrote:
>>> I didn't notice any errors during the installation. I know this isn't
>>> a lot of information to go on, but does anyone know what the problem
>>> is?
>> Don't bother with pkgload. Just import scipy.linalg.
> > I tried that: > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/flapack.so, > 2): Symbol not found: ___dso_handle > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/flapack.so > Expected in: dynamic lookup > > Sorry. I don't understand what all of those mean or I would try something. Then I was wrong: the next error is related. The fact that pkgload() hid this message is why I suggest not using it. >>> ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, >>> 2): Symbol not found: ___dso_handle >>> Referenced from: >>> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so >>> Expected in: dynamic lookup >> No, it's different. Exactly what versions of OS X (version number and Intel or >> PPC), gcc, gfortran (also where you got it from), Xcode do you have installed? >> You might need to install the latest version of cctools from here (AFAIK, the >> best place to get it): >> >> ftp://gcc.gnu.org/pub/gcc/infrastructure/cctools-590.36.dmg >> >> That said, I've never installed it on my MacBook, and I've never had this problem. >> > I have the latest version of Xcode installed. I am running on a > MacBook Pro with the latest OS X version. I am running gcc version > 4.0.1 and gfortran version 4.3.0. I got gfortran by following the > instructions on the install scipy page for Mac OS X; it links to this > package: > http://prdownloads.sourceforge.net/hpc/gfortran-intel-bin.tar.gz?download When did you install gfortran? Gaurav Khanna has the unfortunate habit of uploading new builds without changing the filename. Try installing the cctools package I gave and rebuilding scipy. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jeremit0 at gmail.com Tue Feb 27 16:17:49 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Tue, 27 Feb 2007 16:17:49 -0500 Subject: [SciPy-user] no linalg module and failed check_integer test In-Reply-To: <45E49F27.1090507@gmail.com> References: <3db594f70702271231s2b0f3fd3l79f1342b11d408d2@mail.gmail.com> <45E498E5.4040601@gmail.com> <3db594f70702271302t3ebc0785maa65a1211c00fd22@mail.gmail.com> <45E49F27.1090507@gmail.com> Message-ID: <3db594f70702271317t4b8de4ffvb4b6330410c8bb1c@mail.gmail.com> On 2/27/07, Robert Kern wrote: > Jeremy Conlin wrote: > > On 2/27/07, Robert Kern wrote: > >>> I didn't notice any errors during the installation. I know this isn't > >>> a lot of information to go on, but does anyone know what the problem > >>> is? > >> Don't bother with pkgload. Just import scipy.linalg. > > > > I tried that: > > > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/flapack.so, > > 2): Symbol not found: ___dso_handle > > Referenced from: > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/flapack.so > > Expected in: dynamic lookup > > > > Sorry. I don't understand what all of those mean or I would try something. > > Then I was wrong: the next error is related. The fact that pkgload() hid this > message is why I suggest not using it. > > >>> ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, > >>> 2): Symbol not found: ___dso_handle > >>> Referenced from: > >>> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so > >>> Expected in: dynamic lookup > >> No, it's different. 
Exactly what versions of OS X (version number and Intel or > >> PPC), gcc, gfortran (also where you got it from), Xcode do you have installed? > >> You might need to install the latest version of cctools from here (AFAIK, the > >> best place to get it): > >> > >> ftp://gcc.gnu.org/pub/gcc/infrastructure/cctools-590.36.dmg > >> > >> That said, I've never installed it on my MacBook, and I've never had this problem. > >> > > I have the latest version of Xcode installed. I am running on a > > MacBook Pro with the latest OS X version. I am running gcc version > > 4.0.1 and gfortran version 4.3.0. I got gfortran by following the > > instructions on the install scipy page for Mac OS X; it links to this > > package: > > http://prdownloads.sourceforge.net/hpc/gfortran-intel-bin.tar.gz?download > > When did you install gfortran? Gaurav Khanna has the unfortunate habit of > uploading new builds without changing the filename. I installed gfortran about a week or two ago. > > Try installing the cctools package I gave and rebuilding scipy. I did just that and it fixed my problem with loading linalg! Thanks. However, scipy still fails on some tests, but it did get further this time. Can I trust the results from scipy if these tests fail? I have copied the failed portions below. 
Thanks again, Jeremy ====================================================================== FAIL: check_expon (scipy.stats.tests.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/tests/test_morestats.py", line 57, in check_expon assert_array_less(A, crit[-2:]) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 235, in assert_array_less header='Arrays are not less-ordered') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not less-ordered (mismatch 100.0%) x: array(1.9823844122912462) y: array([ 1.587, 1.934]) ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 6.3815410082967491e-37j DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot 
assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 6.4704707719591164e-37j DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1620 tests in 4.936s FAILED (failures=3) From rshepard at appl-ecosys.com Tue Feb 27 16:44:56 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 13:44:56 -0800 (PST) Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: <45E49E71.6040605@enthought.com> References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> <45E49E71.6040605@enthought.com> Message-ID: On Tue, 27 Feb 2007, Bryan Van de Ven wrote: > If by principal eigenvector you mean the eigenvector corresponding to the > largest-magnitude eigenvalue, then you can try something like: > > a=array([[1,0],[0,-3]]) > evals, evects = eig(a) > peig = evects[where(abs(evals)==max(abs(evals)))] Thank you, Bryan. That is just what I meant. The numbers are not coming out in a way that makes sense, so I assume that my input data -- randomly generated as a test case -- is incorrect. Either that, or my algorithm to create the symmetrical matrix is flawed. Time to take a very close look from the start. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. 
| Accelerator(TM)
Voice: 503-667-4517                     Fax: 503-667-8863

From as8ca at virginia.edu  Tue Feb 27 16:46:29 2007
From: as8ca at virginia.edu (Alok Singhal)
Date: Tue, 27 Feb 2007 16:46:29 -0500
Subject: [SciPy-user] Using SciPy/NumPy optimization
In-Reply-To: <45E493D7.2070703@gmail.com>
References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu>
	<45E493D7.2070703@gmail.com>
Message-ID: <20070227214629.GA6215@virginia.edu>

Hi,

Sorry about the long post. I did not ask the original question, but I tried
the solution posted and could not get it to work.

Original question: given a set of data [[x1, y1], ..., [xn, yn]], how to fit

    y = a*(1 + b*c*x) ** (-1.0/b)

to it, subject to the constraints:

    0 < b <= 5
    -1 <= c <= 1
    a ~ >= max(x1, ... xn)

On 27/02/07: 14:25, Robert Kern wrote:
> There are a number of constrained optimizers in scipy.optimize .
> scipy.optimize.fmin_tnc seems most appropriate for simple bounds like you have.
> In order to get a function to minimize that depends on data, I usually like to
> use a class:
>
> import numpy as np
> from scipy.optimize import fmin_tnc
>
> class LossFunction(object):
>     def __init__(self, x, y):
>         self.x = x
>         self.y = y
>
>     def __call__(self, abc):
>         """ A function suitable for passing to the fmin() minimizers.
>         """
>         a, b, c = abc
>         y = a*(1.0 + b*c*self.x) ** (-1.0/b)
>         dy = self.y - y
>         return dy*dy
>
> x = np.array([...])
> y = np.array([...])
> lf = LossFunction(x, y)
> abc0 = np.array([x.max(), 2.5, 0.0])  # or whatever
> retcode, nfeval, abc_optimal = fmin_tnc(lf, abc0,
>     bounds=[(None, None), (0., 5.), (-1., 1.)])

I used optimize.leastsq to do the fitting, and it worked well for me (see
http://www.scipy.org/Cookbook/FittingData). But I was trying the above
method, and unfortunately it doesn't work as presented.
When I define the variables x and y by:

    from scipy.optimize import fmin_tnc
    from scipy import rand
    from numpy import mgrid, log

    def f(abc, x):
        return abc[0]*(1.0+abc[1]*abc[2]*x)**(-1.0/abc[1])

    a = 15.0
    b = 2.5
    c = 0.3
    abc = (a, b, c)
    num_points = 151
    x = mgrid[1:10:num_points*1j]
    y = f(abc, x) + rand(num_points) - 0.5

    abc0 = [x.max(), 2.5, 0.0]
    lf = LossFunction(x, y)
    retcode, nfeval, abc_optimal = fmin_tnc(lf, abc0,
        bounds=[(None, None), (0., 5.), (-1., 1.)])

The call to fmin_tnc gives me an error:

  File "/usr/lib/python2.4/site-packages/scipy/optimize/tnc.py", line 191, in func_and_grad
    f, g = func(x, *args)
ValueError: too many values to unpack

Looking at the documentation and the source file, the first parameter to
fmin_tnc should be a function that returns the function value and the
gradient vector (with respect to the parameters being estimated). Also, it
seems that I can tell fmin_tnc to estimate the gradient itself, if I
specify approx_grad as True:

    retcode, nfeval, abc_optimal = fmin_tnc(lf, abc0,
        bounds=[(None, None), (0., 5.), (-1., 1.)], approx_grad=True)

But this gives me an error saying:

  File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 576, in approx_fprime
    grad[k] = (apply(f,(xk+ei,)+args) - f0)/epsilon
ValueError: setting an array element with a sequence.

Then I tried redefining the __call__ in LossFunction to return the gradient
as well:

    class LossFunction(object):
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __call__(self, abc):
            a, b, c = abc
            y = a*(1.0 + b*c*self.x) ** (-1.0/b)
            dy = self.y - y
            d = [0, 0, 0]
            d[0] = (1.0 + b*c*self.x) ** (-1.0/b)
            d[1] = y * (log(a*(1.0+b*c*self.x))/b**2
                        - c*self.x/(b*(1.0+b*c*self.x)))
            d[2] = -a/b*(1.0 + b*c*self.x) ** (-1.0/b) * b*self.x
            return dy*dy, d

When I call this function instead, I get another error:

  File "/usr/lib/python2.4/site-packages/scipy/optimize/tnc.py", line 221, in fmin_tnc
    fmin, ftol, rescale)
ValueError: Bad return value from minimized function.
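[For the record, all three tracebacks point at the objective's return value: fmin_tnc wants either a plain scalar f with approx_grad=True, or a (scalar f, gradient list) pair, while the LossFunction quoted above returns the residual *array* dy*dy. A minimal working sketch of the same kind of bounded fit, using the modern scipy.optimize.minimize(method='TNC') wrapper; the synthetic data, names, and starting point here are made up for illustration, not Alok's actual code.]

```python
import numpy as np
from scipy.optimize import minimize

# The model being fitted (same functional form as in the thread).
def model(abc, x):
    a, b, c = abc
    return a * (1.0 + b * c * x) ** (-1.0 / b)

# Synthetic data in the spirit of the example above (made up).
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 151)
y = model((15.0, 2.5, 0.3), x) + rng.uniform(-0.5, 0.5, x.size)

def loss(abc):
    a, b, c = abc
    base = 1.0 + b * c * x
    if np.any(base <= 0.0):
        # fractional power of a negative base is undefined; penalize
        return 1e30
    dy = y - a * base ** (-1.0 / b)
    return float(np.sum(dy * dy))   # must be a SCALAR, not the residual array

abc0 = [y.max(), 2.5, 0.1]
# b's lower bound is kept strictly positive because the model divides by b.
res = minimize(loss, abc0, method='TNC',
               bounds=[(None, None), (1e-6, 5.0), (-1.0, 1.0)])
print(res.x, loss(res.x))
```

The scalar reduction (np.sum of the squared residuals) is the key change from the snippet quoted above; leastsq-style routines want the residual vector, but TNC and the other fmin-style minimizers want a single number.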
At this point, I gave up, but I would like to understand what I am doing wrong. To me, it seems like fmin_tnc will find the values of parameters (a, b, c in this case) that minimize a given function that depends *only* on a, b, c. I don't understand how to make it work so as to fit existing data. I am sure I am missing something, but I don't know what that is. Thanks for all your help, Alok -- Alok Singhal * * Graduate Student, dept. of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From bnuttall at uky.edu Tue Feb 27 17:16:23 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Tue, 27 Feb 2007 17:16:23 -0500 Subject: [SciPy-user] Using SciPy/NumPy optimization In-Reply-To: <45E493D7.2070703@gmail.com> References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> <45E493D7.2070703@gmail.com> Message-ID: <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> Robert, Thanks. At 03:25 PM 2/27/2007, you wrote: >Presumably, you mean > > y = a*(1 + b*c*x) ** (-1.0/b) > >to correct a typo and use Python notation. Yes, you are correct. > import numpy as np > from scipy.optimize import fmin_tnc > > class LossFunction(object): > def __init__(self, x, y): > self.x = x > self.y = y > > def __call__(self, abc): > """ A function suitable for passing to the fmin() minimizers. > """ > a, b, c = abc > y = a*(1.0 + b*c*self.x) ** (-1.0/b) > dy = self.y - y > return dy*dy > > x = np.array([...]) > y = np.array([...]) > lf = LossFunction(x, y) > abc0 = np.array([x.max(), 2.5, 0.0]) # or whatever > retcode, nfeval, abc_optimal = fmin_tnc(lf, abc0, > bounds=[(None, None), (0., 5.), (-1., 1.)]) OK. I'm on my way home to try it out. Brandon C. 
Nuttall BNUTTALL at UKY.EDU
Kentucky Geological Survey              (859) 257-5500
University of Kentucky                  (859) 257-1147 (fax)
228 Mining & Mineral Resources Bldg
http://www.uky.edu/KGS/home.htm
Lexington, Kentucky 40506-0107

From rshepard at appl-ecosys.com  Tue Feb 27 17:21:25 2007
From: rshepard at appl-ecosys.com (Rich Shepard)
Date: Tue, 27 Feb 2007 14:21:25 -0800 (PST)
Subject: [SciPy-user] Proper Use of NumPy's eig()
In-Reply-To: <45E49E79.6080905@gmail.com>
References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca>
	<03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca>
	<45E49E79.6080905@gmail.com>
Message-ID: 

On Tue, 27 Feb 2007, Robert Kern wrote:

> Without knowing your input, I can't see anything particularly wrong.
> Unless E is real-symmetric (or complex-Hermitian), you are likely to
> end up with complex eigenvalues.

Robert,

   It's been decades since I last worked a lot with linear algebra. I
suspect that I've mis-handled the calculations between the raw input and
the final symmetric matrix. It is supposed to be symmetrical. And the
values of the principal eigenvector should add to 1.00 because they should
be the relative weights of each input factor.

> I don't think that the notion of a principal eigenvector is well-defined if the
> matrix is not symmetric. But if you do have a symmetric matrix:
>
> import numpy as np
> from scipy import linalg
>
> eigvals, eigvecs = linalg.eig(E)
> i = np.real_if_close(eigvals).argmax()
> principal_eigvec = eigvecs[:, i]

This produces:

[ 6.24249034e-01+0.j  5.11335982e-01+0.j  3.67333773e-01+0.j
  3.01189122e-01+0.j  2.43449050e-01+0.j  1.82948476e-01+0.j
  1.43655139e-01+0.j  9.91225725e-02+0.j]

... and these total approximately 2.472.

   So, it's time to look at the input (which is almost certainly OK) and
more closely at how those input values are extracted, averaged, and reduced
to a symmetrical matrix.

Thanks,

Rich

-- 
Richard B. Shepard, Ph.D.               |    The Environmental Permitting
Applied Ecosystem Services, Inc.
| Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Tue Feb 27 18:41:23 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 15:41:23 -0800 (PST) Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> <45E49E79.6080905@gmail.com> Message-ID: On Tue, 27 Feb 2007, Rich Shepard wrote: > This produces: > > [ 6.24249034e-01+0.j 5.11335982e-01+0.j 3.67333773e-01+0.j > 3.01189122e-01+0.j 2.43449050e-01+0.j 1.82948476e-01+0.j > 1.43655139e-01+0.j 9.91225725e-02+0.j] > > ... and these total approximately 2.472. > > So, it's time to look at the input (which is almost certainly OK) and > more closely at how those input values are extracted, averaged, and > reduced to a symmetrical matrix. All: Since it's been almost 2 years since my book was published, and even longer since I worked the example for the book, I forgot the last -- and most important -- step: to normalize those values returned as the principal eigenvector. Yes, they are correctly computed, I just need to take the process one more step. My sincere thanks to all of you. I now have a much better understanding how to apply NumPy and SciPy to address this need. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From rblove at airmail.net Tue Feb 27 22:10:29 2007 From: rblove at airmail.net (Robert Love) Date: Tue, 27 Feb 2007 21:10:29 -0600 Subject: [SciPy-user] Any Books on SciPy? Message-ID: Are there any good, up to date books that people recommend for numerical work with Python? I see the book Python Scripting for Computational Science Hans Petter Langtangen Does anyone have opinions on this? Is it current? Are there better books? All pointers appreciated. 
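[Picking the eig() thread back up: the forgotten normalization Rich describes is a one-liner. A sketch using the real parts of the principal eigenvector he printed earlier, illustrative rather than his actual code.]

```python
import numpy as np

# Real parts of the principal eigenvector printed earlier in the thread.
v = np.array([0.624249034, 0.511335982, 0.367333773, 0.301189122,
              0.243449050, 0.182948476, 0.143655139, 0.0991225725])

# v.sum() is the "approximately 2.472" total Rich mentions; dividing by
# it rescales the components into relative weights that sum to 1.00.
weights = v / v.sum()
print(weights)
```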
From david.warde.farley at utoronto.ca Tue Feb 27 22:54:34 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Tue, 27 Feb 2007 22:54:34 -0500 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: Message-ID: <1172634874.6867.13.camel@rodimus> On Tue, 2007-02-27 at 21:10 -0600, Robert Love wrote: > Are there any good, up to date books that people recommend for > numerical work with Python? Travis Oliphant's "Guide to NumPy" (eBook) tells you more than you'll probably ever need to know about NumPy specifically, but it's less a recipe book and more a reference manual. I don't know of any, but I'd sure like to see them, since it's a bit of a barrier to entry for new users who see similar books for Matlab, etc. David From jdh2358 at gmail.com Tue Feb 27 22:59:28 2007 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 27 Feb 2007 21:59:28 -0600 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: Message-ID: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> On 2/27/07, Robert Love wrote: > Are there any good, up to date books that people recommend for > numerical work with Python? > > I see the book > > Python Scripting for Computational Science > Hans Petter Langtangen > > Does anyone have opinions on this? Is it current? Are there better > books? I specifically do not recommend this book -- I own it but in my opinion it is outdated and is more a collection of the author's personal idioms than the current common practice in the scientific python community. For numerical work in python most people use * numpy - for array math. The best documentation is Travis' online book http://www.tramy.us. For free, the numarray documentation is excellent and the API is very similar for common use cases - http://www.stsci.edu/resources/software_hardware/numarray/doc * scipy - there is no comprehensive printed documentation that I know of. The online help, site docs, and wiki are your best bet. 
- http://www.scipy.org/Documentation

* ipython - the enhanced python shell which is widely used in the
  scientific computing community for interactive work. Has special modes to
  support numpy, scipy, matplotlib and distributed computing.
  - http://ipython.scipy.org/moin/Documentation

* matplotlib - 2D graphics, charts and the like. ipython has an
  'ipython -pylab' mode which loads matplotlib and numpy for a matlab-like
  environment.
  - http://matplotlib.sourceforge.net/tutorial.html and
    http://matplotlib.sourceforge.net/users_guide_0.87.7.pdf

* Enthought Tool Suite - provides a comprehensive package for scientific
  computing including the above modules and many others for application
  development and more. Provides 3D graphics through the Mayavi2/VTK
  packages and 2D plotting through Chaco, and lots of tools to facilitate
  wx based GUI development.
  - http://code.enthought.com/

There is a lot more, particularly for domain-specific stuff, but these
links are good starting points. Unfortunately, there is no one-stop-shop
for a guide to scientific computing in python - Travis' documentation is
the closest thing we have, but it pretty much just covers numpy, which is
*the* core package. Fernando Perez and I have a very brief and limited
starter guide covering multiple packages (ipython, numpy, matplotlib,
scipy, VTK), but I don't have the PDF handy (Fernando, do you have the
roadshow doc handy?). Eric Jones and Travis (authors of scipy) have some
talk notes at http://www.nanohub.org/resources/?id=99 but these are a bit
out of date.

JDH

From steve at shrogers.com  Tue Feb 27 23:05:58 2007
From: steve at shrogers.com (Steven H. Rogers)
Date: Tue, 27 Feb 2007 21:05:58 -0700
Subject: [SciPy-user] NumPy in Teaching
Message-ID: <45E4FFA6.9010408@shrogers.com>

I'm doing an informal survey on the use of Array Programming Languages for
teaching. If you're using NumPy in this manner I'd like to hear from you.
What subject was/is taught, academic level, results, lessons learned, etc.
Regards, Steve From gary.pajer at gmail.com Tue Feb 27 23:27:02 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Tue, 27 Feb 2007 23:27:02 -0500 Subject: [SciPy-user] Do I have LAPACK or not? Message-ID: <88fe22a0702272027s214476f8r99852705adb1b544@mail.gmail.com> When I compile I get the output below. Has LAPACK been found or not? I'm guessing yes, but I have some doubt ... thanks, gary
-----------------------------------------------------
atlas_info:
  libraries f77blas,cblas,atlas not found in /usr/lib/atlas
  libraries lapack_atlas not found in /usr/lib/atlas
  libraries lapack not found in /usr/lib/sse2
  libraries f77blas,cblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
  numpy.distutils.system_info.atlas_info
/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:903: UserWarning:
*********************************************************************
    Could not find lapack library within the ATLAS installation.
*********************************************************************
  warnings.warn(message)
  FOUND:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib/sse2']
    language = c
    define_macros = [('ATLAS_WITHOUT_LAPACK', None)]
lapack_info:
  FOUND:
    libraries = ['lapack']
    library_dirs = ['/usr/lib']
    language = f77
FOUND:
    libraries = ['f77blas', 'cblas', 'atlas', 'lapack']
    library_dirs = ['/usr/lib/sse2', '/usr/lib']
    language = f77
    define_macros = [('ATLAS_WITHOUT_LAPACK', None), ('ATLAS_INFO', '"\\"3.6.0\\""')]
ATLAS version 3.6.0
From rshepard at appl-ecosys.com Tue Feb 27 23:55:36 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 27 Feb 2007 20:55:36 -0800 (PST) Subject: [SciPy-user] Do I have LAPACK or not? In-Reply-To: <88fe22a0702272027s214476f8r99852705adb1b544@mail.gmail.com> References: <88fe22a0702272027s214476f8r99852705adb1b544@mail.gmail.com> Message-ID: On Tue, 27 Feb 2007, Gary Pajer wrote: > When I compile I get the output below. Has LAPACK been found or not?
> I'm guessing yes, but I have some doubt ... Gary, What is the result of 'locate LAPACK?' On my system it returns /usr/local/scipy/LAPACK/ and all the subdirectories and files therein. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From fperez.net at gmail.com Wed Feb 28 01:19:51 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 27 Feb 2007 23:19:51 -0700 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> References: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> Message-ID: On 2/27/07, John Hunter wrote: > On 2/27/07, Robert Love wrote: > > Are there any good, up to date books that people recommend for > > numerical work with Python? > > > > I see the book > > > > Python Scripting for Computational Science > > Hans Petter Langtangen > > > > Does anyone have opinions on this? Is it current? Are there better > > books? > > I specifically do not recommend this book -- I own it but in my > opinion it is outdated and is more a collection of the author's > personal idioms than the current common practice in the scientific > python community. For numerical work in python most people use I happen to share John's opinion, and I also have a copy of this book. While it's technically correct, well written and fairly comprehensive (probably /too/ much, since it's a bit all over the map), I strongly dislike his approach. Much of the book uses his custom, home-made collection of scripts and tools, which you can only download if you go to a site and type a word from a certain page in the book (a simple 'protection' system). Now you have an unmaintained, unreleased (publicly), set of tools to learn from that don't have any licensing explicitly specified. Oh, and a good chunk of the tools in his distribution (since I have the book, I have the code) use Perl. 
Go figure (there's also a tcl directory thrown in for good measure). One of Python's main strengths for scientific work is precisely the openness and interoperability of the various tools, and we all do our part to help that be the case. The fact that this book follows an approach more or less orthogonal to those ideas makes me very much uninterested in using it. > There is a lot more, particularly for domain specific stiff, but these > links are good starting points. Unfortunately, there is no > one-stop-shop for a guide to scientific computing in python - Travis' > documentation is the closest thing we have but it pretty much just > covers numpy which is *the* core package. Fernando Perez and I have a > very brief and limited started guide covering multiple packages > (ipython, numpy, matplotlib, scipy, VTK) but I don't have the PDF > handy (Fernando, do you have the roadshow doc handy?). Well, you asked for it :) http://amath.colorado.edu/faculty/fperez/tmp/py4science.pdf It's worth stressing, in the strongest possible terms, that this should NOT, in any way, shape or form, be considered anything beyond a pre-pre-alpha, pre-draft of a project for a possible book :) Besides, it's already outdated in several important places (numpy, mayavi, no TVTK,...). After all I said about the Langtangen book, at least it's a real one. Our pdf draft is most certainly not. So if you need a book, with all of its limitations, Langtangen's is currently the only game in town that covers the whole spectrum of python for scientific computing. If John and I ever end up stranded on a desert island for 3 months with great internet access and poor diving gear, we might actually finish ours, but don't hold your breath. Honestly, I think that today your best bet is: 1. Buy Travis' book. It's fantastic, has everything you need to know about numpy, and you'll be supporting numpy itself. 2. Print Perry Greenfield's tutorial (http://new.scipy.org/wikis/topical_software/Tutorial). 
I think he's updating it for numpy now. 3. Have a look at some of the other info in http://new.scipy.org/Documentation, in particular D. Kuhlman's course is very nice. Regards, f From gael.varoquaux at normalesup.org Wed Feb 28 02:29:32 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 28 Feb 2007 08:29:32 +0100 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> Message-ID: <20070228072932.GA3249@clipper.ens.fr> On Tue, Feb 27, 2007 at 11:19:51PM -0700, Fernando Perez wrote: > http://amath.colorado.edu/faculty/fperez/tmp/py4science.pdf Nice ! Really nice ! I am very happy that someone is writing something about all this in a consistent way. I have two questions: what will be the distribution mode (printed matter, internet and free, both) ? This is important because an easily and freely available book really helps spreading the word. Second question: what will be the licence ? The reason I ask this is that I could consider trying to contribute if I can reuse the part I contribute. As you are probably aware (I think you follow the enthought-dev ML) Prabhu and I are currently trying to find time to code a nice "mlab" interface to mayavi2. Once it has made some progress (expect two to three months) it will be much nicer than the current way of using mayavi from python. Of course there is the problem that mayavi2 is not properly distributed currently. That might change as enthought is currently pushing for a modular eggs-based version of their tool suite. Anyway if we can get something out of mayavi's mlab, I could write document how to use it and contribute a modified version of it to your book. Similarly you could use a modified version of my traitsUI tutorial http://www.gael-varoquaux.info/computers/traits_tutorial/index.html if you think this is not out of scope. Thumbs up for such work ! 
Gaël From gael.varoquaux at normalesup.org Wed Feb 28 02:31:56 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 28 Feb 2007 08:31:56 +0100 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <20070228072932.GA3249@clipper.ens.fr> References: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> <20070228072932.GA3249@clipper.ens.fr> Message-ID: <20070228073156.GB3249@clipper.ens.fr> Damn it, I intended to send this e-mail off list ! On Wed, Feb 28, 2007 at 08:29:32AM +0100, Gael Varoquaux wrote: > [...]
> Ga?l From fperez.net at gmail.com Wed Feb 28 03:22:34 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Feb 2007 01:22:34 -0700 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <20070228072932.GA3249@clipper.ens.fr> References: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> <20070228072932.GA3249@clipper.ens.fr> Message-ID: On 2/28/07, Gael Varoquaux wrote: > On Tue, Feb 27, 2007 at 11:19:51PM -0700, Fernando Perez wrote: > > http://amath.colorado.edu/faculty/fperez/tmp/py4science.pdf I'll reply on-list since you sent it on-list :) But feel free to ask off-list anything in particular. > Nice ! Really nice ! I am very happy that someone is writing something > about all this in a consistent way. Thanks, but it's unfortunately a bit of a stillborn effort at this point. > I have two questions: what will be the distribution mode (printed matter, > internet and free, both) ? This is important because an easily and freely > available book really helps spreading the word. Second question: what > will be the licence ? The reason I ask this is that I could consider > trying to contribute if I can reuse the part I contribute. Well, nothing had been set in stone yet, but I'd favor a combination of a printed book by a publisher with some track record in science, hopefully with an agreement that would allow free online redistribution. Such things have been done recently, so perhaps convincing a publisher of this might not be altogether impossible. > As you are probably aware (I think you follow the enthought-dev ML) > Prabhu and I are currently trying to find time to code a nice "mlab" > interface to mayavi2. Once it has made some progress (expect two to three > months) it will be much nicer than the current way of using mayavi from > python. Of course there is the problem that mayavi2 is not properly > distributed currently. That might change as enthought is currently > pushing for a modular eggs-based version of their tool suite. 
I know, I've been quietly following that discussion with the utmost interest. I'm hoping it will be ready by Saturday, b/c I'm having a mini-sprint at my house with some students to port some old CFD code to the new mayavi. So you have 3 days to finish it :) > Anyway if we can get something out of mayavi's mlab, I could write > document how to use it and contribute a modified version of it to your > book. > > Similarly you could use a modified version of my traitsUI tutorial > http://www.gael-varoquaux.info/computers/traits_tutorial/index.html if > you think this is not out of scope. > > Thumbs up for such work ! Thanks! But as I said, while I'm quite open on the possibilities for licensing and distribution, the grim reality is that right now, I simply don't have /any/ time to put into this. All of my 'free' (funny, I know) time right now has to go into pushing the new code effort for distributed and parallel computing in ipython. We've made good progress recently on that front and some of the things that were holding back integration of the trunk into the dev branch, so that we might finally have a single codebase. So while it would be very nice for this to be finished, realistically I don't see that coming from me in the near future. Regards, f From nwagner at iam.uni-stuttgart.de Wed Feb 28 03:37:24 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 28 Feb 2007 09:37:24 +0100 Subject: [SciPy-user] Proper Use of NumPy's eig() In-Reply-To: <45E49E79.6080905@gmail.com> References: <52878754-DDDD-4AE2-9601-880BA43DD446@utoronto.ca> <03F2185E-C21E-414D-AF24-BEBF1922BE8A@utoronto.ca> <45E49E79.6080905@gmail.com> Message-ID: <45E53F44.6080308@iam.uni-stuttgart.de> Robert Kern wrote: > Rich Shepard wrote: > > >> However, I don't see the answer I expected. My expectation is that all >> values would be in the range [0.00-1.00], but they're not. 
>> >> When I print eigE I see: >> >> (array([ 8.88174744e+00+0.j , 3.54286503e-01+2.48721395j, >> 3.54286503e-01-2.48721395j, -3.11162331e-01+1.00980412j, >> -3.11162331e-01-1.00980412j, -2.79755841e-01+0.46954619j, >> -2.79755841e-01-0.46954619j, -4.08484096e-01+0.j ]), >> array([[ 6.24249034e-01 +0.00000000e+00j, >> > > ... > > Without knowing your input, I can't see anything particularly wrong. Unless if E > were real-symmetric (or complex-Hermitian), you are likely to end up with > complex eigenvalues. > > >> Since eig(E) "Return[s] all solutions (lamda, x) to the equation Ax = >> lamda x. The first element of the return tuple contains all the eigenvalues. >> The second element of the return tuple contains the eigenvectors in the >> columns (x[:,i] is the ith eigenvector)." >> >> I can't interpret the above. If the first tuple has all the Eigenvalues, >> how do I extract the principal Eigenvector from the rest? When I did this >> manually a couple of years ago, I used Octave to calculate the principal >> Eigenvector and the answer was easy for me to see. >> > > I don't think that the notion of a principal eigenvector is well-defined if the > matrix is not symmetric. But if you do have a symmetric matrix: > > > import numpy as np > from scipy import linalg > > eigvals, eigvecs = linalg.eig(E) > i = np.real_if_close(eigvals).argmax() > principal_eigvec = eigvecs[:, i] > > Hi Rich, here is another approach to compute the principal eigenpair (absil.py) Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: absil.py Type: text/x-python Size: 1019 bytes Desc: not available URL: From steve at shrogers.com Wed Feb 28 06:29:38 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Wed, 28 Feb 2007 04:29:38 -0700 Subject: [SciPy-user] Any Books on SciPy? 
In-Reply-To: References: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> Message-ID: <45E567A2.3050406@shrogers.com> Fernando Perez wrote: > On 2/27/07, John Hunter wrote: >> On 2/27/07, Robert Love wrote: >>> Are there any good, up to date books that people recommend for >>> numerical work with Python? >>> >>> I see the book >>> >>> Python Scripting for Computational Science >>> Hans Petter Langtangen >>> >>> Does anyone have opinions on this? Is it current? Are there better >>> books? >> I specifically do not recommend this book -- I own it but in my >> opinion it is outdated and is more a collection of the author's >> personal idioms than the current common practice in the scientific >> python community. For numerical work in python most people use > > I happen to share John's opinion, and I also have a copy of this book. > While it's technically correct, well written and fairly comprehensive > (probably /too/ much, since it's a bit all over the map), I strongly > dislike his approach. Much of the book uses his custom, home-made > collection of scripts and tools, which you can only download if you go > to a site and type a word from a certain page in the book (a simple > 'protection' system). > Concur with John and Fernando. I found a copy in a local bookstore and bought it because it _is_ the only book covering the subject and I wanted to show that there is demand for such material. # Steve From gary.pajer at gmail.com Wed Feb 28 07:11:36 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Wed, 28 Feb 2007 07:11:36 -0500 Subject: [SciPy-user] Do I have LAPACK or not? In-Reply-To: References: <88fe22a0702272027s214476f8r99852705adb1b544@mail.gmail.com> Message-ID: <88fe22a0702280411i335688a7ldd0f8c9f35f1642f@mail.gmail.com> On 2/27/07, Rich Shepard wrote: > On Tue, 27 Feb 2007, Gary Pajer wrote: > > > When I compile I get the output below. Has LAPACK been found or not? > > I'm guessing yes, but I have some doubt ... 
> > Gary, > > What is the result of 'locate LAPACK?' On my system it returns > /usr/local/scipy/LAPACK/ and all the subdirectories and files therein. > > Rich I get /svn/scipy/trunk/Lib/sandbox/arpack/ARPACK/LAPACK but locate lapack gets /usr/lib/sse2/liblapack_atlas.so.3.0 /usr/lib/sse2/liblapack_atlas.so.3 /usr/lib/sse2/liblapack_atlas.a /usr/lib/sse2/liblapack_atlas.so and /usr/lib/python2.4/site-packages/scipy/lib/lapack and /usr/lib/python2.4/site-packages/scipy/linalg/flapack.so /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so and /usr/lib/atlas/sse2/liblapack.so.3.0 /usr/lib/atlas/sse2/liblapack.so.3 /usr/lib/atlas/sse2/liblapack.a /usr/lib/atlas/sse2/liblapack.s and /usr/lib/liblapack.so.3.0 /usr/lib/liblapack-3.so /usr/lib/liblapack-3.a /usr/lib/liblapack.so.3 /usr/lib/liblapack.a /usr/lib/liblapack.so similar for atlas and blas I did not build ATLAS myself, I apt-got it. (Kubuntu 6.10, Edgy) (python 2.4.4, both numpy and scipy from recent SVNs) (I never did understand the difference between ATLAS, LAPACK, and BLAS and whether or not (and what) I should build, and why the standard builds of one of the above is said to be incomplete. I just cross my fingers and hit return.) > > -- > Richard B. Shepard, Ph.D. | The Environmental Permitting > Applied Ecosystem Services, Inc. | Accelerator(TM) > Voice: 503-667-4517 Fax: 503-667-8863 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From dd55 at cornell.edu Wed Feb 28 08:21:35 2007 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 28 Feb 2007 08:21:35 -0500 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <45E567A2.3050406@shrogers.com> References: <45E567A2.3050406@shrogers.com> Message-ID: <200702280821.35981.dd55@cornell.edu> On Wednesday 28 February 2007 06:29:38 am Steven H. 
Rogers wrote: > Fernando Perez wrote: > > On 2/27/07, John Hunter wrote: > >> On 2/27/07, Robert Love wrote: > >>> Are there any good, up to date books that people recommend for > >>> numerical work with Python? > >>> > >>> I see the book > >>> > >>> Python Scripting for Computational Science > >>> Hans Petter Langtangen > >>> > >>> Does anyone have opinions on this? Is it current? Are there better > >>> books? > >> > >> I specifically do not recommend this book -- I own it but in my > >> opinion it is outdated and is more a collection of the author's > >> personal idioms than the current common practice in the scientific > >> python community. For numerical work in python most people use > > > > I happen to share John's opinion, and I also have a copy of this book. > > While it's technically correct, well written and fairly comprehensive > > (probably /too/ much, since it's a bit all over the map), I strongly > > dislike his approach. Much of the book uses his custom, home-made > > collection of scripts and tools, which you can only download if you go > > to a site and type a word from a certain page in the book (a simple > > 'protection' system). > > Concur with John and Fernando. I found a copy in a local bookstore and > bought it because it _is_ the only book covering the subject and I > wanted to show that there is demand for such material. I did the same thing, and came to the same conclusion about the book. I haven't seen anyone mention "Numerical Methods in Engineering with Python" by Jaan Kiusalaas. It was published in 2005, and uses numarray, so it is somewhat dated considering all the impressive developments with NumPy and SciPy. I mention it for the sake of completeness. Darren From giorgio.luciano at chimica.unige.it Wed Feb 28 08:29:39 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Wed, 28 Feb 2007 14:29:39 +0100 Subject: [SciPy-user] Any Books on SciPy? 
In-Reply-To: <200702280821.35981.dd55@cornell.edu> References: <45E567A2.3050406@shrogers.com> <200702280821.35981.dd55@cornell.edu> Message-ID: <45E583C3.9020307@chimica.unige.it> I'm setting up a new site about chemometrics; hopefully (fingers crossed) it should be online on Thursday. With other co-authors we are also thinking about writing a "chapter of a book"/article about how to use python/scipy to perform common chemometric tasks. Why don't we all merge our knowledge from different fields and try to have something more "comprehensive"? Probably together we can succeed in seeing it all to completion without rewriting the same things twice. We will also be glad to give the functions to scipy to see them included. If anyone is interested, just drop me a line. Giorgio From gael.varoquaux at normalesup.org Wed Feb 28 09:08:31 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 28 Feb 2007 15:08:31 +0100 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> <20070228072932.GA3249@clipper.ens.fr> Message-ID: <20070228140828.GE3249@clipper.ens.fr> On Wed, Feb 28, 2007 at 01:22:34AM -0700, Fernando Perez wrote: > > As you are probably aware (I think you follow the enthought-dev ML) > > Prabhu and I are currently trying to find time to code a nice "mlab" > > interface to mayavi2. Once it has made some progress (expect two to three > > months) it will be much nicer than the current way of using mayavi from > > python. Of course there is the problem that mayavi2 is not properly > > distributed currently. That might change as enthought is currently > > pushing for a modular eggs-based version of their tool suite. > I know, I've been quietly following that discussion with the utmost > interest. I'm hoping it will be ready by Saturday, b/c I'm having a > mini-sprint at my house with some students to port some old CFD code > to the new mayavi. So you have 3 days to finish it :) Well well.
Currently the API is not stable at all, and I do not want to freeze it (cf. my last mail about this on the enthought-dev ML). I can let this project steal even more sleep from me than my work already does, and move a bit forward to try and establish something that is closer to the API that I am aiming for, but even if I manage to get something out (and when I am too tired I am not terribly productive), I need Prabhu to review it. All I can say is that mayavi.tools.mlab is not currently in its final form and I do not suggest using it, unless you are planning to modify the code you are writing. From the user's point of view, following the API change should not be a huge amount of work. The changes I would like to do to the API are exposed in the mails: https://mail.enthought.com/pipermail/enthought-dev/2007-February/004425.html [[Enthought-dev] Mlab API, and usability] and https://mail.enthought.com/pipermail/enthought-dev/2007-February/004442.html [Enthought-dev] Mlab: contour3d and quiver3d. I am interested in feedback on these propositions, by the way. Gaël From novin01 at gmail.com Wed Feb 28 09:13:27 2007 From: novin01 at gmail.com (Dave) Date: Wed, 28 Feb 2007 14:13:27 +0000 (UTC) Subject: [SciPy-user] Transforming 1-d array to 2-d array References: Message-ID: Rich Shepard appl-ecosys.com> writes: > > On Mon, 26 Feb 2007, Rich Shepard wrote: > > > Now I need a bit more guidance, to complete the symmetrical matrices. > > It occurred to me after I posted this message that I'm going along the > wrong path. I don't need a lower index, just the lower triangular half of > the array with zeros along the principal diagonal and the upper half. Then > I can add them (cell-wise, not matrix addition) and get the filled array I > need. > > Rich > Sorry, have been quite busy.... The double brackets were the cause of your original error, I suspect - hence taking [0] fixed the problem.
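The double-bracket behaviour described here is easy to reproduce. The sketch below is a hypothetical editor's example in modern NumPy (the name `barEco` is borrowed from the thread; the numbers are made up):

```python
import numpy as np

# A nested, one-row sequence -- the shape many database queries return
barEco = [[4.0, 7.0, 2.0]]

arr = np.asarray(barEco)
print(arr.shape)  # -> (1, 3): the extra brackets become a singleton dimension

row = arr[0]                         # taking [0] drops the outer dimension
flat = np.asarray(barEco).squeeze()  # squeeze() drops all singleton dimensions
print(row.shape, flat.shape)         # -> (3,) (3,)
```

Either form yields a plain 1-d array that can then be scattered into the triangular halves of a matrix as shown in the code that follows.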
I'm not sure if your data is in an array but if not you can cast it as an array as follows: barEco = asarray(barEco).squeeze() The .squeeze() method will get rid of any extraneous brackets (singleton dimensions). I'm not too sure what you're now trying to achieve, but hopefully the code below will help you understand array indexing. -Dave

from numpy import asarray, eye, ones, tril, triu, zeros
from numpy.random import rand

N = 5
upperData = (10.0*rand(N*(N-1)/2)).round() #dummy data
lowerData = (10.0*rand(N*(N-1)/2)).round() #dummy data

# Index into the upper half of an NxN array
upper_idx = triu(ones([N,N])-eye(N)).nonzero()
# Index into the lower half of an NxN array
lower_idx = tril(ones([N,N])-eye(N)).nonzero()
# Index into the diagonal elements of an NxN array
diag_idx = eye(N).nonzero()

C = zeros([N,N],dtype=float)
print C
C[upper_idx] = upperData
print C
C[lower_idx] = lowerData
print C
C += eye(N)
print C

A = 2.0*ones([N,N],dtype=float)
print A
B = 3.0*ones([N,N],dtype=float)
print B
print triu(A,k=1)
print tril(B,k=-1)
print triu(A,k=1) + tril(B,k=-1)
print triu(A,k=1) + tril(B,k=-1) + eye(N)

From ryanlists at gmail.com Wed Feb 28 09:16:43 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 28 Feb 2007 08:16:43 -0600 Subject: [SciPy-user] NumPy in Teaching In-Reply-To: <45E4FFA6.9010408@shrogers.com> References: <45E4FFA6.9010408@shrogers.com> Message-ID: I am teaching system dynamics, controls, and mechatronics and letting the students choose between matlab and python. I don't know if I have any lessons learned yet. Not many of the students choose python. I think the problem is that getting everything installed seems overwhelming. Enthought python is good, but the packages are slightly old, because everything is in flux. So, I tell my students to install it and then update Numpy/Scipy/Matplotlib. My other problem is that the other faculty don't know python, so Matlab is taught and expected. On 2/27/07, Steven H.
Rogers wrote: > I'm doing an informal survey on the use of Array Programming Languages > for teaching. If you're using NumPy in this manner I'd like to hear > from you. What subject was/is taught, academic level, results, lessons > learned, etc. > > Regards, > Steve > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rshepard at appl-ecosys.com Wed Feb 28 09:20:06 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Wed, 28 Feb 2007 06:20:06 -0800 (PST) Subject: [SciPy-user] Transforming 1-d array to 2-d array In-Reply-To: References: Message-ID: On Wed, 28 Feb 2007, Dave wrote: > Sorry, have been quite busy.... Dave, Apologies not necessary. > The double brackets was the cause of your original error I suspect - hence > taking [0] fixed the problem. That's true. > I'm not too sure what you're now trying to achieve, but hopefully the code > below will help you understand array indexing. Thank you. I've read the indexing section of Travis' book several times and have a general understanding. It will become clearer as I use NumPy more. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From nwagner at iam.uni-stuttgart.de Wed Feb 28 09:25:46 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 28 Feb 2007 15:25:46 +0100 Subject: [SciPy-user] Optimizers and the number of iterations Message-ID: <45E590EA.7010308@iam.uni-stuttgart.de> Hi all, AFAIK, only the number of function and gradient evaluations are directly available via full_output=1. Is there a better way to get the number of iterations ? 
What I have used so far, is

x_opt, allvec = optimize.fmin_cg(func, x_0, retall=1)
print 'The number of iterations is', shape(allvec)[0]-1

Nils From perry at stsci.edu Wed Feb 28 09:34:04 2007 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 28 Feb 2007 09:34:04 -0500 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> Message-ID: On Feb 28, 2007, at 1:19 AM, Fernando Perez wrote: > > 2. Print Perry Greenfield's tutorial > (http://new.scipy.org/wikis/topical_software/Tutorial). I think he's > updating it for numpy now. Yes, that's right. Hopefully it won't take too long to do (a week or two depending on my other work). Perry From gonzalezmancera+scipy at gmail.com Wed Feb 28 09:42:14 2007 From: gonzalezmancera+scipy at gmail.com (Andres Gonzalez-Mancera) Date: Wed, 28 Feb 2007 09:42:14 -0500 Subject: [SciPy-user] Any Books on SciPy? Message-ID: I agree that there is a huge void in proper Scipy documentation, which is essential for broader adoption. I know you can always go to the source files but this is not something everybody will do. Although the Documentation has been growing over the past year we're still missing some central 'official' documentation or at least a tutorial. While I'm on the subject, I bought Travis' book early last year. I understood that we should have received updates as they became available. I never heard or received anything. Have there been any updates in the past year or was I left out of the loop? The reason I ask this is because I understand there were substantial changes to numpy on the road to 1.0. Are these changes documented anywhere other than the developers mailing lists? Thank you very much, Andres On 2/28/07, scipy-user-request at scipy.org wrote: > Message: 8 > Date: Tue, 27 Feb 2007 23:19:51 -0700 > From: "Fernando Perez" > Subject: Re: [SciPy-user] Any Books on SciPy?
> To: "SciPy Users List" > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > On 2/27/07, John Hunter wrote: > > On 2/27/07, Robert Love wrote: > > > Are there any good, up to date books that people recommend for > > > numerical work with Python? > > > > > > I see the book > > > > > > Python Scripting for Computational Science > > > Hans Petter Langtangen > > > > > > Does anyone have opinions on this? Is it current? Are there better > > > books? > > > > I specifically do not recommend this book -- I own it but in my > > opinion it is outdated and is more a collection of the author's > > personal idioms than the current common practice in the scientific > > python community. For numerical work in python most people use > > I happen to share John's opinion, and I also have a copy of this book. > While it's technically correct, well written and fairly comprehensive > (probably /too/ much, since it's a bit all over the map), I strongly > dislike his approach. Much of the book uses his custom, home-made > collection of scripts and tools, which you can only download if you go > to a site and type a word from a certain page in the book (a simple > 'protection' system). > > Now you have an unmaintained, unreleased (publicly), set of tools to > learn from that don't have any licensing explicitly specified. > > Oh, and a good chunk of the tools in his distribution (since I have > the book, I have the code) use Perl. Go figure (there's also a tcl > directory thrown in for good measure). > > One of Python's main strengths for scientific work is precisely the > openness and interoperability of the various tools, and we all do our > part to help that be the case. The fact that this book follows an > approach more or less orthogonal to those ideas makes me very much > uninterested in using it. > > > There is a lot more, particularly for domain specific stiff, but these > > links are good starting points. 
Unfortunately, there is no > > one-stop-shop for a guide to scientific computing in python - Travis' > > documentation is the closest thing we have but it pretty much just > > covers numpy which is *the* core package. Fernando Perez and I have a > > very brief and limited starter guide covering multiple packages > > (ipython, numpy, matplotlib, scipy, VTK) but I don't have the PDF > > handy (Fernando, do you have the roadshow doc handy?). > > Well, you asked for it :) > > http://amath.colorado.edu/faculty/fperez/tmp/py4science.pdf > > It's worth stressing, in the strongest possible terms, that this > should NOT, in any way, shape or form, be considered anything beyond a > pre-pre-alpha, pre-draft of a project for a possible book :) Besides, > it's already outdated in several important places (numpy, mayavi, no > TVTK,...). > > After all I said about the Langtangen book, at least it's a real one. > Our pdf draft is most certainly not. So if you need a book, with all > of its limitations, Langtangen's is currently the only game in town > that covers the whole spectrum of python for scientific computing. If > John and I ever end up stranded on a desert island for 3 months with > great internet access and poor diving gear, we might actually finish > ours, but don't hold your breath. > > Honestly, I think that today your best bet is: > > 1. Buy Travis' book. It's fantastic, has everything you need to know > about numpy, and you'll be supporting numpy itself. > > 2. Print Perry Greenfield's tutorial > (http://new.scipy.org/wikis/topical_software/Tutorial). I think he's > updating it for numpy now. > > 3. Have a look at some of the other info in > http://new.scipy.org/Documentation, in particular D. Kuhlman's course > is very nice.
> > > Regards, > f > -- Andres Gonzalez-Mancera Biofluid Mechanics Lab Department of Mechanical Engineering University of Maryland, Baltimore County andres.gonzalez at umbc.edu 410-455-3347 From lou_boog2000 at yahoo.com Wed Feb 28 10:26:10 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 28 Feb 2007 07:26:10 -0800 (PST) Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: Message-ID: <20070228152610.79882.qmail@web34405.mail.mud.yahoo.com> While I agree with some of the criticisms of the Langtangen book, I do think it covers some topics that are really hard to find elsewhere. He does go into depth and cover a lot of writing C/C++ extensions. Overall the book has a lot of information. Thumbs down on parts that rely on his own code, especially the Perl stuff, but Thumbs Up on parts that show you details you have a hard time finding elsewhere. The biggest problem with the book is one that ALL printed documents face with Python (and probably other open source projects): staying current. Things change fast and books go out of date fast. Web sources and discussion lists like this one are essential. The way I see it, as a user of scientific python packages, the biggest barriers to new users and even veterans are:

* Getting documentation, examples and tutorials for various packages. Can vary greatly depending on each package.
* Figuring out how to install packages (an on-going battle for all)
* Figuring out package dependencies (e.g. do I need wxPython with matplotlib?)
* Figuring out which versions of each package are compatible with other packages' versions.

I bet I'm not the only one who gets nervous when he/she has to update a package. --- Fernando Perez wrote: > On 2/27/07, John Hunter wrote: > > On 2/27/07, Robert Love > wrote: > > > Are there any good, up to date books that people > recommend for > > > numerical work with Python?
> > > > > > I see the book > > > > > > Python Scripting for Computational Science > > > Hans Petter Langtangen > > > > > > Does anyone have opinions on this? Is it > current? Are there better -- Lou Pecora, my views are my own. --------------- Three laws of thermodynamics: First law: "You can't win." Second law: "You can't break even." Third law: "You can't quit." -- Allen Ginsberg, beat poet ____________________________________________________________________________________ Expecting? Get great news right away with email Auto-Check. Try the Yahoo! Mail Beta. http://advision.webevents.yahoo.com/mailbeta/newmail_tools.html From bnuttall at uky.edu Wed Feb 28 10:56:04 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Wed, 28 Feb 2007 10:56:04 -0500 Subject: [SciPy-user] Using SciPy/NumPy optimization In-Reply-To: <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> <45E493D7.2070703@gmail.com> <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> Message-ID: <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> Folks, I've gotten further, but still am not there. 
I have the following code:

import numpy as np
from scipy.optimize import fmin_tnc

class hyp_func:
    def __init__(self,*args):
        self.x, self.y = args
        self.avg = y.mean()
    def rmsd(self,*args):
        """A function suitable for passing to the fmin() minimizers
        """
        a, b, c = args[0]
        sse = 0.0
        sst = 0.0
        # minimize the RMSD
        for i in range(len(x)):
            y = a*(1.0+b*c*x[i])**(-1.0/b)
            sse += (self.y[i]-y)**2
            sst += (self.avg-y)**2
        # this won't work because I don't know what x should be or if it is related
        diff = [0.0,0.0]  # this is just an initial value and won't get any results
        # diff[0] = (-a*c-a*c*c*b*?????)**((-1-b)/b)
        # diff[1] = ((-1-b)/b)*(-a*c*c*b)*(-a*c-a*c*c*b*?????)**(((-1-b)/b)-1)
        return (sse / sst, diff)

# fake data for testing
a, b, c = [1.25,0.75,0.25]
x = []
y = []
for i in range(1,61):
    x.append(float(i))
    y.append(a*(1.0+b*c*float(i))**(-1.0/b))
x = np.array(x)
y = np.array(y)

myfunc = hyp_func(x,y)
params0 = [1.0,0.5,0.5]
retcode, nfeval, optimal = fmin_tnc(myfunc.rmsd, params0, bounds=[(None,None),(0.001,5.0), (-1.0,1.0)])

myfunc.rmsd() now returns the root mean square deviation which is to be minimized. However, in looking at the example code in scipy/optimize/tnc.py, I find that myfunc.rmsd() needs to return a second argument, g, the gradient of the function. It looks like this needs to be the first and second derivative of my function which, if my [really, really] rusty calculus serves, should be:

y' = (-a*c-a*c*c*b*x)**((-1-b)/b)
y'' = ((-1-b)/b)*(-a*c*c*b)*(-a*c-a*c*c*b*x)**(((-1-b)/b)-1)

I'm not sure these derivatives are particularly informative about which way to go to minimize the RMSD. At the moment, the code fails with a SystemError (error return without exception set) in the call to moduleTNC.minimize(). So, any suggestions on my next step? FWIW: In tnc.py, in test(), the call to fmin_tnc() has a typo in the keyword arguments. "maxnfeval=" should be "maxfun=". When running tnc.py to execute the tests, example() and test1fg() run.
The test2fg() detects an infeasible condition (apparently, I guess that is what was to be tested); Python raises an exception and terminates without running the other tests. Brandon C. Nuttall BNUTTALL at UKY.EDU Kentucky Geological Survey (859) 257-5500 University of Kentucky (859) 257-1147 (fax) 228 Mining & Mineral Resources Bldg http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 From hetland at tamu.edu Wed Feb 28 11:49:50 2007 From: hetland at tamu.edu (Rob Hetland) Date: Wed, 28 Feb 2007 10:49:50 -0600 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <20070228152610.79882.qmail@web34405.mail.mud.yahoo.com> References: <20070228152610.79882.qmail@web34405.mail.mud.yahoo.com> Message-ID: <1618CF9B-AD0D-4E1C-A0FF-C39289FDFF63@tamu.edu> I would help write part of a book. I'm sure others would, too. But none of us have the time to write a whole book ourselves. Who could coordinate such an effort? What format would be used (LaTeX, etc.)? How would consistency be maintained across sections? I think these are not insurmountable obstacles, and a free, printable PDF howto manual (as opposed to a reference book like the NumPy book) in the spirit of the Learning ... O'Reilly books is very, very important. The tutorials on the wiki are a good place to start, but in the end, people want a book. I know I prefer books to online documentation when I am really diving into something. -Rob ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From grante at visi.com Wed Feb 28 11:49:34 2007 From: grante at visi.com (Grant Edwards) Date: Wed, 28 Feb 2007 16:49:34 +0000 (UTC) Subject: [SciPy-user] Any Books on SciPy? References: <88e473830702271959o6a4dcc2dv3dd2d5fb7db0bfa1@mail.gmail.com> Message-ID: On 2007-02-28, Fernando Perez wrote: > Well, you asked for it :) > > http://amath.colorado.edu/faculty/fperez/tmp/py4science.pdf Cool.
If you're interested in suggestions for additional topics, you might want to add a chapter on scientific python: http://sourcesup.cru.fr/projects/scientific-py/ Though it's probably not "cool" these days compared to some of the other visualization options, I still get a _lot_ of use out of gnuplot-py: http://gnuplot-py.sourceforge.net/ -- Grant Edwards, grante at visi.com Yow! Civilization is fun! Anyway, it keeps me busy!! From as8ca at virginia.edu Wed Feb 28 11:50:24 2007 From: as8ca at virginia.edu (Alok Singhal) Date: Wed, 28 Feb 2007 11:50:24 -0500 Subject: [SciPy-user] Using SciPy/NumPy optimization In-Reply-To: <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> <45E493D7.2070703@gmail.com> <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> Message-ID: <20070228165024.GA3521@virginia.edu> On 28/02/07: 10:56, Brandon Nuttall wrote: > myfunc.rmsd() now returns the root mean square deviation which is to be > minimized. However, in looking at the example code in > scipy/optimize/tnc.py, I find that myfunc.rmsd() needs to return a second > argument, g, the gradient of the function. It looks like this needs to be > the first and second derivative of my function which if my [really, really] > rusty calculus should be: > > y' = (-a*c-a*c*c*b*x)**((-1-b)/b) > y'' = ((-1-b)/b)*(-a*c*c*b)*(-a*c-a*c*c*b*x)**(((-1-b)/b)-1) > I think the second return value should be a list of (first, partial) derivatives with respect to the parameters being estimated, i.e., a, b, c in your case. The example in optimize/tnc.py defines the function as pow(x[0],2.0)+pow(abs(x[1]),3.0), and the derivatives g[0] and g[1] are defined as 2.0*x[0] and sign(x[1])*3.0*pow(abs(x[1]),2.0), which are derivatives of f with respect to x[0] and x[1], the parameters. I calculated those derivatives (and I think I did it right :-) ), you can see them in an earlier message by me in this thread.
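Alok's point — that the gradient fmin_tnc wants is taken with respect to the fit parameters (a, b, c), not with respect to x — can be sketched concretely. This is only an illustration: the function name below is made up, it uses a plain sum-of-squares objective rather than Brandon's sse/sst ratio, and the hand-derived partials of y = a*(1 + b*c*x)**(-1/b) should be double-checked before relying on them.

```python
import numpy as np

def sse_and_grad(p, x, y):
    """Sum-of-squares objective and its gradient with respect to the
    parameters (a, b, c) of the model y = a*(1 + b*c*x)**(-1/b)."""
    a, b, c = p
    u = 1.0 + b * c * x
    m = a * u ** (-1.0 / b)        # model values
    r = y - m                      # residuals
    # partial derivatives of the model wrt each parameter
    dm_da = u ** (-1.0 / b)
    dm_db = m * (np.log(u) / b ** 2 - c * x / (b * u))
    dm_dc = -a * x * u ** (-1.0 / b - 1.0)
    # chain rule: d/dp sum(r**2) = -2 * sum(r * dm/dp)
    grad = np.array([-2.0 * np.sum(r * d) for d in (dm_da, dm_db, dm_dc)])
    return np.sum(r * r), grad
```

fmin_tnc(sse_and_grad, params0, args=(x, y), bounds=...) would then get the objective value and the gradient from a single call; if your fmin_tnc predates the args keyword, a small lambda closing over x and y does the same job.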
> I'm not sure these derivatives are particularly informative about which way > to go to minimize the RMSD. Yeah, that is why it makes more sense to calculate the derivatives wrt the parameters being estimated. Here is another way to do the fitting (adapted from the scipy cookbook):

import numpy as np
from scipy import rand
from scipy.optimize import leastsq
import pylab

n = 151
x = np.mgrid[1:10:n*1j]

def f(p, x):
    return p[0]*(1.0+p[1]*p[2]*x)**(-1.0/p[1])

def errf(p, x, y):
    return f(p, x) - y

abc = [15.0, 2.5, 0.3]
y = f(abc, x) + rand(n) - 0.5

abc_guess = [30.0, 4.0, 1.0]
abc1, success = leastsq(errf, abc_guess[:], args = (x, y))

print success  # should be 1
print abc1     # I get [ 14.71976079   2.4536336    0.27560377]

pylab.plot(x, y, x, f(abc1, x))
pylab.show()

-Alok -- Alok Singhal * * Graduate Student, dept. of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From jdh2358 at gmail.com Wed Feb 28 12:10:47 2007 From: jdh2358 at gmail.com (John Hunter) Date: Wed, 28 Feb 2007 11:10:47 -0600 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <1618CF9B-AD0D-4E1C-A0FF-C39289FDFF63@tamu.edu> References: <20070228152610.79882.qmail@web34405.mail.mud.yahoo.com> <1618CF9B-AD0D-4E1C-A0FF-C39289FDFF63@tamu.edu> Message-ID: <88e473830702280910x3cf80db7sa9991e00c3b65a74@mail.gmail.com> On 2/28/07, Rob Hetland wrote: > I would help write part of a book. I'm sure others would, too. But > none of us have the time to write a whole book ourselves. > Who could coordinate such an effort? > What format would be used (LaTeX, etc.)? > How would consistency be maintained across sections? > I think these are not insurmountable obstacles, and a free, printable > PDF howto manual (as opposed to a reference book like the NumPy book) When Fernando and I first started the project, we wanted to keep this with as few authors as possible simply because most team-written books aren't that good, and are not well integrated.
Maintenance also becomes difficult since open source is a rapidly moving target (as Fernando noted, several of our chapters are already out of date) and the more authors you have, the more difficult it becomes to herd the cats. So we decided to try and tackle it alone. Unfortunately, both of us are too involved with our other commitments to put the work in that is necessary, and speaking for myself, I would be happy to revisit the idea of a collaborative project. We would need a couple of people to step up as editors who commit a fair amount of time to ensuring style, cross-referencing, proper topics, etc... We could then solicit chapters on all the relevant major packages, either from the authors of the packages or from heavy users. We would need a few introductory chapters which emphasize integrated usage, followed by package specific chapters emphasizing deeper features. At this point, I can probably only commit to an mpl chapter and maybe some work on some integration. I would be happy to do this under an open document license, while still striving for hardcopy printing. The src for the document Fernando posted lives in matplotlib svn as a collection of lyx chapters (egad the last commit was 20 months ago) http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/course/ I will defer to Fernando's wishes on this, but I am happy to try and move the ball forward by bringing in anyone offering to do some work. Perry's tutorial is very nice and covers a lot of ground, often in more detail than what Fernando and I have done, and might be a better starting point, depending on his interests. Or some variant of his tutorial might serve well for the introductory chapters, followed by the package specific stuff.
JDH From paul.ray at nrl.navy.mil Wed Feb 28 12:15:11 2007 From: paul.ray at nrl.navy.mil (Paul Ray) Date: Wed, 28 Feb 2007 12:15:11 -0500 Subject: [SciPy-user] Numpy eBook In-Reply-To: References: Message-ID: <013D2783-DCB2-448F-B23E-A907FB4E9A64@nrl.navy.mil> On Feb 28, 2007, at 11:50 AM, scipy-user-request at scipy.org wrote: > While I'm on the subject, I bought Travis' book early last year. I > understood that we should have received updates as they became > available. I never heard or received anything. Have there been any > updates in the past year or was I left out of the loop? The reason I > ask this is because I understand there were substantial changes to > numpy in the road to 1.0. Are these changes documented anywhere other > than the developers mailing lists? I got one update in late December 2005, and none since then. I have certainly been hoping for a post-1.0 revision as well. Cheers, -- Paul -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5012 bytes Desc: not available URL: From coughlan at ski.org Wed Feb 28 12:20:23 2007 From: coughlan at ski.org (James Coughlan) Date: Wed, 28 Feb 2007 09:20:23 -0800 Subject: [SciPy-user] Numpy eBook In-Reply-To: <013D2783-DCB2-448F-B23E-A907FB4E9A64@nrl.navy.mil> References: <013D2783-DCB2-448F-B23E-A907FB4E9A64@nrl.navy.mil> Message-ID: <45E5B9D7.4050107@ski.org> I bought Travis's book in mid-2006 and received an update in December. -James Paul Ray wrote: > > On Feb 28, 2007, at 11:50 AM, scipy-user-request at scipy.org wrote: > >> While I'm on the subject, I bought Travis' book early last year. I >> understood that we should have received updates as they became >> available. I never heard or received anything. Have there been any >> updates in the past year or was I left out of the loop? The reason I >> ask this is because I understand there were substantial changes to >> numpy in the road to 1.0. 
Are these changes documented anywhere other >> than the developers mailing lists? > > I got one update in late December 2005, and none since then. I have > certainly been hoping for a post-1.0 revision as well. > > Cheers, > > -- Paul > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ------------------------------------------------------- James Coughlan, Ph.D., Associate Scientist Smith-Kettlewell Eye Research Institute Email: coughlan at ski.org URL: http://www.ski.org/Rehab/Coughlan_lab/ Phone: 415-345-2146 Fax: 415-345-8455 ------------------------------------------------------- From robert.kern at gmail.com Wed Feb 28 12:43:58 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Feb 2007 11:43:58 -0600 Subject: [SciPy-user] Using SciPy/NumPy optimization In-Reply-To: <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> <45E493D7.2070703@gmail.com> <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> Message-ID: <45E5BF5E.6020601@gmail.com> Brandon Nuttall wrote:

> Folks,
>
> I've gotten further, but still am not there. I have the following code:
>
> import numpy as np
> from scipy.optimize import fmin_tnc
>
> class hyp_func:
>     def __init__(self,*args):
>         self.x, self.y = args
>         self.avg = y.mean()
>     def rmsd(self,*args):
>         """A function suitable for passing to the fmin() minimizers
>         """
>         a, b, c = args[0]
>         sse = 0.0
>         sst = 0.0
>         # minimize the RMSD
>         for i in range(len(x)):
>             y = a*(1.0+b*c*x[i])**(-1.0/b)
>             sse += (self.y[i]-y)**2
>             sst += (self.avg-y)**2

Don't do this looping. Instead, just make sure that self.x and self.y are arrays and use array math.
y = a*(1.0 + b*c*self.x) ** (-1.0/b)
dy = self.y - y
sse = (dy*dy).sum()
sst = ((y - self.avg)**2).sum()

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Wed Feb 28 12:45:06 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Feb 2007 10:45:06 -0700 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: Message-ID: <45E5BFA2.6090302@ee.byu.edu> Andres Gonzalez-Mancera wrote: >I agree that there is a huge void in proper Scipy documentation which >is essential for broader adoption. I know you can always go to the >source files but this is not something everybody will do. Although the >Documentation has been growing over the past year we're still missing >some central 'official' documentation or at least tutorial. > >While I'm on the subject, I bought Travis' book early last year. I >understood that we should have received updates as they became >available. I never heard or received anything. Have there been any >updates in the past year or was I left out of the loop? The reason I >ask this is because I understand there were substantial changes to >numpy in the road to 1.0. Are these changes documented anywhere other >than the developers mailing lists? > > The current version of "Guide to NumPy" is very complete. There are a few editing and typographical changes needed, but that is it. I sent updates to everyone who I have an email address for in December. If you did not receive it then I don't have the right email address for you. Please let me know and we will get it fixed (I'll send you an update right away and you'll be set to receive the final update in April/May). Thank you so much, everybody, for all of your support.
I have received approximately 1000 orders so far over the past 18 months which has literally saved my skin and made it possible to continue working on NumPy. I'm even more enthused that the number of downloads of NumPy 1.0.1 has reached 30000. We are also making inroads with the Python developers and if all goes well should have a working array interface in Python 3.0 and Python 2.6. Best regards, -Travis From cclarke at chrisdev.com Wed Feb 28 13:01:27 2007 From: cclarke at chrisdev.com (Christopher Clarke) Date: Wed, 28 Feb 2007 14:01:27 -0400 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <45E5BFA2.6090302@ee.byu.edu> References: <45E5BFA2.6090302@ee.byu.edu> Message-ID: <65893979-F4DF-416A-AA8F-AF73FB33DF9D@chrisdev.com> Hi Travis I agree that the "Guide to Numpy" book was well worth the price (I was really happy when PayPal started to work in Trinidad!!!) I am suggesting that the examples, i.e. http://www.scipy.org/Numpy_Example_List, should also be included Regards Chris On 28 Feb 2007, at 13:45, Travis Oliphant wrote: > Andres Gonzalez-Mancera wrote: > >> I agree that there is a huge void in proper Scipy documentation which >> is essential for broader adoption. I know you can always go to the >> source files but this is not something everybody will do. Although >> the >> Documentation has been growing over the past year we're still missing >> some central 'official' documentation or at least tutorial. >> >> While I'm on the subject, I bought Travis' book early last year. I >> understood that we should have received updates as they became >> available. I never heard or received anything. Have there been any >> updates in the past year or was I left out of the loop? The reason I >> ask this is because I understand there were substantial changes to >> numpy in the road to 1.0. Are these changes documented anywhere other >> than the developers mailing lists? >> >> > The current version of "Guide to NumPy" is very complete.
There are a > few editing and typographical changes needed, but that is it. > > I sent updates to everyone who I have an email adress for in December. > If you did not receive it then I don't have the right email address > for > you. Please let me know and we will get it fixed (I'll send you an > update right away and you'll be set to receive the final update in > April/May). > > Thank you so much, everybody, for all of your support. I have > received > approximately 1000 orders so far over the past 18 months which has > literally saved my skin and made it possible to continue working on > NumPy. > > I'm even more enthused that the number of downloads of NumPy 1.0.1 has > reached 30000. We are also making inroads with the Python developers > and uf all goes well should have a working array interface in > Python 3.0 > and Python 2.6 > > Best regards, > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From perry at stsci.edu Wed Feb 28 13:30:15 2007 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 28 Feb 2007 13:30:15 -0500 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <88e473830702280910x3cf80db7sa9991e00c3b65a74@mail.gmail.com> References: <20070228152610.79882.qmail@web34405.mail.mud.yahoo.com> <1618CF9B-AD0D-4E1C-A0FF-C39289FDFF63@tamu.edu> <88e473830702280910x3cf80db7sa9991e00c3b65a74@mail.gmail.com> Message-ID: On Feb 28, 2007, at 12:10 PM, John Hunter wrote: > > > Perry's tutorial is very nice and covers a lot of ground, often in > more detail than what Fernando and I have done, and might be a better > starting point, depending on his interests. Or some variant of his > tutorial might serve well for the introductory chapters, followed by > the package specific stuff. 
> I see it as suitable for a certain class of new users (i.e., those that are more task oriented and looking to see what they can accomplish interactively without having to learn a lot of preliminary material). But it certainly isn't suitable for all. I was talking to Travis about the need for some sort of lightweight intro to numpy. His book is a great technical reference, and very suitable for developers. But I worry that it is a bit intimidating for new users. So I raised the possibility of writing some sort of introduction that did the basics but pointed to the book for details for more advanced topics. Travis was ok with that (I didn't want to instigate anything that would cut into book sales). I was thinking of perhaps taking the old Numeric/numarray user guides and stripping them down and modifying that base document after I finish with the interactive tutorial (if I have time of course). With having to switch our staff over to numpy (we've pretty much completed converting most of our distributed software; our next release in early summer will be numpy-based) I think we need something that isn't too long to read. If others want to take this on, I welcome it (and as I said, I don't think Travis objects based on what he told me). Perry From rshepard at appl-ecosys.com Wed Feb 28 13:44:28 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Wed, 28 Feb 2007 10:44:28 -0800 (PST) Subject: [SciPy-user] Library Message On Application Start Message-ID: Each time I start my application I see this: Overwriting info= from scipy.misc (was from numpy.lib.utils) It does not seem to affect the few functions we use, but I wonder if it's something that should be fixed somewhere. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.
| Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From lou_boog2000 at yahoo.com Wed Feb 28 13:45:14 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 28 Feb 2007 10:45:14 -0800 (PST) Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: Message-ID: <666570.29413.qm@web34409.mail.mud.yahoo.com> What's really needed are Examples, examples, and examples. The book is light on that (no offense). Some definitions of functions or methods leave me scratching my head asking, "How is that done in the code, really?" --- Perry Greenfield wrote: > I see it as suitable for a certain class of new > users (i.e, those > that are more task oriented and looking to see what > they can > accomplish interactively without having to learn a > lot of preliminary > material. But it certainly isn't suitable for all. > > I was talking to Travis about the need for some sort > of lightweight > intro to numpy. His book is a great technical > reference, and very > suitable for developers. But I worry that it is a > bit intimidating > for new users. So I raised the possibility of > writing some sort of > introduction that did the basics but pointed to the > book for details > for more advanced topics. Travis was ok with that (I > didn't want to > instigate anything that would cut into book sales). > I was thinking of > perhaps taking the old Numeric/numarray user guides > and stripping > them down and modifying that base document after I > finish with the > interactive tutorial (if I have time of course). > With having to > switch our staff over to numpy (we've pretty much > completed > converting most of our distributed software; our > next release in > early summer will be numpy-based) I think we need > something that > isn't too long to read. If others want to take this > on, I welcome it > (and as I said, I don't think Travis objects based > on what he told me). > > Perry -- Lou Pecora, my views are my own.
--------------- Three laws of thermodynamics: First law: "You can't win." Second law: "You can't break even." Third law: "You can't quit." -- Allen Ginsberg, beat poet ____________________________________________________________________________________ Finding fabulous fares is fun. Let Yahoo! FareChase search your favorite travel sites to find flight and hotel bargains. http://farechase.yahoo.com/promo-generic-14795097 From rshepard at appl-ecosys.com Wed Feb 28 14:02:42 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Wed, 28 Feb 2007 11:02:42 -0800 (PST) Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <666570.29413.qm@web34409.mail.mud.yahoo.com> References: <666570.29413.qm@web34409.mail.mud.yahoo.com> Message-ID: On Wed, 28 Feb 2007, Lou Pecora wrote: > What's really is needed are Examples, examples, and examples. The book is > light on that (no offense). Some defintions of functions or methods leaves > me scratching my head asking, "How is that done in the code, really?" I'll second that suggestion. The mail list is very helpful, but it would be nice to see the actual use of a function in a small program (or function) in the book. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From pgmdevlist at gmail.com Wed Feb 28 14:09:11 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 28 Feb 2007 14:09:11 -0500 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: <666570.29413.qm@web34409.mail.mud.yahoo.com> Message-ID: <200702281409.11771.pgmdevlist@gmail.com> > I'll second that suggestion. The mail list is very helpful, but it would > be nice to see the actual use of a function in a small program (or > function) in the book. Something like that ? 
http://www.scipy.org/Numpy_Example_List_With_Doc http://www.scipy.org/Cookbook From krish.subramaniam at gmail.com Wed Feb 28 14:11:17 2007 From: krish.subramaniam at gmail.com (Krish Subramaniam) Date: Wed, 28 Feb 2007 11:11:17 -0800 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: <666570.29413.qm@web34409.mail.mud.yahoo.com> Message-ID: Personally, I think the approach Cleve Moler (of Mathworks) takes in the first chapter of "Numerical Computing in Matlab" is the best I have seen. http://www.mathworks.com/moler/chapters.html What he does is take a central concept (Fibonacci numbers and the Golden Ratio) and explain everything about it using interactive examples in Matlab scripts. This approach helps a reader learn when to use a function and how to use it. So if one can explain a lightweight scientific-computing concept using Scipy / Numpy and exercise most of the functions along the way, that would be a great tutorial. Just my 2 cents. --Krish Subramaniam On 2/28/07, Rich Shepard wrote: > On Wed, 28 Feb 2007, Lou Pecora wrote: > > > What's really is needed are Examples, examples, and examples. The book is > > light on that (no offense). Some defintions of functions or methods leaves > > me scratching my head asking, "How is that done in the code, really?" > > I'll second that suggestion. The mail list is very helpful, but it would > be nice to see the actual use of a function in a small program (or function) > in the book. > > Rich > > -- > Richard B. Shepard, Ph.D. | The Environmental Permitting > Applied Ecosystem Services, Inc. | Accelerator(TM) > Voice: 503-667-4517 Fax: 503-667-8863 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rshepard at appl-ecosys.com Wed Feb 28 14:32:48 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Wed, 28 Feb 2007 11:32:48 -0800 (PST) Subject: [SciPy-user] Any Books on SciPy?
In-Reply-To: <200702281409.11771.pgmdevlist@gmail.com> References: <666570.29413.qm@web34409.mail.mud.yahoo.com> <200702281409.11771.pgmdevlist@gmail.com> Message-ID: On Wed, 28 Feb 2007, Pierre GM wrote: > Something like that ? > http://www.scipy.org/Numpy_Example_List_With_Doc > http://www.scipy.org/Cookbook Yes. -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From lou_boog2000 at yahoo.com Wed Feb 28 15:03:11 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 28 Feb 2007 12:03:11 -0800 (PST) Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <200702281409.11771.pgmdevlist@gmail.com> Message-ID: <92333.6801.qm@web34408.mail.mud.yahoo.com> Well... yes, what took you so long? :-) Thanks. A very good web site. And right there hiding under my nose. --- Pierre GM wrote: > Something like that ? > http://www.scipy.org/Numpy_Example_List_With_Doc > http://www.scipy.org/Cookbook -- Lou Pecora, my views are my own. --------------- Three laws of thermodynamics: First law: "You can't win." Second law: "You can't break even." Third law: "You can't quit." -- Allen Ginsberg, beat poet ____________________________________________________________________________________ Need Mail bonding? Go to the Yahoo! Mail Q&A for great tips from Yahoo! Answers users. http://answers.yahoo.com/dir/?link=list&sid=396546091 From pgmdevlist at gmail.com Wed Feb 28 15:19:54 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 28 Feb 2007 15:19:54 -0500 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: <92333.6801.qm@web34408.mail.mud.yahoo.com> References: <92333.6801.qm@web34408.mail.mud.yahoo.com> Message-ID: <200702281519.54562.pgmdevlist@gmail.com> On Wednesday 28 February 2007 15:03:11 Lou Pecora wrote: > Well... yes, what took you so long? :-) > > Thanks. A very good web site. And right there hiding > under my nose. 
If I remember my Poe correctly, that's quite often the case.

But seriously: I find the Numpy examples invaluable. However, nothing
prevents us from starting to reorganize these examples in a fashion closer
to Moler's approach: short, real-life examples, along with the corresponding
Numpy code. A kind of electronic companion to the Guide to NumPy, one that
shouldn't replace the Cookbook but complement it. We should first agree on a
basic template, and then fill it in adequately. The wiki organization is
ideal for that.

From lou_boog2000 at yahoo.com  Wed Feb 28 15:53:07 2007
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Wed, 28 Feb 2007 12:53:07 -0800 (PST)
Subject: [SciPy-user] Any Books on SciPy?
In-Reply-To: <200702281519.54562.pgmdevlist@gmail.com>
Message-ID: <20070228205307.17517.qmail@web34414.mail.mud.yahoo.com>

--- Pierre GM wrote:
> On Wednesday 28 February 2007 15:03:11 Lou Pecora wrote:
> > Well... yes, what took you so long? :-)
> >
> > Thanks. A very good web site. And right there hiding
> > under my nose.
>
> If I remember my Poe correctly, that's quite often the case.

You know Edgar Allan well. I think that was "The Tell-Tale Heart". Very
good knowledge of literature.

> But seriously: I find the Numpy examples invaluable. However, nothing
> prevents us from starting to reorganize these examples in a fashion
> closer to Moler's approach: short, real-life examples, along with the
> corresponding Numpy code. A kind of electronic companion to the Guide
> to NumPy, one that shouldn't replace the Cookbook but complement it.
> We should first agree on a basic template, and then fill it in
> adequately. The wiki organization is ideal for that.

You are right. I'm not sure what the format should be. I'm a little out of
practice at writing things up, even though I have contributed to the
NumPy/SciPy wiki.

-- Lou Pecora, my views are my own.
---------------
Three laws of thermodynamics:
First law: "You can't win."
Second law: "You can't break even."
Third law: "You can't quit."
-- Allen Ginsberg, beat poet ____________________________________________________________________________________ No need to miss a message. Get email on-the-go with Yahoo! Mail for Mobile. Get started. http://mobile.yahoo.com/mail From bnuttall at uky.edu Wed Feb 28 15:51:36 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Wed, 28 Feb 2007 15:51:36 -0500 Subject: [SciPy-user] Using SciPy/NumPy optimization THANKS! In-Reply-To: <45E5BF5E.6020601@gmail.com> References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> <45E493D7.2070703@gmail.com> <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> <45E5BF5E.6020601@gmail.com> Message-ID: <6.0.1.1.2.20070228135020.026f3008@pop.uky.edu> Folks, Thanks to Alok Singhal and Robert Kern I have not only learned a great deal about SciPy and NumPy, but I have code that works. Thanks for the tip on not looping; it does make cleaner code. I have two issues: 1) there must be a better way to convert a list of data pairs to two arrays, 2) I'm not sure of a graceful way to transition from one plot to the next and then close. It works and I'm going to plug it into other code that grabs data from a MySQL database. 
import numpy as np
from scipy import rand
from scipy.optimize import leastsq
import pylab

class HypObj:
    """Object for best fit to hyperbolic function"""
    def __init__(self,x,y,guess=None,plot=None):
        self.x = x
        self.y = y
        self.plot=plot
        if guess==None:
            self.guess = [y.max(),1.0,0.5]
        else:
            self.guess = guess
        self.parameters, self.success = leastsq(self._errf, self.guess[:],
                                                args = (self.x, self.y))
        self.r2 = self._r2(self.x,self.y,self.parameters)
        if self.plot<>None:
            pylab.plot(self.x, self.y, 'ro',
                       self.x, self._f(self.parameters, self.x), 'b--')
            pylab.show()
            pylab.clf()

    def _f(self,p, x):
        """Evaluate function with parameters, p, and array of x values"""
        return p[0]*(1.0+p[1]*p[2]*x)**(-1.0/p[1])

    def _errf(self,p, x, y):
        """return the difference between calculated and input y values"""
        return self._f(p, x) - y

    def _r2(self,x,y,abc):
        """calculate the correlation coefficient and rmsd"""
        y_calc = self._f(abc,self.x)
        y_avg = y.mean()
        rmsd = ((y-self._f(abc,x))**2).sum()/((y_calc-y_avg)**2).sum()
        return (1.0-rmsd, rmsd)

def list_toarray(data):
    """Convert input list of data pairs to two arrays"""
    x = []
    y = []
    for i in data:
        x.append(float(i[0]))
        y.append(float(i[1]))
    x = np.array(x)
    y = np.array(y)
    return x,y

def test():

    def f(p, x): # for testing, make the objective function available
        return p[0]*(1.0+p[1]*p[2]*x)**(-1.0/p[1])

    # fake data for testing, should get perfect correlation
    print "\nTest 1: should find an exact solution = [1.25,0.75,0.25]"
    print "(No plot)"
    qi, b, di = [1.25,0.75,0.25]
    sample = []
    for i in range(1,61):
        sample.append([float(i),qi*(1.0+b*di*float(i))**(-1.0/b)])
    x, y = list_toarray(sample)
    ex1 = HypObj(x,y)
    if ex1.success==1:
        print ex1.parameters,ex1.r2
    else:
        print "leastsq() error: ",ex1.success

    # generate data with random deviations
    print "\nTest 2: should find a solution close to [15.0, 2.5, 0.3]"
    print "(No plot)"
    n = 151
    x = np.mgrid[1:10:n*1j]
    abc = [15.0, 2.5, 0.3]
    y = f(abc, x) + rand(n) - 0.5
    ex2 = HypObj(x,y,guess=[30.0, 4.0, 1.0])
    if ex2.success==1:
        print ex2.parameters,ex2.r2
    else:
        print "leastsq() error: ",ex2.success

    # real data
    print "\nTest 3: real world data"
    rn115604=[[1, 3233],[2, 3530],[3, 3152],[4, 2088],[6, 3038],
              [7, 2108],[8, 2132],[9, 1654],[10, 1762],[11, 1967],
              [12, 1760],[13, 1649],[14, 1633],[15, 1680],[16, 1398],
              [17, 1622],[18, 1393],[19, 1436],[20, 1352],[21, 1402],
              [22, 1459],[23, 1373],[24, 1262],[25, 1346],[26, 1325],
              [27, 1319],[28, 1309],[29, 1206],[30, 1249],[31, 1446],
              [32, 1255],[33, 1227],[34, 1268],[35, 1233],[36, 1175],
              [37, 1129],[38, 1242],[39, 1247],[40, 1198],[41, 1058],
              [42, 1172],[43, 1242],[44, 1214],[45, 1148],[46, 1689],
              [47, 971],[48, 1084],[49, 1028],[50, 1164],[51, 1297],
              [52, 1040],[53, 1045],[54, 1196],[55, 991],[56, 1065],
              [57, 898],[58, 1020],[59, 966],[60, 1162],[61, 1069],
              [62, 1055],[63, 1035],[64, 1045],[65, 1076],[66, 1108],
              [67, 918],[68, 1051],[69, 1049],[70, 1039],[71, 1133],
              [72, 887],[73, 924],[74, 983],[75, 1077],[76, 1092],
              [77, 973],[78, 920],[79, 1040]]
    x, y = list_toarray(rn115604)
    ex3 = HypObj(x,y,plot=1)
    if ex3.success==1:
        print ex3.parameters,ex3.r2
    else:
        print "leastsq() error: ",ex3.success

if __name__=="__main__":
    test()

Brandon C. Nuttall

BNUTTALL at UKY.EDU               Kentucky Geological Survey
(859) 257-5500                  University of Kentucky
(859) 257-1147 (fax)            228 Mining & Mineral Resources Bldg
http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107

From fperez.net at gmail.com  Wed Feb 28 15:57:36 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 28 Feb 2007 13:57:36 -0700
Subject: [SciPy-user] Any Books on SciPy?
In-Reply-To: <20070228205307.17517.qmail@web34414.mail.mud.yahoo.com>
References: <200702281519.54562.pgmdevlist@gmail.com> <20070228205307.17517.qmail@web34414.mail.mud.yahoo.com>
Message-ID:

On 2/28/07, Lou Pecora wrote:
> --- Pierre GM wrote:
> > > Thanks. A very good web site. And right there hiding
> > > under my nose.
> >
> > If I remember my Poe correctly, that's quite often the case.
> > You know Edgar Allan well. I think that was "The
> > Tell-Tale Heart". Very good knowledge of literature.

I suspect the reference was to "The Purloined Letter" (or "La carta
robada", as I originally read it) instead.

Cheers,

f

From robert.kern at gmail.com  Wed Feb 28 16:07:48 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 28 Feb 2007 15:07:48 -0600
Subject: [SciPy-user] Using SciPy/NumPy optimization THANKS!
In-Reply-To: <6.0.1.1.2.20070228135020.026f3008@pop.uky.edu>
References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> <45E493D7.2070703@gmail.com> <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> <45E5BF5E.6020601@gmail.com> <6.0.1.1.2.20070228135020.026f3008@pop.uky.edu>
Message-ID: <45E5EF24.8080703@gmail.com>

Brandon Nuttall wrote:
> Folks,
>
> Thanks to Alok Singhal and Robert Kern I have not only learned a great deal
> about SciPy and NumPy, but I have code that works. Thanks for the tip on
> not looping; it does make cleaner code. I have two issues: 1) there must be
> a better way to convert a list of data pairs to two arrays,

xy = array([[x0, y0],
            [x1, y1],
            ...])
x = xy[:,0]
y = xy[:,1]

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco

From lou_boog2000 at yahoo.com  Wed Feb 28 16:08:40 2007
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Wed, 28 Feb 2007 13:08:40 -0800 (PST)
Subject: [SciPy-user] Any Books on SciPy?
In-Reply-To:
Message-ID: <20070228210841.22939.qmail@web34414.mail.mud.yahoo.com>

--- Fernando Perez wrote:
> On 2/28/07, Lou Pecora wrote:
> > You know Edgar Allan well. I think that was "The
> > Tell-Tale Heart". Very good knowledge of literature.
>
> I suspect the reference was to "The Purloined Letter" (or "La carta
> robada", as I originally read it) instead.

Sigh.
This is why I went into physics and not literature. Thanks. :-)

-- Lou Pecora, my views are my own.
---------------
Three laws of thermodynamics:
First law: "You can't win."
Second law: "You can't break even."
Third law: "You can't quit."
-- Allen Ginsberg, beat poet

____________________________________________________________________________________
Have a burning question? Go to www.Answers.yahoo.com and get answers from
real people who know.

From as8ca at virginia.edu  Wed Feb 28 16:16:26 2007
From: as8ca at virginia.edu (Alok Singhal)
Date: Wed, 28 Feb 2007 16:16:26 -0500
Subject: [SciPy-user] Using SciPy/NumPy optimization THANKS!
In-Reply-To: <6.0.1.1.2.20070228135020.026f3008@pop.uky.edu>
References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> <45E493D7.2070703@gmail.com> <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> <45E5BF5E.6020601@gmail.com> <6.0.1.1.2.20070228135020.026f3008@pop.uky.edu>
Message-ID: <20070228211626.GA6423@virginia.edu>

On 28/02/07: 15:51, Brandon Nuttall wrote:
> import numpy as np
> from scipy import rand
> from scipy.optimize import leastsq
> import pylab
>
> class HypObj:
>     """Object for best fit to hyperbolic function"""
>     def __init__(self,x,y,guess=None,plot=None):
>         self.x = x
>         self.y = y
>         self.plot=plot
>         if guess==None:
>             self.guess = [y.max(),1.0,0.5]
>         else:
>             self.guess = guess
>         self.parameters, self.success = leastsq(self._errf, self.guess[:],
>                                                 args = (self.x, self.y))
>         self.r2 = self._r2(self.x,self.y,self.parameters)
>         if self.plot<>None:

Maybe

    if self.plot is not None:

is better? See http://www.python.org/dev/peps/pep-0008/, section
'Programming Recommendations'.
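As a side note on that PEP 8 recommendation, here is a minimal, standalone sketch (modern Python 3, where the thread's `<>` operator no longer exists; the `describe` helper is purely illustrative) of why an identity test against `None` beats a truthiness test:

```python
def describe(plot=None):
    # Identity comparison per PEP 8: only None means "no plot", so
    # falsy-but-deliberate values such as 0 still enable plotting.
    if plot is not None:
        return "plotting enabled"
    return "plotting disabled"

print(describe())        # plotting disabled
print(describe(plot=0))  # plotting enabled; a bare `if plot:` would get this wrong
```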
> def list_toarray(data):
>     """Convert input list of data pairs to two arrays"""
>     x = []
>     y = []
>     for i in data:
>         x.append(float(i[0]))
>         y.append(float(i[1]))
>     x = np.array(x)
>     y = np.array(y)
>     return x,y

For your data, you can do:

    x = [data[i][0] for i in range(60)]
    y = [data[i][1] for i in range(60)]

If you want to use numpy, then you could do:

    data = np.asarray(data)
    x = data[:, 0]
    y = data[:, 1]

> def test():
>
>     def f(p, x): # for testing, make the objective function available
>         return p[0]*(1.0+p[1]*p[2]*x)**(-1.0/p[1])
>
>     # fake data for testing, should get perfect correlation
>     print "\nTest 1: should find an exact solution = [1.25,0.75,0.25]"
>     print "(No plot)"
>     qi, b, di = [1.25,0.75,0.25]
>     sample = []
>     for i in range(1,61):
>         sample.append([float(i),qi*(1.0+b*di*float(i))**(-1.0/b)])

You don't need the loop:

    sample = np.zeros((60, 2), dtype=float)
    sample[:, 0] = np.arange(60) + 1
    sample[:, 1] = qi*(1. + b*di*sample[:, 0])**(-1.0/b)

Given that you 'unpack' sample at a later stage anyway, you could as well
use two different variables instead of sample.

Cheers,
Alok

-- 
Alok Singhal * *
Graduate Student, dept. of Astronomy * * *
University of Virginia * *
http://www.astro.virginia.edu/~as8ca/

From emin.shopper at gmail.com  Wed Feb 28 16:40:32 2007
From: emin.shopper at gmail.com (Emin.shopper Martinian.shopper)
Date: Wed, 28 Feb 2007 16:40:32 -0500
Subject: [SciPy-user] scipy and cvxopt
Message-ID: <32e43bb70702281340r39d69ea5t2e1287ed340eee9b@mail.gmail.com>

Dear Experts,

I need to solve some quadratic programs (and potentially other nonlinear
programs). While scipy.optimize.fmin_cobyla seems like it can do this, it
seems orders of magnitude slower than cvxopt. Are there plans to
merge/include cvxopt in scipy or otherwise improve scipy's
quadratic/nonlinear constrained optimization routines? I saw an old
response to this question but the response pointed to a non-existent URL.
Thanks, -Emin -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Wed Feb 28 17:31:39 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 28 Feb 2007 15:31:39 -0700 Subject: [SciPy-user] NumPy in Teaching In-Reply-To: <45E4FFA6.9010408@shrogers.com> References: <45E4FFA6.9010408@shrogers.com> Message-ID: Steven H. Rogers wrote: > I'm doing an informal survey on the use of Array Programming Languages > for teaching. If you're using NumPy in this manner I'd like to hear > from you. What subject was/is taught, academic level, results, lessons > learned, etc. > I've used NumPy in a Signals and Systems class and in a Probability Theory class. These were both Junior/Senior level undergraduate classes. Students were given the option to use Python or MATLAB. Most chose MATLAB because it was installed on the computers they had access to. It is also the language used when other teachers teach the course. NumPy/SciPy was a complete replacement for MATLAB however. I did all of the labs using NumPy/SciPy and they worked fine. -Travis From oliphant at ee.byu.edu Wed Feb 28 17:54:05 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Feb 2007 15:54:05 -0700 Subject: [SciPy-user] scipy and cvxopt In-Reply-To: <32e43bb70702281340r39d69ea5t2e1287ed340eee9b@mail.gmail.com> References: <32e43bb70702281340r39d69ea5t2e1287ed340eee9b@mail.gmail.com> Message-ID: <45E6080D.8090103@ee.byu.edu> Emin.shopper Martinian.shopper wrote: > Dear Experts, > > I need to solve some quadratic programs (and potentially other > nonlinear programs). While scipy.optimize.fmin_cobyla seems like it > can do this, it seems orders of magnitude slower than cvxopt. Are > there plans to merge/include cvxopt in scipy or otherwise improve > scipy's quadratic/nonlinear constrained optimization routines? Yes, eventually. I have talked to the author of CVXOPT at NIPS 2006. 
The plan is to move NumPy's matrix object into C and move CVXOPTs implementation over to use it, possibly integrating the cvxopt algorithms into at least a scikits library (of GPL code). But, I won't have time for that until April or May. -Travis From david.warde.farley at utoronto.ca Wed Feb 28 18:00:58 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Wed, 28 Feb 2007 18:00:58 -0500 Subject: [SciPy-user] NumPy in Teaching In-Reply-To: References: <45E4FFA6.9010408@shrogers.com> Message-ID: <119A160D-FBC3-4086-A22F-60658961616F@utoronto.ca> On 28-Feb-07, at 5:31 PM, Travis Oliphant wrote: > Students were given the option to use Python or MATLAB. Most chose > MATLAB because it was installed on the computers they had access > to. It > is also the language used when other teachers teach the course. > > NumPy/SciPy was a complete replacement for MATLAB however. I did > all of > the labs using NumPy/SciPy and they worked fine. Travis, Can I ask how you (or anyone else) deals with saving a "workspace" when doing interactive numerical work in Python? I'd imagine this might be important in an educational setting, and I'm remiss to still be without an equivalent to Matlab's "save" (I understand the difficulty in serializing a Python namespace though). So what do people do? Aside from being somewhat clumsy, even cPickle seems intolerably slow at saving large matrices to disk. David P.S. Many thanks for all the work you've done making NumPy and SciPy usable. I'm currently working on porting a good bit of numerical code to Python and your documentation has been invaluable. By the way, was it you who gave the presentation at the NIPS workshops? 
(My supervisor came away quite impressed) From oliphant at ee.byu.edu Wed Feb 28 18:15:11 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Feb 2007 16:15:11 -0700 Subject: [SciPy-user] NumPy in Teaching In-Reply-To: <119A160D-FBC3-4086-A22F-60658961616F@utoronto.ca> References: <45E4FFA6.9010408@shrogers.com> <119A160D-FBC3-4086-A22F-60658961616F@utoronto.ca> Message-ID: <45E60CFF.3090103@ee.byu.edu> David Warde-Farley wrote: >On 28-Feb-07, at 5:31 PM, Travis Oliphant wrote: > > > >>Students were given the option to use Python or MATLAB. Most chose >>MATLAB because it was installed on the computers they had access >>to. It >>is also the language used when other teachers teach the course. >> >>NumPy/SciPy was a complete replacement for MATLAB however. I did >>all of >>the labs using NumPy/SciPy and they worked fine. >> >> > >Travis, > >Can I ask how you (or anyone else) deals with saving a "workspace" >when doing interactive numerical work in Python? I'd imagine this >might be important in an educational setting, and I'm remiss to still >be without an equivalent to Matlab's "save" (I understand the >difficulty in serializing a Python namespace though). > > You can save .mat files (which I've done in the past) by passing in a list of the names to save to the file to scipy's scipy.io.savemat function. I've also used scipy's scipy.io.save command with success in the past. The problem with pickling is the copying that occurs to a string before pickling can occur. I think something a little more specialized to NumPy would be possible using the .tofile() method. There has been some nice work on SciPy's io functionality lately which includes using memory-mapping techniques. It has not been completely documented yet, however. 
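As a concrete illustration of the "save a workspace of named arrays" idea discussed above, here is a minimal sketch using plain NumPy's `savez`/`load` (an alternative to the `scipy.io.savemat` route Travis describes; the file and variable names are made up for the example):

```python
import os
import tempfile

import numpy as np

# A toy "workspace": a few named arrays we want to persist.
a = np.arange(10, dtype=float)
b = np.linspace(0.0, 1.0, 5)

# Save both arrays into one .npz archive, keyed by name.
path = os.path.join(tempfile.mkdtemp(), "workspace.npz")
np.savez(path, a=a, b=b)

# Reload: np.load on an .npz returns a dict-like object indexed by key.
restored = np.load(path)
print(sorted(restored.files))            # ['a', 'b']
print(np.array_equal(restored["a"], a))  # True
```

This avoids pickle's copy-to-string overhead for large arrays, since each array is written to the archive in its binary form.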
-Travis From jh at physics.ucf.edu Wed Feb 28 19:32:41 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Wed, 28 Feb 2007 19:32:41 -0500 Subject: [SciPy-user] NumPy in Teaching Message-ID: <200703010032.l210WfhI005995@glup.physics.ucf.edu> Hi Steve, I have taught Astronomical Data Analysis twice at Cornell using IDL, and I will be teaching it next Fall at UCF using NumPy. Though I've been active here in the recent past, I'm actually not a regular NumPy user myself yet (I used Numeric experimentally for about 6 months in 1997), so I'm a bit nervous. There isn't the kind of documentation and how-to support for Numpy that there is for IDL, though our web site is a start in that direction. One thought I've had in making the transition easier is to put up a syntax and function concordance, similar to that available for MATLAB. I thought this existed. Maybe Perry can point me to it. Just adding a column to the MATLAB one would be fine. My syllabi (there are undergrad and grad versions) are at: Cornell courses (undergrad only): http://physics.ucf.edu/~jh/ast/ast234-2003/ http://physics.ucf.edu/~jh/ast/ast234-2004/ UCF course (4xxx is undergrad, 5xxx is grad, numbers not yet assigned): http://physics.ucf.edu/~jh/ast/dacourse/ The goal of the course is for students to go out and do research with faculty as soon as they're done, and be useful enough to be included on papers. Rather than the usual (and failing) "just do what I do" model, in which physics students learn to program badly and in FORTRAN77 from their professors, I teach programming from a CS point of view, focusing on good top-down design and bottom-up construction (indentation, documentation, sensible naming, testing, etc.). I teach error analysis by first teaching probability. Then we go into the physics of detectors and finally do an end-to-end analysis of some simple spacecraft data sets (photometry and spectroscopy), the programming of which make up most of their assignments. 
There is a project at the end, in which many in the class seem to get an epiphany for how all this stuff fits together. They write up the result in the format of an Astrophysical Journal article, and while I don't teach writing as a topic, I do demand that it is done well (and to my shock it usually is!). The first two times I taught it, it was way too much material (good students spent 15+ hours on the class weekly), so I'm ripping out about half the programming assignments for the undergrads, and giving simpler project datasets. My main lesson learned is that the old adage of "They know less than you think they know but they can do more than you think they can do" falls completely on its face here. Many of them actually do know how to program, and that ability, rather than their academic level, is really the best predictor of course success. A computer-savvy freshman will just kill a computerphobic grad student, because the rest of the class just isn't that hard. What I wasn't prepared for the first time I taught it is just how hard it is to teach debugging. These kids will stare a simple twiddle-characters bug in the face for hours and not see it. It's been twenty-five years since I was at that stage and it's hard to remember what it was like. To teach debugging, I'm emphasizing "fingers as the creators of error" (since you KNOW your brain didn't do it!), and that they should test each small bit of code before incorporating it in their function. I'm also showing them how to use a debugger, giving them a list of common bug types and how to find them, and only having them do every other step in the photometry pipeline. I'll give them the other half of the steps and that will teach them how to design to an API. The other lesson is that it is a hell of a lot of work to grade programming assignments from even four students, if you care about grading for good practice and not just whether it runs. 
I probably spent 20 hours a week on the class the second year I taught it. Since I'll have 10 students next semester, I plan on doing something here with peer evaluation. Wish me luck... I'm posting here because I'm interested in your results and any advice you or your respondents have to share. I hope other respondents will post here rather than sending private email. If we get enough people, let's start a page on the wiki (a mailing list! a conference! a movement! ok, well, let's start with a wiki page). --jh-- Prof. Joseph Harrington Department of Physics University of Central Florida From palazzol at comcast.net Wed Feb 28 21:00:10 2007 From: palazzol at comcast.net (Frank Palazzolo) Date: Wed, 28 Feb 2007 21:00:10 -0500 Subject: [SciPy-user] Robert Kern's pyCA Message-ID: <45E633AA.8070107@comcast.net> Hello, Has anyone tried to port Robert Kern's pyCA (Geometric Algebra) code to use the new NumPy? It uses a combination of Numeric and NumPy at present - as seen here: http://mail.python.org/pipermail/python-list/2000-August/050443.html I might have a go at it myself...just thought I'd find out if someone did it already. And I noticed that Robert posts here :) I wonder if there is interest in having this as part of SciPy? Thanks, Frank From steve at shrogers.com Wed Feb 28 23:14:48 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Wed, 28 Feb 2007 21:14:48 -0700 Subject: [SciPy-user] NumPy in Teaching In-Reply-To: <20070228124928.8D9D2BA05AD@phaser.physics.ucf.edu> References: <20070228124928.8D9D2BA05AD@phaser.physics.ucf.edu> Message-ID: <45E65338.9060806@shrogers.com> Hi Joe: Thanks for the comprehensive response. I'll post the results to the lists when I've compiled them. 
# Steve From robert.kern at gmail.com Wed Feb 28 23:20:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Feb 2007 22:20:17 -0600 Subject: [SciPy-user] Robert Kern's pyCA In-Reply-To: <45E633AA.8070107@comcast.net> References: <45E633AA.8070107@comcast.net> Message-ID: <45E65481.5050505@gmail.com> Frank Palazzolo wrote: > Hello, > > Has anyone tried to port Robert Kern's pyCA (Geometric Algebra) code to > use the new NumPy? I made an initial pass at it: http://www.enthought.com/~rkern/cgi-bin/hgwebdir.cgi/clifford/ That's a Mercurial repository[1], so after installing Mercurial, you can make a local branch like so: $ hg clone http://www.enthought.com/~rkern/cgi-bin/hgwebdir.cgi/clifford/ [1] http://www.selenic.com/mercurial/wiki -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From steve at shrogers.com Wed Feb 28 23:30:16 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Wed, 28 Feb 2007 21:30:16 -0700 Subject: [SciPy-user] NumPy in Teaching In-Reply-To: References: <45E4FFA6.9010408@shrogers.com> Message-ID: <45E656D8.6030408@shrogers.com> Thanks Ryan. Matlab _is_ rather pervasive in engineering, but I expect NumPy/SciPy to make inroads as the rough edges are smoothed out. Regards, Steve