From keith.hughitt at gmail.com Tue May 1 12:36:32 2012 From: keith.hughitt at gmail.com (Keith Hughitt) Date: Tue, 1 May 2012 12:36:32 -0400 Subject: [SciPy-User] building on Ubuntu Linux from github source : BLAS and LAPACK not getting picked up from environment variable In-Reply-To: References: Message-ID: Hi hari, Try: sudo apt-get build-dep python-scipy That should fetch and install all of the dependencies needed to build SciPy. Best, Keith On Fri, Apr 20, 2012 at 11:53 AM, hari jayaram wrote: > Hi > I am on 64 bit Ubuntu , python 2.6.5 , GCC 4.4.3. I want to compile > my own scipy since the ubuntu package on lucid installs a scipy which > does not have the scipy.optimize.curve_fit > > I followed the build instructions at the url below to install from the > git source . > > http://www.scipy.org/Installing_SciPy/BuildingGeneral > > I could install numpy from the git source . Then I installed lapack > and blas libraries in my home directory following the instructions at > BuildingGeneral. > I set the environment variables BLAS and LAPACK in my ~/.bashrc and > sourced them and then tried the setup.py for scipy > > sudo python setup.py install > > > I get the following errors implying that BLAS was not getting picked > up. I even copied the libblas.a and liblapack.a files to /usr/lib, > linked it to liblapack.so ...but none of these helped find lapack or > blas > > > "numpy.distutils.system_info.BlasNotFoundError: > Blas (http://www.netlib.org/blas/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [blas]) or by setting > the BLAS environment variable." > > I am wondering if the build instructions are different for the github > tree. The detailed error is given below. > Thanks for your help > > Hari > > > > > > > > > hari at hari:~/scipy$ sudo python setup.py install --prefix=/usr/local > blas_opt_info: > blas_mkl_info: > libraries mkl,vml,guide not found in ['/usr/local/lib', > '/usr/lib64', '/usr/lib'] > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > libraries ptf77blas,ptcblas,atlas not found in ['/usr/local/lib', > '/usr/lib64', '/usr/lib'] > NOT AVAILABLE > > atlas_blas_info: > libraries f77blas,cblas,atlas not found in ['/usr/local/lib', > '/usr/lib64', '/usr/lib'] > NOT AVAILABLE > > /usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py:1474: > UserWarning: > Atlas (http://math-atlas.sourceforge.net/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [atlas]) or by setting > the ATLAS environment variable. > warnings.warn(AtlasNotFoundError.__doc__) > blas_info: > libraries blas not found in ['/usr/local/lib', '/usr/lib64', '/usr/lib'] > NOT AVAILABLE > > /usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py:1483: > UserWarning: > Blas (http://www.netlib.org/blas/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [blas]) or by setting > the BLAS environment variable. > warnings.warn(BlasNotFoundError.__doc__) > blas_src_info: > NOT AVAILABLE > > /usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py:1486: > UserWarning: > Blas (http://www.netlib.org/blas/) sources not found. > Directories to search for the sources can be specified in the > numpy/distutils/site.cfg file (section [blas_src]) or by setting > the BLAS_SRC environment variable. 
> warnings.warn(BlasSrcNotFoundError.__doc__) > Traceback (most recent call last): > File "setup.py", line 208, in > setup_package() > File "setup.py", line 199, in setup_package > configuration=configuration ) > File "/usr/local/lib/python2.6/dist-packages/numpy/distutils/core.py", > line 152, in setup > config = configuration() > File "setup.py", line 136, in configuration > config.add_subpackage('scipy') > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 1002, in add_subpackage > caller_level = 2) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 971, in get_subpackage > caller_level = caller_level + 1) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 908, in _get_configuration_from_setup_py > config = setup_module.configuration(*args) > File "scipy/setup.py", line 8, in configuration > config.add_subpackage('integrate') > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 1002, in add_subpackage > caller_level = 2) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 971, in get_subpackage > caller_level = caller_level + 1) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 908, in _get_configuration_from_setup_py > config = setup_module.configuration(*args) > File "scipy/integrate/setup.py", line 10, in configuration > blas_opt = get_info('blas_opt',notfound_action=2) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py", > line 325, in get_info > return cl().get_info(notfound_action) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py", > line 484, in get_info > raise self.notfounderror(self.notfounderror.__doc__) > numpy.distutils.system_info.BlasNotFoundError: > Blas (http://www.netlib.org/blas/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [blas]) or by setting > the BLAS environment variable. > hari at hari:~/scipy$ export BLAS=/home/hari/blas/BLAS/libblas.a > hari at hari:~/scipy$ export > LAPACK=/home/hari/LAPACK/lapack-3.4.0/liblapack.a > hari at hari:~/scipy$ sudo python setup.py install --prefix=/usr/local > blas_opt_info: > blas_mkl_info: > libraries mkl,vml,guide not found in ['/usr/local/lib', > '/usr/lib64', '/usr/lib'] > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > libraries ptf77blas,ptcblas,atlas not found in ['/usr/local/lib', > '/usr/lib64', '/usr/lib'] > NOT AVAILABLE > > atlas_blas_info: > libraries f77blas,cblas,atlas not found in ['/usr/local/lib', > '/usr/lib64', '/usr/lib'] > NOT AVAILABLE > > /usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py:1474: > UserWarning: > Atlas (http://math-atlas.sourceforge.net/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [atlas]) or by setting > the ATLAS environment variable. > warnings.warn(AtlasNotFoundError.__doc__) > blas_info: > libraries blas not found in ['/usr/local/lib', '/usr/lib64', '/usr/lib'] > NOT AVAILABLE > > /usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py:1483: > UserWarning: > Blas (http://www.netlib.org/blas/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [blas]) or by setting > the BLAS environment variable. 
> warnings.warn(BlasNotFoundError.__doc__) > blas_src_info: > NOT AVAILABLE > > /usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py:1486: > UserWarning: > Blas (http://www.netlib.org/blas/) sources not found. > Directories to search for the sources can be specified in the > numpy/distutils/site.cfg file (section [blas_src]) or by setting > the BLAS_SRC environment variable. > warnings.warn(BlasSrcNotFoundError.__doc__) > Traceback (most recent call last): > File "setup.py", line 208, in > setup_package() > File "setup.py", line 199, in setup_package > configuration=configuration ) > File "/usr/local/lib/python2.6/dist-packages/numpy/distutils/core.py", > line 152, in setup > config = configuration() > File "setup.py", line 136, in configuration > config.add_subpackage('scipy') > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 1002, in add_subpackage > caller_level = 2) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 971, in get_subpackage > caller_level = caller_level + 1) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 908, in _get_configuration_from_setup_py > config = setup_module.configuration(*args) > File "scipy/setup.py", line 8, in configuration > config.add_subpackage('integrate') > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 1002, in add_subpackage > caller_level = 2) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 971, in get_subpackage > caller_level = caller_level + 1) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/misc_util.py", > line 908, in _get_configuration_from_setup_py > config = setup_module.configuration(*args) > File "scipy/integrate/setup.py", line 10, in configuration > blas_opt = get_info('blas_opt',notfound_action=2) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py", > line 325, in get_info > return cl().get_info(notfound_action) > File > "/usr/local/lib/python2.6/dist-packages/numpy/distutils/system_info.py", > line 484, in get_info > raise self.notfounderror(self.notfounderror.__doc__) > numpy.distutils.system_info.BlasNotFoundError: > Blas (http://www.netlib.org/blas/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [blas]) or by setting > the BLAS environment variable. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From camillechambon at yahoo.fr Tue May 1 13:31:20 2012 From: camillechambon at yahoo.fr (camille chambon) Date: Tue, 1 May 2012 18:31:20 +0100 (BST) Subject: [SciPy-User] Least-square fitting of a 3rd degree polynomial Message-ID: <1335893480.26559.YahooMailNeo@web171302.mail.ir2.yahoo.com> Hello, I would like to fit 'a' such as y = a * x**3 + b * x**2 + c * x + d, where x and y are measured data, and b = 0.0, c = -0.00458844157413 and d = 5.8 are fixed. According to http://www.scipy.org/Cookbook/FittingData#head-27373a786baa162a2e8a910ee0b8a48838082262, I try to use scipy.optimize.leastsq fit routine like that: #### CODE ##### from numpy import * from pylab import * from scipy import optimize # My data points x = array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0]) y = array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0]) b, c, d =? 
0.0, -0.00458844157413, 5.8 # Fixed parameters

fitfunc = lambda p, x: poly1d([p[0], b, c, d])(x) # Target function
errfunc = lambda p, x, y: fitfunc(p, x) - y # Distance to the target function

p0 = [-4.0E-09] # Initial guess for the parameters
p1, success = optimize.leastsq(errfunc, p0[:], args=(x, y))

time = linspace(x.min(), x.max(), 100)
plot(x, y, "ro", time, fitfunc(p1, time), "r-") # Plot of the data and the fit

title("a fitting")
xlabel("time [day]")
ylabel("number []")
legend(('x position', 'x fit', 'y position', 'y fit'))

show()

################################

But the fit does not work (as one can see on the attached image).

Does someone have any idea of what I'm doing wrong?

Thanks in advance for your help.

Cheers,

Camille
-------------- next part --------------
A non-text attachment was scrubbed...
Name: a_fit.png
Type: image/png
Size: 23663 bytes
Desc: not available

From josef.pktd at gmail.com  Tue May  1 13:44:01 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 1 May 2012 13:44:01 -0400
Subject: [SciPy-User] Least-square fitting of a 3rd degree polynomial
In-Reply-To: <1335893480.26559.YahooMailNeo@web171302.mail.ir2.yahoo.com>
References: <1335893480.26559.YahooMailNeo@web171302.mail.ir2.yahoo.com>
Message-ID:

On Tue, May 1, 2012 at 1:31 PM, camille chambon wrote:
> [...]

my guess would be that the scale is awful, the x are very large

if you only need to estimate "a", then it's just a very simple linear
regression problem, dot(x,y)/dot(x,x) or something like this.

as an aside, numpy.polynomial works very well for fitting a polynomial,
but doesn't allow for missing terms

Josef
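Concretely, a minimal sketch of the one-coefficient least-squares estimate being described here (assuming the fixed b, c and d are subtracted out first, so that only a*x**3 remains to be fit):

import numpy as np

x = np.array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0])
y = np.array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0])
b, c, d = 0.0, -0.00458844157413, 5.8

# remove the fixed part, then regress the remainder on x**3 (no intercept)
y2 = y - d - c * x - b * x**2
x3 = x ** 3
a = np.dot(x3, y2) / np.dot(x3, x3)   # closed-form least squares for a single coefficient
print(a)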
From charlesr.harris at gmail.com  Tue May  1 13:45:55 2012
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 1 May 2012 11:45:55 -0600
Subject: [SciPy-User] Least-square fitting of a 3rd degree polynomial
In-Reply-To: <1335893480.26559.YahooMailNeo@web171302.mail.ir2.yahoo.com>
References: <1335893480.26559.YahooMailNeo@web171302.mail.ir2.yahoo.com>
Message-ID:

On Tue, May 1, 2012 at 11:31 AM, camille chambon wrote:
> [...]

You can use numpy for this.

In [1]: from numpy.polynomial import Polynomial as P

In [2]: x = array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0])

In [3]: y = array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0])

In [4]: p = P.fit(x,y,3)

In [5]: plot(*p.linspace())
Out[5]: []

In [6]: plot(x, y, '.')
Out[6]: []

In [7]: p.mapparms()
Out[7]: (-3.6882793017456361, 0.0024937655860349127)

Note two things: the coefficients go from degree zero upwards, and you need
to make the substitution x <- -3.6882793017456361 + 0.0024937655860349127*x
if you want to use them in a publication.

Chuck
-------------- next part --------------
A non-text attachment was scrubbed...
Name: example.png
Type: image/png
Size: 19989 bytes
Desc: not available
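To get the coefficients directly in unscaled powers of x instead of doing that substitution by hand, the convenience class can expand the fit back itself. A small sketch with the same data (convert() is the documented way to undo the internal domain scaling):

import numpy as np
from numpy.polynomial import Polynomial as P

x = np.array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0])
y = np.array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0])

p = P.fit(x, y, 3)      # fitted on a scaled domain for numerical stability
q = p.convert()         # re-express in plain powers of the unscaled x
print(q.coef)           # [c0, c1, c2, c3] such that y ~ c0 + c1*x + c2*x**2 + c3*x**3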
From charlesr.harris at gmail.com  Tue May  1 13:47:23 2012
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 1 May 2012 11:47:23 -0600
Subject: [SciPy-User] Least-square fitting of a 3rd degree polynomial
References: <1335893480.26559.YahooMailNeo@web171302.mail.ir2.yahoo.com>
Message-ID:

On Tue, May 1, 2012 at 11:44 AM, wrote:
> On Tue, May 1, 2012 at 1:31 PM, camille chambon wrote:
>> [...]
>
> my guess would be that the scale is awful, the x are very large
>
> if you only need to estimate "a", then it's just a very simple
> linear regression problem, dot(x,y)/dot(x,x) or something like this.
>
> as an aside, numpy.polynomial works very well for fitting a polynomial,
> but doesn't allow for missing terms

It does recognize NA in devel, but we will see where that goes ;)

Chuck

From harijay at gmail.com  Tue May  1 14:56:13 2012
From: harijay at gmail.com (hari jayaram)
Date: Tue, 1 May 2012 14:56:13 -0400
Subject: [SciPy-User] building on Ubuntu Linux from github source : BLAS and LAPACK not getting picked up from environment variable
Message-ID:

Thanks Keith,

My problem was that I had installed numpy from the git repository before I
had installed BLAS. Once I uninstalled numpy and then installed LAPACK and
BLAS, I could install numpy and scipy using "sudo python setup.py install".

Thanks
Hari

On Tue, May 1, 2012 at 12:36 PM, Keith Hughitt wrote:
> Hi hari,
>
> Try:
>
> sudo apt-get build-dep python-scipy
>
> That should fetch and install all of the dependencies needed to build SciPy.
>
> Best,
> Keith
>
> [...]
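For reference, the function Hari was after in this thread (a small sketch of scipy.optimize.curve_fit; the data and the one-free-parameter cubic are borrowed from the fitting thread above, purely as an illustration):

import numpy as np
from scipy.optimize import curve_fit

x = np.array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0])
y = np.array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0])
b, c, d = 0.0, -0.00458844157413, 5.8

def f(x, a):
    # only 'a' is free; b, c and d are held fixed via the closure
    return a * x**3 + b * x**2 + c * x + d

popt, pcov = curve_fit(f, x, y, p0=[-4.0e-9])
print(popt[0])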
From ralf.gommers at googlemail.com  Tue May  1 15:53:59 2012
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 1 May 2012 21:53:59 +0200
Subject: [SciPy-User] scipy sprint @ EuroSciPy '12?
Message-ID:

Hi all,

Would people be interested in having a scipy sprint at EuroSciPy this year?
Last year we tried to do a last-minute mini-sprint and ended up hunting for
wifi access for half the time, so it would be good to organize things better
this time around. I'm thinking a one-day sprint on Wed Aug 22nd would be good.

If there's interest, I'm happy to organize (contact the conference organizers
for a room, create a wiki page, etc.).

Ralf

From camillechambon at yahoo.fr  Wed May  2 03:35:16 2012
From: camillechambon at yahoo.fr (camille chambon)
Date: Wed, 2 May 2012 08:35:16 +0100 (BST)
Subject: [SciPy-User] Re : Least-square fitting of a 3rd degree polynomial
Message-ID: <1335944116.84181.YahooMailNeo@web171304.mail.ir2.yahoo.com>

Thanks for your answer.

I tried to reduce my problem to a linear regression problem:

x = array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0])
y = array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0])
b, c, d = 0.0, -0.00458844157413, 5.8
z = y - b * x**2 - c * x
p = numpy.polyfit(x, z, 3)
time = linspace(x.min(), x.max(), 100)
plot(x, y, "ro", time, numpy.poly1d([p[0], b, c, d])(time), "r-")

but it doesn't work (see attached image).

Where am I wrong?
Cheers,

Camille

________________________________
From: "josef.pktd at gmail.com"
To: camille chambon; SciPy Users List
Sent: Tuesday, 1 May 2012, 19:44
Subject: Re: [SciPy-User] Least-square fitting of a 3rd degree polynomial

On Tue, May 1, 2012 at 1:31 PM, camille chambon wrote:
> [...]

my guess would be that the scale is awful, the x are very large

if you only need to estimate "a", then it's just a very simple linear
regression problem, dot(x,y)/dot(x,x) or something like this.

as an aside, numpy.polynomial works very well for fitting a polynomial,
but doesn't allow for missing terms

Josef
-------------- next part --------------
A non-text attachment was scrubbed...
Name: a_fit.png
Type: image/png
Size: 28215 bytes
Desc: not available

From camillechambon at yahoo.fr  Wed May  2 03:39:12 2012
From: camillechambon at yahoo.fr (camille chambon)
Date: Wed, 2 May 2012 08:39:12 +0100 (BST)
Subject: [SciPy-User] Re : Least-square fitting of a 3rd degree polynomial
Message-ID: <1335944352.68951.YahooMailNeo@web171306.mail.ir2.yahoo.com>

Thanks for your answer.

But I would like to fit 'a' while 'b', 'c' and 'd' are fixed. Your method
gives me another b, c and d. Is there any way to fit a polynomial while
fixing some coefficients?

Cheers,

Camille

________________________________
From: Charles R Harris
To: camille chambon; SciPy Users List
Sent: Tuesday, 1 May 2012, 19:45
Subject: Re: [SciPy-User] Least-square fitting of a 3rd degree polynomial

On Tue, May 1, 2012 at 11:31 AM, camille chambon wrote:
[...]
From camillechambon at yahoo.fr  Wed May  2 03:51:21 2012
From: camillechambon at yahoo.fr (camille chambon)
Date: Wed, 2 May 2012 08:51:21 +0100 (BST)
Subject: [SciPy-User] Re : Least-square fitting of a 3rd degree polynomial
Message-ID: <1335945081.81241.YahooMailNeo@web171301.mail.ir2.yahoo.com>

I tried to set a polynomial with a NA coefficient:

from numpy.polynomial import Polynomial as P
import numpy
x = numpy.array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0])
y = numpy.array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0])
b, c, d = 0.0, -0.00458844157413, 5.8
my_poly = P([d, c, b, numpy.nan], [x.min(), x.max()])
print my_poly
# prints "poly([  5.80000000e+00  -4.58844157e-03   0.00000000e+00   nan], [ 1078.  1880.])"
p = my_poly.fit(x,y,3)
print p
# prints "poly([ 4.20020036 -2.66837734 -1.33882427 -0.20317739], [ 1078.  1880.])"

but new coefficients are calculated. Does that mean it doesn't work?

Cheers,

Camille

________________________________
From: Charles R Harris
To: SciPy Users List
Sent: Tuesday, 1 May 2012, 19:47
Subject: Re: [SciPy-User] Least-square fitting of a 3rd degree polynomial

[...]
From josef.pktd at gmail.com  Wed May  2 06:45:43 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 2 May 2012 06:45:43 -0400
Subject: [SciPy-User] Re : Least-square fitting of a 3rd degree polynomial
In-Reply-To: <1335944116.84181.YahooMailNeo@web171304.mail.ir2.yahoo.com>
References: <1335893480.26559.YahooMailNeo@web171302.mail.ir2.yahoo.com>
	<1335944116.84181.YahooMailNeo@web171304.mail.ir2.yahoo.com>
Message-ID:

On Wed, May 2, 2012 at 3:35 AM, camille chambon wrote:
> Thanks for your answer.
>
> I tried to reduce my problem to a linear regression problem:
>
> x = array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0])
> y = array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0])
> b, c, d = 0.0, -0.00458844157413, 5.8
> z = y - b * x**2 - c * x
> p = numpy.polyfit(x, z, 3)
> time = linspace(x.min(), x.max(), 100)
> plot(x, y, "ro", time, numpy.poly1d([p[0], b, c, d])(time), "r-")
>
> but it doesn't work (see attached image).
>
> Where am I wrong?

Are you sure you got your fixed values b,c,d right?

y2 = y - d - c * x - b * x**2

>>> y2
array([ 4.94634002,  4.92528924,  5.16165003,  5.38019998,  4.33978325,
        2.82627016])

It doesn't look like you can fit y2 as a function of x**3 and get a good
result. I get a similar plot to your original leastsq solution.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0])
y = np.array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0])
b, c, d = 0.0, -0.00458844157413, 5.8

y2 = y - d - c * x - b * x**2
x2 = x**3
a = np.dot(x2, y2) / np.dot(x2, x2)

# check with statsmodels
import statsmodels.api as sm
a2 = sm.OLS(y2, x2).fit().params
print a, a2[0]

xx = np.linspace(x.min(), x.max())
yhat = d + c * xx + b * xx**2 + a * xx**3
plt.plot(x, y, 'o')
plt.plot(xx, yhat, '-')

plt.figure()
plt.plot(x, y2, 'o')
plt.show()

Josef

> [...]
Cheers, Camille ________________________________ De?: "josef.pktd at gmail.com" ??: SciPy Users List Envoy? le : Mercredi 2 mai 2012 12h45 Objet?: Re: [SciPy-User] Re : Least-square fitting of a 3rd degree polynomial On Wed, May 2, 2012 at 3:35 AM, camille chambon wrote: > Thanks for your answer. > > I tried to reduce my problem to a linear regression problem: > > x = array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0]) > y = array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0]) > b, c, d =? 0.0, -0.00458844157413, 5.8 > z = y - b * x**2 - c * x > p = numpy.polyfit(x, z, 3) > time = linspace(x.min(), x.max(), 100) > plot(x, y, "ro", time, numpy.poly1d([p[0], b, c, d])(time), "r-") > > but it doesn't work (see attached image). > > Where am I wrong? Are you sure you got your fixed values b,c,d right? y2 = y - d - c * x - b * x**2 >>> y2 array([ 4.94634002,? 4.92528924,? 5.16165003,? 5.38019998,? 4.33978325, ? ? ? ? 2.82627016]) It doesn't look like you can fit y2? as a function of x**3 and get a good result. I get a similar plot to your original leastsq solution. import numpy as np import matplotlib.pyplot as plt x = np.array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0]) y = np.array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0]) b, c, d =? 0.0, -0.00458844157413, 5.8 y2 = y - d - c * x - b * x**2 x2 = x**3 a = np.dot(x2, y2) / np.dot(x2, x2) #check with statsmodels import statsmodels.api as sm a2 = sm.OLS(y2, x2).fit().params print a, a2[0] xx = np.linspace(x.min(), x.max()) yhat = d + c * xx + b * xx**2 + a * xx**3 plt.plot(x, y, 'o') plt.plot(xx, yhat, '-') plt.figure() plt.plot(x, y2, 'o') plt.show() Josef > > Cheers, > > Camille > > ________________________________ > De?: "josef.pktd at gmail.com" > ??: camille chambon ; SciPy Users List > > Envoy? le : Mardi 1 mai 2012 19h44 > Objet?: Re: [SciPy-User] Least-square fitting of a 3rd degree polynomial > > On Tue, May 1, 2012 at 1:31 PM, camille chambon > wrote: >> Hello, >> >> I would like to fit 'a' such as y = a * x**3 + b * x**2 + c * x + d, >> where x and y are measured data, and b = 0.0, c = -0.00458844157413 and d >> = >> 5.8 are fixed. >> >> According to >> >> http://www.scipy.org/Cookbook/FittingData#head-27373a786baa162a2e8a910ee0b8a48838082262, >> I try to use scipy.optimize.leastsq fit routine like that: >> >> #### CODE ##### >> >> from numpy import * >> from pylab import * >> from scipy import optimize >> >> # My data points >> x = array([1078.0, 1117.0, 1212.1, 1368.7, 1686.8, 1880.0]) >> y = array([5.8, 5.6, 5.4, 4.9, 2.4, 0.0]) >> >> b, c, d =? 0.0, -0.00458844157413, 5.8 # Fixed parameters >> >> fitfunc = lambda p, x: poly1d([p[0], b, c, d])(x) # Target function >> errfunc = lambda p, x, y: fitfunc(p, x) - y # Distance to the target >> function >> >> p0 = [-4.0E-09] # Initial guess for the parameters >> p1, success = optimize.leastsq(errfunc, p0[:], args=(x, y)) >> >> time = linspace(x.min(), x.max(), 100) >> plot(x, y, "ro", time, fitfunc(p1, time), "r-") # Plot of the data and the >> fit >> >> title("a fitting") >> xlabel("time [day]") >> ylabel("number []") >> legend(('x position', 'x fit', 'y position', 'y fit')) >> >> show() >> >> ################################ >> >> But the fit does not work (as one can see on the attached image). >> >> Does someone have any idea of what I'm doing wrong? >> >> Thanks in advance for your help. > > my guess would be that the scale is awful, the x are very large > > if you only need the to estimate "a", then it's just a very simple > linear regression problem dot(x,x)/dot(x,y) or something like this. 
> > as aside numpy.polynomial works very well for fitting a polynomial, > but doesn't allow for missing terms > > Josef > >> >> Cheers, >> >> Camille >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From otrov at hush.ai Wed May 2 16:45:20 2012 From: otrov at hush.ai (Kliment) Date: Wed, 02 May 2012 22:45:20 +0200 Subject: [SciPy-User] ... numpy/linalg/lapack_lite.so: undefined symbol: zungqr_ Message-ID: <20120502204520.C52B310E2D8@smtp.hushmail.com> Hi, I compiled lapack, atlas, umfpack, fftw in local folder, in similar way as described here: http://www.scipy.org/Installing_SciPy/Linux on 32bit Ubuntu Precise In ~/.local/lib I have: ======================================== libamd.2.2.3.a libamd.a -> libamd.2.2.3.a libatlas.a libcblas.a libf77blas.a libfftw3.a libfftw3.la* liblapack.a librefblas.a libsatlas.so* libtmglib.a libumfpack.5.5.2.a libumfpack.a -> libumfpack.5.5.2.a ======================================== In ~/.local/include I have: ======================================== amd.h atlas/ cblas.h clapack.h fftw3.f fftw3.f03 fftw3.h fftw3l.f03 fftw3q.f03 UFconfig.h umfpack.h ======================================== My site.cfg looks like this: ======================================== [DEFAULT] library_dirs = $HOME/.local/lib include_dirs = $HOME/.local/include [atlas] atlas_libs = lapack, f77blas, cblas, atlas [amd] amd_libs = amd [umfpack] umfpack_libs = umfpack, gfortran [fftw] libraries = fftw3 ======================================== I extracted numpy and run: python setup.py build --fcompiler=gnu95 python setup.py install --prefix=$HOME/.local I then run python interpreter and try to import numpy, when I receive import error: ImportError: /home/vlad/.local/lib/python2.7/site- packages/numpy/linalg/lapack_lite.so: undefined symbol: zungqr_ What did I do wrong? From cournape at gmail.com Thu May 3 13:11:56 2012 From: cournape at gmail.com (David Cournapeau) Date: Thu, 3 May 2012 18:11:56 +0100 Subject: [SciPy-User] ... 
From cournape at gmail.com  Thu May  3 13:11:56 2012
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 3 May 2012 18:11:56 +0100
Subject: [SciPy-User] ... numpy/linalg/lapack_lite.so: undefined symbol: zungqr_
In-Reply-To: <20120502204520.C52B310E2D8@smtp.hushmail.com>
References: <20120502204520.C52B310E2D8@smtp.hushmail.com>
Message-ID:

On Wed, May 2, 2012 at 9:45 PM, Kliment wrote:
> [...]
>
> ImportError: /home/vlad/.local/lib/python2.7/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: zungqr_

Most likely you did not build atlas and/or lapack correctly.

Unless you are familiar with debugging those kinds of issues, I strongly
recommend you to build numpy against the atlas given by ubuntu (something
like apt-get install libatlas-base-dev should do the trick).

David

From otrov at hush.ai  Thu May  3 13:36:05 2012
From: otrov at hush.ai (Kliment)
Date: Thu, 3 May 2012 17:36:05 +0000 (UTC)
Subject: [SciPy-User] ... numpy/linalg/lapack_lite.so: undefined symbol: zungqr_
References: <20120502204520.C52B310E2D8@smtp.hushmail.com>
Message-ID:

David Cournapeau <cournape <at> gmail.com> writes:
> Most likely you did not build atlas and/or lapack correctly.
>
> Unless you are familiar with debugging those kinds of issues, I strongly
> recommend you to build numpy against the atlas given by ubuntu (something
> like apt-get install libatlas-base-dev should do the trick).
>
> David

That much I do know myself, obviously.

Thanks for trying to communicate, though.
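For anyone debugging a similar linking problem, a quick sanity check is numpy's own build-configuration report (a small sketch; show_config() prints the BLAS/LAPACK sections that were detected at build time):

import numpy

# show which BLAS/LAPACK libraries numpy was configured with at build time
numpy.show_config()

# a tiny LAPACK-backed call; if the linking is broken this is where
# "undefined symbol" errors typically surface
print(numpy.linalg.eigvals(numpy.eye(3)))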
From zfyuan at mail.ustc.edu.cn  Fri May  4 09:03:21 2012
From: zfyuan at mail.ustc.edu.cn (袁振飞)
Date: Fri, 04 May 2012 21:03:21 +0800
Subject: [SciPy-User] Is there any quantile function in scipy.stats module?
Message-ID: <4FA3D399.4030605@mail.ustc.edu.cn>

Hi, all.

My work involves some probability calculations, which need the quantile
function of distributions like the standard normal and t. However, I looked
into scipy.stats.t and there are only pdf and cdf functions; I didn't see a
quantile function. What can I do about this problem?

I want your help, thanks a lot!

Best wishes :)

--
Jeffrey

From josef.pktd at gmail.com  Fri May  4 09:09:41 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 4 May 2012 09:09:41 -0400
Subject: [SciPy-User] Is there any quantile function in scipy.stats module?
In-Reply-To: <4FA3D399.4030605@mail.ustc.edu.cn>
References: <4FA3D399.4030605@mail.ustc.edu.cn>
Message-ID:

On Fri, May 4, 2012 at 9:03 AM, 袁振飞 wrote:
> [...]

stats.norm.ppf
stats.t.ppf

give the quantiles for specific values (which can be a list or array).

.isf is the quantile for the survival function, and is for some
distributions more precise for upper tails than using ppf.

Josef
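For example (a short sketch; ppf, the "percent point function", is scipy's name for the quantile function, i.e. the inverse of the cdf):

from scipy import stats

# ppf is the quantile function, the inverse cdf
print(stats.norm.ppf(0.975))      # ~1.959964, the 97.5% point of N(0, 1)
print(stats.t.ppf(0.975, 10))     # ~2.228139, with 10 degrees of freedom

# isf is the inverse survival function: isf(q) == ppf(1 - q),
# evaluated in a way that is often more accurate far out in the tail
print(stats.norm.isf(0.025))      # same point as above
print(stats.norm.isf(1e-300))     # a deep upper-tail quantile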
>>
>> My work involves some probability calculations, which need the quantile
>> function of distributions like the standard normal and t. However, when I
>> looked into scipy.stats.t, there are only pdf and cdf functions, and I
>> didn't see a quantile function. What can I do about this problem?
> stats.norm.ppf
> stats.t.ppf
> give the quantiles for specific values (the argument can be a list or array)
>
> .isf is the quantile of the survival function, and for some
> distributions it is more precise for upper tails than using ppf.
>
> Josef
>
>> I'd appreciate your help, thanks a lot!
>>
>> Best wishes: )
>>
>> --
>> Jeffrey
>>
>>
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

I see. Thanks a lot.

--
Jeffrey

From zfyuan at mail.ustc.edu.cn Fri May 4 23:04:09 2012
From: zfyuan at mail.ustc.edu.cn (Jeffrey)
Date: Sat, 05 May 2012 11:04:09 +0800
Subject: [SciPy-User] scipy.integrate with a function vector
Message-ID: <4FA498A9.5050406@mail.ustc.edu.cn>

Hi, all.

I have run into an integration problem. Below is a brief sample:

>>> e1=numpy.arange(10)
>>> d1=e1/10
>>> def e2(w):
>>>     return w*e1
>>> def d2(w):
>>>     return w*d1
>>>
>>> scipy.integrate.quad(lambda x: e2(x)-d2(x), 0, 1)

Since e1 and d1 are arrays of length 10, e2(w) and d2(w) will also each
return an array of length 10. However, scipy.integrate.quad only supports
a scalar function. Is there any integrating function in scipy which could
solve this problem? Or is my code written wrongly? Any alternative way to
solve this problem?

Thanks a lot.

--
Jeffrey

From otrov at hush.ai Sat May 5 01:44:13 2012
From: otrov at hush.ai (Kliment)
Date: Sat, 5 May 2012 05:44:13 +0000 (UTC)
Subject: [SciPy-User] ...numpy/linalg/lapack_lite.so: undefined symbol: zungqr_
References: <20120502204520.C52B310E2D8@smtp.hushmail.com>
Message-ID:

In case anyone gets here from Google: I managed to build it successfully
by following this recipe:

http://mbudisic.wordpress.com/2010/08/12/installing-atlas-with-full-lapack-on-64-bit-linux/

and additionally using the latest stable ATLAS (instead of the latest dev
version) paired with LAPACK 3.3.1 (instead of the latest 3.4.1), as
apparently not every LAPACK can be built with every ATLAS, but pairing
releases of roughly the same date should go fine.

The method described in the above post differs from the one provided on
the Scipy page in the way LAPACK is set up: it is not just linked into the
ATLAS build process from the downloaded archive. Looking at it now, it is
evident from my first post that the LAPACK library was a stripped-down
version, which is what I got by following the instructions on the Scipy
page.

Hope this helps someone

From ralf.gommers at googlemail.com Sat May 5 14:15:47 2012
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sat, 5 May 2012 20:15:47 +0200
Subject: [SciPy-User] ANN: NumPy 1.6.2 release candidate 1
Message-ID:

Hi,

I'm pleased to announce the availability of the first release candidate
of NumPy 1.6.2. This is a maintenance release. Due to the delay of the
NumPy 1.7.0, this release contains far more fixes than a regular NumPy
bugfix release. It also includes a number of documentation and build
improvements.
Sources and binary installers can be found at
https://sourceforge.net/projects/numpy/files/NumPy/1.6.2rc1/

Please test this release and report any issues on the numpy-discussion
mailing list.

Cheers,
Ralf


``numpy.core`` issues fixed
---------------------------

#2063 make unique() return consistent index
#1138 allow creating arrays from empty buffers or empty slices
#1446 correct note about correspondence vstack and concatenate
#1149 make argmin() work for datetime
#1672 fix allclose() to work for scalar inf
#1747 make np.median() work for 0-D arrays
#1776 make complex division by zero to yield inf properly
#1675 add scalar support for the format() function
#1905 explicitly check for NaNs in allclose()
#1952 allow floating ddof in std() and var()
#1948 fix regression for indexing chararrays with empty list
#2017 fix type hashing
#2046 deleting array attributes causes segfault
#2033 a**2.0 has incorrect type
#2045 make attribute/iterator_element deletions not segfault
#2021 fix segfault in searchsorted()
#2073 fix float16 __array_interface__ bug


``numpy.lib`` issues fixed
--------------------------

#2048 break reference cycle in NpzFile
#1573 savetxt() now handles complex arrays
#1387 allow bincount() to accept empty arrays
#1899 fixed histogramdd() bug with empty inputs
#1793 fix failing npyio test under py3k
#1936 fix extra nesting for subarray dtypes
#1848 make tril/triu return the same dtype as the original array
#1918 use Py_TYPE to access ob_type, so it works also on Py3


``numpy.f2py`` changes
----------------------

ENH: Introduce new options extra_f77_compiler_args and extra_f90_compiler_args
BLD: Improve reporting of fcompiler value
BUG: Fix f2py test_kind.py test


``numpy.poly`` changes
----------------------

ENH: Add some tests for polynomial printing
ENH: Add companion matrix functions
DOC: Rearrange the polynomial documents
BUG: Fix up links to classes
DOC: Add version added to some of the polynomial package modules
DOC: Document xxxfit functions in the polynomial package modules
BUG: The polynomial convenience classes let different types interact
DOC: Document the use of the polynomial convenience classes
DOC: Improve numpy reference documentation of polynomial classes
ENH: Improve the computation of polynomials from roots
STY: Code cleanup in polynomial [*]fromroots functions
DOC: Remove references to cast and NA, which were added in 1.7


``numpy.distutils`` issues fixed
--------------------------------

#1261 change compile flag on AIX from -O5 to -O3
#1377 update HP compiler flags
#1383 provide better support for C++ code on HPUX
#1857 fix build for py3k + pip
BLD: raise a clearer warning in case of building without cleaning up first
BLD: follow build_ext coding convention in build_clib
BLD: fix up detection of Intel CPU on OS X in system_info.py
BLD: add support for the new X11 directory structure on Ubuntu & co.
BLD: add ufsparse to the libraries search path.
BLD: add 'pgfortran' as a valid compiler in the Portland Group
BLD: update version match regexp for IBM AIX Fortran compilers.

From josef.pktd at gmail.com Sat May 5 16:20:27 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 5 May 2012 16:20:27 -0400
Subject: [SciPy-User] scipy.stats.distributions.erlang - "boring" consensus building in a ticket
Message-ID:

Should we restrict the shape parameter to be an integer instead of a float?
http://projects.scipy.org/scipy/ticket/1647

Let's ask the users:

Does anyone want an exception if the shape parameter is not an integer?
Is there a demand or use case for estimating the shape parameter as an
integer instead of a float?

Right now erlang and gamma are essentially the same, as far as I can see.

Josef

From josef.pktd at gmail.com Sat May 5 16:22:15 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 5 May 2012 16:22:15 -0400
Subject: [SciPy-User] scipy.stats.distributions.erlang - "boring" consensus building in a ticket
In-Reply-To: References:
Message-ID:

On Sat, May 5, 2012 at 4:20 PM, wrote:
> Should we restrict the shape parameter to be an integer instead of a float?
>
> http://projects.scipy.org/scipy/ticket/1647
>
> Let's ask the users:
>
> Does anyone want an exception if the shape parameter is not an integer?

Sorry, a correction: not an exception; if the parameter is invalid, a nan
is returned.

> Is there a demand or use case for estimating the shape parameter as an
> integer instead of a float?
>
> Right now erlang and gamma are essentially the same, as far as I can see.
>
> Josef

From vanforeest at gmail.com Sat May 5 17:59:52 2012
From: vanforeest at gmail.com (nicky van foreest)
Date: Sat, 5 May 2012 23:59:52 +0200
Subject: [SciPy-User] scipy.stats.distributions.erlang - "boring" consensus building in a ticket
In-Reply-To: References:
Message-ID:

Hi,

I just looked through the discussion in the ticket. Both sides ((1) the
scale k should be an int, (2) allow k to be a float) make sense. From my
background in queueing I am inclined to say that k should be restricted to
the integers, as in queueing theory the Erlang-k distribution is used to
model some distribution for which k can only be an int. On the other hand,
I am unsure whether a user of the Erlang distribution should be protected
from filling in a float. In all (?) books on queueing and probability it
is written that the Erlang distribution is a special case of the gamma
distribution, so users of the Erlang distribution should know this (....
kind of, hopefully).

From an implementation point of view I think that aliasing Erlang to the
gamma distribution makes good sense, and I don't believe that users of the
Erlang distribution need to be protected against filling in floats.
Perhaps a sentence in the docstring of the Erlang distribution saying that
it is an alias of the gamma distribution, and hence does not check that
the scale is an int, will prevent some potential misuse.

Nicky

On 5 May 2012 22:20, wrote:
> Should we restrict the shape parameter to be an integer instead of a float?
>
> http://projects.scipy.org/scipy/ticket/1647
>
> Let's ask the users:
>
> Does anyone want an exception if the shape parameter is not an integer?
> Is there a demand or use case for estimating the shape parameter as an
> integer instead of a float?
>
> Right now erlang and gamma are essentially the same, as far as I can see.
>
> Josef
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From cp3028 at googlemail.com Fri May 4 16:57:07 2012
From: cp3028 at googlemail.com (cp3028)
Date: Fri, 4 May 2012 13:57:07 -0700 (PDT)
Subject: [SciPy-User] Using Scipy Sparse Matrices and Weave Inline possible?
Message-ID: <71ebd09a-6bdb-4247-8659-db0e1c179fc6@n1g2000vby.googlegroups.com>

Hello Everyone,

I have to assemble a very large sparse matrix from small dense matrices.
Is there a way to use weave.inline to speed up the process?
The problem is that the entries of the small dense matrices are
distributed very irregularly across the large sparse matrix. Until now I
loop over all dense matrices and every entry and assign it to the
corresponding position in the large sparse matrix through a lookup table.

Is there a faster way?

Cheers!

From klonuo at gmail.com Sun May 6 02:39:20 2012
From: klonuo at gmail.com (klo uo)
Date: Sun, 6 May 2012 08:39:20 +0200
Subject: [SciPy-User] ndimage/morphology - binary dilation and erosion?
In-Reply-To: <3847715.zaxhO1o2BG@rabbit>
References: <3847715.zaxhO1o2BG@rabbit>
Message-ID:

Hi Luis, a somewhat belated reply, but as I remembered these messages I
thought it might reach you :)

Next I wanted to try "skeletonizing" some images, but "ndimage" doesn't
have this function AFAIK. Then I googled and was pointed to "pymorph",
which I assume you are the author of.

So I'm curious: why did you recommend "mahotas", when "pymorph" seems made
for morphological analysis?

Regards,
klo

On Mon, Apr 2, 2012 at 6:25 PM, Luis Pedro Coelho wrote:
> On Saturday, March 31, 2012 08:08:41 PM klo uo wrote:
>> I tried grey opening on a sample image with both modules. The approach
>> seems good and the result is bit-identical with both modules
>> (footprint=square(3)), and I thought to comment on differences between
>> the modules:
>>
>> - skimage requires converting the data type to 'uint8' and won't accept
>> anything less
>> - ndimage grey opening is 3 times faster on my PC
>
> Mahotas (which I wrote): http://luispedro.org/software/mahotas is closer in
> implementation to ndimage and should be as fast (as well as supporting
> multiple types).
>
> It doesn't have the open() operation, but you can dilate() & erode() yourself:
>
> def open(f, Bc, output=None):
>     output = mahotas.dilate(f, Bc, output=output)
>     return mahotas.erode(f, Bc, output=output)
>
> (Also, I think that the skimage erode() & dilate() are for flat structuring
> elements only, but that doesn't seem to be an issue for you).
>
> HTH,
> --
> Luis Pedro Coelho | Institute for Molecular Medicine | http://luispedro.org
>

From klonuo at gmail.com Sun May 6 02:47:29 2012
From: klonuo at gmail.com (klo uo)
Date: Sun, 6 May 2012 08:47:29 +0200
Subject: [SciPy-User] ndimage/morphology - binary dilation and erosion?
In-Reply-To: References: <3847715.zaxhO1o2BG@rabbit>
Message-ID:

> So I'm curious: why did you recommend "mahotas", when "pymorph" seems
> made for morphological analysis?

Just a couple of minutes later, I read this
(http://luispedro.org/software/pymorph#see-also):

"mahotas is another Python image processing package, but most of its
functions are in C++. It is, therefore, much faster for the operations
it supports."

So I assume that will be the answer ;)

From klonuo at gmail.com Sun May 6 04:49:19 2012
From: klonuo at gmail.com (klo uo)
Date: Sun, 6 May 2012 10:49:19 +0200
Subject: [SciPy-User] ndimage/morphology - binary dilation and erosion?
In-Reply-To: References: <3847715.zaxhO1o2BG@rabbit>
Message-ID:

> (http://luispedro.org/software/pymorph#see-also):
>
> "mahotas is another Python image processing package, but most of its
> functions are in C++. It is, therefore, much faster for the operations
> it supports."
>
> Skeletonization with "pymorph" is indeed very, very slow

I found a very nice example of how to do this with just "ndimage", and it
runs in the blink of an eye:

========================================
def sk(i, r):
    x = ndimage.distance_transform_edt(i)
    y = ndimage.morphological_laplace(x, (r, r))
    return y < y.min()/2
========================================

For example, in pylab mode:

In [1]: from scipy import ndimage
In [2]: im = imread('clip.png')[:,:,0]
In [3]: imshow(sk(im, 5), cmap='binary', interpolation='nearest')

From tsyu80 at gmail.com Sun May 6 08:24:53 2012
From: tsyu80 at gmail.com (Tony Yu)
Date: Sun, 6 May 2012 08:24:53 -0400
Subject: [SciPy-User] ndimage/morphology - binary dilation and erosion?
In-Reply-To: References: <3847715.zaxhO1o2BG@rabbit>
Message-ID:

On Sun, May 6, 2012 at 4:49 AM, klo uo wrote:

> > (http://luispedro.org/software/pymorph#see-also):
> >
> > "mahotas is another Python image processing package, but most of its
> > functions are in C++. It is, therefore, much faster for the operations
> > it supports."
> >
>
> Skeletonization with "pymorph" is indeed very, very slow
>
> I found a very nice example of how to do this with just "ndimage", and
> it runs in the blink of an eye:
>
> ========================================
> def sk(i, r):
>     x = ndimage.distance_transform_edt(i)
>     y = ndimage.morphological_laplace(x, (r, r))
>     return y < y.min()/2
> ========================================
>
> For example, in pylab mode:
>
> In [1]: from scipy import ndimage
> In [2]: im = imread('clip.png')[:,:,0]
> In [3]: imshow(sk(im, 5), cmap='binary', interpolation='nearest')
> _______________________________________________
>

There are also a couple of skeletonize functions in scikits-image:
skeletonize and medial_axis. But I'm not sure how the performance compares
to the other solutions you've found.

Cheers,
-Tony

From josef.pktd at gmail.com Sun May 6 10:46:38 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 6 May 2012 10:46:38 -0400
Subject: [SciPy-User] scipy.stats.distributions.erlang - "boring" consensus building in a ticket
In-Reply-To: References:
Message-ID:

On Sat, May 5, 2012 at 5:59 PM, nicky van foreest wrote:
> Hi,
>
> I just looked through the discussion in the ticket. Both sides ((1) the
> scale k should be an int, (2) allow k to be a float) make sense. From
> my background in queueing I am inclined to say that k should be
> restricted to the integers, as in queueing theory the Erlang-k
> distribution is used to model some distribution for which k can only
> be an int. On the other hand, I am unsure whether a user of the Erlang
> distribution should be protected from filling in a float. In all (?)
> books on queueing and probability it is written that the Erlang
> distribution is a special case of the gamma distribution, so users of
> the Erlang distribution should know this (.... kind of, hopefully).
>
> From an implementation point of view I think that aliasing Erlang to
> the gamma distribution makes good sense, and I don't believe that
> users of the Erlang distribution need to be protected against filling
> in floats. Perhaps a sentence in the docstring of the Erlang
> distribution saying that it is an alias of the gamma distribution, and
> hence does not check that the scale is an int, will prevent some
> potential misuse.

Thanks for the comments.

Do you think it would be useful to have fit() restrict to integers?
I guess currently nobody uses erlang.fit() because, at least in the
example, it doesn't work with default parameters.

import numpy as np
from scipy import stats

# add fitstart to erlang, otherwise it doesn't work
stats.erlang._fitstart = stats.gamma._fitstart

np.random.seed(876589)
rvs = stats.erlang.rvs(5, size=500)

for dist in [stats.erlang, stats.gamma]:
    print '\n', dist.name
    p0 = dist.fit(rvs)
    print stats.gamma.nnlf(p0, rvs), p0
    print

    for k in range(10):
        p = dist.fit(rvs, f0=k)
        print dist.nnlf(p, rvs), p

Josef

>
> Nicky
>
>
>
> On 5 May 2012 22:20, wrote:
>> Should we restrict the shape parameter to be an integer instead of a float?
>>
>> http://projects.scipy.org/scipy/ticket/1647
>>
>> Let's ask the users:
>>
>> Does anyone want an exception if the shape parameter is not an integer?
>> Is there a demand or use case for estimating the shape parameter as an
>> integer instead of a float?
>>
>> Right now erlang and gamma are essentially the same, as far as I can see.
>>
>> Josef
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From klonuo at gmail.com Sun May 6 11:31:29 2012
From: klonuo at gmail.com (klo uo)
Date: Sun, 6 May 2012 17:31:29 +0200
Subject: [SciPy-User] ndimage/morphology - binary dilation and erosion?
In-Reply-To: References: <3847715.zaxhO1o2BG@rabbit>
Message-ID:

> There are also a couple of skeletonize functions in scikits-image:
> skeletonize and medial_axis. But I'm not sure how the performance
> compares to the other solutions you've found.

Well, it can't be compared :)

"skimage" skeletonize() executes almost instantly. There must be something
wrong with "pymorph" skelm(), other than the requirement to convert the
numpy array to boolean dtype ;)

It just showed up first in Google; I should have tried harder and checked
skimage.

Thanks Tony

From david_baddeley at yahoo.com.au Sun May 6 18:03:33 2012
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Sun, 6 May 2012 15:03:33 -0700 (PDT)
Subject: [SciPy-User] scipy.stats.distributions.erlang - "boring" consensus building in a ticket
In-Reply-To: References:
Message-ID: <1336341813.68989.YahooMailNeo@web113410.mail.gq1.yahoo.com>

I don't use the Erlang distribution myself, but would be curious to know
how you would go about restricting the scale parameter to being an
integer when doing the fit. This kind of restriction sounds like it could
be useful in a number of different problems, but I can't think of any
easy way to do it (other than to use some form of brute force
optimisation over the integer parameter).

cheers,
David

________________________________
From: "josef.pktd at gmail.com"
To: SciPy Users List
Sent: Monday, 7 May 2012 2:46 AM
Subject: Re: [SciPy-User] scipy.stats.distributions.erlang - "boring" consensus building in a ticket

On Sat, May 5, 2012 at 5:59 PM, nicky van foreest wrote:
> Hi,
>
> I just looked through the discussion in the ticket. Both sides ((1) the
> scale k should be an int, (2) allow k to be a float) make sense. From
> my background in queueing I am inclined to say that k should be
> restricted to the integers, as in queueing theory the Erlang-k
> distribution is used to model some distribution for which k can only
> be an int. On the other hand, I am unsure whether a user of the Erlang
> distribution should be protected from filling in a float.
[...]

From josef.pktd at gmail.com Sun May 6 18:20:08 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 6 May 2012 18:20:08 -0400
Subject: [SciPy-User] scipy.stats.distributions.erlang - "boring" consensus building in a ticket
In-Reply-To: <1336341813.68989.YahooMailNeo@web113410.mail.gq1.yahoo.com>
References: <1336341813.68989.YahooMailNeo@web113410.mail.gq1.yahoo.com>
Message-ID:

On Sun, May 6, 2012 at 6:03 PM, David Baddeley wrote:
> I don't use the Erlang distribution myself, but would be curious to know
> how you would go about restricting the scale parameter to being an
> integer when doing the fit. This kind of restriction sounds like it
> could be useful in a number of different problems, but I can't think of
> any easy way to do it (other than to use some form of brute force
> optimisation over the integer parameter).

As in my previous example, I see two possibilities:

1) two-step optimization, with a grid search on the outside and a
continuous fit for the fixed integer parameter on the inside. This works
now that constrained fitting is working correctly.
2) run the continuous parameter optimization, given that the function is
defined on the real line/space, and then go to 1) in the integer
neighborhood of the continuous result.

(Another possibility: go for openopt or similar with mixed-integer
programming. I have no idea, never tried it.)

My guess about whether 1) or 2) is faster depends on how large the
possible range of integers to search in is, and how nice the curvature is
for using fmin_bfgs, for example.

Cheers,

Josef

>
> cheers,
> David
>
[...]
From lpc at cmu.edu Sun May 6 18:21:12 2012
From: lpc at cmu.edu (Luis Pedro Coelho)
Date: Sun, 06 May 2012 23:21:12 +0100
Subject: [SciPy-User] ndimage/morphology - binary dilation and erosion?
In-Reply-To: References:
Message-ID: <3305104.yBzU5AVMy4@rabbit>

On Sunday, May 06, 2012 05:31:29 PM klo uo wrote:
> > There are also a couple of skeletonize functions in scikits-image:
> > skeletonize and medial_axis. But I'm not sure how the performance
> > compares to the other solutions you've found.
>
> Well, it can't be compared :)
> "skimage" skeletonize() executes almost instantly. There must be
> something wrong with "pymorph" skelm(), other than the requirement to
> convert the numpy array to boolean dtype ;)

There is nothing wrong, it's just Python instead of compiled code.

I maintain both mahotas and pymorph and, increasingly, there is little
point to using pymorph. Mahotas is starting to do it all and it does it
much, much faster. Mahotas has thin(), which is a really well-tuned
implementation.

> It just showed up first in Google; I should have tried harder and
> checked skimage

If you're interested, check out the pythonvision google group for more
computer vision related questions.

https://groups.google.com/forum/#!forum/pythonvision

--
Luis Pedro Coelho | Institute for Molecular Medicine | http://luispedro.org

From vanforeest at gmail.com Mon May 7 07:13:09 2012
From: vanforeest at gmail.com (nicky van foreest)
Date: Mon, 7 May 2012 13:13:09 +0200
Subject: [SciPy-User] scipy.stats.distributions.erlang - "boring" consensus building in a ticket
In-Reply-To: References:
Message-ID:

> Do you think it would be useful to have fit() restrict to integers?
>
> I guess currently nobody uses erlang.fit() because, at least in the
> example, it doesn't work with default parameters.
>
[...]
>
>     for k in range(10):
>         p = dist.fit(rvs, f0=k)
>         print dist.nnlf(p, rvs), p

I think returning an int is to be expected here by an Erlang user. Hence,
yes.

I ran the above code, but got some warnings:

Warning: invalid value encountered in subtract

I did not pursue the origin of this though.

> Josef
>
[...]
From ralf.gommers at googlemail.com Mon May 7 12:03:23 2012
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 7 May 2012 18:03:23 +0200
Subject: [SciPy-User] scipy.sparse.csgraph merged
Message-ID:

Hi,

I finally merged https://github.com/scipy/scipy/pull/119, the sparse graph
algorithms module written by Jake Vanderplas. This will certainly be one
of the highlights of the 0.11 release (which shouldn't be too far off).

Thanks again to Jake for all the work he put in.

Ralf

From moise at aims.ac.za Tue May 8 07:53:54 2012
From: moise at aims.ac.za (Guy Moïse Dongho Nguimdo)
Date: Tue, 8 May 2012 13:53:54 +0200
Subject: [SciPy-User] scipy.integrate.dblquad with variable boundary
Message-ID:

I am trying to use dblquad to do a double integration, but one of the
boundaries of the integral is also a variable. This is the integral:

$\int^b_a \int^y_c f(x,y) dydx$

Can someone help? Thanks

--
GM DONGHO NGUIMDO
---------------------------------------------------------------------------------------------------------------------------
"When it is obvious that the goals cannot be reached, don't adjust the
goals, adjust the action steps." -Confucius

From cynthia.crooks at bp.com Tue May 8 15:25:38 2012
From: cynthia.crooks at bp.com (Crooks, Cynthia J)
Date: Tue, 8 May 2012 20:25:38 +0100
Subject: [SciPy-User] Linux compile questions for SciPy
Message-ID: <829E7BF743AC4E48932E140E9F36219003FFAB96@BP1XEUEX054-C.bp1.ad.bp.com>

I am trying to compile SciPy for our Linux systems. I have been able to
compile NumPy and ATLAS. These have been installed in user-specified
directories, /hpc/soft/NumPy and /hpc/soft/atlas, using the --prefix
option. When I try to build SciPy it seems to be picking up an older
version of numpy installed in the system directories. How do I point the
SciPy build to look for the version I have compiled?

OS: SLES SP 1
uname -a: Linux hpcp8002 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20
+0200 x86_64 x86_64 x86_64 GNU/Linux
numpy version 1.6.1  **The version I compiled
numpy version 1.3.0  **Version installed in system directories
scipy version 0.10.1
atlas version 3.9.74
python version 2.6

I have attached the screen output from the scipy build attempt. Any help
would be appreciated. Thank you.

Cindy

[attachment: scipy.build.log]
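A quick way to check which numpy a scipy build will pick up is to ask the
same python that runs setup.py (a sketch; the expected values are based on
the versions and --prefix listed above):

import numpy
print numpy.__version__   # should say 1.6.1, not the system 1.3.0
print numpy.__file__      # should point under /hpc/soft/NumPy, not /usr/lib

If the system copy shows up, the site-packages directory under
/hpc/soft/NumPy needs to come first on PYTHONPATH, as suggested in the
reply below.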
From lists at hilboll.de Wed May 9 03:18:27 2012
From: lists at hilboll.de (Andreas H.)
Date: Wed, 09 May 2012 09:18:27 +0200
Subject: [SciPy-User] Linux compile questions for SciPy
In-Reply-To: <829E7BF743AC4E48932E140E9F36219003FFAB96@BP1XEUEX054-C.bp1.ad.bp.com>
References: <829E7BF743AC4E48932E140E9F36219003FFAB96@BP1XEUEX054-C.bp1.ad.bp.com>
Message-ID: <4FAA1A43.8010105@hilboll.de>

On Tue 08 May 2012 21:25:38 CEST, Crooks, Cynthia J wrote:
> I am trying to compile SciPy for our Linux systems. I have been able
> to compile NumPy and ATLAS. These have been installed in user-specified
> directories, /hpc/soft/NumPy and /hpc/soft/atlas, using the --prefix
> option. When I try to build SciPy it seems to be picking up an older
> version of numpy installed in the system directories. How do I point
> the SciPy build to look for the version I have compiled?
>
[...]

Make sure your PYTHONPATH environment variable is set to include the
site-packages folder from /hpc/soft/NumPy/ before the system-wide
site-packages.

Andreas.

From ralf.gommers at googlemail.com Wed May 9 15:01:05 2012
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Wed, 9 May 2012 21:01:05 +0200
Subject: [SciPy-User] Using Scipy Sparse Matrices and Weave Inline possible?
In-Reply-To: <71ebd09a-6bdb-4247-8659-db0e1c179fc6@n1g2000vby.googlegroups.com>
References: <71ebd09a-6bdb-4247-8659-db0e1c179fc6@n1g2000vby.googlegroups.com>
Message-ID:

On Fri, May 4, 2012 at 10:57 PM, cp3028 wrote:

> Hello Everyone,
>
> I have to assemble a very large sparse matrix from small dense
> matrices. Is there a way to use weave.inline to speed up the process?
> The problem is that the entries of the small dense matrices are
> distributed very irregularly across the large sparse matrix. Until now
> I loop over all dense matrices and every entry and assign it to the
> corresponding position in the large sparse matrix through a lookup
> table.
>
> Is there a faster way?

I don't have a faster way, but do want to recommend not using weave for
any new code you're writing (because it's unmaintained). Better to use
cython.inline if you can.

Ralf
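For reference, a minimal sketch of what cython.inline usage looks like
(requires Cython and a working C compiler; this toy example just sums an
array, it is not the sparse assembly itself):

import numpy as np
from cython import inline

a = np.arange(10, dtype=np.float64)
# the code string is compiled on the fly; keyword arguments are
# passed in as local variables of the generated function
total = inline(
"""cdef double s = 0
cdef int i
for i in range(n):
    s += a[i]
return s""", a=a, n=len(a))
print total   # 45.0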
From sturla at molden.no Thu May 10 09:39:30 2012
From: sturla at molden.no (Sturla Molden)
Date: Thu, 10 May 2012 15:39:30 +0200
Subject: [SciPy-User] OT: Python and C syntax highlighting with LaTeX
Message-ID: <4FABC512.2000702@molden.no>

I know this is slightly OT on the scipy list, but it applies to scientific
computing nevertheless.

Do you have suggestions for a LaTeX package to do consistent syntax
highlighting of Python and C (preferably C99)? Colors are ok, but I want C
and Python to look about the same style-wise.

Regards,
Sturla

From alan.isaac at gmail.com Thu May 10 10:07:40 2012
From: alan.isaac at gmail.com (Alan G Isaac)
Date: Thu, 10 May 2012 10:07:40 -0400
Subject: [SciPy-User] OT: Python and C syntax highlighting with LaTeX
In-Reply-To: <4FABC512.2000702@molden.no>
References: <4FABC512.2000702@molden.no>
Message-ID: <4FABCBAC.7050402@gmail.com>

On 5/10/2012 9:39 AM, Sturla Molden wrote:
> a LaTeX package to do consistent syntax
> highlighting of Python and C (preferably C99)

Is the ``listings`` package not adequate?
http://get-software.net/macros/latex/contrib/listings/listings.pdf

Alan Isaac

From scott.sinclair.za at gmail.com Thu May 10 10:18:05 2012
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Thu, 10 May 2012 16:18:05 +0200
Subject: [SciPy-User] OT: Python and C syntax highlighting with LaTeX
In-Reply-To: <4FABC512.2000702@molden.no>
References: <4FABC512.2000702@molden.no>
Message-ID:

On 10 May 2012 15:39, Sturla Molden wrote:
> Do you have suggestions for a LaTeX package to do consistent syntax
> highlighting of Python and C (preferably C99)?

I've had some nice results with the minted package
(http://tug.ctan.org/pkg/minted).

Cheers,
Scott

From josef.pktd at gmail.com Thu May 10 20:04:44 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 10 May 2012 20:04:44 -0400
Subject: [SciPy-User] why do the discrete distributions not have a `fit`?
Message-ID:

Why do the discrete distributions not have a `fit` method like the
continuous distributions? Currently it's a bug in the documentation
http://projects.scipy.org/scipy/ticket/1659

In statsmodels, we fit several of the discrete distributions.

How about discrete parameters? (in analogy to the erlang discussion)

hypergeom is based on a story about marbles or balls
http://en.wikipedia.org/wiki/Hypergeometric_distribution#Application_and_example
but why should we care, it's just a discrete distribution with 3 shape
parameters, isn't it? Fractional marbles?

>>> nn = np.linspace(4.5, 8, 101)
>>> pmf = [stats.hypergeom.pmf(5, 10.8, n, 8.5) for n in nn]
>>> plt.plot(nn, pmf, '-o')
>>> plt.title("pmf of hypergeom as function of parameter n")

Doesn't look like there are any problems, and the likelihood function is
nicely concave.

Conclusion: scipy.stats doesn't have a hypergeometric distribution, but a
generalized version that is defined on a real parameter space.

Josef
(so what's the point? Sorry, I was just getting distracted while looking
for `fit`.)

[attachment: hypergeom_like.png - pmf of hypergeom as a function of the
parameter n]
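For what it's worth, a sketch of what such a discrete fit could look like,
minimizing a pmf-based negative log-likelihood with fmin (this is not an
existing scipy API, just an illustration using poisson):

import numpy as np
from scipy import stats, optimize

np.random.seed(0)
data = stats.poisson.rvs(3.5, size=500)

def nnlf(mu):
    # negative log-likelihood built from the pmf
    if mu <= 0:
        return np.inf
    return -np.log(stats.poisson.pmf(data, mu)).sum()

muhat = optimize.fmin(nnlf, data.mean())
print muhat   # should be close to data.mean() for the poisson case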
From ben.root at ou.edu Fri May 11 10:05:17 2012
From: ben.root at ou.edu (Benjamin Root)
Date: Fri, 11 May 2012 10:05:17 -0400
Subject: [SciPy-User] [Numpy-discussion] ANN: NumPy 1.6.2 release candidate 1
In-Reply-To: References:
Message-ID:

On Sat, May 5, 2012 at 2:15 PM, Ralf Gommers wrote:

> Hi,
>
> I'm pleased to announce the availability of the first release candidate
> of NumPy 1.6.2. This is a maintenance release. Due to the delay of the
> NumPy 1.7.0, this release contains far more fixes than a regular NumPy
> bugfix release. It also includes a number of documentation and build
> improvements.
>
> Sources and binary installers can be found at
> https://sourceforge.net/projects/numpy/files/NumPy/1.6.2rc1/
>
> Please test this release and report any issues on the numpy-discussion
> mailing list.
>
[...]
I just noticed that my fix for the np.gradient() function isn't listed.
https://github.com/numpy/numpy/pull/167

Not critical, but if a second rc is needed for any reason, it would be
nice to have that in there.

Thanks!
Ben Root

From cp3028 at googlemail.com Wed May 9 13:44:26 2012
From: cp3028 at googlemail.com (cp3028)
Date: Wed, 9 May 2012 10:44:26 -0700 (PDT)
Subject: [SciPy-User] How to assemble large sparse matrices effectively
Message-ID:

Hello everyone,

I am working on an FEM project using Scipy. Now my problem is that the
assembly of the sparse matrices is too slow. I compute the contribution of
every element in small dense matrices (one for each element). For the
assembly of the global matrices I loop over all small dense matrices and
set the matrix entries the following way:

...
[i,j] = someList[k][l]
Mglobal[i,j] = Mglobal[i,j] + Mlocal[k,l]
...

Mglobal is a lil_matrix of appropriate size, someList maps the indexing
variables.

Of course this is rather slow and consumes most of the matrix assembly
time.
Is there a better way to assemble a large sparse matrix from many small
dense matrices? I tried scipy.weave but it doesn't seem to work with
sparse matrices.

Yours, cm

From erik.kastman at gmail.com Thu May 10 17:15:43 2012
From: erik.kastman at gmail.com (Erik Kastman)
Date: Thu, 10 May 2012 17:15:43 -0400
Subject: [SciPy-User] Simple ndarray dim question?
Message-ID: <448F2446-BEA6-4850-A64A-830578984DD9@gmail.com>

Hi all,

Using SciPy and Matlab, I'm having trouble reconstructing an array to
match what is given from a matlab cell array loaded using
scipy.io.loadmat().

For example, say I create a cell containing a pair of double arrays in
matlab and then load it using scipy.io (I'm using SPM to do imaging
analyses in conjunction with pynifti and the like).

Matlab

>> onsets{1} = [0 30 60 90]
>> onsets{2} = [15 45 75 105]

Python

>>> import scipy.io as scio
>>> mat = scio.loadmat('onsets.mat')
>>> mat['onsets'][0]
array([[[ 0 30 60 90]], [[ 15 45 75 105]]], dtype=object)

>>> mat['onsets'][0].shape
(2,)

My question is this: **Why does this numpy array have the shape (2,)
instead of (2,1,4)**? In real life I'm trying to use Python to parse a
logfile and build these onsets cell arrays, so I'd like to be able to
build them from scratch.

When I try to build the same array from the printed output, I get a
different shape back:

>>> new_onsets = array([[[ 0, 30, 60, 90]], [[ 15, 45, 75, 105]]], dtype=object)
array([[[0, 30, 60, 90]],

       [[15, 45, 75, 105]]], dtype=object)

>>> new_onsets.shape
(2,1,4)

Unfortunately, the shape (vectors of doubles in a cell array) is coded in
a spec upstream, so I need to be able to get this saved exactly in this
format. Of course, it's not a big deal since I could just write the parser
in matlab, but it would be nice to figure out what's going on and add a
little to my [minuscule] knowledge of numpy.

Thanks in advance for any suggestions,

Erik

cross-posted to stack-overflow: http://stackoverflow.com/questions/10542263

From charlesr.harris at gmail.com Fri May 11 12:01:59 2012
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 11 May 2012 10:01:59 -0600
Subject: [SciPy-User] [Numpy-discussion] ANN: NumPy 1.6.2 release candidate 1
In-Reply-To: References:
Message-ID:

On Fri, May 11, 2012 at 8:05 AM, Benjamin Root wrote:

> On Sat, May 5, 2012 at 2:15 PM, Ralf Gommers wrote:
>
>> Hi,
>>
>> I'm pleased to announce the availability of the first release candidate
>> of NumPy 1.6.2. This is a maintenance release. Due to the delay of the
>> NumPy 1.7.0, this release contains far more fixes than a regular NumPy
>> bugfix release. It also includes a number of documentation and build
>> improvements.
>>
>> Sources and binary installers can be found at
>> https://sourceforge.net/projects/numpy/files/NumPy/1.6.2rc1/
>>
>> Please test this release and report any issues on the numpy-discussion
>> mailing list.
>>
>> Cheers,
>> Ralf
>>
[...]
>
> I just noticed that my fix for the np.gradient() function isn't listed.
> https://github.com/numpy/numpy/pull/167
>
> Not critical, but if a second rc is needed for any reason, it would be
> nice to have that in there.
>

If there is a second rc I'll put it in.

Chuck
From gregor.thalhammer at gmail.com Fri May 11 15:29:51 2012
From: gregor.thalhammer at gmail.com (Gregor Thalhammer)
Date: Fri, 11 May 2012 21:29:51 +0200
Subject: [SciPy-User] 2D phase unwrapping
Message-ID:

Hi all,

I have been searching for an implementation of phase unwrapping in 2D
(and possibly also 3D). I found this old thread on the numpy mailing list

http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038873.html

which mentions the C implementation from GERI:
http://www.ljmu.ac.uk/GERI/90202.htm

While searching I found remarks by the authors that these algorithms have
been incorporated into scipy; however, I am unable to find them in current
scipy or numpy. Am I missing something obvious?

Instead I found this wrapper: https://github.com/pointtonull/pyunwrap
This seems to be based on a wrapper already mentioned in the discussion
above, but the links mentioned there are dead. I added a setup.py, and
with some small modifications I managed to compile the extension both on
OS X and Windows.

I would like to see these algorithms included in scipy, and I am willing
to work on this. So now my questions: in the old numpy-discussion thread
licensing issues are raised; can anybody tell me more? On the GERI
homepage they distribute their code under a non-commercial-use license,
but the authors seem to agree on incorporating their code into scipy.
Apart from this, what else would be required?

Thanks for any advice
Gregor

From travis at continuum.io Fri May 11 16:02:10 2012
From: travis at continuum.io (Travis Oliphant)
Date: Fri, 11 May 2012 15:02:10 -0500
Subject: [SciPy-User] Simple ndarray dim question?
In-Reply-To: <448F2446-BEA6-4850-A64A-830578984DD9@gmail.com>
References: <448F2446-BEA6-4850-A64A-830578984DD9@gmail.com>
Message-ID: <83D62D0E-5765-4744-B30D-4A24A1DDA180@continuum.io>

On May 10, 2012, at 4:15 PM, Erik Kastman wrote:

> Hi all,
>
> Using SciPy and Matlab, I'm having trouble reconstructing an array to
> match what is given from a matlab cell array loaded using
> scipy.io.loadmat().
>
> For example, say I create a cell containing a pair of double arrays in
> matlab and then load it using scipy.io (I'm using SPM to do imaging
> analyses in conjunction with pynifti and the like).
>
> Matlab
>
>>> onsets{1} = [0 30 60 90]
>>> onsets{2} = [15 45 75 105]
>
> Python
>
>>>> import scipy.io as scio
>>>> mat = scio.loadmat('onsets.mat')
>>>> mat['onsets'][0]
> array([[[ 0 30 60 90]], [[ 15 45 75 105]]], dtype=object)
>
>>>> mat['onsets'][0].shape
>
> (2,)
>
> My question is this: **Why does this numpy array have the shape (2,)
> instead of (2,1,4)**? In real life I'm trying to use Python to parse a
> logfile and build these onsets cell arrays, so I'd like to be able to
> build them from scratch.

loadmat returns an array of *Python objects* because you have cell arrays.
Each object in the outer NumPy array is probably a (1,4) NumPy array of a
different type. To be sure, look at type(mat['onsets'][0][0]) and
mat['onsets'][0].shape if the result of the first expression is an
ndarray.
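A quick sketch of that check (assuming the onsets.mat from the original
post is in the working directory):

import scipy.io as scio

mat = scio.loadmat('onsets.mat')
cell = mat['onsets'][0]
print cell.shape       # (2,) -- an object array of two elements
print type(cell[0])    # each element should itself be an ndarray
print cell[0].shape    # expected (1, 4) for a 1x4 double cell entry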
You could build what you saw before with: new_onsets = empty((2,), dtype=object) new_onsets[0] = array([[0, 30, 60, 90]]) new_onsets[1] = array([[15, 45, 75, 105]]) -Travis From cimrman3 at ntc.zcu.cz Sat May 12 04:54:42 2012 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Sat, 12 May 2012 10:54:42 +0200 Subject: [SciPy-User] How to assemble large sparse matrices effectively In-Reply-To: References: Message-ID: <4FAE2552.2050905@ntc.zcu.cz> Hello, On 05/09/2012 07:44 PM, cp3028 wrote: > Hello everyone, > > I am working on an FEM project using Scipy. Now my problem is, that > the assembly of the sparse matrices is to slow. I compute the > contribution of every element in dense small matrices (one for each > element). For the assembly of the global matrices I loop over all > small dense matrices and set the matrice entries the following way: > ... > [i,j] = someList[k][l] > Mglobal[i,j] = Mglobal[i,j] + Mlocal[k,l] > ... > > Mglobal is a lil_matrice of appropriate size, someList maps the > indexing variables. You can try to adapt [1], which is in Cython, and works with the CSR matrices. Example usage: assemble_matrix(m.data, m.indptr, m.indices, mtx_in_els, element_indices, sign, row_connectivity, column_connectivity) - m is the global matrix in CSR format - mtx_in_els is an array of element matrices of shape (n_elements, 1, n_rows, n_columns) - element_indices are indices (of length n_elements) into the row and column element connectivity arrays. r. [1] https://github.com/sfepy/sfepy/blob/master/sfepy/fem/extmods/assemble.pyx From ralf.gommers at googlemail.com Sat May 12 05:58:17 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 12 May 2012 11:58:17 +0200 Subject: [SciPy-User] 2D phase unwrapping In-Reply-To: References: Message-ID: On Fri, May 11, 2012 at 9:29 PM, Gregor Thalhammer < gregor.thalhammer at gmail.com> wrote: > Hi all, > > I have beend searching for an implementation for phase unwrapping in 2D > (and possibly also 3D). I found this old thread in the numpy mailing list > > http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038873.html > > which mentions the C implementation from GERI: > http://www.ljmu.ac.uk/GERI/90202.htm > > While searching I found remarks by the authors that these algorithms have > been incorporated into scipy, however I am unable to find them in current > scipy or numpy. Am I missing something obvious? > Instead I found this wrapper: https://github.com/pointtonull/pyunwrap > This seems to be based on a wrapper already mentioned in the above > mentioned discussion, but the links mentioned there are dead. I added some > setup.py, and with some small modifications I managed to compile the > extension both on OS X and Windows. > > I would like to see these algorithms included in scipy, and I am willing > to work on this. Great! > So now my questions: In the old numpy-discussion thread licensing issues > are raised, can anybody tell more? On the GERI homepage they distribute > their code under a non-commercial-use license, but the authors seem to > agree on incorporating their code into scipy. > Except from this, what else would be required? > The emails about the license seem to be very clear to me, there's no issue. So all it would take is someone submitting a complete wrapper (with the normal requirements on docs/tests). The only things to decide are then: do we want it in scipy (+1 from me), where to put it and what the API should look like. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From friedrichromstedt at gmail.com Sat May 12 06:30:03 2012 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sat, 12 May 2012 12:30:03 +0200 Subject: [SciPy-User] 2D phase unwrapping In-Reply-To: References: Message-ID: <51478D12-D06D-449B-BE37-901E93C72061@gmail.com> On 11.05.2012, at 21:29, Gregor Thalhammer wrote: > Hi all, > > I have been searching for an implementation for phase unwrapping in 2D (and possibly also 3D). I found this old thread in the numpy mailing list > > http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038873.html > > which mentions the C implementation from GERI: http://www.ljmu.ac.uk/GERI/90202.htm > > While searching I found remarks by the authors that these algorithms have been incorporated into scipy, however I am unable to find them in current scipy or numpy. Am I missing something obvious? > Instead I found this wrapper: https://github.com/pointtonull/pyunwrap > This seems to be based on a wrapper already mentioned in that discussion, but the links given there are dead. I added some setup.py, and with some small modifications I managed to compile the extension both on OS X and Windows. > > I would like to see these algorithms included in scipy, and I am willing to work on this. So now my questions: In the old numpy-discussion thread licensing issues are raised, can anybody tell more? On the GERI homepage they distribute their code under a non-commercial-use license, but the authors seem to agree on incorporating their code into scipy. > Apart from this, what else would be required? > > Thanks for any advice > Gregor Hi, I worked pretty hard on an unwrapping algorithm for Fourier transform results. By this I noticed the following points: - Phases are angles, so they are just identical if their angle "value" is identical modulo 2 pi. There's nothing to tell them apart. The numerical value is just an insufficient model and we fight these insufficiencies when trying to "unwrap". They are already unwrapped. We just don't see it anymore. - For the special case of adding complex numbers, as in Fourier theory, I noticed that there is no mathematically consistent way of determining the 2 pi component of the sum of two complex numbers w.r.t. the angular numbers of the summands. It just depends on how you work it out to be calculated. This is because adding complex numbers works on real and imaginary part. For multiplication it's just fine to add the angles, as multiplication of complex numbers works on complex modulus and phase. - I suggest leaving the numerical value behind, or using it in a way consistent with what it is supposed to model. This means seeing it as a distance on the line of a circle, not on a normal cartesian axis. So if the number reaches 2 pi, it's just exactly the same point on the circle. From distances between points on the circle, phase differences can be calculated easily by using school math. Of course it will still be undetermined up to 2 pi. I think if the problem requires unwrapping still, working on the way of posing the problem and understanding its kernel better might prove more useful in the long run than trying to fight noise which makes unwrapping algorithms break down. But this is just a gut feeling. It builds on the observation that if some angular value needs to be interpreted as if on a cartesian scale, there must be something very wrong with the interpretation after all. Angles are not like that, as I mentioned in the beginning.
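In numpy terms, both points can be illustrated in a few lines (a sketch; the sample values are made up):

import numpy as np

# Difference of two angles taken on the circle, not on a cartesian
# axis: map the raw difference back into (-pi, pi].
a = np.array([0.1, 3.0, -3.0])
b = np.array([6.2, -3.1, 3.1])
circ_diff = np.angle(np.exp(1j * (a - b)))

# The 1-D unwrapping primitive that already ships with numpy: wrap a
# smooth ramp into (-pi, pi], then recover it (up to an overall
# multiple of 2*pi) -- this works whenever neighbouring samples
# differ by less than pi.
ramp = np.linspace(0.0, 6 * np.pi, 50)
wrapped = np.angle(np.exp(1j * ramp))
recovered = np.unwrap(wrapped)   # equals ramp here, steps are < pi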
This is a theoretical post and I'm not about to have any effect on what goes into scipy and what does not. :-) Friedrich From robert.kern at gmail.com Sat May 12 06:49:19 2012 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 12 May 2012 11:49:19 +0100 Subject: [SciPy-User] 2D phase unwrapping In-Reply-To: <51478D12-D06D-449B-BE37-901E93C72061@gmail.com> References: <51478D12-D06D-449B-BE37-901E93C72061@gmail.com> Message-ID: On Sat, May 12, 2012 at 11:30 AM, Friedrich Romstedt wrote: > I worked pretty hard on an unwrapping algorithm for Fourier tranform results. By this I noticed the following points: > > ? Phases are angles, so they are just identical, if their angle "value" is identical modulus 2 pi. There's nothing to tell them apart. The numerical value is just an insufficient model and we fight these insufficiencies when trying to "unwrap". They are already unwrapped. We just don't see it anymore. Usually, the point of phase unwrapping is to try to recover an underlying linear variable in the range (-inf, inf) that we are observing through the lens of a complex phase. For example, when doing Interferometric Synthetic Aperture RADAR (InSAR), the setup is to fly a satellite doing SAR twice over a given area. SAR gives you a complex image of the ground surface. The complex phase is related to the two-way travel time of the RADAR signal. The phase of a single SAR image is essentially meaningless because it depends on every little detail on the surface, but if the surface did not change much in its fine details (seriously, plants growing on just the order of centimeters is a problem) but moved in the direction of the RADAR signal due to seismic events, the difference in the phases (interferometry!) is linearly related to the distance that the ground moved. Except that the phase difference, the only thing we can directly observe, gets wrapped. There *is* an underlying non-phase linear variable here that we are trying to estimate via phase unwrapping. When we're unwrapping phase, we're not really interested in the phase itself. It's just the only thing that we can observe. There are better and worse ways to unwrap phase in order to estimate these underlying variables, but it's not a theoretically doomed effort. -- Robert Kern From friedrichromstedt at gmail.com Sat May 12 12:43:07 2012 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sat, 12 May 2012 18:43:07 +0200 Subject: [SciPy-User] 2D phase unwrapping In-Reply-To: References: <51478D12-D06D-449B-BE37-901E93C72061@gmail.com> Message-ID: <0EEF82BB-29B0-4795-B6EE-A45C5F24F885@gmail.com> Hi Robert, This post contains reasoning which is OT to the problem at hand. It is related, but it does not solve anything about this "unwrapping" process. Rather it tries to understand what we are doing here and where it might lead to, what the aiming is. :-) If the reader is only interested in getting his data done, he better does not read it. I've done it because I found there was be something to it (for me), and I wanted to know what it was. This means to be able to formulate it. I wanted to know what thing it is that I apparently did not take into account, and how it relates to the problem of the OP, and what the whole story looks like. If the reader finds this an interesting attitude, he might want to eventually read on. Otherwise, storage is cheap these days. I won't waste much. :-) I know this is not a philosophy mailing list. It's just the best I can do at the moment. I found the thing interesting and gave my contribution. 
If someone complains enough that thinking like this absolutely does not fit here, I might want to unsubscribe. :-) Just be open. :-) On 12.05.2012, at 12:49, Robert Kern wrote: > On Sat, May 12, 2012 at 11:30 AM, Friedrich Romstedt > wrote: > >> I worked pretty hard on an unwrapping algorithm for Fourier transform results. By this I noticed the following points: >> >> - Phases are angles, so they are just identical if their angle "value" is identical modulo 2 pi. There's nothing to tell them apart. The numerical value is just an insufficient model and we fight these insufficiencies when trying to "unwrap". They are already unwrapped. We just don't see it anymore. > > Usually, the point of phase unwrapping is to try to recover an > underlying linear variable in the range (-inf, inf) that we are > observing through the lens of a complex phase. For example, when doing > Interferometric Synthetic Aperture RADAR (InSAR), the setup is to fly > a satellite doing SAR twice over a given area. SAR gives you a complex > image of the ground surface. The complex phase is related to the > two-way travel time of the RADAR signal. The phase of a single SAR > image is essentially meaningless because it depends on every little > detail on the surface, but if the surface did not change much in its > fine details (seriously, plants growing on just the order of > centimeters is a problem) but moved in the direction of the RADAR > signal due to seismic events, the difference in the phases > (interferometry!) is linearly related to the distance that the ground > moved. Except that the phase difference, the only thing we can > directly observe, gets wrapped. There *is* an underlying non-phase > linear variable here that we are trying to estimate via phase > unwrapping. When we're unwrapping phase, we're not really interested > in the phase itself. It's just the only thing that we can observe. > There are better and worse ways to unwrap phase in order to estimate > these underlying variables, but it's not a theoretically doomed > effort. Yes. [I concentrate in the following on InSAR] This is a descriptive data analysis problem. I'm not really interested in that kind of problem, although I'm always tempted to be. Two different people with two different interests met here. As I see it, the theory is just about modeling how height change and lateral displacement (when moving along the map), as well as any other effect, comprise the net interferometric phase. Since the lateral displacement (when moving along the map) is encoded in the pixel coordinate (or the spatial extent of the image), it can be resolved. Now all the other effects remain. The theory does not make any judgement on the probability of all the different explanations which are possible due to the 2 pi "ambiguity" of the phase (more precisely, of the fact that this ambiguity arises when representing phase as a number). So the theory would be happy with all these rather rough height maps which incorporate jumps of a kilometer from pixel to pixel, because the theory yields the sum of all these different height maps, all of them at the same time. It is hence a matter of a-priori information which continuous interpretation of the constraints imposed by the measurement we prefer or like best. It's no longer up to the theory to give this. It's a judgement on the constraints imposed by the measurement, taking into account that it should be continuous, so that jumps of a kilometer are not that likely.
We shrink the theoretically possible explanations down using a-priori information. If there was a jump of one kilometer from one scanline to the next it would still be there on earth, but the interpretation would fail to find this out, because it has no eye for it. It would prefer the explanation where the jump is the remaining 2 cm downwards or so (just an example). As I said, this is not the kind of problem I'm interested in. I'm not a believer in realism. :-). It's fine if the interpretation yields this 2 cm jump from one scanline to the next. It will create a world for you that you can build on. You might want to refine it by going down there, finding that 1 km jump, or not. Although it might have been preferable for an alien doing InSAR to just accept that he would not know what he will see. To be precise, the alien [having InSAR eyes] would have no interest in assuming that there would be a continuous surface. :-) So it's rather "space unwrapping" or "how to choose the continuous surface" than "phase unwrapping". And I'm not going into that here; I just have not studied what others did about it, so I'll better hide my childish ideas about how it might be done. :-) Friedrich [some content deleted :-)] P.S.: And of course, a-priori information involves theoretical notions. With "theory" I meant the theory of InSAR, implying phases starting from height and other information. The InSAR process is interesting because it brings together two different theories: first, that of how spatial coordinates etc. relate to phase, and second, the model of a continuous surface. It's an assumption that they hold at the same time. But if they hold at the same time, it won't create anything new on the basis of other things, and that's why I'm not interested in it. So much text just to find that out :-/. And the process of choosing the surface is just the attempt to see if the two theories mentioned can hold at the same time. From drwells at vt.edu Sat May 12 15:36:40 2012 From: drwells at vt.edu (Dave Wells) Date: Sat, 12 May 2012 15:36:40 -0400 Subject: [SciPy-User] How to assemble large sparse matrices Message-ID: cm, The trick is to use the IJV storage format. This is a trio of arrays where the first one contains row indices, the second has column indices, and the third has the values of the matrix at that location. This is the best way to build finite element matrices (or any sparse matrix, in my opinion) as access to the first format is really fast (just filling in an array) and conversion is also very fast. In scipy this is called the COO matrix. It sums entries with the same indices and can be converted to any other format fast. It ignores zeros too (so it does 'the right things' for finite elements). For finite elements, you can estimate the size of the three arrays by something like size = number_of_elements * number_of_basis_functions**2 so if you have 2D quadratics you would do number_of_elements * 36, for example. This approach is convenient because if you have local matrices you definitely have the global numbers and entry values: exactly what you need for building the three IJV arrays.
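In code, a minimal sketch of this assembly looks like the following (the element count, connectivity and local matrices are random stand-ins, not a real mesh):

import numpy as np
import scipy.sparse as sparse

n_elements, n_basis, n_dof = 4, 3, 6
size = n_elements * n_basis**2   # number_of_elements * number_of_basis_functions**2

rows = np.empty(size, dtype=np.int32)
cols = np.empty(size, dtype=np.int32)
vals = np.empty(size)

k = 0
for e in range(n_elements):
    # stand-ins for the element's global dof numbers and local matrix
    dofs = np.random.randint(0, n_dof, n_basis)
    Mlocal = np.random.rand(n_basis, n_basis)
    for a in range(n_basis):
        for b in range(n_basis):
            rows[k] = dofs[a]
            cols[k] = dofs[b]
            vals[k] = Mlocal[a, b]
            k += 1

# COO sums duplicate (i, j) entries on conversion, which is exactly
# the accumulation the element loop needs.
Mglobal = sparse.coo_matrix((vals, (rows, cols)), shape=(n_dof, n_dof)).tocsr()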
I hope this helps, Dave Wells Virginia Tech > Message: 4 > Date: Wed, 9 May 2012 11:36:39 -0700 (PDT) > From: cp3028 > Subject: [SciPy-User] How to assemble large sparse matrices > effectively > To: scipy-user at scipy.org > Message-ID: > <681b81d4-491f-4b4b-b201-df6bc1fbcce7 at m10g2000vbb.googlegroups.com> > Content-Type: text/plain; charset=ISO-8859-1 > > Hello everyone, > > I am working on an FEM project using Scipy. Now my problem is that > the assembly of the sparse matrices is too slow. I compute the > contribution of every element in dense small matrices (one for each > element). For the assembly of the global matrices I loop over all > small dense matrices and set the matrix entries the following way: > ... > [i,j] = someList[k][l] > Mglobal[i,j] = Mglobal[i,j] + Mlocal[k,l] > ... > > Mglobal is a lil_matrix of appropriate size, someList maps the > indexing variables. > > Of course this is rather slow and consumes most of the matrix > assembly time. Is there a better way to assemble a large sparse matrix > from many small dense matrices? I tried scipy.weave but it doesn't > seem to work with sparse matrices. > > Yours, > > cm From zachary.pincus at yale.edu Sun May 13 13:07:02 2012 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 13 May 2012 13:07:02 -0400 Subject: [SciPy-User] Weighted KDE Message-ID: <0CE25948-3ED7-4928-9C63-4F44366C3AF5@yale.edu> Hello all, A while ago, someone asked on this list about whether it would be simple to modify scipy.stats.kde.gaussian_kde to deal with weighted data: http://mail.scipy.org/pipermail/scipy-user/2008-November/018578.html Anne and Robert assured the writer that this was pretty simple (modulo bandwidth selection), though I couldn't find any code that the original author may have generated based on that advice. I've got a problem that could (perhaps) be solved neatly with weighted KDE, so I'd like to give this a go. I assume that at a minimum, to get basic gaussian_kde.evaluate() functionality: (1) The covariance calculation would need to be replaced by a weighted-covariance calculation. (Simple enough.) (2) In evaluate(), the critical part looks like this (and a similar stanza that loops over the points instead): # if there are more points than data, so loop over data for i in range(self.n): diff = self.dataset[:, i, newaxis] - points tdiff = dot(self.inv_cov, diff) energy = sum(diff*tdiff,axis=0) / 2.0 result = result + exp(-energy) I assume that, further, the 'diff' values ought to be scaled by the weights, too. Is this all that would need to be done? (For the integration and resampling, obviously, there would be a bit of other work...) Thanks, Zach From josef.pktd at gmail.com Sun May 13 14:17:30 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 13 May 2012 14:17:30 -0400 Subject: [SciPy-User] Weighted KDE In-Reply-To: <0CE25948-3ED7-4928-9C63-4F44366C3AF5@yale.edu> References: <0CE25948-3ED7-4928-9C63-4F44366C3AF5@yale.edu> Message-ID: On Sun, May 13, 2012 at 1:07 PM, Zachary Pincus wrote: > Hello all, > > A while ago, someone asked on this list about whether it would be simple to modify scipy.stats.kde.gaussian_kde to deal with weighted data: > http://mail.scipy.org/pipermail/scipy-user/2008-November/018578.html > > Anne and Robert assured the writer that this was pretty simple (modulo bandwidth selection), though I couldn't find any code that the original author may have generated based on that advice.
> > I've got a problem that could (perhaps) be solved neatly with weighted KDE, so I'd like to give this a go. I assume that at a minimum, to get basic gaussian_kde.evaluate() functionality: > > (1) The covariance calculation would need to be replaced by a weighted-covariance calculation. (Simple enough.) > > (2) In evaluate(), the critical part looks like this (and a similar stanza that loops over the points instead): > # if there are more points than data, so loop over data > for i in range(self.n): >     diff = self.dataset[:, i, newaxis] - points >     tdiff = dot(self.inv_cov, diff) >     energy = sum(diff*tdiff,axis=0) / 2.0 >     result = result + exp(-energy) > > I assume that, further, the 'diff' values ought to be scaled by the weights, too. Is this all that would need to be done? (For the integration and resampling, obviously, there would be a bit of other work...) it looks to me that way, scaled according to weight by dataset points I don't see what the norm_factor should be: self._norm_factor = sqrt(linalg.det(2*pi*self.covariance)) * self.n there should be the weights somewhere in there, maybe just replace self.n by sum(weights) given a constant covariance sampling doesn't look difficult, if we want biased sampling, then instead of randint, we would need weighted randint (non-uniform) integration might require more work, or not (I never tried to understand them) (I don't know if kde in statsmodels has weights on the schedule.) Josef mostly guessing > > Thanks, > Zach > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From markus.baden at gmail.com Sun May 13 23:25:41 2012 From: markus.baden at gmail.com (Markus Baden) Date: Mon, 14 May 2012 11:25:41 +0800 Subject: [SciPy-User] scipy.odr - Goodness of fit and parameter estimation for explicit orthogonal distance regression Message-ID: Hi list, Currently, I am trying to fit a quadratic curve to a data set which has much larger errors in the x than in the y direction. My errors are assumed to be normally distributed and I want to estimate the confidence interval of the fitted parameters. I have fitted the data two different ways. 1) I neglect the x errors and fit the quadratic by minimizing the weighted residuals (y -f) / sig_y via scipy.optimize.leastsq and 2) I use scipy.odr to fit the parameters. Both result in similar fitted parameters. Now I am stuck with estimating the confidence intervals on these errors and I have a couple of questions. For method 1) the reduced chi squared is bad (much larger than 1), because when neglecting the x errors, none of the points actually lie within a couple of standard deviations of the line. However when including the x-errors they all fall nicely onto the line. Is there a way for me to include the x errors into my minimization and goodness of fit estimation via the reduced chi-squared? I came across a remark in the CERN minuit documentation [1] that seemed to approximate the function by a line over the point and then use this to convert x to y errors. I also read something like this in numerical recipes [2], but not for general least squares (only for the case of linear data). Does anyone have any pointers in that direction? My second question is related to method 2). Is there a way of accessing the goodness of fit for ODR, similar to calculating the reduced chi-squared for a fit that only has errors in the y?
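(Back to the weighted-KDE thread for a moment: the two changes discussed above fit in a few lines. This is only a sketch -- the weights are assumed positive and normalised to sum to 1, and the Scott-style bandwidth computed from an effective sample size is an assumption, since bandwidth selection for weighted data is exactly the open question.)

import numpy as np

def weighted_kde_evaluate(dataset, weights, points):
    # dataset: (d, n) samples; weights: (n,) summing to 1; points: (d, m)
    d, n = dataset.shape
    # (1) weighted covariance instead of the plain covariance
    mean = (weights * dataset).sum(axis=1)[:, np.newaxis]
    dev = dataset - mean
    data_cov = (weights * dev).dot(dev.T) / (1.0 - (weights**2).sum())
    # Scott's factor with an effective sample size (an assumption)
    neff = 1.0 / (weights**2).sum()
    cov = data_cov * neff ** (-2.0 / (d + 4.0))
    inv_cov = np.linalg.inv(cov)
    norm = np.sqrt(np.linalg.det(2 * np.pi * cov))  # no '* n': weights replace 1/n
    result = np.zeros(points.shape[1])
    for i in range(n):
        diff = dataset[:, i, np.newaxis] - points
        energy = (diff * inv_cov.dot(diff)).sum(axis=0) / 2.0
        # (2) weight each kernel instead of dividing by n at the end
        result += weights[i] * np.exp(-energy)
    return result / norm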
Another question along this line concerns the scipy.odr.ODR.Output.sd_beta attribute. In the docstring it says it is the standard error of the parameter; does that mean a 1 standard deviation confidence interval? And how exactly are they calculated? I tried to look at the source and in the odrpack guide, but unfortunately couldn't figure that out. I somehow have the feeling that my problem should be quite standard (goodness of fit with both x and y errors), but so far I could not find a good explanation. Any pointers to textbooks or resources on the web would be greatly appreciated. Best regards, Markus [1] http://wwwasd.web.cern.ch/wwwasd/cgi-bin/listpawfaqs.pl/27 [2] Numerical Recipes in C, Second Edition, Chapter 15.3 "Straight-Line Data with Errors in Both Coordinates" -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdgleeson at mac.com Mon May 14 01:01:47 2012 From: jdgleeson at mac.com (John Gleeson) Date: Sun, 13 May 2012 23:01:47 -0600 Subject: [SciPy-User] scipy.odr - Goodness of fit and parameter estimation for explicit orthogonal distance regression In-Reply-To: References: Message-ID: <38210107-A9CA-40E5-80F1-10825C23DB1A@mac.com> Perhaps the module kmpfit from the kapteyn package would be helpful to you. http://www.astro.rug.nl/software/kapteyn/kmpfit.html Don't miss the tutorial http://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html which has a section "Fitting data when both variables have uncertainties" with subsection "Effective variance method for various models: Model parabola" From markus.baden at gmail.com Mon May 14 03:36:47 2012 From: markus.baden at gmail.com (Markus Baden) Date: Mon, 14 May 2012 15:36:47 +0800 Subject: [SciPy-User] scipy.odr - Goodness of fit and parameter estimation for explicit orthogonal distance regression In-Reply-To: <38210107-A9CA-40E5-80F1-10825C23DB1A@mac.com> References: <38210107-A9CA-40E5-80F1-10825C23DB1A@mac.com> Message-ID: > Don't miss the tutorial > http://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html > which has a section "Fitting data when both variables have > uncertainties" > Thanks a lot, the tutorial and references therein are very useful. Regards, Markus -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon May 14 06:10:48 2012 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 14 May 2012 11:10:48 +0100 Subject: [SciPy-User] scipy.odr - Goodness of fit and parameter estimation for explicit orthogonal distance regression In-Reply-To: References: Message-ID: On Mon, May 14, 2012 at 4:25 AM, Markus Baden wrote: > Hi list, > > Currently, I am trying to fit a quadratic curve to a data set which has much > larger errors in the x than in the y direction. My errors are assumed to be > normally distributed and I want to estimate the confidence interval of the > fitted parameters. I have fitted the data two different ways. 1) I neglect > the x errors and fit the quadratic by minimizing the weighted residuals (y > -f) / sig_y via scipy.optimize.leastsq and 2) I use scipy.odr to fit the > parameters. Both result in similar fitted parameters. > > Now I am stuck with estimating the confidence intervals on these errors and > I have a couple of questions. scipy.odr provides an estimate of the covariance matrix and standard deviations of the parameter estimates.
Getting the confidence interval for a parameter is just a matter of scaling up the standard deviations by the appropriate t-distribution value with nobs-nparams degrees of freedom. A paper by the ODRPACK implementors gives the formula explicitly on page 6: http://www.mechanicalkern.com/static/odr_vcv.pdf It also has more information on how the covariance matrix is calculated. > My second question is related to method 2). Is there a way of accessing the > goodness of fit for ODR, similar to calculating the reduced chi-squared for > a fit that only has errors in the y? Output.res_var is the reduced Chi-square. But you can compute it from scratch using the raw residuals. Output.eps are the differences in Y and Output.delta are the differences in X. Square each and multiply by their weights (a.k.a. divide by the data variances), then add up all of them. Divide by (nobs - nparams). > Another question along this line > concerns the scipy.odr.ODR.Output.sd_beta attribute. In the docstring it > says it is the standard error of the parameter; does that mean a 1 standard > deviation confidence interval? Yes. > And how exactly are they calculated? I tried to > look at the source and in the odrpack guide, but unfortunately couldn't > figure that out. As given in the odr_vcv.pdf paper. Some slightly sparser details are also in the ODRPACK Guide in section "4.B. Covariance Matrix". -- Robert Kern From gerrit.holl at ltu.se Mon May 14 10:51:54 2012 From: gerrit.holl at ltu.se (Gerrit Holl) Date: Mon, 14 May 2012 16:51:54 +0200 Subject: [SciPy-User] Using scipy.io.loadmat to read Matlab containers.Map object? Message-ID: Hi, is it possible to use scipy.io.loadmat to read a Matlab containers.Map object? >> cm = containers.Map(); >> cm('abc') = 42; >> save('/tmp/test.mat', 'cm'); In [15]: M = scipy.io.loadmat('/tmp/test.mat') In [16]: M.keys() Out[16]: ['__function_workspace__', 'None', '__version__', '__header__', '__globals__'] In [17]: M["None"] Out[17]: MatlabOpaque([ ('cm', 'MCOS', 'containers.Map', [[3707764736L], [2L], [1L], [1L], [1L], [1L]])], dtype=[('s0', '|O8'), ('s1', '|O8'), ('s2', '|O8'), ('arr', '|O8')]) regards, Gerrit. -- Gerrit Holl PhD student at Division of Space Technology, Luleå University of Technology, Kiruna, Sweden http://www.sat.ltu.se/members/gerrit/ From gregor.thalhammer at gmail.com Mon May 14 11:11:37 2012 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Mon, 14 May 2012 17:11:37 +0200 Subject: [SciPy-User] 2D phase unwrapping In-Reply-To: References: Message-ID: <5B6184CF-C72B-4B6D-AABE-306A7FD69EB7@gmail.com> On 12.5.2012, at 11:58, Ralf Gommers wrote: > > > On Fri, May 11, 2012 at 9:29 PM, Gregor Thalhammer < gregor.thalhammer at gmail.com> wrote: > Hi all, > > I have been searching for an implementation for phase unwrapping in 2D (and possibly also 3D). I found this old thread in the numpy mailing list > > http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038873.html > > which mentions the C implementation from GERI: http://www.ljmu.ac.uk/GERI/90202.htm > > While searching I found remarks by the authors that these algorithms have been incorporated into scipy, however I am unable to find them in current scipy or numpy.
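(An aside on the scipy.odr thread above: Robert's recipe in runnable form. This is a sketch on made-up synthetic data -- the quadratic model, error levels, seed and starting values are all invented for illustration.)

import numpy as np
from scipy import odr, stats

np.random.seed(0)
x_true = np.linspace(0, 10, 20)
y_true = 0.5 * x_true**2 - x_true + 2.0
sig_x, sig_y = 0.3, 0.1
x = x_true + sig_x * np.random.randn(x_true.size)
y = y_true + sig_y * np.random.randn(y_true.size)

model = odr.Model(lambda beta, x: beta[0] * x**2 + beta[1] * x + beta[2])
data = odr.RealData(x, y, sx=sig_x, sy=sig_y)
out = odr.ODR(data, model, beta0=[1.0, -1.0, 1.0]).run()

df = x.size - len(out.beta)     # nobs - nparams
tval = stats.t.ppf(0.975, df)   # two-sided 95% confidence level
ci_low = out.beta - tval * out.sd_beta
ci_high = out.beta + tval * out.sd_beta

# Reduced chi-square: directly, or from the raw residuals in x and y
red_chi2 = out.res_var
red_chi2_manual = (np.sum((out.eps / sig_y)**2) +
                   np.sum((out.delta / sig_x)**2)) / df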
Am I missing something obvious? > Instead I found this wrapper: https://github.com/pointtonull/pyunwrap > This seems to be based on a wrapper already mentioned in that discussion, but the links given there are dead. I added some setup.py, and with some small modifications I managed to compile the extension both on OS X and Windows. > > I would like to see these algorithms included in scipy, and I am willing to work on this. > > Great! > > So now my questions: In the old numpy-discussion thread licensing issues are raised, can anybody tell more? On the GERI homepage they distribute their code under a non-commercial-use license, but the authors seem to agree on incorporating their code into scipy. > Apart from this, what else would be required? > > The emails about the license seem to be very clear to me, there's no issue. So all it would take is someone submitting a complete wrapper (with the normal requirements on docs/tests). The only things to decide are then: do we want it in scipy (+1 from me), where to put it and what the API should look like. Over the weekend I worked on the 2D and 3D phase unwrappers. I put a first version on github: https://github.com/geggo/phase-unwrap git://github.com/geggo/phase-unwrap.git I tested it on OS X with gcc-4.2, Python 2.7 and recent cython. (It crashes on Windows (MSVC 9), probably related to cython, but that's another story.) Seems to work ok, only one basic test, no docs yet. The interface is quite simple: def unwrap(wrapped_array, wrap_around_axis_0 = False, wrap_around_axis_1 = False, wrap_around_axis_2 = False): It accepts a 2d or 3d numpy array or a masked array, and returns a masked array if one was given. A fresh float32 array is returned. The additional arguments can be used to specify cyclic boundary conditions along the given axis. The calculations are internally performed with float32. It should be quite straightforward to extend this to other types, but I think for the most common use cases of this algorithm (unwrapping of noisy data) a float32 array is sufficient. I guess a proper place in scipy would be to add this extension to scipy.ndimage. Please, any comments are welcome, and also some hints on why it crashes on Windows - or how to debug it. Gregor -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.kastman at gmail.com Mon May 14 11:31:08 2012 From: erik.kastman at gmail.com (Erik) Date: Mon, 14 May 2012 15:31:08 +0000 (UTC) Subject: [SciPy-User] Simple ndarray dim question? References: <448F2446-BEA6-4850-A64A-830578984DD9@gmail.com> <83D62D0E-5765-4744-B30D-4A24A1DDA180@continuum.io> Message-ID: Travis Oliphant continuum.io> writes: > You could build what you saw before with: > > new_onsets = empty((2,), dtype=object) > new_onsets[0] = array([[0, 30, 60, 90]]) > new_onsets[1] = array([[15, 45, 75, 105]]) > That works like a charm. I guess I was trying to build an outer array by gluing the two together, but building the structure first (empty(shape)) and then inserting the data arrays actually makes more sense. Thanks for your help Travis! Erik From sara2411 at gmail.com Mon May 14 12:59:58 2012 From: sara2411 at gmail.com (Sara Gallian) Date: Mon, 14 May 2012 18:59:58 +0200 Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? Message-ID: Hi all, I'm trying to integrate particle trajectories in a finite domain, and I need to make them "reflect" at the boundaries (i.e. reverse the velocity perpendicular to the plane they collided with).
I started by using scipy.integrate.odeint, but since the integration steps are variable, simply checking the position and reversing the velocity won't work! Can anyone suggest the quickest way to obtain this? Would Vode be able to handle this, or should I try to learn to use PyDSTool? I'm running pretty late on a deadline, so any suggestion is more than appreciated :) Thanks! Sara From rob.clewley at gmail.com Mon May 14 13:22:37 2012 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 14 May 2012 13:22:37 -0400 Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? In-Reply-To: References: Message-ID: Hi Sara, I think you should use a tool that's appropriate for the task. With an hour of work and help on this list or the PyDSTool forum at http://sourceforge.net/projects/pydstool/forums/forum/472291 you should be able to get your problem working quickly with PyDSTool. I expect your problem is easy to solve in PyDSTool, and difficult to do with odeint. You can still use scipy's VODE in PyDSTool and you won't need any fancy installation. Check out the file vode_event_test1.py in the tests directory. Writing the vector field is simple, but in case you have specific problems setting up yours then you should send me your code on list so that I can help. I'd be interested to see what the equations look like. It's very simple to define a terminal event, just copy and adapt the code in the file. You can write a loop so that whenever integration has stopped because of an event, you reverse the velocity IC and restart. -Rob On Mon, May 14, 2012 at 12:59 PM, Sara Gallian wrote: > Hi all, > I'm trying to integrate particle trajectories in a finite domain, and I need to make them "reflect" at the boundaries (i.e. reverse the velocity perpendicular to the plane they collided with). I started by using scipy.integrate.odeint, but since the integration steps are variable, simply checking the position and reversing the velocity won't work! > Can anyone suggest the quickest way to obtain this? Would Vode be able to handle this, or should I try to learn to use PyDSTool? > I'm running pretty late on a deadline, so any suggestion is more than appreciated :) > Thanks! > Sara > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Robert Clewley, Ph.D. Assistant Professor Neuroscience Institute and Department of Mathematics and Statistics Georgia State University PO Box 5030 Atlanta, GA 30302, USA tel: 404-413-6420 fax: 404-413-5446 http://www2.gsu.edu/~matrhc http://neuroscience.gsu.edu/rclewley.html From sara2411 at gmail.com Mon May 14 14:03:28 2012 From: sara2411 at gmail.com (Sara Gallian) Date: Mon, 14 May 2012 20:03:28 +0200 Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? In-Reply-To: References: Message-ID: Dear Rob, thank you for the quick reply. Unfortunately - I must say I'm a bit ashamed of that - I have trouble installing PyDSTool! :( I'm using a Mac with Lion version 10.7.3 installed. I copied the unzipped folder and changed the permissions, then added the path to my .profile file PYTHONPATH="/usr/local/PyDSTool:${PYTHONPATH}" export PYTHONPATH The python version I'm using, though, is the Enthought Distribution (EPD-7.1-2), whose path is PATH="/Library/Frameworks/EPD64.framework/Versions/Current/bin:${PATH}" export PATH So I can't set the variable PYTHON to /sw/bin/python2.6. Is there a conflict with the EPD version I have installed?
Or am I just getting it totally wrong? Thank you! Sara On May 14, 2012, at 7:22 PM, Rob Clewley wrote: > Hi Sara, > > I think you should use a tool that's appropriate for the task. With an > hour of work and help on this list or the PyDSTool forum at > http://sourceforge.net/projects/pydstool/forums/forum/472291 you > should be able to get your problem working quickly with PyDSTool. I > expect your problem is easy to solve in PyDSTool, and difficult to do > with odeint. > > You can still use scipy's VODE in PyDSTool and you won't need any > fancy installation. Check out the file vode_event_test1.py in the > tests directory. Writing the vector field is simple, but in case you > have specific problems setting up yours then you should send me your > code on list so that I can help. I'd be interested to see what the > equations look like. > > It's very simple to define a terminal event, just copy and adapt the > code in the file. You can write a loop so that whenever integration > has stopped because of an event, you reverse the velocity IC and > restart. > > -Rob > > On Mon, May 14, 2012 at 12:59 PM, Sara Gallian wrote: >> Hi all, >> I'm trying to integrate particle trajectories in a finite domain, and I need to make them "reflect" at the boundaries (i.e. reverse the velocity perpendicular to the plane they collided with). I started by using scipy.integrate.odeint , but since the integration steps are variable, simply checking the position and reversing the velocity won't work! >> Can anyone suggest the quickest way to obtain this? would Vode be able to handle this, or should I try to learn to use PyDSTool? >> I'm running pretty late on a deadline, so any suggestion is more than appreciated :) >> Thanks! >> Sara >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Robert Clewley, Ph.D. > Assistant Professor > Neuroscience Institute and > Department of Mathematics and Statistics > Georgia State University > PO Box 5030 > Atlanta, GA 30302, USA > > tel: 404-413-6420 fax: 404-413-5446 > http://www2.gsu.edu/~matrhc > http://neuroscience.gsu.edu/rclewley.html > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From lou_boog2000 at yahoo.com Mon May 14 14:03:42 2012 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Mon, 14 May 2012 11:03:42 -0700 (PDT) Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? In-Reply-To: References: Message-ID: <1337018622.1991.YahooMailNeo@web160306.mail.bf1.yahoo.com> Hi, Sara, Can you give more information? ? Are the particles in a potential or is it free flight between hard wall collisions? ?Do the particles interact? ?I assume the dynamics are conservative (Hamiltonian). Is that correct? ? -- Lou Pecora, my views are my own. ________________________________ From: Rob Clewley To: SciPy Users List Sent: Monday, May 14, 2012 1:22 PM Subject: Re: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? Hi Sara, I think you should use a tool that's appropriate for the task. With an hour of work and help on this list or the PyDSTool forum at http://sourceforge.net/projects/pydstool/forums/forum/472291 you should be able to get your problem working quickly with PyDSTool. I expect your problem is easy to solve in PyDSTool, and difficult to do with odeint. 
You can still use scipy's VODE in PyDSTool and you won't need any fancy installation. Check out the file vode_event_test1.py in the tests directory. Writing the vector field is simple, but in case you have specific problems setting up yours then you should send me your code on list so that I can help. I'd be interested to see what the equations look like. It's very simple to define a terminal event, just copy and adapt the code in the file. You can write a loop so that whenever integration has stopped because of an event, you reverse the velocity IC and restart. -Rob On Mon, May 14, 2012 at 12:59 PM, Sara Gallian wrote: > Hi all, > I'm trying to integrate particle trajectories in a finite domain, and I need to make them "reflect" at the boundaries (i.e. reverse the velocity perpendicular to the plane they collided with). I started by using ?scipy.integrate.odeint , but since the integration steps are variable, simply checking the position and reversing the velocity won't work! > Can anyone suggest the quickest way to obtain this? would Vode be able to handle this, or should I try to learn to use ?PyDSTool? > I'm running pretty late on a deadline, so any suggestion is more than appreciated :) > Thanks! > Sara > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Robert Clewley, Ph.D. Assistant Professor Neuroscience Institute and Department of Mathematics and Statistics Georgia State University PO Box 5030 Atlanta, GA 30302, USA tel: 404-413-6420 fax: 404-413-5446 http://www2.gsu.edu/~matrhc http://neuroscience.gsu.edu/rclewley.html _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From sara2411 at gmail.com Mon May 14 14:12:33 2012 From: sara2411 at gmail.com (Sara Gallian) Date: Mon, 14 May 2012 20:12:33 +0200 Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? In-Reply-To: <1337018622.1991.YahooMailNeo@web160306.mail.bf1.yahoo.com> References: <1337018622.1991.YahooMailNeo@web160306.mail.bf1.yahoo.com> Message-ID: Dear Lou, I'm just moving a bunch of electrons in a static b field dv/dt = q/m v x B B depends on the position, in general They move in this field, without interacting, till they bounce off a wall, that specularly reflects them Yes, the dynamic is conservative. I could just write a RK4 and check the position at every ts, but I was hoping to be able to use a better tool Thanks Sara Sent from my iPhone On 14.05.2012, at 20:03, Lou Pecora wrote: > Hi, Sara, > > Can you give more information? > > Are the particles in a potential or is it free flight between hard wall collisions? Do the particles interact? I assume the dynamics are conservative (Hamiltonian). Is that correct? > > -- Lou Pecora, my views are my own. > From: Rob Clewley > To: SciPy Users List > Sent: Monday, May 14, 2012 1:22 PM > Subject: Re: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? > > Hi Sara, > > I think you should use a tool that's appropriate for the task. With an > hour of work and help on this list or the PyDSTool forum at > http://sourceforge.net/projects/pydstool/forums/forum/472291 you > should be able to get your problem working quickly with PyDSTool. I > expect your problem is easy to solve in PyDSTool, and difficult to do > with odeint. 
> > You can still use scipy's VODE in PyDSTool and you won't need any > fancy installation. Check out the file vode_event_test1.py in the > tests directory. Writing the vector field is simple, but in case you > have specific problems setting up yours then you should send me your > code on list so that I can help. I'd be interested to see what the > equations look like. > > It's very simple to define a terminal event, just copy and adapt the > code in the file. You can write a loop so that whenever integration > has stopped because of an event, you reverse the velocity IC and > restart. > > -Rob > > On Mon, May 14, 2012 at 12:59 PM, Sara Gallian wrote: > > Hi all, > > I'm trying to integrate particle trajectories in a finite domain, and I need to make them "reflect" at the boundaries (i.e. reverse the velocity perpendicular to the plane they collided with). I started by using scipy.integrate.odeint , but since the integration steps are variable, simply checking the position and reversing the velocity won't work! > > Can anyone suggest the quickest way to obtain this? would Vode be able to handle this, or should I try to learn to use PyDSTool? > > I'm running pretty late on a deadline, so any suggestion is more than appreciated :) > > Thanks! > > Sara > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Robert Clewley, Ph.D. > Assistant Professor > Neuroscience Institute and > Department of Mathematics and Statistics > Georgia State University > PO Box 5030 > Atlanta, GA 30302, USA > > tel: 404-413-6420 fax: 404-413-5446 > http://www2.gsu.edu/~matrhc > http://neuroscience.gsu.edu/rclewley.html > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon May 14 14:36:56 2012 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 14 May 2012 11:36:56 -0700 Subject: [SciPy-User] Using scipy.io.loadmat to read Matlab containers.Map object? In-Reply-To: References: Message-ID: Hi, On Mon, May 14, 2012 at 7:51 AM, Gerrit Holl wrote: > Hi, > > is it possible to use scipy.io.loadmat to read a Matlab containers.Map object? > >>> cm = containers.Map(); >>> cm('abc') = 42; >>> save('/tmp/test.mat', 'cm'); > > > In [15]: M = scipy.io.loadmat('/tmp/test.mat') > > In [16]: M.keys() > Out[16]: ['__function_workspace__', 'None', '__version__', > '__header__', '__globals__'] > > In [17]: M["None"] > Out[17]: > MatlabOpaque([ ('cm', 'MCOS', 'containers.Map', [[3707764736L], [2L], > [1L], [1L], [1L], [1L]])], > ? ? ?dtype=[('s0', '|O8'), ('s1', '|O8'), ('s2', '|O8'), ('arr', '|O8')]) Just because I didn't know what a container.Map was, it looks like it's a MATLAB dict equivalent: http://www.mathworks.com/help/techdoc/matlab_prog/brqqo69.html I see that I or someone thought that types labeled as MatlabOpaque were MATLAB functions: https://github.com/scipy/scipy/blob/master/scipy/io/matlab/mio5_params.py#L51 but at least, the internal structure looks rather confusing. Do you recognize any of the structure in what got loaded? Does it load OK when loaded in Octave? 
Best, Matthew From lou_boog2000 at yahoo.com Mon May 14 14:46:15 2012 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Mon, 14 May 2012 11:46:15 -0700 (PDT) Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? In-Reply-To: References: <1337018622.1991.YahooMailNeo@web160306.mail.bf1.yahoo.com> Message-ID: <1337021175.94572.YahooMailNeo@web160301.mail.bf1.yahoo.com> Position-dependent B makes it hard; otherwise it could all be done geometrically (I'm sure you know that). I don't see how you can avoid checking for boundary collisions on each time step. Well, you would probably leave the boundaries out of the dynamics, but check for boundary crossing at each time step, then do an adaptive time step to get up to the boundary for a reflection (I hope that makes sense). If the boundaries are not complicated, checking for crossings is pretty straightforward. -- Lou Pecora, my views are my own. ________________________________ From: Sara Gallian To: Lou Pecora ; SciPy Users List Sent: Monday, May 14, 2012 2:12 PM Subject: Re: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? Dear Lou, I'm just moving a bunch of electrons in a static b field dv/dt = q/m v x B B depends on the position, in general They move in this field, without interacting, till they bounce off a wall that specularly reflects them Yes, the dynamic is conservative. I could just write a RK4 and check the position at every ts, but I was hoping to be able to use a better tool Thanks Sara Sent from my iPhone On 14.05.2012, at 20:03, Lou Pecora wrote: Hi, Sara, > > >Can you give more information? > > >Are the particles in a potential or is it free flight between hard wall collisions? Do the particles interact? I assume the dynamics are conservative (Hamiltonian). Is that correct? > >-- Lou Pecora, my views are my own. > > >________________________________ > From: Rob Clewley >To: SciPy Users List >Sent: Monday, May 14, 2012 1:22 PM >Subject: Re: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? > >Hi Sara, > >I think you should use a tool that's appropriate for the task. With an >hour of work and help on this list or the PyDSTool forum at >http://sourceforge.net/projects/pydstool/forums/forum/472291 you >should be able to get your problem working quickly with PyDSTool. I >expect your problem is easy to solve in PyDSTool, and difficult to do >with odeint. > >You can still use scipy's VODE in PyDSTool and you won't need any >fancy installation. Check out the file vode_event_test1.py in the >tests directory. Writing the vector field is simple, but in case you >have specific problems setting up yours then you should send me your >code on list so that I can help. I'd be interested to see what the >equations look like. > >It's very simple to define a terminal event, just copy and adapt the >code in the file. You can write a loop so that whenever integration >has stopped because of an event, you reverse the velocity IC and >restart. > >-Rob > >On Mon, May 14, 2012 at 12:59 PM, Sara Gallian wrote: >> Hi all, >> I'm trying to integrate particle trajectories in a finite domain, and I need to make them "reflect" at the boundaries (i.e. reverse the velocity perpendicular to the plane they collided with). I started by using scipy.integrate.odeint, but since the integration steps are variable, simply checking the position and reversing the velocity won't work! >> Can anyone suggest the quickest way to obtain this?
would Vode be able to handle this, or should I try to learn to use ?PyDSTool? >> I'm running pretty late on a deadline, so any suggestion is more than appreciated :) >> Thanks! >> Sara >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > >-- >Robert Clewley, Ph.D. >Assistant Professor >Neuroscience Institute and >Department of Mathematics and Statistics >Georgia State University >PO Box 5030 >Atlanta, GA 30302, USA > >tel: 404-413-6420 fax: 404-413-5446 >http://www2.gsu.edu/~matrhc >http://neuroscience.gsu.edu/rclewley.html >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Mon May 14 16:28:56 2012 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 14 May 2012 16:28:56 -0400 Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? In-Reply-To: References: <1337018622.1991.YahooMailNeo@web160306.mail.bf1.yahoo.com> Message-ID: Sara, Your Hamiltonian system really requires a symplectic integrator or similar, as Lou mentions, that truly respects energy conservation. A regular ODE integrator will create numerical error in the total energy, as it is oblivious. In lieu of such a thing, at the very least you can make a correction at those detected events in your loop to restore the total energy. Anyway, regarding your installation. You should change your PATH to point to whichever version of python you are using, i.e. the version that has numpy, scipy etc. installed through EPD. It looks like you are using a Mac, so if you've installed EPD you should point to that version of python with your PATH. /sw/ is not a valid directory with EPD. If EPD installs outside of the Mac Framework then you will need to change the path, but if it uses the Framework python then there is nothing to change. Also, and I haven't given this much thought, but in the absence of a geometric integrator, you could possibly even solve the system more accurately as a differential-algebraic system (DAE), using the energy constant as the algebraic constraint. But there might be more mathematical problems with that approach that I'm not seeing (I'm not an expert). At least you could try setting such a thing with the Radau integrator in PyDSTool, and there are DAE examples provided. Feel free to share your equations and code in more detail if you're still struggling. With small integration steps you might be fine using RK4 with the energy correction at boundaries. -Rob On Mon, May 14, 2012 at 2:12 PM, Sara Gallian wrote: > Dear Lou, > I'm just moving a bunch of electrons in a static b field > dv/dt = q/m v x B > B depends on the position, in general > They move in this field, without interacting, till they bounce off a wall, > that specularly reflects them > Yes, the dynamic is conservative. > I could just write a RK4 and check the position at every ts, but I was > hoping to be able to use a better tool > Thanks > Sara > > Sent from my iPhone > > On 14.05.2012, at 20:03, Lou Pecora wrote: > > Hi, Sara, > > Can you give more information? > > Are the particles in a potential or is it free flight between hard wall > collisions? 
?Do the particles interact? ?I assume the dynamics are > conservative (Hamiltonian). Is that correct? > > -- Lou Pecora, my views are my own. > ________________________________ > From: Rob Clewley > To: SciPy Users List > Sent: Monday, May 14, 2012 1:22 PM > Subject: Re: [SciPy-User] Trajectory Integration via scipy.integrate or > PyDSTool? > > Hi Sara, > > I think you should use a tool that's appropriate for the task. With an > hour of work and help on this list or the PyDSTool forum at > http://sourceforge.net/projects/pydstool/forums/forum/472291 you > should be able to get your problem working quickly with PyDSTool. I > expect your problem is easy to solve in PyDSTool, and difficult to do > with odeint. > > You can still use scipy's VODE in PyDSTool and you won't need any > fancy installation. Check out the file vode_event_test1.py in the > tests directory. Writing the vector field is simple, but in case you > have specific problems setting up yours then you should send me your > code on list so that I can help. I'd be interested to see what the > equations look like. > > It's very simple to define a terminal event, just copy and adapt the > code in the file. You can write a loop so that whenever integration > has stopped because of an event, you reverse the velocity IC and > restart. > > -Rob > > On Mon, May 14, 2012 at 12:59 PM, Sara Gallian wrote: >> Hi all, >> I'm trying to integrate particle trajectories in a finite domain, and I >> need to make them "reflect" at the boundaries (i.e. reverse the velocity >> perpendicular to the plane they collided with). I started by using >> ?scipy.integrate.odeint , but since the integration steps are variable, >> simply checking the position and reversing the velocity won't work! >> Can anyone suggest the quickest way to obtain this? would Vode be able to >> handle this, or should I try to learn to use ?PyDSTool? >> I'm running pretty late on a deadline, so any suggestion is more than >> appreciated :) >> Thanks! >> Sara >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Robert Clewley, Ph.D. > Assistant Professor > Neuroscience Institute and > Department of Mathematics and Statistics > Georgia State University > PO Box 5030 > Atlanta, GA 30302, USA > > tel: 404-413-6420 fax: 404-413-5446 > http://www2.gsu.edu/~matrhc > http://neuroscience.gsu.edu/rclewley.html > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Robert Clewley, Ph.D. Assistant Professor Neuroscience Institute and Department of Mathematics and Statistics Georgia State University PO Box 5030 Atlanta, GA 30302, USA tel: 404-413-6420 fax: 404-413-5446 http://www2.gsu.edu/~matrhc http://neuroscience.gsu.edu/rclewley.html From sara2411 at gmail.com Mon May 14 16:47:10 2012 From: sara2411 at gmail.com (Sara Gallian) Date: Mon, 14 May 2012 22:47:10 +0200 Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? 
In-Reply-To: References: <1337018622.1991.YahooMailNeo@web160306.mail.bf1.yahoo.com> Message-ID: Thank you all for the suggestions! Indeed I know RK4 is not the best integrator for orbital-like problems, I just needed a "quick and dirty" solution to begin with (and show a few pictures :) ). I will definitely refine my little code later.. PyDSTool seems a really cool module, at any rate, but still I am a little confused. I have already set both paths in my .profile : PYTHONPATH="/usr/local/PyDSTool:${PYTHONPATH}" export PYTHONPATH and PATH="/Library/Frameworks/EPD64.framework/Versions/Current/bin:${PATH}" export PATH but I receive an error when I try to import the module. I don't understand why.. Thank you very much for the patience! Sara On 14.05.2012, at 22:28, Rob Clewley wrote: > Sara, > > Your Hamiltonian system really requires a symplectic integrator or > similar, as Lou mentions, that truly respects energy conservation. A > regular ODE integrator will create numerical error in the total > energy, as it is oblivious. In lieu of such a thing, at the very least > you can make a correction at those detected events in your loop to > restore the total energy. > > Anyway, regarding your installation. You should change your PATH to > point to whichever version of python you are using, i.e. the version > that has numpy, scipy etc. installed through EPD. It looks like you > are using a Mac, so if you've installed EPD you should point to that > version of python with your PATH. /sw/ is not a valid directory with > EPD. If EPD installs outside of the Mac Framework then you will need > to change the path, but if it uses the Framework python then there is > nothing to change. > > Also, and I haven't given this much thought, but in the absence of a > geometric integrator, you could possibly even solve the system more > accurately as a differential-algebraic system (DAE), using the energy > constant as the algebraic constraint. But there might be more > mathematical problems with that approach that I'm not seeing (I'm not > an expert). At least you could try setting such a thing with the Radau > integrator in PyDSTool, and there are DAE examples provided. > > Feel free to share your equations and code in more detail if you're > still struggling. With small integration steps you might be fine using > RK4 with the energy correction at boundaries. > > -Rob > > On Mon, May 14, 2012 at 2:12 PM, Sara Gallian wrote: >> Dear Lou, >> I'm just moving a bunch of electrons in a static b field >> dv/dt = q/m v x B >> B depends on the position, in general >> They move in this field, without interacting, till they bounce off a wall, >> that specularly reflects them >> Yes, the dynamic is conservative. >> I could just write a RK4 and check the position at every ts, but I was >> hoping to be able to use a better tool >> Thanks >> Sara >> >> Sent from my iPhone >> >> On 14.05.2012, at 20:03, Lou Pecora wrote: >> >> Hi, Sara, >> >> Can you give more information? >> >> Are the particles in a potential or is it free flight between hard wall >> collisions? Do the particles interact? I assume the dynamics are >> conservative (Hamiltonian). Is that correct? >> >> -- Lou Pecora, my views are my own. >> ________________________________ >> From: Rob Clewley >> To: SciPy Users List >> Sent: Monday, May 14, 2012 1:22 PM >> Subject: Re: [SciPy-User] Trajectory Integration via scipy.integrate or >> PyDSTool? >> >> Hi Sara, >> >> I think you should use a tool that's appropriate for the task. 
With an >> hour of work and help on this list or the PyDSTool forum at >> http://sourceforge.net/projects/pydstool/forums/forum/472291 you >> should be able to get your problem working quickly with PyDSTool. I >> expect your problem is easy to solve in PyDSTool, and difficult to do >> with odeint. >> >> You can still use scipy's VODE in PyDSTool and you won't need any >> fancy installation. Check out the file vode_event_test1.py in the >> tests directory. Writing the vector field is simple, but in case you >> have specific problems setting up yours then you should send me your >> code on list so that I can help. I'd be interested to see what the >> equations look like. >> >> It's very simple to define a terminal event, just copy and adapt the >> code in the file. You can write a loop so that whenever integration >> has stopped because of an event, you reverse the velocity IC and >> restart. >> >> -Rob >> >> On Mon, May 14, 2012 at 12:59 PM, Sara Gallian wrote: >>> Hi all, >>> I'm trying to integrate particle trajectories in a finite domain, and I >>> need to make them "reflect" at the boundaries (i.e. reverse the velocity >>> perpendicular to the plane they collided with). I started by using >>> scipy.integrate.odeint , but since the integration steps are variable, >>> simply checking the position and reversing the velocity won't work! >>> Can anyone suggest the quickest way to obtain this? would Vode be able to >>> handle this, or should I try to learn to use PyDSTool? >>> I'm running pretty late on a deadline, so any suggestion is more than >>> appreciated :) >>> Thanks! >>> Sara >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> -- >> Robert Clewley, Ph.D. >> Assistant Professor >> Neuroscience Institute and >> Department of Mathematics and Statistics >> Georgia State University >> PO Box 5030 >> Atlanta, GA 30302, USA >> >> tel: 404-413-6420 fax: 404-413-5446 >> http://www2.gsu.edu/~matrhc >> http://neuroscience.gsu.edu/rclewley.html >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Robert Clewley, Ph.D. 
> Assistant Professor
> Neuroscience Institute and
> Department of Mathematics and Statistics
> Georgia State University
> PO Box 5030
> Atlanta, GA 30302, USA
>
> tel: 404-413-6420 fax: 404-413-5446
> http://www2.gsu.edu/~matrhc
> http://neuroscience.gsu.edu/rclewley.html
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From cgohlke at uci.edu  Mon May 14 17:47:06 2012
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Mon, 14 May 2012 14:47:06 -0700
Subject: [SciPy-User] 2D phase unwrapping
In-Reply-To: <5B6184CF-C72B-4B6D-AABE-306A7FD69EB7@gmail.com>
References: <5B6184CF-C72B-4B6D-AABE-306A7FD69EB7@gmail.com>
Message-ID: <4FB17D5A.40806@uci.edu>

On 5/14/2012 8:11 AM, Gregor Thalhammer wrote:
>
> Am 12.5.2012 um 11:58 schrieb Ralf Gommers:
>
>>
>> On Fri, May 11, 2012 at 9:29 PM, Gregor Thalhammer
>> > wrote:
>>
>> Hi all,
>>
>> I have been searching for an implementation for phase unwrapping
>> in 2D (and possibly also 3D). I found this old thread in the numpy
>> mailing list
>>
>> http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038873.html
>>
>> which mentions the C implementation from GERI:
>> http://www.ljmu.ac.uk/GERI/90202.htm
>>
>> While searching I found remarks by the authors that these
>> algorithms have been incorporated into scipy, however I am unable
>> to find them in current scipy or numpy. Am I missing something
>> obvious?
>> Instead I found this wrapper: https://github.com/pointtonull/pyunwrap
>> This seems to be based on a wrapper already mentioned in the above
>> mentioned discussion, but the links mentioned there are dead. I
>> added some setup.py, and with some small modifications I managed
>> to compile the extension both on OS X and Windows.
>>
>> I would like to see these algorithms included in scipy, and I am
>> willing to work on this.
>>
>> Great!
>>
>> So now my questions: In the old numpy-discussion thread licensing
>> issues are raised, can anybody tell more? On the GERI homepage
>> they distribute their code under a non-commercial-use license, but
>> the authors seem to agree on incorporating their code into scipy.
>> Except from this, what else would be required?
>>
>> The emails about the license seem to be very clear to me, there's no
>> issue. So all it would take is someone submitting a complete wrapper
>> (with the normal requirements on docs/tests). The only things to
>> decide are then: do we want it in scipy (+1 from me), where to put it
>> and what the API should look like.
>
> Over the weekend I worked on the 2D and 3D phase unwrappers. I put a
> first version on github:
> https://github.com/geggo/phase-unwrap
> git://github.com/geggo/phase-unwrap.git
> I tested it on OS X with gcc-4.2, Python 2.7 and recent cython. (it
> crashes on Windows (MSVC 9), probably related to cython, but that's
> another story).
>
> Seems to work ok, only one basic test, no docs yet.
>
> The interface is quite simple:
>
> def unwrap(wrapped_array,
>            wrap_around_axis_0 = False,
>            wrap_around_axis_1 = False,
>            wrap_around_axis_2 = False):
>
> it accepts a 2d or 3d numpy array or a masked array, and returns a
> masked array if one was given. A fresh float32 array is returned. The
> additional arguments can be used to specify cyclic boundary conditions
> along the given axis. The calculations are internally performed with
> float32.
It should be quite straightforward to extend this to other > types, but I think for the most common use cases for this algorithm > (unwrapping of noisy data) a float32 array is sufficient. > > I guess a proper place in scipy would be to add this extension to > scipy.ndimage. > > Please, any comments are welcome. And also some hints, why it crashes on > Windows - or how to debug. > > Gregor > Explicitly defining the C function return types (e.g. as void) fixes the crash for me. Christoph -------------- next part -------------- diff --git a/unwrap2D/Hussein_3D_unwrapper_with_mask_and_wrap_around_option.c b/unwrap2D/Hussein_3D_unwrapper_with_mask_and_wrap_around_option.c index e0e4cb9..08f51aa 100755 --- a/unwrap2D/Hussein_3D_unwrapper_with_mask_and_wrap_around_option.c +++ b/unwrap2D/Hussein_3D_unwrapper_with_mask_and_wrap_around_option.c @@ -1011,6 +1011,7 @@ void returnVolume(VOXELM *voxel, float *unwrappedVolume, int volume_width, int } //the main function of the unwrapper +void unwrap3D(float* wrapped_volume, float* unwrapped_volume, unsigned char* input_mask, int volume_width, int volume_height, int volume_depth, int wrap_around_x, int wrap_around_y, int wrap_around_z) diff --git a/unwrap2D/Miguel_2D_unwrapper_with_mask_and_wrap_around_option.c b/unwrap2D/Miguel_2D_unwrapper_with_mask_and_wrap_around_option.c index 3017c52..2e851eb 100755 --- a/unwrap2D/Miguel_2D_unwrapper_with_mask_and_wrap_around_option.c +++ b/unwrap2D/Miguel_2D_unwrapper_with_mask_and_wrap_around_option.c @@ -680,6 +680,7 @@ void returnImage(PIXELM *pixel, float *unwrapped_image, int image_width, int im } //the main function of the unwrapper +void unwrap2D(float* wrapped_image, float* UnwrappedImage, unsigned char* input_mask, int image_width, int image_height, int wrap_around_x, int wrap_around_y) diff --git a/unwrap2D/unwrap2D.pyx b/unwrap2D/unwrap2D.pyx index 9bd901d..dab5216 100755 --- a/unwrap2D/unwrap2D.pyx +++ b/unwrap2D/unwrap2D.pyx @@ -1,4 +1,4 @@ -cdef extern unwrap2D(float* wrapped_image, +cdef extern void unwrap2D(float* wrapped_image, float* unwrapped_image, unsigned char* input_mask, int image_width, int image_height, diff --git a/unwrap2D/unwrap3D.pyx b/unwrap2D/unwrap3D.pyx index 7a0aee9..aa18b37 100644 --- a/unwrap2D/unwrap3D.pyx +++ b/unwrap2D/unwrap3D.pyx @@ -1,4 +1,4 @@ -cdef extern unwrap3D(float* wrapped_volume, +cdef extern void unwrap3D(float* wrapped_volume, float* unwrapped_volume, unsigned char* input_mask, int image_width, int image_height, int volume_depth, From rob.clewley at gmail.com Mon May 14 17:56:04 2012 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 14 May 2012 17:56:04 -0400 Subject: [SciPy-User] Trajectory Integration via scipy.integrate or PyDSTool? In-Reply-To: References: <1337018622.1991.YahooMailNeo@web160306.mail.bf1.yahoo.com> Message-ID: On Mon, May 14, 2012 at 4:47 PM, Sara Gallian wrote: > I have already set both paths in my .profile : [SNIP] > but I receive an error when I try to import the module. > I don't understand why.. For purely installation issues you should post to the PyDSTool user forum in any follow up questions you still have. I don't think the scipy list wants to hear about this non-scipy issue! Briefly, though, if you look at the installation notes, it also shows how you should write and place a PyDSTool.pth file in python's site-packages folder. That is quite possibly your remaining problem. 
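For reference, a .pth file is nothing magic: it is a plain text file in
site-packages whose lines name directories to add to sys.path (blank
lines and lines starting with # are skipped). A minimal sketch, assuming
PyDSTool was unpacked in /usr/local as in your .profile -- the EPD
site-packages location below is a guess, use whatever your EPD python
actually reports:

    # PyDSTool.pth, placed in e.g.
    # /Library/Frameworks/EPD64.framework/Versions/Current/lib/python2.7/site-packages/
    /usr/local

You can then check that the right python picks it up with:

    python -c "import sys; print '\n'.join(sys.path)"
    python -c "import PyDSTool; print PyDSTool.__file__"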
-Rob > On 14.05.2012, at 22:28, Rob Clewley wrote: > >> Sara, >> >> Your Hamiltonian system really requires a symplectic integrator or >> similar, as Lou mentions, that truly respects energy conservation. A >> regular ODE integrator will create numerical error in the total >> energy, as it is oblivious. In lieu of such a thing, at the very least >> you can make a correction at those detected events in your loop to >> restore the total energy. >> >> Anyway, regarding your installation. You should change your PATH to >> point to whichever version of python you are using, i.e. the version >> that has numpy, scipy etc. installed through EPD. It looks like you >> are using a Mac, so if you've installed EPD you should point to that >> version of python with your PATH. /sw/ is not a valid directory with >> EPD. If EPD installs outside of the Mac Framework then you will need >> to change the path, but if it uses the Framework python then there is >> nothing to change. >> >> Also, and I haven't given this much thought, but in the absence of a >> geometric integrator, you could possibly even solve the system more >> accurately as a differential-algebraic system (DAE), using the energy >> constant as the algebraic constraint. But there might be more >> mathematical problems with that approach that I'm not seeing (I'm not >> an expert). At least you could try setting such a thing with the Radau >> integrator in PyDSTool, and there are DAE examples provided. >> >> Feel free to share your equations and code in more detail if you're >> still struggling. With small integration steps you might be fine using >> RK4 with the energy correction at boundaries. >> >> -Rob >> >> On Mon, May 14, 2012 at 2:12 PM, Sara Gallian wrote: >>> Dear Lou, >>> I'm just moving a bunch of electrons in a static b field >>> dv/dt = q/m v x B >>> B depends on the position, in general >>> They move in this field, without interacting, till they bounce off a wall, >>> that specularly reflects them >>> Yes, the dynamic is conservative. >>> I could just write a RK4 and check the position at every ts, but I was >>> hoping to be able to use a better tool >>> Thanks >>> Sara >>> >>> Sent from my iPhone >>> >>> On 14.05.2012, at 20:03, Lou Pecora wrote: >>> >>> Hi, Sara, >>> >>> Can you give more information? >>> >>> Are the particles in a potential or is it free flight between hard wall >>> collisions? ?Do the particles interact? ?I assume the dynamics are >>> conservative (Hamiltonian). Is that correct? >>> >>> -- Lou Pecora, my views are my own. >>> ________________________________ >>> From: Rob Clewley >>> To: SciPy Users List >>> Sent: Monday, May 14, 2012 1:22 PM >>> Subject: Re: [SciPy-User] Trajectory Integration via scipy.integrate or >>> PyDSTool? >>> >>> Hi Sara, >>> >>> I think you should use a tool that's appropriate for the task. With an >>> hour of work and help on this list or the PyDSTool forum at >>> http://sourceforge.net/projects/pydstool/forums/forum/472291 you >>> should be able to get your problem working quickly with PyDSTool. I >>> expect your problem is easy to solve in PyDSTool, and difficult to do >>> with odeint. >>> >>> You can still use scipy's VODE in PyDSTool and you won't need any >>> fancy installation. Check out the file vode_event_test1.py in the >>> tests directory. Writing the vector field is simple, but in case you >>> have specific problems setting up yours then you should send me your >>> code on list so that I can help. I'd be interested to see what the >>> equations look like. 
>>> >>> It's very simple to define a terminal event, just copy and adapt the >>> code in the file. You can write a loop so that whenever integration >>> has stopped because of an event, you reverse the velocity IC and >>> restart. >>> >>> -Rob >>> >>> On Mon, May 14, 2012 at 12:59 PM, Sara Gallian wrote: >>>> Hi all, >>>> I'm trying to integrate particle trajectories in a finite domain, and I >>>> need to make them "reflect" at the boundaries (i.e. reverse the velocity >>>> perpendicular to the plane they collided with). I started by using >>>> ?scipy.integrate.odeint , but since the integration steps are variable, >>>> simply checking the position and reversing the velocity won't work! >>>> Can anyone suggest the quickest way to obtain this? would Vode be able to >>>> handle this, or should I try to learn to use ?PyDSTool? >>>> I'm running pretty late on a deadline, so any suggestion is more than >>>> appreciated :) >>>> Thanks! >>>> Sara >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> -- >>> Robert Clewley, Ph.D. >>> Assistant Professor >>> Neuroscience Institute and >>> Department of Mathematics and Statistics >>> Georgia State University >>> PO Box 5030 >>> Atlanta, GA 30302, USA >>> >>> tel: 404-413-6420 fax: 404-413-5446 >>> http://www2.gsu.edu/~matrhc >>> http://neuroscience.gsu.edu/rclewley.html >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> >> >> -- >> Robert Clewley, Ph.D. >> Assistant Professor >> Neuroscience Institute and >> Department of Mathematics and Statistics >> Georgia State University >> PO Box 5030 >> Atlanta, GA 30302, USA >> >> tel: 404-413-6420 fax: 404-413-5446 >> http://www2.gsu.edu/~matrhc >> http://neuroscience.gsu.edu/rclewley.html >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Robert Clewley, Ph.D. 
Assistant Professor Neuroscience Institute and Department of Mathematics and Statistics Georgia State University PO Box 5030 Atlanta, GA 30302, USA tel: 404-413-6420 fax: 404-413-5446 http://www2.gsu.edu/~matrhc http://neuroscience.gsu.edu/rclewley.html From gregor.thalhammer at gmail.com Tue May 15 03:17:19 2012 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Tue, 15 May 2012 09:17:19 +0200 Subject: [SciPy-User] 2D phase unwrapping In-Reply-To: <4FB17D5A.40806@uci.edu> References: <5B6184CF-C72B-4B6D-AABE-306A7FD69EB7@gmail.com> <4FB17D5A.40806@uci.edu> Message-ID: <2049A88E-4CB3-444E-B239-1D122D865967@gmail.com> Am 14.5.2012 um 23:47 schrieb Christoph Gohlke: > > > On 5/14/2012 8:11 AM, Gregor Thalhammer wrote: >> >> Am 12.5.2012 um 11:58 schrieb Ralf Gommers: >> >>> >>> >>> On Fri, May 11, 2012 at 9:29 PM, Gregor Thalhammer >>> > wrote: >>> >>> Hi all, >>> >>> I have beend searching for an implementation for phase unwrapping >>> in 2D (and possibly also 3D). I found this old thread in the numpy >>> mailing list >>> >>> http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038873.html >>> >>> which mentions the C implementation from GERI: >>> http://www.ljmu.ac.uk/GERI/90202.htm >>> >>> While searching I found remarks by the authors that these >>> algorithms have been incorporated into scipy, however I am unable >>> to find them in current scipy or numpy. Am I missing something >>> obvious? >>> Instead I found this wrapper: https://github.com/pointtonull/pyunwrap >>> This seems to be based on a wrapper already mentioned in the above >>> mentioned discussion, but the links mentioned there are dead. I >>> added some setup.py, and with some small modifications I managed >>> to compile the extension both on OS X and Windows. >>> >>> I would like to see these algorithms included in scipy, and I am >>> willing to work on this. >>> >>> >>> Great! >>> >>> So now my questions: In the old numpy-discussion thread licensing >>> issues are raised, can anybody tell more? On the GERI homepage >>> they distribute their code under a non-commercial-use license, but >>> the authors seem to agree on incorporating their code into scipy. >>> Except from this, what else would be required? >>> >>> >>> The emails about the license seem to be very clear to me, there's no >>> issue. So all it would take is someone submitting a complete wrapper >>> (with the normal requirements on docs/tests). The only things to >>> decide are then: do we want it in scipy (+1 from me), where to put it >>> and what the API should look like. >> >> Over the weekend I worked on the 2D and 3D phase unwrappers. I put a >> first version on github: >> https://github.com/geggo/phase-unwrap >> git://github.com/geggo/phase-unwrap.git >> I tested it on OS X with gcc-4.2, Python 2.7 and recent cython. (it >> crashes on Windows (MSVC 9), probably related to cython, but thats >> another story). >> >> Seems to work ok, only one basic test, no docs yet. >> >> The interface is quite simple: >> >> defunwrap(wrapped_array, >> >> wrap_around_axis_0 = False, >> wrap_around_axis_1 = False, >> wrap_around_axis_2 = False): >> >> it accepts an 2d or 3d numpy array or an masked array, and returns an >> masked array if one was given. A fresh float32 array is returned. The >> additional arguments can be used to specify cyclic boundary conditions >> along the given axis. The calculations are internally performed with >> float32. 
It should be quite straightforward to extend this to other >> types, but I think for the most common use cases for this algorithm >> (unwrapping of noisy data) a float32 array is sufficient. >> >> I guess a proper place in scipy would be to add this extension to >> scipy.ndimage. >> >> Please, any comments are welcome. And also some hints, why it crashes on >> Windows - or how to debug. >> >> Gregor >> > > Explicitly defining the C function return types (e.g. as void) fixes the crash for me. > > Christoph Great! Thanks a lot for spotting this. I would have never found this by myself. Gregor From a.klein at science-applied.nl Tue May 15 03:55:01 2012 From: a.klein at science-applied.nl (Almar Klein) Date: Tue, 15 May 2012 09:55:01 +0200 Subject: [SciPy-User] ANN: IEP 3.0 - the Interactive Editor for Python Message-ID: Dear all, We're pleased to announce version 3.0 of the Interactive Editor for Python. IEP is a cross-platform Python IDE focused on interactivity and introspection, which makes it very suitable for scientific computing. Its practical design is aimed at simplicity and efficiency. IEP is written in Python 3 and Qt. Binaries are available for Windows, Linux, and Mac. Website: http://code.google.com/p/iep/ Discussion group: http://groups.google.com/group/iep_ Release notes: http://code.google.com/p/iep/wiki/Release#version_3.0_(14-05-2012) Regards, Rob, Ludo & Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From gerrit.holl at ltu.se Tue May 15 04:17:37 2012 From: gerrit.holl at ltu.se (Gerrit Holl) Date: Tue, 15 May 2012 10:17:37 +0200 Subject: [SciPy-User] Using scipy.io.loadmat to read Matlab containers.Map object? In-Reply-To: References: Message-ID: On 14 May 2012 20:36, Matthew Brett wrote: > Hi, > > On Mon, May 14, 2012 at 7:51 AM, Gerrit Holl wrote: >> Hi, >> >> is it possible to use scipy.io.loadmat to read a Matlab containers.Map object? >> >>>> cm = containers.Map(); >>>> cm('abc') = 42; >>>> save('/tmp/test.mat', 'cm'); >> >> >> In [15]: M = scipy.io.loadmat('/tmp/test.mat') >> >> In [16]: M.keys() >> Out[16]: ['__function_workspace__', 'None', '__version__', >> '__header__', '__globals__'] >> >> In [17]: M["None"] >> Out[17]: >> MatlabOpaque([ ('cm', 'MCOS', 'containers.Map', [[3707764736L], [2L], >> [1L], [1L], [1L], [1L]])], >> ? ? ?dtype=[('s0', '|O8'), ('s1', '|O8'), ('s2', '|O8'), ('arr', '|O8')]) > > Just because I didn't know what a container.Map was, it looks like > it's a MATLAB dict equivalent: > > http://www.mathworks.com/help/techdoc/matlab_prog/brqqo69.html > > I see that I or someone thought that types labeled as MatlabOpaque > were MATLAB functions: > > https://github.com/scipy/scipy/blob/master/scipy/io/matlab/mio5_params.py#L51 > > but at least, the internal structure looks rather confusing. ?Do you > recognize any of the structure in what got loaded? ?Does it load OK > when loaded in Octave? I don't recognise anything, and nor does Octave. containers.Map are a relatively new type in Matlab, not very well known and quite underused. They're indeed very close to being a Python dict, except that all keys must be of the same type and can only be numeric or character arrays. But that means that Matlab->Python is a well-defined translation. It appears the answer to my question is "no". Is this something to file an 'issue' for, I suppose it would be desirable to be able to read those? Gerrit. -- Gerrit Holl PhD student at Division of Space Technology, Lule? 
University of Technology, Kiruna, Sweden
http://www.sat.ltu.se/members/gerrit/

From matthew.brett at gmail.com  Tue May 15 04:37:58 2012
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 15 May 2012 01:37:58 -0700
Subject: [SciPy-User] Using scipy.io.loadmat to read Matlab containers.Map object?
In-Reply-To: References: Message-ID: 

Hi,

On Tue, May 15, 2012 at 1:17 AM, Gerrit Holl wrote:
> On 14 May 2012 20:36, Matthew Brett wrote:
>> Hi,
>>
>> On Mon, May 14, 2012 at 7:51 AM, Gerrit Holl wrote:
>>> Hi,
>>>
>>> is it possible to use scipy.io.loadmat to read a Matlab containers.Map object?
>>>
>>>>> cm = containers.Map();
>>>>> cm('abc') = 42;
>>>>> save('/tmp/test.mat', 'cm');
>>>
>>> In [15]: M = scipy.io.loadmat('/tmp/test.mat')
>>>
>>> In [16]: M.keys()
>>> Out[16]: ['__function_workspace__', 'None', '__version__',
>>> '__header__', '__globals__']
>>>
>>> In [17]: M["None"]
>>> Out[17]:
>>> MatlabOpaque([ ('cm', 'MCOS', 'containers.Map', [[3707764736L], [2L],
>>> [1L], [1L], [1L], [1L]])],
>>>       dtype=[('s0', '|O8'), ('s1', '|O8'), ('s2', '|O8'), ('arr', '|O8')])
>>
>> Just because I didn't know what a container.Map was, it looks like
>> it's a MATLAB dict equivalent:
>>
>> http://www.mathworks.com/help/techdoc/matlab_prog/brqqo69.html
>>
>> I see that I or someone thought that types labeled as MatlabOpaque
>> were MATLAB functions:
>>
>> https://github.com/scipy/scipy/blob/master/scipy/io/matlab/mio5_params.py#L51
>>
>> but at least, the internal structure looks rather confusing. Do you
>> recognize any of the structure in what got loaded? Does it load OK
>> when loaded in Octave?
>
> I don't recognise anything, and nor does Octave.
>
> containers.Map are a relatively new type in Matlab, not very well
> known and quite underused. They're indeed very close to being a Python
> dict, except that all keys must be of the same type and can only be
> numeric or character arrays. But that means that Matlab->Python is a
> well-defined translation.
>
> It appears the answer to my question is "no". Is this something to
> file an 'issue' for, I suppose it would be desirable to be able to
> read those?

Yes, er, the answer is 'no', sorry about that. But, please do file an
issue for it. It would be good to be able to read them, but I guess it
will be hard work getting there. Do you know of any documentation for
the binary (.mat file) format of these things? You might get an idea
of where we are by scanning the comments at the top of

https://github.com/scipy/scipy/blob/master/scipy/io/matlab/mio5.py

Cheers,

Matthew

From servant.mathieu at gmail.com  Wed May 16 12:20:27 2012
From: servant.mathieu at gmail.com (servant mathieu)
Date: Wed, 16 May 2012 18:20:27 +0200
Subject: [SciPy-User] is it possible to constrain the scipy.optimize.curve_fit function?
Message-ID: 

Dear scipy users,

I'm trying to fit to data a power law of the form :

def func (x, a,b, r):
    return r + a*np.power(x,-b)

I would like to constrain the curve_fit routine to only allow
positive parameter values. How is it possible to do so?

Kind regards,
Mathieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
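For context, this is what a plain, unconstrained fit of that model looks
like with scipy.optimize.curve_fit -- the data here are synthetic and the
starting values arbitrary; nothing in this call keeps a, b or r from
going negative, which is the problem the replies below address:

    import numpy as np
    from scipy.optimize import curve_fit

    def func(x, a, b, r):
        return r + a * np.power(x, -b)

    x = np.linspace(1.0, 10.0, 50)
    y = func(x, 2.0, 0.5, 1.0) + 0.01 * np.random.randn(x.size)

    popt, pcov = curve_fit(func, x, y, p0=(1.0, 1.0, 1.0))
    print popt  # signs are unconstrained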
From david_baddeley at yahoo.com.au  Wed May 16 17:02:30 2012
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Wed, 16 May 2012 14:02:30 -0700 (PDT)
Subject: [SciPy-User] is it possible to constrain the scipy.optimize.curve_fit function?
In-Reply-To: References: Message-ID: <1337202150.22855.YahooMailNeo@web113414.mail.gq1.yahoo.com>

The quick and dirty way is to do a variable substitution with the square
of your parameter and fit, e.g. r + (a**2)*np.power(x, -b**2)
You can take the sqrt of the parameters later.

________________________________
From: servant mathieu
To: scipy-user at scipy.org
Sent: Thursday, 17 May 2012 4:20 AM
Subject: [SciPy-User] is it possible to constrain the scipy.optimize.curve_fit function?

Dear scipy users,

I'm trying to fit to data a power law of the form :

def func (x, a,b, r):
    return r + a*np.power(x,-b)

I would like to constrain the curve_fit routine to only allow
positive parameter values. How is it possible to do so?

Kind regards,
Mathieu

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From markus.baden at gmail.com  Wed May 16 20:35:52 2012
From: markus.baden at gmail.com (Markus Baden)
Date: Thu, 17 May 2012 08:35:52 +0800
Subject: [SciPy-User] scipy.odr - Goodness of fit and parameter estimation for explicit orthogonal distance regression
In-Reply-To: References: Message-ID: 

> scipy.odr provides an estimate of the covariance matrix and standard
> deviations of the parameter estimates. Getting the confidence interval
> for a parameter is just a matter of scaling up the standard deviations
> by the appropriate t-distribution value with nobs-nparams degrees of
> freedom. A paper by the ODRPACK implementors gives the formula
> explicitly on page 6:
>
> http://www.mechanicalkern.com/static/odr_vcv.pdf
>
> It also has more information on how the covariance matrix is calculated.

Thanks for the pointer to the paper. That is exactly what I was looking
for. Thanks also for the other clarification (and making scipy.odr!)

Regards,

Markus
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From servant.mathieu at gmail.com  Thu May 17 05:00:15 2012
From: servant.mathieu at gmail.com (servant mathieu)
Date: Thu, 17 May 2012 11:00:15 +0200
Subject: [SciPy-User] is it possible to constrain the scipy.optimize.curve_fit function?
In-Reply-To: <1337202150.22855.YahooMailNeo@web113414.mail.gq1.yahoo.com>
References: <1337202150.22855.YahooMailNeo@web113414.mail.gq1.yahoo.com>
Message-ID: 

Hi David,

That's really dirty but it works, thanks :-). More generally, is there a
standard procedure to constrain a specific parameter to fall in a
specific range (e.g., [200, 400]) during the fitting process?

Mathieu

2012/5/16 David Baddeley
> The quick and dirty way is to do a variable substitution with the square
> of your parameter and fit, e.g. r + (a**2)*np.power(x, -b**2)
> You can take the sqrt of the parameters later.
>
> ------------------------------
> *From:* servant mathieu
> *To:* scipy-user at scipy.org
> *Sent:* Thursday, 17 May 2012 4:20 AM
> *Subject:* [SciPy-User] is it possible to constrain the
> scipy.optimize.curve_fit function?
>
> Dear scipy users,
>
> I'm trying to fit to data a power law of the form :
>
> def func (x, a,b, r):
>     return r + a*np.power(x,-b)
>
> I would like to constrain the curve_fit routine to only allow
> positive parameter values. How is it possible to do so?
>
> Kind regards,
> Mathieu
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
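Extending David's substitution to a range constraint: fit an
unconstrained internal parameter and map it into the interval with a
smooth, monotonic transform such as a logistic. This is a hedged sketch,
not a standard scipy recipe -- the bounds 200/400 match the question, the
data are synthetic, and note that the reported covariance then refers to
the internal parameters, so error bars on the real ones need the chain
rule:

    import numpy as np
    from scipy.optimize import curve_fit

    lo, hi = 200.0, 400.0

    def to_interval(p):
        # smooth, invertible map from (-inf, inf) onto (lo, hi)
        return lo + (hi - lo) / (1.0 + np.exp(-p))

    def func(x, a, b, p):
        r = to_interval(p)                        # r stays in (200, 400)
        return r + (a**2) * np.power(x, -(b**2))  # squares keep a, b >= 0

    x = np.linspace(1.0, 10.0, 50)
    y = 300.0 + 2.0 * np.power(x, -0.5) + 0.1 * np.random.randn(x.size)

    popt, pcov = curve_fit(func, x, y, p0=(1.0, 1.0, 0.0))
    a_fit, b_fit = popt[0]**2, popt[1]**2
    r_fit = to_interval(popt[2])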
From vaggi.federico at gmail.com  Thu May 17 05:51:05 2012
From: vaggi.federico at gmail.com (federico vaggi)
Date: Thu, 17 May 2012 11:51:05 +0200
Subject: [SciPy-User] is it possible to constrain the scipy.optimize.curve_fit function?
Message-ID: 

Afaik, there was a big discussion about this a while ago, and the short
answer is, currently there is no 'automatic' way to do it. However, in
your case, it's pretty easy.

Simply define:

def func (x, a,b, r):
    a = abs(a)
    b = abs(b)
    r = abs(r)
    return r + a*np.power(x,-b)

And that will do the trick. If you need more complex boundaries, you can
simply use a combination of periodic functions with a given amplitude or
what have you. Alternatively, there are *a lot* of optimization libraries
available for Python that are not a part of scipy that offer the
possibility to specify boundaries.

For example:

http://newville.github.com/lmfit-py/
http://ab-initio.mit.edu/wiki/index.php/NLopt_Python_Reference

Federico

Date: Wed, 16 May 2012 18:20:27 +0200
> From: servant mathieu
> Subject: [SciPy-User] is it possible to constrain the
> scipy.optimize.curve_fit function?
> To: scipy-user at scipy.org
> Message-ID:
> Content-Type: text/plain; charset="iso-8859-1"
>
> Dear scipy users,
>
> I'm trying to fit to data a power law of the form :
>
> def func (x, a,b, r):
>     return r + a*np.power(x,-b)
>
> I would like to constrain the curve_fit routine to only allow
> positive parameter values. How is it possible to do so?
>
> Kind regards,
> Mathieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From david_baddeley at yahoo.com.au  Thu May 17 15:47:43 2012
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Thu, 17 May 2012 12:47:43 -0700 (PDT)
Subject: [SciPy-User] is it possible to constrain the scipy.optimize.curve_fit function?
Message-ID: <1337284063.28406.BPMail_high_noncarrier@web113410.mail.gq1.yahoo.com>

I'd caution against using abs, as abs(x) is not differentiable around 0
and could cause a gradient descent solver to get stuck/confused. x**2 on
the other hand is fully differentiable, but requires you to take the sqrt
of the parameters after fitting.

------------------------------
On Thu, May 17, 2012 9:51 PM NZST federico vaggi wrote:

>Afaik,
>
>there was a big discussion about this a while ago, and the short answer is,
>currently there is no 'automatic' way to do it. However, in your case,
>it's pretty easy.
>
>Simply define:
>
>def func (x, a,b, r):
>    a = abs(a)
>    b = abs(b)
>    r = abs(r)
>    return r + a*np.power(x,-b)
>
>And that will do the trick. If you need more complex boundaries, you
>can simply use a combination of periodic functions with a given amplitude or
>what have you. Alternatively, there are *a lot* of optimization libraries
>available for Python that are not a part of scipy that offer the
>possibility to specify boundaries.
>
>For example:
>
>http://newville.github.com/lmfit-py/
>http://ab-initio.mit.edu/wiki/index.php/NLopt_Python_Reference
>
>Federico
>
>
>Date: Wed, 16 May 2012 18:20:27 +0200
>> From: servant mathieu
>> Subject: [SciPy-User] is it possible to constrain the
>> scipy.optimize.curve_fit function?
>> To: scipy-user at scipy.org
>> Message-ID:
>> Content-Type: text/plain; charset="iso-8859-1"
>>
>> Dear scipy users,
>>
>> I'm trying to fit to data a power law of the form :
>>
>> def func (x, a,b, r):
>>     return r + a*np.power(x,-b)
>>
>> I would like to constrain the curve_fit routine to only allow
>> positive parameter values. How is it possible to do so?
>>
>> Kind regards,
>>
>> Mathieu

From pav at iki.fi  Thu May 17 17:02:12 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 17 May 2012 23:02:12 +0200
Subject: [SciPy-User] is it possible to constrain the scipy.optimize.curve_fit function?
In-Reply-To: References: Message-ID: 

17.05.2012 11:51, federico vaggi wrote:
[clip]
> And that will do the trick. If you need more complex boundaries, you
> can simply use a combination of periodic functions with a given amplitude
> or what have you. Alternatively, there are *a lot* of optimization
> libraries available for Python that are not a part of scipy that offer
> the possibility to specify boundaries.

Note that Scipy has several solvers that support bounds in optimization
problems --- to use those for least squares, you'll just need to do
"return (r**2).sum()" yourself. This is AFAIK also what lmfit does, in
addition to clipping parameter values within the bounds in the residual
function (I'm not sure how robust the results such clipping produces are).

Pauli
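As a concrete sketch of the bounded-solver route Pauli describes -- here
scipy.optimize.fmin_l_bfgs_b minimizing the summed squared residuals of
the power-law model from this thread; the data, starting point and bounds
are placeholders:

    import numpy as np
    from scipy.optimize import fmin_l_bfgs_b

    def model(x, a, b, r):
        return r + a * np.power(x, -b)

    def cost(params, x, y):
        a, b, r = params
        resid = y - model(x, a, b, r)
        return (resid**2).sum()

    x = np.linspace(1.0, 10.0, 50)
    y = model(x, 2.0, 0.5, 300.0) + 0.1 * np.random.randn(x.size)

    bounds = [(0.0, None), (0.0, None), (200.0, 400.0)]  # a,b >= 0; 200 <= r <= 400
    popt, fval, info = fmin_l_bfgs_b(cost, x0=(1.0, 1.0, 250.0),
                                     args=(x, y), bounds=bounds,
                                     approx_grad=True)
    print popt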
From sergio_r at mail.com  Fri May 18 16:13:00 2012
From: sergio_r at mail.com (Sergio Rojas)
Date: Fri, 18 May 2012 16:13:00 -0400
Subject: [SciPy-User] about scipy performance
Message-ID: <20120518201300.17900@gmx.com>

Running the performance test of scipy presented at
[ http://software.intel.com/en-us/articles/numpy-scipy-with-mkl/ ]
I found it strange a decrease in the time that the computation takes for
a large K value (see for instance the value of tm corresponding to k=192
and k=200). Does this behaviour make sense? Is there any test suite
available to better check the performance of a scipy installation?

Sergio
PD. the output of np.show_config() is shown below the data.

K    TM         GFLOPS
64,  0.2182408, 34.91556
80,  0.2414728, 39.50755
96,  0.2986258, 38.37579
104, 0.3231602, 38.43295
112, 0.3429056, 39.01948
120, 0.3645972, 39.33108
128, 0.4074092, 37.55438
144, 0.445568,  38.6473
160, 0.4914636, 38.9449
176, 0.5274322, 39.92931
192, 0.6281674, 36.5826
200, 0.6149654, 38.92902
208, 0.6355928, 39.17603
224, 0.673929,  39.79648
240, 0.7244088, 39.67373
256, 0.8612222, 35.60057
384, 1.185954,  38.80421

atlas_threads_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib']
    define_macros = [('ATLAS_INFO', '"\\"3.9.72\\""')]
    language = f77
    include_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include']
blas_opt_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib']
    define_macros = [('ATLAS_INFO', '"\\"3.9.72\\""')]
    language = c
    include_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include']
atlas_blas_threads_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib']
    define_macros = [('ATLAS_INFO', '"\\"3.9.72\\""')]
    language = c
    include_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include']
lapack_opt_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib']
    define_macros = [('ATLAS_INFO', '"\\"3.9.72\\""')]
    language = f77
    include_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include']
lapack_mkl_info:
    NOT AVAILABLE
blas_mkl_info:
    NOT AVAILABLE
mkl_info:
    NOT AVAILABLE
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From david_baddeley at yahoo.com.au  Fri May 18 16:28:19 2012
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Fri, 18 May 2012 13:28:19 -0700 (PDT)
Subject: [SciPy-User] about scipy performance
Message-ID: <1337372899.50497.BPMail_high_noncarrier@web113409.mail.gq1.yahoo.com>

Is this reproducible, or could it just be timing noise? In general
though, I'm not sure you can expect performance to be strictly monotonic
in k - there are all sorts of issues (e.g. cache coherency) which can
make some sizes more efficient than others. The departures from
monotonicity that you see are really rather small compared to those which
are present in e.g. fftw.

Cheers,
David

------------------------------
On Sat, May 19, 2012 8:13 AM NZST Sergio Rojas wrote:

>Running the performance test of scipy presented at
> [ http://software.intel.com/en-us/articles/numpy-scipy-with-mkl/ ]
>I found it strange a decrease in the time that the computation takes for
>a large K value (see for instance the value of tm corresponding to k=192
>and k=200). Does this behaviour make sense? Is there any test suite
>available to better check the performance of a scipy installation?
>
> Sergio
> PD. the output of np.show_config() is shown below the data.
> > K TM GFLOPS > 64, 0.2182408, 34.91556 > 80, 0.2414728, 39.50755 > 96, 0.2986258, 38.37579 > 104, 0.3231602, 38.43295 > 112, 0.3429056, 39.01948 > 120, 0.3645972, 39.33108 > 128, 0.4074092, 37.55438 > 144, 0.445568, 38.6473 > 160, 0.4914636, 38.9449 > 176, 0.5274322, 39.92931 > 192, 0.6281674, 36.5826 > 200, 0.6149654, 38.92902 > 208, 0.6355928, 39.17603 > 224, 0.673929, 39.79648 > 240, 0.7244088, 39.67373 > 256, 0.8612222, 35.60057 > 384, 1.185954, 38.80421 > > > > atlas_threads_info: > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib'] > define_macros = [('ATLAS_INFO', '"\\"3.9.72\\"')] > language = f77 > include_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include'] > blas_opt_info: > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib'] > define_macros = [('ATLAS_INFO', '"\\"3.9.72\\"')] > language = c > include_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include'] > atlas_blas_threads_info: > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib'] > define_macros = [('ATLAS_INFO', '"\\"3.9.72\\"')] > language = c > include_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include'] > lapack_opt_info: > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib'] > define_macros = [('ATLAS_INFO', '"\\"3.9.72\\"')] > language = f77 > include_dirs = ['/home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include'] > lapack_mkl_info: > NOT AVAILABLE > blas_mkl_info: > NOT AVAILABLE > mkl_info: > NOT AVAILABLE From sergio_r at mail.com Fri May 18 18:46:09 2012 From: sergio_r at mail.com (Sergio Rojas) Date: Fri, 18 May 2012 18:46:09 -0400 Subject: [SciPy-User] scipy test error: undefined symbol: ATL_buildinfo Message-ID: <20120518224609.17910@gmx.com> Hello guys, After struggling for a while, I just finished the installation of scipy using the intel mkl library. Ungracefully the tests ended with an error regarding "undefined symbol: ATL_buildinfo" (see below) Can this scipy installation live with this error? or Is it prone to throw out wrong computations? Also, why the number of ran test is not a fix number (running the tests using a scipy/Atlas installation finished succesfully indicating: Ran 5832 tests in 422.155s OK (KNOWNFAIL=14, SKIP=42), but the scipy/mkl tests ended with Ran 5833 tests in 448.386s FAILED (KNOWNFAIL=14, SKIP=34, errors=1))? what is the meaning of KNOWNFAIL=14? Sergio >$ python_mkl Python 2.7.2 (default, May 16 2012, 13:06:17) [GCC 4.6.1] on linux3 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy
>>> scipy.show_config()
umfpack_info:
    NOT AVAILABLE
lapack_opt_info:
    libraries = ['mkl_lapack95_ilp64', 'mkl_lapack95_lp64', 'mkl_rt', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'pthread']
    library_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/include']
blas_opt_info:
    libraries = ['mkl_rt', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'pthread']
    library_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/include']
lapack_mkl_info:
    libraries = ['mkl_lapack95_ilp64', 'mkl_lapack95_lp64', 'mkl_rt', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'pthread']
    library_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/include']
blas_mkl_info:
    libraries = ['mkl_rt', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'pthread']
    library_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/include']
mkl_info:
    libraries = ['mkl_rt', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'pthread']
    library_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/include']
>>> scipy.test('full', verbose=2)
Running unit tests for scipy
NumPy version 1.6.1
NumPy is installed in /home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/numpy
SciPy version 0.10.1
SciPy is installed in /home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/scipy
Python version 2.7.2 (default, May 16 2012, 13:06:17) [GCC 4.6.1]
nose version 1.1.2
...
... (DELETED OUTPUT)
...
======================================================================
ERROR: Failure: ImportError (/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/scipy/linalg/atlas_version.so: undefined symbol: ATL_buildinfo)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/nose/loader.py", line 390, in loadTestsFromName
    addr.filename, addr.module)
  File "/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/nose/importer.py", line 86, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/scipy/linalg/tests/test_atlas_version.py", line 6, in 
    import scipy.linalg.atlas_version
ImportError: /home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/scipy/linalg/atlas_version.so: undefined symbol: ATL_buildinfo
----------------------------------------------------------------------
Ran 5833 tests in 448.386s
FAILED (KNOWNFAIL=14, SKIP=34, errors=1)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From f_magician at mac.com Sat May 19 11:09:39 2012 From: f_magician at mac.com (Magician) Date: Sun, 20 May 2012 00:09:39 +0900 Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2 Message-ID: <4CD3A5FA-D012-4EAF-82C3-53D56AAB19F6@mac.com> Hi All, I'm trying to build SciPy from source code, but I have some troubles. My environment is below: > CentOS 6.2 on VMware Fusion 4.1.2 (CentOS was installed as Software Development WS) > Python 2.7.3 (already built from sources, installed at /usr/local/python-2.7.3) > NumPy 1.6.1, SciPy 0.10.1, ATLAS 3.8.4, Lapack 3.4.1 (now trying to build) I and my colleagues (other users) want to use recent Python, so I installed Python from sources, and I can't install SciPy by using yum command. Now I'm facing ATLAS compiling errors. Configuration options are "--prefix=/usr/local/atlas-3.8.4 -Fa alg -fPIC". I tried to build it for several times, and always I got errors as below: > res/dgemvN_6_75 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS. > > ATL_gemvN_mm.c : 1257.99 > ATL_gemvN_1x1_1.c : 581.74 > ATL_gemvN_1x1_1a.c : 1589.45 > ATL_gemvN_4x2_0.c : 813.13 > ATL_gemvN_4x4_1.c : 755.54 > make[3]: *** [res/dMVRES] Error 255 > make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build/tune/blas/gemv' > make[2]: *** [/home/magician/Desktop/ATLAS/build/tune/blas/gemv/res/dMVRES] Error 2 > make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > ERROR 734 DURING MVTUNE!!. CHECK INSTALL_LOG/dMVTUNE.LOG FOR DETAILS. > make[2]: Entering directory `/home/magician/Desktop/ATLAS/build/bin' > cd /home/magician/Desktop/ATLAS/build ; make error_report > make[3]: Entering directory `/home/magician/Desktop/ATLAS/build' > make -f Make.top error_report > make[4]: Entering directory `/home/magician/Desktop/ATLAS/build' > uname -a 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > gcc -v 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > Using built-in specs. > Target: x86_64-redhat-linux > Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux > Thread model: posix > gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) > gcc -V 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > gcc: '-V' option must have argument > make[4]: [error_report] Error 1 (ignored) > gcc --version 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > tar cf error_Corei264SSE3.tar Make.inc bin/INSTALL_LOG/* > gzip --best error_Corei264SSE3.tar > mv error_Corei264SSE3.tar.gz error_Corei264SSE3.tgz > make[4]: Leaving directory `/home/magician/Desktop/ATLAS/build' > make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build' > make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > Error report error_.tgz has been created in your top-level ATLAS > directory. Be sure to include this file in any help request. 
> cat: ../../CONFIG/error.txt: No such file or directory > cat: ../../CONFIG/error.txt: No such file or directory > make[1]: *** [build] Error 255 > make[1]: Leaving directory `/home/magician/Desktop/ATLAS/build' > make: *** [build] Error 2 It's very troublesome for me to build ATLAS by myself. My purpose is just using SciPy on my Python. Even if it's optimized not so good for my environment, it's OK. Is there an easy or a sure way to build and install SciPy? Magician From ralf.gommers at googlemail.com Sun May 20 04:21:12 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 May 2012 10:21:12 +0200 Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2 In-Reply-To: <4CD3A5FA-D012-4EAF-82C3-53D56AAB19F6@mac.com> References: <4CD3A5FA-D012-4EAF-82C3-53D56AAB19F6@mac.com> Message-ID: On Sat, May 19, 2012 at 5:09 PM, Magician wrote: > Hi All, > > > I'm trying to build SciPy from source code, > but I have some troubles. > > My environment is below: > > CentOS 6.2 on VMware Fusion 4.1.2 (CentOS was installed as Software > Development WS) > > Python 2.7.3 (already built from sources, installed at > /usr/local/python-2.7.3) > > NumPy 1.6.1, SciPy 0.10.1, ATLAS 3.8.4, Lapack 3.4.1 (now trying to > build) > I and my colleagues (other users) want to use recent Python, > so I installed Python from sources, and I can't install SciPy > by using yum command. > > Now I'm facing ATLAS compiling errors. > Configuration options are "--prefix=/usr/local/atlas-3.8.4 -Fa alg -fPIC". > I tried to build it for several times, and always I got errors as below: > > res/dgemvN_6_75 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS. > > > > ATL_gemvN_mm.c : 1257.99 > > ATL_gemvN_1x1_1.c : 581.74 > > ATL_gemvN_1x1_1a.c : 1589.45 > > ATL_gemvN_4x2_0.c : 813.13 > > ATL_gemvN_4x4_1.c : 755.54 > > make[3]: *** [res/dMVRES] Error 255 > > make[3]: Leaving directory > `/home/magician/Desktop/ATLAS/build/tune/blas/gemv' > > make[2]: *** > [/home/magician/Desktop/ATLAS/build/tune/blas/gemv/res/dMVRES] Error 2 > > make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > > ERROR 734 DURING MVTUNE!!. CHECK INSTALL_LOG/dMVTUNE.LOG FOR DETAILS. > > make[2]: Entering directory `/home/magician/Desktop/ATLAS/build/bin' > > cd /home/magician/Desktop/ATLAS/build ; make error_report > > make[3]: Entering directory `/home/magician/Desktop/ATLAS/build' > > make -f Make.top error_report > > make[4]: Entering directory `/home/magician/Desktop/ATLAS/build' > > uname -a 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > > gcc -v 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > > Using built-in specs. 
> > Target: x86_64-redhat-linux > > Configured with: ../configure --prefix=/usr --mandir=/usr/share/man > --infodir=/usr/share/info --with-bugurl= > http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared > --enable-threads=posix --enable-checking=release --with-system-zlib > --enable-__cxa_atexit --disable-libunwind-exceptions > --enable-gnu-unique-object > --enable-languages=c,c++,objc,obj-c++,java,fortran,ada > --enable-java-awt=gtk --disable-dssi > --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre > --enable-libgcj-multifile --enable-java-maintainer-mode > --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib > --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 > --build=x86_64-redhat-linux > > Thread model: posix > > gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) > > gcc -V 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > > gcc: '-V' option must have argument > > make[4]: [error_report] Error 1 (ignored) > > gcc --version 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > > tar cf error_Corei264SSE3.tar Make.inc bin/INSTALL_LOG/* > > gzip --best error_Corei264SSE3.tar > > mv error_Corei264SSE3.tar.gz error_Corei264SSE3.tgz > > make[4]: Leaving directory `/home/magician/Desktop/ATLAS/build' > > make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build' > > make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > > Error report error_.tgz has been created in your top-level ATLAS > > directory. Be sure to include this file in any help request. > > cat: ../../CONFIG/error.txt: No such file or directory > > cat: ../../CONFIG/error.txt: No such file or directory > > make[1]: *** [build] Error 255 > > make[1]: Leaving directory `/home/magician/Desktop/ATLAS/build' > > make: *** [build] Error 2 > > It's very troublesome for me to build ATLAS by myself. > My purpose is just using SciPy on my Python. > Even if it's optimized not so good for my environment, it's OK. > > Is there an easy or a sure way to build and install SciPy? Building ATLAS is much harder than building scipy, so you should try to find some rpm's for it, like http://linuxtoolkit.blogspot.com/2010/09/installing-lapack-blas-and-atlas-on.html. There's no problem building scipy against ATLAS from a binary install. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun May 20 05:34:46 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 May 2012 11:34:46 +0200 Subject: [SciPy-User] ANN: NumPy 1.6.2 released Message-ID: Hi, I'm pleased to announce the availability of NumPy 1.6.2. This is a maintenance release. Due to the delay of the NumPy 1.7.0, this release contains far more fixes than a regular NumPy bugfix release. It also includes a number of documentation and build improvements. Sources and binary installers can be found at http://sourceforge.net/projects/numpy/files/NumPy/1.6.2/, release notes are copied below. Thanks to everyone who contributed to this release. Enjoy, The NumPy developers ========================= NumPy 1.6.2 Release Notes ========================= This is a bugfix release in the 1.6.x series. Due to the delay of the NumPy 1.7.0 release, this release contains far more fixes than a regular NumPy bugfix release. It also includes a number of documentation and build improvements. 
``numpy.core`` issues fixed --------------------------- #2063 make unique() return consistent index #1138 allow creating arrays from empty buffers or empty slices #1446 correct note about correspondence vstack and concatenate #1149 make argmin() work for datetime #1672 fix allclose() to work for scalar inf #1747 make np.median() work for 0-D arrays #1776 make complex division by zero to yield inf properly #1675 add scalar support for the format() function #1905 explicitly check for NaNs in allclose() #1952 allow floating ddof in std() and var() #1948 fix regression for indexing chararrays with empty list #2017 fix type hashing #2046 deleting array attributes causes segfault #2033 a**2.0 has incorrect type #2045 make attribute/iterator_element deletions not segfault #2021 fix segfault in searchsorted() #2073 fix float16 __array_interface__ bug ``numpy.lib`` issues fixed -------------------------- #2048 break reference cycle in NpzFile #1573 savetxt() now handles complex arrays #1387 allow bincount() to accept empty arrays #1899 fixed histogramdd() bug with empty inputs #1793 fix failing npyio test under py3k #1936 fix extra nesting for subarray dtypes #1848 make tril/triu return the same dtype as the original array #1918 use Py_TYPE to access ob_type, so it works also on Py3 ``numpy.f2py`` changes ---------------------- ENH: Introduce new options extra_f77_compiler_args and extra_f90_compiler_args BLD: Improve reporting of fcompiler value BUG: Fix f2py test_kind.py test ``numpy.poly`` changes ---------------------- ENH: Add some tests for polynomial printing ENH: Add companion matrix functions DOC: Rearrange the polynomial documents BUG: Fix up links to classes DOC: Add version added to some of the polynomial package modules DOC: Document xxxfit functions in the polynomial package modules BUG: The polynomial convenience classes let different types interact DOC: Document the use of the polynomial convenience classes DOC: Improve numpy reference documentation of polynomial classes ENH: Improve the computation of polynomials from roots STY: Code cleanup in polynomial [*]fromroots functions DOC: Remove references to cast and NA, which were added in 1.7 ``numpy.distutils`` issues fixed ------------------------------- #1261 change compile flag on AIX from -O5 to -O3 #1377 update HP compiler flags #1383 provide better support for C++ code on HPUX #1857 fix build for py3k + pip BLD: raise a clearer warning in case of building without cleaning up first BLD: follow build_ext coding convention in build_clib BLD: fix up detection of Intel CPU on OS X in system_info.py BLD: add support for the new X11 directory structure on Ubuntu & co. BLD: add ufsparse to the libraries search path. BLD: add 'pgfortran' as a valid compiler in the Portland Group BLD: update version match regexp for IBM AIX Fortran compilers. ``numpy.random`` issues fixed ----------------------------- BUG: Use npy_intp instead of long in mtrand -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun May 20 06:44:34 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 May 2012 12:44:34 +0200 Subject: [SciPy-User] scipy test error: undefined symbol: ATL_buildinfo In-Reply-To: <20120518224609.17910@gmx.com> References: <20120518224609.17910@gmx.com> Message-ID: On Sat, May 19, 2012 at 12:46 AM, Sergio Rojas wrote: > > Hello guys, > > After struggling for a while, I just finished the installation > of scipy using the intel mkl library. 
> > Ungracefully the tests ended with an error regarding > "undefined symbol: ATL_buildinfo" (see below) > Can this scipy installation live with this error? or > Is it prone to throw out wrong computations? > The only failure is a check of the ATLAS version, that shouldn't yield incorrect results. > Also, why the number of ran test is not a fix number > Not sure. Perhaps nose isn't counting the one ATLAS test when it passes, due to that file not having any actual tests in it. > (running the tests using a scipy/Atlas installation > finished succesfully indicating: Ran 5832 tests in 422.155s > OK (KNOWNFAIL=14, SKIP=42), but the scipy/mkl tests ended with > Ran 5833 tests in 448.386s FAILED (KNOWNFAIL=14, SKIP=34, errors=1))? > > what is the meaning of KNOWNFAIL=14? > Knownfail indicates tests that are known to fail under some conditions (usually hardware/compiler dependent), so they aren't reported as new errors/failures. If we wouldn't do this, we would keep getting the same bug reports over and over again. Ralf > Sergio > > >$ python_mkl > Python 2.7.2 (default, May 16 2012, 13:06:17) > [GCC 4.6.1] on linux3 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > >>> scipy.show_config() > umfpack_info: > NOT AVAILABLE > lapack_opt_info: > libraries = ['mkl_lapack95_ilp64', 'mkl_lapack95_lp64', 'mkl_rt', > 'mkl_intel > _lp64', 'mkl_intel_thread', 'mkl_core', 'pthread'] > library_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/l > ib/intel64'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/i > nclude'] > blas_opt_info: > libraries = ['mkl_rt', 'mkl_intel_lp64', 'mkl_intel_thread', > 'mkl_core', 'pt > hread'] > library_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/l > ib/intel64'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/i > nclude'] > lapack_mkl_info: > libraries = ['mkl_lapack95_ilp64', 'mkl_lapack95_lp64', 'mkl_rt', > 'mkl_intel > _lp64', 'mkl_intel_thread', 'mkl_core', 'pthread'] > library_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/l > ib/intel64'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/i > nclude'] > blas_mkl_info: > libraries = ['mkl_rt', 'mkl_intel_lp64', 'mkl_intel_thread', > 'mkl_core', 'pt > hread'] > library_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/l > ib/intel64'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/i > nclude'] > mkl_info: > libraries = ['mkl_rt', 'mkl_intel_lp64', 'mkl_intel_thread', > 'mkl_core', 'pt > hread'] > library_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/l > ib/intel64'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = > ['/home/srojas/myPROG/IntelC/composer_xe_2011_sp1.7.256/mkl/i > nclude'] > >>> scipy.test('full', verbose=2) > Running unit tests for scipy > NumPy version 1.6.1 > NumPy is installed in > /home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-pac > kages/numpy > SciPy version 0.10.1 > SciPy is installed in > /home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-pac > kages/scipy > Python version 2.7.2 (default, May 16 2012, 13:06:17) [GCC 4.6.1] > nose version 1.1.2 > ... > ... (DELETED OUTPUT) > ... 
> ====================================================================== > ERROR: Failure: ImportError > (/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/s > ite-packages/scipy/linalg/atlas_version.so: undefined symbol: > ATL_buildinfo) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/nose/loa > der.py", line 390, in loadTestsFromName > addr.filename, addr.module) > File > "/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/nose/imp > orter.py", line 39, in importFromPath > return self.importFromDir(dir_path, fqname) > File > "/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/nose/imp > orter.py", line 86, in importFromDir > mod = load_module(part_fqname, fh, filename, desc) > File > "/home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/scipy/li > nalg/tests/test_atlas_version.py", line 6, in > import scipy.linalg.atlas_version > ImportError: > /home/srojas/myPROG/Python272GnuMKL/lib/python2.7/site-packages/sci > py/linalg/atlas_version.so: undefined symbol: ATL_buildinfo > > ---------------------------------------------------------------------- > Ran 5833 tests in 448.386s > > FAILED (KNOWNFAIL=14, SKIP=34, errors=1) > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From f_magician at mac.com Sun May 20 10:16:48 2012 From: f_magician at mac.com (Magician) Date: Sun, 20 May 2012 23:16:48 +0900 Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2 In-Reply-To: References: Message-ID: Hi Ralf, Thanks for your advice. I tried to install BLAS/Lapack/ATLAS as below: > yum install blas-devel lapack-devel atlas-devel Next I installed NumPy as below: > tar xzvf numpy-1.6.1.tar.gz > cd numpy-1.6.1 > export CFLAGS="-L/usr/local/python-2.7.3/lib" > python setup.py build But then I got those errors: > building 'numpy.linalg.lapack_lite' extension > compiling C sources > C compiler: gcc -pthread -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/local/python-2.7.3/lib -fPIC > > creating build/temp.linux-x86_64-2.7/numpy/linalg > compile options: '-DATLAS_INFO="\"3.8.4\"" -I/usr/include -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' > gcc: numpy/linalg/lapack_litemodule.c > gcc: numpy/linalg/python_xerbla.c > /usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so > /usr/bin/ld: cannot find -lpython2.7 > collect2: ld returned 1 exit status > /usr/bin/ld: cannot find -lpython2.7 > collect2: ld returned 1 exit status > error: Command "/usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. 
-Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1 If I haven't install BLAS/Lapack/ATLAS, NumPy will be successfully built and installed. Magician On 2012/05/20, at 19:40, scipy-user-request at scipy.org wrote: > Message: 1 > Date: Sun, 20 May 2012 10:21:12 +0200 > From: Ralf Gommers > Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2 > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Sat, May 19, 2012 at 5:09 PM, Magician wrote: > >> Hi All, >> >> >> I'm trying to build SciPy from source code, >> but I have some troubles. >> >> My environment is below: >>> CentOS 6.2 on VMware Fusion 4.1.2 (CentOS was installed as Software >> Development WS) >>> Python 2.7.3 (already built from sources, installed at >> /usr/local/python-2.7.3) >>> NumPy 1.6.1, SciPy 0.10.1, ATLAS 3.8.4, Lapack 3.4.1 (now trying to >> build) >> I and my colleagues (other users) want to use recent Python, >> so I installed Python from sources, and I can't install SciPy >> by using yum command. >> >> Now I'm facing ATLAS compiling errors. >> Configuration options are "--prefix=/usr/local/atlas-3.8.4 -Fa alg -fPIC". >> I tried to build it for several times, and always I got errors as below: >>> res/dgemvN_6_75 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS. >>> >>> ATL_gemvN_mm.c : 1257.99 >>> ATL_gemvN_1x1_1.c : 581.74 >>> ATL_gemvN_1x1_1a.c : 1589.45 >>> ATL_gemvN_4x2_0.c : 813.13 >>> ATL_gemvN_4x4_1.c : 755.54 >>> make[3]: *** [res/dMVRES] Error 255 >>> make[3]: Leaving directory >> `/home/magician/Desktop/ATLAS/build/tune/blas/gemv' >>> make[2]: *** >> [/home/magician/Desktop/ATLAS/build/tune/blas/gemv/res/dMVRES] Error 2 >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' >>> ERROR 734 DURING MVTUNE!!. CHECK INSTALL_LOG/dMVTUNE.LOG FOR DETAILS. >>> make[2]: Entering directory `/home/magician/Desktop/ATLAS/build/bin' >>> cd /home/magician/Desktop/ATLAS/build ; make error_report >>> make[3]: Entering directory `/home/magician/Desktop/ATLAS/build' >>> make -f Make.top error_report >>> make[4]: Entering directory `/home/magician/Desktop/ATLAS/build' >>> uname -a 2>&1 >> bin/INSTALL_LOG/ERROR.LOG >>> gcc -v 2>&1 >> bin/INSTALL_LOG/ERROR.LOG >>> Using built-in specs. 
>>> Target: x86_64-redhat-linux >>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man >> --infodir=/usr/share/info --with-bugurl= >> http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared >> --enable-threads=posix --enable-checking=release --with-system-zlib >> --enable-__cxa_atexit --disable-libunwind-exceptions >> --enable-gnu-unique-object >> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada >> --enable-java-awt=gtk --disable-dssi >> --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre >> --enable-libgcj-multifile --enable-java-maintainer-mode >> --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib >> --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 >> --build=x86_64-redhat-linux >>> Thread model: posix >>> gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) >>> gcc -V 2>&1 >> bin/INSTALL_LOG/ERROR.LOG >>> gcc: '-V' option must have argument >>> make[4]: [error_report] Error 1 (ignored) >>> gcc --version 2>&1 >> bin/INSTALL_LOG/ERROR.LOG >>> tar cf error_Corei264SSE3.tar Make.inc bin/INSTALL_LOG/* >>> gzip --best error_Corei264SSE3.tar >>> mv error_Corei264SSE3.tar.gz error_Corei264SSE3.tgz >>> make[4]: Leaving directory `/home/magician/Desktop/ATLAS/build' >>> make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build' >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' >>> Error report error_.tgz has been created in your top-level ATLAS >>> directory. Be sure to include this file in any help request. >>> cat: ../../CONFIG/error.txt: No such file or directory >>> cat: ../../CONFIG/error.txt: No such file or directory >>> make[1]: *** [build] Error 255 >>> make[1]: Leaving directory `/home/magician/Desktop/ATLAS/build' >>> make: *** [build] Error 2 >> >> It's very troublesome for me to build ATLAS by myself. >> My purpose is just using SciPy on my Python. >> Even if it's optimized not so good for my environment, it's OK. >> >> Is there an easy or a sure way to build and install SciPy? > > > Building ATLAS is much harder than building scipy, so you should try to > find some rpm's for it, like > http://linuxtoolkit.blogspot.com/2010/09/installing-lapack-blas-and-atlas-on.html. > There's no problem building scipy against ATLAS from a binary install. > > Ralf From ralf.gommers at googlemail.com Sun May 20 12:51:12 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 May 2012 18:51:12 +0200 Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2 In-Reply-To: References: Message-ID: On Sun, May 20, 2012 at 4:16 PM, Magician wrote: > Hi Ralf, > > > Thanks for your advice. > I tried to install BLAS/Lapack/ATLAS as below: > > yum install blas-devel lapack-devel atlas-devel > > Next I installed NumPy as below: > > tar xzvf numpy-1.6.1.tar.gz > > cd numpy-1.6.1 > > export CFLAGS="-L/usr/local/python-2.7.3/lib" > Note that this overrides CFLAGS instead of appending that flag to the rest. If it doesn't work without that line, you have to either specify all cflags or use numscons/bento. 
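For example (bash assumed; this only illustrates the shell semantics -- the
default flags numpy.distutils would otherwise pass are a separate matter):

# this *replaces* whatever CFLAGS was set to before:
export CFLAGS="-L/usr/local/python-2.7.3/lib"
# this appends the flag to the existing value instead:
export CFLAGS="$CFLAGS -L/usr/local/python-2.7.3/lib"
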
Ralf > > python setup.py build > > But then I got those errors: > > building 'numpy.linalg.lapack_lite' extension > > compiling C sources > > C compiler: gcc -pthread -DNDEBUG -g -fwrapv -O3 -Wall > -Wstrict-prototypes -L/usr/local/python-2.7.3/lib -fPIC > > > > creating build/temp.linux-x86_64-2.7/numpy/linalg > > compile options: '-DATLAS_INFO="\"3.8.4\"" -I/usr/include > -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy > -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core > -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath > -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 > -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray > -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' > > gcc: numpy/linalg/lapack_litemodule.c > > gcc: numpy/linalg/python_xerbla.c > > /usr/bin/gfortran -Wall -Wall -shared > build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o > build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas > -L. -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas > -lpython2.7 -lgfortran -o > build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so > > /usr/bin/ld: cannot find -lpython2.7 > > collect2: ld returned 1 exit status > > /usr/bin/ld: cannot find -lpython2.7 > > collect2: ld returned 1 exit status > > error: Command "/usr/bin/gfortran -Wall -Wall -shared > build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o > build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas > -L. -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas > -lpython2.7 -lgfortran -o > build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit > status 1 > > If I haven't install BLAS/Lapack/ATLAS, NumPy will be > successfully built and installed. > > > Magician > > > On 2012/05/20, at 19:40, scipy-user-request at scipy.org wrote: > > > Message: 1 > > Date: Sun, 20 May 2012 10:21:12 +0200 > > From: Ralf Gommers > > Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2 > > To: SciPy Users List > > Message-ID: > > bSvq9eCK_qihAfQ at mail.gmail.com> > > Content-Type: text/plain; charset="iso-8859-1" > > > > On Sat, May 19, 2012 at 5:09 PM, Magician wrote: > > > >> Hi All, > >> > >> > >> I'm trying to build SciPy from source code, > >> but I have some troubles. > >> > >> My environment is below: > >>> CentOS 6.2 on VMware Fusion 4.1.2 (CentOS was installed as Software > >> Development WS) > >>> Python 2.7.3 (already built from sources, installed at > >> /usr/local/python-2.7.3) > >>> NumPy 1.6.1, SciPy 0.10.1, ATLAS 3.8.4, Lapack 3.4.1 (now trying to > >> build) > >> I and my colleagues (other users) want to use recent Python, > >> so I installed Python from sources, and I can't install SciPy > >> by using yum command. > >> > >> Now I'm facing ATLAS compiling errors. > >> Configuration options are "--prefix=/usr/local/atlas-3.8.4 -Fa alg > -fPIC". > >> I tried to build it for several times, and always I got errors as below: > >>> res/dgemvN_6_75 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS. 
> >>> > >>> ATL_gemvN_mm.c : 1257.99 > >>> ATL_gemvN_1x1_1.c : 581.74 > >>> ATL_gemvN_1x1_1a.c : 1589.45 > >>> ATL_gemvN_4x2_0.c : 813.13 > >>> ATL_gemvN_4x4_1.c : 755.54 > >>> make[3]: *** [res/dMVRES] Error 255 > >>> make[3]: Leaving directory > >> `/home/magician/Desktop/ATLAS/build/tune/blas/gemv' > >>> make[2]: *** > >> [/home/magician/Desktop/ATLAS/build/tune/blas/gemv/res/dMVRES] Error 2 > >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > >>> ERROR 734 DURING MVTUNE!!. CHECK INSTALL_LOG/dMVTUNE.LOG FOR DETAILS. > >>> make[2]: Entering directory `/home/magician/Desktop/ATLAS/build/bin' > >>> cd /home/magician/Desktop/ATLAS/build ; make error_report > >>> make[3]: Entering directory `/home/magician/Desktop/ATLAS/build' > >>> make -f Make.top error_report > >>> make[4]: Entering directory `/home/magician/Desktop/ATLAS/build' > >>> uname -a 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> gcc -v 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> Using built-in specs. > >>> Target: x86_64-redhat-linux > >>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man > >> --infodir=/usr/share/info --with-bugurl= > >> http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared > >> --enable-threads=posix --enable-checking=release --with-system-zlib > >> --enable-__cxa_atexit --disable-libunwind-exceptions > >> --enable-gnu-unique-object > >> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada > >> --enable-java-awt=gtk --disable-dssi > >> --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre > >> --enable-libgcj-multifile --enable-java-maintainer-mode > >> --with-ecj-jar=/usr/share/java/eclipse-ecj.jar > --disable-libjava-multilib > >> --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 > >> --build=x86_64-redhat-linux > >>> Thread model: posix > >>> gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) > >>> gcc -V 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> gcc: '-V' option must have argument > >>> make[4]: [error_report] Error 1 (ignored) > >>> gcc --version 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> tar cf error_Corei264SSE3.tar Make.inc bin/INSTALL_LOG/* > >>> gzip --best error_Corei264SSE3.tar > >>> mv error_Corei264SSE3.tar.gz error_Corei264SSE3.tgz > >>> make[4]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > >>> Error report error_.tgz has been created in your top-level ATLAS > >>> directory. Be sure to include this file in any help request. > >>> cat: ../../CONFIG/error.txt: No such file or directory > >>> cat: ../../CONFIG/error.txt: No such file or directory > >>> make[1]: *** [build] Error 255 > >>> make[1]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make: *** [build] Error 2 > >> > >> It's very troublesome for me to build ATLAS by myself. > >> My purpose is just using SciPy on my Python. > >> Even if it's optimized not so good for my environment, it's OK. > >> > >> Is there an easy or a sure way to build and install SciPy? > > > > > > Building ATLAS is much harder than building scipy, so you should try to > > find some rpm's for it, like > > > http://linuxtoolkit.blogspot.com/2010/09/installing-lapack-blas-and-atlas-on.html > . > > There's no problem building scipy against ATLAS from a binary install. > > > > Ralf > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Sun May 20 13:14:41 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 May 2012 19:14:41 +0200 Subject: [SciPy-User] why do the discrete distributions not have a `fit`? In-Reply-To: References: Message-ID: On Fri, May 11, 2012 at 2:04 AM, wrote: > Why do the discrete distributions not have a `fit` method like the > continuous distributions? > > currently it's a bug in the documentation > http://projects.scipy.org/scipy/ticket/1659 > > in statsmodels, we fit several of the discrete distributions. > Which ones? And do you then return non-integer parameters or not? > How about discrete parameters? (in analogy to the erlang discussion) > > hypergeom is based on a story about marbles or balls > > http://en.wikipedia.org/wiki/Hypergeometric_distribution#Application_and_example > but why should we care, it's just a discrete distribution with 3 shape > parameters, isn't it? > > fractional marbles ? > > >>> nn = np.linspace(4.5, 8, 101) > >>> pmf = [stats.hypergeom.pmf(5, 10.8, n, 8.5) for n in nn] > > >>> plt.plot(nn, pmf, '-o') > >>> plt.title("pmf of hypergeom as function of parameter n") > > Doesn't look like there are any problems, and the likelihood function > is nicely concave. > > conclusion: scipy.stats doesn't have a hypergeometric distribution, > but a generalized version that is defined on a real parameter space. > > Josef > (so what's the point? Sorry, I was just getting distracted while > looking for `fit`.) > For functions that work with continuous input, perhaps using the continuous fit and then looking for the best-fit with integer params near the continuous optimum would work. I looked for literature on this topic, but didn't find anything useful yet. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun May 20 13:44:05 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 20 May 2012 13:44:05 -0400 Subject: [SciPy-User] why do the discrete distributions not have a `fit`? In-Reply-To: References: Message-ID: On Sun, May 20, 2012 at 1:14 PM, Ralf Gommers wrote: > > > On Fri, May 11, 2012 at 2:04 AM, wrote: >> >> Why do the discrete distributions not have a `fit` method like the >> continuous distributions? >> >> currently it's a bug in the documentation >> http://projects.scipy.org/scipy/ticket/1659 >> >> in statsmodels, we fit several of the discrete distributions. > > > Which ones? And do you then return non-integer parameters or not? statsmodels.discrete Logit, Poisson, Negative Binomial (in GLM, but unfinished in discrete) http://statsmodels.sourceforge.net/devel/discretemod.html No integer parameters restriction since I never seen them in the literature. > >> >> How about discrete parameters? ? (in analogy to the erlang discussion) >> >> hypergeom is based on a story about marbles or balls >> >> http://en.wikipedia.org/wiki/Hypergeometric_distribution#Application_and_example >> but why should we care, it's just a discrete distribution with 3 shape >> parameters, isn't it? >> >> fractional marbles ? >> >> >>> nn = np.linspace(4.5, 8, 101) >> >>> pmf = [stats.hypergeom.pmf(5, 10.8, n, 8.5) for n in nn] >> >> >>> plt.plot(nn, pmf, '-o') >> >>> plt.title("pmf of hypergeom as function of parameter n") >> >> Doesn't look like there are any problems, and the likelihood function >> is nicely concave. >> >> conclusion: scipy.stats doesn't have a hypergeometric distribution, >> but a generalized version that is defined on a real parameter space. 
>> >> Josef >> (so what's the point? Sorry, I was just getting distracted while >> looking for `fit`.) > > > For functions that work with continuous input, perhaps using the continuous > fit and then looking for the best-fit with integer params near the > continuous optimum would work. I looked for literature on this topic, but > didn't find anything useful yet. If hypergeometric works (and I guess boltzmann and binom works from a quick look) for continuous parameters, then I don't see much reason to restrict parameters to integers. I played a bit more with hypergeom.fit and the main problem is to find starting parameters that are consistent with the data, since the support moves with the parameters, even if loc and scale are fixed. For many discrete distributions the problem might be more how to restrict the parameter space, e.g. if shape parameter is in (0,1) (Logit or Probit transformation for Bernoulli, or exp transformation for shape parameters > 0 in Poisson. Or maybe not, if the optimization routine always finds an interior solution. Josef > > Ralf > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sergio_r at mail.com Sun May 20 16:34:44 2012 From: sergio_r at mail.com (Sergio Rojas) Date: Sun, 20 May 2012 16:34:44 -0400 Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2 Message-ID: <20120520203445.17910@gmx.com> Magician, since I did not see any mention of it, I think you are missing to include in your build up a "site.cfg" file (see for instance http://www.scipy.org/Installing_SciPy/Linux Step 4: Build numpy(1.5.0) ). In that file you need to specify where your libraries are. That file needs to be used in building both Numpy and Scipy. Sergio ---------------------------------------------------------------------- Message: 1 Date: Sun, 20 May 2012 23:16:48 +0900 From: Magician Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2 To: ralf.gommers at googlemail.com Cc: scipy-user at scipy.org Message-ID: Content-Type: text/plain; CHARSET=US-ASCII Hi Ralf, Thanks for your advice. I tried to install BLAS/Lapack/ATLAS as below: > yum install blas-devel lapack-devel atlas-devel Next I installed NumPy as below: > tar xzvf numpy-1.6.1.tar.gz > cd numpy-1.6.1 > export CFLAGS="-L/usr/local/python-2.7.3/lib" > python setup.py build But then I got those errors: > building 'numpy.linalg.lapack_lite' extension > compiling C sources > C compiler: gcc -pthread -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/local/python-2.7.3/lib -fPIC > > creating build/temp.linux-x86_64-2.7/numpy/linalg > compile options: '-DATLAS_INFO="\"3.8.4\"" -I/usr/include -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' > gcc: numpy/linalg/lapack_litemodule.c > gcc: numpy/linalg/python_xerbla.c > /usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. 
-Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so > /usr/bin/ld: cannot find -lpython2.7 > collect2: ld returned 1 exit status > /usr/bin/ld: cannot find -lpython2.7 > collect2: ld returned 1 exit status > error: Command "/usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1 If I haven't install BLAS/Lapack/ATLAS, NumPy will be successfully built and installed. Magician On 2012/05/20, at 19:40, scipy-user-request at scipy.org wrote: > Message: 1 > Date: Sun, 20 May 2012 10:21:12 +0200 > From: Ralf Gommers > Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2 > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Sat, May 19, 2012 at 5:09 PM, Magician wrote: > >> Hi All, >> >> >> I'm trying to build SciPy from source code, >> but I have some troubles. >> >> My environment is below: >>> CentOS 6.2 on VMware Fusion 4.1.2 (CentOS was installed as Software >> Development WS) >>> Python 2.7.3 (already built from sources, installed at >> /usr/local/python-2.7.3) >>> NumPy 1.6.1, SciPy 0.10.1, ATLAS 3.8.4, Lapack 3.4.1 (now trying to >> build) >> I and my colleagues (other users) want to use recent Python, >> so I installed Python from sources, and I can't install SciPy >> by using yum command. >> >> Now I'm facing ATLAS compiling errors. >> Configuration options are "--prefix=/usr/local/atlas-3.8.4 -Fa alg -fPIC". >> I tried to build it for several times, and always I got errors as below: >>> res/dgemvN_6_75 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS. >>> >>> ATL_gemvN_mm.c : 1257.99 >>> ATL_gemvN_1x1_1.c : 581.74 >>> ATL_gemvN_1x1_1a.c : 1589.45 >>> ATL_gemvN_4x2_0.c : 813.13 >>> ATL_gemvN_4x4_1.c : 755.54 >>> make[3]: *** [res/dMVRES] Error 255 >>> make[3]: Leaving directory >> `/home/magician/Desktop/ATLAS/build/tune/blas/gemv' >>> make[2]: *** >> [/home/magician/Desktop/ATLAS/build/tune/blas/gemv/res/dMVRES] Error 2 >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' >>> ERROR 734 DURING MVTUNE!!. CHECK INSTALL_LOG/dMVTUNE.LOG FOR DETAILS. >>> make[2]: Entering directory `/home/magician/Desktop/ATLAS/build/bin' >>> cd /home/magician/Desktop/ATLAS/build ; make error_report >>> make[3]: Entering directory `/home/magician/Desktop/ATLAS/build' >>> make -f Make.top error_report >>> make[4]: Entering directory `/home/magician/Desktop/ATLAS/build' >>> uname -a 2>&1 >> bin/INSTALL_LOG/ERROR.LOG >>> gcc -v 2>&1 >> bin/INSTALL_LOG/ERROR.LOG >>> Using built-in specs. 
>>> Target: x86_64-redhat-linux >>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man >> --infodir=/usr/share/info --with-bugurl= >> http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared >> --enable-threads=posix --enable-checking=release --with-system-zlib >> --enable-__cxa_atexit --disable-libunwind-exceptions >> --enable-gnu-unique-object >> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada >> --enable-java-awt=gtk --disable-dssi >> --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre >> --enable-libgcj-multifile --enable-java-maintainer-mode >> --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib >> --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 >> --build=x86_64-redhat-linux >>> Thread model: posix >>> gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) >>> gcc -V 2>&1 >> bin/INSTALL_LOG/ERROR.LOG >>> gcc: '-V' option must have argument >>> make[4]: [error_report] Error 1 (ignored) >>> gcc --version 2>&1 >> bin/INSTALL_LOG/ERROR.LOG >>> tar cf error_Corei264SSE3.tar Make.inc bin/INSTALL_LOG/* >>> gzip --best error_Corei264SSE3.tar >>> mv error_Corei264SSE3.tar.gz error_Corei264SSE3.tgz >>> make[4]: Leaving directory `/home/magician/Desktop/ATLAS/build' >>> make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build' >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' >>> Error report error_.tgz has been created in your top-level ATLAS >>> directory. Be sure to include this file in any help request. >>> cat: ../../CONFIG/error.txt: No such file or directory >>> cat: ../../CONFIG/error.txt: No such file or directory >>> make[1]: *** [build] Error 255 >>> make[1]: Leaving directory `/home/magician/Desktop/ATLAS/build' >>> make: *** [build] Error 2 >> >> It's very troublesome for me to build ATLAS by myself. >> My purpose is just using SciPy on my Python. >> Even if it's optimized not so good for my environment, it's OK. >> >> Is there an easy or a sure way to build and install SciPy? > > > Building ATLAS is much harder than building scipy, so you should try to > find some rpm's for it, like > http://linuxtoolkit.blogspot.com/2010/09/installing-lapack-blas-and-atlas-on.html. > There's no problem building scipy against ATLAS from a binary install. > > Ralf - -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.puiseux at univ-pau.fr Mon May 21 05:56:10 2012 From: pierre.puiseux at univ-pau.fr (pierre puiseux UPPA) Date: Mon, 21 May 2012 11:56:10 +0200 Subject: [SciPy-User] subclassing ndarray : i want slice to return ndarray, not subclass Message-ID: Hello, all my question is in the title. 
More precisely, like in scipy doc i try this :
======================================
import sys
import numpy as np

class ArrayChild(np.ndarray):

    def __new__(cls, array, info=None):
        obj = np.asarray(array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        print>>sys.stderr, "__array_finalize__"
        if obj is None: return
        self.info = getattr(obj, 'info', None)

if __name__=='__main__':
    a = np.arange(6)
    a.shape=2,3
    a_child = ArrayChild(a)
    for x in a_child :
        print>>sys.stderr, x, type(x)

=====================================
and i have the answer :
=====================================
__array_finalize__
__array_finalize__
[0 1 2] <class '__main__.ArrayChild'>
__array_finalize__
[3 4 5] <class '__main__.ArrayChild'>

=====================================
but i want
=====================================
__array_finalize__
[0 1 2] <type 'numpy.ndarray'>
[3 4 5] <type 'numpy.ndarray'>

=====================================
I've tried to redefine : __getslice__ like this, but it does not work.
=====================================
def __getslice__(self, *args, **kwargs):
    return np.ndarray.__getslice__(self, *args, **kwargs)

=====================================
Some idea ?
Thanks to numpy/scipy team for that great job.
=====================================

Pierre Puiseux
Laboratoire de Mathématiques Appliquées
Université de Pau et des Pays de l'Adour
pierre.puiseux at univ-pau.fr
http://www.univ-pau.fr/~puiseux
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From d.s.seljebotn at astro.uio.no  Mon May 21 07:42:07 2012
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Mon, 21 May 2012 13:42:07 +0200
Subject: [SciPy-User] subclassing ndarray : i want slice to return ndarray, not subclass
In-Reply-To: 
References: 
Message-ID: <4FBA2A0F.10003@astro.uio.no>

On 05/21/2012 11:56 AM, pierre puiseux UPPA wrote:
> Hello,
> all my question is in the title. More precisely, like in scipy doc i try
> this :
> ======================================
> import sys
> import numpy as np
>
> class ArrayChild(np.ndarray):

Subclassing ndarray is a very tricky business -- I did it once and
regretted having done it for years, because there's so much you can't do
etc.. You're almost certainly better off with embedding an array as an
attribute, and then forward properties etc. to it.

But your answer: You want to override __getitem__, __getslice__ is
deprecated in Python.

Dag

>
> def __new__(cls, array, info=None):
> obj = np.asarray(array).view(cls)
> obj.info = info
> return obj
>
> def __array_finalize__(self, obj):
> print>>sys.stderr, "__array_finalize__"
> if obj is None: return
> self.info = getattr(obj, 'info', None)
>
> if __name__=='__main__':
> a = np.arange(6)
> a.shape=2,3
> a_child = ArrayChild(a)
> for x in a_child :
> print>>sys.stderr, x, type(x)
>
> =====================================
> and i have the answer :
> =====================================
> __array_finalize__
> __array_finalize__
> [0 1 2] <class '__main__.ArrayChild'>
> __array_finalize__
> [3 4 5] <class '__main__.ArrayChild'>
>
> =====================================
> but i want
> =====================================
> __array_finalize__
> [0 1 2] <type 'numpy.ndarray'>
> [3 4 5] <type 'numpy.ndarray'>
>
> =====================================
> I've tried to redefine : __getslice__ like this, but it does not work.
> ===================================== > def __getslice__(self, *args, **kwargs): > return np.ndarray.__getslice__(self, *args, **kwargs) From tobias at ebockwurst.de Mon May 21 07:36:13 2012 From: tobias at ebockwurst.de (Tobias) Date: Mon, 21 May 2012 11:36:13 +0000 (UTC) Subject: [SciPy-User] scipy.weave on windows vista compiling error Message-ID: Hello! I'm trying to bring scipy.weave to work on Windows Vista with MinGw. I use Python 2.7.1, Scipy 0.10.1 and MinGw 4.6.2 My code looks like the following: from scipy import weave weave.inline("""print('Hello World!');""", [], compiler = 'mingw32-gcc') I get the following error message: DistutilsPlatformError: don't know how to compile C/C++ code on platform 'nt' with 'mingw32-gcc' compiler With MinGw come a lot of different compiler. If I choose not 'mingw32-gcc' but for example 'gcc' weave complains ValueError: invalid version number '4.' So I also downloaded the olf MinGW 2.9.5 Version. Then I also get the Error "don't know how to compile C/C++ code on platform 'nt'" The PATH-Variables are set and I also made shure, that there is write access to the MinGw and Python folders. Searching for this error message gave no progress, but maybe you could help me. Would be great! All the best Tobi From njs at pobox.com Mon May 21 08:15:24 2012 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 21 May 2012 13:15:24 +0100 Subject: [SciPy-User] subclassing ndarray : i want slice to return ndarray, not subclass In-Reply-To: <4FBA2A0F.10003@astro.uio.no> References: <4FBA2A0F.10003@astro.uio.no> Message-ID: On Mon, May 21, 2012 at 12:42 PM, Dag Sverre Seljebotn wrote: > On 05/21/2012 11:56 AM, pierre puiseux UPPA wrote: >> Hello, >> all my question is in the title. More precisely, like in scipy doc i try >> this : >> ====================================== >> importsys >> import numpy as np >> >> class ArrayChild(np.ndarray): > > Subclassing ndarray is a very tricky business -- I did it once and > regretted having done it for years, because there's so much you can't do > etc.. You're almost certainly better off with embedding an array as an > attribute, and then forward properties etc. to it. Yes, it's almost always the wrong thing... > But your answer: You want to override __getitem__, __getslice__ is > deprecated in Python. You have to override both, actually. After overriding __getitem__, then add def __getslice__(self, i, j): return self.__getitem__(slice(i, j)) or otherwise your __getitem__ will get skipped in some cases. The other option would be to just accept that your subclass will be returned from slices. If the only difference between your subclass and the base ndarray is that your arrays have an extra ".info" attribute, then you can just not define __array_finalize__ and check for the presence of a ".info" attribute instead of checking for the subclass directly. This depends on what your subclass is actually supposed to do, obviously. -- Nathaniel From Jerome.Kieffer at esrf.fr Mon May 21 15:00:06 2012 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Mon, 21 May 2012 21:00:06 +0200 Subject: [SciPy-User] scipy.weave on windows vista compiling error In-Reply-To: References: Message-ID: <20120521210006.ca8d3b07.Jerome.Kieffer@esrf.fr> Hi Tobias, > I'm trying to bring scipy.weave to work on Windows Vista with MinGw. > I use Python 2.7.1, Scipy 0.10.1 and MinGw 4.6.2 I thought weave never worked under windows ??? 
maybe with mingw it is better (but not under 64-bit windows :(
This is the main reason why most people (like me) moved from weave to cython.

Cheers,
-- 
Jérôme Kieffer
Data analysis unit - ESRF

From cgohlke at uci.edu  Mon May 21 15:37:13 2012
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Mon, 21 May 2012 12:37:13 -0700
Subject: [SciPy-User] scipy.weave on windows vista compiling error
In-Reply-To: 
References: 
Message-ID: <4FBA9969.5090002@uci.edu>

On 5/21/2012 4:36 AM, Tobias wrote:
> Hello!
>
> I'm trying to bring scipy.weave to work on Windows Vista with MinGw.
> I use Python 2.7.1, Scipy 0.10.1 and MinGw 4.6.2
> My code looks like the following:
>
> from scipy import weave
> weave.inline("""print('Hello World!');""", [], compiler = 'mingw32-gcc')
>
> I get the following error message:
>
> DistutilsPlatformError: don't know how to compile C/C++ code on platform 'nt'
> with 'mingw32-gcc' compiler
>
> With MinGw come a lot of different compiler. If I choose not 'mingw32-gcc' but
> for example 'gcc' weave complains
>
> ValueError: invalid version number '4.'
>
> So I also downloaded the olf MinGW 2.9.5 Version. Then I also get the Error
> "don't know how to compile C/C++ code on platform 'nt'"
>
> The PATH-Variables are set and I also made shure, that there is write access to
> the MinGw and Python folders. Searching for this error message gave no
> progress, but maybe you could help me. Would be great!
>
> All the best
> Tobi
>

Try `weave.inline("""printf("Hello World!");""", [], compiler='mingw32')`

Worked for me with gcc 4.5.2 on win32-py2.7. The msvc9 compiler also works
for 64 bit Python.

Christoph

From vaggi.federico at gmail.com  Mon May 21 18:13:30 2012
From: vaggi.federico at gmail.com (federico vaggi)
Date: Tue, 22 May 2012 00:13:30 +0200
Subject: [SciPy-User] scipy.weave on windows vista compiling error
Message-ID: 

> Message: 2
> Date: Mon, 21 May 2012 11:36:13 +0000 (UTC)
> From: Tobias
> Subject: [SciPy-User] scipy.weave on windows vista compiling error
> To: scipy-user at scipy.org
> Message-ID:
> Content-Type: text/plain; charset=us-ascii
>
> Hello!
>
> I'm trying to bring scipy.weave to work on Windows Vista with MinGw.
> I use Python 2.7.1, Scipy 0.10.1 and MinGw 4.6.2
> My code looks like the following:
>
> from scipy import weave
> weave.inline("""print('Hello World!');""", [], compiler = 'mingw32-gcc')
>
> I get the following error message:
>
> DistutilsPlatformError: don't know how to compile C/C++ code on platform
> 'nt'
> with 'mingw32-gcc' compiler
>
> With MinGw come a lot of different compiler. If I choose not 'mingw32-gcc'
> but
> for example 'gcc' weave complains
>
> ValueError: invalid version number '4.'
>
> So I also downloaded the olf MinGW 2.9.5 Version. Then I also get the Error
> "don't know how to compile C/C++ code on platform 'nt'"
>
> The PATH-Variables are set and I also made shure, that there is write
> access to
> the MinGw and Python folders. Searching for this error message gave no
> progress, but maybe you could help me. Would be great!
>
> All the best
> Tobi
>

Your C++ compiler has to match the C++ compiler used to compile Python -
which, I think, by default, on Vista is Visual Studio 2008. However, the
weave module is not actively maintained, and there are much better options
right now for interfacing with C/C++. I would highly suggest that if you
have some performance-critical part of your code, you either use Cython,
or, if it is a simple numerical expression, use the amazing numexpr package.
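For example, a minimal numexpr sketch (the arrays and the expression are
made up for illustration, not taken from your code):

import numpy as np
import numexpr as ne

a = np.random.rand(1000000)
b = np.random.rand(1000000)
# numexpr compiles the string and evaluates it elementwise, avoiding the
# intermediate temporaries a pure-numpy expression would allocate
c = ne.evaluate("2*a**2 + 3*b + 1")
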
I warn you though - while pure Python code is very portable across
platforms, interfacing Python with other libraries (mostly, C and C++) is a
huge pain on Windows. If you really want to continue, I strongly advise you
take advantage of Christoph Gohlke's work:

http://www.lfd.uci.edu/~gohlke/pythonlibs/

who has done all the hard work and made binary installers for most of the
hard-to-compile Python libraries with C and C++ dependencies.

Federico
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mail.till at gmx.de  Tue May 22 10:57:18 2012
From: mail.till at gmx.de (Till Stensitzki)
Date: Tue, 22 May 2012 14:57:18 +0000 (UTC)
Subject: [SciPy-User] scipy.weave on windows vista compiling error
References: 
Message-ID: 

Have a look at the solution of:

http://stackoverflow.com/questions/10231572/f2py-returns-valueerror-invalid-version-number-4

It is a bug in distutils version comparison.

From tobias at ebockwurst.de  Tue May 22 15:07:48 2012
From: tobias at ebockwurst.de (Tobias)
Date: Tue, 22 May 2012 19:07:48 +0000 (UTC)
Subject: [SciPy-User] scipy.weave on windows vista compiling error
References: 
Message-ID: 

Till Stensitzki <mail.till at gmx.de> writes:
>
> Have a look at the solution of:
>
> http://stackoverflow.com/questions/10231572/f2py-returns-valueerror-invalid-version-number-4
>
> It is a bug in distutils version comparison.
>

Hey Till,
this really solved my problem! Thank you very much

Tobi

From otti at gargamels-haus.net  Wed May 23 04:01:31 2012
From: otti at gargamels-haus.net (otti)
Date: Wed, 23 May 2012 10:01:31 +0200
Subject: [SciPy-User] Wrong Step Response
Message-ID: <4FBC995B.60400@gargamels-haus.net>

Hello everyone,

after reading some threads about step response problems in the mailing list
I couldn't come up with a proper solution for my problem.

I've got the following transfer function:

G = Vr/(sT1*(1+sT2))

Vr = 2/15
T1 = 0.2
T2 = 1

For this I'd like to plot the step response, so I made the following script:

*---Begin Script*
# some imports
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy import signal

# Constants
T1 = 0.2
T2 = 1
Vr = float(2)/15
Tsim = 50

# Denominator
N0 = np.array([T1*T2, T1, 0])
# Numerator
Z0 = Vr

# create tfcn
sys = sp.signal.lti(Z0, N0)

# create the step response (unit step as input)
t = np.linspace(0, Tsim, 1000)
u = np.ones_like(t)
yout = sp.signal.lsim2(sys, T=t, U=u)[1]

plt.figure(1)
plt.plot(t, u, t, yout/yout.max())
plt.grid("on")
plt.xlabel("t")
plt.ylabel("h(t)")
plt.title("Sprungantwort")
plt.show()
*---End Script*

This gives me the following response (wrong step response: see the attached
step_response.png), which I know is not the right one, because I made this
one already in a lab at university with MATLAB, but I don't have MATLAB at
home because I prefer to use Numpy+Scipy+Matplotlib.

Here is the correct response: see the attached TA1.jpg.

After several hours of reading and trying different approaches I'm kind of
frustrated ^^. I would be really thankful for any help so that I can keep
on going the Scipy track.

Greetings,
William
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: step_response.png
Type: image/png
Size: 38727 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: TA1.jpg
Type: image/jpeg
Size: 66953 bytes
Desc: not available
URL: 

From silva at lma.cnrs-mrs.fr  Wed May 23 04:36:58 2012
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Wed, 23 May 2012 10:36:58 +0200
Subject: [SciPy-User] Wrong Step Response
In-Reply-To: <4FBC995B.60400@gargamels-haus.net>
References: <4FBC995B.60400@gargamels-haus.net>
Message-ID: <1337762218.3554.8.camel@sonsAA>

Le mercredi 23 mai 2012 à 10:01 +0200, otti a écrit :
> Hello everyone,
>
> after reading some threads about step response problems in the mailing list
> I couldn't come up with a proper solution for my problem.
>
> I've got the following transfer function:
>
> G = Vr/(sT1*(1+sT2))
>
> Vr = 2/15
> T1 = 0.2
> T2 = 1

This transfer function involves an integrator (leading to the slope
after the transient) and a first-order low-pass filter (leading to a
damped contribution, the transient).

IMHO, the step_response.png file is more probably correct than the
TA1.jpg file, as there is no second-order term that would possibly lead
to the oscillation shown in the latter. Moreover, the action of the
integrator for a step response cannot lead to a constant steady state.
Am I wrong?

Looking at the fonts, I believe the step_response.png file comes from
matplotlib, and then from your python script. What makes you sure the
matlab script is correct?
-- 
Fabrice Silva

From otti at gargamels-haus.net  Wed May 23 06:00:37 2012
From: otti at gargamels-haus.net (William)
Date: Wed, 23 May 2012 10:00:37 +0000 (UTC)
Subject: [SciPy-User] Wrong Step Response
References: <4FBC995B.60400@gargamels-haus.net>
	<1337762218.3554.8.camel@sonsAA>
Message-ID: 

Fabrice Silva <silva at lma.cnrs-mrs.fr> writes:

> This transfer function involves an integrator (leading to the slope
> after the transient) and a first-order low-pass filter (leading to a
> damped contribution, the transient).
>
> IMHO, the step_response.png file is more probably correct than the
> TA1.jpg file, as there is no second-order term that would possibly lead
> to the oscillation shown in the latter. Moreover, the action of the
> integrator for a step response cannot lead to a constant steady state.
> Am I wrong?
>
> Looking at the fonts, I believe the step_response.png file comes from
> matplotlib, and then from your python script. What makes you sure the
> matlab script is correct?
> The integrator is used to reduce the overall error of the response so that it
> reaches the constant steady state 1.
The low-pass is used to reduce the overshoot caused by the integrator.
If I didn't get it wrong in the control engineering class.
Probably have to check the notes again and run through the MATLAB script again.

The response of TA1.jpg was made during a Lab (control engineering) at
University and the Prof checked it.
Are you sure you are not confusing the step response of a transfer
function with the step response of a looped system containing an
integrator?

Once you close the loop, the global transfer function is not the one you
mentioned but
H = G/(1+G)
for a unity feedback (for example), which exhibits the oscillation and
the unity steady state.
What you did in your control engineering classes may be the closed-loop
system...
-- 
Fabrice Silva

From otti at gargamels-haus.net  Wed May 23 06:23:53 2012
From: otti at gargamels-haus.net (William)
Date: Wed, 23 May 2012 10:23:53 +0000 (UTC)
Subject: [SciPy-User] Wrong Step Response
References: <4FBC995B.60400@gargamels-haus.net>
	<1337762218.3554.8.camel@sonsAA>
	<1337768165.3554.12.camel@sonsAA>
Message-ID: 

Fabrice Silva <silva at lma.cnrs-mrs.fr> writes:

> Are you sure you are not confusing the step response of a transfer
> function with the step response of a looped system containing an
> integrator?
>
> Once you close the loop, the global transfer function is not the one you
> mentioned but
> H = G/(1+G)
> for a unity feedback (for example), which exhibits the oscillation and
> the unity steady state.

Ah, thanks a lot Fabrice, you're right!! Got things mixed up!

From l.ulferts at hs-osnabrueck.de  Wed May 23 10:06:43 2012
From: l.ulferts at hs-osnabrueck.de (Lothar Ulferts)
Date: Wed, 23 May 2012 14:06:43 +0000 (UTC)
Subject: [SciPy-User] Array Selection Help -Part2-
Message-ID: 

Dear list,

based upon the thread "Array Selection Help"
http://thread.gmane.org/gmane.comp.python.scientific.user/19412
I would like to modify the task a little bit: arr1 contains numbers
(labels), arr2 floating-point values. But here it is different: zones
shouldn't be multipart. A zone should be split into its connected
(contiguous) parts. For example:

label:
100 100 100 -99 -99 -99      100 100 100 -99 -99 -99
100 100 -99 -99 200 200      100 100 -99 -99 200 200
-99 -99 -99 -99 200 200  =>  -99 -99 -99 -99 200 200
300 300 300 300 300 300      300 300 300 300 300 300
200 200 200 -99 100 100      300 400 400 -99 500 500
200 200 200 -99 100 100      400 400 400 -99 500 500

values:
1.5 1.9 1.8 0.3 0.1 0.1
1.5 1.7 0.6 0.3 2.5 2.9
0.6 0.6 0.8 0.4 2.1 2.1
3.1 3.2 3.3 3.4 3.5 3.6
4.7 4.7 4.0 0.1 1.0 1.4
4.3 4.0 4.9 0.3 1.2 1.1

Result of zonal min should be:
1.7 1.7 1.7 -99 -99 -99
1.7 1.7 -99 0.3 2.9 2.9
-99 -99 -99 -99 2.9 2.9
3.6 3.6 3.6 3.6 3.6 3.6
4.0 4.0 4.0 -99 1.0 1.0
4.0 4.0 4.0 -99 1.0 1.0

The question is, how do I transform the left array to the right?

best regards

Lothar

From pierre at puiseux.name  Mon May 21 09:57:05 2012
From: pierre at puiseux.name (pierre puiseux)
Date: Mon, 21 May 2012 15:57:05 +0200
Subject: [SciPy-User] subclassing ndarray : i want slice to return ndarray, not subclass
In-Reply-To: 
References: 
Message-ID: <94E136DD-35A8-4837-8F67-CF4F6A75C680@puiseux.name>

First, my problem is : Imagine ArrayChild is a subclass of np.ndarray and
'ac' is an instance of ArrayChild. I call very often ac[i] or ac[1:12],
which calls the methods ArrayChild.__getslice__() or ArrayChild.__getitem__().
But for each call, Python/NumPy creates a new instance of ArrayChild, and
calls my (very expensive) ArrayChild.__array_finalize__

I'd like to avoid this creation, and I want a more direct access to
ArrayChild's slice or item.

Finally, i've found a solution (I'm not sure it's less expensive, anyway) :
first, i create a ndarray view of my ArrayChild in __array_finalize__,
where i add this line :
======================================
self.array_view = obj.view(np.ndarray)
======================================

Then i define methods ArrayChild.__getslice__ and ArrayChild.__getitem__ to
call the getters of array_view instead of the default getters :
======================================
def __getslice__(self, *args, **kwargs):
    return np.ndarray.__getslice__(self.array_view, *args, **kwargs)
---------------
def __getitem__(self, *args, **kwargs):
    return np.ndarray.__getitem__(self.array_view, *args, **kwargs)
---------------
======================================

That's it, i have my results : subarrays of ArrayChild are now np.ndarrays.
======================================
__array_finalize__
[0 1 2] <type 'numpy.ndarray'>
[3 4 5] <type 'numpy.ndarray'>
======================================

The final test program is :
======================================
import sys
import numpy as np

class ArrayChild(np.ndarray):

    def __new__(cls, array, info=None):
        obj = np.asarray(array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        print>>sys.stderr, "__array_finalize__"
        if obj is None: return
        self.info = getattr(obj, 'info', None)
        self.array_view = obj.view(np.ndarray)

    def __getslice__(self, *args, **kwargs):
        return np.ndarray.__getslice__(self.array_view, *args, **kwargs)

    def __getitem__(self, *args, **kwargs):
        return np.ndarray.__getitem__(self.array_view, *args, **kwargs)

if __name__=='__main__':
    a = np.arange(6)
    a.shape=2,3
    a_child = ArrayChild(a)
    for x in a_child :
        print>>sys.stderr, x, type(x)
======================================

Le 21 mai 2012 à 11:56, pierre puiseux UPPA a écrit :

> Hello,
> all my question is in the title. More precisely, like in scipy doc i try this :
> ======================================
> import sys
> import numpy as np
>
> class ArrayChild(np.ndarray):
>
> def __new__(cls, array, info=None):
> obj = np.asarray(array).view(cls)
> obj.info = info
> return obj
>
> def __array_finalize__(self, obj):
> print>>sys.stderr, "__array_finalize__"
> if obj is None: return
> self.info = getattr(obj, 'info', None)
>
> if __name__=='__main__':
> a = np.arange(6)
> a.shape=2,3
> a_child = ArrayChild(a)
> for x in a_child :
> print>>sys.stderr, x, type(x)
>
> =====================================
> and i have the answer :
> =====================================
> __array_finalize__
> __array_finalize__
> [0 1 2] <class '__main__.ArrayChild'>
> __array_finalize__
> [3 4 5] <class '__main__.ArrayChild'>
>
> =====================================
> but i want
> =====================================
> __array_finalize__
> [0 1 2] <type 'numpy.ndarray'>
> [3 4 5] <type 'numpy.ndarray'>
>
> =====================================
> I've tried to redefine : __getslice__ like this, but it does not work.
> =====================================
> def __getslice__(self, *args, **kwargs):
> return np.ndarray.__getslice__(self, *args, **kwargs)
>
> =====================================
> Some idea ?
> Thanks to numpy/scipy team for that great job.
> =====================================
>
> Pierre Puiseux
> Laboratoire de Mathématiques Appliquées
> Université de Pau et des Pays de l'Adour
> pierre.puiseux at univ-pau.fr
> http://www.univ-pau.fr/~puiseux
>

pierre puiseux
pierre at puiseux.name
http://puiseux.name
-------------- next part --------------
An HTML attachment was scrubbed...
From alejandro.weinstein at gmail.com Wed May 23 11:33:15 2012
From: alejandro.weinstein at gmail.com (Alejandro Weinstein)
Date: Wed, 23 May 2012 09:33:15 -0600
Subject: [SciPy-User] scipy.sparse.linalg.eigs is faster with k=8 than with k=1
Message-ID:

Hi:

I am using scipy.sparse.linalg.eigs with a 336x336 sparse matrix with 1144 nonzero entries. I only need the eigenvector corresponding to the largest eigenvalue, so I was running the function with k=1. However, I found that it is about 10 times faster to call the function with k=8. I am testing this with the following code (available here: https://gist.github.com/2775892):

#############################################################
import timeit

setup = """
import numpy as np
import scipy.sparse
import scipy.io
from scipy.sparse.linalg import eigs
P = scipy.io.mmread('P.mtx')
"""

n = 10
for k in range(1,20):
    code = 'eigs(P, k=%d)' % k
    t = timeit.timeit(stmt=code, setup=setup, number=n) / n
    print 'k: %2d, time: %5.1f ms' % (k, 1000*t)
#############################################################

The output is

k:  1, time: 301.7 ms
k:  2, time: 242.6 ms
k:  3, time: 352.0 ms
k:  4, time: 168.8 ms
k:  5, time: 148.1 ms
k:  6, time:  93.2 ms
k:  7, time:  70.0 ms
k:  8, time:  29.3 ms
k:  9, time:  45.2 ms
k: 10, time:  63.0 ms
k: 11, time: 209.1 ms
k: 12, time: 170.8 ms
k: 13, time: 120.2 ms
k: 14, time: 104.6 ms
k: 15, time: 115.0 ms
k: 16, time:  97.0 ms
k: 17, time:  94.0 ms
k: 18, time:  94.4 ms
k: 19, time:  74.3 ms

Is this behavior typical for eigs? In other words, should I always use k set to values around 6 or 8, or is it matrix dependent?

I am also curious about this: I would expect that computing 1 eigenvector should never be slower than computing more eigenvectors. Any idea why this is happening?

Alejandro.

From pav at iki.fi Wed May 23 13:15:53 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 23 May 2012 17:15:53 +0000 (UTC)
Subject: [SciPy-User] scipy.sparse.linalg.eigs is faster with k=8 than with k=1
References: Message-ID:

Alejandro Weinstein gmail.com> writes:
[clip]
> I am using scipy.sparse.linalg.eigs with a 336x336 sparse matrix with
> 1144 nonzero entries. I only need the eigenvector corresponding to the
> largest eigenvalue, so I was running the function with k=1. However, I
> found that it is about 10 times faster to call the function with k=8.
[clip]
> Is this behavior typical for eigs? In other words, should I always use
> k set to values around 6 or 8, or is it matrix dependent?
>
> I would expect that computing 1 eigenvector should never be slower
> than computing more eigenvectors. Any idea why this is happening?

The sparse eigenvalue solver, ARPACK, is a Krylov method, and the size of the Krylov subspace depends on `k` (you can also adjust it by specifying the `ncv` parameter). I don't see any straightforward way to predict how the exact performance depends on the Krylov subspace size --- bigger is better, but on the other hand, involves more work. The performance will in any case depend on the structure of the matrix.

More on ARPACK: ftp://ftp.caam.rice.edu/pub/people/sorensen/ARPACK/ug.ps.gz
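Picking up the `ncv` remark above: a minimal sketch of asking for a single eigenpair while varying the Krylov subspace size. The value 40 is an arbitrary illustration, and P.mtx is the matrix from the gist; this is something to experiment with, not a recommendation:

import scipy.io
from scipy.sparse.linalg import eigs

P = scipy.io.mmread('P.mtx')

# k=1, but with an explicitly chosen Krylov subspace size;
# ncv must be larger than k and at most the matrix size n
vals, vecs = eigs(P, k=1, ncv=40)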
From vaggi.federico at gmail.com Wed May 23 13:47:02 2012
From: vaggi.federico at gmail.com (federico vaggi)
Date: Wed, 23 May 2012 19:47:02 +0200
Subject: [SciPy-User] Linear algebra problem
Message-ID:

Hi,

I have a standard system, of the type:

AX = B

Where:

X is an n x m matrix.
B is an n x m matrix.
A is an n x n matrix.

I have B and X, and I am trying to calculate A - however, B is a null matrix. Using the default numpy solvers (http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html#numpy.linalg.lstsq) I obtain the trivial correct solution of A being a null matrix.

What's the most robust way to search for the solution which minimizes the residuals, while still not returning a null matrix for A? X is usually rank-deficient, so I know I won't have an exact solution.

Thanks for all the help,

Federico
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sergio_r at mail.com Wed May 23 13:47:18 2012
From: sergio_r at mail.com (Sergio Rojas)
Date: Wed, 23 May 2012 13:47:18 -0400
Subject: [SciPy-User] scipy.sparse.linalg.eigs is faster with k=8 than with k=1
Message-ID: <20120523174718.17940@gmx.com>

I do not have an answer to your question Alejandro, but I am curious about why the timing changes each time the code is executed. Is there any random allocation in the process of solving your problem?

Sergio

----------------------------------------------------------------------
Message: 1
Date: Wed, 23 May 2012 09:33:15 -0600
From: Alejandro Weinstein
Subject: [SciPy-User] scipy.sparse.linalg.eigs is faster with k=8 than with k=1
To: scipy-user at scipy.org
Content-Type: text/plain; charset=ISO-8859-1
[clip]
------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alejandro.weinstein at gmail.com Wed May 23 14:00:11 2012
From: alejandro.weinstein at gmail.com (Alejandro Weinstein)
Date: Wed, 23 May 2012 12:00:11 -0600
Subject: [SciPy-User] scipy.sparse.linalg.eigs is faster with k=8 than with k=1
In-Reply-To: References: Message-ID:

On Wed, May 23, 2012 at 11:15 AM, Pauli Virtanen wrote:
> The sparse eigenvalue solver, ARPACK, is a Krylov method, and the size
> of the Krylov subspace depends on `k` (you can also adjust it by
> specifying the `ncv` parameter). I don't see any straightforward way to
> predict how the exact performance depends on the Krylov subspace size
> --- bigger is better, but on the other hand, involves more work. The
> performance will in any case depend on the structure of the matrix.

I see. Thanks for the answer. Just for the record, if I set sigma=1, which means that eigs finds eigenvalues near sigma using shift-invert mode, then the time increases with k. These are the times (everything is the same as in the original program, except that I use code = 'eigs(P, k=%d, sigma=1)' % k):

Using sigma equal to 1.000000
k:  1, time:   3.1 ms
k:  2, time:   6.9 ms
k:  3, time:   5.4 ms
k:  4, time:   7.9 ms
k:  5, time:  16.8 ms
k:  6, time:  24.4 ms
k:  7, time:  17.1 ms
k:  8, time:  20.4 ms
k:  9, time:  21.2 ms
k: 10, time:  20.7 ms
k: 11, time:  23.5 ms
k: 12, time:  24.2 ms
k: 13, time:  32.0 ms
k: 14, time:  32.9 ms
k: 15, time:  23.3 ms
k: 16, time:  36.4 ms
k: 17, time:  49.4 ms
k: 18, time:  39.3 ms
k: 19, time:  42.2 ms

Alejandro.

From alejandro.weinstein at gmail.com Wed May 23 14:06:16 2012
From: alejandro.weinstein at gmail.com (Alejandro Weinstein)
Date: Wed, 23 May 2012 12:06:16 -0600
Subject: [SciPy-User] scipy.sparse.linalg.eigs is faster with k=8 than with k=1
In-Reply-To: <20120523174718.17940@gmx.com>
References: <20120523174718.17940@gmx.com>
Message-ID:

On Wed, May 23, 2012 at 11:47 AM, Sergio Rojas wrote:
> I do not have an answer to your question Alejandro, but I am curious
> about why the timing changes each time the code is executed. Is there
> any random allocation in the process of solving your problem?

There is nothing random on my side. I just load the matrix and call eigs (the timing is only for the function call). But I don't see a significant variation each time I call the program. I would say it is the natural variation you observe in a multitasking environment. (I am running Linux, in case that matters.)

Alejandro.
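On the timing jitter discussed just above: a common trick is to repeat the measurement several times and keep the minimum, which filters out most of the multitasking noise. A small sketch along the lines of the benchmark in this thread, using the same hypothetical P.mtx file:

import timeit

setup = """
import scipy.io
from scipy.sparse.linalg import eigs
P = scipy.io.mmread('P.mtx')
"""

# repeat() returns one total time per round; the minimum round is the
# estimate least contaminated by other processes
times = timeit.repeat(stmt='eigs(P, k=1)', setup=setup, repeat=5, number=10)
print 'best: %.1f ms' % (1000 * min(times) / 10)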
From pav at iki.fi Wed May 23 16:12:59 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 23 May 2012 22:12:59 +0200
Subject: [SciPy-User] Linear algebra problem
In-Reply-To: References: Message-ID:

23.05.2012 19:47, federico vaggi wrote:
[clip]
> AX = B
[clip]
> I have B and X, and I am trying to calculate A - however, B is a
> null matrix.
>
> What's the most robust way to search for the solution which minimizes
> the residuals, while still not returning a null matrix for A? X is
> usually rank-deficient, so I know I won't have an exact solution.

This problem is equivalent to looking for the subspace of zero eigenvalue. You can do an eigenvalue or SVD decomposition of X, and then grab the solution from the eigen-/singular vectors corresponding to zero or small eigen-/singular values.

--
Pauli Virtanen

From warren.weckesser at enthought.com Thu May 24 00:29:19 2012
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Wed, 23 May 2012 23:29:19 -0500
Subject: [SciPy-User] Linear algebra problem
In-Reply-To: References: Message-ID:

On Wed, May 23, 2012 at 3:12 PM, Pauli Virtanen wrote:

[clip]
> This problem is equivalent to looking for the subspace of zero
> eigenvalue. You can do an eigenvalue or SVD decomposition of X, and then
> grab the solution from the eigen-/singular vectors corresponding to zero
> or small eigen-/singular values.

Another way to say this is that you want the nullspace of X.T. See the SciPy Cookbook entry

http://www.scipy.org/Cookbook/RankNullspace

for an example of a function that computes the nullspace using the singular value decomposition. For example, in the following, X is 4 by 2:

In [57]: import rank_nullspace as rn

In [58]: X = array([[1,2],[3,4],[5,6],[7,8]])

In [59]: X
Out[59]:
array([[1, 2],
       [3, 4],
       [5, 6],
       [7, 8]])

In [60]: A = rn.nullspace(X.T).T

In [61]: A
Out[61]:
array([[-0.39450102,  0.24279655,  0.69790998, -0.5462055 ],
       [-0.37995913,  0.80065588, -0.46143436,  0.04073761]])

We get a 2 by 4 result; we can add trivial rows of zeros to make A 4 by 4:

In [62]: A = vstack((A, zeros((2,4))))

In [63]: A
Out[63]:
array([[-0.39450102,  0.24279655,  0.69790998, -0.5462055 ],
       [-0.37995913,  0.80065588, -0.46143436,  0.04073761],
       [ 0.        ,  0.        ,  0.        ,  0.        ],
       [ 0.        ,  0.        ,  0.        ,  0.        ]])

Verify that we have solved AX = B, where B is the 4 by 2 array of zeros:

In [64]: dot(A, X)
Out[64]:
array([[ -1.33226763e-15,   0.00000000e+00],
       [  0.00000000e+00,   3.33066907e-16],
       [  0.00000000e+00,   0.00000000e+00],
       [  0.00000000e+00,   0.00000000e+00]])

Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From servant.mathieu at gmail.com Thu May 24 04:15:51 2012
From: servant.mathieu at gmail.com (servant mathieu)
Date: Thu, 24 May 2012 10:15:51 +0200
Subject: [SciPy-User] efficiency of the simplex routine: R (optim) vs scipy.optimize.fmin
Message-ID:

Dear scipy users,

Again a question about optimization. I've just compared the efficiency of the simplex routine in R (optim) vs scipy (fmin) when minimizing a chi-square. fmin is faster than optim, but appears to be less efficient. In R, the value of the function is always decreased step by step (there are of course some exceptions), while there are a lot of fluctuations in Python. Given that the underlying simplex algorithm is supposed to be the same, which mechanism is responsible for this difference? Is it possible to constrain fmin so it could be more rigorous?

Cheers,
Mathieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
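Regarding the "constrain fmin" question above: Nelder-Mead in SciPy exposes tolerance and iteration limits that make its convergence test stricter. A minimal sketch; the quadratic chi2 and the starting point are hypothetical stand-ins for the real objective:

import numpy as np
from scipy.optimize import fmin

def chi2(p):
    # hypothetical stand-in for the real chi-square objective
    return np.sum((p - np.array([1.0, 2.0])) ** 2)

x0 = np.zeros(2)
# xtol/ftol tighten the convergence test (defaults are 1e-4);
# maxiter/maxfun raise the work limits accordingly
xopt = fmin(chi2, x0, xtol=1e-8, ftol=1e-8, maxiter=10000, maxfun=10000)
print xopt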
From m_daoust at hotmail.com Thu May 24 06:40:25 2012
From: m_daoust at hotmail.com (M Daoust)
Date: Thu, 24 May 2012 06:40:25 -0400
Subject: [SciPy-User] subclassing ndarray : i want slice to return ndarray, not subclass
In-Reply-To: References: Message-ID:

> > Subclassing ndarray is a very tricky business -- I did it once and
> > regretted having done it for years, because there's so much you can't do
> > etc.. You're almost certainly better off with embedding an array as an
> > attribute, and then forward properties etc. to it.
>
> Yes, it's almost always the wrong thing...

I've sub-classed numpy.ndarray in some of my personal code, so I'm just hoping to understand why this is the wrong choice before I dig the hole any deeper.

So it's only wrong because it's hard to do right, and you can't do certain things? (Which things?)

If you want an object to act mostly like a numpy.ndarray, making a sub-class *should* be the right answer.

Because isn't forwarding lookups to a wrapped class just a re-implementation of the sub-classing mechanism? Isn't it also tricky to get that "wrapping subclassing" right?

If you're going to sub-class something, it would be nice to use the built-in sub-class machinery, for clarity... etc...
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
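To make the embed-and-forward alternative discussed above concrete: a minimal sketch of a wrapper that delegates to an internal array, so slicing naturally returns plain ndarrays. The class name and the forwarded attributes are illustrative choices, not from any post in this thread:

import numpy as np

class ArrayWrapper(object):
    """Holds an ndarray as an attribute instead of subclassing it."""
    def __init__(self, data, info=None):
        self.data = np.asarray(data)
        self.info = info

    def __getitem__(self, index):
        return self.data[index]          # a plain ndarray comes back

    def __getattr__(self, name):
        # anything not defined here is looked up on the wrapped array
        return getattr(self.data, name)

aw = ArrayWrapper(np.arange(6).reshape(2, 3), info='example')
print aw.shape, type(aw[0])   # -> (2, 3) <type 'numpy.ndarray'>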
From tmp50 at ukr.net Thu May 24 07:32:29 2012
From: tmp50 at ukr.net (Dmitrey)
Date: Thu, 24 May 2012 14:32:29 +0300
Subject: [SciPy-User] Some numpy funcs for PyPy
Message-ID: <6355.1337859149.6100681629467803648@ffe6.ukr.net>

hi all,

maybe you're aware of numpypy - the numpy port for PyPy (pypy.org), a Python language implementation with dynamic compilation.

Unfortunately, numpypy development is very slow due to strict quality standards and some other issues, so for my purposes I have provided some missing numpypy funcs, in particular:

* atleast_1d, atleast_2d, hstack, vstack, cumsum, isscalar, asscalar, asfarray, flatnonzero, tile, zeros_like, ones_like, empty_like, where, searchsorted
* with "axis" parameter: nan(arg)min, nan(arg)max, all, any

and have got some OpenOpt / FuncDesigner functionality working faster than in CPython.

You can get the file with these functions here

Also you may be interested in some info at http://openopt.org/PyPy

Regards, Dmitrey.

From fccoelho at gmail.com Thu May 24 07:43:45 2012
From: fccoelho at gmail.com (Flavio Coelho)
Date: Thu, 24 May 2012 08:43:45 -0300
Subject: [SciPy-User] Some numpy funcs for PyPy
In-Reply-To: <6355.1337859149.6100681629467803648@ffe6.ukr.net>
References: <6355.1337859149.6100681629467803648@ffe6.ukr.net>
Message-ID:

That's very useful! I hope these features get included upstream in the next release of numpypy.

thanks,

Flávio

On Thu, May 24, 2012 at 8:32 AM, Dmitrey wrote:

> hi all,
> maybe you're aware of numpypy - the numpy port for PyPy (pypy.org), a
> Python language implementation with dynamic compilation.
[clip]
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Flávio Codeço Coelho
================
+55(21) 3799-5567
Professor
Escola de Matemática Aplicada
Fundação Getúlio Vargas
Rio de Janeiro - RJ
Brasil
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tmp50 at ukr.net Thu May 24 09:07:25 2012
From: tmp50 at ukr.net (Dmitrey)
Date: Thu, 24 May 2012 16:07:25 +0300
Subject: [SciPy-User] [Numpy-discussion] Some numpy funcs for PyPy
In-Reply-To: References: <6355.1337859149.6100681629467803648@ffe6.ukr.net>
Message-ID: <69589.1337864845.13913483336144781312@ffe16.ukr.net>

> On your website you wrote:
>> From my (Dmitrey) point of view numpypy development is very unfriendly
>> for newcomers - PyPy developers say "provide code, preferably in
>> interpreter level instead of AppLevel, provide whole test coverage for
>> all possible corner cases, provide hg diff for code, and then, maybe,
>> it will be committed". Probably this is the reason why such an
>> insufficient number of developers work on numpypy.
> I assume that is paraphrased with a little hyperbole, but it isn't so
> different from numpy (other than using git), or many other open source
> projects.

Of course many open-source projects work like that, but in the case of numpypy, IMHO, things are especially bad.

> Unit tests are important, and taking patches without them is risky.

Yes, but first, the things required from numpypy newcomers are TOO complicated - and no guarantee is provided that the effort spent will not just be a waste of time; second, the high quality standards look especially cynical when compared with their own code quality, e.g. numpypy.all(True) doesn't work yet, despite hanging in the bug tracker for a long time; a[a<0] = b[b<0] works incorrectly, etc. These are the reasons that forced me to write some missing funcs required for my purposes, plus some bug workarounds (like the one for numpypy.all and any).

> I've been subscribed to the pypy-dev list for a while,

I had been subscribed, IIRC, for a couple of months.

> but I don't recall seeing you posting there.

I had made some posts; see my pypy activity here

> Have you tried to submit any of your work to PyPy yet?

Yes: I had spent lots of time on concatenate() (pypy developers said no one was working on it) - and finally they committed code for this func from another branch. Things like this happened with some other code I proposed for PyPy, and all those days spent on it.

> Perhaps you should have sent this message to pypy-dev instead?

I had explained my point of view to them on the mailing list and the IRC channel; their answer was like "don't bother the horses, why are you in a hurry? All will be done during several months", but I see that it (porting the whole of numpy) definitely won't be done in that time frame. IIRC during ~ 2 months only ~10 new items were added to numpypy; also, lots of numpypy items, when called, e.g. searchsorted, just raise "NotImplementedError: waiting for interplevel routine", or don't work with high-dimensional arrays and/or some other corner cases. The numpypy developers go (rather slowly) their own way, while I just propose a temporary alternative till a proper PyPy numpy implementation arrives.

regards, D.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yildiz at strw.leidenuniv.nl Thu May 24 09:50:52 2012
From: yildiz at strw.leidenuniv.nl (Umut Yildiz)
Date: Thu, 24 May 2012 13:50:52 +0000 (UTC)
Subject: [SciPy-User] griddata not working after update to Python 2.7
Message-ID:

Dear All,

I used to use griddata in order to make my contour maps. However, after I updated my Python from 2.6 to 2.7, griddata is not working anymore. I tried some workarounds, but with no success.

The contour map that I produced before is here.
http://dl.dropbox.com/u/17983476/matplotlib/contour_dT_workingbefore.png

After the Python 2.7 update, it turns into the following.
http://dl.dropbox.com/u/17983476/matplotlib/contour_dT_broken.png

Here is the datafile.
http://dl.dropbox.com/u/17983476/matplotlib/contour_dT.dat

And the associated python script (which is also below).
http://dl.dropbox.com/u/17983476/matplotlib/contour_dT.py

The code that I was using before is below. I had to comment out the 'import griddata' line because this is the only way that it continues. Is this a bug in griddata, or, if there are new workarounds, I would be glad to know a new method to produce my contour plots again.

Thanks a lot

----------------------------
#!/usr/bin/env python
import os
import sys
import math
from math import *
from numpy import *
#import griddata
from pylab import *
from matplotlib.ticker import FormatStrFormatter

params = {'axes.labelsize': 20,
          'text.fontsize': 15,
          'legend.fontsize': 14,
          'xtick.labelsize': 20,
          'ytick.labelsize': 20,
          'text.usetex': True}
rcParams.update(params)

par1 = []
par2 = []
chis = []
rfile = file('contour_dT.dat','r')
line = rfile.readline()
data = line.split()
while len(data) > 1:
    par1.append(float(data[0]))
    par2.append(float(data[1]))
    chis.append(float(data[2]))
    line = rfile.readline()
    data = line.split()

par1 = array(par1)
par2 = array(par2)
chis = array(chis)

xi = linspace(3.2,7.8,50)
yi = linspace(15,300,50)
zi = griddata(par2,par1,chis,xi,yi)

levels = [0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.2,1.5,2,3,4,6,10,12,15,20,25,30,40,50]
CS = contourf(xi,yi,zi,levels,cmap=cm.jet)
CS2 = contour(CS, levels=CS.levels[::2], colors='r', hold='on')
cbar = colorbar(CS)
cbar.add_lines(CS2)
savefig("contour_dT.png")
show()

From aldcroft at head.cfa.harvard.edu Thu May 24 12:09:42 2012
From: aldcroft at head.cfa.harvard.edu (Tom Aldcroft)
Date: Thu, 24 May 2012 12:09:42 -0400
Subject: [SciPy-User] subclassing ndarray : i want slice to return ndarray, not subclass
In-Reply-To: References: Message-ID:

On Thu, May 24, 2012 at 6:40 AM, M Daoust wrote:
> I've sub-classed numpy.ndarray in some of my personal code, so I'm just
> hoping to understand why this is the wrong choice before I dig the hole
> any deeper.
[clip]

After seeing this thread I brought this point up on the numpy-discussion list. Here is a link to the thread:

http://mail.scipy.org/pipermail/numpy-discussion/2012-May/062451.html

- Tom

From pav at iki.fi Thu May 24 14:09:20 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 24 May 2012 20:09:20 +0200
Subject: [SciPy-User] griddata not working after update to Python 2.7
In-Reply-To: References: Message-ID:

24.05.2012 15:50, Umut Yildiz wrote:
[clip]
> The code that I was using before is below. I had to comment out the
> 'import griddata' line because this is the only way that it continues.
> Is this a bug in griddata, or, if there are new workarounds, I would be
> glad to know a new method to produce my contour plots again.
[clip]
> zi = griddata(par2,par1,chis,xi,yi)

Note that this is the griddata function from matplotlib, so asking on one of the Matplotlib lists could be useful.

--
Pauli Virtanen
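For anyone hitting the same breakage: SciPy ships its own scattered-data interpolator that can stand in for matplotlib's griddata. A hedged sketch, assuming scipy >= 0.9 and arrays shaped like the par1/par2/chis columns in the script above (the random data here is only a stand-in for the real file):

import numpy as np
from scipy.interpolate import griddata

# stand-ins for the par2/par1/chis columns read from contour_dT.dat
par2 = np.random.uniform(3.2, 7.8, 200)
par1 = np.random.uniform(15.0, 300.0, 200)
chis = np.random.rand(200)

xi = np.linspace(3.2, 7.8, 50)
yi = np.linspace(15, 300, 50)
XI, YI = np.meshgrid(xi, yi)

# interpolate the scattered (par2, par1) -> chis samples onto the grid
zi = griddata((par2, par1), chis, (XI, YI), method='linear')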
From pierre.raybaut at gmail.com Sat May 26 15:42:01 2012
From: pierre.raybaut at gmail.com (Pierre Raybaut)
Date: Sat, 26 May 2012 21:42:01 +0200
Subject: [SciPy-User] ANN: Spyder v2.1.10
Message-ID:

Hi all,

On behalf of Spyder's development team (http://code.google.com/p/spyderlib/people/list), I'm pleased to announce that Spyder v2.1.10 has been released and is available for Windows XP/Vista/7, GNU/Linux and MacOS X:
http://code.google.com/p/spyderlib/

This is a pure maintenance release -- a lot of bugs were fixed since v2.1.9:
http://code.google.com/p/spyderlib/wiki/ChangeLog

Spyder is a free, open-source (MIT license) interactive development environment for the Python language with advanced editing, interactive testing, debugging and introspection features. Originally designed to provide MATLAB-like features (integrated help, interactive console, variable explorer with GUI-based editors for dictionaries, NumPy arrays, ...), it is strongly oriented towards scientific computing and software development.

Thanks to the `spyderlib` library, Spyder also provides powerful ready-to-use widgets: embedded Python console (example: http://packages.python.org/guiqwt/_images/sift3.png), NumPy array editor (example: http://packages.python.org/guiqwt/_images/sift2.png), dictionary editor, source code editor, etc.

Description of key features with tasty screenshots can be found at:
http://code.google.com/p/spyderlib/wiki/Features

On Windows platforms, Spyder is also available as a stand-alone executable (don't forget to disable UAC on Vista/7). This all-in-one portable version is still experimental (for example, it does not embed sphinx -- meaning no rich text mode for the object inspector) but it should provide a working version of Spyder for Windows platforms without having to install anything else (except Python 2.x itself, of course).

Don't forget to follow Spyder updates/news:
* on the project website: http://code.google.com/p/spyderlib/
* and on our official blog: http://spyder-ide.blogspot.com/

Last, but not least, we welcome any contribution that helps make Spyder an efficient scientific development/computing environment. Join us to help create your favourite environment!
(http://code.google.com/p/spyderlib/wiki/NoteForContributors)

Enjoy!
-Pierre

From f_magician at mac.com Sat May 26 22:05:27 2012
From: f_magician at mac.com (Magician)
Date: Sun, 27 May 2012 11:05:27 +0900
Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2
In-Reply-To: References: Message-ID:

Hi Ralf and Sergio,

I thought my problems were more related to NumPy, so I posted to the NumPy mailing list. But I couldn't get any answers for several days, so I'd like to ask again.

I tried to build NumPy again with site.cfg, but I got these errors:
> building 'numpy.core._sort' extension
> compiling C sources
> C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC
>
> compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c'
> gcc: build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.c
> gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L.
-Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/_sort.so > /usr/bin/ld: cannot find -lpython2.7 > collect2: ld returned 1 exit status > /usr/bin/ld: cannot find -lpython2.7 > collect2: ld returned 1 exit status > error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L. -Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/_sort.so" failed with exit status 1 Before I built NumPy, I uncommented and modified site.cfg as below: > [DEFAULT] > library_dirs = /usr/local/python-2.7.3/lib:/usr/lib64:/usr/lib64/atlas > include_dirs = /usr/local/python-2.7.3/include:/usr/include:/usr/include/atlas > > [blas_opt] > libraries = f77blas, cblas, atlas > > [lapack_opt] > libraries = lapack, f77blas, cblas, atlas And "setup.py config" dumped these messages: > Running from numpy source directory.F2PY Version 2 > blas_opt_info: > blas_mkl_info: > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib > libraries mkl,vml,guide not found in /usr/lib64 > libraries mkl,vml,guide not found in /usr/lib64/atlas > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib > Setting PTATLAS=ATLAS > customize GnuFCompiler > Could not locate executable g77 > Could not locate executable f77 > customize IntelFCompiler > Could not locate executable ifort > Could not locate executable ifc > customize LaheyFCompiler > Could not locate executable lf95 > customize PGroupFCompiler > Could not locate executable pgf90 > Could not locate executable pgf77 > customize AbsoftFCompiler > Could not locate executable f90 > customize NAGFCompiler > Found executable /usr/bin/f95 > customize VastFCompiler > customize CompaqFCompiler > Could not locate executable fort > customize IntelItaniumFCompiler > Could not locate executable efort > Could not locate executable efc > customize IntelEM64TFCompiler > customize Gnu95FCompiler > Found executable /usr/bin/gfortran > customize Gnu95FCompiler > customize Gnu95FCompiler using config > compiling '_configtest.c': > > /* This file is generated from numpy/distutils/system_info.py */ > void ATL_buildinfo(void); > int main(void) { > ATL_buildinfo(); > return 0; > } > > C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC > > compile options: '-c' > gcc: _configtest.c > gcc -pthread _configtest.o -L/usr/lib64/atlas -lptf77blas -lptcblas -latlas -o _configtest > ATLAS version 3.8.4 built by mockbuild on Wed Dec 7 18:04:21 GMT 2011: > UNAME : Linux c6b5.bsys.dev.centos.org 2.6.32-44.2.el6.x86_64 #1 SMP Wed Jul 21 12:48:32 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux > INSTFLG : -1 0 -a 1 > ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_PII -DATL_CPUMHZ=2261 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 > F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle > CACHEEDGE: 8388608 > F77 : gfortran, version GNU Fortran (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) > F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 > SMC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) > SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 > SKC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) > SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 > success! 
> removing: _configtest.c _configtest.o _configtest
> Setting PTATLAS=ATLAS
> FOUND:
> libraries = ['ptf77blas', 'ptcblas', 'atlas']
> library_dirs = ['/usr/lib64/atlas']
> language = c
> define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
> include_dirs = ['/usr/include']
>
> FOUND:
> libraries = ['ptf77blas', 'ptcblas', 'atlas']
> library_dirs = ['/usr/lib64/atlas']
> language = c
> define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
> include_dirs = ['/usr/include']
>
> lapack_opt_info:
> lapack_mkl_info:
> mkl_info:
> libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib
> libraries mkl,vml,guide not found in /usr/lib64
> libraries mkl,vml,guide not found in /usr/lib64/atlas
> NOT AVAILABLE
>
> NOT AVAILABLE
>
> atlas_threads_info:
> Setting PTATLAS=ATLAS
> libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib
> libraries lapack_atlas not found in /usr/local/python-2.7.3/lib
> libraries lapack_atlas not found in /usr/lib64/atlas
> numpy.distutils.system_info.atlas_threads_info
> Setting PTATLAS=ATLAS
> Setting PTATLAS=ATLAS
> FOUND:
> libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
> library_dirs = ['/usr/lib64/atlas']
> language = f77
> define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
> include_dirs = ['/usr/include']
>
> FOUND:
> libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
> library_dirs = ['/usr/lib64/atlas']
> language = f77
> define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
> include_dirs = ['/usr/include']
>
> running config

I couldn't get enough information about numscons/bento, so I'm still trying to build it with site.cfg and CFLAGS. How can I solve it?

Magician

On 2012/05/21, at 1:50, Ralf Gommers wrote:

> On Sun, May 20, 2012 at 4:16 PM, Magician wrote:
> Hi Ralf,
>
> Thanks for your advice.
> I tried to install BLAS/Lapack/ATLAS as below:
> > yum install blas-devel lapack-devel atlas-devel
>
> Next I installed NumPy as below:
> > tar xzvf numpy-1.6.1.tar.gz
> > cd numpy-1.6.1
> > export CFLAGS="-L/usr/local/python-2.7.3/lib"
>
> Note that this overrides CFLAGS instead of appending that flag to the rest. If it doesn't work without that line, you have to either specify all cflags or use numscons/bento.
>
> Ralf
>
> > python setup.py build
>
> But then I got those errors:
> > building 'numpy.linalg.lapack_lite' extension
> > compiling C sources
> > C compiler: gcc -pthread -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/local/python-2.7.3/lib -fPIC
> >
> > creating build/temp.linux-x86_64-2.7/numpy/linalg
> > compile options: '-DATLAS_INFO="\"3.8.4\"" -I/usr/include -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c'
> > gcc: numpy/linalg/lapack_litemodule.c
> > gcc: numpy/linalg/python_xerbla.c
> > /usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L.
-Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so > > /usr/bin/ld: cannot find -lpython2.7 > > collect2: ld returned 1 exit status > > /usr/bin/ld: cannot find -lpython2.7 > > collect2: ld returned 1 exit status > > error: Command "/usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1 > > If I haven't install BLAS/Lapack/ATLAS, NumPy will be > successfully built and installed. > > > Magician > > > On 2012/05/20, at 19:40, scipy-user-request at scipy.org wrote: > > > Message: 1 > > Date: Sun, 20 May 2012 10:21:12 +0200 > > From: Ralf Gommers > > Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2 > > To: SciPy Users List > > Message-ID: > > > > Content-Type: text/plain; charset="iso-8859-1" > > > > On Sat, May 19, 2012 at 5:09 PM, Magician wrote: > > > >> Hi All, > >> > >> > >> I'm trying to build SciPy from source code, > >> but I have some troubles. > >> > >> My environment is below: > >>> CentOS 6.2 on VMware Fusion 4.1.2 (CentOS was installed as Software > >> Development WS) > >>> Python 2.7.3 (already built from sources, installed at > >> /usr/local/python-2.7.3) > >>> NumPy 1.6.1, SciPy 0.10.1, ATLAS 3.8.4, Lapack 3.4.1 (now trying to > >> build) > >> I and my colleagues (other users) want to use recent Python, > >> so I installed Python from sources, and I can't install SciPy > >> by using yum command. > >> > >> Now I'm facing ATLAS compiling errors. > >> Configuration options are "--prefix=/usr/local/atlas-3.8.4 -Fa alg -fPIC". > >> I tried to build it for several times, and always I got errors as below: > >>> res/dgemvN_6_75 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS. > >>> > >>> ATL_gemvN_mm.c : 1257.99 > >>> ATL_gemvN_1x1_1.c : 581.74 > >>> ATL_gemvN_1x1_1a.c : 1589.45 > >>> ATL_gemvN_4x2_0.c : 813.13 > >>> ATL_gemvN_4x4_1.c : 755.54 > >>> make[3]: *** [res/dMVRES] Error 255 > >>> make[3]: Leaving directory > >> `/home/magician/Desktop/ATLAS/build/tune/blas/gemv' > >>> make[2]: *** > >> [/home/magician/Desktop/ATLAS/build/tune/blas/gemv/res/dMVRES] Error 2 > >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > >>> ERROR 734 DURING MVTUNE!!. CHECK INSTALL_LOG/dMVTUNE.LOG FOR DETAILS. > >>> make[2]: Entering directory `/home/magician/Desktop/ATLAS/build/bin' > >>> cd /home/magician/Desktop/ATLAS/build ; make error_report > >>> make[3]: Entering directory `/home/magician/Desktop/ATLAS/build' > >>> make -f Make.top error_report > >>> make[4]: Entering directory `/home/magician/Desktop/ATLAS/build' > >>> uname -a 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> gcc -v 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> Using built-in specs. 
> >>> Target: x86_64-redhat-linux > >>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man > >> --infodir=/usr/share/info --with-bugurl= > >> http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared > >> --enable-threads=posix --enable-checking=release --with-system-zlib > >> --enable-__cxa_atexit --disable-libunwind-exceptions > >> --enable-gnu-unique-object > >> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada > >> --enable-java-awt=gtk --disable-dssi > >> --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre > >> --enable-libgcj-multifile --enable-java-maintainer-mode > >> --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib > >> --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 > >> --build=x86_64-redhat-linux > >>> Thread model: posix > >>> gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) > >>> gcc -V 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> gcc: '-V' option must have argument > >>> make[4]: [error_report] Error 1 (ignored) > >>> gcc --version 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> tar cf error_Corei264SSE3.tar Make.inc bin/INSTALL_LOG/* > >>> gzip --best error_Corei264SSE3.tar > >>> mv error_Corei264SSE3.tar.gz error_Corei264SSE3.tgz > >>> make[4]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > >>> Error report error_.tgz has been created in your top-level ATLAS > >>> directory. Be sure to include this file in any help request. > >>> cat: ../../CONFIG/error.txt: No such file or directory > >>> cat: ../../CONFIG/error.txt: No such file or directory > >>> make[1]: *** [build] Error 255 > >>> make[1]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make: *** [build] Error 2 > >> > >> It's very troublesome for me to build ATLAS by myself. > >> My purpose is just using SciPy on my Python. > >> Even if it's optimized not so good for my environment, it's OK. > >> > >> Is there an easy or a sure way to build and install SciPy? > > > > > > Building ATLAS is much harder than building scipy, so you should try to > > find some rpm's for it, like > > http://linuxtoolkit.blogspot.com/2010/09/installing-lapack-blas-and-atlas-on.html. > > There's no problem building scipy against ATLAS from a binary install. > > > > Ralf From ralf.gommers at googlemail.com Sun May 27 04:16:23 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 27 May 2012 10:16:23 +0200 Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2 In-Reply-To: References: Message-ID: On Sun, May 27, 2012 at 4:05 AM, Magician wrote: > Hi Ralf and Sergio, > > > I thought my problems inclined to NumPy, and I posted to NumPy mailing > list. > But I couldn't get any answers for several days, so I'd like to ask again. 
> > > I tried to build NumPy again with site.cfg, but I got these errors: > > building 'numpy.core._sort' extension > > compiling C sources > > C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv > -O3 -Wall -Wstrict-prototypes -fPIC > > > > compile options: '-Inumpy/core/include > -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy > -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core > -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath > -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 > -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray > -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' > > gcc: build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.c > > gcc -pthread -shared > build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o > -L. -Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o > build/lib.linux-x86_64-2.7/numpy/core/_sort.so > > /usr/bin/ld: cannot find -lpython2.7 > > collect2: ld returned 1 exit status > > /usr/bin/ld: cannot find -lpython2.7 > > collect2: ld returned 1 exit status > > error: Command "gcc -pthread -shared > build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o > -L. -Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o > build/lib.linux-x86_64-2.7/numpy/core/_sort.so" failed with exit status 1 > > Before I built NumPy, I uncommented and modified site.cfg as below: > > [DEFAULT] > > library_dirs = /usr/local/python-2.7.3/lib:/usr/lib64:/usr/lib64/atlas > > include_dirs = > /usr/local/python-2.7.3/include:/usr/include:/usr/include/atlas > > > > [blas_opt] > > libraries = f77blas, cblas, atlas > > > > [lapack_opt] > > libraries = lapack, f77blas, cblas, atlas > > And "setup.py config" dumped these messages: > > Running from numpy source directory.F2PY Version 2 > > blas_opt_info: > > blas_mkl_info: > > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib > > libraries mkl,vml,guide not found in /usr/lib64 > > libraries mkl,vml,guide not found in /usr/lib64/atlas > > NOT AVAILABLE > > > > atlas_blas_threads_info: > > Setting PTATLAS=ATLAS > > libraries ptf77blas,ptcblas,atlas not found in > /usr/local/python-2.7.3/lib > > Setting PTATLAS=ATLAS > > customize GnuFCompiler > > Could not locate executable g77 > > Could not locate executable f77 > > customize IntelFCompiler > > Could not locate executable ifort > > Could not locate executable ifc > > customize LaheyFCompiler > > Could not locate executable lf95 > > customize PGroupFCompiler > > Could not locate executable pgf90 > > Could not locate executable pgf77 > > customize AbsoftFCompiler > > Could not locate executable f90 > > customize NAGFCompiler > > Found executable /usr/bin/f95 > > customize VastFCompiler > > customize CompaqFCompiler > > Could not locate executable fort > > customize IntelItaniumFCompiler > > Could not locate executable efort > > Could not locate executable efc > > customize IntelEM64TFCompiler > > customize Gnu95FCompiler > > Found executable /usr/bin/gfortran > > customize Gnu95FCompiler > > customize Gnu95FCompiler using config > > compiling '_configtest.c': > > > > /* This file is generated from numpy/distutils/system_info.py */ > > void ATL_buildinfo(void); > > int main(void) { > > ATL_buildinfo(); > > return 0; > > } > > > > C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv > -O3 -Wall -Wstrict-prototypes -fPIC > > > > compile options: '-c' > > gcc: _configtest.c > > gcc -pthread _configtest.o 
-L/usr/lib64/atlas -lptf77blas -lptcblas > -latlas -o _configtest > > ATLAS version 3.8.4 built by mockbuild on Wed Dec 7 18:04:21 GMT 2011: > > UNAME : Linux c6b5.bsys.dev.centos.org 2.6.32-44.2.el6.x86_64 #1 > SMP Wed Jul 21 12:48:32 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux > > INSTFLG : -1 0 -a 1 > > ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_PII -DATL_CPUMHZ=2261 -DATL_SSE2 > -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 > > F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle > > CACHEEDGE: 8388608 > > F77 : gfortran, version GNU Fortran (GCC) 4.4.6 20110731 (Red Hat > 4.4.6-3) > > F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g > -Wa,--noexecstack -fPIC -m64 > > SMC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) > > SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g > -Wa,--noexecstack -fPIC -m64 > > SKC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) > > SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g > -Wa,--noexecstack -fPIC -m64 > > success! > > removing: _configtest.c _configtest.o _configtest > > Setting PTATLAS=ATLAS > > FOUND: > > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > > library_dirs = ['/usr/lib64/atlas'] > > language = c > > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] > > include_dirs = ['/usr/include'] > > > > FOUND: > > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > > library_dirs = ['/usr/lib64/atlas'] > > language = c > > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] > > include_dirs = ['/usr/include'] > > > > lapack_opt_info: > > lapack_mkl_info: > > mkl_info: > > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib > > libraries mkl,vml,guide not found in /usr/lib64 > > libraries mkl,vml,guide not found in /usr/lib64/atlas > > NOT AVAILABLE > > > > NOT AVAILABLE > > > > atlas_threads_info: > > Setting PTATLAS=ATLAS > > libraries ptf77blas,ptcblas,atlas not found in > /usr/local/python-2.7.3/lib > > libraries lapack_atlas not found in /usr/local/python-2.7.3/lib > > libraries lapack_atlas not found in /usr/lib64/atlas > > numpy.distutils.system_info.atlas_threads_info > > Setting PTATLAS=ATLAS > > Setting PTATLAS=ATLAS > > FOUND: > > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > > library_dirs = ['/usr/lib64/atlas'] > > language = f77 > > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] > > include_dirs = ['/usr/include'] > > > > FOUND: > > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > > library_dirs = ['/usr/lib64/atlas'] > > language = f77 > > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] > > include_dirs = ['/usr/include'] > > > > running config > > I couldn't get enough informations about numscons/bento, > so I'm still trying to build it with site.cfg and CFLAGS. > You specified the additional library dir you need in site.cfg. You therefore don't need to use CFLAGS anymore. This is probably still what's going wrong, because the build error says it can't find Python itself anymore. So make sure that CFLAGS is not defined and try again. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From f_magician at mac.com Sun May 27 07:12:37 2012 From: f_magician at mac.com (Magician) Date: Sun, 27 May 2012 20:12:37 +0900 Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2 In-Reply-To: References: Message-ID: <601BA7AC-02E6-469D-AA51-535E52DE22E7@mac.com> Hi Ralf, Maybe I'm misunderstanding how to handle site.cfg, so I'm trying easy tests about CFLAGS and site.cfg. 
These tests are done on independent snapshots (already installed Python 2.7.3) of VMware. My libpython2.7.so and libpython2.7.so.1.0 are on /usr/local/python-2.7.3/lib. Lapack and ATLAS are not installed yet. With CFLAGS: > export CFLAGS="-L/usr/local/python-2.7.3/lib" > python setup.py config >> Running from numpy source directory.non-existing path in 'numpy/distutils': 'site.cfg' >> F2PY Version 2 >> blas_opt_info: >> blas_mkl_info: >> libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib >> libraries mkl,vml,guide not found in /usr/local/lib64 >> libraries mkl,vml,guide not found in /usr/local/lib >> libraries mkl,vml,guide not found in /usr/lib64 >> libraries mkl,vml,guide not found in /usr/lib >> NOT AVAILABLE >> >> atlas_blas_threads_info: >> Setting PTATLAS=ATLAS >> libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib >> libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64 >> libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib >> libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/atlas >> libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/sse2 >> libraries ptf77blas,ptcblas,atlas not found in /usr/lib64 >> libraries ptf77blas,ptcblas,atlas not found in /usr/lib >> NOT AVAILABLE >> >> atlas_blas_info: >> libraries f77blas,cblas,atlas not found in /usr/local/python-2.7.3/lib >> libraries f77blas,cblas,atlas not found in /usr/local/lib64 >> libraries f77blas,cblas,atlas not found in /usr/local/lib >> libraries f77blas,cblas,atlas not found in /usr/lib64/atlas >> libraries f77blas,cblas,atlas not found in /usr/lib64/sse2 >> libraries f77blas,cblas,atlas not found in /usr/lib64 >> libraries f77blas,cblas,atlas not found in /usr/lib >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1414: UserWarning: >> Atlas (http://math-atlas.sourceforge.net/) libraries not found. >> Directories to search for the libraries can be specified in the >> numpy/distutils/site.cfg file (section [atlas]) or by setting >> the ATLAS environment variable. >> warnings.warn(AtlasNotFoundError.__doc__) >> blas_info: >> libraries blas not found in /usr/local/python-2.7.3/lib >> libraries blas not found in /usr/local/lib64 >> libraries blas not found in /usr/local/lib >> libraries blas not found in /usr/lib64 >> libraries blas not found in /usr/lib >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1423: UserWarning: >> Blas (http://www.netlib.org/blas/) libraries not found. >> Directories to search for the libraries can be specified in the >> numpy/distutils/site.cfg file (section [blas]) or by setting >> the BLAS environment variable. >> warnings.warn(BlasNotFoundError.__doc__) >> blas_src_info: >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1426: UserWarning: >> Blas (http://www.netlib.org/blas/) sources not found. >> Directories to search for the sources can be specified in the >> numpy/distutils/site.cfg file (section [blas_src]) or by setting >> the BLAS_SRC environment variable. 
>> warnings.warn(BlasSrcNotFoundError.__doc__) >> NOT AVAILABLE >> >> lapack_opt_info: >> lapack_mkl_info: >> mkl_info: >> libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib >> libraries mkl,vml,guide not found in /usr/local/lib64 >> libraries mkl,vml,guide not found in /usr/local/lib >> libraries mkl,vml,guide not found in /usr/lib64 >> libraries mkl,vml,guide not found in /usr/lib >> NOT AVAILABLE >> >> NOT AVAILABLE >> >> atlas_threads_info: >> Setting PTATLAS=ATLAS >> libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib >> libraries lapack_atlas not found in /usr/local/python-2.7.3/lib >> libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64 >> libraries lapack_atlas not found in /usr/local/lib64 >> libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib >> libraries lapack_atlas not found in /usr/local/lib >> libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/atlas >> libraries lapack_atlas not found in /usr/lib64/atlas >> libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/sse2 >> libraries lapack_atlas not found in /usr/lib64/sse2 >> libraries ptf77blas,ptcblas,atlas not found in /usr/lib64 >> libraries lapack_atlas not found in /usr/lib64 >> libraries ptf77blas,ptcblas,atlas not found in /usr/lib >> libraries lapack_atlas not found in /usr/lib >> numpy.distutils.system_info.atlas_threads_info >> NOT AVAILABLE >> >> atlas_info: >> libraries f77blas,cblas,atlas not found in /usr/local/python-2.7.3/lib >> libraries lapack_atlas not found in /usr/local/python-2.7.3/lib >> libraries f77blas,cblas,atlas not found in /usr/local/lib64 >> libraries lapack_atlas not found in /usr/local/lib64 >> libraries f77blas,cblas,atlas not found in /usr/local/lib >> libraries lapack_atlas not found in /usr/local/lib >> libraries f77blas,cblas,atlas not found in /usr/lib64/atlas >> libraries lapack_atlas not found in /usr/lib64/atlas >> libraries f77blas,cblas,atlas not found in /usr/lib64/sse2 >> libraries lapack_atlas not found in /usr/lib64/sse2 >> libraries f77blas,cblas,atlas not found in /usr/lib64 >> libraries lapack_atlas not found in /usr/lib64 >> libraries f77blas,cblas,atlas not found in /usr/lib >> libraries lapack_atlas not found in /usr/lib >> numpy.distutils.system_info.atlas_info >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1330: UserWarning: >> Atlas (http://math-atlas.sourceforge.net/) libraries not found. >> Directories to search for the libraries can be specified in the >> numpy/distutils/site.cfg file (section [atlas]) or by setting >> the ATLAS environment variable. >> warnings.warn(AtlasNotFoundError.__doc__) >> lapack_info: >> libraries lapack not found in /usr/local/python-2.7.3/lib >> libraries lapack not found in /usr/local/lib64 >> libraries lapack not found in /usr/local/lib >> libraries lapack not found in /usr/lib64 >> libraries lapack not found in /usr/lib >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1341: UserWarning: >> Lapack (http://www.netlib.org/lapack/) libraries not found. >> Directories to search for the libraries can be specified in the >> numpy/distutils/site.cfg file (section [lapack]) or by setting >> the LAPACK environment variable. >> warnings.warn(LapackNotFoundError.__doc__) >> lapack_src_info: >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1344: UserWarning: >> Lapack (http://www.netlib.org/lapack/) sources not found. 
>> Directories to search for the sources can be specified in the >> numpy/distutils/site.cfg file (section [lapack_src]) or by setting >> the LAPACK_SRC environment variable. >> warnings.warn(LapackSrcNotFoundError.__doc__) >> NOT AVAILABLE >> >> running config Then I executed "python setup.py build", NumPy was built successfully. Next with site.cfg: > cp site.cfg.example site.cfg > vi site.cfg >> [DEFAULT] >> library_dirs = /usr/local/python-2.7.3/lib >> include_dirs = /usr/local/python-2.7.3/include > python setup.py config >> Running from numpy source directory.F2PY Version 2 >> blas_opt_info: >> blas_mkl_info: >> libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib >> NOT AVAILABLE >> >> atlas_blas_threads_info: >> Setting PTATLAS=ATLAS >> libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib >> NOT AVAILABLE >> >> atlas_blas_info: >> libraries f77blas,cblas,atlas not found in /usr/local/python-2.7.3/lib >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1414: UserWarning: >> Atlas (http://math-atlas.sourceforge.net/) libraries not found. >> Directories to search for the libraries can be specified in the >> numpy/distutils/site.cfg file (section [atlas]) or by setting >> the ATLAS environment variable. >> warnings.warn(AtlasNotFoundError.__doc__) >> blas_info: >> libraries blas not found in /usr/local/python-2.7.3/lib >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1423: UserWarning: >> Blas (http://www.netlib.org/blas/) libraries not found. >> Directories to search for the libraries can be specified in the >> numpy/distutils/site.cfg file (section [blas]) or by setting >> the BLAS environment variable. >> warnings.warn(BlasNotFoundError.__doc__) >> blas_src_info: >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1426: UserWarning: >> Blas (http://www.netlib.org/blas/) sources not found. >> Directories to search for the sources can be specified in the >> numpy/distutils/site.cfg file (section [blas_src]) or by setting >> the BLAS_SRC environment variable. >> warnings.warn(BlasSrcNotFoundError.__doc__) >> NOT AVAILABLE >> >> lapack_opt_info: >> lapack_mkl_info: >> mkl_info: >> libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib >> NOT AVAILABLE >> >> NOT AVAILABLE >> >> atlas_threads_info: >> Setting PTATLAS=ATLAS >> libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib >> libraries lapack_atlas not found in /usr/local/python-2.7.3/lib >> numpy.distutils.system_info.atlas_threads_info >> NOT AVAILABLE >> >> atlas_info: >> libraries f77blas,cblas,atlas not found in /usr/local/python-2.7.3/lib >> libraries lapack_atlas not found in /usr/local/python-2.7.3/lib >> numpy.distutils.system_info.atlas_info >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1330: UserWarning: >> Atlas (http://math-atlas.sourceforge.net/) libraries not found. >> Directories to search for the libraries can be specified in the >> numpy/distutils/site.cfg file (section [atlas]) or by setting >> the ATLAS environment variable. >> warnings.warn(AtlasNotFoundError.__doc__) >> lapack_info: >> libraries lapack not found in /usr/local/python-2.7.3/lib >> NOT AVAILABLE >> >> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1341: UserWarning: >> Lapack (http://www.netlib.org/lapack/) libraries not found. 
>> Directories to search for the libraries can be specified in the
>> numpy/distutils/site.cfg file (section [lapack]) or by setting
>> the LAPACK environment variable.
>> warnings.warn(LapackNotFoundError.__doc__)
>> lapack_src_info:
>> NOT AVAILABLE
>>
>> /home/magician/Desktop/numpy-1.6.1/numpy/distutils/system_info.py:1344: UserWarning:
>> Lapack (http://www.netlib.org/lapack/) sources not found.
>> Directories to search for the sources can be specified in the
>> numpy/distutils/site.cfg file (section [lapack_src]) or by setting
>> the LAPACK_SRC environment variable.
>> warnings.warn(LapackSrcNotFoundError.__doc__)
>> NOT AVAILABLE
>>
>> running config

Then I executed "python setup.py build" and got the -lpython errors I posted
before. If I edit site.cfg as below, the config messages become the same as
with CFLAGS, but the same -lpython errors are dumped:

> vi site.cfg
>> [DEFAULT]
>> library_dirs = /usr/local/python-2.7.3/lib:/usr/local/lib64:/usr/local/lib:/usr/lib64:/usr/lib
>> include_dirs = /usr/local/python-2.7.3/include

Magician

On 2012/05/27, at 17:16, Ralf Gommers wrote:

> On Sun, May 27, 2012 at 4:05 AM, Magician wrote:
>> Hi Ralf and Sergio,
>>
>> I thought my problems were related to NumPy, and I posted to the NumPy
>> mailing list. But I couldn't get any answers for several days, so I'd
>> like to ask again.
>>
>> I tried to build NumPy again with site.cfg, but I got these errors:
>> > building 'numpy.core._sort' extension
>> > compiling C sources
>> > C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC
>> >
>> > compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c'
>> > gcc: build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.c
>> > gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L. -Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/_sort.so
>> > /usr/bin/ld: cannot find -lpython2.7
>> > collect2: ld returned 1 exit status
>> > /usr/bin/ld: cannot find -lpython2.7
>> > collect2: ld returned 1 exit status
>> > error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L.
-Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/_sort.so" failed with exit status 1 >> >> Before I built NumPy, I uncommented and modified site.cfg as below: >> > [DEFAULT] >> > library_dirs = /usr/local/python-2.7.3/lib:/usr/lib64:/usr/lib64/atlas >> > include_dirs = /usr/local/python-2.7.3/include:/usr/include:/usr/include/atlas >> > >> > [blas_opt] >> > libraries = f77blas, cblas, atlas >> > >> > [lapack_opt] >> > libraries = lapack, f77blas, cblas, atlas >> >> And "setup.py config" dumped these messages: >> > Running from numpy source directory.F2PY Version 2 >> > blas_opt_info: >> > blas_mkl_info: >> > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib >> > libraries mkl,vml,guide not found in /usr/lib64 >> > libraries mkl,vml,guide not found in /usr/lib64/atlas >> > NOT AVAILABLE >> > >> > atlas_blas_threads_info: >> > Setting PTATLAS=ATLAS >> > libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib >> > Setting PTATLAS=ATLAS >> > customize GnuFCompiler >> > Could not locate executable g77 >> > Could not locate executable f77 >> > customize IntelFCompiler >> > Could not locate executable ifort >> > Could not locate executable ifc >> > customize LaheyFCompiler >> > Could not locate executable lf95 >> > customize PGroupFCompiler >> > Could not locate executable pgf90 >> > Could not locate executable pgf77 >> > customize AbsoftFCompiler >> > Could not locate executable f90 >> > customize NAGFCompiler >> > Found executable /usr/bin/f95 >> > customize VastFCompiler >> > customize CompaqFCompiler >> > Could not locate executable fort >> > customize IntelItaniumFCompiler >> > Could not locate executable efort >> > Could not locate executable efc >> > customize IntelEM64TFCompiler >> > customize Gnu95FCompiler >> > Found executable /usr/bin/gfortran >> > customize Gnu95FCompiler >> > customize Gnu95FCompiler using config >> > compiling '_configtest.c': >> > >> > /* This file is generated from numpy/distutils/system_info.py */ >> > void ATL_buildinfo(void); >> > int main(void) { >> > ATL_buildinfo(); >> > return 0; >> > } >> > >> > C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC >> > >> > compile options: '-c' >> > gcc: _configtest.c >> > gcc -pthread _configtest.o -L/usr/lib64/atlas -lptf77blas -lptcblas -latlas -o _configtest >> > ATLAS version 3.8.4 built by mockbuild on Wed Dec 7 18:04:21 GMT 2011: >> > UNAME : Linux c6b5.bsys.dev.centos.org 2.6.32-44.2.el6.x86_64 #1 SMP Wed Jul 21 12:48:32 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux >> > INSTFLG : -1 0 -a 1 >> > ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_PII -DATL_CPUMHZ=2261 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 >> > F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle >> > CACHEEDGE: 8388608 >> > F77 : gfortran, version GNU Fortran (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) >> > F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 >> > SMC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) >> > SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 >> > SKC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) >> > SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 >> > success! 
>> > removing: _configtest.c _configtest.o _configtest
>> > Setting PTATLAS=ATLAS
>> > FOUND:
>> > libraries = ['ptf77blas', 'ptcblas', 'atlas']
>> > library_dirs = ['/usr/lib64/atlas']
>> > language = c
>> > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
>> > include_dirs = ['/usr/include']
>> >
>> > FOUND:
>> > libraries = ['ptf77blas', 'ptcblas', 'atlas']
>> > library_dirs = ['/usr/lib64/atlas']
>> > language = c
>> > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
>> > include_dirs = ['/usr/include']
>> >
>> > lapack_opt_info:
>> > lapack_mkl_info:
>> > mkl_info:
>> > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib
>> > libraries mkl,vml,guide not found in /usr/lib64
>> > libraries mkl,vml,guide not found in /usr/lib64/atlas
>> > NOT AVAILABLE
>> >
>> > NOT AVAILABLE
>> >
>> > atlas_threads_info:
>> > Setting PTATLAS=ATLAS
>> > libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib
>> > libraries lapack_atlas not found in /usr/local/python-2.7.3/lib
>> > libraries lapack_atlas not found in /usr/lib64/atlas
>> > numpy.distutils.system_info.atlas_threads_info
>> > Setting PTATLAS=ATLAS
>> > Setting PTATLAS=ATLAS
>> > FOUND:
>> > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
>> > library_dirs = ['/usr/lib64/atlas']
>> > language = f77
>> > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
>> > include_dirs = ['/usr/include']
>> >
>> > FOUND:
>> > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
>> > library_dirs = ['/usr/lib64/atlas']
>> > language = f77
>> > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
>> > include_dirs = ['/usr/include']
>> >
>> > running config
>>
>> I couldn't get enough information about numscons/bento,
>> so I'm still trying to build it with site.cfg and CFLAGS.
>
> You specified the additional library dir you need in site.cfg. You therefore don't need to use CFLAGS anymore. This is probably still what's going wrong, because the build error says it can't find Python itself anymore. So make sure that CFLAGS is not defined and try again.
>
> Ralf

From sergio_r at mail.com Sun May 27 13:21:11 2012
From: sergio_r at mail.com (Sergio Rojas)
Date: Sun, 27 May 2012 13:21:11 -0400
Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2
Message-ID: <20120527172112.17920@gmx.com>

> From: Magician
> /usr/bin/ld: cannot find -lpython2.7

Magician,

This error indicates that the Python libraries are not being found.
Try this:

linux> locate libpython2.7.so
linux> echo $LD_LIBRARY_PATH

and verify that your library path contains the lib and include directories
of the Python you want to build against.

I don't recall this but, did you build your own Python?
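For example, assuming Python was installed under /usr/local/python-2.7.3 (the
prefix used earlier in this thread; substitute your own), the whole check is
something like:

linux> updatedb                        # refresh the locate database if it is stale
linux> locate libpython2.7.so          # where did the shared library end up?
linux> echo $LD_LIBRARY_PATH           # is that lib directory on the search path?
linux> export LD_LIBRARY_PATH=/usr/local/python-2.7.3/lib:$LD_LIBRARY_PATH

This is only a sketch: if locate reports the library under a different prefix,
export that directory instead.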
Sergio

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sergio_r at mail.com Sun May 27 14:10:41 2012
From: sergio_r at mail.com (Sergio Rojas)
Date: Sun, 27 May 2012 14:10:41 -0400
Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2
Message-ID: <20120527181042.17890@gmx.com>

Magician, try this:

linux> locate libtatlas.so

The directory in which it appears, plus the corresponding include directory
that holds the ATLAS headers (to find it, try "locate cblas.h"; on my system
the include directory is /usr/include/ and there should be an atlas directory
inside it), are the ones you need in site.cfg. Since I built my own ATLAS I
have:

[DEFAULT]
library_dirs = /home/srojas/myPROG/LapackLib_gfortran/Atlas64b/lib
include_dirs = /home/srojas/myPROG/LapackLib_gfortran/Atlas64b/include

[blas_opt]
libraries = ptf77blas, ptcblas, atlas

[lapack_opt]
libraries = lapack, ptf77blas, ptcblas, atlas

To compile NumPy I use:

python setup.py build --fcompiler=gfortran

(you need to use the same compiler that was used to build ATLAS). If it
finishes successfully, then to verify ATLAS support, before installing, try:

ldd ./build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so

which should show links to the library. If you still have problems, maybe you
need to start doing things from scratch, installing Python and ATLAS in your
own path.

Sergio

----- Original Message -----
From: Magician
Sent: 05/26/12 10:05 PM
To: Ralf Gommers, sergio_r at mail.com
Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2

Hi Ralf and Sergio,

I thought my problems were related to NumPy, and I posted to the NumPy
mailing list. But I couldn't get any answers for several days, so I'd like to
ask again.

I tried to build NumPy again with site.cfg, but I got these errors:
> building 'numpy.core._sort' extension
> compiling C sources
> C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC
>
> compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c'
> gcc: build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.c
> gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L.
-Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/_sort.so" failed with exit status 1 Before I built NumPy, I uncommented and modified site.cfg as below: > [DEFAULT] > library_dirs = /usr/local/python-2.7.3/lib:/usr/lib64:/usr/lib64/atlas > include_dirs = /usr/local/python-2.7.3/include:/usr/include:/usr/include/atlas > > [blas_opt] > libraries = f77blas, cblas, atlas > > [lapack_opt] > libraries = lapack, f77blas, cblas, atlas And "setup.py config" dumped these messages: > Running from numpy source directory.F2PY Version 2 > blas_opt_info: > blas_mkl_info: > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib > libraries mkl,vml,guide not found in /usr/lib64 > libraries mkl,vml,guide not found in /usr/lib64/atlas > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib > Setting PTATLAS=ATLAS > customize GnuFCompiler > Could not locate executable g77 > Could not locate executable f77 > customize IntelFCompiler > Could not locate executable ifort > Could not locate executable ifc > customize LaheyFCompiler > Could not locate executable lf95 > customize PGroupFCompiler > Could not locate executable pgf90 > Could not locate executable pgf77 > customize AbsoftFCompiler > Could not locate executable f90 > customize NAGFCompiler > Found executable /usr/bin/f95 > customize VastFCompiler > customize CompaqFCompiler > Could not locate executable fort > customize IntelItaniumFCompiler > Could not locate executable efort > Could not locate executable efc > customize IntelEM64TFCompiler > customize Gnu95FCompiler > Found executable /usr/bin/gfortran > customize Gnu95FCompiler > customize Gnu95FCompiler using config > compiling '_configtest.c': > > /* This file is generated from numpy/distutils/system_info.py */ > void ATL_buildinfo(void); > int main(void) { > ATL_buildinfo(); > return 0; > } > > C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC > > compile options: '-c' > gcc: _configtest.c > gcc -pthread _configtest.o -L/usr/lib64/atlas -lptf77blas -lptcblas -latlas -o _configtest > ATLAS version 3.8.4 built by mockbuild on Wed Dec 7 18:04:21 GMT 2011: > UNAME : Linux c6b5.bsys.dev.centos.org 2.6.32-44.2.el6.x86_64 #1 SMP Wed Jul 21 12:48:32 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux > INSTFLG : -1 0 -a 1 > ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_PII -DATL_CPUMHZ=2261 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 > F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle > CACHEEDGE: 8388608 > F77 : gfortran, version GNU Fortran (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) > F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 > SMC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) > SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 > SKC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) > SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64 > success! 
> removing: _configtest.c _configtest.o _configtest > Setting PTATLAS=ATLAS > FOUND: > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/lib64/atlas'] > language = c > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] > include_dirs = ['/usr/include'] > > FOUND: > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/lib64/atlas'] > language = c > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] > include_dirs = ['/usr/include'] > > lapack_opt_info: > lapack_mkl_info: > mkl_info: > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib > libraries mkl,vml,guide not found in /usr/lib64 > libraries mkl,vml,guide not found in /usr/lib64/atlas > NOT AVAILABLE > > NOT AVAILABLE > > atlas_threads_info: > Setting PTATLAS=ATLAS > libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib > libraries lapack_atlas not found in /usr/local/python-2.7.3/lib > libraries lapack_atlas not found in /usr/lib64/atlas > numpy.distutils.system_info.atlas_threads_info > Setting PTATLAS=ATLAS > Setting PTATLAS=ATLAS > FOUND: > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/lib64/atlas'] > language = f77 > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] > include_dirs = ['/usr/include'] > > FOUND: > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/lib64/atlas'] > language = f77 > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] > include_dirs = ['/usr/include'] > > running config I couldn't get enough informations about numscons/bento, so I'm still trying to build it with site.cfg and CFLAGS. How could I solve it? Magician On 2012/05/21, at 1:50, Ralf Gommers wrote: > > > On Sun, May 20, 2012 at 4:16 PM, Magician wrote: > Hi Ralf, > > > Thanks for your advice. > I tried to install BLAS/Lapack/ATLAS as below: > > yum install blas-devel lapack-devel atlas-devel > > Next I installed NumPy as below: > > tar xzvf numpy-1.6.1.tar.gz > > cd numpy-1.6.1 > > export CFLAGS="-L/usr/local/python-2.7.3/lib" > > Note that this overrides CFLAGS instead of appending that flag to the rest. If it doesn't work without that line, you have to either specify all cflags or use numscons/bento. > > Ralf > > > > python setup.py build > > But then I got those errors: > > building 'numpy.linalg.lapack_lite' extension > > compiling C sources > > C compiler: gcc -pthread -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/local/python-2.7.3/lib -fPIC > > > > creating build/temp.linux-x86_64-2.7/numpy/linalg > > compile options: '-DATLAS_INFO="\"3.8.4\"" -I/usr/include -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' > > gcc: numpy/linalg/lapack_litemodule.c > > gcc: numpy/linalg/python_xerbla.c > > /usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. 
-Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so > > /usr/bin/ld: cannot find -lpython2.7 > > collect2: ld returned 1 exit status > > /usr/bin/ld: cannot find -lpython2.7 > > collect2: ld returned 1 exit status > > error: Command "/usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1 > > If I haven't install BLAS/Lapack/ATLAS, NumPy will be > successfully built and installed. > > > Magician > > > On 2012/05/20, at 19:40, scipy-user-request at scipy.org wrote: > > > Message: 1 > > Date: Sun, 20 May 2012 10:21:12 +0200 > > From: Ralf Gommers > > Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2 > > To: SciPy Users List > > Message-ID: > > > > Content-Type: text/plain; charset="iso-8859-1" > > > > On Sat, May 19, 2012 at 5:09 PM, Magician wrote: > > > >> Hi All, > >> > >> > >> I'm trying to build SciPy from source code, > >> but I have some troubles. > >> > >> My environment is below: > >>> CentOS 6.2 on VMware Fusion 4.1.2 (CentOS was installed as Software > >> Development WS) > >>> Python 2.7.3 (already built from sources, installed at > >> /usr/local/python-2.7.3) > >>> NumPy 1.6.1, SciPy 0.10.1, ATLAS 3.8.4, Lapack 3.4.1 (now trying to > >> build) > >> I and my colleagues (other users) want to use recent Python, > >> so I installed Python from sources, and I can't install SciPy > >> by using yum command. > >> > >> Now I'm facing ATLAS compiling errors. > >> Configuration options are "--prefix=/usr/local/atlas-3.8.4 -Fa alg -fPIC". > >> I tried to build it for several times, and always I got errors as below: > >>> res/dgemvN_6_75 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS. > >>> > >>> ATL_gemvN_mm.c : 1257.99 > >>> ATL_gemvN_1x1_1.c : 581.74 > >>> ATL_gemvN_1x1_1a.c : 1589.45 > >>> ATL_gemvN_4x2_0.c : 813.13 > >>> ATL_gemvN_4x4_1.c : 755.54 > >>> make[3]: *** [res/dMVRES] Error 255 > >>> make[3]: Leaving directory > >> `/home/magician/Desktop/ATLAS/build/tune/blas/gemv' > >>> make[2]: *** > >> [/home/magician/Desktop/ATLAS/build/tune/blas/gemv/res/dMVRES] Error 2 > >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > >>> ERROR 734 DURING MVTUNE!!. CHECK INSTALL_LOG/dMVTUNE.LOG FOR DETAILS. > >>> make[2]: Entering directory `/home/magician/Desktop/ATLAS/build/bin' > >>> cd /home/magician/Desktop/ATLAS/build ; make error_report > >>> make[3]: Entering directory `/home/magician/Desktop/ATLAS/build' > >>> make -f Make.top error_report > >>> make[4]: Entering directory `/home/magician/Desktop/ATLAS/build' > >>> uname -a 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> gcc -v 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> Using built-in specs. 
> >>> Target: x86_64-redhat-linux > >>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man > >> --infodir=/usr/share/info --with-bugurl= > >> http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared > >> --enable-threads=posix --enable-checking=release --with-system-zlib > >> --enable-__cxa_atexit --disable-libunwind-exceptions > >> --enable-gnu-unique-object > >> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada > >> --enable-java-awt=gtk --disable-dssi > >> --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre > >> --enable-libgcj-multifile --enable-java-maintainer-mode > >> --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib > >> --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 > >> --build=x86_64-redhat-linux > >>> Thread model: posix > >>> gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) > >>> gcc -V 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> gcc: '-V' option must have argument > >>> make[4]: [error_report] Error 1 (ignored) > >>> gcc --version 2>&1 >> bin/INSTALL_LOG/ERROR.LOG > >>> tar cf error_Corei264SSE3.tar Make.inc bin/INSTALL_LOG/* > >>> gzip --best error_Corei264SSE3.tar > >>> mv error_Corei264SSE3.tar.gz error_Corei264SSE3.tgz > >>> make[4]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin' > >>> Error report error_.tgz has been created in your top-level ATLAS > >>> directory. Be sure to include this file in any help request. > >>> cat: ../../CONFIG/error.txt: No such file or directory > >>> cat: ../../CONFIG/error.txt: No such file or directory > >>> make[1]: *** [build] Error 255 > >>> make[1]: Leaving directory `/home/magician/Desktop/ATLAS/build' > >>> make: *** [build] Error 2 > >> > >> It's very troublesome for me to build ATLAS by myself. > >> My purpose is just using SciPy on my Python. > >> Even if it's optimized not so good for my environment, it's OK. > >> > >> Is there an easy or a sure way to build and install SciPy? > > > > > > Building ATLAS is much harder than building scipy, so you should try to > > find some rpm's for it, like > > http://linuxtoolkit.blogspot.com/2010/09/installing-lapack-blas-and-atlas-on.html. > > There's no problem building scipy against ATLAS from a binary install. > > > > Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From wesmckinn at gmail.com Sun May 27 16:25:34 2012 From: wesmckinn at gmail.com (Wes McKinney) Date: Sun, 27 May 2012 16:25:34 -0400 Subject: [SciPy-User] SciPy install from source fails on fresh Python 3.2.3, 64-bit Ubuntu Message-ID: I'm getting failure at the end of 2to3 on scipy 0.10.1, with easy_install and also from git master / 0.10.1 tag. 
I'll try rolling back to Python 3.2.2 (3.2.3 was released on 4/10) later today and see if that works RefactoringTool: /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/dsolve/umfpack/setup.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/fourier.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/interpolation.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/measurements.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/filters.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/morphology.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/integrate/quadpack.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/integrate/_ode.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/integrate/odepack.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/lib/lapack/__init__.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/lib/blas/__init__.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/cluster/__init__.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/cluster/vq.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/cluster/hierarchy.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/mstats_basic.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/morestats.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/distributions.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/kde.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/stats.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/fitpack.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/interpolate.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/ndgriddata.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/interpolate_wrapper.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/fitpack2.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/__init__.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/matlab/mio4.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/matlab/mio5.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/matlab/miobase.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/matlab/mio5_params.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/zeros.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/nnls.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/cobyla.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/lbfgsb.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/minpack.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/slsqp.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/spatial/__init__.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/spatial/distance.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/special/__init__.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/special/basic.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/special/add_newdocs.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/special/orthogonal.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/decomp.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/basic.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/misc.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/flinalg.py 
/tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/lapack.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/signal/__init__.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/signal/bsplines.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/signal/signaltools.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/signal/fir_filter_design.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/fftpack/basic.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/fftpack/pseudo_diffs.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/eigen/arpack/arpack.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/isolve/iterative.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/dsolve/linsolve.py /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/dsolve/umfpack/umfpack.py Running from scipy source directory. Traceback (most recent call last): File "/usr/local/bin/easy_install-3.2", line 9, in load_entry_point('distribute==0.6.27', 'console_scripts', 'easy_install-3.2')() File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 1915, in main with_ei_usage(lambda: File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 1896, in with_ei_usage return f() File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 1919, in distclass=DistributionWithoutHelpCommands, **kw File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command cmd_obj.run() File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 350, in run self.easy_install(spec, not self.no_deps) File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 590, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 620, in install_item dists = self.install_eggs(spec, download, tmpdir) File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 814, in install_eggs return self.build_and_install(setup_script, setup_base) File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 1094, in build_and_install self.run_setup(setup_script, setup_base, args) File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", line 1080, in run_setup run_setup(setup_script, args) File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/sandbox.py", line 31, in run_setup lambda: exec(compile(open( File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/sandbox.py", line 79, in run return func() File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/sandbox.py", line 34, in {'__file__':setup_script, '__name__':'__main__'}) File "setup.py", line 196, in File "setup.py", line 187, in setup_package File "/usr/local/lib/python3.2/site-packages/numpy/distutils/core.py", line 152, in setup config = 
configuration() File "setup.py", line 138, in configuration File "/usr/local/lib/python3.2/site-packages/numpy/distutils/misc_util.py", line 1002, in add_subpackage caller_level = 2) File "/usr/local/lib/python3.2/site-packages/numpy/distutils/misc_util.py", line 971, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python3.2/site-packages/numpy/distutils/misc_util.py", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/setup.py", line 4, in configuration File "/usr/local/lib/python3.2/site-packages/numpy/distutils/misc_util.py", line 743, in __init__ raise ValueError("%r is not a directory" % (package_path,)) ValueError: 'build/py3k/scipy' is not a directory From wesmckinn at gmail.com Sun May 27 21:38:08 2012 From: wesmckinn at gmail.com (Wes McKinney) Date: Sun, 27 May 2012 21:38:08 -0400 Subject: [SciPy-User] SciPy install from source fails on fresh Python 3.2.3, 64-bit Ubuntu In-Reply-To: References: Message-ID: On Sun, May 27, 2012 at 4:25 PM, Wes McKinney wrote: > I'm getting failure at the end of 2to3 on scipy 0.10.1, with > easy_install and also from git master / 0.10.1 tag. I'll try rolling > back to Python 3.2.2 (3.2.3 was released on 4/10) later today and see > if that works > > RefactoringTool: > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/dsolve/umfpack/setup.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/fourier.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/interpolation.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/measurements.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/filters.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/ndimage/morphology.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/integrate/quadpack.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/integrate/_ode.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/integrate/odepack.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/lib/lapack/__init__.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/lib/blas/__init__.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/cluster/__init__.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/cluster/vq.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/cluster/hierarchy.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/mstats_basic.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/morestats.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/distributions.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/kde.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/stats/stats.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/fitpack.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/interpolate.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/ndgriddata.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/interpolate_wrapper.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/interpolate/fitpack2.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/__init__.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/matlab/mio4.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/matlab/mio5.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/matlab/miobase.py > 
/tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/io/matlab/mio5_params.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/zeros.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/nnls.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/cobyla.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/lbfgsb.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/minpack.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/optimize/slsqp.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/spatial/__init__.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/spatial/distance.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/special/__init__.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/special/basic.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/special/add_newdocs.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/special/orthogonal.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/decomp.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/basic.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/misc.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/flinalg.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/linalg/lapack.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/signal/__init__.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/signal/bsplines.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/signal/signaltools.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/signal/fir_filter_design.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/fftpack/basic.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/fftpack/pseudo_diffs.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/eigen/arpack/arpack.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/isolve/iterative.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/dsolve/linsolve.py > /tmp/easy_install-sv1v29/scipy-0.10.1/build/py3k/scipy/sparse/linalg/dsolve/umfpack/umfpack.py > Running from scipy source directory. > Traceback (most recent call last): > ?File "/usr/local/bin/easy_install-3.2", line 9, in > ? ?load_entry_point('distribute==0.6.27', 'console_scripts', > 'easy_install-3.2')() > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 1915, in main > ? ?with_ei_usage(lambda: > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 1896, in with_ei_usage > ? ?return f() > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 1919, in > ? ?distclass=DistributionWithoutHelpCommands, **kw > ?File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup > ? ?dist.run_commands() > ?File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands > ? ?self.run_command(cmd) > ?File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command > ? ?cmd_obj.run() > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 350, in run > ? ?self.easy_install(spec, not self.no_deps) > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 590, in easy_install > ? 
?return self.install_item(spec, dist.location, tmpdir, deps) > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 620, in install_item > ? ?dists = self.install_eggs(spec, download, tmpdir) > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 814, in install_eggs > ? ?return self.build_and_install(setup_script, setup_base) > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 1094, in build_and_install > ? ?self.run_setup(setup_script, setup_base, args) > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/command/easy_install.py", > line 1080, in run_setup > ? ?run_setup(setup_script, args) > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/sandbox.py", > line 31, in run_setup > ? ?lambda: exec(compile(open( > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/sandbox.py", > line 79, in run > ? ?return func() > ?File "/usr/local/lib/python3.2/site-packages/distribute-0.6.27-py3.2.egg/setuptools/sandbox.py", > line 34, in > ? ?{'__file__':setup_script, '__name__':'__main__'}) > ?File "setup.py", line 196, in > ?File "setup.py", line 187, in setup_package > ?File "/usr/local/lib/python3.2/site-packages/numpy/distutils/core.py", > line 152, in setup > ? ?config = configuration() > ?File "setup.py", line 138, in configuration > ?File "/usr/local/lib/python3.2/site-packages/numpy/distutils/misc_util.py", > line 1002, in add_subpackage > ? ?caller_level = 2) > ?File "/usr/local/lib/python3.2/site-packages/numpy/distutils/misc_util.py", > line 971, in get_subpackage > ? ?caller_level = caller_level + 1) > ?File "/usr/local/lib/python3.2/site-packages/numpy/distutils/misc_util.py", > line 908, in _get_configuration_from_setup_py > ? ?config = setup_module.configuration(*args) > ?File "scipy/setup.py", line 4, in configuration > ?File "/usr/local/lib/python3.2/site-packages/numpy/distutils/misc_util.py", > line 743, in __init__ > ? ?raise ValueError("%r is not a directory" % (package_path,)) > ValueError: 'build/py3k/scipy' is not a directory To follow up, it appears that something was borked with my git clone-- a fresh git clone seems to have solved the problem. From numpy-discussion at maubp.freeserve.co.uk Thu May 24 08:19:23 2012 From: numpy-discussion at maubp.freeserve.co.uk (Peter) Date: Thu, 24 May 2012 13:19:23 +0100 Subject: [SciPy-User] [Numpy-discussion] Some numpy funcs for PyPy In-Reply-To: <6355.1337859149.6100681629467803648@ffe6.ukr.net> References: <6355.1337859149.6100681629467803648@ffe6.ukr.net> Message-ID: On Thu, May 24, 2012 at 12:32 PM, Dmitrey wrote: > hi all, > maybe you're aware of numpypy - numpy port for pypy (pypy.org) - Python > language implementation with dynamic compilation. > > Unfortunately, numpypy developmnent is very slow due to strict quality > standards and some other issues, so for my purposes I have provided some > missing numpypy funcs, in particular > > atleast_1d, atleast_2d, hstack, vstack, cumsum, isscalar, asscalar, > asfarray, flatnonzero, tile, zeros_like, ones_like, empty_like, where, > searchsorted > with "axis" parameter: nan(arg)min, nan(arg)max, all, any > > and have got some OpenOpt / FuncDesigner functionality working > faster than in CPython. 
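In case anyone else trips over this: the recovery is basically to start over
from a pristine tree, so that no stale build/py3k output gets reused. Roughly
(the clone URL and directory name here are from memory, not from my failing
setup):

git clone git://github.com/scipy/scipy.git scipy-fresh   # brand-new checkout
cd scipy-fresh
rm -rf build               # make sure no leftover 2to3 output survives
python3.2 setup.py build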
From numpy-discussion at maubp.freeserve.co.uk Thu May 24 08:19:23 2012
From: numpy-discussion at maubp.freeserve.co.uk (Peter)
Date: Thu, 24 May 2012 13:19:23 +0100
Subject: [SciPy-User] [Numpy-discussion] Some numpy funcs for PyPy
In-Reply-To: <6355.1337859149.6100681629467803648@ffe6.ukr.net>
References: <6355.1337859149.6100681629467803648@ffe6.ukr.net>
Message-ID:

On Thu, May 24, 2012 at 12:32 PM, Dmitrey wrote:
> hi all,
> maybe you're aware of numpypy - numpy port for pypy (pypy.org) - Python
> language implementation with dynamic compilation.
>
> Unfortunately, numpypy development is very slow due to strict quality
> standards and some other issues, so for my purposes I have provided some
> missing numpypy funcs, in particular
>
> atleast_1d, atleast_2d, hstack, vstack, cumsum, isscalar, asscalar,
> asfarray, flatnonzero, tile, zeros_like, ones_like, empty_like, where,
> searchsorted
> with "axis" parameter: nan(arg)min, nan(arg)max, all, any
>
> and have got some OpenOpt / FuncDesigner functionality working
> faster than in CPython.
>
> You can get the file with these functions here
>
> Also you may be interested in some info at http://openopt.org/PyPy
> Regards, Dmitrey.

As a NumPy user interested in PyPy, it is great to know more people are
trying to contribute in this area. I myself have only filed PyPy bugs about
missing NumPy features rendering the initial numpypy support useless to me.

On your website you wrote:
>> From my (Dmitrey) point of view numpypy development is
>> very unfriendly for newcomers - PyPy developers say "provide
>> code, preferably in interpreter level instead of AppLevel,
>> provide whole test coverage for all possible corner cases,
>> provide hg diff for code, and then, maybe, it will be committed".
>> Probably this is the reason why so insufficient number of
>> developers work on numpypy.

I assume that is paraphrased with a little hyperbole, but it isn't so
different from numpy (other than using git), or many other open source
projects. Unit tests are important, and taking patches without them is risky.

I've been subscribed to the pypy-dev list for a while, but I don't recall
seeing you posting there. Have you tried to submit any of your work to PyPy
yet? Perhaps you should have sent this message to pypy-dev instead? (I am
trying to be constructive, not critical.)

Regards,

Peter

From numpy-discussion at maubp.freeserve.co.uk Thu May 24 09:37:39 2012
From: numpy-discussion at maubp.freeserve.co.uk (Peter)
Date: Thu, 24 May 2012 14:37:39 +0100
Subject: [SciPy-User] [Numpy-discussion] Some numpy funcs for PyPy
In-Reply-To: <69589.1337864845.13913483336144781312@ffe16.ukr.net>
References: <6355.1337859149.6100681629467803648@ffe6.ukr.net> <69589.1337864845.13913483336144781312@ffe16.ukr.net>
Message-ID:

On Thu, May 24, 2012 at 2:07 PM, Dmitrey wrote:
> I had been subscribed IIRC for a couple of months

I don't follow the PyPy IRC so that would explain it. I don't know how much
they use that rather than their mailing list, but both seem a better place to
discuss their handling of external contributions than on the numpy-discussion
and scipy-user lists. Still, I hope you are able to make some contributions
to numpypy, because so far I've also found PyPy's numpy implementation too
limited for my usage.

Regards,

Peter

From f_magician at mac.com Mon May 28 11:09:53 2012
From: f_magician at mac.com (Magician)
Date: Tue, 29 May 2012 00:09:53 +0900
Subject: [SciPy-User] SciPy installation troubles on CentOS 6.2
In-Reply-To: <20120527172112.17920@gmx.com>
References: <20120527172112.17920@gmx.com>
Message-ID: <47F243F2-B8C8-4EB9-A52C-96913CD79CDF@mac.com>

Hi Sergio,

Thanks for your advice; it looks very relevant to my situation. When I tried
"locate libpython2.7.so", I got the error below:
> locate: can not stat () `/var/lib/mlocate/mlocate.db': No such file or directory

After I ran updatedb, the locate command printed the exact library paths:
> /usr/local/python-2.7.3/lib/libpython2.7.so
> /usr/local/python-2.7.3/lib/libpython2.7.so.1.0

LD_LIBRARY_PATH was empty, but I had already added the library paths to
/etc/ld.so.conf (and had already run ldconfig).
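(If I understand correctly, /etc/ld.so.conf only configures the runtime
loader; the link step that distutils runs does not read it, so the linker may
still need the directory explicitly. A sketch of what I could try next, with
my own paths, still untested:

export LIBRARY_PATH=/usr/local/python-2.7.3/lib          # link-time search path, my prefix
export LD_LIBRARY_PATH=/usr/local/python-2.7.3/lib:$LD_LIBRARY_PATH   # runtime search path
python setup.py build

LIBRARY_PATH is what gcc consults when resolving -lpython2.7 at link time,
while LD_LIBRARY_PATH only matters once the built modules are loaded.)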
Then I did "python setup.py build" without creating site.cfg, I got usual errors: > compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' > gcc: build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.c > gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L. -Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/_sort.so > /usr/bin/ld: cannot find -lpython2.7 > collect2: ld returned 1 exit status > /usr/bin/ld: cannot find -lpython2.7 > collect2: ld returned 1 exit status > error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L. -Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/_sort.so" failed with exit status 1 I have to do a bit more try.... Magician On 2012/05/28, at 2:21, Sergio Rojas wrote: > > > From: Magician > > > /usr/bin/ld: cannot find -lpython2.7 > > > Magician, > This error listed indicates that python libraries are not being found. > Try this: > > linux> locate libpython2.7.so > linux> echo $LD_LIBRARY_PATH > > and verify that in your library path are the lib and include directories corresponding > to the python you want to built. > > I don't recall this but, did you built your own python? > > Sergio > > >> ----- Original Message ----- >> From: Magician >> Sent: 05/26/12 10:05 PM >> To: Ralf Gommers, sergio_r at mail.com >> Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2 >> >> >> Hi Ralf and Sergio, >> >> >> I thought my problems inclined to NumPy, and I posted to NumPy mailing list. >> But I couldn't get any answers for several days, so I'd like to ask again. >> >> >> I tried to build NumPy again with site.cfg, but I got these errors: >> > building 'numpy.core._sort' extension >> > compiling C sources >> > C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC >> > >> > compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' >> > gcc: build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.c >> > gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L. -Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/_sort.so >> > /usr/bin/ld: cannot find -lpython2.7 >> > collect2: ld returned 1 exit status >> > /usr/bin/ld: cannot find -lpython2.7 >> > collect2: ld returned 1 exit status >> > error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/_sortmodule.o -L. 
>>
>> Before I built NumPy, I uncommented and modified site.cfg as below:
>> > [DEFAULT]
>> > library_dirs = /usr/local/python-2.7.3/lib:/usr/lib64:/usr/lib64/atlas
>> > include_dirs = /usr/local/python-2.7.3/include:/usr/include:/usr/include/atlas
>> >
>> > [blas_opt]
>> > libraries = f77blas, cblas, atlas
>> >
>> > [lapack_opt]
>> > libraries = lapack, f77blas, cblas, atlas
>>
>> And "setup.py config" dumped these messages:
>> > Running from numpy source directory.
>> > F2PY Version 2
>> > blas_opt_info:
>> > blas_mkl_info:
>> > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib
>> > libraries mkl,vml,guide not found in /usr/lib64
>> > libraries mkl,vml,guide not found in /usr/lib64/atlas
>> > NOT AVAILABLE
>> >
>> > atlas_blas_threads_info:
>> > Setting PTATLAS=ATLAS
>> > libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib
>> > Setting PTATLAS=ATLAS
>> > customize GnuFCompiler
>> > Could not locate executable g77
>> > Could not locate executable f77
>> > customize IntelFCompiler
>> > Could not locate executable ifort
>> > Could not locate executable ifc
>> > customize LaheyFCompiler
>> > Could not locate executable lf95
>> > customize PGroupFCompiler
>> > Could not locate executable pgf90
>> > Could not locate executable pgf77
>> > customize AbsoftFCompiler
>> > Could not locate executable f90
>> > customize NAGFCompiler
>> > Found executable /usr/bin/f95
>> > customize VastFCompiler
>> > customize CompaqFCompiler
>> > Could not locate executable fort
>> > customize IntelItaniumFCompiler
>> > Could not locate executable efort
>> > Could not locate executable efc
>> > customize IntelEM64TFCompiler
>> > customize Gnu95FCompiler
>> > Found executable /usr/bin/gfortran
>> > customize Gnu95FCompiler
>> > customize Gnu95FCompiler using config
>> > compiling '_configtest.c':
>> >
>> > /* This file is generated from numpy/distutils/system_info.py */
>> > void ATL_buildinfo(void);
>> > int main(void) {
>> >   ATL_buildinfo();
>> >   return 0;
>> > }
>> >
>> > C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC
>> >
>> > compile options: '-c'
>> > gcc: _configtest.c
>> > gcc -pthread _configtest.o -L/usr/lib64/atlas -lptf77blas -lptcblas -latlas -o _configtest
>> > ATLAS version 3.8.4 built by mockbuild on Wed Dec 7 18:04:21 GMT 2011:
>> > UNAME    : Linux c6b5.bsys.dev.centos.org 2.6.32-44.2.el6.x86_64 #1 SMP Wed Jul 21 12:48:32 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
>> > INSTFLG  : -1 0 -a 1
>> > ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_PII -DATL_CPUMHZ=2261 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664
>> > F2CDEFS  : -DAdd_ -DF77_INTEGER=int -DStringSunStyle
>> > CACHEEDGE: 8388608
>> > F77      : gfortran, version GNU Fortran (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
>> > F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64
>> > SMC      : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
>> > SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64
>> > SKC      : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
>> > SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64
>> > success!
>> > removing: _configtest.c _configtest.o _configtest
>> > Setting PTATLAS=ATLAS
>> > FOUND:
>> > libraries = ['ptf77blas', 'ptcblas', 'atlas']
>> > library_dirs = ['/usr/lib64/atlas']
>> > language = c
>> > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
>> > include_dirs = ['/usr/include']
>> >
>> > FOUND:
>> > libraries = ['ptf77blas', 'ptcblas', 'atlas']
>> > library_dirs = ['/usr/lib64/atlas']
>> > language = c
>> > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
>> > include_dirs = ['/usr/include']
>> >
>> > lapack_opt_info:
>> > lapack_mkl_info:
>> > mkl_info:
>> > libraries mkl,vml,guide not found in /usr/local/python-2.7.3/lib
>> > libraries mkl,vml,guide not found in /usr/lib64
>> > libraries mkl,vml,guide not found in /usr/lib64/atlas
>> > NOT AVAILABLE
>> >
>> > NOT AVAILABLE
>> >
>> > atlas_threads_info:
>> > Setting PTATLAS=ATLAS
>> > libraries ptf77blas,ptcblas,atlas not found in /usr/local/python-2.7.3/lib
>> > libraries lapack_atlas not found in /usr/local/python-2.7.3/lib
>> > libraries lapack_atlas not found in /usr/lib64/atlas
>> > numpy.distutils.system_info.atlas_threads_info
>> > Setting PTATLAS=ATLAS
>> > Setting PTATLAS=ATLAS
>> > FOUND:
>> > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
>> > library_dirs = ['/usr/lib64/atlas']
>> > language = f77
>> > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
>> > include_dirs = ['/usr/include']
>> >
>> > FOUND:
>> > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
>> > library_dirs = ['/usr/lib64/atlas']
>> > language = f77
>> > define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
>> > include_dirs = ['/usr/include']
>> >
>> > running config
>>
>> I couldn't get enough information about numscons/bento,
>> so I'm still trying to build it with site.cfg and CFLAGS.
>>
>> How could I solve it?
>>
>> Magician
>>
>> On 2012/05/21, at 1:50, Ralf Gommers wrote:
>>
>> > On Sun, May 20, 2012 at 4:16 PM, Magician wrote:
>> > Hi Ralf,
>> >
>> > Thanks for your advice.
>> > I tried to install BLAS/Lapack/ATLAS as below:
>> > > yum install blas-devel lapack-devel atlas-devel
>> >
>> > Next I installed NumPy as below:
>> > > tar xzvf numpy-1.6.1.tar.gz
>> > > cd numpy-1.6.1
>> > > export CFLAGS="-L/usr/local/python-2.7.3/lib"
>> >
>> > Note that this overrides CFLAGS instead of appending that flag to the rest. If it doesn't work without that line, you have to either specify all cflags or use numscons/bento.
>> >
>> > Ralf
>> >
>> > > python setup.py build
>> >
>> > But then I got those errors:
>> > > building 'numpy.linalg.lapack_lite' extension
>> > > compiling C sources
>> > > C compiler: gcc -pthread -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/local/python-2.7.3/lib -fPIC
>> > >
>> > > creating build/temp.linux-x86_64-2.7/numpy/linalg
>> > > compile options: '-DATLAS_INFO="\"3.8.4\"" -I/usr/include -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/python-2.7.3/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c'
>> > > gcc: numpy/linalg/lapack_litemodule.c
>> > > gcc: numpy/linalg/python_xerbla.c
>> > > /usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so
>> > > /usr/bin/ld: cannot find -lpython2.7
>> > > collect2: ld returned 1 exit status
>> > > /usr/bin/ld: cannot find -lpython2.7
>> > > collect2: ld returned 1 exit status
>> > > error: Command "/usr/bin/gfortran -Wall -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/atlas -L. -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1
>> >
>> > If I don't install BLAS/LAPACK/ATLAS, NumPy is successfully
>> > built and installed.
>> >
>> > Magician
>> >
>> > On 2012/05/20, at 19:40, scipy-user-request at scipy.org wrote:
>> >
>> > > Message: 1
>> > > Date: Sun, 20 May 2012 10:21:12 +0200
>> > > From: Ralf Gommers
>> > > Subject: Re: [SciPy-User] SciPy installation troubles on CentOS 6.2
>> > > To: SciPy Users List
>> > > Message-ID:
>> > > Content-Type: text/plain; charset="iso-8859-1"
>> > >
>> > > On Sat, May 19, 2012 at 5:09 PM, Magician wrote:
>> > >
>> > >> Hi All,
>> > >>
>> > >> I'm trying to build SciPy from source code,
>> > >> but I have some troubles.
>> > >>
>> > >> My environment is below:
>> > >>> CentOS 6.2 on VMware Fusion 4.1.2 (CentOS was installed as Software Development WS)
>> > >>> Python 2.7.3 (already built from sources, installed at /usr/local/python-2.7.3)
>> > >>> NumPy 1.6.1, SciPy 0.10.1, ATLAS 3.8.4, Lapack 3.4.1 (now trying to build)
>> > >> I and my colleagues (other users) want to use a recent Python,
>> > >> so I installed Python from sources, and I can't install SciPy
>> > >> by using the yum command.
>> > >>
>> > >> Now I'm facing ATLAS compiling errors.
>> > >> Configuration options are "--prefix=/usr/local/atlas-3.8.4 -Fa alg -fPIC".
>> > >> I tried to build it several times, and I always got errors as below:
>> > >>> res/dgemvN_6_75 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS.
>> > >>>
>> > >>> ATL_gemvN_mm.c : 1257.99
>> > >>> ATL_gemvN_1x1_1.c : 581.74
>> > >>> ATL_gemvN_1x1_1a.c : 1589.45
>> > >>> ATL_gemvN_4x2_0.c : 813.13
>> > >>> ATL_gemvN_4x4_1.c : 755.54
>> > >>> make[3]: *** [res/dMVRES] Error 255
>> > >>> make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build/tune/blas/gemv'
>> > >>> make[2]: *** [/home/magician/Desktop/ATLAS/build/tune/blas/gemv/res/dMVRES] Error 2
>> > >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin'
>> > >>> ERROR 734 DURING MVTUNE!!. CHECK INSTALL_LOG/dMVTUNE.LOG FOR DETAILS.
>> > >>> make[2]: Entering directory `/home/magician/Desktop/ATLAS/build/bin'
>> > >>> cd /home/magician/Desktop/ATLAS/build ; make error_report
>> > >>> make[3]: Entering directory `/home/magician/Desktop/ATLAS/build'
>> > >>> make -f Make.top error_report
>> > >>> make[4]: Entering directory `/home/magician/Desktop/ATLAS/build'
>> > >>> uname -a 2>&1 >> bin/INSTALL_LOG/ERROR.LOG
>> > >>> gcc -v 2>&1 >> bin/INSTALL_LOG/ERROR.LOG
>> > >>> Using built-in specs.
>> > >>> Target: x86_64-redhat-linux
>> > >>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
>> > >>> Thread model: posix
>> > >>> gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC)
>> > >>> gcc -V 2>&1 >> bin/INSTALL_LOG/ERROR.LOG
>> > >>> gcc: '-V' option must have argument
>> > >>> make[4]: [error_report] Error 1 (ignored)
>> > >>> gcc --version 2>&1 >> bin/INSTALL_LOG/ERROR.LOG
>> > >>> tar cf error_Corei264SSE3.tar Make.inc bin/INSTALL_LOG/*
>> > >>> gzip --best error_Corei264SSE3.tar
>> > >>> mv error_Corei264SSE3.tar.gz error_Corei264SSE3.tgz
>> > >>> make[4]: Leaving directory `/home/magician/Desktop/ATLAS/build'
>> > >>> make[3]: Leaving directory `/home/magician/Desktop/ATLAS/build'
>> > >>> make[2]: Leaving directory `/home/magician/Desktop/ATLAS/build/bin'
>> > >>> Error report error_.tgz has been created in your top-level ATLAS
>> > >>> directory. Be sure to include this file in any help request.
>> > >>> cat: ../../CONFIG/error.txt: No such file or directory
>> > >>> cat: ../../CONFIG/error.txt: No such file or directory
>> > >>> make[1]: *** [build] Error 255
>> > >>> make[1]: Leaving directory `/home/magician/Desktop/ATLAS/build'
>> > >>> make: *** [build] Error 2
>> > >>
>> > >> It's very troublesome for me to build ATLAS by myself.
>> > >> My purpose is just using SciPy on my Python.
>> > >> Even if it's not optimized so well for my environment, it's OK.
>> > >>
>> > >> Is there an easy or a sure way to build and install SciPy?
>> > >
>> > > Building ATLAS is much harder than building scipy, so you should try to
>> > > find some rpm's for it, like
>> > > http://linuxtoolkit.blogspot.com/2010/09/installing-lapack-blas-and-atlas-on.html.
>> > > There's no problem building scipy against ATLAS from a binary install.
>> > >
>> > > Ralf

From cimrman3 at ntc.zcu.cz Tue May 29 09:28:26 2012
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 29 May 2012 15:28:26 +0200
Subject: [SciPy-User] ANN: SfePy 2012.2
Message-ID: <4FC4CEFA.3030500@ntc.zcu.cz>

I am pleased to announce release 2012.2 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems of coupled partial differential equations by the finite element method. The code is based on the NumPy and SciPy packages. It is distributed under the new BSD license.
Home page: http://sfepy.org
Downloads, mailing list, wiki: http://code.google.com/p/sfepy/
Git (source) repository, issue tracker: http://github.com/sfepy

Highlights of this release
--------------------------

- reimplemented acoustic band gaps code using the homogenization engine
- high order quadrature rules
- unified dot product and mass terms, lots of other term updates/fixes
- updated the PDE solver application

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical).

Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order):

Vladimír Lukeš, Andre Smit

From bouloumag at gmail.com Thu May 31 09:18:10 2012
From: bouloumag at gmail.com (Darcoux Christine)
Date: Thu, 31 May 2012 09:18:10 -0400
Subject: [SciPy-User] 2-D data interpolation
Message-ID:

Hi,

I am converting MATLAB code to Python and I am looking for a function like interp2 [1] for 2-D data interpolation.

My MATLAB code has calls like

M = interp2(x,y,z, xi,yi, 'cubic')

where x, y and z describe a surface function. The interp2 function returns a matrix M corresponding to the elements of xi and yi and determined by cubic interpolation.

Any help would be greatly appreciated,

[1] http://www.mathworks.com/help/techdoc/ref/interp2.html

From lists at hilboll.de Thu May 31 09:27:33 2012
From: lists at hilboll.de (Andreas H.)
Date: Thu, 31 May 2012 15:27:33 +0200
Subject: [SciPy-User] 2-D data interpolation
In-Reply-To:
References:
Message-ID:

> Hi,
>
> I am converting MATLAB code to Python and I am looking for a
> function like interp2 [1] for 2-D data interpolation.
>
> My MATLAB code has calls like
>
> M = interp2(x,y,z, xi,yi, 'cubic')
>
> where x, y and z describe a surface function. The interp2 function
> returns a matrix M corresponding to the elements of xi and yi and
> determined by cubic interpolation.
>
> Any help would be greatly appreciated,

You have several choices in scipy.interpolate, see http://docs.scipy.org/doc/scipy/reference/interpolate.html#multivariate-interpolation. Probably you're looking for the two subclasses of BivariateSpline, namely LSQBivariateSpline and SmoothBivariateSpline, the first doing least-squares fitting, while the latter does some smoothing. Unfortunately, there are no examples in the docstrings as of now.
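Untested sketch, but usage would be roughly like this (the data here is made up, and kx=ky=3 selects the bicubic case):

    import numpy as np
    from scipy.interpolate import SmoothBivariateSpline

    # scattered samples of a surface z = f(x, y)
    x = np.random.uniform(0.0, 3.0, 200)
    y = np.random.uniform(0.0, 3.0, 200)
    z = np.sin(x) * np.cos(y)

    # bicubic smoothing spline through the scattered data
    spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3)

    # evaluate on a regular grid; xi and yi are 1-D and ascending
    xi = np.linspace(0.0, 3.0, 50)
    yi = np.linspace(0.0, 3.0, 50)
    M = spline(xi, yi)  # 2-D array of shape (len(xi), len(yi))

Note that M is indexed as M[i, j] -> (xi[i], yi[j]), which is transposed relative to MATLAB's interp2 convention.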
Cheers,
Andreas.

From nadavh at visionsense.com Thu May 31 14:34:26 2012
From: nadavh at visionsense.com (Nadav Horesh)
Date: Thu, 31 May 2012 18:34:26 +0000
Subject: [SciPy-User] 2-D data interpolation
Message-ID:

You can try also scipy.interpolate.griddata (not pylab's griddata!)

  Nadav

From david_baddeley at yahoo.com.au Thu May 31 15:44:38 2012
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Thu, 31 May 2012 12:44:38 -0700 (PDT)
Subject: [SciPy-User] 2-D data interpolation
Message-ID: <1338493478.77964.BPMail_high_noncarrier@web113409.mail.gq1.yahoo.com>

It's a bit of a special case, but if your starting values are already on a regular grid, scipy.ndimage.map_coordinates is what you're after.

Cheers,
David

------------------------------
On Fri, Jun 1, 2012 1:18 AM NZST Darcoux Christine wrote:

>Hi,
>
>I am converting MATLAB code to Python and I am looking for a
>function like interp2 [1] for 2-D data interpolation.
>
>My MATLAB code has calls like
>
>M = interp2(x,y,z, xi,yi, 'cubic')
>
>where x, y and z describe a surface function. The interp2 function
>returns a matrix M corresponding to the elements of xi and yi and
>determined by cubic interpolation.
>
>Any help would be greatly appreciated,
>
>[1] http://www.mathworks.com/help/techdoc/ref/interp2.html

From markus.baden at gmail.com Thu May 31 23:50:30 2012
From: markus.baden at gmail.com (Markus Baden)
Date: Fri, 1 Jun 2012 11:50:30 +0800
Subject: [SciPy-User] leastsq - When to scale covariance matrix by reduced chi square for confidence interval estimation
Message-ID:

Hi List,

I'm trying to get my head around when to scale the covariance matrix by the reduced chi square of the problem in order to estimate the error of a parameter obtained via fitting. I'm kinda stuck and would appreciate any pointers to an answer. From the documentation of scipy.optimize.leastsq and scipy.optimize.curve_fit, as well as from some old threads on this mailing list [1, 2], it seems the procedure in scipy is the following:

1) Create an estimate of the Hessian matrix based on the Jacobian at the final value, which is called the covariance matrix cov_x and is the curvature of the fitting parameters around the minimum
2) Calculate the reduced chi square, red_chi_2 = sum( (w_i * (y_i - f_i))**2 ) / dof, where w_i are the weights, y_i the data, f_i the fit, and dof the degrees of freedom (the number of data points minus the number of fitted parameters)
3) Get parameter error estimates by calculating sqrt(diag(cov_x) * red_chi_2)
4) Scale the confidence interval by the appropriate value of the Student t distribution, e.g. when predicting the standard deviation of a single value just *= 1

So far, so good.
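Spelled out in code, the procedure would be roughly the following (a self-contained sketch fitting a made-up constant-offset model; this is not the actual code of the gist in [5]):

    import numpy as np
    from scipy.optimize import leastsq

    # made-up data: a constant offset plus known per-point errors
    x = np.linspace(0.0, 10.0, 20)
    sigma = 2.0 * np.ones_like(x)
    y = -71.0 + sigma * np.random.randn(x.size)

    # weighted residuals, w_i = 1 / sigma_i
    def residuals(p):
        return (y - p[0]) / sigma

    p, cov_x, info, mesg, ier = leastsq(residuals, [0.0], full_output=True)

    dof = y.size - len(p)                        # degrees of freedom
    red_chi_2 = (residuals(p) ** 2).sum() / dof  # step 2

    err_literature = np.sqrt(np.diag(cov_x))            # cov_x at face value
    err_scipy = np.sqrt(np.diag(cov_x) * red_chi_2)     # steps 1-3 as above

With red_chi_2 close to 1, the two conventions give nearly the same number, which matches the first case of the output below.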
However, in the literature [3, 4] I often find that steps 2 and 3 are skipped when the data is weighted by errors in the individual observations. Obviously for a good fit with red_chi_2 = 1 both ways of getting an error are the same. [3] and [4] caution that the method they use assumes, among other things, normality and a reduced chi square of about 1, and they discourage estimating the error in the fit for bad fits. However, it seems that the method currently in scipy is somehow more robust. Take for example data similar to the one I am currently working with [5]. The fit has a reduced chi square of about one, and hence the errors of both the scipy method and the literature method agree. If I make my reduced chi square worse by scaling the error bars, the method in the literature gives either very, very small errors or very, very large ones. The scipy method, however, always produces about the same error estimate. Here is the output of [5]:

Errors scaled by: 1
Offset: -71.0
Chi 2: 19.1752
Red. Chi 2: 1.0092
Error literature 2.8426
Error scipy 2.8557

Errors scaled by: 100.0
Offset: -71.0077
Chi 2: 0.0019
Red. Chi 2: 0.0001
Error literature 283.9153
Error scipy 2.8563

Errors scaled by: 0.01
Offset: -69.0724
Chi 2: 9797.5838
Red. Chi 2: 515.6623
Error literature 0.1235
Error scipy 2.8036

Now in the particular problem I am working on, we have a couple of fits like [5], and some of them have a slightly worse reduced chi square of, say, about 1.4 or 0.7. At this point the two methods start to deviate, and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to some textbook that explains the method used in scipy would be very helpful.

Thanks a lot,
Markus

P.S. On a related side remark: What happened to the benchmark tests against NIST certified non-linear regression data sets mentioned in [1]? Are they still in the code base? And are there similar benchmark data sets for non-linear regression with observations that have error bars on them?

[1] http://thread.gmane.org/gmane.comp.python.scientific.user/19482
[2] http://thread.gmane.org/gmane.comp.python.scientific.user/31023/focus=31024
[3] Numerical Recipes in C, 2nd edition, chapter 15.6, especially p. 695-698
[4] Data Reduction and Error Analysis for the Physical Sciences, 3rd edition, chapter 8.5, p. 157
[5] Gist of self-contained example at https://gist.github.com/2848439

From chris.barker at noaa.gov Wed May 30 19:45:14 2012
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 30 May 2012 16:45:14 -0700
Subject: [SciPy-User] Scientific Software and Web Developer Needed
Message-ID:

Scientific Software and Web Developer Needed
NOAA Emergency Response Division

Help us develop our next-generation oil spill transport model.

Background: The Emergency Response Division (ERD) of NOAA's Office of Response and Restoration (OR&R) provides scientific expertise to support the response to oil and chemical spills in the coastal environment. We played a major role in the recent Deepwater Horizon oil spill in the Gulf of Mexico. In order to fulfill our mission, we develop many of the software tools and models required to support a response to hazardous material spills. We are currently in the middle of a program to develop our next-generation oil spill transport model, taking into account lessons learned from years of response and recent major incidents. There are currently two positions available: one will focus on the computational code in C++, Python, and Cython, and the other on a new web front end (Python, HTML, CSS, JavaScript).

General Characteristics: The incumbents in these positions will provide software development services to support the mission of the Emergency Response Division of NOAA's Office of Response and Restoration. As part of his/her efforts, independent evaluation and application of development techniques, algorithms, software architecture, and programming patterns will be required. The incumbent will work with the staff of ERD to provide analysis on user needs and software, GUI, and library design. He/she will be expected to work primarily on site at NOAA's facility in Seattle.

Knowledge: The incumbent must be able to apply modern concepts of software engineering and design to the development of computational code, web applications, and libraries. The incumbent will need to be able to design, write, refactor, and implement code for a complex web application and/or computational library. The incumbent will work with a multi-disciplinary team including scientists, users, and other developers, utilizing software development practices such as usability design, version control, bug and issue tracking, and unit testing. Good communication skills and the knowledge of working as part of a team are required.

Direction received: The incumbent will participate on various research and development teams. While endpoints will be identified through Division management and some direct supervision will be provided, the incumbent will be responsible for progressively being able to take input from team meetings and design objectives and propose strategies for reaching endpoints.
Typical duties and responsibilities: The incumbent will work with the oil and chemical spill modeling team to improve and develop new tools and models used in fate and transport forecasting. Different components of the project will be written in C++, Python, and JavaScript.

Education requirement, minimum: Bachelor's degree in a technical discipline.

Experience requirement, minimum: One to five years' experience in development of complex software systems in one or more full-featured programming languages (C, C++, Java, Python, JavaScript, Ruby, Fortran, etc.)

The team requires experience in the following languages/disciplines. Each incumbent will need experience in some subset:

* Computational/Scientific programming
* Numerical Analysis/Methods
* Parallel processing
* Desktop GUI
* Web services
* Web clients: HTML/CSS/JavaScript
* Python
* C/C++
* Python--C/C++ integration
* Software development team leadership

While the incumbent will work on-site at NOAA, directly with the NOAA team, this is a contract position with General Dynamics Information Technology: http://www.gdit.com/default.aspx

For more information and to apply, use the GDIT web site:

http://www.resumeware.net/gdns_rw/gdns_web/job_detail.cfm?recnum=1&totalrecs=1&start=1&pagestart=1

If that long url doesn't work, try:

http://www.resumeware.net/gdns_rw/gdns_web/job_search.cfm

and search for job ID: 199765

You can also send questions about employment issues to:
Susan Bowley: susan.bowley at gdit.com

And questions about the nature of the work to:
Chris Barker: Chris.Barker at noaa.gov

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R          (206) 526-6959  voice
7600 Sand Point Way NE (206) 526-6329  fax
Seattle, WA 98115      (206) 526-6317  main reception

Chris.Barker at noaa.gov

From gregory.mosby at gmail.com Thu May 31 00:19:09 2012
From: gregory.mosby at gmail.com (Greg Mosby)
Date: Wed, 30 May 2012 23:19:09 -0500
Subject: [SciPy-User] SciPy Install on Cygwin (Successful Once)
Message-ID:

Hi,

I'm installing SciPy on my Win7 machine using Cygwin and its Python 2.6.7 version. I actually managed to successfully install SciPy not too long ago, but I had some rebasing issues that ended with me having to reinstall Cygwin and those directories. I followed the basic outline of installing ATLAS plus LAPACK. I have NumPy. I made a site.cfg for my unzipped scipy-0.10.1 directory to make sure it would look in the right spots for the lapack, atlas, etc. libraries. It's the same as the one I used for my successful install.

But when I run:

python setup.py config_fc --fcompiler=gnu95 install

I get undefined reference errors. I can tell they are somehow due to the install script not including a clapack.h file. The location of the file is in my environment path, but perhaps not my Python sys.path. However, it's in one of the include directories I put in the site.cfg too, so I'm not sure why the install is missing it.
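One way to narrow that down is to check whether the ATLAS archives being linked actually export the C-interface clapack_* symbols (a sketch, assuming the /usr/local/atlas/lib path from my site.cfg):

    # list which ATLAS archives, if any, define clapack_* symbols
    for lib in /usr/local/atlas/lib/*.a; do
        echo "== $lib"
        nm -g "$lib" | grep clapack_ || echo "   (no clapack_ symbols here)"
    done

If none of the archives defines them, the link step will fail no matter what site.cfg says.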
The last of the undefined references is:

"build/temp.cygwin-1.7.15-i686-2.6/build/src.cygwin-1.7.15-i686-2.6/build/src.cygwin-1.7.15-i686-2.6/scipy/lib/lapack/clapackmodule.o:clapackmodule.c:(.data+0x1414): undefined reference to `_clapack_ztrtri'
collect2: ld returned 1 exit status
error: Command "/usr/bin/gfortran -Wall -Wall -shared build/temp.cygwin-1.7.15-i686-2.6/build/src.cygwin-1.7.15-i686-2.6/build/src.cygwin-1.7.15-i686-2.6/scipy/lib/lapack/clapackmodule.o build/temp.cygwin-1.7.15-i686-2.6/build/src.cygwin-1.7.15-i686-2.6/fortranobject.o -L/usr/local/atlas/lib -L/usr/lib/gcc/i686-pc-cygwin/4.5.3 -L/usr/lib/python2.6/config -Lbuild/temp.cygwin-1.7.15-i686-2.6 -llapack -lptf77blas -lptcblas -latlas -lpython2.6 -lgfortran -o build/lib.cygwin-1.7.15-i686-2.6/scipy/lib/lapack/clapack.dll" failed with exit status 1"

I'll attach a copy of the results from: python -c 'from numpy.f2py.diagnose import run; run()'.

Thanks in advance.

--
Best,
Greg

------
os.name='posix'
------
sys.platform='cygwin'
------
sys.version: 2.6.7 (r267:88850, Feb 2 2012, 23:50:20) [GCC 4.5.3]
------
sys.prefix: /usr
------
sys.path=':/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg:/usr/lib/python2.6/site-packages/nose-1.1.2-py2.6.egg:/usr/lib/python2.6/site-packages/pyfits-3.0.7-py2.6-cygwin-1.7.15-i686.egg:/usr/lib/python2.6/site-packages/matplotlib-1.1.0-py2.6-cygwin-1.7.15-i686.egg:/usr/lib/python26.zip:/usr/lib/python2.6:/usr/lib/python2.6/plat-cygwin:/usr/lib/python2.6/lib-tk:/usr/lib/python2.6/lib-old:/usr/lib/python2.6/lib-dynload:/usr/lib/python2.6/site-packages:/usr/lib/python2.6/site-packages/PIL:/usr/lib/python2.6/site-packages/gtk-2.0'
------
Found new numpy version '1.6.2' in /usr/lib/python2.6/site-packages/numpy/__init__.pyc
Found f2py2e version '2' in /usr/lib/python2.6/site-packages/numpy/f2py/f2py2e.pyc
Found numpy.distutils version '0.4.0' in '/usr/lib/python2.6/site-packages/numpy/distutils/__init__.pyc'
------
Importing numpy.distutils.fcompiler ... ok
------
Checking availability of supported Fortran compilers:
GnuFCompiler instance properties:
  archiver        = ['/usr/bin/g77', '-cr']
  compile_switch  = '-c'
  compiler_f77    = ['/usr/bin/g77', '-g', '-Wall', '-O3', '-funroll-loops']
  compiler_f90    = None
  compiler_fix    = None
  libraries       = ['g2c']
  library_dirs    = ['/usr/lib/gcc/i686-pc-cygwin/3.4.4']
  linker_exe      = ['/usr/bin/g77', '-g', '-Wall', '-g', '-Wall']
  linker_so       = ['/usr/bin/g77', '-g', '-Wall', '-g', '-Wall', '-shared']
  object_switch   = '-o '
  ranlib          = ['/usr/bin/g77']
  version         = LooseVersion ('3.4.4')
  version_cmd     = ['/usr/bin/g77', '--version']
Gnu95FCompiler instance properties:
  archiver        = ['/usr/bin/gfortran', '-cr']
  compile_switch  = '-c'
  compiler_f77    = ['/usr/bin/gfortran', '-Wall', '-ffixed-form', '-O3', '-funroll-loops']
  compiler_f90    = ['/usr/bin/gfortran', '-Wall', '-O3', '-funroll-loops']
  compiler_fix    = ['/usr/bin/gfortran', '-Wall', '-ffixed-form', '-Wall', '-O3', '-funroll-loops']
  libraries       = ['gfortran']
  library_dirs    = ['/usr/lib/gcc/i686-pc-cygwin/4.5.3']
  linker_exe      = ['/usr/bin/gfortran', '-Wall', '-Wall']
  linker_so       = ['/usr/bin/gfortran', '-Wall', '-Wall', '-shared']
  object_switch   = '-o '
  ranlib          = ['/usr/bin/gfortran']
  version         = LooseVersion ('4.5.3')
  version_cmd     = ['/usr/bin/gfortran', '--version']
Fortran compilers found:
  --fcompiler=gnu    GNU Fortran 77 compiler (3.4.4)
  --fcompiler=gnu95  GNU Fortran 95 compiler (4.5.3)
Compilers available for this platform, but not found:
  --fcompiler=absoft   Absoft Corp Fortran Compiler
  --fcompiler=compaqv  DIGITAL or Compaq Visual Fortran Compiler
  --fcompiler=g95      G95 Fortran Compiler
  --fcompiler=intelev  Intel Visual Fortran Compiler for Itanium apps
  --fcompiler=intelv   Intel Visual Fortran Compiler for 32-bit apps
Compilers not available on this platform:
  --fcompiler=compaq   Compaq Fortran Compiler
  --fcompiler=hpux     HP Fortran 90 Compiler
  --fcompiler=ibm      IBM XL Fortran Compiler
  --fcompiler=intel    Intel Fortran Compiler for 32-bit apps
  --fcompiler=intele   Intel Fortran Compiler for Itanium apps
  --fcompiler=intelem  Intel Fortran Compiler for 64-bit apps
  --fcompiler=intelvem Intel Visual Fortran Compiler for 64-bit apps
  --fcompiler=lahey    Lahey/Fujitsu Fortran 95 Compiler
  --fcompiler=mips     MIPSpro Fortran Compiler
  --fcompiler=nag      NAGWare Fortran 95 Compiler
  --fcompiler=none     Fake Fortran compiler
  --fcompiler=pathf95  PathScale Fortran Compiler
  --fcompiler=pg       Portland Group Fortran Compiler
  --fcompiler=sun      Sun or Forte Fortran 95 Compiler
  --fcompiler=vast     Pacific-Sierra Research Fortran 90 Compiler
For compiler details, run 'config_fc --verbose' setup command.
------
Importing numpy.distutils.cpuinfo ... ok
------
CPU information: CPUInfoBase__get_nbits getNCPUs has_mmx has_sse has_sse2 has_sse3 has_ssse3 is_32bit is_Intel is_i686
------