From cimrman3 at ntc.zcu.cz Fri Dec 1 04:15:58 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 01 Dec 2006 10:15:58 +0100 Subject: [SciPy-user] umfpack and building scipy from source In-Reply-To: References: <456EAB45.3070500@ntc.zcu.cz> <456F2D75.6070609@gmail.com> Message-ID: <456FF2CE.9090109@ntc.zcu.cz> Ryan Krauss wrote: > Following Robert Kern's advice I am getting very close. The API error > is now gone. > > Here is my current scipy.test() output. > > In [4]: scipy.test() > Warning: FAILURE importing tests for om '...y\\linsolve\\umfpack\\umfpack.pyc'> > C:\Python24\Lib\site-packages\scipy\linsolve\umfpack\tests\test_umfpack.py:17: A > ttributeError: 'module' object has no attribute 'umfpack' (in ?) > ... This is ok, since you do not have umfpack. r. From nwagner at iam.uni-stuttgart.de Fri Dec 1 04:30:20 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 01 Dec 2006 10:30:20 +0100 Subject: [SciPy-user] umfpack and building scipy from source In-Reply-To: <456FF2CE.9090109@ntc.zcu.cz> References: <456EAB45.3070500@ntc.zcu.cz> <456F2D75.6070609@gmail.com> <456FF2CE.9090109@ntc.zcu.cz> Message-ID: <456FF62C.9050402@iam.uni-stuttgart.de> Robert Cimrman wrote: > Ryan Krauss wrote: > >> Following Robert Kern's advice I am getting very close. The API error >> is now gone. >> >> Here is my current scipy.test() output. >> >> In [4]: scipy.test() >> Warning: FAILURE importing tests for > om '...y\\linsolve\\umfpack\\umfpack.pyc'> >> C:\Python24\Lib\site-packages\scipy\linsolve\umfpack\tests\test_umfpack.py:17: A >> ttributeError: 'module' object has no attribute 'umfpack' (in ?) >> > > ... > > This is ok, since you do not have umfpack. > > r. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I recommend UMFPACK. It's faster than SuperLU. SuperLU solution Use minimum degree ordering on A'+A. 
CPU time SuperLU 0.32002 Residual SuperLU 3.87138066926e-06 UMFPACK solution CPU time UMFPACK 0.032002 Residual UMFPACK 2.88360989563e-07 Nils From nwagner at iam.uni-stuttgart.de Fri Dec 1 06:54:46 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 01 Dec 2006 12:54:46 +0100 Subject: [SciPy-user] umfpack.report_info() Message-ID: <45701806.7070603@iam.uni-stuttgart.de> Hi, Umfpack provides a function for printing statistics. I am wondering if SuperLU has a similar function. Nils from scipy import * import scipy.linsolve.umfpack as um print 'Reading A' A = io.mmread('sherman2.mtx') A = A.tocsr() print 'Reading RHS' b = io.mmread('sherman2_rhs1.mtx').squeeze() # Construct the solver umfpack = um.UmfpackContext() # Use default 'di' family of UMFPACK routines. # Make LU decomposition. umfpack.numeric(A) # Use the already LU-decomposed matrix. sol1 = umfpack( um.UMFPACK_A, A, b, autoTranspose = True ) # Print statistics. umfpack.report_info() From tgrav at mac.com Fri Dec 1 07:34:23 2006 From: tgrav at mac.com (Tommy Grav) Date: Fri, 1 Dec 2006 07:34:23 -0500 Subject: [SciPy-user] ScipySuperpack Message-ID: <78043CD3-3A2D-438D-BB76-8A4EEA34A1CA@mac.com> I installed the Mac ScipySuperpack (from http://www.scipy.org/Download). However it seems that the version of matplotlib in there is not compatible with its version of numpy [tgrav@******] ch2/pbcd -> python ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on Python 2.4.3 (#1, Apr 3 2006, 18:07:18) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> import pylab >>> x = range(1,10) >>> y = range(1,10) >>> pylab.plot(x,y) [] >>> pylab.show() alloc: invalid block: 0xa08bcd8: a 68 0 Abort [tgrav@*****] ch2/pbcd -> Anyone know how to fix this? 
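Nils's UMFPACK-vs-SuperLU comparison earlier in this digest can be reproduced through today's scipy.sparse.linalg interface. A minimal sketch, using a tiny stand-in system rather than the sherman2.mtx data from the thread; whether the UMFPACK path is actually taken depends on scikit-umfpack being installed, otherwise spsolve quietly falls back to SuperLU:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Small stand-in system (the thread uses sherman2.mtx, not included here)
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

x_slu = spsolve(A, b, use_umfpack=False)  # force the SuperLU path
x_umf = spsolve(A, b, use_umfpack=True)   # UMFPACK, if scikit-umfpack is present

# Residuals comparable to the ones Nils reports
print("residual (SuperLU):", np.linalg.norm(A @ x_slu - b))
print("residual (UMFPACK):", np.linalg.norm(A @ x_umf - b))
```

Timing the two calls on a large matrix (e.g. with timeit) reproduces the kind of comparison quoted above.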
Cheers Tommy From lou_boog2000 at yahoo.com Fri Dec 1 10:06:57 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Fri, 1 Dec 2006 07:06:57 -0800 (PST) Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules Message-ID: <461232.60327.qm@web34413.mail.mud.yahoo.com> I have some number crunching routines I want to code in C or C++ and then use SWIG to create a module I can import. I can get the simple example.i stuff in the SWIG package to work, but I need to pass NumPy arrays back and forth. I have looked up typemaps and googled examples, but whatever I find is pretty general and high level for me and my needs -- at least until I get more experience. Does anyone know of simple examples of passing NumPy arrays and using SWIG? I really don't want to branch out to other objects, yet. No C++ templates or fancy stuff. Just NumPy arrays to C arrays (or something reasonable in C) will get me a long way in my coding for now. Thanks for any suggestions or pointers. -- Lou Pecora US Naval Research Lab ____________________________________________________________________________________ Do you Yahoo!? Everyone is raving about the all-new Yahoo! Mail beta. http://new.mail.yahoo.com From gael.varoquaux at normalesup.org Fri Dec 1 10:14:30 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 1 Dec 2006 16:14:30 +0100 Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <461232.60327.qm@web34413.mail.mud.yahoo.com> References: <461232.60327.qm@web34413.mail.mud.yahoo.com> Message-ID: <20061201151430.GA6179@clipper.ens.fr> Did you consider using ctypes? http://scipy.org/Cookbook/Ctypes2 . I had a great experience with it. Gaël On Fri, Dec 01, 2006 at 07:06:57AM -0800, Lou Pecora wrote: > I have some number crunching routines I want to code > in C or C++ and then use SWIG to create a module I can > import. 
I can get the simple example.i stuff in the > SWIG package to work, but I need to pass NumPy arrays > back and forth. I have looked up typemaps and googled > examples, but whatever I find is pretty general and > high level for me and my needs -- at least until I get > more experience. > Does anyone know of simple examples of passing NumPy > arrays and using SWIG? I really don't want to branch > out to other objects, yet. No C++ templates or fancy > stuff. Just NumPy arrays to C arrays (or something > reasonable in C) will get me a long way in my coding > for now. > Thanks for any suggestions or pointers. > -- Lou Pecora > US Naval Research Lab > ____________________________________________________________________________________ > Do you Yahoo!? > Everyone is raving about the all-new Yahoo! Mail beta. > http://new.mail.yahoo.com > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From lou_boog2000 at yahoo.com Fri Dec 1 10:28:09 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Fri, 1 Dec 2006 07:28:09 -0800 (PST) Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <20061201151430.GA6179@clipper.ens.fr> Message-ID: <233745.61837.qm@web34407.mail.mud.yahoo.com> Thanks will check them out. I've seen the word, but don't know about them. -- Lou --- Gael Varoquaux wrote: > Did you consider using ctypes. > http://scipy.org/Cookbook/Ctypes2 . I had > a great experience with it. > > Ga?l ____________________________________________________________________________________ Yahoo! Music Unlimited Access over 1 million songs. 
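The ctypes approach Gael points to can be sketched without writing any C at all, by borrowing libc's memset as a stand-in for a user-compiled routine -- loading your own compiled .so and declaring its argument types works exactly the same way:

```python
import ctypes
import ctypes.util
import numpy as np

# Load the C runtime; a hand-compiled shared library would be loaded the same way.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.memset.restype = ctypes.c_void_p
libc.memset.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_size_t]

a = np.arange(10, dtype=np.float64)
# Hand the array's underlying buffer straight to the C function -- no copying.
libc.memset(a.ctypes.data_as(ctypes.c_void_p), 0, a.nbytes)
assert (a == 0).all()
```

For stricter type checking on your own functions, numpy.ctypeslib.ndpointer lets you declare dtype and shape in argtypes instead of a raw void pointer.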
http://music.yahoo.com/unlimited From oliphant.travis at ieee.org Fri Dec 1 14:34:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 01 Dec 2006 12:34:43 -0700 Subject: [SciPy-user] problem with lsim In-Reply-To: References: Message-ID: <457083D3.9040600@ieee.org> Ryan Krauss wrote: > I was hoping that updating my versions would fix this problem, but > after wrestling with building from source under windows (and mostly > getting it working), I still get: > This was a problem with NumPy. It is now fixed in SVN and should be in the 1.0.1 release of NumPy. -Travis From jrpirone at schoolph.umass.edu Fri Dec 1 14:35:52 2006 From: jrpirone at schoolph.umass.edu (jrpirone) Date: Fri, 01 Dec 2006 14:35:52 -0500 Subject: [SciPy-user] clapack and cblas modules are empty Message-ID: <1165001752.5095.20.camel@jrpirone-desktop> Hello, I've installed scipy and numpy from Andrew Straw's repository, all other libraries are from the standard Ubuntu repositories (6.10). When I execute scipy.test(), the following warnings appear: **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. 
**************************************************************** Running the command: ldd /usr/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so Gives me: linux-gate.so.1 => (0xffffe000) liblapack.so.3 => /usr/lib/atlas/sse2/liblapack.so.3 (0xb7958000) libblas.so.3 => /usr/lib/atlas/sse2/libblas.so.3 (0xb7387000) libg2c.so.0 => /usr/lib/libg2c.so.0 (0xb735f000) libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7339000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb732e000) libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb71fa000) /lib/ld-linux.so.2 (0x80000000) Is there a problem with the libraries I have installed? Should I be worried about these warnings, or am I just being compulsive? Thanks for your help, Jason From joshuafr at gmail.com Fri Dec 1 16:07:03 2006 From: joshuafr at gmail.com (Joshua Petterson) Date: Fri, 1 Dec 2006 22:07:03 +0100 Subject: [SciPy-user] geostatistics Message-ID: Hi all, I can't find any package for geostatistics in scipy (kriging, variograms, ...), do I have to use R or is there another place for that? Best regards From R.Springuel at umit.maine.edu Fri Dec 1 11:57:04 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Fri, 01 Dec 2006 16:57:04 +0000 Subject: [SciPy-user] pajer@iname.com has sent you a file. Message-ID: <45705EE0.3080500@umit.maine.edu> So, I downloaded and installed those files, but am still getting the same error. Also, I installed the Superpack on my Mac system and can't seem to import either scipy or matplotlib. I get the following errors: Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/pylab.py", line 1, in ? from matplotlib.pylab import * ImportError: No module named matplotlib.pylab and Traceback (most recent call last): File "", line 1, in ? NameError: name 'python' is not defined >>> from scipy import * Traceback (most recent call last): File "", line 1, in ? 
File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ? from lapack import get_lapack_funcs File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ? from scipy.linalg import flapack ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/flapack.so: Library not loaded: /usr/local/lib/libgfortran.1.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/flapack.so Reason: image not found What does all that mean? I thought the Superpack installs were supposed to work right out of the box? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From robert.kern at gmail.com Fri Dec 1 20:18:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 01 Dec 2006 19:18:21 -0600 Subject: [SciPy-user] geostatistics In-Reply-To: References: Message-ID: <4570D45D.5030708@gmail.com> Joshua Petterson wrote: > Hi all, > I can't find any package for geostatistics in scipy (kriging, > variograms, ...), do I have to use R or is there another place for > that? Sadly, no, we haven't had any kriging algorithms contributed to scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Fri Dec 1 20:20:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 01 Dec 2006 19:20:24 -0600 Subject: [SciPy-user] clapack and cblas modules are empty In-Reply-To: <1165001752.5095.20.camel@jrpirone-desktop> References: <1165001752.5095.20.camel@jrpirone-desktop> Message-ID: <4570D4D8.30806@gmail.com> jrpirone wrote: > Hello, > > I've installed scipy and numpy from Andrew Straw's repository, all other > libraries are from the standard Ubuntu repositories (6.10). When I > execute scipy.test(), the following warnings appear: > > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses flapack instead of clapack. > **************************************************************** > > **************************************************************** > WARNING: cblas module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses fblas instead of cblas. > **************************************************************** > > Running the command: > ldd /usr/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so > > Gives me: > linux-gate.so.1 => (0xffffe000) > liblapack.so.3 => /usr/lib/atlas/sse2/liblapack.so.3 > (0xb7958000) > libblas.so.3 => /usr/lib/atlas/sse2/libblas.so.3 (0xb7387000) > libg2c.so.0 => /usr/lib/libg2c.so.0 (0xb735f000) > libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7339000) > libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb732e000) > libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb71fa000) > /lib/ld-linux.so.2 (0x80000000) > > Is there a problem with the libraries I have installed? Should I be > worried about these warnings, or am I just being compulsive? They're not *necessary* but they are nice to have. 
It demonstrates that scipy was not compiled with ATLAS. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Dec 1 20:24:20 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 01 Dec 2006 19:24:20 -0600 Subject: [SciPy-user] Scipy Superpack (was Re: pajer@iname.com has sent you a file.) In-Reply-To: <45705EE0.3080500@umit.maine.edu> References: <45705EE0.3080500@umit.maine.edu> Message-ID: <4570D5C4.2060408@gmail.com> R. Padraic Springuel wrote: > So, I downloaded and installed those files, but am still getting the > same error. > > Also, I installed the Superpack on my Mac system and can't seem to > import either scipy or matplotlib. I get the following errors: > > Traceback (most recent call last): > File "", line 1, in ? > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/pylab.py", > line 1, in ? > from matplotlib.pylab import * > ImportError: No module named matplotlib.pylab That would indeed be a problem. Chris might not be reading this thread, so hopefully my changing the subject line will grab his attention. > Traceback (most recent call last): > File "", line 1, in ? > NameError: name 'python' is not defined > >>> from scipy import * > Traceback (most recent call last): > File "", line 1, in ? > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/__init__.py", > line 8, in ? > from basic import * > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/basic.py", > line 17, in ? > from lapack import get_lapack_funcs > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/lapack.py", > line 17, in ? 
> from scipy.linalg import flapack > ImportError: Failure linking new module: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/flapack.so: > Library not loaded: /usr/local/lib/libgfortran.1.dylib > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/flapack.so > Reason: image not found > > What does all that mean? I thought the Superpack installs were supposed > to work right out of the box? No, as the website says: """To avoid compatibility issues, you should install the gFortran compiler and the GCC 4.0 compiler tools (both also included) before installing these packages.""" Install the gfortran compiler given in the tarball. It contains the runtime libraries that all of the scipy fortran wrappers compiled against. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Dec 1 20:38:22 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 01 Dec 2006 19:38:22 -0600 Subject: [SciPy-user] Scipy Superpack (was Re: pajer@iname.com has sent you a file.) In-Reply-To: <4570D5C4.2060408@gmail.com> References: <45705EE0.3080500@umit.maine.edu> <4570D5C4.2060408@gmail.com> Message-ID: <4570D90E.5060307@gmail.com> Robert Kern wrote: > R. Padraic Springuel wrote: >> So, I downloaded and installed those files, but am still getting the >> same error. >> >> Also, I installed the Superpack on my Mac system and can't seem to >> import either scipy or matplotlib. I get the following errors: >> >> Traceback (most recent call last): >> File "", line 1, in ? >> File >> "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/pylab.py", >> line 1, in ? >> from matplotlib.pylab import * >> ImportError: No module named matplotlib.pylab > > That would indeed be a problem. 
Chris might not be reading this thread, so > hopefully my changing the subject line will grab his attention. I've found what I think is the problem, matplotlib/__init__.py does not exist, so something went wrong with the bdist_mpkg process. You can try grabbing it from the SVN repository and putting it in .../site-packages/matplotlib/ https://svn.sourceforge.net/svnroot/matplotlib/trunk/matplotlib/lib/matplotlib/__init__.py Of course, there may be other pieces missing from the package. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lou_boog2000 at yahoo.com Sat Dec 2 08:21:29 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sat, 2 Dec 2006 05:21:29 -0800 (PST) Subject: [SciPy-user] Scipy Superpack (was Re: pajer@iname.com has sent you a file.) In-Reply-To: <4570D5C4.2060408@gmail.com> Message-ID: <628689.17446.qm@web34409.mail.mud.yahoo.com> Robert, I had this problem with the SciPy superpack, too. I had to install the fortran compiler using the terminal command on the tarball: sudo tar -xvf g77v3.4-bin.tar -C / followed by my password. I also found that since I used a non-Apple Python I had to install IPython from the terminal, too. I would like to emphasize that as a user I really like the idea of these superpacks. I love the stuff coming out in python now and I am using it more and more, but it is maddening to keep the dependencies straight -- which versions of the packages work together. I fear upgrading any package including python since it can start a long spiral of searching and testing to get everything working together again. Superpacks can go a long way in providing a useful scientific package all in one. Just some tweaking and they're there. I hope they are maintained for the sake of us users. Thanks to all who do that. -- Lou Pecora My views are my own. 
____________________________________________________________________________________ Yahoo! Music Unlimited Access over 1 million songs. http://music.yahoo.com/unlimited From arserlom at gmail.com Sat Dec 2 12:18:08 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Sat, 2 Dec 2006 18:18:08 +0100 Subject: [SciPy-user] geostatistics In-Reply-To: <4570D45D.5030708@gmail.com> References: <4570D45D.5030708@gmail.com> Message-ID: I have never done any wrapping of fortran code in python, but if anyone has the time, knowledge and interest to do it, I think the GSLIB code is available for download. It certainly would be interesting to have gslib's capabilities in scipy. On 12/2/06, Robert Kern wrote: > > Joshua Petterson wrote: > > Hi all, > > I can't find any package for geostatistics in scipy (kriging, > > variograms, ...), do I have to use R or is there another place for > > that? > > Sadly, no, we haven't had any kriging algorithms contributed to scipy. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Sat Dec 2 13:55:08 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 2 Dec 2006 12:55:08 -0600 Subject: [SciPy-user] problem with lsim In-Reply-To: <457083D3.9040600@ieee.org> References: <457083D3.9040600@ieee.org> Message-ID: Sweet. This is working for me now. I am running numpy/scipy built from source under windows and all is well. It is almost like running Linux. 
On 12/1/06, Travis Oliphant wrote: > Ryan Krauss wrote: > > I was hoping that updating my versions would fix this problem, but > > after wrestling with building from source under windows (and mostly > > getting it working), I still get: > > > > This was a problem with NumPy. It is now fixed in SVN and should be in > the 1.0.1 release of NumPy. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From anand at soe.ucsc.edu Sat Dec 2 15:53:32 2006 From: anand at soe.ucsc.edu (Anand Patil) Date: Sat, 02 Dec 2006 12:53:32 -0800 Subject: [SciPy-user] Fastest python extension for tiny functions? Message-ID: <4571E7CC.5090007@cse.ucsc.edu> Hi all, I have some very light functions that get called enormous numbers of times. I'd like to get them running as fast as possible, even if it makes them ugly. The functions do several integer comparisons but no floating-point computation, so I'd like to port them to C/C++ rather than Fortran. While weave.inline has been rocking my world for most applications, on my computer the gateway alone seems to take about 1s per 100k calls, which is quite a bit of overhead for functions this small. Could anyone help me figure out which python-to-C method (swig, boost::python, etc) is fastest for tiny functions? I know ahead of time what types all the arguments will be. 
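The per-call gateway cost Anand describes above is easy to measure directly before committing to a wrapping tool. A quick timeit sketch -- the comparison function here is hypothetical, standing in for his integer-comparison routines -- shows how much of the time is plain Python call overhead:

```python
import timeit

def tiny(a, b, c):
    # a few integer comparisons, no floating point -- like the functions in question
    return a < b < c or a == c

n = 100000
t = timeit.timeit(lambda: tiny(1, 2, 3), number=n)
print("%.3f s per 100k calls" % t)
```

Measuring the pure-Python baseline first makes it clear how much headroom any C gateway (ctypes, pyrex, weave, hand-written) actually has to offer.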
Thanks very much as always, Anand From kammeyer at enthought.com Sat Dec 2 17:29:21 2006 From: kammeyer at enthought.com (Dave Kammeyer) Date: Sat, 02 Dec 2006 16:29:21 -0600 Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <461232.60327.qm@web34413.mail.mud.yahoo.com> References: <461232.60327.qm@web34413.mail.mud.yahoo.com> Message-ID: <4571FE41.4070105@enthought.com> If they're small routines, you might take a look at Weave, which allows you to inline C++ code into Python code with easy access to arrays pretty easily. http://scipy.org/Weave -Dave Lou Pecora wrote: > I have some number crunching routines I want to code > in C or C++ and then use SWIG to create a module I can > import. I can get the simple example.i stuff in the > SWIG package to work, but I need to pass NumPy arrays > back and forth. I have looked up typemaps and googled > examples, but whatever I find is pretty general and > high level for me and my needs -- at least until I get > more experience. > > Does anyone know of simple examples of passing NumPy > arrays and using SWIG? I really don't want to branch > out to other objects, yet. No C++ templates or fancy > stuff. Just NumPy arrays to C arrays (or something > reasonable in C) will get me a long way in my coding > for now. > > Thanks for any suggestions or pointers. > > -- Lou Pecora > US Naval Research Lab > > > > ____________________________________________________________________________________ > Do you Yahoo!? > Everyone is raving about the all-new Yahoo! Mail beta. > http://new.mail.yahoo.com > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From oliphant.travis at ieee.org Sun Dec 3 01:09:44 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 02 Dec 2006 23:09:44 -0700 Subject: [SciPy-user] Fastest python extension for tiny functions? 
In-Reply-To: <4571E7CC.5090007@cse.ucsc.edu> References: <4571E7CC.5090007@cse.ucsc.edu> Message-ID: <45726A28.8060406@ieee.org> > While weave.inline has been rocking my world for most applications, on > my computer the gateway alone seems to take about 1s per 100k calls, > which is quite a bit of overhead for functions this small. Could anyone > help me figure out which python-to-C method (swig, boost::python, etc) > is fastest for tiny functions? I know ahead of time what types all the > arguments will be. > The fastest interface is going to be hand-written. Other than that, my experience shows that ctypes and pyrex (and weave) are comparable to each other (and not much slower than hand-written at that). All of my experience, however, is on functions that do a reasonably large amount of work. -Travis From nauss at lcrs.de Sun Dec 3 09:51:25 2006 From: nauss at lcrs.de (Thomas Nauss) Date: Sun, 03 Dec 2006 15:51:25 +0100 Subject: [SciPy-user] Writing a Numpy array to an ascii file with column headings Message-ID: <4572E46D.2050702@lcrs.de> Hi there, I am really a newbie to Python but I hope this is the right place to get an answer for my question: I want to write data from an n-D array to an ASCII file but with column headings. I have no problem storing array data in an ASCII file using e.g.: from numpy import * from scipy.io import write_array data = column_stack((array_1D_1,array_1D_2)) write_array("test.txt", data, separator=' ', linesep='\n', precision=7) As you all know better than myself, this results in something like: 1.0000000e+000 2.2720000e-001 9.9999000e-001 2.2910000e-001 9.9998000e-001 2.2970000e-001 But for my purpose, the ASCII file should look like: SSA Reflection 1.0000000e+000 2.2720000e-001 9.9999000e-001 2.2910000e-001 9.9998000e-001 2.2970000e-001 with e.g. "SSA" and "Reflection" as column headings. 
Thank you in advance for your comments, Thomas From ckkart at hoc.net Sun Dec 3 10:09:15 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Mon, 04 Dec 2006 00:09:15 +0900 Subject: [SciPy-user] Writing a Numpy array to an ascii file with column headings In-Reply-To: <4572E46D.2050702@lcrs.de> References: <4572E46D.2050702@lcrs.de> Message-ID: <4572E89B.40107@hoc.net> Thomas Nauss wrote: > Hi there, > I am a really newby to Python but I hope this is the right place to get > an answer for my question: I want to write data from an n-D array to an > ASCII file but with column headings. > > I have no problem to store array data in an ASCII file using e. g.: > from numpy import * > from scipy.io import write_array > data = column_stack((array_1D_1,array_1D_2)) > write_array("test.txt", data, separator=' ', linesep='\n', precision=7) Instead of the filename you may put an open file object. So you can open a file for writing, write a header and then let io.write_array write the array data. Christian From lou_boog2000 at yahoo.com Sun Dec 3 12:19:42 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sun, 3 Dec 2006 09:19:42 -0800 (PST) Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <4571FE41.4070105@enthought.com> Message-ID: <987737.90335.qm@web34412.mail.mud.yahoo.com> I will be using both small and large code modules, but I am focused mostly on large C and C++ code routines for big parts of number-crunching routines. Thanks for the suggestion. --- Dave Kammeyer wrote: > If they're small routines, you might take a look at > Weave, which allows > you to inline C++ code into Python code with easy > access to arrays > pretty easily. http://scipy.org/Weave -- Lou Pecora My views are my own. ____________________________________________________________________________________ Any questions? Get answers on any topic at www.Answers.yahoo.com. Try it now. 
From joshuafr at gmail.com Sun Dec 3 17:22:01 2006 From: joshuafr at gmail.com (Joshua Petterson) Date: Sun, 3 Dec 2006 23:22:01 +0100 Subject: [SciPy-user] recarray Message-ID: Hi all, I have a little "stupid" question, but what is the purpose of recarray? I have seen that numpy.array supports names, so why use recarray? thanks From gael.varoquaux at normalesup.org Sun Dec 3 17:42:40 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 3 Dec 2006 23:42:40 +0100 Subject: [SciPy-user] Editor of the wiki Message-ID: <20061203224240.GK20167@clipper.ens.fr> Hi, Can an editor please add me to the editor page, I would like to rename a page. Cheers, Gaël From robert.kern at gmail.com Sun Dec 3 19:06:40 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 03 Dec 2006 18:06:40 -0600 Subject: [SciPy-user] recarray In-Reply-To: References: Message-ID: <45736690.2060008@gmail.com> Joshua Petterson wrote: > Hi all, > I have a little "stupid" question, but what is the purpose of > recarray? I have seen that numpy.array supports names, so why use > recarray? It adds attribute access. E.g. records['name'] records.name Note that attribute access goes through a Python method and adds some overhead. If you happen to be iterating through a large array of records and using attribute access on each of the individual record objects, that overhead can be quite substantial. Unfortunately, it appears that record arrays go through the Python __getattribute__ method even when ['name'] access is used (although I haven't satisfactorily explained to myself why this is the case). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
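The two access styles Robert describes can be seen side by side in a small sketch (hypothetical field names, with a recarray as a view onto the same structured data):

```python
import numpy as np

records = np.array([("alice", 30), ("bob", 25)],
                   dtype=[("name", "U10"), ("age", "i4")])
rec = records.view(np.recarray)  # recarray view sharing the same memory

assert list(records["name"]) == ["alice", "bob"]  # works on any structured array
assert rec.age[1] == 25                           # attribute access: recarray only
```

Item access with ['name'] works on both; only the recarray view adds the records.name spelling, at the cost of the __getattribute__ overhead noted above.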
-- Umberto Eco From robert.kern at gmail.com Sun Dec 3 19:08:52 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 03 Dec 2006 18:08:52 -0600 Subject: [SciPy-user] Editor of the wiki In-Reply-To: <20061203224240.GK20167@clipper.ens.fr> References: <20061203224240.GK20167@clipper.ens.fr> Message-ID: <45736714.9080106@gmail.com> Gael Varoquaux wrote: > Hi, > > Can an editor add me please to the editor page, I would like to rename a > page. Done. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Sun Dec 3 22:58:56 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 04 Dec 2006 12:58:56 +0900 Subject: [SciPy-user] Fastest python extension for tiny functions? In-Reply-To: <4571E7CC.5090007@cse.ucsc.edu> References: <4571E7CC.5090007@cse.ucsc.edu> Message-ID: <45739D00.70607@ar.media.kyoto-u.ac.jp> Anand Patil wrote: > Hi all, > > I have some very light functions that get called enormous numbers of > times. I'd like to get them running as fast as possible, even if it > makes them ugly. The functions do several integer comparisons but no > floating-point computation, so I'd like to port them to C/C++ rather > than Fortran. > > While weave.inline has been rocking my world for most applications, on > my computer the gateway alone seems to take about 1s per 100k calls, > which is quite a bit of overhead for functions this small. Could anyone > help me figure out which python-to-C method (swig, boost::python, etc) > is fastest for tiny functions? I know ahead of time what types all the > arguments will be. I have the same problem for some recursive problems which are not easy to "vectorize" (recursive stochastic approximation). I don't know your case, but I think mine is typical. 
I have a function which is compute-intensive, but works on numpy arrays instead of looping in Python. The function checks its arguments, the implementation is left to another function, etc... All the checking + function call is really expensive once you send only one sample instead of a whole array (the function itself, in C, would take a few 100/1000 cycles). My approach so far is: 1. First, implement the algorithm as straightforwardly as possible, and keep it as a reference :) 2. Then profile it carefully to be sure that you have a problem where you think you have a problem (under Linux, I found hotshot + kcachegrind really useful for this, thanks to the graphical call tree with the cost at each node). 3. Slightly change the API: basically, you have to trade flexibility for speed. You have to give up flexibility, i.e. your function won't be reusable in other cases, but it is tiny, so it is OK. I check the arguments in the "root function", and loop using a special-purpose, ugly function without argument checking, in Python first, and once I checked it worked, in pure C, without argument checking. For 3, and for the scale you are talking about (100k functions / s; I don't know your computer's CPU power, but on my fairly recent computer, this scale definitely shows the limits of the current Python implementation, where function calls are expensive), ctypes is too expensive in my experience, when passing at least one numpy array: the call from_param used for any argument passing is itself a Python function call, which is expensive. Also, you should avoid any superfluous other function call (in Python); in my case, with my special-purpose function, by removing all the intermediate layers, I got a speedup of 5x. For example, just by removing calls to functions such as atleast_2d, shape, etc... I already got a 2x speedup!
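The per-call overhead described above is easy to measure. The sketch below is not from the thread; `checked_norm` is a made-up stand-in for a small function that validates its arguments, timed once per sample and then once over the whole array:

```python
import timeit
import numpy as np

def checked_norm(x):
    # A tiny "checked" function: the argument checking dominates its cost.
    x = np.atleast_2d(x)
    return (x * x).sum()

samples = np.random.rand(10000, 2)

# Calling the function once per sample pays the Python call and
# argument-checking overhead 10000 times...
per_sample = timeit.timeit(
    lambda: [checked_norm(s) for s in samples], number=1)

# ...while one call over the whole array pays it once.
whole_array = timeit.timeit(
    lambda: (samples * samples).sum(), number=1)

print(per_sample, whole_array)
```

On a typical machine the per-sample loop is slower by a large factor, which is exactly the gap that trading away flexibility is meant to close.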
When I am done, as the code is part of the pyem toolbox in scipy.sandbox, you may take a look if you want (the change will be documented), cheers, David From elcorto at gmx.net Mon Dec 4 03:54:43 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Mon, 04 Dec 2006 09:54:43 +0100 Subject: [SciPy-user] odeint error message In-Reply-To: <20061203224240.GK20167@clipper.ens.fr> References: <20061203224240.GK20167@clipper.ens.fr> Message-ID: <4573E253.6010406@gmx.net> Hi I accidentally passed an 1-element array to odeint and got #--------------------------------------# # y''(t) + y(t) == 0 def dYdt(Y, t0): y0 = Y[0] y1 = Y[1] return array([y1, -y0]) ##t = arange(0, 2*pi, .1) t = array([1.0]) sol = odeint(dYdt, array([3.0,0.0]), t, full_output = 1, rtol = 1e-7, atol = 1e-7) Traceback (most recent call last): File "odeint_example.py", line 43, in ? sol = odeint(dYdt, array([3.0,0.0]), t, full_output = 1, rtol = 1e-7, atol = 1e-7) File "/usr/local//lib/python2.4/site-packages/scipy/integrate/odepack.py", line 132, in odeint output[1]['message'] = _msgs[output[-1]] KeyError: 1 #--------------------------------------# I found my error rather quick but just want to note that the err message wasn't really helpful. Maybe odeint could check the shape of its input. -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From hollowspook at gmail.com Mon Dec 4 08:52:30 2006 From: hollowspook at gmail.com (Zhang Sam) Date: Mon, 4 Dec 2006 21:52:30 +0800 Subject: [SciPy-user] about the addition of several sparse matrix Message-ID: Hi, there I have several sparse matrices in coo_matrix format, say Y1, Y2, Y3, Y4. I want to get the sum of them and I use the operation: Y = Y1 + Y2 + Y3 + Y4. 
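As background on the coo_matrix behaviour involved here: COO storage is allowed to hold several entries at the same (row, col) position, and converting to CSR (or CSC) consolidates them by summing. A minimal sketch, assuming a recent scipy.sparse:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Two entries at the same (row, col) position: COO keeps both.
data = np.array([1.0, 2.0, 5.0])
rows = np.array([0, 0, 1])
cols = np.array([0, 0, 1])
Y = coo_matrix((data, (rows, cols)), shape=(2, 2))
print(Y.nnz)       # 3 stored entries, with (0, 0) duplicated

# Converting to CSR sums the duplicates into a single entry.
Ycsr = Y.tocsr()
print(Ycsr.nnz)    # 2 stored entries
print(Ycsr[0, 0])  # 3.0
```

So converting the sum with Y.tocsr() (or Y.tocsc()) yields a matrix with a single stored value per nonzero position.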
When I print Y, it returns: >>> (0, 0) -17.3611111111j (3, 0) 17.3611111111j (1, 1) -16j (7, 1) 16j (2, 2) -17.0648464164j (5, 2) 17.0648464164j (0, 3) 17.3611111111j (3, 3) (1.94219124871-27.792793163j) (3, 3) (1.36518771331-11.5160955631j) (4, 3) (-1.94219124871+10.5106820519j) (8, 3) (-1.36518771331+11.6040955631j) (3, 4) (-1.94219124871+10.5106820519j) (4, 4) (3.22420038714-15.8409270142j) (5, 4) (-1.28200913842+5.58824496236j) (2, 5) 17.0648464164j (4, 5) (-1.28200913842+5.58824496236j) (5, 5) (1.15508748089-26.7446168427j) (5, 5) (1.28200913842-5.40924496236j) (6, 5) (-1.15508748089+9.78427042636j) (5, 6) (-1.15508748089+9.78427042636j) (6, 6) (2.77220995414-23.3032490233j) (7, 6) (-1.61712247325+13.6979785969j) (1, 7) 16j (6, 7) (-1.61712247325+13.6979785969j) (7, 7) (1.61712247325-29.6234785969j) (7, 7) (1.18760437929-5.82213453331j) (8, 7) (-1.18760437929+5.97513453331j) (3, 8) (-1.36518771331+11.6040955631j) (7, 8) (-1.18760437929+5.97513453331j) (8, 8) (2.5527920926-17.3382300964j) It is strange that there are two (3,3) entries, two (5,5) entries and two (7,7) entries. I have checked this result. It is correct if we add the two value together. What operation can let it return a sparse matrix which has only one value at every nonzero entry? Thanks! Sam Dec 4th, 2006 -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.wiechert at gmx.de Mon Dec 4 10:35:09 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Mon, 4 Dec 2006 16:35:09 +0100 Subject: [SciPy-user] about the addition of several sparse matrix In-Reply-To: References: Message-ID: <200612041635.10449.martin.wiechert@gmx.de> Hi Sam, FYI. I've experienced the same issue with csc matrices, cf. http://projects.scipy.org/scipy/scipy/ticket/316 Cheers, Martin On Monday 04 December 2006 14:52, Zhang Sam wrote: > Hi, there > > I have several sparse matrices in coo_matrix format, say Y1, Y2, Y3, Y4. 
I > want to get the sum of them and I use the operation: Y = Y1 + Y2 + Y3 + > Y4. > > When I print Y, it returns: > >>> (0, 0) -17.3611111111j > > (3, 0) 17.3611111111j > (1, 1) -16j > (7, 1) 16j > (2, 2) -17.0648464164j > (5, 2) 17.0648464164j > (0, 3) 17.3611111111j > (3, 3) (1.94219124871-27.792793163j) > (3, 3) (1.36518771331-11.5160955631j) > (4, 3) (-1.94219124871+10.5106820519j) > (8, 3) (-1.36518771331+11.6040955631j) > (3, 4) (-1.94219124871+10.5106820519j) > (4, 4) (3.22420038714-15.8409270142j) > (5, 4) (-1.28200913842+5.58824496236j) > (2, 5) 17.0648464164j > (4, 5) (-1.28200913842+5.58824496236j) > (5, 5) (1.15508748089-26.7446168427j) > (5, 5) (1.28200913842-5.40924496236j) > (6, 5) (-1.15508748089+9.78427042636j) > (5, 6) (-1.15508748089+9.78427042636j) > (6, 6) (2.77220995414-23.3032490233j) > (7, 6) (-1.61712247325+13.6979785969j) > (1, 7) 16j > (6, 7) (-1.61712247325+13.6979785969j) > (7, 7) (1.61712247325-29.6234785969j) > (7, 7) (1.18760437929-5.82213453331j) > (8, 7) (-1.18760437929+5.97513453331j) > (3, 8) (-1.36518771331+11.6040955631j) > (7, 8) (-1.18760437929+5.97513453331j) > (8, 8) (2.5527920926-17.3382300964j) > > It is strange that there are two (3,3) entries, two (5,5) entries and two > (7,7) entries. I have checked this result. It is correct if we add the > two value together. What operation can let it return a sparse matrix > which has only one value at every nonzero entry? > > Thanks! > > > Sam > Dec 4th, 2006 From afraser at lanl.gov Mon Dec 4 11:40:36 2006 From: afraser at lanl.gov (afraser) Date: Mon, 04 Dec 2006 09:40:36 -0700 Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <987737.90335.qm@web34412.mail.mud.yahoo.com> (Lou Pecora's message of "Sun, 3 Dec 2006 09:19:42 -0800 (PST)") References: <987737.90335.qm@web34412.mail.mud.yahoo.com> Message-ID: <87u00br4vv.fsf@hmm.lanl.gov> Lou, I've passed NumPy arrays to C with SWIG. 
My code is not pretty, but if you don't find what you need, I will send you what I've got. -- Andy Fraser ISR-2 (MS:B244) afraser at lanl.gov Los Alamos National Laboratory 505 665 9448 Los Alamos, NM 87545 From fredmfp at gmail.com Mon Dec 4 12:05:45 2006 From: fredmfp at gmail.com (fred) Date: Mon, 04 Dec 2006 18:05:45 +0100 Subject: [SciPy-user] Editor of the wiki In-Reply-To: <45736714.9080106@gmail.com> References: <20061203224240.GK20167@clipper.ens.fr> <45736714.9080106@gmail.com> Message-ID: <45745569.7080309@gmail.com> Robert Kern wrote: > Gael Varoquaux wrote: > >> Hi, >> >> Can an editor add me please to the editor page, I would like to rename a >> page. >> > > Done. > Hi, I asked the same a few month ago, without any answer. The reason is, may be I forgot to tell, that I want to update/delete numerous figures I have uploaded to the MV2's wiki pages I wrote. Thanks in advance. -- http://scipy.org/FredericPetit From lou_boog2000 at yahoo.com Mon Dec 4 12:24:08 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Mon, 4 Dec 2006 09:24:08 -0800 (PST) Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <87u00br4vv.fsf@hmm.lanl.gov> Message-ID: <970477.41065.qm@web34411.mail.mud.yahoo.com> Hi, Andy, I hope all is well. Thanks for the offer. I will take you up on it. I don't need anything fancy, just passing some numpy arrays back and forth so I can work on them in C or C++. My problem is, I've gotten the simple SWIG examples to work, but when I jump to NumPy arrays, I "hit the wall." Things get complex fast and most of the stuff I've found out there is more sophisticated than I need and hard to understand. I've spent a few full days on this and haven't made much headway. Let me know how I can get your code. Thanks. -- Lou --- afraser wrote: > Lou, > > I've passed NumPy arrays to C with SWIG. My code is > not pretty, but > if you don't find what you need, I will send you > what I've got. 
> -- > Andy Fraser ISR-2 (MS:B244) > afraser at lanl.gov Los Alamos National Laboratory > 505 665 9448 Los Alamos, NM 87545 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________ Have a burning question? Go to www.Answers.yahoo.com and get answers from real people who know. From ellisonbg.net at gmail.com Mon Dec 4 15:49:07 2006 From: ellisonbg.net at gmail.com (Brian Granger) Date: Mon, 4 Dec 2006 13:49:07 -0700 Subject: [SciPy-user] General parallel processing question In-Reply-To: <455E0312.5080308@astraw.com> References: <455D44E6.3030506@cse.ucsc.edu> <6ce0ac130611162149q3c53fdedm1a9e3b8b6bcadb0f@mail.gmail.com> <455E0312.5080308@astraw.com> Message-ID: <6ce0ac130612041249y43a738cfx85f5cd7dc8a6d82@mail.gmail.com> > I just want to clear up any misconception about Pyro - there are several > ways to avoid the issue Brian mentions. The easiest are "one way" calls. > Another approach is to use multi-threading. Anyhow, I find Pyro to be a > quick and easy way to get started on IPC. There is no doubt that Pyro is a quick and easy approach. I like it very much for that. The workarounds you mentions definitely work. I wasn't familiar with the "one way" calls but after reading about them, they seems like a reasonable thing that doesn't require threads. > [Note: Pyro's approach is apparently inflammatory to some of the Twisted > developers, I think because they're (rightfully) concerned about > security. However, if you're running on a single, (possibly multi-core) > machine or a trusted LAN, the security issue is a non-issue.] Yes, they strongly object to the usage of pickle based RPCs. I understand their security concerns, but I think there *are* times (such as the usage case you mention) where those concerns are not a show stopper for pickle based RPCs. 
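Pyro's actual one-way call API is not shown in the thread, but the blocking-versus-one-way distinction being discussed can be sketched with the standard library alone (concurrent.futures here is an analogy for the pattern, not Pyro's interface):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_remote_call(x):
    # Stand-in for a remote method that takes a while to answer.
    time.sleep(0.1)
    return x * 2

with ThreadPoolExecutor(max_workers=1) as pool:
    # Synchronous style: the caller blocks for the full call.
    blocking = slow_remote_call(21)

    # "One way" style: hand the call off and keep working; the caller
    # only blocks if and when it asks for the result.
    future = pool.submit(slow_remote_call, 21)
    other_work = sum(range(1000))  # proceeds while the call runs
    deferred = future.result()

print(blocking, deferred, other_work)
```

The fire-and-forget variant simply never calls result(), which is the behaviour one-way calls give a synchronous RPC layer.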
We definitely use pickle to send around arbitrary objects in ipython and this probably won't change. Even though I like Pyro in many ways, the main objection I have it that there is no clean abstraction of asynchronous events. At the root level Pyro is synchronous and threading/one way calls only make things look asynchronous. For more complicated distributed systems that are really asynchronous in nature, I think Pyro's simplicity is lost because of this. At that point, it is easier just to bite the bullet and use the clean abstractions that Twisted provides. Brian > -Andrew > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From palazzol at comcast.net Mon Dec 4 16:15:17 2006 From: palazzol at comcast.net (palazzol at comcast.net) Date: Mon, 04 Dec 2006 21:15:17 +0000 Subject: [SciPy-user] optimize.leastsq() question Message-ID: <120420062115.12218.45748FE5000AD4FD00002FBA2200735834040196960E040E9F@comcast.net> Hi, First, I want to thank you all for making SciPy so great. I am prototyping an application using VPython and SciPy together, and I can't believe how well everything is working after <2 weeks of coding. Now, on to my question. I've boiled it down to a small test program which shows the situation, and attached it here. I am passing an array into my minimization function, intended to hold the return values. Apparently, this causes some kind of issue, when the array is updated elementwise. If you run it as-is, you will see the iteration stop after a few tries, with an error. If you uncomment either of the #err=xxx lines, it works fine. I am running under WinXP, Python2.4, a slightly older version of SciPy 0.4.9, with numpy-0.9.8. I'm hoping to upgrade soon, but haven't yet. Thanks, -Frank -------------- next part -------------- A non-text attachment was scrubbed... 
Name: leastsq_test.py Type: application/octet-stream Size: 674 bytes Desc: not available URL: From fredmfp at gmail.com Mon Dec 4 18:35:55 2006 From: fredmfp at gmail.com (fred) Date: Tue, 05 Dec 2006 00:35:55 +0100 Subject: [SciPy-user] blas/lapack issue on a freebsd box... Message-ID: <4574B0DB.7060000@gmail.com> Hi all, I'm trying to build scipy and all its stuff on my freebsd box (r6.2). (binaries pkg are not yet provided on freebsd ftp servers). I have successfully built scipy 0.5.1, but in fact, it fails when I want to import some module: IPython 0.7.2 -- An enhanced Interactive Python. ? -> Introduction to IPython's features. %magic -> Information about IPython's 'magic' % functions. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: from scipy.special import jv --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /usr/home/fred/ /usr/local/lib/python2.4/site-packages/scipy/special/__init__.py 8 from basic import * 9 import specfun ---> 10 import orthogonal 11 from orthogonal import legendre, chebyt, chebyu, chebyc, chebys, \ 12 jacobi, laguerre, genlaguerre, hermite, hermitenorm, gegenbauer, \ /usr/local/lib/python2.4/site-packages/scipy/special/orthogonal.py 65 import _cephes as cephes 66 _gam = cephes.gamma ---> 67 from scipy.linalg import eig 68 69 def poch(z,m): /usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py 6 from linalg_version import linalg_version as __version__ 7 ----> 8 from basic import * 9 from decomp import * 10 from matfuncs import * /usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py 15 #from blas import get_blas_funcs 16 from flinalg import get_flinalg_funcs ---> 17 from lapack import get_lapack_funcs 18 from numpy import asarray,zeros,sum,newaxis,greater_equal,subtract,arange,\ 19 conjugate,ravel,r_,mgrid,take,ones,dot,transpose,sqrt,add,real 
/usr/local/lib/python2.4/site-packages/scipy/linalg/lapack.py 15 import numpy 16 ---> 17 from scipy.linalg import flapack 18 from scipy.linalg import clapack 19 _use_force_clapack = 1 ImportError: /usr/local/lib/libalapack_r.so.1: Undefined symbol "cblas_dswap" As google gives no result about this issue, any suggestion are welcome. Cheers, PS : python 2.4.3, numpy 1.0r1, scipy 0.5.1, atlas 3.6.0.1, blas 1.0 -- http://scipy.org/FredericPetit From ckkart at hoc.net Mon Dec 4 20:29:46 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 05 Dec 2006 10:29:46 +0900 Subject: [SciPy-user] optimize.leastsq() question In-Reply-To: <120420062115.12218.45748FE5000AD4FD00002FBA2200735834040196960E040E9F@comcast.net> References: <120420062115.12218.45748FE5000AD4FD00002FBA2200735834040196960E040E9F@comcast.net> Message-ID: <4574CB8A.9000602@hoc.net> palazzol at comcast.net wrote: > Hi, > > First, I want to thank you all for making SciPy so great. I am prototyping an application using VPython and SciPy together, and I can't believe how well everything is working after <2 weeks of coding. > > Now, on to my question. I've boiled it down to a small test program which shows the situation, and attached it here. I am passing an array into my minimization function, intended to hold the return values. Apparently, this causes some kind of issue, when the array is updated elementwise. If you run it as-is, you will see the iteration stop after a few tries, with an error. If you uncomment either of the #err=xxx lines, it works fine. I do not see what you intend using the err variable, nor do I understand why it should not work like this. This is really weird. Anyway, I send attached a modified version. Christian -------------- next part -------------- A non-text attachment was scrubbed... 
Name: leastsq_test2.py Type: text/x-python Size: 990 bytes Desc: not available URL: From palazzol at comcast.net Mon Dec 4 22:18:44 2006 From: palazzol at comcast.net (Frank Palazzolo) Date: Mon, 04 Dec 2006 22:18:44 -0500 Subject: [SciPy-user] optimize.leastsq() question In-Reply-To: <4574CB8A.9000602@hoc.net> References: <120420062115.12218.45748FE5000AD4FD00002FBA2200735834040196960E040E9F@comcast.net> <4574CB8A.9000602@hoc.net> Message-ID: <4574E514.1010209@comcast.net> Hi, Thanks for looking. The only reason I passed in the err variable was because in my actual program - which is much more complicated - it was more convenient to create and allocate err once, outside of the callback function. I know this is probably not significant for a Python program, but I did it that was because this is a prototype for a C++ implementation. Thanks for letting me know about numpy.random. I am trying to get all of the non-numpy stuff out of my app :) It is wierd, because it doesn't seem like a "Python" error, the solver just stops iterating with that message about the Jacobian. And the values it calculates up to that point are the same as the configuration that succeeds. I'm curious enough that I'd love to set up a debug environment so that I could trace through the Python/C interface, but I don't have time at the moment. Thanks again, -Frank Christian Kristukat wrote: > palazzol at comcast.net wrote: > >>Hi, >> >>First, I want to thank you all for making SciPy so great. I am prototyping an application using VPython and SciPy together, and I can't believe how well everything is working after <2 weeks of coding. >> >>Now, on to my question. I've boiled it down to a small test program which shows the situation, and attached it here. I am passing an array into my minimization function, intended to hold the return values. Apparently, this causes some kind of issue, when the array is updated elementwise. 
If you run it as-is, you will see the iteration stop after a few tries, with an error. If you uncomment either of the #err=xxx lines, it works fine. > > > I do not see what you intend using the err variable, nor do I understand why it > should not work like this. This is really weird. > Anyway, I send attached a modified version. > > Christian > > > ------------------------------------------------------------------------ > > > from numpy import * > from scipy.optimize.minpack import * > > #from random import * > # this statement imports the python random module > # you rather want to use numpy.random > > > def f(x,y,z): > temp = z - (x[0] + x[1]*y + x[2]*y*y) > # to make err a copy of temp, just write err[:] = temp[:] > # or err = temp.copy() > # or even err = temp+0 > return temp > > y = arange(0,10) > # consider using linspace() instead of arange, it is more convenient > > #z = 1. + 5*y + 9*y*y + random() > # this returns a random scalar > z = 1. + 5*y + 9*y*y + random.rand(10)*100.0 > # this returns an array with 10 random numbers > > x0 = array([1.,1.,1.]) > #err = array([0.,0.,0.,0.,0.,0.,0.,0.,0.,0.]) > > #args = (y,z,err) > args = (y,z) > > # I don't see why you pass err and what it is used for nor do I understand > # why the minimization does not work anyway. 
> > sol = leastsq(f,x0,args) > print sol > > #import pylab > > #pylab.plot(y,z) > #pylab.plot(y,-f(sol[0],y,z)+z) > #pylab.show() > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fredmfp at gmail.com Tue Dec 5 02:50:09 2006 From: fredmfp at gmail.com (fred) Date: Tue, 05 Dec 2006 08:50:09 +0100 Subject: [SciPy-user] geostatistics In-Reply-To: References: <4570D45D.5030708@gmail.com> Message-ID: <457524B1.7030405@gmail.com> Armando Serrano Lombillo a ?crit : > I have never done any wrapping of fortran code in python, but if > anyone has the time, knowledge and interest to do it, i think the > GSLIB code is available for download. It certainly would be > interesting to have gslib's capabilities in scipy. I'm (very) interested in it too. I plan to work on it in the few next months. Cheers, -- http://scipy.org/FredericPetit From cimrman3 at ntc.zcu.cz Tue Dec 5 04:24:17 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 05 Dec 2006 10:24:17 +0100 Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <87u00br4vv.fsf@hmm.lanl.gov> References: <987737.90335.qm@web34412.mail.mud.yahoo.com> <87u00br4vv.fsf@hmm.lanl.gov> Message-ID: <45753AC1.5070306@ntc.zcu.cz> afraser wrote: > Lou, > > I've passed NumPy arrays to C with SWIG. My code is not pretty, but > if you don't find what you need, I will send you what I've got. Also look at http://www.scipy.org/Cookbook/SWIG_and_NumPy. r. From nmarais at sun.ac.za Tue Dec 5 04:36:13 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Tue, 05 Dec 2006 11:36:13 +0200 Subject: [SciPy-user] Using fftpack.diff Message-ID: Hi I'm trying to get the DFT of the derivative of a real valued periodic signal. 
If I have the time domain signal in ts (where one period is stored in ts), I do: fs = scipy.fftpack.fft(ts) dfs = scipy.fftpack.diff(fs) dts = scipy.fftpack.ifft(dfs) However dts seems to be almost purely imaginary whereas ts is purely real. I haven't done DFT related stuff for a while, am I misunderstanding completely? -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From lou_boog2000 at yahoo.com Tue Dec 5 10:38:01 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 5 Dec 2006 07:38:01 -0800 (PST) Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <45753AC1.5070306@ntc.zcu.cz> Message-ID: <859538.73209.qm@web34403.mail.mud.yahoo.com> Yes, thank you. I've looked at that, but like several other examples of typemaps for NumPy in SWIG, I confess I am having trouble understanding them. I have gotten a pure C extension working (sans SWIG) that takes a Numeric array and I may go that way and give up on SWIG. 95% of the time (or more) all I need to do is pass a float numpy array. I'm beginning to wonder if the SWIG typemaps and .i files are worth all the trouble. This remains a frustration with Python. I absolutely love the language and code as much of my numerical work in it as I can, but extending it remains a frustrating barrier. This despite spending several full days looking into Guido's tutorial on extensions, SWIG, and lots of googling. Maybe it's me. Sorry, had to rant a little. Thanks, again. -- Lou Pecora --- Robert Cimrman wrote: > afraser wrote: > > Lou, > > > > I've passed NumPy arrays to C with SWIG. My code > is not pretty, but > > if you don't find what you need, I will send you > what I've got. > > Also look at > http://www.scipy.org/Cookbook/SWIG_and_NumPy. > > r. ____________________________________________________________________________________ Any questions? Get answers on any topic at www.Answers.yahoo.com. Try it now. 
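Returning to the fftpack.diff question above: diff expects the time-domain samples of the periodic signal and performs the transform internally, so applying it to an already-computed FFT would explain the almost purely imaginary result. A sketch of the intended use, assuming scipy.fftpack is available:

```python
import numpy as np
from scipy.fftpack import diff

# One period of a real signal sampled on [0, 2*pi).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ts = np.sin(t)

# diff() takes the time-domain samples directly (it does the FFT
# internally) and returns the derivative of the periodic signal.
dts = diff(ts)

print(np.allclose(dts, np.cos(t)))  # derivative of sin is cos
```

Here the spectral derivative of sin(t) matches cos(t) to machine precision, and the result stays real.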
____________________________________________________________________________________ Do you Yahoo!? Everyone is raving about the all-new Yahoo! Mail beta. http://new.mail.yahoo.com From gael.varoquaux at normalesup.org Tue Dec 5 11:14:07 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 5 Dec 2006 17:14:07 +0100 Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <859538.73209.qm@web34403.mail.mud.yahoo.com> References: <45753AC1.5070306@ntc.zcu.cz> <859538.73209.qm@web34403.mail.mud.yahoo.com> Message-ID: <20061205161407.GA10568@clipper.ens.fr> On Tue, Dec 05, 2006 at 07:38:01AM -0800, Lou Pecora wrote: > This remains a frustration with Python. I absolutely > love the language and code as much of my numerical > work in it as I can, but extending it remains a > frustrating barrier. This despite spending several > full days looking into Guido's tutorial on extensions, > SWIG, and lots of googling. Lou, I am not a very advanced python user. I do not master writing C extentions. I have tried simple things, and had success with them because they were simple enough for me. In order of difficulty this is what I did: - scipy.weave (great, but won't probably suit your purposes, have a look at the wiki page) - pyrex (same reamrk than above, though the concept is fabulous) - passing array to C code with ctypes. I think this can suit you. It might not be as versatile or powerful as SWIG, but I found it was simple enough for my limited knowledge. I started out with simple hello word C functions, and was able to pass numpy arrays back and forth between numpy and C thanks to Albert Strasheim's wiki page http://scipy.org/Cookbook/Ctypes2 I cannot help you out with SWIG, as I have decided it was to complicated for me so far. 
Ga?l From lou_boog2000 at yahoo.com Tue Dec 5 11:22:39 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 5 Dec 2006 08:22:39 -0800 (PST) Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules In-Reply-To: <20061205161407.GA10568@clipper.ens.fr> Message-ID: <20061205162239.78536.qmail@web34411.mail.mud.yahoo.com> Thanks. I saw that site in a previous google. But it seems to be aimed at Windows. I am on a Mac. Rather than fight that battle now, I will try today to just generate my own simple C extension test routine. If I get that far, then that will probably be the way for me to go. I haven't checked out pyrex, though. I might look at that when I take a break. If I get the C extension working, I will post an announcement about it. It might help non-gurus like me get started. -- Lou Pecora Naval Research Lab, Washington, DC --- Gael Varoquaux wrote: > Lou, > > I am not a very advanced python user. I do not > master writing C > extentions. I have tried simple things, and had > success with them because > they were simple enough for me. In order of > difficulty this is what I > did: > > - scipy.weave (great, but won't probably suit your > purposes, have a look > at the wiki page) > > - pyrex (same reamrk than above, though the concept > is fabulous) > > - passing array to C code with ctypes. I think this > can suit you. It > might not be as versatile or powerful as SWIG, but > I found it was > simple enough for my limited knowledge. I started > out with simple hello > word C functions, and was able to pass numpy > arrays back and forth > between numpy and C thanks to Albert Strasheim's > wiki page > http://scipy.org/Cookbook/Ctypes2 > > I cannot help you out with SWIG, as I have decided > it was to complicated > for me so far. > > Ga?l ____________________________________________________________________________________ Want to start your own business? Learn how on Yahoo! Small Business. 
http://smallbusiness.yahoo.com/r-index From vallis.35530172 at bloglines.com Tue Dec 5 12:33:00 2006 From: vallis.35530172 at bloglines.com (vallis.35530172 at bloglines.com) Date: 5 Dec 2006 17:33:00 -0000 Subject: [SciPy-user] How to: Pass NumPy arrays to C & use SWIG to generate modules Message-ID: <1165339980.3313092686.20289.sendItem@bloglines.com> If all you want to do is pass an array of doubles (with its length) to C, you can use the attached typemap and swig file. I think they represent the maximum possible distillation of the process. Read the short comments in example.i. (Uh, I'm afraid the posting might break the newlines... I will also send you the files in attachment). Let me know if you need help with compiling. Cheers, Michele ==== the following is my example.i (the SWIG file) ==== %module example %{ #include "example.h" %} // suppose your example.h contains the function declaration // void examplefunction(double *array,long arraylength); // to which you want to pass a numpy 1-D array; // what we do here is to apply the numpy-to-array typemap // defined in numpy_typemaps.i to your function, which is then // included by the %include line below %include numpy_typemaps.i %typemap(in) (double *array,long arraylength) = (double *numpyarray,long numpyarraysize); %include "example.h" ==== the following is file numpy_typemaps.i ==== #include "numpy/arrayobject.h" #define ISCONTIGUOUS(m) (PyArray_ISCONTIGUOUS(m)) #define PyArray_CONTIGUOUS(m) (ISCONTIGUOUS(m) ? 
Py_INCREF(m), m : \ (PyArrayObject *)(PyArray_ContiguousFromObject((PyObject *)(m), (m)->descr->type_num, 0,0))) %} %init %{ import_array(); %} /* %typemap(in) (double* numarray, long length) { */ %typemap(in) (double *numpyarray,long numpyarraysize) { PyArrayObject *arr; if (!PyArray_Check($input)) { PyErr_SetString(PyExc_TypeError,"First argument is not an array"); return NULL; } if (PyArray_ObjectType($input,0) != PyArray_DOUBLE) { PyErr_SetString(PyExc_TypeError, "Incorrect array type: we need an array of DOUBLE"); return NULL; } arr = PyArray_CONTIGUOUS((PyArrayObject *)$input); if (arr->nd == 1) { $1 = (double *)arr->data; $2 = (long)arr->dimensions[0]; } else if (arr->nd == 2) { $1 = (double *)arr->data; $2 = (long)arr->dimensions[0] * (long)arr->dimensions[1]; } else { PyErr_SetString(PyExc_TypeError, "Incorrect number of dims: we want a 1D or 2D array"); return NULL; } Py_DECREF(arr); /* Release our local copy of the PyArray */ } ==== --- SciPy Users List seems to be aimed at Windows. I am on a Mac. Rather > than fight that battle now, I will try today to just > generate my own simple C extension test routine. If I > get that far, then that will probably be the way for > me to go. I haven't checked out pyrex, though. I > might look at that when I take a break. > > If I get the C extension working, I will post an > announcement about it. It might help non-gurus like > me get started. > > -- Lou Pecora > Naval Research Lab, Washington, DC > > --- Gael Varoquaux > wrote: > > > Lou, > > > > I am not a very advanced python user. I do not > > master writing C > > extentions. I have tried simple things, and had > > success with them because > > they were simple enough for me. In order of > > difficulty this is what I > > did: > > > > - scipy.weave (great, but won't probably suit your > > purposes, have a look > > at the wiki page) > > > > - pyrex (same reamrk than above, though the concept > > is fabulous) > > > > - passing array to C code with ctypes. 
I think this > > can suit you. It > > might not be as versatile or powerful as SWIG, but > > I found it was > > simple enough for my limited knowledge. I started > > out with simple hello > > word C functions, and was able to pass numpy > > arrays back and forth > > between numpy and C thanks to Albert Strasheim's > > wiki page > > http://scipy.org/Cookbook/Ctypes2 > > > > I cannot help you out with SWIG, as I have decided > > it was to complicated > > for me so far. > > > > Ga?l > > > > > ____________________________________________________________________________________ > Want to start your own business? > Learn how on Yahoo! Small Business. > http://smallbusiness.yahoo.com/r-index > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Tue Dec 5 14:14:37 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Dec 2006 13:14:37 -0600 Subject: [SciPy-user] blas/lapack issue on a freebsd box... In-Reply-To: <4574B0DB.7060000@gmail.com> References: <4574B0DB.7060000@gmail.com> Message-ID: <4575C51D.1000407@gmail.com> fred wrote: > Hi all, > > I'm trying to build scipy and all its stuff on my freebsd box (r6.2). > (binaries pkg are not yet provided on freebsd ftp servers). > > I have successfully built scipy 0.5.1, but in fact, it fails when I want > to import some module: > > IPython 0.7.2 -- An enhanced Interactive Python. > ? -> Introduction to IPython's features. > %magic -> Information about IPython's 'magic' % functions. > help -> Python's own help system. > object? -> Details about 'object'. ?object also works, ?? prints more. > > In [1]: from scipy.special import jv > --------------------------------------------------------------------------- > ImportError: /usr/local/lib/libalapack_r.so.1: Undefined symbol > "cblas_dswap" > > As google gives no result about this issue, any suggestion are welcome. 
> > Cheers, > > PS : python 2.4.3, numpy 1.0r1, scipy 0.5.1, atlas 3.6.0.1, blas 1.0 Check what libraries libalapack_r.so.1 wants to link againt with ldd(1) (at least that's what I'd do on a Linux; FreeBSD might use a different tool for this). It looks like your scipy.linalg.flapack module was not compiled against ATLAS correctly. Can you post the output of the following command (on scipy's setup.py)? python setup.py config -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mwojc at p.lodz.pl Tue Dec 5 22:17:13 2006 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Wed, 06 Dec 2006 03:17:13 -0000 Subject: [SciPy-user] feed-forward neural network for python Message-ID: Hi! I released feed-forward neural network for python project at sourceforge (ffnet). I'm announcing it here because it depends on numpy/scipy tandem, so you, folks, are potential users/testers :) If anyone is interested please visit ffnet.sourceforge.net (and then post comments if any...) Greetings -- Marek Wojciechowski From giorgio.luciano at chimica.unige.it Wed Dec 6 03:04:15 2006 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Wed, 06 Dec 2006 09:04:15 +0100 Subject: [SciPy-user] feed-forward neural network for python In-Reply-To: References: Message-ID: <4576797F.2090502@chimica.unige.it> Hello Marek, thanks for the code. I wil ldownload and try to test it and give feedback. I work in chemometrics and I will try to use them on a test set. Giorgio From fredmfp at gmail.com Wed Dec 6 03:04:39 2006 From: fredmfp at gmail.com (fred) Date: Wed, 06 Dec 2006 09:04:39 +0100 Subject: [SciPy-user] blas/lapack issue on a freebsd box... 
In-Reply-To: <4575C51D.1000407@gmail.com>
References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com>
Message-ID: <45767997.5010903@gmail.com>

Robert Kern a écrit :
>Check what libraries libalapack_r.so.1 wants to link againt with ldd(1) (at
>least that's what I'd do on a Linux; FreeBSD might use a different tool for
>this). It looks like your scipy.linalg.flapack module was not compiled against
>ATLAS correctly. Can you post the output of the following command (on scipy's
>setup.py)?

Bad news.
It seems it comes from atlas internals.

The problem is that atlas is not provided as a binary package with
freebsd 6.2, and it takes several hours to build on my Pentium D
930... :-(

Cheers,

--
http://scipy.org/FredericPetit

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: log
URL:

From david at ar.media.kyoto-u.ac.jp Wed Dec 6 03:23:44 2006
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 06 Dec 2006 17:23:44 +0900
Subject: [SciPy-user] blas/lapack issue on a freebsd box...
In-Reply-To: <45767997.5010903@gmail.com>
References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com>
	<45767997.5010903@gmail.com>
Message-ID: <45767E10.5020700@ar.media.kyoto-u.ac.jp>

fred wrote:
> Bad news.
> It seems it comes from atlas internals.
>
> The problem is that atlas is not provided as a binary package with
> freebsd 6.2, and it takes several hours to build on my Pentium D
> 930... :-(
>
Which version of atlas are you trying to build? The recent (3.7.*) are
much easier to build/configure; if your configuration is already done,
atlas should build in less than one hour, especially if you use several
jobs (to use only if you have a lot of memory), I think.

cheers.
David From olivetti at itc.it Wed Dec 6 03:57:45 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Wed, 06 Dec 2006 09:57:45 +0100 Subject: [SciPy-user] feed-forward neural network for python In-Reply-To: References: Message-ID: <45768609.1010307@itc.it> Marek Wojciechowski wrote: > Hi! > I released feed-forward neural network for python project at sourceforge > (ffnet). I'm announcing it here because it depends on numpy/scipy tandem, > so you, folks, are potential users/testers :) > > If anyone is interested please visit ffnet.sourceforge.net (and then post > comments if any...) Hi Marek, thank you for you ANN implementation. I'm currently interested in recurrent neural networks so I hope you'll add them too in near future ;) Anyway, for those interested in ANN you can check conx.py module of PyRobotics project. It's just one python file (182Kb of python source!) inside a much bigger project. Conx.py is self contained and unfortunately it needs Numeric and not numpy :(( What about mixing ffnet and conx.py? ;) Some links: http://www.pyrorobotics.org/ - main site http://www.pyrorobotics.org/?page=PyroModuleNeuralNetworks - conx module introduction http://cvs.cs.brynmawr.edu/cgi-bin/viewcvs.cgi/pyrobot/brain/conx.py - actual conx.py code http://www.pyrorobotics.org/?page=Building_20Neural_20Networks_20using_20Conx - conx example code HTH Emanuele P.S.: both ffnet and conx.py are released under GPL. Good! From fredmfp at gmail.com Wed Dec 6 04:03:41 2006 From: fredmfp at gmail.com (fred) Date: Wed, 06 Dec 2006 10:03:41 +0100 Subject: [SciPy-user] blas/lapack issue on a freebsd box... In-Reply-To: <45767E10.5020700@ar.media.kyoto-u.ac.jp> References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com> <45767997.5010903@gmail.com> <45767E10.5020700@ar.media.kyoto-u.ac.jp> Message-ID: <4576876D.1050500@gmail.com> David Cournapeau wrote: >> > Which version of atlas are you trying to build ? 
The recent (3.7.*) are
> much easier to build/configure; if your configuration is already done,
> atlas should build in less than one hour, especially if you use several
> jobs (to use only if you have a lot of memory), I think.
>
I'm actually building 3.7.11 ;-)

Let's wait & see...

Cheers,

--
http://scipy.org/FredericPetit

From giorgio.luciano at chimica.unige.it Wed Dec 6 05:31:14 2006
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Wed, 06 Dec 2006 11:31:14 +0100
Subject: [SciPy-user] Newbie question about boolean condition argument
In-Reply-To: <45767E10.5020700@ar.media.kyoto-u.ac.jp>
References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com>
	<45767997.5010903@gmail.com> <45767E10.5020700@ar.media.kyoto-u.ac.jp>
Message-ID: <45769BF2.7050807@chimica.unige.it>

Sorry for the newbie question, but I'm struggling with the last lines of
my matlab code for a regression method (99% finished), converting it to
numpy/scipy.

Here they are (matlab code):

s2=find(sig(:,in:c)>0.001&sig(:,in:c)<=0.01)
if s2~=[]; % for Matlab 7 use if not(isempty(s2));

The first problem is that the command find (and also where) does not
support a logical operator as condition :( because

s2=find(sig(:,in:c)>0.001)

works, but

s2=find(sig(:,in:c)>0.001&sig(:,in:c)<=0.01)

gives me an error like this:

Traceback (most recent call last):
  File "", line 1, in -toplevel-
    s3=find(sig1[:,arange(ini,c)]>0.001 and sig1[:,arange(ini,c)]<=0.00 )
ValueError: The truth value of an array with more than one element is
ambiguous. Use a.any() or a.all()

I've read all the manual material about the difference between logical and
bitwise operations... tried with different parentheses, using AND or
logical_and, & etc., but I didn't manage to do it.
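In numpy the two conditions can be combined element-wise; a minimal sketch (the parentheses around each comparison matter, because `&` binds more tightly than `>` and `<=`, and Python's `and` on whole arrays raises exactly the ValueError in the traceback above):

```python
import numpy as np

sig = np.array([0.0005, 0.005, 0.03, 0.2])

# Element-wise & combines the two boolean arrays; Python's `and` on whole
# arrays raises the "truth value ... is ambiguous" ValueError.
mask = (sig > 0.001) & (sig <= 0.01)
s2 = np.where(mask)[0]   # indices of the matches, like MATLAB's find

# Equivalent of MATLAB's isempty(s2): check the number of matches
if s2.size > 0:
    print("matches at indices:", s2)
```

`logical_and(a, b)` is the spelled-out form of `a & b` for boolean arrays; the parentheses around each comparison are required either way.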
The second question is...(already solve) does it exist a command like (isempty) in scipy/numpy I used if s2 !=[] ant it worked perfectly, I was just curious to know if the command existed :) Thanks to all and sorry for the easy question (I've searched in the faq, and manual before posting but found nothing) Giorgio From otto at tronarp.se Wed Dec 6 07:26:18 2006 From: otto at tronarp.se (Otto Tronarp) Date: Wed, 06 Dec 2006 13:26:18 +0100 Subject: [SciPy-user] Newbie question about boolean condition argument In-Reply-To: <45769BF2.7050807@chimica.unige.it> References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com> <45767997.5010903@gmail.com> <45767E10.5020700@ar.media.kyoto-u.ac.jp> <45769BF2.7050807@chimica.unige.it> Message-ID: <20061206132618.fifttb0go4cksgoc@mathcore.kicks-ass.org> Quoting Giorgio Luciano : [...] > > here they are (matlab code) > > s2=find(sig(:,in:c)>0.001&sig(:,in:c)<=0.01) > if s2~=[]; % for Matlab 7 use if not(isempty(s2)); > > the first problem is that the command find (and also where) does not > support a logicla operator as condition :( > because > > s2=find(sig(:,in:c)>0.001) works.. > but s2=find(sig(:,in:c)>0.001&sig(:,in:c)<=0.01) > > give me an error like this > > Traceback (most recent call last): > File "", line 1, in -toplevel- > s3=find(sig1[:,arange(ini,c)]>0.001 and sig1[:,arange(ini,c)]<=0.00 ) > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > I've read all the manual about difference in logical and bitwise > operation... tried with different parenthese, using AND or logical_and, > & etc. but I didn' manage to do it. Are you sure that you used logical_and in the right way? It seems to work in my simple example. 
In [10]:x = rand(20) In [11]:logical_and(x > 0.01, x < 0.5) Out[11]:NumPy array, format: long [True True False True True False True True True False False False False False True False True True False True] In [12]:where(logical_and(x > 0.01, x < 0.5)) Out[12]:NumPy array, format: long [ 0 1 3 4 6 7 8 14 16 17 19] /Otto From fredmfp at gmail.com Wed Dec 6 07:14:15 2006 From: fredmfp at gmail.com (fred) Date: Wed, 06 Dec 2006 13:14:15 +0100 Subject: [SciPy-user] blas/lapack issue on a freebsd box... In-Reply-To: <45767E10.5020700@ar.media.kyoto-u.ac.jp> References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com> <45767997.5010903@gmail.com> <45767E10.5020700@ar.media.kyoto-u.ac.jp> Message-ID: <4576B417.9030201@gmail.com> David Cournapeau wrote: > Which version of atlas are you trying to build ? The recent (3.7.*) are > much easier to build/configure; if your configuration is already done, > atlas should build in less than one hour, specially if you use several > jobs (to use if you have a lot of memory only), I think. 
> During building (not yet finished) I have a lot of messages which do not sound good : Dec 6 10:19:31 marsu kernel: pid 47783 (xsfc), uid 0: exited on signal 11 (core dumped) Dec 6 10:20:12 marsu kernel: pid 48589 (xsfc), uid 0: exited on signal 11 (core dumped) Dec 6 10:21:04 marsu kernel: pid 49528 (xsfc), uid 0: exited on signal 11 (core dumped) Dec 6 10:33:13 marsu kernel: pid 60560 (xzfc), uid 0: exited on signal 11 (core dumped) Dec 6 10:49:11 marsu kernel: pid 74713 (xcfc), uid 0: exited on signal 11 (core dumped) Dec 6 10:49:56 marsu kernel: pid 76007 (xcfc), uid 0: exited on signal 11 (core dumped) Dec 6 10:50:55 marsu kernel: pid 77564 (xcfc), uid 0: exited on signal 11 (core dumped) Dec 6 11:22:09 marsu kernel: pid 9256 (xdfc), uid 0: exited on signal 11 (core dumped) Dec 6 11:40:57 marsu kernel: pid 23135 (xsfc), uid 0: exited on signal 11 (core dumped) Dec 6 11:41:38 marsu kernel: pid 23938 (xsfc), uid 0: exited on signal 11 (core dumped) Dec 6 11:42:30 marsu kernel: pid 24877 (xsfc), uid 0: exited on signal 11 (core dumped) Dec 6 11:54:41 marsu kernel: pid 35877 (xzfc), uid 0: exited on signal 11 (core dumped) Dec 6 12:10:44 marsu kernel: pid 50103 (xcfc), uid 0: exited on signal 11 (core dumped) Dec 6 12:11:29 marsu kernel: pid 51410 (xcfc), uid 0: exited on signal 11 (core dumped) Dec 6 12:12:28 marsu kernel: pid 52964 (xcfc), uid 0: exited on signal 11 (core dumped) Dec 6 12:44:06 marsu kernel: pid 86195 (xdfc), uid 0: exited on signal 11 (core dumped) Any idea ? Yes, it's a litte OT, sorry. 
Cheers,

--
http://scipy.org/FredericPetit

From giorgio.luciano at chimica.unige.it Wed Dec 6 10:40:30 2006
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Wed, 06 Dec 2006 16:40:30 +0100
Subject: [SciPy-user] Newbie question about boolean condition argument
	(isempty command equivalent in matlab)
In-Reply-To: <45769BF2.7050807@chimica.unige.it>
References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com>
	<45767997.5010903@gmail.com> <45767E10.5020700@ar.media.kyoto-u.ac.jp>
	<45769BF2.7050807@chimica.unige.it>
Message-ID: <4576E46E.8030606@chimica.unige.it>

Thanks a lot to Otto; I've finished my regression module. I'm testing it
and I will soon "release" it to anyone interested in chemometrics (I've
sent a message to the international society of chemometrics and some
people seem interested in creating a repository for Python chemometrics
modules)... but I have a small doubt. This works well:

bar(N, b1[:,0], width, color='r', yerr=binterv)
############
s3=find(sig1[:,arange(ini,c)]<=0.001)
b1=b.flatten()
#if s3!=[]:
for i3 in arange(len(s3)):
    text(s3[i3], b1[s3[i3]+ini],'***')
s2=find(logical_and(sig1[:,arange(ini,c)]>0.001,sig1[:,arange(ini,c)]<=0.01))
for i2 in arange(len(s2)):
    text(s2[i2], b1[s2[i2]+ini],'**')
s1=find(logical_and(sig1[:,arange(ini,c)]>0.01,sig1[:,arange(ini,c)]<=0.05))
for i1 in arange(len(s1)):
    text(s1[i1], b1[s1[i1]+ini],'*')
title('Plot of the coefficients of the model')

and when I uncomment the "if s3!=[]:" part it does not... So in this case
I've solved the problem, but is there an equivalent of the isempty matlab
command in scipy/numpy?

Giorgio

From fredmfp at gmail.com Wed Dec 6 12:30:36 2006
From: fredmfp at gmail.com (fred)
Date: Wed, 06 Dec 2006 18:30:36 +0100
Subject: [SciPy-user] blas/lapack issue on a freebsd box...
In-Reply-To: <45767E10.5020700@ar.media.kyoto-u.ac.jp> References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com> <45767997.5010903@gmail.com> <45767E10.5020700@ar.media.kyoto-u.ac.jp> Message-ID: <4576FE3C.8080001@gmail.com> David Cournapeau wrote: > Which version of atlas are you trying to build ? The recent (3.7.*) are > much easier to build/configure; if your configuration is already done, > atlas should build in less than one hour, specially if you use several > jobs (to use if you have a lot of memory only), I think. > Same result with atlas-devel-3.7.11 :-( -- http://scipy.org/FredericPetit From robert.kern at gmail.com Wed Dec 6 12:52:15 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Dec 2006 11:52:15 -0600 Subject: [SciPy-user] blas/lapack issue on a freebsd box... In-Reply-To: <45767997.5010903@gmail.com> References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com> <45767997.5010903@gmail.com> Message-ID: <4577034F.9060609@gmail.com> fred wrote: > compiling '_configtest.c': > > /* This file is generated from numpy_distutils/system_info.py */ > void ATL_buildinfo(void); > int main(void) { > ATL_buildinfo(); > return 0; > } > C compiler: cc -fno-strict-aliasing -DNDEBUG -O2 -fno-strict-aliasing -pipe -D__wchar_t=wchar_t -DTHREAD_STACK_SIZE=0x20000 -fPIC > > compile options: '-c' > cc: _configtest.c > cc _configtest.o -L/usr/local/lib -llapack -lblas -o _configtest > _configtest.o(.text+0x18): In function `main': > : undefined reference to `ATL_buildinfo' > /usr/local/lib/liblapack.so: undefined reference to `log' > /usr/local/lib/liblapack.so: undefined reference to `sqrt' > /usr/local/lib/liblapack.so: undefined reference to `e_wsfe' > /usr/local/lib/liblapack.so: undefined reference to `z_abs' > /usr/local/lib/liblapack.so: undefined reference to `c_sqrt' > /usr/local/lib/liblapack.so: undefined reference to `s_cmp' > /usr/local/lib/liblapack.so: undefined reference to `z_exp' > /usr/local/lib/liblapack.so: undefined 
reference to `cos' > /usr/local/lib/liblapack.so: undefined reference to `c_exp' > /usr/local/lib/liblapack.so: undefined reference to `pow' > /usr/local/lib/liblapack.so: undefined reference to `log10' > /usr/local/lib/liblapack.so: undefined reference to `do_fio' > /usr/local/lib/liblapack.so: undefined reference to `z_sqrt' > /usr/local/lib/liblapack.so: undefined reference to `s_cat' > /usr/local/lib/liblapack.so: undefined reference to `s_stop' > /usr/local/lib/liblapack.so: undefined reference to `c_abs' > /usr/local/lib/liblapack.so: undefined reference to `s_wsfe' > /usr/local/lib/liblapack.so: undefined reference to `s_copy' > _configtest.o(.text+0x18): In function `main': > : undefined reference to `ATL_buildinfo' > /usr/local/lib/liblapack.so: undefined reference to `log' > /usr/local/lib/liblapack.so: undefined reference to `sqrt' > /usr/local/lib/liblapack.so: undefined reference to `e_wsfe' > /usr/local/lib/liblapack.so: undefined reference to `z_abs' > /usr/local/lib/liblapack.so: undefined reference to `c_sqrt' > /usr/local/lib/liblapack.so: undefined reference to `s_cmp' > /usr/local/lib/liblapack.so: undefined reference to `z_exp' > /usr/local/lib/liblapack.so: undefined reference to `cos' > /usr/local/lib/liblapack.so: undefined reference to `c_exp' > /usr/local/lib/liblapack.so: undefined reference to `pow' > /usr/local/lib/liblapack.so: undefined reference to `log10' > /usr/local/lib/liblapack.so: undefined reference to `do_fio' > /usr/local/lib/liblapack.so: undefined reference to `z_sqrt' > /usr/local/lib/liblapack.so: undefined reference to `s_cat' > /usr/local/lib/liblapack.so: undefined reference to `s_stop' > /usr/local/lib/liblapack.so: undefined reference to `c_abs' > /usr/local/lib/liblapack.so: undefined reference to `s_wsfe' > /usr/local/lib/liblapack.so: undefined reference to `s_copy' > failure. 
> removing: _configtest.c _configtest.o > Status: 255 > Output: > FOUND: > libraries = ['lapack', 'blas'] > library_dirs = ['/usr/local/lib'] > language = c > define_macros = [('NO_ATLAS_INFO', 2)] > include_dirs = ['/usr/local/include'] This is part of your problem. Do you have a site.cfg file? If so, please post it. You ought to have sections that look like these: [DEFAULT] library_dirs=/usr/local/lib include_dirs=/usr/local/include [blas_opt] libraries=f77blas, cblas, atlas [lapack_opt] libraries=lapack, f77blas, cblas, atlas Possibly, you might also need to add g2c to the end of the libraries= lines. Now, those libraries lists are from what I normally see installed from ATLAS. Given that I've never seen libalapack_r.so.1 before, it's possible that ATLAS gets build strangely on your machine. Please also give us a list of the libraries that ATLAS installs. Also tell us where you got /usr/local/lib/liblapack.so and /usr/local/lib/libblas.so . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jonas.kahn at math.u-psud.fr Wed Dec 6 13:35:40 2006 From: jonas.kahn at math.u-psud.fr (Jonas Kahn) Date: Wed, 6 Dec 2006 19:35:40 +0100 Subject: [SciPy-user] Definition of random variable Message-ID: <20061206183540.GA1769@clipper.ens.fr> Hi I try to create a new instance of the rv_continuous class, and the __init__ is not very explicit... Essentially, I would like to define it from the probability distribution only, and I guess (totally without any ground for that) that it is implemented as such. Especially I would like to use the associated inverse survival function, without writing it by myself. Of course I could use the routines for solving equations to get it... Alternatively, is there any way to get the inverse function of a monotonic function? 
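On the generic question, one option is to bracket and bisect. A pure-Python sketch (assuming only that the function is continuous and monotonic on a bracketing interval; `scipy.optimize.brentq` does the same job faster):

```python
import math

def invert_monotonic(f, y, lo, hi, tol=1e-12):
    # Bisection: assumes f is continuous and monotonic (either direction)
    # on [lo, hi] and that y lies between f(lo) and f(hi).
    increasing = f(hi) >= f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) < y) == increasing:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: the survival function of the standard exponential is
# S(x) = exp(-x), so its inverse (the ISF) should return -log(q).
x = invert_monotonic(lambda t: math.exp(-t), 0.25, 0.0, 50.0)
```

The `invert_monotonic` name and the bracketing interval are illustrative; for a custom `rv_continuous` subclass the same idea applies to inverting the survival function numerically when no closed form is available.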
Thanks for the help Jonas PS: I think there is something wrong with the "scale" parameter in the package, or I have not understood how to use it. There is no change to the pdf when I try to give it another value as 1: In [124]: stats.binom.rvs(1,1) Out[124]: array([1]) # No problem with loc In [126]: stats.binom.rvs(1,1, loc = 10) Out[126]: array([11]) # But nothing with scale, with or without loc In [127]: stats.binom.rvs(1,1, loc = 10, scale =100) Out[127]: array([11]) In [128]: stats.binom.rvs(1,1, scale =100) Out[128]: array([1]) From fredmfp at gmail.com Wed Dec 6 14:02:02 2006 From: fredmfp at gmail.com (fred) Date: Wed, 06 Dec 2006 20:02:02 +0100 Subject: [SciPy-user] blas/lapack issue on a freebsd box... In-Reply-To: <45770570.5080205@gmail.com> References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com> <45767997.5010903@gmail.com> <45770570.5080205@gmail.com> Message-ID: <457713AA.6070102@gmail.com> > > This is part of your problem. Do you have a site.cfg file? If so, please > post it. > > Ok. I have this one, in /usr/local/lib/python2.4/site-packages/numpy/distutils: [atlas] library_dirs = /usr/lib/atlas/3dnow/ atlas_libs = lapack, blas Looks wrong, no ? > You ought to have sections that look like these: > > > [DEFAULT] > library_dirs=/usr/local/lib > include_dirs=/usr/local/include > > [blas_opt] > libraries=f77blas, cblas, atlas > > [lapack_opt] > libraries=lapack, f77blas, cblas, atlas > > I have deleted the lines above, and put yours. But nothing changes :-( > Possibly, you might also need to add g2c to the end of the libraries= lines. > > Hmm, I have no g2c on my freebsd box. > Now, those libraries lists are from what I normally see installed from > ATLAS. > Given that I've never seen libalapack_r.so.1 before, it's possible that > ATLAS > gets build strangely on your machine. Please also give us a list of the > libraries that ATLAS installs. Also tell us where you got > /usr/local/lib/liblapack.so and /usr/local/lib/libblas.so . 
> Information for atlas-devel-3.7.11: Files: /usr/local/include/cblas.h /usr/local/include/clapack.h /usr/local/include/blas.h /usr/local/include/lapack.h /usr/local/lib/libalapack.a /usr/local/lib/libalapack.so.1 /usr/local/lib/libalapack.so /usr/local/lib/libalapack_r.a /usr/local/lib/libalapack_r.so.1 /usr/local/lib/libalapack_r.so /usr/local/lib/libatlas.a /usr/local/lib/libatlas.so.1 /usr/local/lib/libatlas.so /usr/local/lib/libatlas_r.a /usr/local/lib/libatlas_r.so.1 /usr/local/lib/libatlas_r.so /usr/local/lib/libcblas.a /usr/local/lib/libcblas.so.1 /usr/local/lib/libcblas.so /usr/local/lib/libcblas_r.a /usr/local/lib/libcblas_r.so.1 /usr/local/lib/libcblas_r.so /usr/local/lib/libf77blas.a /usr/local/lib/libf77blas.so.1 /usr/local/lib/libf77blas.so /usr/local/lib/libf77blas_r.a /usr/local/lib/libf77blas_r.so.1 /usr/local/lib/libf77blas_r.so /usr/local/lib/libptcblas.a /usr/local/lib/libptcblas.so.1 /usr/local/lib/libptcblas.so /usr/local/lib/libptf77blas.a /usr/local/lib/libptf77blas.so.1 /usr/local/lib/libptf77blas.so /usr/local/lib/libtstatlas.a /usr/local/lib/libtstatlas.so.1 /usr/local/lib/libtstatlas.so /usr/local/lib/libtstatlas_r.a /usr/local/lib/libtstatlas_r.so.1 /usr/local/lib/libtstatlas_r.so /usr/local/share/doc/atlas/AtlasCredits.txt /usr/local/share/doc/atlas/ChangeLog /usr/local/share/doc/atlas/DirStruct.txt /usr/local/share/doc/atlas/INDEX.txt /usr/local/share/doc/atlas/LibReadme.txt /usr/local/share/doc/atlas/TestTime.txt /usr/local/share/doc/atlas/TroubleShoot.txt /usr/local/share/doc/atlas/Windows.txt /usr/local/share/doc/atlas/atlas_contrib.ps /usr/local/share/doc/atlas/atlas_devel.ps /usr/local/share/doc/atlas/atlas_over.ps /usr/local/share/doc/atlas/cblas.ps /usr/local/share/doc/atlas/cblasqref.ps /usr/local/share/doc/atlas/f77blasqref.ps /usr/local/share/doc/atlas/lapackqref.ps I got liblapack.so from lapack-3.0_1 and libblas.so from blas-1.0 binary packages. Thanks in advance. 
PS : building scipy-0.5.1 requires that atlas be built with static libs. PS2 : thanks to added me to the EditorsList ;-) -- http://scipy.org/FredericPetit From robert.kern at gmail.com Wed Dec 6 14:26:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Dec 2006 13:26:21 -0600 Subject: [SciPy-user] blas/lapack issue on a freebsd box... In-Reply-To: <457713AA.6070102@gmail.com> References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com> <45767997.5010903@gmail.com> <45770570.5080205@gmail.com> <457713AA.6070102@gmail.com> Message-ID: <4577195D.4020407@gmail.com> fred wrote: >> This is part of your problem. Do you have a site.cfg file? If so, please >> post it. >> >> > Ok. I have this one, in > /usr/local/lib/python2.4/site-packages/numpy/distutils: > > [atlas] > library_dirs = /usr/lib/atlas/3dnow/ > atlas_libs = lapack, blas > > Looks wrong, no ? Quite wrong. You should remove it. Also, put your site.cfg files next to the setup.py files for the package you are building. numpy.distutils will find it if it's in that location, but that's a bad place for it. >> You ought to have sections that look like these: >> >> >> [DEFAULT] >> library_dirs=/usr/local/lib >> include_dirs=/usr/local/include >> >> [blas_opt] >> libraries=f77blas, cblas, atlas >> >> [lapack_opt] >> libraries=lapack, f77blas, cblas, atlas >> >> > I have deleted the lines above, and put yours. > But nothing changes :-( Something should have changed. Maybe you don't get a successful compile, but the output of "python setup.py config" should be different. >> Possibly, you might also need to add g2c to the end of the libraries= lines. >> > Hmm, I have no g2c on my freebsd box. What FORTRAN compiler are you using? What FORTRAN compiler did you configure ATLAS with? >> Now, those libraries lists are from what I normally see installed from >> ATLAS. >> Given that I've never seen libalapack_r.so.1 before, it's possible that >> ATLAS >> gets build strangely on your machine. 
Please also give us a list of the >> libraries that ATLAS installs. Also tell us where you got >> /usr/local/lib/liblapack.so and /usr/local/lib/libblas.so . >> > Information for atlas-devel-3.7.11: > > Files: > /usr/local/include/cblas.h > /usr/local/include/clapack.h > /usr/local/include/blas.h > /usr/local/include/lapack.h > /usr/local/lib/libalapack.a > /usr/local/lib/libalapack.so.1 > /usr/local/lib/libalapack.so > /usr/local/lib/libalapack_r.a > /usr/local/lib/libalapack_r.so.1 > /usr/local/lib/libalapack_r.so > /usr/local/lib/libatlas.a > /usr/local/lib/libatlas.so.1 > /usr/local/lib/libatlas.so > /usr/local/lib/libatlas_r.a > /usr/local/lib/libatlas_r.so.1 > /usr/local/lib/libatlas_r.so > /usr/local/lib/libcblas.a > /usr/local/lib/libcblas.so.1 > /usr/local/lib/libcblas.so > /usr/local/lib/libcblas_r.a > /usr/local/lib/libcblas_r.so.1 > /usr/local/lib/libcblas_r.so > /usr/local/lib/libf77blas.a > /usr/local/lib/libf77blas.so.1 > /usr/local/lib/libf77blas.so > /usr/local/lib/libf77blas_r.a > /usr/local/lib/libf77blas_r.so.1 > /usr/local/lib/libf77blas_r.so > /usr/local/lib/libptcblas.a > /usr/local/lib/libptcblas.so.1 > /usr/local/lib/libptcblas.so > /usr/local/lib/libptf77blas.a > /usr/local/lib/libptf77blas.so.1 > /usr/local/lib/libptf77blas.so > /usr/local/lib/libtstatlas.a > /usr/local/lib/libtstatlas.so.1 > /usr/local/lib/libtstatlas.so > /usr/local/lib/libtstatlas_r.a > /usr/local/lib/libtstatlas_r.so.1 > /usr/local/lib/libtstatlas_r.so > /usr/local/share/doc/atlas/AtlasCredits.txt > /usr/local/share/doc/atlas/ChangeLog > /usr/local/share/doc/atlas/DirStruct.txt > /usr/local/share/doc/atlas/INDEX.txt > /usr/local/share/doc/atlas/LibReadme.txt > /usr/local/share/doc/atlas/TestTime.txt > /usr/local/share/doc/atlas/TroubleShoot.txt > /usr/local/share/doc/atlas/Windows.txt > /usr/local/share/doc/atlas/atlas_contrib.ps > /usr/local/share/doc/atlas/atlas_devel.ps > /usr/local/share/doc/atlas/atlas_over.ps > /usr/local/share/doc/atlas/cblas.ps > 
/usr/local/share/doc/atlas/cblasqref.ps > /usr/local/share/doc/atlas/f77blasqref.ps > /usr/local/share/doc/atlas/lapackqref.ps > > I got liblapack.so from lapack-3.0_1 and libblas.so from blas-1.0 binary > packages. Okay, try these sections: [DEFAULT] library_dirs=/usr/local/lib include_dirs=/usr/local/include [blas_opt] libraries=f77blas, cblas, atlas [lapack_opt] libraries=alapack, f77blas, cblas, atlas > Thanks in advance. > > > PS : building scipy-0.5.1 requires that atlas be built with static libs. Says who? (It's possibly true, I don't know, but I'd like to know where you are getting this information from.) If that's the case, then add [DEFAULT] search_static_first = true > PS2 : thanks to added me to the EditorsList ;-) No problem. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fredmfp at gmail.com Wed Dec 6 14:26:46 2006 From: fredmfp at gmail.com (fred) Date: Wed, 06 Dec 2006 20:26:46 +0100 Subject: [SciPy-user] blas/lapack issue on a freebsd box [fixed] In-Reply-To: <457713AA.6070102@gmail.com> References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com> <45767997.5010903@gmail.com> <45770570.5080205@gmail.com> <457713AA.6070102@gmail.com> Message-ID: <45771976.5090901@gmail.com> site.cfg was really guilty ;-) As I said, some packages were installed from binary packages, others from "ports" as freebsd users know, i.e. build on my freebsd box. So I rebuilt all from ports, site.cfg looks like good, and now it works fine. Thanks to have pointed to me where the issue could come from. 
Cheers,

--
http://scipy.org/FredericPetit

From mwojc at p.lodz.pl Wed Dec 6 15:38:20 2006
From: mwojc at p.lodz.pl (Marek Wojciechowski)
Date: Wed, 06 Dec 2006 20:38:20 -0000
Subject: [SciPy-user] feed-forward neural network for python
In-Reply-To:
References:
Message-ID:

On Wed, 06 Dec 2006 18:00:08 -0000, wrote:

> Marek Wojciechowski wrote:
>> Hi!
>> I released feed-forward neural network for python project at sourceforge
>> (ffnet). I'm announcing it here because it depends on numpy/scipy
>> tandem,
>> so you, folks, are potential users/testers
>>
>> If anyone is interested please visit ffnet.sourceforge.net (and then
>> post
>> comments if any...)
> Hi Marek,
> thank you for you ANN implementation. I'm currently interested in
> recurrent
> neural networks so I hope you'll add them too in near future
> Anyway, for those interested in ANN you can check conx.py module of
> PyRobotics project. It's just one python file (182Kb of python source!)
> inside a much bigger project. Conx.py is self contained and unfortunately
> it needs Numeric and not numpy :((
> What about mixing ffnet and conx.py?

Compared to conx.py, ffnet is much, much faster and much, much simpler.
Faster thanks to scipy optimization routines, and simpler because
simplicity was the basic assumption when I created ffnet.
To prove it, I trained a XOR network (2-2-1) in conx with:

from pyrobot.brain import conx
net=conx.Network()
net.addThreeLayers(2,2,1)
net.setInputs([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
net.setTargets([[1.], [0.], [0.], [1.]])
net.setLearning(0.5)
net.setMomentum(0.8)
net.setTolerance(0.001)
net.train()

This performed 5000 iterations in 34.6 s (and the tolerance was not
reached).

With ffnet it can be done with:

from ffnet import ffnet, mlgraph
conec = mlgraph((2,2,1))
net = ffnet(conec)
input=[[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
target=[[1.], [0.], [0.], [1.]]
net.train_momentum(input, target, eta=0.5, momentum=0.8, maxiter=5000)

The above trains the network in 17.9 ms, and the fitness is perfect.
ffnet is almost 2000 times faster than conx in this example! And the code
is simpler.

However, there are some nice testing methods in conx which could be used
in ffnet. Thanks for this tip.

--
Marek Wojciechowski

From fredmfp at gmail.com Wed Dec 6 14:39:34 2006
From: fredmfp at gmail.com (fred)
Date: Wed, 06 Dec 2006 20:39:34 +0100
Subject: [SciPy-user] blas/lapack issue on a freebsd box...
In-Reply-To: <4577195D.4020407@gmail.com>
References: <4574B0DB.7060000@gmail.com> <4575C51D.1000407@gmail.com>
	<45767997.5010903@gmail.com> <45770570.5080205@gmail.com>
	<457713AA.6070102@gmail.com> <4577195D.4020407@gmail.com>
Message-ID: <45771C76.4020709@gmail.com>

Robert Kern wrote:
[snip]
> Says who? (It's possibly true, I don't know, but I'd like to know where
> you are getting this information from.)
>
Who? The FreeBSD scipy port maintainer, I guess.
From scipy's Makefile: PORTNAME= scipy PORTVERSION= 0.5.1 CATEGORIES= science python MASTER_SITES= ${MASTER_SITE_SOURCEFORGE} MASTER_SITE_SUBDIR= scipy PKGNAMEPREFIX= ${PYTHON_PKGNAMEPREFIX} MAINTAINER= db at db.net COMMENT= Scientific tools for Python BUILD_DEPENDS= ${PYTHON_SITELIBDIR}/numpy/core/numeric.py:${PORTSDIR}/math/py-numpy \ ${LOCALBASE}/lib/libdjbfft.a:${PORTSDIR}/math/djbfft LIB_DEPENDS= fftw.2:${PORTSDIR}/math/fftw LATEST_LINK= py-${PORTNAME} OPTIONSFILE= ${PORT_DBDIR}/py-numpy/options USE_PYTHON= 2.3+ USE_PYDISTUTILS= yes USE_WX= 2.4-2.6 WX_COMPS= python:build python:run OPTIONS= ATLAS "Use optimized blas library" OFF post-patch: @${GREP} -lR "malloc\.h" ${WRKSRC} | ${XARGS} ${REINPLACE_CMD} \ -e "s at malloc\.h at stdlib.h@" .include .if defined(WITH_ATLAS) LIB_DEPENDS+= atlas.1:${PORTSDIR}/math/atlas .if !exists(${LOCALBASE}/lib/libalapack.a) IGNORE= Atlas needs to be built with WITH_STATICLIB for scipy to function properly .endif .else LIB_DEPENDS+= lapack.3:${PORTSDIR}/math/lapack \ blas.1:${PORTSDIR}/math/blas .endif I don't know why... Cheers, -- http://scipy.org/FredericPetit From conor.robinson at gmail.com Wed Dec 6 14:40:05 2006 From: conor.robinson at gmail.com (Conor Robinson) Date: Wed, 6 Dec 2006 11:40:05 -0800 Subject: [SciPy-user] feed-forward neural network for python In-Reply-To: <45768609.1010307@itc.it> References: <45768609.1010307@itc.it> Message-ID: Marek, This looks really good, and seems to be along the lines of stuff I've been working on. I like the fact that your using fortran to do some of the heavy computation, especially the GA. I'd like to contribute. I find that the rprop algorithm is the best for real-world-noisy data (besides GAs), and this could be an easy add. I have a python implementation that uses scipy to run rprop, there is definite room for improvement, I'm mainly using scipy.where() and wrote the function directly from the original rprop paper (i can send you this code if you'd like it). 
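For reference, the kind of `where`-based update Conor describes can be sketched like this (an iRprop--style step with the usual constants from the Riedmiller/Braun paper; the function name and defaults here are illustrative, not ffnet's API):

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One iRprop- update: adapt each weight's step size from the sign of
    grad * prev_grad, then move against the current gradient sign."""
    sign = grad * prev_grad
    # Same gradient sign as last step: grow the step; flipped sign: shrink it.
    step = np.where(sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign < 0, np.maximum(step * eta_minus, step_min), step)
    # On a sign flip, zero the stored gradient so the next comparison
    # starts fresh (the iRprop- rule).
    grad = np.where(sign < 0, 0.0, grad)
    return -np.sign(grad) * step, step, grad

# Toy call: three weights whose gradients kept, flipped, and gained a sign.
update, new_step, carried = rprop_step(np.array([1.0, -1.0, 0.5]),
                                       np.array([1.0, 1.0, 0.0]),
                                       np.array([0.1, 0.1, 0.1]))
```

Only gradient signs matter here, which is what makes Rprop-family methods robust to noisy gradient magnitudes.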
However, it may be faster to use Fortran here as well. The original C version of rprop is here: http://www.koders.com/c/fid1B403F9BA6B2BAA1C7CEDD1C680F8AAB2819B44E.aspx I also think that it would be great to have two versions of this: one for general usage and one for AltiVec and SSE on 64-bit IBM and Intel chips, which at this stage of the project would be easy to modify: http://developer.apple.com/documentation/Performance/Conceptual/Accelerate_sse_migration/index.html gcc 4.0.3 does some auto-optimization with the right flags, and I've found you can increase the speed by factors of 10 or more in many cases. What type of platforms do you have access to? The CHUD tools on OS X are very effective and might be useful (http://developer.apple.com/tools/performance/) to find optimizations relevant to all platforms. conor On 12/6/06, Emanuele Olivetti wrote: > Marek Wojciechowski wrote: > > Hi! > > I released a feed-forward neural network project for python at sourceforge > > (ffnet). I'm announcing it here because it depends on the numpy/scipy tandem, > > so you, folks, are potential users/testers :) > > > > If anyone is interested please visit ffnet.sourceforge.net (and then post > > comments if any...) > > Hi Marek, > thank you for your ANN implementation. I'm currently interested in recurrent > neural networks so I hope you'll add them too in the near future ;) > > Anyway, for those interested in ANN you can check the conx.py module of > the PyRobotics project. It's just one python file (182Kb of python source!) > inside a much bigger project. Conx.py is self contained and unfortunately > it needs Numeric and not numpy :(( > > What about mixing ffnet and conx.py?
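The sign-based update rprop uses, which Conor describes doing with scipy.where(), can be sketched with numpy in a few lines. This is a rough iRPROP--style sketch; the function name, constants, and the tiny quadratic demo are illustrative assumptions, not Conor's actual implementation or the exact scheme of the original paper:

```python
import numpy as np

def rprop_step(grad, prev_grad, step, weights,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One iRPROP--style update: adapt per-weight step sizes from the
    sign of successive gradients, then move against the current gradient.
    All arrays share the shape of `weights`."""
    sign_change = grad * prev_grad
    # Gradient kept its sign: grow the step; sign flipped: shrink it.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # On a sign flip, zero the stored gradient so the next step is neutral.
    grad = np.where(sign_change < 0, 0.0, grad)
    weights = weights - np.sign(grad) * step
    return weights, step, grad

# Toy check: minimize f(w) = sum(w**2), whose gradient is 2*w.
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)
prev = np.zeros_like(w)
for _ in range(100):
    g = 2.0 * w
    w, step, prev = rprop_step(g, prev, step, w)
print(float(np.abs(w).max()))  # small residual near the minimum
```

Only the signs of the gradients enter the update, which is why rprop is often robust on noisy error surfaces: the step sizes, not the raw gradient magnitudes, control how far each weight moves.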
;) > > Some links: > http://www.pyrorobotics.org/ - main site > http://www.pyrorobotics.org/?page=PyroModuleNeuralNetworks - conx module introduction > http://cvs.cs.brynmawr.edu/cgi-bin/viewcvs.cgi/pyrobot/brain/conx.py - actual conx.py code > http://www.pyrorobotics.org/?page=Building_20Neural_20Networks_20using_20Conx - conx example code > > HTH > > Emanuele > > P.S.: both ffnet and conx.py are released under GPL. Good! > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From R.Springuel at umit.maine.edu Wed Dec 6 10:15:46 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Wed, 06 Dec 2006 15:15:46 +0000 Subject: [SciPy-user] Scipy Superpack (was Re: pajer@iname.com has sent you a, file.) Message-ID: <4576DEA2.3030304@umit.maine.edu> Well, I got scipy and matplotlib working now. For scipy, installing gfortran worked just as advertised. For matplotlib, I decided to download it directly from their sourceforge site and install it along with wxPython for back-end purposes (it's the same backend I use on Windows, so I like the consistency). It was easier for me than trying to figure out if more than one matplotlib file was missing. Is there any way that the Superpack could include a Readme.txt file with installation instructions? I know that I'll sometimes download stuff to install later when I'm offline and I'm sure that others do too. Having the installation instructions in a txt file in the download would help in those situations. Also, the installation instructions on the Scipy website aren't clear about when they apply to the Superpack installation and when they apply to a build-from-scratch installation. Some clarification here would be useful. I'm willing to help write the installation instructions for the Superpack (to the best of my ability), but don't know where they might be submitted (direction?).
Also, since I use wxPython for my matplotlib backend, if there are any specific instructions for PyMC (which I think is another backend) that would make the installation go smoother, I wouldn't be able to provide those. -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From R.Springuel at umit.maine.edu Wed Dec 6 10:23:58 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Wed, 06 Dec 2006 15:23:58 +0000 Subject: [SciPy-user] Windows Binary Problem Message-ID: <4576E08E.2080409@umit.maine.edu> I'm still having the compatibility issues between the numpy and scipy Windows binaries, even with the installations that Gary sent me. Does anyone have any other suggestions as to how to get scipy up and running on a Windows machine (numpy is working fine)? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From fredmfp at gmail.com Wed Dec 6 16:18:10 2006 From: fredmfp at gmail.com (fred) Date: Wed, 06 Dec 2006 22:18:10 +0100 Subject: [SciPy-user] scipy wiki Message-ID: <45773392.2000305@gmail.com> Hi all, The new UI is much better (& faster)! Great job! Anyway, I get an error when I try to access http://scipy.org/Cookbook/MayaVi or http://scipy.org/Cookbook/MayaVi/Installation (and only these two wiki pages): ERROR: The installed docutils version is 0.3.7; version 0.3.10 or later is required. Cheers, -- http://scipy.org/FredericPetit From jstrunk at enthought.com Wed Dec 6 16:22:28 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Wed, 6 Dec 2006 15:22:28 -0600 Subject: [SciPy-user] scipy wiki In-Reply-To: <45773392.2000305@gmail.com> References: <45773392.2000305@gmail.com> Message-ID: <200612061522.28778.jstrunk@enthought.com> I'll take care of that.
Thanks, Jeff On Wednesday 06 December 2006 3:18 pm, fred wrote: > Hi all, > > The new is UI is much better (& faster) ! > > Great job ! > > Anyway, I get an error when I try to access to > http://scipy.org/Cookbook/MayaVi > or http://scipy.org/Cookbook/MayaVi/Installation (and only these two > wiki pages): > > ERROR: The installed docutils version is 0.3.7; version 0.3.10 or later > is required. > > > Cheers, From jstrunk at enthought.com Wed Dec 6 16:25:11 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Wed, 6 Dec 2006 15:25:11 -0600 Subject: [SciPy-user] scipy wiki In-Reply-To: <200612061522.28778.jstrunk@enthought.com> References: <45773392.2000305@gmail.com> <200612061522.28778.jstrunk@enthought.com> Message-ID: <200612061525.11320.jstrunk@enthought.com> The version of docutils in the wiki's sitepackages was older than the sitewide installation. Those pages work now. Thanks, Jeff On Wednesday 06 December 2006 3:22 pm, Jeff Strunk wrote: > I'll take care of that. > > Thanks, > Jeff > > On Wednesday 06 December 2006 3:18 pm, fred wrote: > > Hi all, > > > > The new is UI is much better (& faster) ! > > > > Great job ! > > > > Anyway, I get an error when I try to access to > > http://scipy.org/Cookbook/MayaVi > > or http://scipy.org/Cookbook/MayaVi/Installation (and only these two > > wiki pages): > > > > ERROR: The installed docutils version is 0.3.7; version 0.3.10 or later > > is required. 
> > > > > > Cheers, > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fredmfp at gmail.com Wed Dec 6 16:27:07 2006 From: fredmfp at gmail.com (fred) Date: Wed, 06 Dec 2006 22:27:07 +0100 Subject: [SciPy-user] scipy wiki In-Reply-To: <200612061525.11320.jstrunk@enthought.com> References: <45773392.2000305@gmail.com> <200612061522.28778.jstrunk@enthought.com> <200612061525.11320.jstrunk@enthought.com> Message-ID: <457735AB.9070202@gmail.com> Jeff Strunk a écrit : >The version of docutils in the wiki's sitepackages was older than the sitewide >installation. > >Those pages work now. > > Thanks a lot for this very fast answer :-) -- http://scipy.org/FredericPetit From silesalvarado at hotmail.com Wed Dec 6 18:52:51 2006 From: silesalvarado at hotmail.com (Hugo Siles) Date: Wed, 06 Dec 2006 23:52:51 +0000 Subject: [SciPy-user] Windows Binary Problem In-Reply-To: <4576E08E.2080409@umit.maine.edu> Message-ID: Hi, I have just installed the following versions and they work without any problem: python-2.4.4.msi scipy-0.5.1.win32-py2.4.exe numpy-1.0rc2.win32-py2.4.exe Good luck, Hugo Siles >From: "R. Padraic Springuel" >Reply-To: SciPy Users List >To: Scipy User Support >Subject: Re: [SciPy-user] Windows Binary Problem >Date: Wed, 06 Dec 2006 15:23:58 +0000 > >I'm still having the compatibility issues between the numpy and scipy >Windows binaries, even with the installations that Gary sent me. Does >anyone have any other suggestions as to how to get scipy up and running >on a Windows machine (numpy is working fine)? >-- > >R.
Padraic Springuel >Teaching Assistant >Department of Physics and Astronomy >University of Maine >Bennett 309 >Office Hours: Wednesday 2-3pm >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user From olivetti at itc.it Thu Dec 7 03:57:00 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Thu, 07 Dec 2006 09:57:00 +0100 Subject: [SciPy-user] feed-forward neural network for python In-Reply-To: References: Message-ID: <4577D75C.5060003@itc.it> Marek Wojciechowski wrote: > Comparing to conx.py, ffnet is much, much faster and much, much simpler. > Faster > thanks to scipy optimization routines and simpler because simplicity was > the basic assumption when I created ffnet. > > To prove it: > I trained an XOR network (2-2-1) in conx with: ... > This performed 5000 iterations in 34.6 s (and the tolerance has not been > reached) > > With ffnet it can be done with: ... > The above trains the network in 17.9 ms, and the fit is perfect. > > ffnet is almost 2000 times faster than conx in this example! And the code > is simpler. > > However there are some nice testing methods in conx, which can be used in > ffnet. Hi Marek, Thank you _very_ much for your test. When I need to use ANNs it is on huge datasets, so performance is a really big issue for me. I'll definitely try your ffnet very soon on my data.
And I hope to see recurrent neural networks implemented in future releases ;) Emanuele From nwagner at iam.uni-stuttgart.de Thu Dec 7 05:19:14 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 07 Dec 2006 11:19:14 +0100 Subject: [SciPy-user] A comparison of Matlab's and SciPy's eig function for repeated eigenvalues Message-ID: <4577EAA2.60801@iam.uni-stuttgart.de> Hi all, The eigenvectors associated with repeated eigenvalues are not uniquely defined. The eigenvectors returned by MATLAB's eig function are orthogonal with respect to A and B:

[V,D] = eig(A,B)
V' * A * V = diag(a_i)
V' * B * V = diag(b_i)

And the eigenvalues differ in their ordering. How can I obtain "similar" results with SciPy?

w, vr = linalg.eig(A,B)
dot(vr.T,dot(A,vr)) \ne diag(a_i)
dot(vr.T,dot(B,vr)) \ne diag(b_i)

I have attached small test scripts for both SciPy and MATLAB. So, what is the difference between MATLAB's eig and SciPy's eig? Thanks in advance. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_multeig.py Type: text/x-python Size: 207 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_multeig.m Type: application/m-file Size: 252 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: K1.mtx URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: M1.mtx URL: From jeff at taupro.com Thu Dec 7 10:58:50 2006 From: jeff at taupro.com (Jeff Rush) Date: Thu, 07 Dec 2006 09:58:50 -0600 Subject: [SciPy-user] A Call to Arms - Advocating Python in Science Message-ID: <45783A3A.6080201@taupro.com> Greetings. I've been lurking on this list, to check out what is needed to support Python in the scientific community. I'm the Python Advocacy Coordinator that the PSF has hired to help out.
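For the symmetric-definite pencils in Nils's eig question above, one way to get the MATLAB-like normalization is scipy.linalg.eigh, whose generalized mode returns B-orthonormal eigenvectors, so both projected matrices come out diagonal even for repeated eigenvalues. A sketch on a random test pencil (an invented example, not the attached K1/M1 matrices):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = A + A.T                      # symmetric
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)      # symmetric positive definite

w, V = eigh(A, B)                # generalized problem A v = w B v

# eigh normalizes V so that V.T @ B @ V = I, and therefore
# V.T @ A @ V = diag(w). Eigenvalues come back sorted ascending,
# which also removes the ordering ambiguity Nils mentions.
print(np.allclose(V.T @ B @ V, np.eye(n)))   # True
print(np.allclose(V.T @ A @ V, np.diag(w)))  # True
```

The plain linalg.eig, by contrast, only scales each eigenvector to unit Euclidean norm, which is why the projected matrices are not diagonal in Nils's test.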
My main jumping-off point for advocacy topics is at: http://wiki.python.org/moin/Advocacy One of the items I'm hoping to get input about is the contents of two advocacy kits I'm getting together: - Research Lab Python Advocacy Kit - University Educator's Python Advocacy Kit Details at: http://wiki.python.org/moin/Advocacy#AdvocacyKits I'm drumming up a series of whitepapers, flyers, videos and podcasts to put into those kits. Not being a scientist myself, I need ideas on what is needed. The actual individual items are being solicited at: http://wiki.python.org/moin/AdvocacyWritingTasks http://wiki.python.org/moin/ArticleIdeas. Next, we are seeking reusable/retargetable teaching materials, such as those under a Creative Commons license. We need slide presentations and class handouts. Now I know there are a great many slide presentations on the web about Python. I can google them all, but we're not looking for just any presentation; we're looking for the best of the field. We're collecting links and ideas at: http://wiki.python.org/moin/Advocacy#TeachingMaterials I know some of this already exists in the scientific community and I just need some nudges to come up to speed on what is available for me to make use of. Also permissions for reuse and reformatting, in some cases. Thanks for any assistance you have time to provide, Jeff Rush Python Advocacy Coordinator From wangxj.uc at gmail.com Thu Dec 7 13:15:32 2006 From: wangxj.uc at gmail.com (Xiaojian Wang) Date: Thu, 7 Dec 2006 10:15:32 -0800 Subject: [SciPy-user] where I can find quadratic programming in scipy? Message-ID: Hi, I am currently looking for a module: is there any scipy module for quadratic programming? If not, does anybody know where I can get one for free, in any other language? thanks Xiaojian -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robert.vergnes at yahoo.fr Thu Dec 7 17:03:52 2006 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Thu, 7 Dec 2006 23:03:52 +0100 (CET) Subject: [SciPy-user] help on numerical modelisation of near-sinusoidal function using SciPy for pendulum analysis Message-ID: <20061207220352.87975.qmail@web27414.mail.ukl.yahoo.com> Hello, new to Python. I would like to know if there is already a function in SciPy for modelling a near-sinusoidal function. I.e.: I have results from data acquisition from a pendulum, in a text file containing a matrix: (t(sec), Theta(deg)). And I 'know' that it is of the form

Theta(t) = a+b*sin(co*t+phi)

And I want to give my matrix [t,Theta] to a Python library class and get in return the nearest parameters a, b, co and phi (and if possible the error level). Does such a thing exist in a Python library, SciPy, or NumPy? Currently I am doing that job with Regressi.exe from evariste - it works OK, but I would like to enclose this calculation in a larger set of other analyses, hence I would like to get a function doing that modelling - a regression from the curve to the function we believe right. Thanx Robert -------------- next part -------------- An HTML attachment was scrubbed...
URL: From palazzol at comcast.net Thu Dec 7 17:24:09 2006 From: palazzol at comcast.net (palazzol at comcast.net) Date: Thu, 07 Dec 2006 22:24:09 +0000 Subject: [SciPy-user] help on numerical modelisation of near-sinusoidal function using SciPy for pendulum analysis Message-ID: <120720062224.18165.4578948900025F8D000046F52200734076040196960E040E9F@comcast.net> Hi, I think optimize.minpack.leastsq() could be made to work, but it would be best if you could make good initial guesses for a, b, co, and phi. You could use some ad-hoc methods to come up with guesses for these initial conditions, based on looking at a bit of the waveform, and then use leastsq() to calculate the best values and residual error norm. For example, a is the average over time, b is half the amplitude between peaks, co is based on Theta(t) crossing a, etc. You could even give it analytical Jacobians if you want to take some derivatives of your equation. Another alternative would be to take an fft(), and look at the DC offset, gain, maximum peak freq and phase to get good initial conditions. Good luck, -Frank -------------- Original message ---------------------- From: Robert VERGNES > > Hello, > > new to Python. > I would like to know if in SciPy there is a already a fucntion for the > modelisation of a near sinusoidal function. ie: > I have result from data acquisition from a pendulum and I obtain a text file > with a matrix: > (t(sec), Theta(deg)). > > And I 'know' that is it is in the form of > > Theta(t) = a+b*sin(co*t+phi) > > And I want to give my matrix [t,Theta] to a Python Lib class and get in return > the nearest parameters a,b,co and phi ( and if possible the error level ). > > Does it exist in Python librairy or SciPy or Pynum??? > > Currently I am doing that job with Regressi.exe from evariste - it works ok.
> but I would like to enclose this calculation in a larger set of other analysis hence > I would like to get a function doing that modelisation - regression from the > curve to the function we believe right. > > Thanx > > Robert -------------- next part -------------- An embedded message was scrubbed... From: Robert VERGNES Subject: [SciPy-user] help on numerical modelisation of near-sinusoidal function using SciPy for pendulum analysis Date: Thu, 7 Dec 2006 22:03:56 +0000 Size: 3940 URL: From josh.p.marshall at gmail.com Thu Dec 7 17:38:11 2006 From: josh.p.marshall at gmail.com (Josh Marshall) Date: Fri, 8 Dec 2006 09:38:11 +1100 Subject: [SciPy-user] Building SciPy on OS X: Universal binary issues In-Reply-To: References: Message-ID: <27AEC1A1-F5C3-4CEE-BC1B-E1EC1B909ED8@gmail.com> I've been running for a while with older versions of numpy and scipy, but I now need to provide a Universal Binary version of my app for distribution. I'm running a G4 with 10.4.8. I have installed the universal versions of Python, Numpy, SciPy, matplotlib, etc from http://pythonmac.org/packages/py24-fat/ I get an error when importing scipy, as follows: ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/_cephes.so: Symbol not found: _printf$LDBLStub Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/_cephes.so Expected in: dynamic lookup My understanding is that this is a common error when a package is built with GCC 4, but this shouldn't crop up in a downloadable package.
Where are instructions for building SciPy as a universal binary? Obviously it is being done, but I would like to know how to do it myself. The wiki would be a good spot for this to go. As a second issue, running the following command: for FILE in `find . | grep "\.so"` ; do file $FILE ; done (which gives the architecture for all the dynamic libraries) shows that only a few of the libraries are Universal. The output is attached below. ./cluster/_vq.so: Mach-O fat file with 2 architectures ./cluster/_vq.so (for architecture ppc): Mach-O bundle ppc ./cluster/_vq.so (for architecture i386): Mach-O bundle i386 ./fftpack/_fftpack.so: Mach-O bundle ppc ./fftpack/convolve.so: Mach-O bundle ppc ./integrate/_odepack.so: Mach-O bundle ppc ./integrate/_quadpack.so: Mach-O bundle ppc ./integrate/vode.so: Mach-O bundle ppc ./interpolate/_fitpack.so: Mach-O bundle ppc ./interpolate/dfitpack.so: Mach-O bundle ppc ./io/numpyio.so: Mach-O fat file with 2 architectures ./io/numpyio.so (for architecture ppc): Mach-O bundle ppc ./io/numpyio.so (for architecture i386): Mach-O bundle i386 ./lib/blas/cblas.so: Mach-O fat file with 2 architectures ./lib/blas/cblas.so (for architecture ppc): Mach-O bundle ppc ./lib/blas/cblas.so (for architecture i386): Mach-O bundle i386 ./lib/blas/fblas.so: Mach-O bundle ppc ./lib/lapack/atlas_version.so: Mach-O fat file with 2 architectures ./lib/lapack/atlas_version.so (for architecture ppc): Mach-O bundle ppc ./lib/lapack/atlas_version.so (for architecture i386): Mach-O bundle i386 ./lib/lapack/calc_lwork.so: Mach-O bundle ppc ./lib/lapack/clapack.so: Mach-O fat file with 2 architectures ./lib/lapack/clapack.so (for architecture ppc): Mach-O bundle ppc ./lib/lapack/clapack.so (for architecture i386): Mach-O bundle i386 ./lib/lapack/flapack.so: Mach-O fat file with 2 architectures ./lib/lapack/flapack.so (for architecture i386): Mach-O bundle i386 ./lib/lapack/flapack.so (for architecture ppc): Mach-O bundle ppc ./linalg/_flinalg.so: Mach-O bundle ppc 
./linalg/_iterative.so: Mach-O bundle ppc ./linalg/atlas_version.so: Mach-O fat file with 2 architectures ./linalg/atlas_version.so (for architecture ppc): Mach-O bundle ppc ./linalg/atlas_version.so (for architecture i386): Mach-O bundle i386 ./linalg/calc_lwork.so: Mach-O bundle ppc ./linalg/cblas.so: Mach-O fat file with 2 architectures ./linalg/cblas.so (for architecture ppc): Mach-O bundle ppc ./linalg/cblas.so (for architecture i386): Mach-O bundle i386 ./linalg/clapack.so: Mach-O fat file with 2 architectures ./linalg/clapack.so (for architecture ppc): Mach-O bundle ppc ./linalg/clapack.so (for architecture i386): Mach-O bundle i386 ./linalg/fblas.so: Mach-O bundle ppc ./linalg/flapack.so: Mach-O bundle ppc ./linsolve/_csuperlu.so: Mach-O fat file with 2 architectures ./linsolve/_csuperlu.so (for architecture ppc): Mach-O bundle ppc ./linsolve/_csuperlu.so (for architecture i386): Mach-O bundle i386 ./linsolve/_dsuperlu.so: Mach-O fat file with 2 architectures ./linsolve/_dsuperlu.so (for architecture ppc): Mach-O bundle ppc ./linsolve/_dsuperlu.so (for architecture i386): Mach-O bundle i386 ./linsolve/_ssuperlu.so: Mach-O fat file with 2 architectures ./linsolve/_ssuperlu.so (for architecture ppc): Mach-O bundle ppc ./linsolve/_ssuperlu.so (for architecture i386): Mach-O bundle i386 ./linsolve/_zsuperlu.so: Mach-O fat file with 2 architectures ./linsolve/_zsuperlu.so (for architecture ppc): Mach-O bundle ppc ./linsolve/_zsuperlu.so (for architecture i386): Mach-O bundle i386 ./ndimage/_nd_image.so: Mach-O fat file with 2 architectures ./ndimage/_nd_image.so (for architecture ppc): Mach-O bundle ppc ./ndimage/_nd_image.so (for architecture i386): Mach-O bundle i386 ./optimize/_cobyla.so: Mach-O bundle ppc ./optimize/_lbfgsb.so: Mach-O bundle ppc ./optimize/_minpack.so: Mach-O bundle ppc ./optimize/_zeros.so: Mach-O fat file with 2 architectures ./optimize/_zeros.so (for architecture ppc): Mach-O bundle ppc ./optimize/_zeros.so (for architecture i386): Mach-O 
bundle i386 ./optimize/minpack2.so: Mach-O bundle ppc ./optimize/moduleTNC.so: Mach-O fat file with 2 architectures ./optimize/moduleTNC.so (for architecture ppc): Mach-O bundle ppc ./optimize/moduleTNC.so (for architecture i386): Mach-O bundle i386 ./signal/sigtools.so: Mach-O fat file with 2 architectures ./signal/sigtools.so (for architecture ppc): Mach-O bundle ppc ./signal/sigtools.so (for architecture i386): Mach-O bundle i386 ./signal/spline.so: Mach-O fat file with 2 architectures ./signal/spline.so (for architecture ppc): Mach-O bundle ppc ./signal/spline.so (for architecture i386): Mach-O bundle i386 ./sparse/sparsetools.so: Mach-O bundle ppc ./special/_cephes.so: Mach-O bundle ppc ./special/specfun.so: Mach-O bundle ppc ./stats/futil.so: Mach-O bundle ppc ./stats/mvn.so: Mach-O bundle ppc ./stats/statlib.so: Mach-O bundle ppc ./weave/examples/fibonacci_ext.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), not stripped ./weave/examples/ramp_ext.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), not stripped From oliphant at ee.byu.edu Thu Dec 7 17:40:44 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 07 Dec 2006 15:40:44 -0700 Subject: [SciPy-user] help on numerical modelisation of near-sinusoidal function using SciPy for pendulum analysis In-Reply-To: <120720062224.18165.4578948900025F8D000046F52200734076040196960E040E9F@comcast.net> References: <120720062224.18165.4578948900025F8D000046F52200734076040196960E040E9F@comcast.net> Message-ID: <4578986C.2010908@ee.byu.edu> palazzol at comcast.net wrote: >Hi, > >I think optimize.minpack.leastsq() could be made to work, but it would be best if you could make good initial guesses for a,b,co, and phi. You could use some ad-hoc methods to come up with guesses for these initial conditions, based on looking at a bit of the waveform, and then use leastsq() to calculate the best values and residual error norm. 
For example, a is the average over time, b is half the amplitude between peaks, co is based on Theta(t) crossing a, etc. > > optimize.leastsq (don't use the minpack intermediary because that could change) is the function you want for general curve-fitting. You can also use another optimization function with a slightly different function. This is also similar to the frequency estimation problem, so the fft is a good fit to get starting estimates as Frank suggests. A simple algorithm could be constructed in a few lines using the fft and optimize.leastsq. -Travis From robert.vergnes at yahoo.fr Fri Dec 8 02:34:54 2006 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Fri, 8 Dec 2006 08:34:54 +0100 (CET) Subject: [SciPy-user] RE : Re: help on numerical modelisation of near-sinusoidal function using SciPy for pendulum analysis In-Reply-To: <4578986C.2010908@ee.byu.edu> Message-ID: <292591.24010.qm@web27413.mail.ukl.yahoo.com> Thanx for your help. Robert Travis Oliphant a écrit : palazzol at comcast.net wrote: >Hi, > >I think optimize.minpack.leastsq() could be made to work, but it would be best if you could make good initial guesses for a,b,co, and phi. You could use some ad-hoc methods to come up with guesses for these initial conditions, based on looking at a bit of the waveform, and then use leastsq() to calculate the best values and residual error norm. For example, a is the average over time, b is half the amplitude between peaks, co is based on Theta(t) crossing a, etc. > > optimize.leastsq (don't use the minpack intermediary because that could change) is the function you want for general curve-fitting. You can also use another optimization function with a slightly different function. This is also similar to the frequency estimation problem, so the fft is a good fit to get starting estimates as Frank suggests. A simple algorithm could be constructed in a few lines using the fft and optimize.leastsq.
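The few-line algorithm Travis and Frank outline (FFT for the starting values, optimize.leastsq for the refinement) might be sketched as follows. The data and the "true" parameters here are synthetic, invented purely for the demonstration, and the phase convention assumes the model Theta(t) = a + b*sin(co*t + phi) from the original question:

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic pendulum record: Theta(t) = a + b*sin(co*t + phi) + noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
a_true, b_true, co_true, phi_true = 2.0, 1.5, np.pi, 0.7
theta = (a_true + b_true * np.sin(co_true * t + phi_true)
         + 0.02 * rng.standard_normal(t.size))

# Initial guesses from the FFT, as suggested: the DC bin gives a,
# the dominant non-DC bin gives b, co and phi.
dt = t[1] - t[0]
F = np.fft.rfft(theta)
freqs = np.fft.rfftfreq(t.size, dt)
a0 = F[0].real / t.size
k = 1 + np.argmax(np.abs(F[1:]))            # strongest non-DC bin
b0 = 2.0 * np.abs(F[k]) / t.size
co0 = 2.0 * np.pi * freqs[k]
phi0 = np.angle(F[k]) + np.pi / 2.0         # sin(x) = cos(x - pi/2)

def residuals(p, t, y):
    a, b, co, phi = p
    return y - (a + b * np.sin(co * t + phi))

p_opt, ier = leastsq(residuals, [a0, b0, co0, phi0], args=(t, theta))
print(p_opt)  # fitted [a, b, co, phi]
```

With starting values this close, leastsq converges quickly; with poor starting values (frequency off by more than roughly a Fourier bin) it can settle in a wrong local minimum, which is exactly why the FFT preprocessing step matters.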
-Travis _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Fri Dec 8 02:48:58 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 8 Dec 2006 08:48:58 +0100 Subject: [SciPy-user] help on numerical modelisation of near-sinusoidal function using SciPy for pendulum analysis In-Reply-To: <20061207220352.87975.qmail@web27414.mail.ukl.yahoo.com> References: <20061207220352.87975.qmail@web27414.mail.ukl.yahoo.com> Message-ID: <20061208074858.GA7766@clipper.ens.fr> If I understand you right, what you want is what I would call a "fitting routine": an optimisation routine that optimises parameters in a test function to reduce its distance to data points. Have a look at http://scipy.org/Cookbook/FittingData , that might help you. Gaël From jonas.kahn at math.u-psud.fr Fri Dec 8 04:03:00 2006 From: jonas.kahn at math.u-psud.fr (Jonas Kahn) Date: Fri, 8 Dec 2006 10:03:00 +0100 Subject: [SciPy-user] Definition of random variable In-Reply-To: <20061206183540.GA1769@clipper.ens.fr> References: <20061206183540.GA1769@clipper.ens.fr> Message-ID: <20061208090300.GA16353@clipper.ens.fr> OK, I have read the source of stats.distributions and finally understood how to define new random variables. I have also written a convenience function for defining random variables from the pdf only, if anyone is interested (I know it is in French, but I wrote it for me). One remark, however, is that we have to call explicitly the cdf before being allowed to use the rvs..! That is why there is this strange line "rv_generee.cdf(1)" in the code below.
Jonas

def genere_une_rv_a_partir_de_pdf(pdf, min_domaine=None, max_domaine=None):
    class rv_gen(scipy.stats.distributions.rv_continuous):
        def _pdf(self, x):
            return pdf(x)
    rv_generee = rv_gen(name="Pour l'instant je remplis",
                        a=min_domaine, b=max_domaine)
    # Pour avoir accès aux variables aléatoires, il faut
    # d'abord générer la cdf...
    rv_generee.cdf(1)
    return rv_generee

On Wed, Dec 06, 2006 at 07:35:40PM +0100, Jonas Kahn wrote: > Hi > > I try to create a new instance of the rv_continuous class, and the __init__ > is not very explicit... > > Essentially, I would like to define it from the probability distribution > only, and I guess (totally without any ground for that) that it is > implemented as such. > Especially I would like to use the associated inverse survival function, > without writing it by myself. Of course I could use the routines for solving > equations to get it... > Alternatively, is there any way to get the inverse function of a monotonic > function? > > Thanks for the help > Jonas > > PS: I think there is something wrong with the "scale" parameter in the > package, or I have not understood how to use it.
There is no change to the > pdf when I try to give it another value as 1: > > In [124]: stats.binom.rvs(1,1) > Out[124]: array([1]) > > # No problem with loc > In [126]: stats.binom.rvs(1,1, loc = 10) > Out[126]: array([11]) > > # But nothing with scale, with or without loc > In [127]: stats.binom.rvs(1,1, loc = 10, scale =100) > Out[127]: array([11]) > > In [128]: stats.binom.rvs(1,1, scale =100) > Out[128]: array([1]) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.vergnes at yahoo.fr Fri Dec 8 04:06:06 2006 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Fri, 8 Dec 2006 10:06:06 +0100 (CET) Subject: [SciPy-user] RE : Re: help on numerical modelisation of near-sinusoidal function using SciPy for pendulum analysis In-Reply-To: <20061208074858.GA7766@clipper.ens.fr> Message-ID: <20061208090606.23093.qmail@web27415.mail.ukl.yahoo.com> [Translated from French:] Yes, exactly. That is what regressi does, very well, but regressi is closed software... unfortunately. Still, it is a VERY good program - a shame it is not in English. Thanks for the info. Robert Gael Varoquaux a écrit : If I understand you right, what you want is what I would call a "fitting routine": an optimisation routine that optimises parameters in a test function to reduce its distance to data points. Have a look at http://scipy.org/Cookbook/FittingData , that might help you. Gaël _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From nwagner at iam.uni-stuttgart.de Fri Dec 8 04:49:05 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 08 Dec 2006 10:49:05 +0100
Subject: [SciPy-user] Advantages of Intel's MKL
Message-ID: <45793511.1020700@iam.uni-stuttgart.de>

Hi all,

What are the advantages of Intel's MKL library in connection with
numpy/scipy ?

Nils

From lou_boog2000 at yahoo.com Fri Dec 8 10:33:47 2006
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Fri, 8 Dec 2006 07:33:47 -0800 (PST)
Subject: [SciPy-user] C-extensions for NumPy code available.
Message-ID: <364838.73688.qm@web34407.mail.mud.yahoo.com>

For those of you who do numerical calculations, need to speed up your
code, and use NumPy arrays: I've written several C extensions that
handle NumPy arrays. They are simple, but they seem to work well. They
will show you how to pass Python variables and NumPy arrays to your C
code. Once you learn how to do it, it's pretty straightforward. I
suspect they will suffice for most numerical code.

I've written it up as a draft and have made the code and document file
available. If you want a copy, just email me or post a followup here to
this message. I will send it to you as a tar.gz file (only 192 KB).
Remember it is only a draft (no guarantees or warranties) and you
should test the code for your own uses. I would really love people who
are much more knowledgeable about these things than I to look it over
and set me straight on any mistakes (like not INCREF'ing something).

In addition, is there a place I could put this up on the web for
others? I don't have any way to do that at my lab. I would like to
share with the Python community since I've been helped so much on these
mailing lists.

By the way, I did this after a long time of searching for ways to speed
up Python and connect with my C and C++ code. There are a lot of
solutions out there, but I decided that they all provided so much more
than I needed and required so much more learning time that it wasn't
worth it.
I am not knocking them. I found that for my numerical needs I really
only need to pass a limited set of things (integers, floats, strings,
and NumPy arrays). If that's your category, this code might help you.

-- Lou Pecora
Naval Research Lab
Washington, DC

____________________________________________________________________________________
Need a quick answer? Get one in minutes from people who know. Ask your
question on www.Answers.yahoo.com

From gael.varoquaux at normalesup.org Fri Dec 8 10:45:57 2006
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 8 Dec 2006 16:45:57 +0100
Subject: [SciPy-user] C-extensions for NumPy code available.
In-Reply-To: <364838.73688.qm@web34407.mail.mud.yahoo.com>
References: <364838.73688.qm@web34407.mail.mud.yahoo.com>
Message-ID: <20061208154555.GE18458@clipper.ens.fr>

On Fri, Dec 08, 2006 at 07:33:47AM -0800, Lou Pecora wrote:
> In addition, is there a place I could put this up on
> the web for others? I don't have any way to do that
> at my lab. I would like to share with the Python
> community since I've been helped so much on these
> mailing lists.

I think the scipy wiki ( http://www.scipy.org ) is the best way. You
can create an account and start a sub-page of
http://scipy.org/Cookbook/C_Extensions and modify this page.

Gaël

From robert.kern at gmail.com Fri Dec 8 11:11:06 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 08 Dec 2006 10:11:06 -0600
Subject: [SciPy-user] C-extensions for NumPy code available.
In-Reply-To: <364838.73688.qm@web34407.mail.mud.yahoo.com>
References: <364838.73688.qm@web34407.mail.mud.yahoo.com>
Message-ID: <45798E9A.9050600@gmail.com>

Lou Pecora wrote:
> In addition, is there a place I could put this up on
> the web for others? I don't have any way to do that
> at my lab. I would like to share with the Python
> community since I've been helped so much on these
> mailing lists.
Temporarily, you can write up a wiki page about it on www.scipy.org and attach the tarball to the page. Once we can look at the contents a bit more, we might have better ideas where they can go. Possibly, they can go into the numpy distribution as examples. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lou_boog2000 at yahoo.com Fri Dec 8 11:44:00 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Fri, 8 Dec 2006 08:44:00 -0800 (PST) Subject: [SciPy-user] C-extensions for NumPy code available. In-Reply-To: <45798E9A.9050600@gmail.com> Message-ID: <195291.33451.qm@web34415.mail.mud.yahoo.com> This is a good idea, but I'm afraid I have no idea how to even start this process. I've never done anything on a Wiki. Is there some tutorial for newbies like me? Maybe I missed a link to one there. Thanks. -- Lou Pecora --- Robert Kern wrote: > Temporarily, you can write up a wiki page about it > on www.scipy.org and attach > the tarball to the page. Once we can look at the > contents a bit more, we might > have better ideas where they can go. Possibly, they > can go into the numpy > distribution as examples. > > -- > Robert Kern ____________________________________________________________________________________ Any questions? Get answers on any topic at www.Answers.yahoo.com. Try it now. From robert.kern at gmail.com Fri Dec 8 11:54:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 08 Dec 2006 10:54:23 -0600 Subject: [SciPy-user] C-extensions for NumPy code available. In-Reply-To: <195291.33451.qm@web34415.mail.mud.yahoo.com> References: <195291.33451.qm@web34415.mail.mud.yahoo.com> Message-ID: <457998BF.2010002@gmail.com> Lou Pecora wrote: > This is a good idea, but I'm afraid I have no idea how > to even start this process. I've never done anything > on a Wiki. 
Is there some tutorial for newbies like > me? Maybe I missed a link to one there. http://www.scipy.org/HelpContents http://www.scipy.org/HelpForBeginners -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lou_boog2000 at yahoo.com Fri Dec 8 14:58:42 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Fri, 8 Dec 2006 11:58:42 -0800 (PST) Subject: [SciPy-user] Adding a page to the SciPy Wiki?? Message-ID: <20061208195842.24152.qmail@web34411.mail.mud.yahoo.com> I would like to add a page on my C extensions to the Wiki and a link to the tar file of the code, but I am having problems getting started. Several people have pointed me to various SciPy.org pages. I have added a new link to a non-existent page by adding a /MyNewPage on the URL and it says it automatically will add a page, but when I tell it what to add (a blank page) it just says I need to log in to do that. When I look for log in on the SciPy site I don't see it anywhere. I have no idea what the templates mean and whether I should choose one of them. All the pages I go to to try to add a new page are marked Immutable Page in the left column. Clicking on the balloon (Edit) tells me I can't do that. Sorry, gang, this newbie is still lost. -- Lou Pecora My views are my own. ____________________________________________________________________________________ Any questions? Get answers on any topic at www.Answers.yahoo.com. Try it now. From robert.kern at gmail.com Fri Dec 8 15:06:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 08 Dec 2006 14:06:10 -0600 Subject: [SciPy-user] Adding a page to the SciPy Wiki?? 
In-Reply-To: <20061208195842.24152.qmail@web34411.mail.mud.yahoo.com> References: <20061208195842.24152.qmail@web34411.mail.mud.yahoo.com> Message-ID: <4579C5B2.9050507@gmail.com> Lou Pecora wrote: > I would like to add a page on my C extensions to the > Wiki and a link to the tar file of the code, but I am > having problems getting started. Several people have > pointed me to various SciPy.org pages. I have added a > new link to a non-existent page by adding a /MyNewPage > on the URL and it says it automatically will add a > page, but when I tell it what to add (a blank page) it > just says I need to log in to do that. When I look > for log in on the SciPy site I don't see it anywhere. > I have no idea what the templates mean and whether I > should choose one of them. All the pages I go to to > try to add a new page are marked Immutable Page in the > left column. Clicking on the balloon (Edit) tells me > I can't do that. On the PythonMac-SIG, Larry Meyn gave you a pointer to how you can register: """ First you need an account. For that go to http://www.scipy.org/UserPreferences and enter your information. To edit pages click on the little balloon icon in the upper right. Help on editing can be found at http://www.scipy.org/HelpOnEditing """ Once you have created an account for yourself, the login link is in the upper right-hand corner of every page. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From davidlinke at tiscali.de Fri Dec 8 15:16:27 2006 From: davidlinke at tiscali.de (David Linke) Date: Fri, 08 Dec 2006 21:16:27 +0100 Subject: [SciPy-user] Adding a page to the SciPy Wiki?? 
In-Reply-To: <20061208195842.24152.qmail@web34411.mail.mud.yahoo.com> References: <20061208195842.24152.qmail@web34411.mail.mud.yahoo.com> Message-ID: <4579C81B.9000300@tiscali.de> Hi Lou, The easiest way to create a new page is going to http://new.scipy.org/Wiki/FindPage There you enter the new name in the last field and Click "goto page" - if that page does not exist you come to a screen where you can select to create the content "Create new empty page". You can also use one of the suggested templates. Just start and ask (on the wiki page) how to do certain things - there are several people to help. If you want to upload your file with the examples add: [attachment:examples.tgz] to the page and save the page. After saving you will get a link that you can use to upload. David ---- Lou Pecora wrote: > I would like to add a page on my C extensions to the > Wiki and a link to the tar file of the code, but I am > having problems getting started. Several people have > pointed me to various SciPy.org pages. I have added a > new link to a non-existent page by adding a /MyNewPage > on the URL and it says it automatically will add a > page, but when I tell it what to add (a blank page) it > just says I need to log in to do that. When I look > for log in on the SciPy site I don't see it anywhere. > I have no idea what the templates mean and whether I > should choose one of them. All the pages I go to to > try to add a new page are marked Immutable Page in the > left column. Clicking on the balloon (Edit) tells me > I can't do that. > > Sorry, gang, this newbie is still lost. > > > > -- Lou Pecora > My views are my own. > > > > ____________________________________________________________________________________ > Any questions? Get answers on any topic at www.Answers.yahoo.com. Try it now. 
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From ryanlists at gmail.com Fri Dec 8 15:50:14 2006
From: ryanlists at gmail.com (Ryan Krauss)
Date: Fri, 8 Dec 2006 14:50:14 -0600
Subject: [SciPy-user] Windows Binary Problem
In-Reply-To: <4576E08E.2080409@umit.maine.edu>
References: <4576E08E.2080409@umit.maine.edu>
Message-ID:

If you don't need sparse matrices, I can post my binaries. I have
built from source from SVN sometime last week and I think everything
except sparse matrices is working.

Ryan

On 12/6/06, R. Padraic Springuel wrote:
> I'm still having the compatibility issues between the numpy and scipy
> Windows binaries, even with the installations that Gary sent me. Does
> anyone have any other suggestions as to how to get scipy up and running
> on a Windows machine (numpy is working fine)?
> --
>
> R. Padraic Springuel
> Teaching Assistant
> Department of Physics and Astronomy
> University of Maine
> Bennett 309
> Office Hours: Wednesday 2-3pm
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From lou_boog2000 at yahoo.com Fri Dec 8 15:55:48 2006
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Fri, 8 Dec 2006 12:55:48 -0800 (PST)
Subject: [SciPy-user] Adding a page to the SciPy Wiki??
In-Reply-To: <4579C5B2.9050507@gmail.com>
Message-ID: <20061208205548.68938.qmail@web34412.mail.mud.yahoo.com>

I got logged in... somehow. Not sure what I did right. I got a plain
vanilla title and paragraph up there as a subpage to Cookbook/C
Extensions. It's NumPyArrays. I managed to upload two files containing
my contribution source code and documents. One is a .tar file, the
other a tar.gz file. I have no idea how to make my page point to them
so others can get them. I've been looking in the Help area, but I see
nothing. Thanks for any help.
I am still stumped. --- Robert Kern wrote: > Lou Pecora wrote: > > I would like to add a page on my C extensions to > the > > Wiki and a link to the tar file of the code, but I > am > > having problems getting started. Several people > have > > pointed me to various SciPy.org pages. I have > added a > > new link to a non-existent page by adding a > /MyNewPage > > on the URL and it says it automatically will add a > > page, but when I tell it what to add (a blank > page) it > > just says I need to log in to do that. When I > look > > for log in on the SciPy site I don't see it > anywhere. > > I have no idea what the templates mean and whether > I > > should choose one of them. All the pages I go to > to > > try to add a new page are marked Immutable Page in > the > > left column. Clicking on the balloon (Edit) tells > me > > I can't do that. > > On the PythonMac-SIG, Larry Meyn gave you a pointer > to how you can register: > > """ > First you need an account. For that go to > http://www.scipy.org/UserPreferences > and enter your information. > > To edit pages click on the little balloon icon in > the upper right. Help on > editing can be found at > http://www.scipy.org/HelpOnEditing > """ > > Once you have created an account for yourself, the > login link is in the upper > right-hand corner of every page. > > -- > Robert Kern > > "I have come to believe that the whole world is an > enigma, a harmless enigma > that is made terrible by our own mad attempt to > interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________ Do you Yahoo!? Everyone is raving about the all-new Yahoo! Mail beta. 
http://new.mail.yahoo.com From lou_boog2000 at yahoo.com Fri Dec 8 15:58:14 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Fri, 8 Dec 2006 12:58:14 -0800 (PST) Subject: [SciPy-user] Adding a page to the SciPy Wiki?? In-Reply-To: <4579C81B.9000300@tiscali.de> Message-ID: <245488.42982.qm@web34411.mail.mud.yahoo.com> --- David Linke wrote: > Hi Lou, > > The easiest way to create a new page is going to > > http://new.scipy.org/Wiki/FindPage > > There you enter the new name in the last field and > Click "goto page" - > if that page does not exist you come to a screen > where you can select to > create the content "Create new empty page". You can > also use one of the > suggested templates. > > Just start and ask (on the wiki page) how to do > certain things - there > are several people to help. If you want to upload > your file with the > examples add: > > [attachment:examples.tgz] > > to the page and save the page. After saving you will > get a link that you > can use to upload. Ah, that's the trick. Let me try it. I have another message where I say I can't do this. Maybe you have given me the clue. -- Lou Pecora My views are my own. ____________________________________________________________________________________ Cheap talk? Check out Yahoo! Messenger's low PC-to-Phone call rates. http://voice.yahoo.com From lou_boog2000 at yahoo.com Fri Dec 8 16:03:05 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Fri, 8 Dec 2006 13:03:05 -0800 (PST) Subject: [SciPy-user] Adding a page to the SciPy Wiki?? In-Reply-To: <4579C81B.9000300@tiscali.de> Message-ID: <635048.42983.qm@web34411.mail.mud.yahoo.com> That worked for the Attachment. Thanks. Some time I would like to add some formatting. But that can wait. I also would like to delete one attachment (the original tar ball that is NOT zipped). If the page I created should be moved I will need help with that one, too. 
By the way, several people told me to use a template, but I have no idea what the heck any of them mean or do. Right now, after my first interaction with editing a Wiki: It was easier to write a C extension. :-) -- Lou --- David Linke wrote: > Hi Lou, > > The easiest way to create a new page is going to > > http://new.scipy.org/Wiki/FindPage > > There you enter the new name in the last field and > Click "goto page" - > if that page does not exist you come to a screen > where you can select to > create the content "Create new empty page". You can > also use one of the > suggested templates. > > Just start and ask (on the wiki page) how to do > certain things - there > are several people to help. If you want to upload > your file with the > examples add: > > [attachment:examples.tgz] > > to the page and save the page. After saving you will > get a link that you > can use to upload. > > David ____________________________________________________________________________________ Any questions? Get answers on any topic at www.Answers.yahoo.com. Try it now. From robert.kern at gmail.com Fri Dec 8 16:16:22 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 08 Dec 2006 15:16:22 -0600 Subject: [SciPy-user] Adding a page to the SciPy Wiki?? In-Reply-To: <20061208205548.68938.qmail@web34412.mail.mud.yahoo.com> References: <20061208205548.68938.qmail@web34412.mail.mud.yahoo.com> Message-ID: <4579D626.4090601@gmail.com> Lou Pecora wrote: > I got logged in... somehow. Not sure what I did > right. I got a plain vanilla title and paragraph up > there as a subpage to Cookbook/C Extensions. It's > NumPyArrays. I managed to upload two files containing > my contribution source code and documents. One is a > .tar file the other a tar.gz file. I have no idea how > to make my page point to them so others can get them. > I've been looking in the Help area, but I see nothing. Cext.tar.gz is at the bottom of that page below your signature. 
-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From fullung at gmail.com Fri Dec 8 19:04:26 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Sat, 9 Dec 2006 02:04:26 +0200
Subject: [SciPy-user] Advantages of Intel's MKL
In-Reply-To: <45793511.1020700@iam.uni-stuttgart.de>
References: <45793511.1020700@iam.uni-stuttgart.de>
Message-ID: <20061209000426.GA14718@dogbert.sdsl.sun.ac.za>

Hello all

On Fri, 08 Dec 2006, Nils Wagner wrote:

> Hi all,
>
> What are the advantages of Intel's MKL library in connection with
> numpy/scipy ?

I'll take a stab at this question. Feel free to disagree. Mostly I'd
compare it with ATLAS. Both MKL and ATLAS are orders of magnitude
faster than NumPy's "LAPACK lite", so you really want to use one of
these libraries.

Pros of MKL (mostly vs ATLAS):

Don't have to build the library.

Runtime detection of CPU. Useful if you have a single Python+NumPy
installation shared over NFS to a bunch of heterogeneous machines.
This is quite likely in cluster-type environments where you would use
something like IPython1 -- I have a configuration exactly like this to
deal with 15 machines in our lab.

I think MKL supports multiple cores through OpenMP by just setting
OMP_NUM_THREADS. ATLAS also has support for pthreads though.

Probably about as fast as ATLAS.

Cons of MKL:

Costs money if the non-commercial Linux license doesn't apply to
you and/or doesn't suit your needs (e.g. you need to run on Windows).

Those are the points that jump out for me. I'm sure there are others.

Comments?
Cheers, Albert From oliphant.travis at ieee.org Fri Dec 8 21:40:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 08 Dec 2006 19:40:53 -0700 Subject: [SciPy-user] Windows Binary Problem In-Reply-To: References: <4576E08E.2080409@umit.maine.edu> Message-ID: <457A2235.5010507@ieee.org> Ryan Krauss wrote: > If you don't need sparse matrices, I can post my binaries. I have > built form source from SVN sometime last week and I think everything > except sparse matrices is working. > > Ryan > > On 12/6/06, R. Padraic Springuel wrote: > >> I'm still having the compatibility issues between the numpy and scipy >> Windows binaries, even with the installations that Gary sent me. Does >> anyone have any other suggestions as to how to get scipy up and running >> on a Windows machine (numpy is working fine)? >> -- >> I just made a binary release of 0.5.2 (get it from the download section of scipy --- see the news tab). >> R. Padraic Springuel >> Teaching Assistant >> Department of Physics and Astronomy >> University of Maine >> Bennett 309 >> Office Hours: Wednesday 2-3pm >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From nwagner at iam.uni-stuttgart.de Sat Dec 9 02:04:28 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 09 Dec 2006 08:04:28 +0100 Subject: [SciPy-user] Advantages of Intel's MKL In-Reply-To: <20061209000426.GA14718@dogbert.sdsl.sun.ac.za> References: <45793511.1020700@iam.uni-stuttgart.de> <20061209000426.GA14718@dogbert.sdsl.sun.ac.za> Message-ID: On Sat, 9 Dec 2006 02:04:26 +0200 Albert Strasheim wrote: > Hello all > > On Fri, 08 Dec 2006, Nils Wagner wrote: > >> Hi all, >> >> What are the advantages of Intel's MKL library in >>connection with 
>> numpy/scipy ? > > I'll take a stab at this question. Feel free to >disagree. Mostly I'd > compare it with ATLAS. Both MKL and ATLAS are orders of >magnitude faster > than NumPy's "LAPACK lite", so you really want to use >one of these > libraries. > > Pro's of MKL (mostly vs ATLAS): > > Don't have to build the library. > > Runtime detection of CPU. Useful if you have a single >Python+NumPy > installation shared over NFS to a bunch of heterogenous >machines. > This is quite likely in cluster-type environments where >you would use > something like IPython1 -- I have a configuration >exactly like this to > deal with 15 machines in our lab. > > I think MKL supports multiple cores through OpenMP by >just setting > OMP_NUM_THREADS. ATLAS also has support for pthreads >though. > > Probably about as fast as ATLAS. > > Con's of MKL: > > Costs money if the non-commercial Linux license doesn't >apply to > you and/or doesn't suit your needs (e.g. you need to run >on Windows). > > Those are the points that jump out for me. I'm sure >there are others. > > Comments? > > Cheers, > > Albert > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Thank you for your notes. If you have installed both, ATLAS and MKL, which library will be used by default ? Nils From robert.kern at gmail.com Sat Dec 9 02:12:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 09 Dec 2006 01:12:23 -0600 Subject: [SciPy-user] Advantages of Intel's MKL In-Reply-To: References: <45793511.1020700@iam.uni-stuttgart.de> <20061209000426.GA14718@dogbert.sdsl.sun.ac.za> Message-ID: <457A61D7.7010800@gmail.com> Nils Wagner wrote: > Thank you for your notes. If you have installed both, > ATLAS and MKL, which library will be used by default ? It depends on how you set up your site.cfg . 
If you follow the recommendation to only use the [blas_opt] and [lapack_opt] sections explicitly, then you should get whatever you set in those. If you use only one of [mkl] and [atlas] sections, then you should get whichever one you set. If, for some strange reason, you put both [mkl] and [atlas] sections in the same site.cfg (not recommended!), then the MKL will get picked up first and ATLAS will be ignored. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Sat Dec 9 03:15:09 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 09 Dec 2006 09:15:09 +0100 Subject: [SciPy-user] Advantages of Intel's MKL In-Reply-To: <457A61D7.7010800@gmail.com> References: <45793511.1020700@iam.uni-stuttgart.de> <20061209000426.GA14718@dogbert.sdsl.sun.ac.za> <457A61D7.7010800@gmail.com> Message-ID: On Sat, 09 Dec 2006 01:12:23 -0600 Robert Kern wrote: > Nils Wagner wrote: >> Thank you for your notes. If you have installed both, >> ATLAS and MKL, which library will be used by default ? > > It depends on how you set up your site.cfg . > > If you follow the recommendation to only use the >[blas_opt] and [lapack_opt] > sections explicitly, then you should get whatever you >set in those. > > If you use only one of [mkl] and [atlas] sections, then >you should get whichever > one you set. > > If, for some strange reason, you put both [mkl] and >[atlas] sections in the same > site.cfg (not recommended!), then the MKL will get >picked up first and ATLAS > will be ignored. > > -- > Robert Kern > > "I have come to believe that the whole world is an >enigma, a harmless enigma > that is made terrible by our own mad attempt to >interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user A final question... Do I need Intel's C compiler to build numpy/scipy with MKL support ? Nils From robert.kern at gmail.com Sat Dec 9 03:22:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 09 Dec 2006 02:22:21 -0600 Subject: [SciPy-user] Advantages of Intel's MKL In-Reply-To: References: <45793511.1020700@iam.uni-stuttgart.de> <20061209000426.GA14718@dogbert.sdsl.sun.ac.za> <457A61D7.7010800@gmail.com> Message-ID: <457A723D.9090805@gmail.com> Nils Wagner wrote: > A final question... > Do I need Intel's C compiler to build numpy/scipy > with MKL support ? I have no idea. Consult the MKL's documentation. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jtravs at gmail.com Sat Dec 9 05:47:41 2006 From: jtravs at gmail.com (John Travers) Date: Sat, 9 Dec 2006 10:47:41 +0000 Subject: [SciPy-user] Advantages of Intel's MKL In-Reply-To: <457A723D.9090805@gmail.com> References: <45793511.1020700@iam.uni-stuttgart.de> <20061209000426.GA14718@dogbert.sdsl.sun.ac.za> <457A61D7.7010800@gmail.com> <457A723D.9090805@gmail.com> Message-ID: <3a1077e70612090247h7764e4a6q49df00c00f98dc03@mail.gmail.com> On 09/12/06, Robert Kern wrote: > Nils Wagner wrote: > > A final question... > > Do I need Intel's C compiler to build numpy/scipy > > with MKL support ? > > I have no idea. Consult the MKL's documentation. > > -- > Robert Kern No, I use Intel's MKL with numpy/scipy on Linux and do not use intels C compiler. In fact I couldn't get it to work with Intel's compiler. You may need the fortran compiler though - at least that is what works with me. I find that MKL is the fastest I've seen for ffts. 
Not sure about how it squares up for Lapack type stuff though. John From fullung at gmail.com Sat Dec 9 11:47:22 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 9 Dec 2006 18:47:22 +0200 Subject: [SciPy-user] Advantages of Intel's MKL In-Reply-To: <3a1077e70612090247h7764e4a6q49df00c00f98dc03@mail.gmail.com> References: <45793511.1020700@iam.uni-stuttgart.de> <20061209000426.GA14718@dogbert.sdsl.sun.ac.za> <457A61D7.7010800@gmail.com> <457A723D.9090805@gmail.com> <3a1077e70612090247h7764e4a6q49df00c00f98dc03@mail.gmail.com> Message-ID: <20061209164722.GA23228@dogbert.sdsl.sun.ac.za> Hello all On Sat, 09 Dec 2006, John Travers wrote: > On 09/12/06, Robert Kern wrote: > > Nils Wagner wrote: > > > A final question... > > > Do I need Intel's C compiler to build numpy/scipy > > > with MKL support ? > > > > I have no idea. Consult the MKL's documentation. > > > > -- > > Robert Kern > > No, I use Intel's MKL with numpy/scipy on Linux and do not use intels > C compiler. In fact I couldn't get it to work with Intel's compiler. > You may need the fortran compiler though - at least that is what works > with me. As far as the C libraries go, MKL works with GCC without problems. I haven't experimented with MKL and GCC FORTRAN yet. If you have a few hours, I can really recommend that you play with Intel's compiler a bit. In some cases the code it produces is much faster than GCC's or MSVC's. > I find that MKL is the fastest I've seen for ffts. Not sure about how > it squares up for Lapack type stuff though. Makes sense. MKL 9 which was released recently apparently has even faster FFT code for the new Xeons. The free for non-commercial use version of MKL 9 will be available in early 2007. 
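[Editor's note: the site.cfg setup discussed in this thread can be sketched with a fragment like the one below. The paths and library names are placeholders and vary by MKL version and platform; numpy's site.cfg.example documents the exact keys.]

```ini
; Hypothetical site.cfg fragment selecting MKL. Per the advice earlier
; in this thread, do not also add an [atlas] section to the same file,
; or MKL will silently win and ATLAS will be ignored.
[mkl]
library_dirs = /opt/intel/mkl/lib/32
include_dirs = /opt/intel/mkl/include
mkl_libs = mkl, guide
```

An ATLAS configuration would use an [atlas] section with its own library_dirs and atlas_libs keys instead; keeping exactly one of the two sections in site.cfg avoids the precedence surprise described above.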
Cheers, Albert From w.northcott at unsw.edu.au Sat Dec 9 20:58:32 2006 From: w.northcott at unsw.edu.au (Bill Northcott) Date: Sun, 10 Dec 2006 12:58:32 +1100 Subject: [SciPy-user] Building SciPy on OS X: Universal binary issues In-Reply-To: References: Message-ID: <1B5FBCDB-F2E3-424A-82FD-F70EE2C81E45@unsw.edu.au> On 08/12/2006, at 6:34 PM, Josh Marshall wrote: > I get an error when importing scipy, as follows: > > ImportError: Failure linking new module: /Library/Frameworks/ > Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ > special/_cephes.so: Symbol not found: _printf$LDBLStub > Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ > lib/python2.4/site-packages/scipy/special/_cephes.so > Expected in: dynamic lookup These errors are caused by mixing compiler versions. You cannot link object code files/static libraries if some of the code is built with gcc3.x.x and some with gcc4.x.x. gcc3.x.x must be used with g77 and gcc4.x.x with gfortran. You can always link to a dynamic library (.dylib) regardless of the compiler that built it or even the language of the source code. Bill Northcott -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Sun Dec 10 15:06:12 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 10 Dec 2006 15:06:12 -0500 Subject: [SciPy-user] ANN: An alternative to numpy.core.ma Message-ID: <200612101506.12873.pgmdevlist@gmail.com> All, I just posted on the DeveloperZone of the wiki the latest version of maskedarray, an alternative to numpy.core.ma. You can download it here: http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/maskedarray-1.00.dev0040.tar.gz The package has three modules: core (with the basic functions of numpy.core.ma), extras (which adds some functions, such as apply_along_axis, or the concatenator mr_), and testutils (which adds support for maskedarray for the tests functions). 
It also comes with its test suite (available in the tests subdirectory). For those of you who were not aware of it, the new MaskedArray is a subclass of ndarray, and it accepts any subclass of ndarray as data. You can use it as you would with numpy.core.ma.MaskedArray. For those of you who already tested the package, the main modifications are: - the reorganization of the initial module in core+extras. - Data are now shared by default (in other terms, the copy flag defaults to False in MaskedArray.__new__), for consistency with the rest of numpy. - An additional boolean flag has been introduced: keep_mask (with a default of True). This flag is useful when trying to mask a mask array: it tells __new__ whether to keep the initial mask (in that case, the new mask will be combined with the old mask) or not (in that case, the new mask replaces the old one). - Some functions/routines that were missing have been added (any/all...) As always, this is a work in progress. In particular, I should really check for the bottlenecks: would anybody have some pointers ? If you wanna be on the safe, optimized side, stick to numpy.core.ma. Otherwise, please try this new implementation, and don't forget to give me some feedback! PS: Technical question: how can I delete some files in the DeveloperZone wiki ? The maskedarray.py, test_maskedarray.py, test_subclasses.py are out of date, and should be replaced by the package. Thanks a lot in advance ! 
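The keep_mask flag described above later carried over into numpy's built-in numpy.ma package, so its semantics can be illustrated there. Assuming the behaviour matches the description (the old mask is combined with the new one by a logical OR when keep_mask is True, and discarded otherwise), a sketch:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 1, 0])

# keep_mask=True (the default): the new mask is combined (OR-ed) with the old
y = ma.array(x, mask=[1, 0, 0], keep_mask=True)
print(y.mask.tolist())   # [True, True, False]

# keep_mask=False: the new mask replaces the old one
z = ma.array(x, mask=[1, 0, 0], keep_mask=False)
print(z.mask.tolist())   # [True, False, False]
```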
From josh.p.marshall at gmail.com Sun Dec 10 18:22:19 2006 From: josh.p.marshall at gmail.com (Josh Marshall) Date: Mon, 11 Dec 2006 10:22:19 +1100 Subject: [SciPy-user] Building SciPy on OS X: Universal binary issues In-Reply-To: References: Message-ID: > From: Bill Northcott > Subject: Re: [SciPy-user] Building SciPy on OS X: Universal binary > issues > > On 08/12/2006, at 6:34 PM, Josh Marshall wrote: >> I get an error when importing scipy, as follows: >> >> ImportError: Failure linking new module: /Library/Frameworks/ >> Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ >> special/_cephes.so: Symbol not found: _printf$LDBLStub >> Referenced from: /Library/Frameworks/Python.framework/Versions/ >> 2.4/ >> lib/python2.4/site-packages/scipy/special/_cephes.so >> Expected in: dynamic lookup > > These errors are caused by mixing compiler versions. You cannot link > object code files/static libraries if some of the code is built with > gcc3.x.x and some with gcc4.x.x. gcc3.x.x must be used with g77 and > gcc4.x.x with gfortran. You can always link to a dynamic library > (.dylib) regardless of the compiler that built it or even the > language of the source code. I am aware of this inability to mix compilers. What I am stating is that the downloadable, binary versions on the MacPython universal binaries page have this problem, and they should not. I would like to know who built these versions, and how I go about building my own universal binary versions. However, as a related question, if I have both g77 and gfortran installed, how do I select gfortran as the compiler? I have tried various combinations of lines similar to python setup.py config --fcompiler=gnu95 build but g77 is always selected for the build. 
Cheers, Josh From robert.kern at gmail.com Sun Dec 10 18:33:20 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 10 Dec 2006 17:33:20 -0600 Subject: [SciPy-user] Building SciPy on OS X: Universal binary issues In-Reply-To: References: Message-ID: <457C9940.90004@gmail.com> Josh Marshall wrote: > However, as a related question, if I have both g77 and gfortran > installed, how do I select gfortran as the compiler? I have tried > various combinations of lines similar to > > python setup.py config --fcompiler=gnu95 build > > but g77 is always selected for the build. As far as I know, "config --fcompiler=gnu95" only sets the compiler for the configuration steps (i.e. finding ATLAS and such). I don't think it sets the FORTRAN compiler for any of the subsequent steps. python setup.py config --fcompiler=gnu95 build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josh.p.marshall at gmail.com Mon Dec 11 15:46:22 2006 From: josh.p.marshall at gmail.com (Josh Marshall) Date: Tue, 12 Dec 2006 07:46:22 +1100 Subject: [SciPy-user] Building SciPy on OS X: Universal binary issues In-Reply-To: References: Message-ID: Thanks for that line Robert. Executing that command line definitely chooses gfortran. 
I am now running into this error: compiling C sources C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 creating build/temp.macosx-10.3-fat-2.4/Lib/linsolve creating build/temp.macosx-10.3-fat-2.4/Lib/linsolve/SuperLU creating build/temp.macosx-10.3-fat-2.4/Lib/linsolve/SuperLU/SRC compile options: '-DUSE_VENDOR_BLAS=1 -c' gcc: Lib/linsolve/SuperLU/SRC/zgscon.c gcc: cannot specify -o with -c or -S and multiple compilations gcc: cannot specify -o with -c or -S and multiple compilations error: Command "gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 - DUSE_VENDOR_BLAS=1 -c Lib/linsolve/SuperLU/SRC/zgscon.c -o build/ temp.macosx-10.3-fat-2.4/Lib/linsolve/SuperLU/SRC/zgscon.o" failed with exit status 1 Any ideas on how to resolve this? It goes away if I don't try to compile as universal binary. Regards, Josh On 12/12/2006, at 5:00 AM, scipy-user-request at scipy.org wrote: > Message: 3 > Date: Sun, 10 Dec 2006 17:33:20 -0600 > From: Robert Kern > Subject: Re: [SciPy-user] Building SciPy on OS X: Universal binary > issues > To: SciPy Users List > Message-ID: <457C9940.90004 at gmail.com> > Content-Type: text/plain; charset=UTF-8 > > Josh Marshall wrote: > >> However, as a related question, if I have both g77 and gfortran >> installed, how do I select gfortran as the compiler? I have tried >> various combinations of lines similar to >> >> python setup.py config --fcompiler=gnu95 build >> >> but g77 is always selected for the build. > > As far as I know, "config --fcompiler=gnu95" only sets the compiler > for the > configuration steps (i.e. finding ATLAS and such). I don't think it > sets the > FORTRAN compiler for any of the subsequent steps. 
> > python setup.py config --fcompiler=gnu95 build_src build_clib -- > fcompiler=gnu95 > build_ext --fcompiler=gnu95 build > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco From mwojc at p.lodz.pl Mon Dec 11 17:54:36 2006 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Mon, 11 Dec 2006 22:54:36 -0000 Subject: [SciPy-user] feed-forward neural network for python In-Reply-To: References: Message-ID: On Wed, 06 Dec 2006 20:16:05 -0000, wrote: > I find that the rprop algorithm is the best for real-world-noisy data > (besides GAs), and this could be an easy add. I have a python > implementation that uses scipy to run rprop, there is definite room > for improvement, I'm mainly using scipy.where() and wrote the function > directly from the original rprop paper (i can send you this code if > you'd like it). However, it may be faster to use fortran here as > well. The original c version or rprop is here > http://www.koders.com/c/fid1B403F9BA6B2BAA1C7CEDD1C680F8AAB2819B44E.aspx Yes, adding rprop would be quite easy, and it can be done in fortran for speed. I think it can appear in next release. If you like, just send me your implementation of rprop in python, I'll see if it can be useful... Generally, in future, I planned to have all fortran parts also in python. This is for all these poor people without numpy and scipy. How do you think? Another solution would be to try to include ffnet to scipy distribution. There is actually no support of ANNs in scipy. I saw scipy.sandbox.ann modules, but these, in opposite to ffnet, don't calculate even gradients for minimalization yet. Besides, ffnet has pikaia genetic optimizer bindings (which could be included to scipy.optimize) and many improvements, like automatic normalization of data which makes using ANNs much easier... 
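For reference, the Rprop rule under discussion — grow a per-weight step while the gradient keeps its sign, shrink it when the sign flips, and move against the sign of the gradient — can indeed be written with numpy.where. This is a hypothetical sketch written from the original Rprop paper's update rule, not ffnet's or the poster's actual code:

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    """One Rprop- update: adapt per-weight step sizes by gradient sign agreement."""
    agree = grad * prev_grad
    step = np.where(agree > 0.0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(agree < 0.0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(agree < 0.0, 0.0, grad)   # skip the move right after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# toy use: minimize f(w) = sum(w**2), whose gradient is 2*w
w = np.array([4.0, -3.0])
prev, step = np.zeros_like(w), np.full_like(w, 0.1)
for _ in range(200):
    w, prev, step = rprop_step(w, 2.0 * w, prev, step)
print(np.abs(w).max())   # small: the iterate homes in on the minimum at 0
```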
> I also think that it would be great to have two versions of this. One > for general usage and one for altivec and sse 64bit ibm and intel > chips, which at this stage of the project would be easy to modify > http://developer.apple.com/documentation/Performance/Conceptual/Accelerate_sse_migration/index.html > gcc 4.0.3 does some auto optimization with the right flags and I've > found you can increase the speed by factors of 10 or more in many > cases. What type of platforms do you have access to? The CHUD tools > on OS X are very effective and might be useful > (http://developer.apple.com/tools/performance/) to find optimizations > relevant to all platforms. Generally I was never interested in optimizing my code for platforms other than x86. I have access to sourceforge compile farm which includes several architectures (but I don't suspect there are altivec sorts of things). I, personally, have no time to investigate special coding for altivec/sse. If you are interested in this and you think it could be really useful, I may consider creating a branch of ffnet for this purpose. And just to remind, ffnet is at https://sourceforge.net/projects/ffnet Greetings -- Marek Wojciechowski From wangxj.uc at gmail.com Mon Dec 11 17:01:47 2006 From: wangxj.uc at gmail.com (Xiaojian Wang) Date: Mon, 11 Dec 2006 14:01:47 -0800 Subject: [SciPy-user] looking for Monte Carlo Method for generating normal distribition Message-ID: *Hi, I can generate uniform random numbers, using random(),* *I would like to know if there is any library in python can generate * *normal distribution numbers.* thanks, Xiaojian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Mon Dec 11 17:03:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 Dec 2006 16:03:02 -0600 Subject: [SciPy-user] feed-forward neural network for python In-Reply-To: References: Message-ID: <457DD596.3000100@gmail.com> Marek Wojciechowski wrote: > Another solution would be to try to include ffnet to scipy distribution. We would be happy to accept it if you change the license to BSD instead of GPL. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Dec 11 17:04:52 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 Dec 2006 16:04:52 -0600 Subject: [SciPy-user] looking for Monte Carlo Method for generating normal distribition In-Reply-To: References: Message-ID: <457DD604.7050705@gmail.com> Xiaojian Wang wrote: > /Hi, I can generate uniform random numbers, using random(),/ > /I would like to know if there is any library in python can generate / > /normal distribution numbers./ Yes. In [18]: import numpy In [19]: print dir(numpy.random) ['RandomState', '__RandomState_ctor', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__path__', 'beta', 'binomial', 'bytes', 'chisquare', 'exponential', 'f', 'gamma', 'geometric', 'get_state', 'gumbel', 'hypergeometric', 'info', 'laplace', 'logistic', 'lognormal', 'logseries', 'mtrand', 'multinomial', 'multivariate_normal', 'negative_binomial', 'noncentral_chisquare', 'noncentral_f', 'normal', 'pareto', 'permutation', 'poisson', 'power', 'rand', 'randint', 'randn', 'random', 'random_integers', 'random_sample', 'ranf', 'rayleigh', 'sample', 'seed', 'set_state', 'shuffle', 'standard_cauchy', 'standard_exponential', 'standard_gamma', 'standard_normal', 'standard_t', 'test', 'triangular', 'uniform', 'vonmises', 'wald', 'weibull', 'zipf'] In [20]: numpy.random.normal? 
Type: builtin_function_or_method Base Class: Namespace: Interactive Docstring: Normal distribution (mean=loc, stdev=scale). normal(loc=0.0, scale=1.0, size=None) -> random values -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From w.northcott at unsw.edu.au Mon Dec 11 17:53:15 2006 From: w.northcott at unsw.edu.au (Bill Northcott) Date: Tue, 12 Dec 2006 09:53:15 +1100 Subject: [SciPy-user] Building SciPy on OS X: Universal binary Message-ID: <14F78577-2D66-451A-B95E-F8038B915BF2@unsw.edu.au> On 12/12/2006, at 5:00 AM, Josh Marshall wrote: > However, as a related question, if I have both g77 and gfortran > installed, how do I select gfortran as the compiler? I have tried > various combinations of lines similar to > > python setup.py config --fcompiler=gnu95 build > > but g77 is always selected for the build. > The usual procedure is to define the F77 environment variable: export F77=/path/to/fortrancompiler (on bash) or setenv F77 /path/to/fortrancompiler (for tcsh) Cheers Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Tue Dec 12 06:26:53 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 12 Dec 2006 12:26:53 +0100 Subject: [SciPy-user] Indefinite coefficient matrix Message-ID: <457E91FD.2000302@iam.uni-stuttgart.de> Hi all, I wan't to solve a system of linear equations A x = b, where A is symmetric but indefinite. Thus one cannot use a Cholesky factorization L L^T. AFAIK one can use an L D L^T - factorization, where D is a diagonal matrix. Is this factorization available in numpy/scipy ? 
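For the record, an L D L^T routine did eventually land in SciPy as scipy.linalg.ldl — it was not available at the time of this thread. A sketch of it answering the question, assuming a modern SciPy:

```python
import numpy as np
from scipy.linalg import ldl  # SciPy >= 1.0; not available in 2006

# a symmetric indefinite matrix: Cholesky (L L^T) fails, L D L^T does not
A = np.array([[ 2.0,  1.0,  0.0],
              [ 1.0, -3.0,  1.0],
              [ 0.0,  1.0,  1.0]])
lu, d, perm = ldl(A)   # the row permutation is folded into lu; lu[perm] is triangular
print(np.allclose(lu @ d @ lu.T, A))   # True
```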
Nils From cimrman3 at ntc.zcu.cz Tue Dec 12 06:35:57 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 12 Dec 2006 12:35:57 +0100 Subject: [SciPy-user] Indefinite coefficient matrix In-Reply-To: <457E91FD.2000302@iam.uni-stuttgart.de> References: <457E91FD.2000302@iam.uni-stuttgart.de> Message-ID: <457E941D.6060405@ntc.zcu.cz> Nils Wagner wrote: > Hi all, > > I wan't to solve a system of linear equations > > A x = b, > > where A is symmetric but indefinite. Thus one cannot use a Cholesky > factorization L L^T. > AFAIK one can use an L D L^T - factorization, where D is a diagonal matrix. > Is this factorization available in numpy/scipy ? Nils, you can use umfpack to solve this. Thanks to Nathan Bell, you can get the actual L, U factors if you need them, too. r. From kxroberto at googlemail.com Tue Dec 12 07:35:10 2006 From: kxroberto at googlemail.com (Robert) Date: Tue, 12 Dec 2006 13:35:10 +0100 Subject: [SciPy-user] GA framework with automatic adjustment of mutation/crossover parameters ? Message-ID: I'm looking for an efficient optimizer on a noisy high-dimensional and costly function. My own GA hack seems to be somewhat stiff and I find me trying too much around with different cooling speeds and other non-systematic screwing ... There are some GA frameworks and recipes around for Python (that one of scipy (scipy.ga?) disapeared?). Which can be recommended? The searching in my use cases is mainly on fixed length parameter vectors (float and integer ranges). Just a few use cases on chromosome-bits and complex variable length structure building. My main concern is about a smart, general and robust population alteration - an automatic selection of mutation/crossover-random-step-ranges and mutation/crossover-ratio. And possibly a dynamic population/selection scheme also. The mutation function accepts a vector with step-width's 0..inf (default 1.0) for each parameter/gene. And the crossover function accepts a scalar 0 .. 
1.0 controlling the extent of crossover. Thus like: def mutate(obj, astepstd=[1.0,1.0,1.0,...]): ... def crossover(obj, other, extent=0.5): ... The good optimizer alg probably should be smart enough to dynamically auto-adjust based the optimization history (independently of the task): * astepstd's * extent * crossover/mutate rate (* dynamic population size and selection-scheme) Any recommendations or hints? Robert From mwojc at p.lodz.pl Tue Dec 12 12:03:12 2006 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Tue, 12 Dec 2006 17:03:12 -0000 Subject: [SciPy-user] feed-forward neural network for python In-Reply-To: References: Message-ID: On Tue, 12 Dec 2006 12:35:39 -0000, wrote: Robert Kern wrote: > Marek Wojciechowski wrote: >> Another solution would be to try to include ffnet to scipy distribution. > We would be happy to accept it if you change the license to BSD instead > of GPL. This could be done, but ffnet depends on networkx also and they use LGPL. I suspect that for inclusion in scipy, this dependence should be removed. Am I right? -- Marek Wojciechowski From mwojc at p.lodz.pl Tue Dec 12 12:14:01 2006 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Tue, 12 Dec 2006 17:14:01 -0000 Subject: [SciPy-user] GA framework with automatic adjustment of mutation/crossover parameters ? In-Reply-To: References: Message-ID: On Tue, 12 Dec 2006 12:35:39 -0000, wrote: > There are some GA frameworks and recipes around for Python (that one of > scipy (scipy.ga?) disapeared?). Which can be recommended? There is pikaia optimizer in fortran: http://www.hao.ucar.edu/Public/models/pikaia/pikaia.html I created a python wrapper of this routine (with f2py) and this is included in ffnet distribution. Look at http://sourceforge.net/projects/ffnet After downloadind and installing ffnet you are granted with pikaia in python. Just write: from pikaia import pikaia Look at pikaia docstring for info how to use it. 
Greetings -- Marek Wojciechowski From lou_boog2000 at yahoo.com Tue Dec 12 11:24:13 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 12 Dec 2006 08:24:13 -0800 (PST) Subject: [SciPy-user] Best/Safest way to parse NumPy array in C extension? Message-ID: <579946.48484.qm@web34408.mail.mud.yahoo.com> I've done this two ways. Which is better or safer? Say I'm passing a NumPy array (float) to a C extension. One way to "parse" it is (ignoring checks on return values, etc. for now), PyArrayObject *mat; PyArg_ParseTuple(args, "O!", &PyArray_Type, &mat); or is this better, PyObject *Pymat; PyArrayObject *mat; PyArg_ParseTuple(args, "O", &Pymat); mat=PyArray_ContiguousFromObject(objin,NPY_DOUBLE, 2,2); The latter appears to have the constraint that the array be contiguous. Or is that an illusion and it's saying it _expects_ a Python object that has contiguous data? I've done both. Pointing C arrays to the resulting PyArrays' data works fine, but I fear one way or the other might be courting disaster. BTW, I do other checks on dimension, type, etc., but I left those out here for clarity. (I hope.) -- Lou Pecora My views are my own. ____________________________________________________________________________________ Do you Yahoo!? Everyone is raving about the all-new Yahoo! Mail beta. http://new.mail.yahoo.com From lou_boog2000 at yahoo.com Tue Dec 12 11:28:31 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 12 Dec 2006 08:28:31 -0800 (PST) Subject: [SciPy-user] Best/Safest way to parse NumPy array in C extension? In-Reply-To: <579946.48484.qm@web34408.mail.mud.yahoo.com> Message-ID: <117751.42356.qm@web34405.mail.mud.yahoo.com> YIKES. That one line should read mat=PyArray_ContiguousFromObject(Pymat,NPY_DOUBLE,2,2); Very sorry. --- Lou Pecora wrote: > I've done this two ways. Which is better or safer? > > Say I'm passing a NumPy array (float) to a C > extension. One way to "parse" it is (ignoring > checks > on return values, etc. 
for now), > > PyArrayObject *mat; > PyArg_ParseTuple(args, "O!", &PyArray_Type, &mat); > > or is this better, > > PyObject *Pymat; > PyArrayObject *mat; > PyArg_ParseTuple(args, "O", &Pymat); > mat=PyArray_ContiguousFromObject(objin,NPY_DOUBLE, > 2,2); > > The latter appears to have the constraint that the > array be contiguous. Or is that an illusion and > it's > saying it _expects_ a Python object that has > contiguous data? > > I've done both. Pointing C arrays to the resulting > PyArrays' data works fine, but I fear one way or the > other might be courting disaster. > > BTW, I do other checks on dimension, type, etc., but > I > left those out here for clarity. (I hope.) > > > > > > -- Lou Pecora > My views are my own. > > > > ____________________________________________________________________________________ > Do you Yahoo!? > Everyone is raving about the all-new Yahoo! Mail > beta. > http://new.mail.yahoo.com > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lou Pecora My views are my own. ____________________________________________________________________________________ Have a burning question? Go to www.Answers.yahoo.com and get answers from real people who know. From oliphant.travis at ieee.org Tue Dec 12 13:40:15 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 12 Dec 2006 11:40:15 -0700 Subject: [SciPy-user] Best/Safest way to parse NumPy array in C extension? In-Reply-To: <579946.48484.qm@web34408.mail.mud.yahoo.com> References: <579946.48484.qm@web34408.mail.mud.yahoo.com> Message-ID: <457EF78F.3080602@ieee.org> Lou Pecora wrote: > I've done this two ways. Which is better or safer? > > Say I'm passing a NumPy array (float) to a C > extension. One way to "parse" it is (ignoring checks > on return values, etc. 
for now), > > PyArrayObject *mat; > PyArg_ParseTuple(args, "O!", &PyArray_Type, &mat); > > or is this better, > > PyObject *Pymat; > PyArrayObject *mat; > PyArg_ParseTuple(args, "O", &Pymat); > mat=PyArray_ContiguousFromObject(objin,NPY_DOUBLE, > 2,2); > It totally depends on what you want. If you just pass in the array, then you will have to make sure that your code can handle the full generality of the ndarray (or else have checks on the Python-side). Ndarrays can be arbitrarily strided, mis-aligned, out of machine order, and of arbitrary data-type. > The latter appears to have the constraint that the > array be contiguous. Or is that an illusion and it's > saying it _expects_ a Python object that has > contiguous data? > No, the latter will convert an array to be contiguous, in machine byteorder (and aligned) and convert it to DOUBLE type. So, if your code only handles that kind of array, then use the conversion function. > I've done both. Pointing C arrays to the resulting > PyArrays' data works fine, but I fear one way or the > other might be courting disaster. > It will be fine until you give it a discontiguous array, in the wrong byte-order, and possibly mis-aligned (coming from a field of another array). -Travis From lou_boog2000 at yahoo.com Tue Dec 12 14:02:15 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 12 Dec 2006 11:02:15 -0800 (PST) Subject: [SciPy-user] Best/Safest way to parse NumPy array in C extension? In-Reply-To: <457EF78F.3080602@ieee.org> Message-ID: <704659.80059.qm@web34404.mail.mud.yahoo.com> Thanks for the feedback, Travis. Chris Barker gave some similar answers, but you both provided non-overlapping replies. A few questions: --- Travis Oliphant wrote: > --- Lou Pecora wrote: > > or is this better, > > > > PyObject *Pymat; > > PyArrayObject *mat; > > PyArg_ParseTuple(args, "O", &Pymat); > > > mat=PyArray_ContiguousFromObject(objin,NPY_DOUBLE, > > 2,2); > It totally depends on what you want. 
If you just > pass in the array, > then you will have to make sure that your code can > handle the full > generality of the ndarray (or else have checks on > the Python-side). I will have to learn how to check for contiguous in Python. > Ndarrays can be arbitrarily strided, mis-aligned, > out of machine order, > and of arbitrary data-type. > > The latter appears to have the constraint that the > > array be contiguous. Or is that an illusion and > it's > > saying it _expects_ a Python object that has > > contiguous data? > > > No, the latter will convert an array to be > contiguous, in machine > byteorder (and aligned) and convert it to DOUBLE > type. So, if your > code only handles that kind of array, then use the > conversion function. Does this conversion mean that the array now has new data in contiguous form so that operations done on the data in the C extension (in place) will show up in that array when I return to the Python caller? (I am assuming I passed the array in as an arg to the C extension in the first place.) > It will be fine until you give it a discontiguous > array, in the wrong > byte-order, and possibly mis-aligned (coming from a > field of another > array). > > -Travis Got it. It seems that NumPy array creators like zeros(...) return continguous data. Is that always true? -- Lou Pecora My views are my own. ____________________________________________________________________________________ Need a quick answer? Get one in minutes from people who know. 
Ask your question on www.Answers.yahoo.com From robert.kern at gmail.com Tue Dec 12 15:32:20 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Dec 2006 14:32:20 -0600 Subject: [SciPy-user] feed-forward neural network for python In-Reply-To: References: Message-ID: <457F11D4.5030201@gmail.com> Marek Wojciechowski wrote: > On Tue, 12 Dec 2006 12:35:39 -0000, wrote: > > Robert Kern wrote: >> Marek Wojciechowski wrote: >>> Another solution would be to try to include ffnet to scipy distribution. > >> We would be happy to accept it if you change the license to BSD instead >> of GPL. > > This could be done, but ffnet depends on networkx also and they use LGPL. > I suspect that for inclusion in scipy, this dependence should be removed. > Am I right? Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Tue Dec 12 16:11:04 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 12 Dec 2006 16:11:04 -0500 Subject: [SciPy-user] Best/Safest way to parse NumPy array in C extension? In-Reply-To: <704659.80059.qm@web34404.mail.mud.yahoo.com> References: <457EF78F.3080602@ieee.org> <704659.80059.qm@web34404.mail.mud.yahoo.com> Message-ID: On 12/12/06, Lou Pecora wrote: > I will have to learn how to check for contiguous in > Python. Easy: In [1]: import numpy as N In [2]: a=N.arange(10) In [3]: b=a[::2] In [4]: a.flags.contiguous Out[4]: True In [5]: b.flags.contiguous Out[5]: False Cheers, f From lou_boog2000 at yahoo.com Tue Dec 12 16:20:13 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 12 Dec 2006 13:20:13 -0800 (PST) Subject: [SciPy-user] Best/Safest way to parse NumPy array in C extension? In-Reply-To: Message-ID: <28771.55098.qm@web34415.mail.mud.yahoo.com> Thanks. I also found .flags.contiguous after some googling. 
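The copy behaviour discussed in this thread can also be demonstrated from Python: numpy.ascontiguousarray mirrors what PyArray_ContiguousFromObject does, handing back the same buffer when the input is already contiguous and a fresh copy when it is not — so in-place edits only reach the caller in the first case. A sketch:

```python
import numpy as np

a = np.zeros(10)                 # array creators like zeros() return contiguous data
assert a.flags['C_CONTIGUOUS']
c = np.ascontiguousarray(a)      # already contiguous: no copy is made
assert np.shares_memory(a, c)    # in-place edits here are seen by the caller

b = a[::2]                       # strided view: not contiguous
assert not b.flags['C_CONTIGUOUS']
d = np.ascontiguousarray(b)      # non-contiguous input: a fresh copy
d[0] = 99.0
print(b[0])                      # 0.0 -- edits to the copy never reach the original
```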
You give a good example for the non-contiguous case. -- Lou --- Fernando Perez wrote: > On 12/12/06, Lou Pecora > wrote: > > > I will have to learn how to check for contiguous > in > > Python. > > Easy: > > In [1]: import numpy as N > > In [2]: a=N.arange(10) > > In [3]: b=a[::2] > > In [4]: a.flags.contiguous > Out[4]: True > > In [5]: b.flags.contiguous > Out[5]: False > > > Cheers, > > f ____________________________________________________________________________________ Do you Yahoo!? Everyone is raving about the all-new Yahoo! Mail beta. http://new.mail.yahoo.com From lou_boog2000 at yahoo.com Wed Dec 13 11:25:33 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 13 Dec 2006 08:25:33 -0800 (PST) Subject: [SciPy-user] How to delete attachments sent to the SciPy Wiki? In-Reply-To: Message-ID: <984809.12763.qm@web34407.mail.mud.yahoo.com> I have some notes and code about C extensions on the SciPy.org Wiki. Now I have upgraded the attachments I put there and would like to remove the old files. I cannot find how to do this either by searching SciPy site or googling. Anyone know? -- Lou Pecora My views are my own. ____________________________________________________________________________________ Do you Yahoo!? Everyone is raving about the all-new Yahoo! Mail beta. http://new.mail.yahoo.com From fredmfp at gmail.com Wed Dec 13 12:51:29 2006 From: fredmfp at gmail.com (fred) Date: Wed, 13 Dec 2006 18:51:29 +0100 Subject: [SciPy-user] How to delete attachments sent to the SciPy Wiki? In-Reply-To: <984809.12763.qm@web34407.mail.mud.yahoo.com> References: <984809.12763.qm@web34407.mail.mud.yahoo.com> Message-ID: <45803DA1.5020203@gmail.com> Lou Pecora wrote: > I have some notes and code about C extensions on the > SciPy.org Wiki. Now I have upgraded the attachments I > put there and would like to remove the old files. I > cannot find how to do this either by searching SciPy > site or googling. Anyone know? > Simply ask to add you to the Editors List page. 
-- http://scipy.org/FredericPetit From lou_boog2000 at yahoo.com Wed Dec 13 13:06:49 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 13 Dec 2006 10:06:49 -0800 (PST) Subject: [SciPy-user] How to delete attachments sent to the SciPy Wiki? In-Reply-To: <45803DA1.5020203@gmail.com> Message-ID: <583340.35383.qm@web34405.mail.mud.yahoo.com> Uh... ask whom? --- fred wrote: > Simply ask to add you to the Editors List page. > > -- > http://scipy.org/FredericPetit -- Lou Pecora My views are my own. ____________________________________________________________________________________ Cheap talk? Check out Yahoo! Messenger's low PC-to-Phone call rates. http://voice.yahoo.com From fredmfp at gmail.com Wed Dec 13 13:24:09 2006 From: fredmfp at gmail.com (fred) Date: Wed, 13 Dec 2006 19:24:09 +0100 Subject: [SciPy-user] How to delete attachments sent to the SciPy Wiki? In-Reply-To: <583340.35383.qm@web34405.mail.mud.yahoo.com> References: <583340.35383.qm@web34405.mail.mud.yahoo.com> Message-ID: <45804549.30604@gmail.com> Lou Pecora wrote: > Uh... ask whom? > > Hmm, ask here ;-) I and Gaël have just been added to the list last week (by R. Kern IIRC). Browse archive mailing-list ;-) -- http://scipy.org/FredericPetit From robert.kern at gmail.com Wed Dec 13 13:28:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Dec 2006 10:28:57 -0800 Subject: [SciPy-user] How to delete attachments sent to the SciPy Wiki? In-Reply-To: <45804549.30604@gmail.com> References: <583340.35383.qm@web34405.mail.mud.yahoo.com> <45804549.30604@gmail.com> Message-ID: <45804669.2090305@gmail.com> fred wrote: > Lou Pecora wrote: >> Uh... ask whom? >> >> > Hmm, ask here ;-) > > I and Gaël have just been added to the list last week > (by R. Kern IIRC). And in fact, you could have him yourself.
Anyways, Lou, I have now added you to http://www.scipy.org/EditorsGroup -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lou_boog2000 at yahoo.com Wed Dec 13 13:36:43 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 13 Dec 2006 10:36:43 -0800 (PST) Subject: [SciPy-user] How to delete attachments sent to the SciPy Wiki? In-Reply-To: <45804669.2090305@gmail.com> Message-ID: <376899.57487.qm@web34407.mail.mud.yahoo.com> Thanks, Robert. I will see what powers that gives me. :-) Deleting those files would be fine for starters. --- Robert Kern wrote: > And in fact, you could have him yourself. Anyways, > Lou, I have now added you to > > http://www.scipy.org/EditorsGroup > -- > Robert Kern -- Lou Pecora My views are my own. ____________________________________________________________________________________ Do you Yahoo!? Everyone is raving about the all-new Yahoo! Mail beta. http://new.mail.yahoo.com From gael.varoquaux at normalesup.org Wed Dec 13 16:42:57 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 13 Dec 2006 22:42:57 +0100 Subject: [SciPy-user] strange error in integrate.romb Message-ID: <20061213214253.GA29905@clipper.ens.fr> Hi, I don't really understand how integrate.romb is supposed to work. First of all its docstring is not really clear: """ Uses Romberg integration to integrate y(x) using N samples along the given axis which are assumed equally spaced with distance dx. The number of samples must be 1 + a non-negative power of two: N=2**k + 1 """ Reading this it is not clear that y is supposed to be a vector, and not a function. 
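The docstring's N = 2**k + 1 requirement comes from the scheme itself: trapezoid-rule estimates at successive halvings of the sample spacing, combined by Richardson extrapolation. A self-contained sketch of that scheme, written here for illustration (not SciPy's actual romb implementation):

```python
import numpy as np

def romb_samples(y, dx):
    # Romberg integration of 2**k + 1 equally spaced samples
    n = len(y) - 1
    k = int(round(np.log2(n)))
    assert 2**k == n, "need 2**k + 1 samples"
    R = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        stride = n // 2**i          # coarse-to-fine trapezoid estimates
        h = dx * stride
        R[i, 0] = h * (y[::stride].sum() - 0.5 * (y[0] + y[-1]))
    for j in range(1, k + 1):       # Richardson extrapolation
        for i in range(j, k + 1):
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4**j - 1)
    return R[k, k]

x = np.linspace(0.0, 1.0, 2**5 + 1)
print(romb_samples(x**2, x[1] - x[0]))   # 1/3, up to rounding
```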
Second, if I run it on a really simple test case, I get an error, and it seems the error is due to a trivial bug in the code: In [13]: x = arange(0,2**10+1) In [14]: integrate.romb(x) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/varoquau/ /usr/lib/python2.4/site-packages/scipy/integrate/quadrature.py in romb(y, dx, axis, show) 256 for i in range(2,k+1): 257 start >>= 1 --> 258 slice_R = tupleset(slice_R, slice(start,stop,step)) 259 step >>= 1 260 R[(i,1)] = 0.5*(R[(i-1,1)] + h*add.reduce(y[slice_R],axis)) TypeError: tupleset() takes exactly 3 arguments (2 given) In [15]: And indeed, if you check out the code for tupleset, you find this: def tupleset(t, i, value): l = list(t) l[i] = value return tuple(l) It looks to me like there has been some refactoring and integrate.romb has been left out. Cheers, Gaël From oliphant.travis at ieee.org Wed Dec 13 17:02:20 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 13 Dec 2006 15:02:20 -0700 Subject: [SciPy-user] strange error in integrate.romb In-Reply-To: <20061213214253.GA29905@clipper.ens.fr> References: <20061213214253.GA29905@clipper.ens.fr> Message-ID: <4580786C.4080400@ieee.org> Gael Varoquaux wrote: > Hi, > > I don't really understand how integrate.romb is supposed to work. > > First of all its docstring is not really clear: > I've tried to clean it up. > Second if I run it on a really simple test case, I get an error, and it > seems the error is due to a trivial bug in the code: > Hmm. Which version do you have? It seems to be right in my version of scipy.
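The three-argument signature quoted above suggests the failing call is simply missing the index of the axis being sliced. A minimal, self-contained sketch of how the helper is meant to be called (the Romberg loop itself is omitted, and the index values here are illustrative, not taken from quadrature.py):

```python
def tupleset(t, i, value):
    # Return a copy of tuple t with element i replaced by value.
    l = list(t)
    l[i] = value
    return tuple(l)

# The failing line passed only two arguments; supplying the axis index
# as the middle argument satisfies the three-argument signature:
axis = 0
slice_R = (slice(None),)                  # slice tuple for a 1-D y
slice_R = tupleset(slice_R, axis, slice(1, 9, 2))
# slice_R is now (slice(1, 9, 2),)
```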
-Travis From gael.varoquaux at normalesup.org Wed Dec 13 17:06:50 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 13 Dec 2006 23:06:50 +0100 Subject: [SciPy-user] strange error in integrate.romb In-Reply-To: <4580786C.4080400@ieee.org> References: <20061213214253.GA29905@clipper.ens.fr> <4580786C.4080400@ieee.org> Message-ID: <20061213220646.GD29905@clipper.ens.fr> On Wed, Dec 13, 2006 at 03:02:20PM -0700, Travis Oliphant wrote: > I've tried to clean it up. Great, thanks. I would have sent a patch if I had svn installed on this box. Good docstrings are really important. > > Second if I run it on a really simple test case, I get an error, and it > > seems the error is due to a trivial bug in the code: > Hmm. Which version do you have? It seems to be right in my version of > scipy. Oops, I forgot to specify this. I have 0.5.1, shipped by Ubuntu Edgy. I just checked svn on the net (I should have done this before). The docstring is much more helpful, and indeed the bug is gone. Sorry about the noise. Gaël From m.starnes at imperial.ac.uk Wed Dec 13 22:09:38 2006 From: m.starnes at imperial.ac.uk (Mark Starnes) Date: Thu, 14 Dec 2006 03:09:38 +0000 Subject: [SciPy-user] csc, csr sparse complex matrix problems. In-Reply-To: <457E91FD.2000302@iam.uni-stuttgart.de> References: <457E91FD.2000302@iam.uni-stuttgart.de> Message-ID: <4580C072.3060608@imperial.ac.uk> Hi everyone, I've been trying to get this simple example to work for a couple of days now, to no avail. I noticed at http://projects.scipy.org/scipy/scipy/ticket/329 that there seem to be some problems with csc and complex values but I'm hoping they're unrelated and someone can help me sort this problem! I couldn't get the csc matrix to work at all here - the csr matrix was having a go and getting the right answer but now fails too! Anyway, here's the code. I'm using Python 2.4.3 on WinXP with SciPy 0.5.2 and NumPy 1.0.1. Any help will be appreciated. Best regards, Mark Starnes.
*** import numpy as n import scipy as s from scipy.linsolve import spsolve from scipy import linalg as la from numpy import zeros,complex64 # solve kx=f for x, where # #/ \ / \ / \ #| 1+1i 2+2i 3+3i | | 2 | | -5+3i | #| | | | | | #| 2 3 4+4i | | 3+2i | = | 1-6i | #| | | | | | #| 4+9i 5+10i 2+11i | | -3 | | -3+25i | #\ / \ / \ / # k0=n.array([[1+1j,2+2j,3+3j],[2,3,4+4j],[4+9j,5+10j,2+11j]]) f0=[-5+3j,1-6j,-3+25j] # Test numpy routine. k=n.mat(k0) f=n.mat(f0).T fsol=s.dot(la.inv(k),f) print fsol print k*fsol # result should = [2, 3+2j, -3]. Numpy passes. # Test Scipy sparse routines. k=s.sparse.csc_matrix((3,3),dtype='F') k[0,0]=1+1j ; k[0,1]=2+2j ; k[0,2]=3+3j ; k[1,0]=2 ; k[1,1]=3 ; k[1,2]=4+4j ; k[2,0]=4+9j ; k[2,1]=5+10j ; k[2,2]=2+11j; f=zeros((3),complex64) f[0]=-5+3j f[1]=1-6j f[2]=-3+25j #fsol2=spsolve(k,f) # this crashes python with typeD, AND with type F #print fsol2 #print k.dot(fsol2) k=s.sparse.csr_matrix((3,3),dtype='F') k[0,0]=1+1j ; k[0,1]=2+2j ; k[0,2]=3+3j ; k[1,0]=2 ; k[1,1]=3 ; k[1,2]=4+4j ; k[2,0]=4+9j ; k[2,1]=5+10j ; k[2,2]=2+11j; fsol3=spsolve(k,f) # this crashes python with typeD, originally ok with type F print fsol3 print k.dot(fsol3) From nwagner at iam.uni-stuttgart.de Thu Dec 14 03:18:12 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 14 Dec 2006 09:18:12 +0100 Subject: [SciPy-user] csc, csr sparse complex matrix problems. In-Reply-To: <4580C072.3060608@imperial.ac.uk> References: <457E91FD.2000302@iam.uni-stuttgart.de> <4580C072.3060608@imperial.ac.uk> Message-ID: <458108C4.8040901@iam.uni-stuttgart.de> Mark Starnes wrote: > Hi everyone, > > I've been trying to get this simple example to work for a couple of days > now, to no avail. I noticed at > http://projects.scipy.org/scipy/scipy/ticket/329 that there seem to be > come problems with csc and complex values but I'm hoping they're > unrelated and someone can help me sort this problem! 
I couldn't get the > csc matrix to work at all here - the csr matrix was having a go and > getting the right answer but now fails too! Anyway, here's the code. > I'm using Python2.4.3 on winxp with Scipy0.5.2, Numpy 1.0.1. > > Any help will be appreciated. > > Best regards, > > Mark Starnes. > > [... example code snipped; it appears in full in the message above ...] Hi Mark, This seems to be an UMFPACK problem. It works fine for me with SuperLU. I am using the latest svn versions and UMFPACK v4.4 Nils With UMFPACK python -i starnes.py x [[ 2. +1.11022302e-16j] [ 3. +2.00000000e+00j] [-3.
-7.28583860e-17j]] f [[-5. +3.j] [ 1. -6.j] [-3.+25.j]] Traceback (most recent call last): File "starnes.py", line 47, in ? fsol2=spsolve(k,f) # File "/usr/lib64/python2.4/site-packages/scipy/linsolve/linsolve.py", line 74, in spsolve return umf.linsolve( umfpack.UMFPACK_A, mat, b, autoTranspose = True ) File "/usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/umfpack.py", line 561, in linsolve self.numeric( mtx ) File "/usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/umfpack.py", line 374, in numeric self.symbolic( mtx ) File "/usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/umfpack.py", line 356, in symbolic raise RuntimeError, '%s failed with %s' % (self.funs.symbolic, RuntimeError: failed with UMFPACK_ERROR_invalid_matrix Without UMFPACK python -i starnes.py x [[ 2. +1.11022302e-16j] [ 3. +2.00000000e+00j] [-3. -7.28583860e-17j]] f [[-5. +3.j] [ 1. -6.j] [-3.+25.j]] Use minimum degree ordering on A'+A. x [ 2. -7.58729969e-17j 3. +2.00000000e+00j -3. -2.77108197e-16j] f [-5. +3.j 1. -6.j -3.+25.j] Use minimum degree ordering on A'+A. x [ 2. -3.44633288e-16j 3. +2.00000000e+00j -3. -2.91704595e-16j] f [-5. +3.j 1. -6.j -3.+25.j] Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: starnes.py Type: text/x-python Size: 1436 bytes Desc: not available URL: From cimrman3 at ntc.zcu.cz Thu Dec 14 05:17:32 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Dec 2006 11:17:32 +0100 Subject: [SciPy-user] csc, csr sparse complex matrix problems. In-Reply-To: <4580C072.3060608@imperial.ac.uk> References: <457E91FD.2000302@iam.uni-stuttgart.de> <4580C072.3060608@imperial.ac.uk> Message-ID: <458124BC.6080508@ntc.zcu.cz> Mark Starnes wrote: > Hi everyone, > > I've been trying to get this simple example to work for a couple of days > now, to no avail. 
I noticed at > http://projects.scipy.org/scipy/scipy/ticket/329 that there seem to be > come problems with csc and complex values but I'm hoping they're > unrelated and someone can help me sort this problem! > [... example code snipped; it appears in full in the message above ...] 1. If you use umfpack, k.dtype.char should equal 'D', not 'F' (Look at Nils response) 2.
also, in case of umfpack usage, use k=n.empty( (3,3), dtype = 'D' ) k[0,0]=1+1j ; k[0,1]=2+2j ; k[0,2]=3+3j ; k[1,0]=2 ; k[1,1]=3 ; k[1,2]=4+4j ; k[2,0]=4+9j ; k[2,1]=5+10j ; k[2,2]=2+11j; k=s.sparse.csr_matrix(k,dtype='D') because assigning to a CSR matrix via [] (__setitem__) is currently buggy. You could use e.g. the following form of the CSR matrix constructor csr_matrix((data, ij), [dims=(M, N), nzmax=nzmax]) for real data; see the doc. There is now discussion on scipy-dev considering an overhaul of the sparse module (speed-wise and functionality-wise and antibug-wise), so hopefully a bright future awaits us :). r. From josh.p.marshall at gmail.com Thu Dec 14 22:49:06 2006 From: josh.p.marshall at gmail.com (Josh Marshall) Date: Fri, 15 Dec 2006 14:49:06 +1100 Subject: [SciPy-user] Universal binary SciPy on OS X Message-ID: <3539EE2B-72B8-4BF7-8431-A3F8AE8302E2@gmail.com> I have been trying to build a universal SciPy that works, so I can include it in a redistributable application I build with py2app. The one at http://pythonmac.org/packages/py24-fat/index.html is *incomplete*: it contains some dynamic libraries which are ppc only, which is what also happens when I try to build a universal binary myself. Some libs build fine, but some are only a single arch (the native one). The results of for FILE in `find . 
| grep '\.so'` ; do lipo -info $FILE ; done are: ------------------------------------------------------------------------ ----------- Architectures in the fat file: ./scipy/cluster/_vq.so are: i386 ppc Non-fat file: ./scipy/fftpack/_fftpack.so is architecture: ppc Non-fat file: ./scipy/fftpack/convolve.so is architecture: ppc Non-fat file: ./scipy/integrate/_odepack.so is architecture: ppc Non-fat file: ./scipy/integrate/_quadpack.so is architecture: ppc Non-fat file: ./scipy/integrate/vode.so is architecture: ppc Non-fat file: ./scipy/interpolate/_fitpack.so is architecture: ppc Non-fat file: ./scipy/interpolate/dfitpack.so is architecture: ppc Architectures in the fat file: ./scipy/io/numpyio.so are: i386 ppc Architectures in the fat file: ./scipy/lib/blas/cblas.so are: i386 ppc Non-fat file: ./scipy/lib/blas/fblas.so is architecture: ppc Architectures in the fat file: ./scipy/lib/lapack/atlas_version.so are: i386 ppc Non-fat file: ./scipy/lib/lapack/calc_lwork.so is architecture: ppc Architectures in the fat file: ./scipy/lib/lapack/clapack.so are: i386 ppc Architectures in the fat file: ./scipy/lib/lapack/flapack.so are: i386 ppc Non-fat file: ./scipy/linalg/_flinalg.so is architecture: ppc Non-fat file: ./scipy/linalg/_iterative.so is architecture: ppc Architectures in the fat file: ./scipy/linalg/atlas_version.so are: i386 ppc Non-fat file: ./scipy/linalg/calc_lwork.so is architecture: ppc Architectures in the fat file: ./scipy/linalg/cblas.so are: i386 ppc Architectures in the fat file: ./scipy/linalg/clapack.so are: i386 ppc Non-fat file: ./scipy/linalg/fblas.so is architecture: ppc Non-fat file: ./scipy/linalg/flapack.so is architecture: ppc Architectures in the fat file: ./scipy/linsolve/_csuperlu.so are: i386 ppc Architectures in the fat file: ./scipy/linsolve/_dsuperlu.so are: i386 ppc Architectures in the fat file: ./scipy/linsolve/_ssuperlu.so are: i386 ppc Architectures in the fat file: ./scipy/linsolve/_zsuperlu.so are: i386 ppc Architectures in the 
fat file: ./scipy/ndimage/_nd_image.so are: i386 ppc Non-fat file: ./scipy/odr/__odrpack.so is architecture: ppc Non-fat file: ./scipy/optimize/_cobyla.so is architecture: ppc Non-fat file: ./scipy/optimize/_lbfgsb.so is architecture: ppc Non-fat file: ./scipy/optimize/_minpack.so is architecture: ppc Architectures in the fat file: ./scipy/optimize/_zeros.so are: i386 ppc Non-fat file: ./scipy/optimize/minpack2.so is architecture: ppc Architectures in the fat file: ./scipy/optimize/moduleTNC.so are: i386 ppc Architectures in the fat file: ./scipy/signal/sigtools.so are: i386 ppc Architectures in the fat file: ./scipy/signal/spline.so are: i386 ppc Non-fat file: ./scipy/sparse/sparsetools.so is architecture: ppc Non-fat file: ./scipy/special/_cephes.so is architecture: ppc Non-fat file: ./scipy/special/specfun.so is architecture: ppc Non-fat file: ./scipy/stats/futil.so is architecture: ppc Non-fat file: ./scipy/stats/mvn.so is architecture: ppc Non-fat file: ./scipy/stats/statlib.so is architecture: ppc Architectures in the fat file: ./scipy/stsci/convolve/_correlate.so are: i386 ppc Architectures in the fat file: ./scipy/stsci/image/_combine.so are: i386 ppc ------------------------------------------------------------------------ ----------- So I didn't pursue this any further for the time being. Since I am not an expert on the SciPy build process, I'm trying to work around this. Instead I took your (Chris') versions from http://trichech.us/?page_id=4, both ppc and intel SuperPacks. I yanked the scipy dirs out of the packages, and called them scipy-ppc and scipy-i386. Then what I did was lipo the shared libs together, as described on Apple's site: http://developer.apple.com/opensource/buildingopensourceuniversal.html Like so: mv scipy-ppc scipy-universal cd scipy-universal for FILE in `find . | grep '\.so'` ; do file $FILE ; done # shows all ppc libs for LIB in `find . 
| grep '\.so'` ; do lipo -create $LIB ../scipy-i386/$LIB -output $LIB ; done for FILE in `find . | grep '\.so'` ; do file $FILE ; done # shows all universal libs, yay! cd .. sudo cp -r scipy-universal $PYTHON_SITEPACKAGES/scipy Now this works fine on my G4. I won't have a chance to test on an Intel Mac until later in the week, but this looks promising. Chris, since the packages seem to be identical apart from the shared libs, you could probably do this on the SuperPack and only need to distribute one version. Could I also have some advice on how to deal with this sort of stuff? I imagine that much of it is tied up with not having an Apple-blessed Fortran compiler. (Fingers crossed for Leopard). Thanks for all your help everyone, Josh From cssmwbs at gmail.com Fri Dec 15 02:37:54 2006 From: cssmwbs at gmail.com (WB) Date: Thu, 14 Dec 2006 23:37:54 -0800 Subject: [SciPy-user] error building scipy on linux with mkl Message-ID: <7c13686f0612142337r7e903c9aucab19ffb63020d55@mail.gmail.com> hi, i am attempting to build scipy on a linux machine with mkl 9.0 installed.
when i do python setup.py install i get lots of warnings, and then build failure with the following output: begin error out>> In file included from Lib/fftpack/src/drfft.c:7: Lib/fftpack/src/fftpack.h:44:22: error: mkl_dfti.h: No such file or directory In file included from Lib/fftpack/src/drfft.c:7: Lib/fftpack/src/fftpack.h:44:22: error: mkl_dfti.h: No such file or directory error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -DSCIPY_MKL_H -DSCIPY_MKL_H -I/usr/local/include -I/usr/include -Ibuild/src.linux- i686-2.4 -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c Lib/fftpack/src/drfft.c -o build/temp.linux- i686-2.4/Lib/fftpack/src/drfft.o" failed with exit status 1 < From cssmwbs at gmail.com Fri Dec 15 03:22:38 2006 From: cssmwbs at gmail.com (WB) Date: Fri, 15 Dec 2006 00:22:38 -0800 Subject: [SciPy-user] error building scipy on linux with mkl In-Reply-To: <7c13686f0612142337r7e903c9aucab19ffb63020d55@mail.gmail.com> References: <7c13686f0612142337r7e903c9aucab19ffb63020d55@mail.gmail.com> Message-ID: <7c13686f0612150022i30564ddfj57f694015bb62cd0@mail.gmail.com> uggh... please disregard. not paying attention... doesn't help to add a non-library path to the LD_LIBRARY_PATH variable! :| On 12/14/06, WB wrote: > > hi, > i am attempting to build scipy on a linux machine with mkl 9.0 installed. 
> when i do > > python setup.py install > > i get lots of warnings, and then build failure with the following output: > > begin error out>> > > In file included from Lib/fftpack/src/drfft.c:7: > Lib/fftpack/src/fftpack.h:44:22: error: mkl_dfti.h: No such file or > directory > In file included from Lib/fftpack/src/drfft.c:7: > Lib/fftpack/src/fftpack.h:44:22: error: mkl_dfti.h: No such file or > directory > error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe > -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic > -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -DSCIPY_MKL_H > -DSCIPY_MKL_H -I/usr/local/include -I/usr/include -Ibuild/src.linux- > i686-2.4 -I/usr/lib/python2.4/site-packages/numpy/core/include > -I/usr/include/python2.4 -c Lib/fftpack/src/drfft.c -o build/temp.linux- > i686-2.4/Lib/fftpack/src/drfft.o" failed with exit status 1 > > < > so i checked to see if i had the file it is complaining about: > [bryan at localhost ~]$ locate mkl_dfti.h > /opt/intel/mkl/9.0/include/mkl_dfti.h > > and checked to see if that directory is on my LD_LIBRARY_PATH: > [bryan at localhost ~]$ echo $LD_LIBRARY_PATH > > /opt/intel/mkl/9.0/lib/32:/opt/intel/cc/9.1.045/lib:/opt/intel/mkl/9.0/include > > anyone know what i am supposed to do to get this to work? > > thanks, > bryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From niels.ellegaard at gmail.com Sat Dec 16 01:21:11 2006 From: niels.ellegaard at gmail.com (Niels L Ellegaard) Date: Sat, 16 Dec 2006 06:21:11 +0000 (UTC) Subject: [SciPy-user] Installing scipy-5.2 on fedora 64 bit: cannot import name flapack Message-ID: Dear masters I am trying to install scipy 5.2 on a fedora 64 bit machine with numpy 1.0.1. 
When I try to run scipy.test() I get the following error: from scipy.linalg import flapack ImportError: cannot import name flapack I must be doing something wrong, but I cannot see what, so I would be grateful for some hints (Details below... sorry about the flood) Thanks in advance Niels ***************** Output from scipy.test() ************ python -c 'import scipy; scipy.test()' Found 4 tests for scipy.io.array_import Found 397 tests for scipy.ndimage Found 96 tests for scipy.sparse.sparse Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 6 tests for scipy.interpolate.fitpack Found 6 tests for scipy.interpolate Found 12 tests for scipy.io.mmio Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.io.recaster Warning: FAILURE importing tests for /usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Found 4 tests for scipy.optimize.zeros Found 28 tests for scipy.io.mio Warning: FAILURE importing tests for /usr/lib64/python2.4/site-packages/scipy/linalg/lapack.py:17: ImportError: cannot import name flapack (in ?) Warning: FAILURE importing tests for /usr/lib64/python2.4/site-packages/scipy/lib/blas/__init__.py:10: ImportError: /usr/lib64/python2.4/site-packages/scipy/lib/blas/cblas.so: undefined symbol: cblas_saxpy (in ?) Warning: FAILURE importing tests for /usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Found 1 tests for scipy.optimize.cobyla Found 4 tests for scipy.fftpack.helper Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ...E......................................................................... ............................................................................
............................................................................. ............................................................................. ............................................................................. ..........................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ........................Use minimum degree ordering on A'+A. ...................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. .................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ...................................... /usr/lib64/python2.4/site-packages/scipy/interpolate/fitpack2.py:457: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ....................................................................... ............ ====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/usr/lib64/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ? from stats import * File "/usr/lib64/python2.4/site-packages/scipy/stats/stats.py", line 190, in ? 
import scipy.special as special File "/usr/lib64/python2.4/site-packages/scipy/special/__init__.py", line 10, in ? import orthogonal File "/usr/lib64/python2.4/site-packages/scipy/special/orthogonal.py", line 65, in ? from scipy.linalg import eig File "/usr/lib64/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/lib64/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ? from lapack import get_lapack_funcs File "/usr/lib64/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ? from scipy.linalg import flapack ImportError: cannot import name flapack ---------------------------------------------------------------------- Ran 606 tests in 2.998s FAILED (errors=1) *************** Site.cfg ************* [lapack] library_dirs = /usr/lib64 [atlas] library_dirs = /usr/lib64:/usr/lib64/atlas [blas] library_dirs = /usr/lib64:/usr/lib64/atlas [umfpack] library_dirs = /usr/lib64 include_dirs = /usr/include/ufsparse/ [amd] library_dirs = /usr/lib64 [fftw] library_dirs = /usr/lib64/ [fftw2] library_dirs = /usr/lib64/ [fftw3] library_dirs = /usr/lib64/ **************** Output from installation ******* python setup.py install > log.0 Warning: Subpackage 'Lib' configuration returned as 'scipy' cat log.0 mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: FOUND: libraries = ['fftw3'] library_dirs = ['/usr/lib64/'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] djbfft_info: NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib64/atlas'] language = c include_dirs = ['/usr/include/atlas'] customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using 
config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/lib64/atlas -llapack -lblas -o _configtest ATLAS version 3.6.0 built by root on Thu Jun 3 21:13:07 UTC 2004: UNAME : Linux ravel 2.6.5 #3 SMP Tue Apr 13 13:41:54 UTC 2004 x86_64 GNU/Linux INSTFLG : MMDEF : /home/camm/atlas3-3.6.0/CONFIG/ARCHS/HAMMER64SSE2/gcc/gemm ARCHDEF : /home/camm/atlas3-3.6.0/CONFIG/ARCHS/HAMMER64SSE2/gcc/misc F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 720896 F77 : /home/camm/usr/bin/g77, version GNU Fortran (GCC) 3.3.3 (Debian 20040422) F77FLAGS : -fomit-frame-pointer -O -m64 CC : /usr/bin/gcc, version gcc (GCC) 3.3.3 (Debian 20040422) CC FLAGS : -fomit-frame-pointer -O -mfpmath=387 -m64 MCC : /usr/bin/gcc, version gcc (GCC) 3.3.3 (Debian 20040422) MCCFLAGS : -fomit-frame-pointer -O -mfpmath=387 -m64 success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib64/atlas'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include/atlas'] ATLAS version 3.6.0 lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'lapack', 'blas'] library_dirs = ['/usr/lib64/atlas'] language = f77 include_dirs = ['/usr/include/atlas'] customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/lib64/atlas -llapack -llapack -lblas -o _configtest ATLAS version 3.6.0 built by root on Thu Jun 3 21:13:07 UTC 2004: UNAME : Linux ravel 2.6.5 #3 SMP Tue Apr 13 13:41:54 UTC 2004 x86_64 GNU/Linux INSTFLG : MMDEF : /home/camm/atlas3-3.6.0/CONFIG/ARCHS/HAMMER64SSE2/gcc/gemm ARCHDEF : /home/camm/atlas3-3.6.0/CONFIG/ARCHS/HAMMER64SSE2/gcc/misc F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 720896 F77 : /home/camm/usr/bin/g77, version GNU Fortran (GCC) 3.3.3 (Debian 20040422) F77FLAGS : -fomit-frame-pointer -O -m64 CC : /usr/bin/gcc, version gcc (GCC) 3.3.3 (Debian 20040422) CC FLAGS : -fomit-frame-pointer -O -mfpmath=387 -m64 MCC : /usr/bin/gcc, version gcc (GCC) 3.3.3 (Debian 20040422) MCCFLAGS : -fomit-frame-pointer -O -mfpmath=387 -m64 success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'lapack', 'blas'] library_dirs = ['/usr/lib64/atlas'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include/atlas'] ATLAS version 3.6.0 ATLAS version 3.6.0 non-existing path in 'Lib/linsolve': 'tests' umfpack_info: amd_info: FOUND: libraries = ['amd'] library_dirs = ['/usr/lib64'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/usr/lib64'] swig_opts = ['-I/usr/include/ufsparse'] define_macros = [('SCIPY_UMFPACK_H', None)] include_dirs = ['/usr/include/ufsparse'] non-existing path in 'Lib/maxentropy': 'doc' running install running build running config_fc running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "superlu_src" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. 
building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. adding 'build/src.linux-x86_64-2.4/Lib/interpolate/dfitpack- f2pywrappers.f' to sources. building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. adding 'build/src.linux-x86_64-2.4/build/src.linux-x86_64-2.4/Lib/lib/blas/fblas- f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'Lib/lib/blas/cblas.pyf.src' to sources. f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'Lib/lib/lapack/clapack.pyf.src' to sources. f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. 
building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.linux-x86_64-2.4/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. adding 'build/src.linux-x86_64-2.4/build/src.linux-x86_64-2.4/scipy/linalg/fblas- f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.linux-x86_64-2.4/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.linux-x86_64-2.4/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. adding 'build/src.linux-x86_64-2.4/build/src.linux-x86_64-2.4/scipy/linalg/flapack- f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.linux-x86_64-2.4/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.linalg._iterative" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. 
building extension "scipy.linsolve._zsuperlu" sources building extension "scipy.linsolve._dsuperlu" sources building extension "scipy.linsolve._csuperlu" sources building extension "scipy.linsolve._ssuperlu" sources building extension "scipy.linsolve.umfpack.__umfpack" sources adding 'Lib/linsolve/umfpack/umfpack.i' to sources. building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.sparsetools" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. 
adding 'build/src.linux-x86_64-2.4' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. adding 'build/src.linux-x86_64-2.4/Lib/stats/mvn-f2pywrappers.f' to sources. building extension "scipy.ndimage._nd_image" sources building data_files sources running build_py copying build/src.linux-x86_64-2.4/scipy/__config__.py -> build/lib.linux-x86_64-2.4/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext running install_lib copying build/lib.linux-x86_64-2.4/scipy/__config__.py -> /usr/lib64/python2.4/site-packages/scipy byte-compiling /usr/lib64/python2.4/site-packages/scipy/__config__.py to __config__.pyc running install_data From pearu at cens.ioc.ee Mon Dec 18 02:31:47 2006 From: pearu at cens.ioc.ee (pearu at cens.ioc.ee) Date: Mon, 18 Dec 2006 09:31:47 +0200 (EET) Subject: [SciPy-user] Using fftpack.diff In-Reply-To: Message-ID: On Tue, 5 Dec 2006, Neilen Marais wrote: > Hi > > I'm trying to get the DFT of the derivative of a real valued periodic > signal. If I have the time domain > signal in ts (where one period is stored in ts), I do: > > fs = scipy.fftpack.fft(ts) > dfs = scipy.fftpack.diff(fs) > dts = scipy.fftpack.ifft(dfs) > > However dts seems to be almost purely imaginary whereas ts is purely real. > I haven't done DFT related stuff for a while, am I misunderstanding > completely? 
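The fft/diff/ifft round trip described above can be checked numerically. This is a minimal illustrative sketch using plain numpy.fft (an assumption for portability; scipy.fftpack.diff performs the same i*k multiplication internally, which is why it is fed the time-domain signal directly):

```python
import numpy as np

# One period of a real 2*pi-periodic signal: ts = sin(x)
N = 64
x = 2 * np.pi * np.arange(N) / N
ts = np.sin(x)

# Spectral differentiation: multiply each Fourier mode by i*k,
# where k are the integer wavenumbers of the periodic grid.
fs = np.fft.fft(ts)
k = np.fft.fftfreq(N, d=1.0 / N)  # 0, 1, ..., N/2-1, -N/2, ..., -1
dts = np.fft.ifft(1j * k * fs)

# The derivative of sin is cos, and the result is real up to rounding noise.
assert np.allclose(dts.real, np.cos(x))
assert abs(dts.imag).max() < 1e-12
```

Calling diff on the already-transformed spectrum fs, as in the quoted snippet, differentiates in the wrong domain; diff expects the time-domain samples.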
Yes, you can do dts = scipy.fftpack.diff(ts) Pearu From kxroberto at googlemail.com Mon Dec 18 05:08:39 2006 From: kxroberto at googlemail.com (Robert) Date: Mon, 18 Dec 2006 11:08:39 +0100 Subject: [SciPy-user] GA framework with automatic adjustment of mutation/crossover parameters ? In-Reply-To: References: Message-ID: Marek Wojciechowski wrote: > On Tue, 12 Dec 2006 12:35:39 -0000, wrote: > >> There are some GA frameworks and recipes around for Python (that one of >> scipy (scipy.ga?) disappeared?). Which can be recommended? > > There is a pikaia optimizer in Fortran: > http://www.hao.ucar.edu/Public/models/pikaia/pikaia.html > > I created a Python wrapper of this routine (with f2py) and this is > included in the ffnet distribution. > Look at http://sourceforge.net/projects/ffnet > After downloading and installing ffnet you get pikaia in > Python. Just write: > > from pikaia import pikaia > > Look at the pikaia docstring for info on how to use it. > thanks, is there some info on the method used there - especially if/how it will auto-adjust the mutation/crossover rates? Robert From gael.varoquaux at normalesup.org Mon Dec 18 10:02:17 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 18 Dec 2006 16:02:17 +0100 Subject: [SciPy-user] scipy.signal.convolve speed depending on order of parameters Message-ID: <20061218150205.GB27703@clipper.ens.fr> Hi, One of my colleagues is complaining that the speed of scipy.signal.convolve depends on the order of the arrays passed to the convolve call. He would like some magic optimisation here. I told him this was the kind of optimisation left to the user, but he still isn't happy, so I am propagating his remark here to have other people's opinions. Cheers, Gaël From mwojc at p.lodz.pl Mon Dec 18 14:29:54 2006 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Mon, 18 Dec 2006 19:29:54 -0000 Subject: [SciPy-user] GA framework with automatic adjustment of mutation/crossover parameters ? 
In-Reply-To: References: Message-ID: On Mon, 18 Dec 2006 18:00:06 -0000, wrote: > Marek Wojciechowski wrote: >> On Tue, 12 Dec 2006 12:35:39 -0000, >> wrote: >> >>> There are some GA frameworks and recipes around for Python (that one of >>> scipy (scipy.ga?) disappeared?). Which can be recommended? >> >> There is a pikaia optimizer in Fortran: >> http://www.hao.ucar.edu/Public/models/pikaia/pikaia.html >> >> I created a Python wrapper of this routine (with f2py) and this is >> included in the ffnet distribution. >> Look at http://sourceforge.net/projects/ffnet >> After downloading and installing ffnet you get pikaia in >> Python. Just write: >> >> from pikaia import pikaia >> >> Look at the pikaia docstring for info on how to use it. >> > thanks, is there some info on the method used there - especially if/how > it will auto-adjust the mutation/crossover rates? > Robert Yes, there is some info, and more in the pikaia user guide downloadable from their homepage. -- Marek Wojciechowski From tom.denniston at alum.dartmouth.org Mon Dec 18 13:53:17 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Mon, 18 Dec 2006 12:53:17 -0600 Subject: [SciPy-user] Errors compiling scipy with icc and intel mkl Message-ID: I get a recurrent error no matter what I seem to try compiling scipy. The latest attempt I used the intel compiler and the intel mkl following the instructions on the wiki. I keep getting these undefined references to __MAIN. I am sure this is something really obvious that I am missing but I'm using intel mkl version 8.0.1 and icc version 8.1 and g77 from gcc version GCC 3.2.3. Does anyone know what this error means? 
I know the g77 is rather old but I am unfortunately stuck with it: compile options: '-DSCIPY_MKL_H -DSCIPY_MKL_H -I/local/intel/mkl/8.0.1/include -I/usr/local/include -I/usr/include -I/local/include -Ibuild/src.linux-i686-2.5 -I/local/lib/python2.5/site-packages/numpy/core/include -I/local/include/python2.5 -c' /local/tools/gcc/3.2.3/bin/g77 -L/local/lib -L/lib -L/usr/lib -L/usr/X11R6/lib -L/local/intel/mkl/8.0.1/lib/32/ build/temp.linux-i686-2.5/build/src.linux-i686-2.5/Lib/fftpack/_fftpackmodule.o build/temp.linux-i686-2.5/Lib/fftpack/src/zfft.o build/temp.linux-i686-2.5/Lib/fftpack/src/drfft.o build/temp.linux-i686-2.5/Lib/fftpack/src/zrfft.o build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o -L/local/intel/mkl/8.0.1/lib/32 -L/local/lib/python2.5/config -Lbuild/temp.linux-i686-2.5 -ldfftpack -lmkl -lvml -lpthread -lpython2.5 -lg2c -o build/lib.linux-i686-2.5/scipy/fftpack/_fftpack.so build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o(.text+0x215): In function `get_cache_id_zmklfftnd': : undefined reference to `_intel_fast_memcpy' build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xa7e): In function `fortran_setattr': : undefined reference to `_intel_fast_memcpy' build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xb0b): In function `fortran_setattr': : undefined reference to `_intel_fast_memcpy' build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xf76): In function `array_from_pyobj': : undefined reference to `_intel_fast_memset' /local/tools/gcc/3.2.3/bin/../lib/gcc-lib/i686-pc-linux-gnu/3.2.3/../../../libfrtbegin.a(frtbegin.o)(.text+0x32): In function `main': : undefined reference to `MAIN__' collect2: ld returned 1 exit status build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o(.text+0x215): In function `get_cache_id_zmklfftnd': : undefined reference to `_intel_fast_memcpy' 
build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xa7e): In function `fortran_setattr': : undefined reference to `_intel_fast_memcpy' build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xb0b): In function `fortran_setattr': : undefined reference to `_intel_fast_memcpy' build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xf76): In function `array_from_pyobj': : undefined reference to `_intel_fast_memset' /local/tools/gcc/3.2.3/bin/../lib/gcc-lib/i686-pc-linux-gnu/3.2.3/../../../libfrtbegin.a(frtbegin.o)(.text+0x32): In function `main': : undefined reference to `MAIN__' collect2: ld returned 1 exit status From conor.robinson at gmail.com Mon Dec 18 19:03:02 2006 From: conor.robinson at gmail.com (Conor Robinson) Date: Mon, 18 Dec 2006 16:03:02 -0800 Subject: [SciPy-user] Errors compiling scipy with icc and intel mkl In-Reply-To: References: Message-ID: What about using ifort? You can get a non-commercial or year trial dl from intel? Also, I read that the new intel compilers were tested stable with gcc 4.0.0. Just some thoughts, hope it helps. On 12/18/06, Tom Denniston wrote: > I get a recurrent error no matter what i seem to try compiling scipy. > The latest attempt I used the intel compiler and the intel mkl > following the instructions on the wiki. I keep getting these > undefined references to __MAIN. I am sure this is something really > obvious that I am missing but > > > I'm using intel mkl version 8.0.1 and icc version 8.1 and g77 from gcc > version GCC 3.2.3. Does anyone know what this error means. 
I know > the g77 is rather old but I am unfortunately stuck with it: > > compile options: '-DSCIPY_MKL_H -DSCIPY_MKL_H > -I/local/intel/mkl/8.0.1/include -I/usr/local/include -I/usr/include > -I/local/include -Ibuild/src.linux-i686-2.5 > -I/local/lib/python2.5/site-packages/numpy/core/include > -I/local/include/python2.5 -c' > /local/tools/gcc/3.2.3/bin/g77 -L/local/lib -L/lib -L/usr/lib > -L/usr/X11R6/lib -L/local/intel/mkl/8.0.1/lib/32/ > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/Lib/fftpack/_fftpackmodule.o > build/temp.linux-i686-2.5/Lib/fftpack/src/zfft.o > build/temp.linux-i686-2.5/Lib/fftpack/src/drfft.o > build/temp.linux-i686-2.5/Lib/fftpack/src/zrfft.o > build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o > -L/local/intel/mkl/8.0.1/lib/32 -L/local/lib/python2.5/config > -Lbuild/temp.linux-i686-2.5 -ldfftpack -lmkl -lvml -lpthread > -lpython2.5 -lg2c -o > build/lib.linux-i686-2.5/scipy/fftpack/_fftpack.so > build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o(.text+0x215): In > function `get_cache_id_zmklfftnd': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xa7e): > In function `fortran_setattr': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xb0b): > In function `fortran_setattr': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xf76): > In function `array_from_pyobj': > : undefined reference to `_intel_fast_memset' > /local/tools/gcc/3.2.3/bin/../lib/gcc-lib/i686-pc-linux-gnu/3.2.3/../../../libfrtbegin.a(frtbegin.o)(.text+0x32): > In function `main': > : undefined reference to `MAIN__' > collect2: ld returned 1 exit status > build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o(.text+0x215): In > function `get_cache_id_zmklfftnd': > : undefined reference to 
`_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xa7e): > In function `fortran_setattr': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xb0b): > In function `fortran_setattr': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xf76): > In function `array_from_pyobj': > : undefined reference to `_intel_fast_memset' > /local/tools/gcc/3.2.3/bin/../lib/gcc-lib/i686-pc-linux-gnu/3.2.3/../../../libfrtbegin.a(frtbegin.o)(.text+0x32): > In function `main': > : undefined reference to `MAIN__' > collect2: ld returned 1 exit status From tom.denniston at alum.dartmouth.org Mon Dec 18 19:06:43 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Mon, 18 Dec 2006 18:06:43 -0600 Subject: [SciPy-user] Errors compiling scipy with icc and intel mkl In-Reply-To: References: Message-ID: Ok. This seems to be totally a function of the LD_FLAGS environment variable. It looks like the Scipy build script or distutils puts the LD flags before the .o files on the command line, which causes g77 to error out. Apparently, if you want -L directives on the command line for g77 you need to put them after the .o files. Simply removing the LD_FLAGS from the environment fixes the problem. Or it is a workaround, at least. I unfortunately do not understand the issue well enough to fix the scipy build script if it is in fact broken. --Tom On 12/18/06, Tom Denniston wrote: > I get a recurrent error no matter what I seem to try compiling scipy. > The latest attempt I used the intel compiler and the intel mkl > following the instructions on the wiki. I keep getting these > undefined references to __MAIN. 
I am sure this is something really > obvious that I am missing but > > > I'm using intel mkl version 8.0.1 and icc version 8.1 and g77 from gcc > version GCC 3.2.3. Does anyone know what this error means. I know > the g77 is rather old but I am unfortunately stuck with it: > > compile options: '-DSCIPY_MKL_H -DSCIPY_MKL_H > -I/local/intel/mkl/8.0.1/include -I/usr/local/include -I/usr/include > -I/local/include -Ibuild/src.linux-i686-2.5 > -I/local/lib/python2.5/site-packages/numpy/core/include > -I/local/include/python2.5 -c' > /local/tools/gcc/3.2.3/bin/g77 -L/local/lib -L/lib -L/usr/lib > -L/usr/X11R6/lib -L/local/intel/mkl/8.0.1/lib/32/ > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/Lib/fftpack/_fftpackmodule.o > build/temp.linux-i686-2.5/Lib/fftpack/src/zfft.o > build/temp.linux-i686-2.5/Lib/fftpack/src/drfft.o > build/temp.linux-i686-2.5/Lib/fftpack/src/zrfft.o > build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o > -L/local/intel/mkl/8.0.1/lib/32 -L/local/lib/python2.5/config > -Lbuild/temp.linux-i686-2.5 -ldfftpack -lmkl -lvml -lpthread > -lpython2.5 -lg2c -o > build/lib.linux-i686-2.5/scipy/fftpack/_fftpack.so > build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o(.text+0x215): In > function `get_cache_id_zmklfftnd': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xa7e): > In function `fortran_setattr': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xb0b): > In function `fortran_setattr': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xf76): > In function `array_from_pyobj': > : undefined reference to `_intel_fast_memset' > /local/tools/gcc/3.2.3/bin/../lib/gcc-lib/i686-pc-linux-gnu/3.2.3/../../../libfrtbegin.a(frtbegin.o)(.text+0x32): > In function `main': > : 
undefined reference to `MAIN__' > collect2: ld returned 1 exit status > build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o(.text+0x215): In > function `get_cache_id_zmklfftnd': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xa7e): > In function `fortran_setattr': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xb0b): > In function `fortran_setattr': > : undefined reference to `_intel_fast_memcpy' > build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o(.text+0xf76): > In function `array_from_pyobj': > : undefined reference to `_intel_fast_memset' > /local/tools/gcc/3.2.3/bin/../lib/gcc-lib/i686-pc-linux-gnu/3.2.3/../../../libfrtbegin.a(frtbegin.o)(.text+0x32): > In function `main': > : undefined reference to `MAIN__' > collect2: ld returned 1 exit status > From robert.kern at gmail.com Mon Dec 18 19:19:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 18 Dec 2006 18:19:10 -0600 Subject: [SciPy-user] Errors compiling scipy with icc and intel mkl In-Reply-To: References: Message-ID: <45872FFE.9090705@gmail.com> Tom Denniston wrote: > Ok. This seems to be totally a function of the LD_FLAGS environment > variable. It looks like the Scipy build script or disutils puts the > LD flags before the .o files on the command line which cause g77 to > error out. Apparantely if you want -L directives on the command line > for g77 you need to put them after the .o files. Simply removing the > LD_FLAGS from the environment fixes the problem. Or is a workaround, > at least. I unfortunately do not understand the issue well enough to > fix the scipy build script if it is in fact broken. Yes, LDFLAGS will *replace* all of the link options that distutils will otherwise try to set. 
If you really need to add flags to *all* extensions, use the -L and -l flags for the build_ext command: $ python setup.py build_src build_clib build_ext -L/opt/local/lib build Preferably, though, use the site.cfg file to set these things. If you could tell us which instructions told you to set LDFLAGS, that would be helpful, since we should correct them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tom.denniston at alum.dartmouth.org Mon Dec 18 19:30:24 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Mon, 18 Dec 2006 18:30:24 -0600 Subject: [SciPy-user] Errors compiling scipy with icc and intel mkl In-Reply-To: <45872FFE.9090705@gmail.com> References: <45872FFE.9090705@gmail.com> Message-ID: I actually didn't need the LDFLAGS to build scipy at all. I was building scipy within a script that was building other things that need LDFLAGS. I unset the variable and everything works now. Thanks --Tom On 12/18/06, Robert Kern wrote: > Tom Denniston wrote: > > Ok. This seems to be totally a function of the LD_FLAGS environment > > variable. It looks like the Scipy build script or disutils puts the > > LD flags before the .o files on the command line which cause g77 to > > error out. Apparantely if you want -L directives on the command line > > for g77 you need to put them after the .o files. Simply removing the > > LD_FLAGS from the environment fixes the problem. Or is a workaround, > > at least. I unfortunately do not understand the issue well enough to > > fix the scipy build script if it is in fact broken. > > Yes, LDFLAGS will *replace* all of the link options that distutils will > otherwise try to set. 
If you really need to add flags to *all* extensions, use > the -L and -l flags for the build_ext command: > > $ python setup.py build_src build_clib build_ext -L/opt/local/lib build > > Preferably, though, use the site.cfg file to set these things. > > If you could tell us which instructions told you to set LDFLAGS, that would be > helpful, since we should correct them. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco From joshuafr at gmail.com Tue Dec 19 02:44:57 2006 From: joshuafr at gmail.com (Joshua Petterson) Date: Tue, 19 Dec 2006 08:44:57 +0100 Subject: [SciPy-user] desactivating R Message-ID: Hi all, it's certainly a stupid question, but how can I deactivate the use of R when I do a "from scipy import *" or "from scipy.stats import stats", because I have some trouble with it (random segfault)? Do I have to uninstall "rpy"? Another question: is it possible to save a session from ipython, like we can do with R? thanks From robert.kern at gmail.com Tue Dec 19 04:44:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 19 Dec 2006 03:44:56 -0600 Subject: [SciPy-user] desactivating R In-Reply-To: References: Message-ID: <4587B498.5020102@gmail.com> Joshua Petterson wrote: > Hi all, > it's certainly a stupid question, but how can I deactivate the use of > R when I do a "from scipy import *" or "from scipy.stats import stats", > because I have some trouble with it (random segfault)? Do I have to > uninstall "rpy"? 
You can also edit Lib/stats/__init__.py (in the source distribution; scipy/stats/__init__.py if you want to edit your installed copy) and remove the lines that import from rpy. However, you should note that you are using a very old version of scipy that is not maintained any more. I removed that import from the current version of scipy quite some time ago. > Another question: is it possible to save a session from ipython, like we > can do with R? No, you can't really save a "live" workspace of the Python interpreter. It's a deep restriction of the interpreter implementation, so there's not much ipython can do about it. There is Stackless Python, a modification of the Python interpreter that can do this, but I've never really tested it much less with scipy or ipython. http://www.stackless.com/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From joshuafr at gmail.com Tue Dec 19 05:50:35 2006 From: joshuafr at gmail.com (Joshua Petterson) Date: Tue, 19 Dec 2006 11:50:35 +0100 Subject: [SciPy-user] desactivating R In-Reply-To: <4587B498.5020102@gmail.com> References: <4587B498.5020102@gmail.com> Message-ID: I don't understand why you say "an old version": |Test_Basemap|[11]>import scipy |Test_Basemap|[12]>scipy.__version__ Out [12]:'0.5.1' |Test_Basemap|[13]>import numpy |Test_Basemap|[14]>numpy.__version__ Out [14]:'1.0' |Test_Basemap|[15]>from scipy.stats import stats RHOME= /usr/lib/R RVERSION= 2.3.1 RVER= 2031 RUSER= /home/joshua Loading Rpy version 2031 .. Done. Creating the R object 'r' .. Done 2006/12/19, Robert Kern : > > Joshua Petterson wrote: > > Hi all, > > it's certainely a stupid question, but how can I desactivate the use of > > R when I do a "from scipy import *" or "from scipy.stats import stats", > > because I have some trouble with it (random segfault)? Do I have to > > uninstall "rpy"? 
> > You can also edit Lib/stats/__init__.py (in the source distribution; > scipy/stats/__init__.py if you want to edit your installed copy) and > remove the > lines that import from rpy. However, you should note that you are using a > very > old version of scipy that is not maintained any more. I removed that > import from > the current version of scipy quite some time ago. > > > Another question: is it possible to save a session from ipython, like we > > can do with R? > > No, you can't really save a "live" workspace of the Python interpreter. > It's a > deep restriction of the interpreter implementation, so there's not much > ipython > can do about it. There is Stackless Python, a modification of the Python > interpreter that can do this, but I've never really tested it much less > with > scipy or ipython. > > http://www.stackless.com/ > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Dec 19 06:03:48 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 19 Dec 2006 05:03:48 -0600 Subject: [SciPy-user] desactivating R In-Reply-To: References: <4587B498.5020102@gmail.com> Message-ID: <4587C714.7090907@gmail.com> Joshua Petterson wrote: > I don't understand why you say "an old version": > |Test_Basemap|[11]>import scipy > |Test_Basemap|[12]>scipy.__version__ > Out [12]:'0.5.1' > |Test_Basemap|[13]>import numpy > |Test_Basemap|[14]>numpy.__version__ > Out [14]:'1.0' > |Test_Basemap|[15]>from scipy.stats import stats > RHOME= /usr/lib/R > RVERSION= 2.3.1 > RVER= 2031 > RUSER= /home/joshua > Loading Rpy version 2031 .. 
Done. > Creating the R object 'r' .. Done Huh. Can you show us the scipy/stats/__init__.py file that you are executing? Are you sure that it's not left over from an old install of scipy 0.3.x? Because, rpy is simply not in the code anymore except mentioned in a docstring: [Lib]$ glark -ri rpy * stats/.svn/text-base/info.py.svn-base: 216 - 217 - For many more stat related functions install the software R and the 218 : interface package rpy. 219 + """ 220 + stats/info.py: 216 - 217 - For many more stat related functions install the software R and the 218 : interface package rpy. 219 + """ 220 + [Lib]$ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From joshuafr at gmail.com Tue Dec 19 06:19:07 2006 From: joshuafr at gmail.com (Joshua Petterson) Date: Tue, 19 Dec 2006 12:19:07 +0100 Subject: [SciPy-user] desactivating R In-Reply-To: <4587C714.7090907@gmail.com> References: <4587B498.5020102@gmail.com> <4587C714.7090907@gmail.com> Message-ID: ################## # # stats - Statistical Functions # from info_stats import __doc__ try: import pstats except ImportError: import warnings import sys pyver = 'python%d.%d'%sys.version_info[:2] warnings.warn("The pstats module is not available.\nInstall the %s-profiler Debian package if you need it"%pyver, UserWarning, stacklevel=2) del warnings del sys del pyver from stats import * from distributions import * from rv import * from morestats import * try: # use R functions if installed. 
import rpy from rfuncs import * except ImportError: pass ################## 2006/12/19, Robert Kern : > > Joshua Petterson wrote: > > I don't understand why you say "an old version": > > |Test_Basemap|[11]>import scipy > > |Test_Basemap|[12]>scipy.__version__ > > Out [12]:'0.5.1' > > |Test_Basemap|[13]>import numpy > > |Test_Basemap|[14]>numpy.__version__ > > Out [14]:'1.0' > > |Test_Basemap|[15]>from scipy.stats import stats > > RHOME= /usr/lib/R > > RVERSION= 2.3.1 > > RVER= 2031 > > RUSER= /home/joshua > > Loading Rpy version 2031 .. Done. > > Creating the R object 'r' .. Done > > Huh. Can you show us the scipy/stats/__init__.py file that you are > executing? > Are you sure that it's not left over from an old install of scipy 0.3.x? > Because, rpy is simply not in the code anymore except mentioned in a > docstring: > > [Lib]$ glark -ri rpy * > stats/.svn/text-base/info.py.svn-base: > 216 - > 217 - For many more stat related functions install the software R and > the > 218 : interface package rpy. > 219 + """ > 220 + > stats/info.py: > 216 - > 217 - For many more stat related functions install the software R and > the > 218 : interface package rpy. > 219 + """ > 220 + > [Lib]$ > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Tue Dec 19 06:24:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 19 Dec 2006 05:24:29 -0600 Subject: [SciPy-user] desactivating R In-Reply-To: References: <4587B498.5020102@gmail.com> <4587C714.7090907@gmail.com> Message-ID: <4587CBED.3050905@gmail.com> Joshua Petterson wrote: > ################## > # > # stats - Statistical Functions > # > > from info_stats import __doc__ > > try: > import pstats > except ImportError: > import warnings > import sys > pyver = 'python%d.%d'%sys.version_info[:2] > warnings.warn("The pstats module is not available.\nInstall the > %s-profiler Debian package if you need it"%pyver, UserWarning, stacklevel=2) > del warnings > del sys > del pyver > > from stats import * > from distributions import * > from rv import * > from morestats import * > try: # use R functions if installed. > import rpy > from rfuncs import * > except ImportError: > pass > ################## Okay, yeah, that's a very old version of that file. This is what it's supposed to look like: http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/stats/__init__.py I recommend that you delete that scipy and reinstall. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From joshuafr at gmail.com Tue Dec 19 06:31:04 2006 From: joshuafr at gmail.com (Joshua Petterson) Date: Tue, 19 Dec 2006 12:31:04 +0100 Subject: [SciPy-user] desactivating R In-Reply-To: <4587CBED.3050905@gmail.com> References: <4587B498.5020102@gmail.com> <4587C714.7090907@gmail.com> <4587CBED.3050905@gmail.com> Message-ID: hmm, very strange, I have compiled scipy two weeks ago. Ok, I'm going to delete scipy and reinstall it. Do I just have to erase the scipy directory under "site-package" or is there others specials files? 
thanks for your help 2006/12/19, Robert Kern : > > Joshua Petterson wrote: > > ################## > > # > > # stats - Statistical Functions > > # > > > > from info_stats import __doc__ > > > > try: > > import pstats > > except ImportError: > > import warnings > > import sys > > pyver = 'python%d.%d'%sys.version_info[:2] > > warnings.warn("The pstats module is not available.\nInstall the > > %s-profiler Debian package if you need it"%pyver, UserWarning, > stacklevel=2) > > del warnings > > del sys > > del pyver > > > > from stats import * > > from distributions import * > > from rv import * > > from morestats import * > > try: # use R functions if installed. > > import rpy > > from rfuncs import * > > except ImportError: > > pass > > ################## > > Okay, yeah, that's a very old version of that file. This is what it's > supposed > to look like: > > > http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/stats/__init__.py > > I recommend that you delete that scipy and reinstall. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Dec 19 12:46:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 19 Dec 2006 11:46:59 -0600 Subject: [SciPy-user] desactivating R In-Reply-To: References: <4587B498.5020102@gmail.com> <4587C714.7090907@gmail.com> <4587CBED.3050905@gmail.com> Message-ID: <45882593.9020607@gmail.com> Joshua Petterson wrote: > hmm, very strange, I have compiled scipy two weeks ago. > Ok, I'm going to delete scipy and reinstall it. 
Do I just have to erase > the scipy directory under "site-package" or is there others specials files? > thanks for your help No, that should be it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From joshuafr at gmail.com Wed Dec 20 03:30:53 2006 From: joshuafr at gmail.com (Joshua Petterson) Date: Wed, 20 Dec 2006 09:30:53 +0100 Subject: [SciPy-user] desactivating R In-Reply-To: <45882593.9020607@gmail.com> References: <4587B498.5020102@gmail.com> <4587C714.7090907@gmail.com> <4587CBED.3050905@gmail.com> <45882593.9020607@gmail.com> Message-ID: There are some problems with the numpy package for Ubuntu: after installing it, python doesn't find linalg and fft. I'm going to install it from source. 2006/12/19, Robert Kern : > > Joshua Petterson wrote: > > hmm, very strange, I have compiled scipy two weeks ago. > > Ok, I'm going to delete scipy and reinstall it. Do I just have to erase > > the scipy directory under "site-package" or is there others specials > files? > > thanks for your help > > No, that should be it. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From strawman at astraw.com Wed Dec 20 03:37:58 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 20 Dec 2006 00:37:58 -0800 Subject: [SciPy-user] desactivating R In-Reply-To: References: <4587B498.5020102@gmail.com> <4587C714.7090907@gmail.com> <4587CBED.3050905@gmail.com> <45882593.9020607@gmail.com> Message-ID: <4588F666.1030701@astraw.com> there's a bug in their version on edgy eft. i keep meaning to file a report on launchpad. install "python-numpy-ext" and you'll be fine. Joshua Petterson wrote: > There are some problems from numpy package for ubuntu, after > installing it, > python doen't find linalg and fft. I'm going to install it from sources. > > 2006/12/19, Robert Kern < robert.kern at gmail.com > >: > > Joshua Petterson wrote: > > hmm, very strange, I have compiled scipy two weeks ago. > > Ok, I'm going to delete scipy and reinstall it. Do I just have > to erase > > the scipy directory under "site-package" or is there others > specials files? > > thanks for your help > > No, that should be it. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From niels.ellegaard at gmail.com Wed Dec 20 10:07:15 2006 From: niels.ellegaard at gmail.com (Niels L Ellegaard) Date: Wed, 20 Dec 2006 15:07:15 +0000 (UTC) Subject: [SciPy-user] Installing scipy-5.2 on fedora 64 bit: cannot import References: Message-ID: Niels L Ellegaard gmail.com> writes: > I am trying to install scipy 5.2 on a fedora 64 bit machine with > numpy 1.0.1. When I try to run scipy.test() i get the following > error > > from scipy.linalg import flapack >ImportError: cannot import name flapack I did a little more testing. Apparently the problem is clapack.so (See below) I hope somebody can give me a hint. Otherwise i will have to install Ubuntu. Niels Some more info: $ python -c 'import os,sys;print os.name,sys.platform' posix linux2 $ python -c 'import sys;print sys.version' 2.4.3 (#1, Jun 13 2006, 11:46:22) [GCC 4.1.1 20060525 (Red Hat 4.1.1-1)] $ python -c 'import Numeric;print Numeric.__version__' 23.7 $ f2py -v 2_3396 $ uname -a Linux xx.ac.in 2.6.17-1.2139_FC5 #1 SMP Fri Jun 23 12:40:11 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux $ python /usr/lib64/python2.4/site-packages/scipy/linalg/lapack.py ....... File "/usr/lib64/python2.4/site-packages/scipy/linalg/lapack.py", line 18, in ? 
from scipy.linalg import clapack ImportError: /usr/lib64/python2.4/site-packages/scipy/linalg/ clapack.so: undefined symbol: clapack_sgesv $ldd /usr/lib64/python2.4/site-packages/scipy/linalg/clapack.so shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory liblapack.so.3 => /usr/lib64/atlas/liblapack.so.3 (0x00002aaaaabdc000) libblas.so.3 => /usr/lib64/atlas/libblas.so.3 (0x00002aaaab1ef000) libg2c.so.0 => /usr/lib64/libg2c.so.0 (0x00002aaaaba81000) libm.so.6 => /lib64/libm.so.6 (0x00002aaaabba4000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002aaaabd26000) libc.so.6 => /lib64/libc.so.6 (0x00002aaaabe34000) libgfortran.so.1 => /usr/lib64/libgfortran.so.1 (0x00002aaaac07e000) /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) $ python setup_atlas_version.py /usr/lib64/python2.4/site-packages/numpy/distutils/ misc_util.py:1391: UserWarning: Use Configuration('linalg','',top_path=None) instead of deprecated default_config_dict('linalg','',None) warnings.warn('Use Configuration(%r,%r,top_path=%r) instead of '\ Traceback (most recent call last): File "setup_atlas_version.py", line 29, in ? setup(**configuration()) File "setup_atlas_version.py", line 13, in configuration del config['fortran_libraries'] KeyError: 'fortran_libraries' $ gcc -v Using built-in specs. 
Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/ java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux Thread model: posix gcc version 4.1.1 20060525 (Red Hat 4.1.1-1) $ g77 --version GNU Fortran (GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-55.fc5)) 3.2.3 20030502 (Red Hat Linux 3.2.3-13) Copyright (C) 2002 Free Software Foundation, Inc. $ python /usr/lib64/python2.4/test/test_distutils.py test_home_installation_scheme (distutils.tests.test_install. InstallTestCase) ... FAIL test_package_data (distutils.tests.test_build_py.BuildPyTestCase) ... ok test_build (distutils.tests.test_build_scripts.BuildScriptsTestCase) ... ok test_default_settings (distutils.tests.test_build_scripts. BuildScriptsTestCase) ... ok test_command_packages_cmdline (distutils.tests.test_dist. DistributionTestCase) ... ok test_command_packages_configfile (distutils.tests.test_dist.DistributionTestCase) ... ok test_command_packages_unspecified (distutils.tests.test_dist.DistributionTestCase) ... ok test_default_settings (distutils.tests.test_install_scripts.InstallScriptsTestCase) ... ok test_installation (distutils.tests.test_install_scripts. InstallScriptsTestCase) ... ok ================================================================= FAIL: test_home_installation_scheme (distutils.tests.test_install. 
InstallTestCase) ----------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/distutils/tests/test_install.py", line 43, in test_home_installation_scheme check_path(cmd.install_platlib, libdir) File "/usr/lib64/python2.4/distutils/tests/test_install.py", line 39, in check_path self.assertEqual(got, expected) AssertionError: '/tmp/tmpddXPuo/installation/lib64/python' != '/tmp/tmpddXPuo/installation/lib/python' ---------------------------------------------------------------- Ran 9 tests in 0.075s FAILED (failures=1) Traceback (most recent call last): File "/usr/lib64/python2.4/test/test_distutils.py", line 17, in ? test_main() File "/usr/lib64/python2.4/test/test_distutils.py", line 13, in test_main test.test_support.run_unittest(distutils.tests. test_suite()) File "/usr/lib64/python2.4/test/test_support.py", line 290, in run_unittest run_suite(suite, testclass) File "/usr/lib64/python2.4/test/test_support.py", line 275, in run_suite raise TestFailed(err) test.test_support.TestFailed: Traceback (most recent call last): File "/usr/lib64/python2.4/distutils/tests/test_install.py", line 43, in test_home_installation_scheme check_path(cmd.install_platlib, libdir) File "/usr/lib64/python2.4/distutils/tests/test_install.py", line 39, in check_path self.assertEqual(got, expected) AssertionError: '/tmp/tmpddXPuo/installation/lib64/python' != '/tmp/tmpddXPuo/installation/lib/python' From ndbecker2 at gmail.com Wed Dec 20 11:01:24 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 20 Dec 2006 11:01:24 -0500 Subject: [SciPy-user] Installing scipy-5.2 on fedora 64 bit: cannot import References: Message-ID: Do you have to have 5.2? smart info scipy: Name: scipy Version: 0.5.1-4.fc6.1 at x86_64 Priority: 0 Group: Development/Libraries Installed Size: 18.4MB Reference URLs: http://numeric.scipy.org Flags: Channels: Fedora Extras 6 - x86_64; RPM Database [...]
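For import failures like the `undefined symbol: clapack_sgesv` one above, a quick first check is which BLAS/LAPACK libraries numpy itself was built against — a LAPACK that lacks ATLAS's C interface (the `clapack_*` symbols) is a plausible cause of scipy's `clapack` extension failing to load. A minimal sketch (output varies by build):

```python
import numpy

# Print the BLAS/LAPACK build configuration of this numpy install.
# If the listed LAPACK was built without ATLAS's C interface (the
# clapack_* symbols), scipy.linalg can fail at import time with
# "undefined symbol: clapack_sgesv", so the linked libraries are
# worth checking before rebuilding scipy.
numpy.__config__.show()
```

Comparing this output against the libraries that `ldd` reports for `clapack.so` shows whether scipy was linked against the same LAPACK that numpy found.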
From robert.kern at gmail.com Wed Dec 20 13:30:26 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 20 Dec 2006 12:30:26 -0600 Subject: [SciPy-user] desactivating R In-Reply-To: <4588F666.1030701@astraw.com> References: <4587B498.5020102@gmail.com> <4587C714.7090907@gmail.com> <4587CBED.3050905@gmail.com> <45882593.9020607@gmail.com> <4588F666.1030701@astraw.com> Message-ID: <45898142.5030909@gmail.com> Andrew Straw wrote: > there's a bug in their version on edgy eft. i keep meaning to file a > report on launchpad. install "python-numpy-ext" and you'll be fine. There's already a bug report in Debian for this. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=389118 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Fri Dec 22 18:58:26 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 23 Dec 2006 00:58:26 +0100 Subject: [SciPy-user] (newbie) IIR, is there a MatLab "filter" equivalent ? Message-ID: <458C7122.2050209@ru.nl> hello, this might be a dumb question, if so, sorry for my ignorance, but my math is rusty, and I'm a complete newbie (to Python), I haven't even installed Python on my computer yet ;-) I'm evaluating some packages for easy data-acquisition and (real-time) data-analysis, specifically for bio-physical signals (e.g. ECG, EOG, etc). I've seen LabView and threw it in the recycle bin. I've seen MatLab and decided it needed a user-friendly wrapper, after which it looks quite powerful.
After my positive experiences with MatLab scripting (didn't realize scripting could be so fast), I searched for more and found 2 other interesting packages (both open source), - SciLab : almost identical to MatLab, and with an equivalent Simulation module - Python + huge collection of tools : suited for a wide range of platforms, even for small PIC micro controllers I quickly read some documents about NumPy and SciPy, and most functions I need are there, even the clip function which is missing in MatLab. But I couldn't find the equivalent of the MatLab "filter" function: y = filter(b,a,X) The filter function is implemented as a direct form II transposed structure, y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb) - a(2)*y(n-1) - ... - a(na+1)*y(n-na) Is there a Python equivalent to this filter function ? I tried to look in the archives of this mailing list, but that's not a job for humans. (btw, does anyone have the complete list archive in Thunderbird format ?) What I also couldn't find ( I do admit that Python information is overwhelming, but maybe that's the problem for newbies) is there a toolbox to generate the filter polynomials ? thanks, Stef Mientki From ckkart at hoc.net Fri Dec 22 23:28:43 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 23 Dec 2006 13:28:43 +0900 Subject: [SciPy-user] (newbie) IIR, is there a MatLab "filter" equivalent ? In-Reply-To: <458C7122.2050209@ru.nl> References: <458C7122.2050209@ru.nl> Message-ID: <458CB07B.90300@hoc.net> Stef Mientki wrote: Is there a Python equivalent to this filter function ? > > I tried to look in the archives of this mailing list, but that's not a > job for humans. Try gmane: http://news.gmane.org/gmane.comp.python.scientific.user Christian From oliphant at ee.byu.edu Sat Dec 23 02:19:42 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat, 23 Dec 2006 00:19:42 -0700 Subject: [SciPy-user] (newbie) IIR, is there a MatLab "filter" equivalent ?
In-Reply-To: <458C7122.2050209@ru.nl> References: <458C7122.2050209@ru.nl> Message-ID: <458CD88E.2020603@ee.byu.edu> Stef Mientki wrote: > hello, > > But I couldn't find the equivalent of the MatLab "filter" function: y = > filter(b,a,X) > The filter function is implemented as a direct form II transposed > structure, > y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb) > - a(2)*y(n-1) - ... - a(na+1)*y(n-na) > > scipy.signal.lfilter (l for linear in front of the filter). -Travis From s.mientki at ru.nl Sat Dec 23 04:28:21 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 23 Dec 2006 10:28:21 +0100 Subject: [SciPy-user] (newbie) IIR, is there a MatLab "filter" equivalent ? In-Reply-To: <458CB07B.90300@hoc.net> References: <458C7122.2050209@ru.nl> <458CB07B.90300@hoc.net> Message-ID: <458CF6B5.4050802@ru.nl> >> I tried to look in the archives of this mailing list, but that's not a >> job for humans. >> > > Try gmane: > > http://news.gmane.org/gmane.comp.python.scientific.user > > Christian > thanks Christian, gmane does really look nice (and human) cheers, Stef From s.mientki at ru.nl Sat Dec 23 04:33:16 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 23 Dec 2006 10:33:16 +0100 Subject: [SciPy-user] (newbie) IIR, is there a MatLab "filter" equivalent ? In-Reply-To: <458CD88E.2020603@ee.byu.edu> References: <458C7122.2050209@ru.nl> <458CD88E.2020603@ee.byu.edu> Message-ID: <458CF7DC.6060304@ru.nl> > scipy.signal.lfilter (l for linear in front of the filter). > > -Travis > > thanks Travis, that was just the answer I was looking for, and it removed my last excuse, not to start with Python ;-) cheers, Stef From mattknox_ca at hotmail.com Sat Dec 23 13:42:39 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Sat, 23 Dec 2006 13:42:39 -0500 Subject: [SciPy-user] moving average, moving standard deviation, etc... Message-ID: Does anyone know of a slick way to calculate a simple moving average and/or moving standard deviation?
And when I say slick, I mean loop-free way, because it would be fairly easy to code a looping way of doing it, it would just be really slow. To be more specific, when I say moving standard deviation, I mean for example... if A is a 1-dimensional array with 100 elements, and I'm using a window size of 10, then the result at index 10 would be A[0:10].std(), the result at index 11 would be A[1:11].std() , etc... And similarly for moving average, and so forth. Is there a general way to do these kinds of things? Or would it have to be a special case for each type of calculation to avoid looping? Any help is greatly appreciated. Thanks in advance, - Matt From gael.varoquaux at normalesup.org Sat Dec 23 13:51:05 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 23 Dec 2006 19:51:05 +0100 Subject: [SciPy-user] moving average, moving standard deviation, etc... In-Reply-To: References: Message-ID: <20061223185105.GD7838@clipper.ens.fr> Hi, For all linear operations a convolution will do the trick. For non-linear operations, such as the standard deviation, an ugly but efficient way of doing it would be: sqrt(A[0:-10]**2 + A[1:-9]**2 + ... ) Maybe there is a more beautiful way of writing this. Gaël On Sat, Dec 23, 2006 at 01:42:39PM -0500, Matt Knox wrote: > Does anyone know of a slick way to calculate a simple moving average and/or moving standard deviation? And when I say slick, I mean loop-free way, because it would be fairly easy to code a looping way of doing it, it would just be really slow. > To be more specific, when I say moving standard deviation, I mean for example... > if A is a 1-dimensional array with 100 elements, and I'm using a window size of 10, then the result at index 10 would be A[0:10].std(), the result at index 11 would be A[1:11].std() , etc... > And similarly for moving average, and so forth. Is there a general way to do these kinds of things? Or would it have to be a special case for each type of calculation to avoid looping?
> Any help is greatly appreciated. Thanks in advance, > - Matt > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From peridot.faceted at gmail.com Sat Dec 23 14:07:19 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Sat, 23 Dec 2006 14:07:19 -0500 Subject: [SciPy-user] moving average, moving standard deviation, etc... In-Reply-To: References: Message-ID: On 23/12/06, Matt Knox wrote: > And similarly for moving average, and so forth. Is there a general way to do these kinds of things? Or would it have to be a special case for each type of calculation to avoid looping? I don't imagine it would be particularly efficient, but you could use fancy indexing (or some weird code I posted a few months ago to avoid copying) to construct a new array such that slices along one axis are the (overlapping) chunks of the original array. Then any operation you want to perform on all the chunks can just be performed along this axis of the new array. A. M. Archibald From palazzol at comcast.net Sun Dec 24 07:46:08 2006 From: palazzol at comcast.net (Frank Palazzolo) Date: Sun, 24 Dec 2006 07:46:08 -0500 Subject: [SciPy-user] moving average, moving standard deviation, etc... In-Reply-To: References: Message-ID: <458E7690.9070205@comcast.net> Hi, The technique I've seen involves keeping some intermediate variables, and subtracting off the oldest value while adding on the newest. At each timestep then, you just update the variables and recalculate the result. xsum += (x[i] - x[i-window_size]) x2sum += (x[i]*x[i] - x[i-window_size]*x[i-window_size]) average = xsum/window_size stdev_squared = x2sum/window_size - (xsum/window_size)**2 stdev = sqrt(stdev_squared) If you want the "sample" stdev, you can compute it directly, or use: stdev_sample_squared = window_size*stdev_squared/(window_size-1) stdev_sample = sqrt(stdev_sample_squared) Don't get me started on this though...
IMHO it's usually a waste of time. -Frank Matt Knox wrote: > Does anyone know of a slick way to calculate a simple moving average and/or moving standard deviation? And when I say slick, I mean loop-free way, because it would be fairly easy to code a looping way of doing it, it would just be really slow. > > To be more specific, when I say moving standard deviation, I mean for example... > > if A is a 1-dimensional array with 100 elements, and I'm using a window size of 10, then the result at index 10 would be A[0:10].std(), the result at index 11 would be A[1:11].std() , etc... > > And similarly for moving average, and so forth. Is there a general way to do these kinds of things? Or would it have to be a special case for each type of calculation to avoid looping? > > Any help is greatly appreciated. Thanks in advance, > > - Matt > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From mattknox_ca at hotmail.com Sun Dec 24 11:17:30 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Sun, 24 Dec 2006 11:17:30 -0500 Subject: [SciPy-user] accumulate with custom functions Message-ID: I work quite a bit with another language called FAME, which is very vector based. In FAME I calculate exponentially weighted moving averages (exp mave) in the following way: suppose X is a 1 dimensional array, then to Calculate an exp mave I would do: myExpMave = X Case firstvalue(X)+1 to lastvalue(X) #sets global operating range set myExpMave[N] = myExpMave[N-1] + k * (myExpMave[N]-myExpMave[N-1]) #where k is some constant smoothing factor, N is an index placeholder recognized by FAME essentially, what this does in numpy terms is accumulate with a function defined as "def mycalc(a, b, k): return a + k * (b - a)" So, getting to the topic of my post, is there any way to accomplish this kind of accumulation swiftly without resorting to writing a new ufunc in C?
Or what would be the best way to do this kind of thing generally? I.e. accumulate with some kind of arbitrary, but still simple, function. Thanks in advance for any help you can provide. - Matt From mattknox_ca at hotmail.com Sun Dec 24 11:23:08 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Sun, 24 Dec 2006 16:23:08 +0000 (UTC) Subject: [SciPy-user] moving average, moving standard deviation, etc... References: <458E7690.9070205@comcast.net> Message-ID: > > Hi, > > The technique I've seen involves keeping some intermediate variables, > and subtracting off the oldest value while adding on the newest. At > each timestep then, you just update the variables and recalculate the > result. > > xsum += (x[i] - x[i-window_size]) > x2sum += (x[i]*x[i] - x[i-window_size]*x[i-window_size]) > average = xsum/window_size > stdev_squared = x2sum/window_size - (xsum/window_size)**2 > stdev = sqrt(stdev_squared) > > If you want the "sample" stdev, you can compute it directly, or use: > > stdev_sample_squared = window_size*stdev_squared/(window_size-1) > stdev_sample = sqrt(stdev_sample_squared) > > Don't get me started on this though... IMHO it's usually a waste of time. > > -Frank > Ok, thanks.
So it sounds like there is no easy way to do this generally without > doing some looping in python (for the standard deviation anyway), or writing > some C code. I'm more interested in the general strategy for doing these kinds > of calculations than these specific examples themselves necessarily. Although > they are useful for making pretty charts sometimes. Like someone mentioned earlier, you can do averaging using numpy.convolve. For the rest, C code may be the easiest option -- and with ctypes it's a breeze. Take a look at Albert's page, http://www.scipy.org/Cookbook/Ctypes2 and let me know if you need further examples. Cheers Stéfan From stefan at sun.ac.za Sun Dec 24 14:11:18 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 24 Dec 2006 21:11:18 +0200 Subject: [SciPy-user] accumulate with custom functions In-Reply-To: References: Message-ID: <20061224191118.GB313@mentat.za.net> On Sun, Dec 24, 2006 at 11:17:30AM -0500, Matt Knox wrote: > > I work quite a bit with another language called FAME, which is very vector based. In FAME I calculate exponentially weighted moving averages (exp mave) in the following way: > > suppose X is a 1 dimensional array, then to Calculate an exp mave I would do: > > myExpMave = X > Case firstvalue(X)+1 to lastvalue(X) #sets global operating range > set myExpMave[N] = myExpMave[N-1] + k * > (myExpMave[N]-myExpMave[N-1]) #where k is some constant smoothing > factor, N is an index placeholder recognized by FAME This can be done with standard indexing: y = x[:-1] + k * (x[1:] - x[:-1]) or y = x[:-1] + k * numpy.diff(x) You may also be interested in numpy.vectorize Cheers Stéfan From palazzol at comcast.net Sun Dec 24 21:18:20 2006 From: palazzol at comcast.net (Frank Palazzolo) Date: Sun, 24 Dec 2006 21:18:20 -0500 Subject: [SciPy-user] moving average, moving standard deviation, etc...
In-Reply-To: References: <458E7690.9070205@comcast.net> Message-ID: <458F34EC.9060002@comcast.net> Oops - I just realized that you said "no looping", as in "any looping is done inside the API function." Sorry about that - I don't know of any way to do it that way. -Frank From peridot.faceted at gmail.com Sun Dec 24 21:53:04 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Sun, 24 Dec 2006 22:53:04 -0400 Subject: [SciPy-user] moving average, moving standard deviation, etc... In-Reply-To: References: <458E7690.9070205@comcast.net> Message-ID: On 24/12/06, Matt Knox wrote: > Ok, thanks. So it sounds like there is no easy way to do this generally without > doing some looping in python (for the standard deviation anyway), or writing > some C code. I'm more interested in the general strategy for doing these kinds > of calculations than these specific examples themselves necessarily. Although > they are useful for making pretty charts sometimes. Fancy indexing will do the job, more or less. This is the general idea I was trying to get at: chunk = 10 data = arange(10000)/10000. indices = arange(len(data)-chunk+1)[:,newaxis]+arange(chunk)[newaxis,:] data_chunks = data[indices] average(data_chunks,axis=1) var(data_chunks,axis=1) and so on... The only problem is that it expands your data rather a lot. Personally I wouldn't worry about that until my application was written. I posted, some time ago, a function that would allow the easy construction of data_chunks as a view of the original array, that is, with no data copying. (Sorry, I don't remember how to find the thread.)
In-Reply-To: References: Message-ID: <73eb51090612250432r51313d32q392c5246b28b9bd4@mail.gmail.com> For operations which may be reduced to sums (this includes average and std), you can use cumulative sum: >>> A = arange(10.0) >>> w = 2 # window length >>> C = zeros_like(A) >>> B = A.cumsum() >>> C[w:] = (B[w:] - B[:-w]) / w # running average >>> C[:w] = B[:w] / arange(1,w+1) # just average for first elements >>> print vstack((A, C)) [[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. ] [ 0. 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5]] Denis On 12/23/06, Matt Knox wrote: > > Does anyone know of a slick way to calculate a simple moving average and/or moving standard deviation? And when I say slick, I mean loop-free way, because it would be fairly easy to code a looping way of doing it, it would just be really slow. > > To be more specific, when I say moving standard deviation, I mean for example... > > if A is a 1-dimensional array with 100 elements, and I'm using a window size of 10, then the result at index 10 would be A[0:10].std(), the result at index 11 would be A[1:11].std() , etc... > > And similarly for moving average, and so forth. Is their a general way to do these kind of things? Or would it have to be a special case for each type of calculation to avoid looping? > > Any help is greatly appreciated. Thanks in advance, > > - Matt From mattknox_ca at hotmail.com Tue Dec 26 21:28:11 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Wed, 27 Dec 2006 02:28:11 +0000 (UTC) Subject: [SciPy-user] accumulate with custom functions References: <20061224191118.GB313@mentat.za.net> Message-ID: > > On Sun, Dec 24, 2006 at 11:17:30AM -0500, Matt Knox wrote: > > > > I work quite a bit with another language called FAME, which is very vector based. 
In FAME I calculate > exponentially weighted moving averages (exp mave) in the following way: > > > > suppose X is a 1 dimensional array, then to Calculate an exp mave I would do: > > > > myExpMave = X > > Case firstvalue(X)+1 to lastvalue(X) #sets global operating range > > set myExpMave[N] = myExpMave[N-1] + k * > > (myExpMave[N]-myExpMave[N-1]) #where k is some constant smoothing > > factor, N is an index placeholder recognized by FAME > > This can be done with standard indexing: > > y = x[:-1] + k * (x[1:] - x[:-1]) > > or > > y = x[:-1] + k * numpy.diff(x) > > You may also be interested in > > numpy.vectorize > > Cheers > Stéfan > Hi Stéfan, that's not quite what I was looking for actually. My description was a bit too ambiguous I think. I'll just describe it by example. Suppose x = numpy.arange(20) and let k = 0.2. If y is the result, then

y[0] = x[0] + 0.2*(x[1] - x[0]) == 0 + 0.2*(1 - 0) = 0.2
y[1] = y[0] + 0.2*(x[2] - y[0]) == 0.2 + 0.2*(2 - 0.2) = 0.56
y[2] = y[1] + 0.2*(x[3] - y[1]) == 0.56 + 0.2*(3 - 0.56) = 1.048
y[3] = y[2] + 0.2*(x[4] - y[2]) == 1.048 + 0.2*(4 - 1.048) = 1.6384

etc... Is there any way to do this kind of cumulative function in numpy? I tried defining a function "def myfunc(a, b): return a + k * (b - a)" and using numpy.vectorize on that, but it doesn't seem to give me an accumulate method when I do that. Is writing a ufunc in C the only thing I can do here and get good speed? Thanks, - Matt From stefan at sun.ac.za Wed Dec 27 03:33:28 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 27 Dec 2006 10:33:28 +0200 Subject: [SciPy-user] moving average, moving standard deviation, etc... In-Reply-To: References: <458E7690.9070205@comcast.net> Message-ID: <20061227083328.GE8241@mentat.za.net> On Sun, Dec 24, 2006 at 10:53:04PM -0400, A. M. Archibald wrote: > On 24/12/06, Matt Knox wrote: > > > Ok, thanks.
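For the record, the recursion Matt describes, y[n] = y[n-1] + k*(x[n] - y[n-1]), is a first-order IIR filter, so it can be evaluated without a Python loop by scipy.signal.lfilter. A sketch (assuming scipy is available; it seeds y[0] = x[0] and sets aside the off-by-one x indexing in Matt's worked example):

```python
import numpy as np
from scipy.signal import lfilter

def ewma(x, k):
    """Exponentially weighted moving average:
    y[0] = x[0];  y[n] = y[n-1] + k * (x[n] - y[n-1])."""
    x = np.asarray(x, dtype=float)
    # Rewritten as y[n] = k*x[n] + (1 - k)*y[n-1], this is an IIR filter
    # with b = [k] and a = [1, -(1 - k)].  Seeding the filter state with
    # (1 - k)*x[0] makes the first output come out as x[0].
    y, _ = lfilter([k], [1.0, -(1.0 - k)], x, zi=[(1.0 - k) * x[0]])
    return y
```

The same b, a pattern with different coefficients handles any fixed linear recursion, which covers the loop-free "accumulate" being asked for whenever the update is linear in the previous output.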
So it sounds like there is no easy way to do this generally without > > doing some looping in python (for the standard deviation anyway), or writing > > some C code. I'm more interested in the general strategy for doing these kinds > > of calculations than these specific examples themselves necessarily. Although > > they are useful for making pretty charts sometimes. > > Fancy indexing will do the job, more or less. > > This is the general idea I was trying to get at: > > chunk = 10 > data = arange(10000)/10000. > > indices = arange(len(data)-chunk+1)[:,newaxis]+arange(chunk)[newaxis,:] > > data_chunks = data[indices] > > average(data_chunks,axis=0) > var(data_chunks,axis=0) > and so on... > > The only problem is that it expands your data rather a lot. Personally > I wouldn't worry about that until my application was written. I > posted, some time ago, a function that would allow the easy > construction of data_chunks as a view of the original array, that is, > with no data copying. (Sorry, I don't remember how to find the > thread.) http://thread.gmane.org/gmane.comp.python.scientific.user/9570/focus=9579 Cheers St?fan From stefan at sun.ac.za Wed Dec 27 06:20:16 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 27 Dec 2006 13:20:16 +0200 Subject: [SciPy-user] moving average, moving standard deviation, etc... In-Reply-To: <20061227083328.GE8241@mentat.za.net> References: <458E7690.9070205@comcast.net> <20061227083328.GE8241@mentat.za.net> Message-ID: <20061227112016.GB16044@mentat.za.net> On Wed, Dec 27, 2006 at 10:33:28AM +0200, Stefan van der Walt wrote: > On Sun, Dec 24, 2006 at 10:53:04PM -0400, A. M. Archibald wrote: > > On 24/12/06, Matt Knox wrote: > > > > > Ok, thanks. So it sounds like there is no easy way to do this generally without > > > doing some looping in python (for the standard deviation anyway), or writing > > > some C code. 
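The no-copy construction of data_chunks that A. M. Archibald mentions can be rebuilt with NumPy's stride tricks; a sketch (note that with this (windows, chunk) layout the per-window statistics run along axis=1):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def sliding_windows(data, chunk):
    """View `data` as overlapping windows of length `chunk`, no copying.

    The result has shape (len(data) - chunk + 1, chunk) and shares
    memory with `data`, so it should not be written to.
    """
    data = np.ascontiguousarray(data)
    n = data.shape[0] - chunk + 1
    step = data.strides[0]
    return as_strided(data, shape=(n, chunk), strides=(step, step))

data = np.arange(10000) / 10000.0
chunks = sliding_windows(data, 10)
means = chunks.mean(axis=1)   # moving average
stds = chunks.std(axis=1)     # moving standard deviation
```

Recent NumPy releases ship this as np.lib.stride_tricks.sliding_window_view, which is the safer choice where available.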
I'm more interested in the general strategy for doing these kinds > > > of calculations than these specific examples themselves necessarily. Although > > > they are useful for making pretty charts sometimes. > > > > Fancy indexing will do the job, more or less. > > > > This is the general idea I was trying to get at: > > > > chunk = 10 > > data = arange(10000)/10000. > > > > indices = arange(len(data)-chunk+1)[:,newaxis]+arange(chunk)[newaxis,:] > > > > data_chunks = data[indices] > > > > average(data_chunks,axis=0) > > var(data_chunks,axis=0) > > and so on... > > > > The only problem is that it expands your data rather a lot. Personally > > I wouldn't worry about that until my application was written. I > > posted, some time ago, a function that would allow the easy > > construction of data_chunks as a view of the original array, that is, > > with no data copying. (Sorry, I don't remember how to find the > > thread.) > > http://thread.gmane.org/gmane.comp.python.scientific.user/9570/focus=9579 Or even better: http://thread.gmane.org/gmane.comp.python.numeric.general/12365/focus=12367 Cheers Stéfan From s.mientki at ru.nl Fri Dec 29 09:43:40 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 29 Dec 2006 15:43:40 +0100 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? Message-ID: <4595299C.9040008@ru.nl> Does anyone know the equivalent of the MatLab "diff" function. The "diff" functions calculates the difference between 2 succeeding elements of an array. I need to detect (fast) the falling edge of a binary signal. There's derivate function in Python diff, but when you use an binary (true/false) input, it also detects rising edges. Probably a stupid question, but I still have troubles, digging to huge amount of information about Python.
thanks, Stef Mientki From otto at tronarp.se Fri Dec 29 11:26:06 2006 From: otto at tronarp.se (Otto Tronarp) Date: Fri, 29 Dec 2006 17:26:06 +0100 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? In-Reply-To: <4595299C.9040008@ru.nl> References: <4595299C.9040008@ru.nl> Message-ID: <20061229172606.hfhh08bveso8skso@mathcore.kicks-ass.org> Quoting Stef Mientki : > Does anyone know the equivalent of the MatLab "diff" function. > The "diff" functions calculates the difference between 2 succeeding > elements of an array. Then it is the numpy.diff function you're looking for. In [19]:import numpy In [20]:numpy.diff([1,2,4,1]) Out[20]:array([ 1, 2, -3]) Otto From tom.denniston at alum.dartmouth.org Fri Dec 29 10:35:08 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Fri, 29 Dec 2006 09:35:08 -0600 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? In-Reply-To: <4595299C.9040008@ru.nl> References: <4595299C.9040008@ru.nl> Message-ID: I may be misunderstanding you but isn't what you want just: arr[1:]-arr[:-1] If you only want positive differences it's: (arr[1:]-arr[:-1]) > 0 or negative: (arr[1:]-arr[:-1]) < 0 These should be very fast. --Tom On 12/29/06, Stef Mientki wrote: > Does anyone know the equivalent of the MatLab "diff" function. > The "diff" functions calculates the difference between 2 succeeding > elements of an array. > I need to detect (fast) the falling edge of a binary signal. > > There's derivate function in Python diff, > but when you use an binary (true/false) input, > it also detects rising edges. > > Probably a stupid question, > but I still have troubles, > digging to huge amount of information about Python.
> > thanks, > Stef Mientki > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From tom.denniston at alum.dartmouth.org Fri Dec 29 10:36:09 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Fri, 29 Dec 2006 09:36:09 -0600 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? In-Reply-To: References: <4595299C.9040008@ru.nl> Message-ID: Ignore, mine. Otto's answer is better. I didn't see it. On 12/29/06, Tom Denniston wrote: > I may be misunderstanding you but isn't what you want just: > > arr[1:]-arr[:-1] > > If you only want positive differences it's: > > (arr[1:]-arr[:-1]) > 0 > > or negative: > > (arr[1:]-arr[:-1]) < 0 > > These should be very fast. > > --Tom > > > On 12/29/06, Stef Mientki wrote: > > Does anyone know the equivalent of the MatLab "diff" function. > > The "diff" functions calculates the difference between 2 succeeding > > elements of an array. > > I need to detect (fast) the falling edge of a binary signal. > > > > There's derivate function in Python diff, > > but when you use an binary (true/false) input, > > it also detects rising edges. > > > > Probably a stupid question, > > but I still have troubles, > > digging to huge amount of information about Python. > > > > thanks, > > Stef Mientki > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From lev at columbia.edu Fri Dec 29 10:41:33 2006 From: lev at columbia.edu (Lev Givon) Date: Fri, 29 Dec 2006 10:41:33 -0500 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? 
In-Reply-To: <20061229172606.hfhh08bveso8skso@mathcore.kicks-ass.org> References: <4595299C.9040008@ru.nl> <20061229172606.hfhh08bveso8skso@mathcore.kicks-ass.org> Message-ID: <20061229154133.GB9125@avicenna.cc.columbia.edu> Received from Otto Tronarp on Fri, Dec 29, 2006 at 11:26:06AM EST: > Quoting Stef Mientki : > > > Does anyone know the equivalent of the MatLab "diff" function. > > The "diff" functions calculates the difference between 2 succeeding > > elements of an array. > > Then it is the numpy.diff function you're looking for. > > In [19]:import numpy > In [20]:numpy.diff([1,2,4,1]) > Out[20]:array([ 1, 2, -3]) > > Otto Moreover, if you want to ignore the rising edges, you can use something like this:

da = diff(a)
where(da>=0, zeros(shape(da), dtype=int), -ones(shape(da), dtype=int))

L.G. From s.mientki at ru.nl Fri Dec 29 11:37:32 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 29 Dec 2006 17:37:32 +0100 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? In-Reply-To: References: <4595299C.9040008@ru.nl> Message-ID: <4595444C.2020503@ru.nl> thanks Tom, Otto,Lev, for the fast respons, but the problem still exists (btw I'm fairly newbie, for those didn't discovered that ;-). - I import SciPy - I do some calculations to find specific events, and the last calculation is BP_detect = ( dBP_dt > 0 ) & ( IOO > 40000 ) - now this is really great (compared to MatLab), because the resulting array is a real boolean array, so it only consists of True and False and presumably only takes 8 bits per value ??? - now if I run some standard diff formule, I'll detect both the edges, I expect that is due to the True/False elements of the array, probably they are internally translated to +1 and -1 ???
- even this gives both edges BP_peaks = numpy.diff ( BP_detect) > 0 - in the meanwhile I found a algorithm that works, but I find it very ugly ( I process the data in chunks, therefor I use and save the last sample "BP_peak_prev") BP_peak = logical_and( concatenate(( ([BP_peak_prev]), BP_detect[:-1])), logical_not(BP_detect) ) BP_peak_prev = BP_detect[-1] Any Ideas ? thanks, Stef Mientki Tom Denniston wrote: > Ignore, mine. Otto's answer is better. I didn't see it. > > On 12/29/06, Tom Denniston wrote: > >> I may be misunderstanding you but isn't what you want just: >> >> arr[1:]-arr[:-1] >> >> If you only want positive differences it's: >> >> (arr[1:]-arr[:-1]) > 0 >> >> or negative: >> >> (arr[1:]-arr[:-1]) < 0 >> >> These should be very fast. >> >> --Tom >> >> >> On 12/29/06, Stef Mientki wrote: >> >>> Does anyone know the equivalent of the MatLab "diff" function. >>> The "diff" functions calculates the difference between 2 succeeding >>> elements of an array. >>> I need to detect (fast) the falling edge of a binary signal. >>> >>> There's derivate function in Python diff, >>> but when you use an binary (true/false) input, >>> it also detects rising edges. >>> >>> Probably a stupid question, >>> but I still have troubles, >>> digging to huge amount of information about Python. >>> >>> thanks, >>> Stef Mientki >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From tom.denniston at alum.dartmouth.org Fri Dec 29 12:26:16 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Fri, 29 Dec 2006 11:26:16 -0600 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? 
In-Reply-To: <4595444C.2020503@ru.nl> References: <4595299C.9040008@ru.nl> <4595444C.2020503@ru.nl> Message-ID: It would help if you could give a concrete example with a small set of data as below to demonstrate the problem: In [4]: numpy.diff(numpy.array([1,2,3,4,7])) Out[4]: array([1, 1, 1, 3]) In [5]: numpy.diff(numpy.array([1,2,1,3,2,4,7])) Out[5]: array([ 1, -1, 2, -1, 2, 3]) In [6]: numpy.diff(numpy.array([1,2,1,3,2,4,7]))>0 Out[6]: array([True, False, True, False, True, True], dtype=bool) On 12/29/06, Stef Mientki wrote: > thanks Tom, Otto,Lev, > for the fast respons, > but the problem still exists (btw I'm fairly newbie, for those didn't > discovered that ;-). > > - I import SciPy > > - I do some calculations to find specific events, and the last > calculation is > BP_detect = ( dBP_dt > 0 ) & ( IOO > 40000 ) > > - now this is really great (compared to MatLab), because the resulting > array is a real boolean array, > so it only consists of True and False and presumably only takes 8 bits > per value ??? > > - now if I run some standard diff formule, I'll detect both the edges, > I expect that is due to the True/False elements of the array, > probably they are internally translated to +1 and -1 ??? > > - even this gives both edges > BP_peaks = numpy.diff ( BP_detect) > 0 > > - in the meanwhile I found a algorithm that works, but I find it very ugly > ( I process the data in chunks, therefor I use and save the last sample > "BP_peak_prev") > > BP_peak = logical_and( concatenate(( ([BP_peak_prev]), > BP_detect[:-1])), logical_not(BP_detect) ) > BP_peak_prev = BP_detect[-1] > > Any Ideas ? > thanks, > Stef Mientki > > > Tom Denniston wrote: > > Ignore, mine. Otto's answer is better. I didn't see it. 
> > > > On 12/29/06, Tom Denniston wrote: > > > >> I may be misunderstanding you but isn't what you want just: > >> > >> arr[1:]-arr[:-1] > >> > >> If you only want positive differences it's: > >> > >> (arr[1:]-arr[:-1]) > 0 > >> > >> or negative: > >> > >> (arr[1:]-arr[:-1]) < 0 > >> > >> These should be very fast. > >> > >> --Tom > >> > >> > >> On 12/29/06, Stef Mientki wrote: > >> > >>> Does anyone know the equivalent of the MatLab "diff" function. > >>> The "diff" functions calculates the difference between 2 succeeding > >>> elements of an array. > >>> I need to detect (fast) the falling edge of a binary signal. > >>> > >>> There's derivate function in Python diff, > >>> but when you use an binary (true/false) input, > >>> it also detects rising edges. > >>> > >>> Probably a stupid question, > >>> but I still have troubles, > >>> digging to huge amount of information about Python. > >>> > >>> thanks, > >>> Stef Mientki > >>> _______________________________________________ > >>> SciPy-user mailing list > >>> SciPy-user at scipy.org > >>> http://projects.scipy.org/mailman/listinfo/scipy-user > >>> > >>> > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From s.mientki at ru.nl Fri Dec 29 13:22:57 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 29 Dec 2006 19:22:57 +0100 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? 
In-Reply-To: References: <4595299C.9040008@ru.nl> <4595444C.2020503@ru.nl> Message-ID: <45955D01.2080109@ru.nl> hi Tom, Tom Denniston wrote: > It would help if you could give a concrete example with a small set of > data as below to demonstrate the problem: > >

>>> # the original array
>>> a = array([True,True,False,False,True,True])
>>> B = numpy.diff(a)
>>> B
array([False, True, False, True, False], dtype=bool)
>>> # now elements B[1] and B[3] are true, so both edges
>>> # that doesn't seem illogical, if we try to subtract booleans
>>> True - False
1
>>> False - True
-1

And now I guess that both -1 and +1 are translated into True, and can't be distinguished anymore :-(

>>> # and now also this doesn't work: again 2 edges
>>> a[1:] - a[:-1]
array([False, True, False, True, False], dtype=bool)
>>> a[1:] - a[:-1] > 0
array([False, True, False, True, False], dtype=bool)
>>> # So the first thing that worked:
>>> a[1:] & ~(a[:-1])
array([False, False, False, True, False], dtype=bool)

And that was (besides the history element) the solution I posted in the previous mail. Due to some typing error, I thought my editor didn't accept the "&" and "~", and therefore I used the longer logical_and, logical_not. Now I can live quite well with this, but as a newbie I want to know whether that's the correct way, or are there better ways? thanks, Stef From tom.denniston at alum.dartmouth.org Fri Dec 29 13:58:36 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Fri, 29 Dec 2006 12:58:36 -0600 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? In-Reply-To: <45955D01.2080109@ru.nl> References: <4595299C.9040008@ru.nl> <4595444C.2020503@ru.nl> <45955D01.2080109@ru.nl> Message-ID: Does this help?
In [5]: numpy.diff(numpy.array([True,True,False,False,True,True], dtype=int)) Out[5]: array([ 0, -1, 0, 1, 0]) or if a is already defined: In [7]: numpy.diff(numpy.array([True,True,False,False,True,True], dtype=int)) Out[7]: array([ 0, -1, 0, 1, 0]) In [8]: a=numpy.array([True,True,False,False,True,True]) In [9]: a Out[9]: array([True, True, False, False, True, True], dtype=bool) In [10]: a.astype(int) Out[10]: array([1, 1, 0, 0, 1, 1]) In [11]: numpy.diff(a.astype(int)) Out[11]: array([ 0, -1, 0, 1, 0]) On 12/29/06, Stef Mientki wrote: > hi Tom, > > Tom Denniston wrote: > > It would help if you could give a concrete example with a small set of > > data as below to demonstrate the problem: > > > > > >>> # the original array > >>> a = array([True,True,False,False,True,True]) > > >>> B = numpy.diff(a) > >>> B > array([False, True, False, True, False], dtype=bool) > >>> # now elements B[1] and B[3] are true, so both edges > > >>> # that doesn't seem illogical, if we try to subtract booleans > >>> True - False > 1 > >>> False - True > -1 > > And now I guess that both -1 and +1 are translated into True, > and can't be distinguished anymore :-( > > >>> # and now also this doesn't work: again 2 edges > >>> a[1:] - a[:-1] > array([False, True, False, True, False], dtype=bool) > >>> a[1:] - a[:-1] > 0 > array([False, True, False, True, False], dtype=bool) > > >>> # So the first thing that worked: > >>> a[1:] & ~(a[:-1]) > array([False, False, False, True, False], dtype=bool) > > And that was (besides the history element) the solution I posted in the > previous mail. > Due to some typing error, I thought my editor didn't accept the "&" and > "~", > and therefore I used the longer logical_and, logical_not. > > Now I can live quite well with this, > but as a newbie I want to know whether that's the correct way, > or are there better ways?
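Pulling the thread's two working recipes together, boolean masking and casting to int before diffing give the same edge masks; a small sketch (the sample array is illustrative):

```python
import numpy as np

a = np.array([True, True, False, False, True, True])

# Boolean-operator form: compare each sample with its predecessor.
rising = a[1:] & ~a[:-1]     # False -> True transitions
falling = ~a[1:] & a[:-1]    # True -> False transitions (the falling edge)

# Equivalent integer form: cast first so diff keeps the sign.
d = np.diff(a.astype(int))   # -> [ 0, -1, 0, 1, 0]
rising_d = d == 1
falling_d = d == -1
```

Both masks mark the sample just after the transition; diffing the raw boolean array instead collapses +1 and -1 to True, which is exactly the both-edges behavior seen above.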
> > thanks, > Stef > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From s.mientki at ru.nl Fri Dec 29 14:30:53 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 29 Dec 2006 20:30:53 +0100 Subject: [SciPy-user] probably a stupid question: MatLab equivalent of "diff" ? In-Reply-To: References: <4595299C.9040008@ru.nl> <4595444C.2020503@ru.nl> <45955D01.2080109@ru.nl> Message-ID: <45956CED.7070608@ru.nl> Tom Denniston wrote: > Does this help? > > thanks Tom, I'll try the different options later this evening to see if there's a significant speed difference, cheers, Stef From s.mientki at ru.nl Fri Dec 29 17:14:06 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 29 Dec 2006 23:14:06 +0100 Subject: [SciPy-user] Wow, Python much faster than MatLab Message-ID: <4595932E.4060304@ru.nl> hi All, instead of questions, my first success story: I converted my first MatLab algorithm into Python (using SciPy), and it not only works perfectly, but also runs much faster: MatLab: 14 msec Python: 2 msec After taking the first difficult steps into Python, all kinds of small problems as you already know, it now seems a piece of cake to convert from MatLab to Python.
(the final programs of MatLab and Python can almost only be distinguished by the comment character ;-) Especially I like: - more relaxed behavior when exceeding the upper limit of a (1-dimensional) array - many more functions available, like a simple "mean" - reducing datatype if it's allowed (booleans of 1 byte) thanks for all your help, probably need some more in the future, cheers, Stef Mientki From lanceboyle at qwest.net Fri Dec 29 18:41:11 2006 From: lanceboyle at qwest.net (Jerry) Date: Fri, 29 Dec 2006 16:41:11 -0700 Subject: [SciPy-user] Wow, Python much faster than MatLab In-Reply-To: <4595932E.4060304@ru.nl> References: <4595932E.4060304@ru.nl> Message-ID: <41BC656A-56EB-4F14-A10F-D393CBDA90DE@qwest.net> On Dec 29, 2006, at 3:14 PM, Stef Mientki wrote: > hi All, > > instead of questions, > my first success story: > > I converted my first MatLab algorithm into Python (using SciPy), > and it not only works perfectly, > but also runs much faster: > > MatLab: 14 msec > Python: 2 msec > > After taking the first difficult steps into Python, > all kinds of small problems as you already know, > it now seems a piece of cake to convert from MatLab to Python. > (the final programs of MatLab and Python can almost only be > distinguished by the comment character ;-) > > Especially I like: > - more relaxed behavior when exceeding the upper limit of a > (1-dimensional) array What do you like about that? > - many more functions available, like a simple "mean" > - reducing datatype if it's allowed (booleans of 1 byte) > > thanks for all your help, > probably need some more in the future, > cheers, > Stef Mientki From joishi at amnh.org Fri Dec 29 19:30:34 2006 From: joishi at amnh.org (J Oishi) Date: Fri, 29 Dec 2006 19:30:34 -0500 Subject: [SciPy-user] missing fopen?
Message-ID: <4F9148C6-50A4-4423-B00C-0D5C7C613E34@amnh.org> Hi, I am just getting started with scipy (using 0.5.2), and I need to read large amounts of fortran77 unformatted data that comes out of our simulation code. Searching the web, it appears there used to be a class called fopen which had some fort_read methods which would do what I need. However, the source for this info is several years old. If I do a import scipy.io help(scipy.io) I get a hopeful message about fopen, but it doesn't seem to exist: In [2]: test = scipy.io.fopen('test.dat') ------------------------------------------------------------------------ --- exceptions.AttributeError Traceback (most recent call last) /Users/joishi/ AttributeError: 'module' object has no attribute 'fopen' Additionally, grepping through Lib/io/numpyiomodule.c seems to reveal nothing about fopen. Has it been expunged from scipy? If so, is there any way to read f77 unformatted binary? thanks, j From bthom at cs.hmc.edu Fri Dec 29 23:33:49 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Fri, 29 Dec 2006 20:33:49 -0800 Subject: [SciPy-user] C API issue Message-ID: <5E0E1F81-DD08-49CC-99A3-EA521D4B09FE@cs.hmc.edu> Hi, I'm beginning to play around w/scipy, and have some problems when doing "from scipy import *" that don't occur when just doing "import scipy" (appended below). Is there an explanation for this behavior? The version of scipy I'm using came from http://pythonmac.org/ packages/py24-fat/index.html. Thanks, --b 8 % python Python 2.4.4 (#1, Oct 18 2006, 10:34:39) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. history mechanism set up >>> from scipy import * RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/__init__.py", line 8, in ? 
from numpyio import packbits, unpackbits, bswap, fread, fwrite, \ ImportError: numpy.core.multiarray failed to import >>> 9 % python Python 2.4.4 (#1, Oct 18 2006, 10:34:39) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. history mechanism set up >>> import scipy >>> from scipy import * RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/__init__.py", line 8, in ? from numpyio import packbits, unpackbits, bswap, fread, fwrite, \ ImportError: numpy.core.multiarray failed to import From s.mientki at ru.nl Sat Dec 30 05:44:03 2006 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 30 Dec 2006 11:44:03 +0100 Subject: [SciPy-user] Wow, Python much faster than MatLab In-Reply-To: <41BC656A-56EB-4F14-A10F-D393CBDA90DE@qwest.net> References: <4595932E.4060304@ru.nl> <41BC656A-56EB-4F14-A10F-D393CBDA90DE@qwest.net> Message-ID: <459642F3.1050903@ru.nl> >> Especially I like: >> - more relaxed behavior when exceeding the upper limit of a >> (1-dimensional) array >> > > What do you like about that? > > Well, I have to admit that wasn't a very tactful remark, "noise" is still an unwanted issue in software. But in the meantime I've been reading further and I should replace that by some other great things: - the very efficient way comments are turned into help information - the (at first sight) very easy, but yet quite powerful OOP implementation. cheers, Stef Mientki From bthom at cs.hmc.edu Sat Dec 30 14:32:46 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 11:32:46 -0800 Subject: [SciPy-user] mac os x python 2.4 superpak issues Message-ID: <00430830-4637-4EF1-B23C-9C2DF2AACA1B@cs.hmc.edu> Hi, I have been trying to determine the best way to install ipython in the scipy context for the mac.
On the http://www.scipy.org/Download website I obtained "SciPy Superpack for Python 2.4 (Power PC)" because I liked the fact that everything came bundled together (and delivers receipts). I've run into a problem though: nowhere can I find an "ipython" executable. In addition, the "/doc" files listed in the "ipython-data" bom are also not present anywhere on my computer (the contents of the 2 bom files related to the ipython mkpg are appended) Have others been able to successfully use this method? Any idea what is wrong? Thanks, --b contents of ipython-data-0.7.2-py2.4-macosx10.4.pkg/Contents/Archive.bom ----------------- /doc /doc/ipython-0.7.2 /doc/ipython-0.7.2/COPYING /doc/ipython-0.7.2/ChangeLog /doc/ipython-0.7.2/README.txt /doc/ipython-0.7.2/README_Windows.txt /doc/ipython-0.7.2/build_doc_instruction.txt /doc/ipython-0.7.2/examples /doc/ipython-0.7.2/examples/example-demo.py /doc/ipython-0.7.2/examples/example-embed-short.py /doc/ipython-0.7.2/examples/example-embed.py /doc/ipython-0.7.2/examples/example-gnuplot.py /doc/ipython-0.7.2/examples/example-magic.py /doc/ipython-0.7.2/examples/magic_grepl.py /doc/ipython-0.7.2/ipnb_google_soc.lyx /doc/ipython-0.7.2/ipython.1 /doc/ipython-0.7.2/ipython.el /doc/ipython-0.7.2/magic.tex /doc/ipython-0.7.2/manual /doc/ipython-0.7.2/manual.lyx /doc/ipython-0.7.2/manual_base.lyx /doc/ipython-0.7.2/nbexample.py /doc/ipython-0.7.2/nbexample_latex.py /doc/ipython-0.7.2/nbexample_output.py /doc/ipython-0.7.2/new_design.lyx /doc/ipython-0.7.2/pycolor.1 /doc/ipython-0.7.2/pycon.ico /doc/ipython-0.7.2/update_manual.py /man /man/man1 /man/man1/ipython.1.gz /man/man1/pycolor.1.gz contents of ipython-purelib-0.7.2-py2.4-macosx10.4.pkg/Contents/Archive.bom ------------------ /ColorANSI.py /ColorANSI.pyc /ColorANSI.pyo /ConfigLoader.py /ConfigLoader.pyc /ConfigLoader.pyo /CrashHandler.py /CrashHandler.pyc /CrashHandler.pyo /DPyGetOpt.py /DPyGetOpt.pyc /DPyGetOpt.pyo /Debugger.py /Debugger.pyc /Debugger.pyo /Extensions 
/Extensions/InterpreterExec.py /Extensions/InterpreterExec.pyc /Extensions/InterpreterExec.pyo /Extensions/InterpreterPasteInput.py /Extensions/InterpreterPasteInput.pyc /Extensions/InterpreterPasteInput.pyo /Extensions/PhysicalQInput.py /Extensions/PhysicalQInput.pyc /Extensions/PhysicalQInput.pyo /Extensions/PhysicalQInteractive.py /Extensions/PhysicalQInteractive.pyc /Extensions/PhysicalQInteractive.pyo /Extensions/__init__.py /Extensions/__init__.pyc /Extensions/__init__.pyo /Extensions/astyle.py /Extensions/astyle.pyc /Extensions/astyle.pyo /Extensions/clearcmd.py /Extensions/clearcmd.pyc /Extensions/clearcmd.pyo /Extensions/ext_rehashdir.py /Extensions/ext_rehashdir.pyc /Extensions/ext_rehashdir.pyo /Extensions/ext_rescapture.py /Extensions/ext_rescapture.pyc /Extensions/ext_rescapture.pyo /Extensions/ibrowse.py /Extensions/ibrowse.pyc /Extensions/ibrowse.pyo /Extensions/ipipe.py /Extensions/ipipe.pyc /Extensions/ipipe.pyo /Extensions/ipy_defaults.py /Extensions/ipy_defaults.pyc /Extensions/ipy_defaults.pyo /Extensions/ipy_system_conf.py /Extensions/ipy_system_conf.pyc /Extensions/ipy_system_conf.pyo /Extensions/numeric_formats.py /Extensions/numeric_formats.pyc /Extensions/numeric_formats.pyo /Extensions/path.py /Extensions/path.pyc /Extensions/path.pyo /Extensions/pickleshare.py /Extensions/pickleshare.pyc /Extensions/pickleshare.pyo /Extensions/pspersistence.py /Extensions/pspersistence.pyc /Extensions/pspersistence.pyo /Extensions/win32clip.py /Extensions/win32clip.pyc /Extensions/win32clip.pyo /FakeModule.py /FakeModule.pyc /FakeModule.pyo /Gnuplot2.py /Gnuplot2.pyc /Gnuplot2.pyo /GnuplotInteractive.py /GnuplotInteractive.pyc /GnuplotInteractive.pyo /GnuplotRuntime.py /GnuplotRuntime.pyc /GnuplotRuntime.pyo /Itpl.py /Itpl.pyc /Itpl.pyo /Logger.py /Logger.pyc /Logger.pyo /Magic.py /Magic.pyc /Magic.pyo /OInspect.py /OInspect.pyc /OInspect.pyo /OutputTrap.py /OutputTrap.pyc /OutputTrap.pyo /Prompts.py /Prompts.pyc /Prompts.pyo /PyColorize.py 
/PyColorize.pyc /PyColorize.pyo /Release.py /Release.pyc /Release.pyo /Shell.py /Shell.pyc /Shell.pyo /UserConfig /UserConfig/ipy_profile_sh.py /UserConfig/ipy_profile_sh.pyc /UserConfig/ipy_profile_sh.pyo /UserConfig/ipy_user_conf.py /UserConfig/ipy_user_conf.pyc /UserConfig/ipy_user_conf.pyo /UserConfig/ipythonrc /UserConfig/ipythonrc-math /UserConfig/ipythonrc-numeric /UserConfig/ipythonrc-physics /UserConfig/ipythonrc-pysh /UserConfig/ipythonrc-scipy /UserConfig/ipythonrc-tutorial /__init__.py /__init__.pyc /__init__.pyo /background_jobs.py /background_jobs.pyc /background_jobs.pyo /completer.py /completer.pyc /completer.pyo /deep_reload.py /deep_reload.pyc /deep_reload.pyo /demo.py /demo.pyc /demo.pyo /excolors.py /excolors.pyc /excolors.pyo /genutils.py /genutils.pyc /genutils.pyo /hooks.py /hooks.pyc /hooks.pyo /ipapi.py /ipapi.pyc /ipapi.pyo /iplib.py /iplib.pyc /iplib.pyo /ipmaker.py /ipmaker.pyc /ipmaker.pyo /ipstruct.py /ipstruct.pyc /ipstruct.pyo /irunner.py /irunner.pyc /irunner.pyo /macro.py /macro.pyc /macro.pyo /numutils.py /numutils.pyc /numutils.pyo /platutils.py /platutils.pyc /platutils.pyo /platutils_dummy.py /platutils_dummy.pyc /platutils_dummy.pyo /platutils_posix.py /platutils_posix.pyc /platutils_posix.pyo /platutils_win32.py /platutils_win32.pyc /platutils_win32.pyo /rlineimpl.py /rlineimpl.pyc /rlineimpl.pyo /ultraTB.py /ultraTB.pyc /ultraTB.pyo /upgrade_dir.py /upgrade_dir.pyc /upgrade_dir.pyo /usage.py /usage.pyc /usage.pyo /wildcard.py /wildcard.pyc /wildcard.pyo /winconsole.py /winconsole.pyc /winconsole.pyo bthom at tweedledum.local [11:07am] ~/ rcpts 15 % From bthom at cs.hmc.edu Sat Dec 30 14:51:08 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 11:51:08 -0800 Subject: [SciPy-user] additional superpak problems Message-ID: <8BEA1EBE-59F7-4100-8C16-758C4C2B1FEB@cs.hmc.edu> Again, regarding SciPy Superpack for Python 2.4 (Power PC), PyMC-1.1- py2.4-macosx10.4.mpkg and scipy-0.5.2.dev2095-py2.4-macosx10.4.mpkg are 
problematic. For example, after installing all the superpack packages, behavior like the following results. A similar error occurs when doing scipy.test(). Only numpy's test worked w/no errors. It seems to me that superpacks with these kinds of problems should not be listed as what to download at the Scipy download site. 22 % python Python 2.4.4 (#1, Oct 18 2006, 10:34:39) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. history mechanism set up >>> import PyMC Matplotlib module not detected ... plotting disabled. Matplotlib module not detected ... plotting disabled. Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/PyMC/__init__.py", line 19, in ? exec "from %s import *" % mod File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/PyMC/MCMC.py", line 37, in ? from flib import categor as _categorical ImportError: Failure linking new module: /Library/Frameworks/ Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/ flib.so: Library not loaded: /usr/local/lib/libg2c.0.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ lib/python2.4/site-packages/PyMC/flib.so Reason: image not found From v-nijs at kellogg.northwestern.edu Sat Dec 30 15:19:05 2006 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Sat, 30 Dec 2006 14:19:05 -0600 Subject: [SciPy-user] additional superpak problems In-Reply-To: <8BEA1EBE-59F7-4100-8C16-758C4C2B1FEB@cs.hmc.edu> Message-ID: --b, I had the same problem with ipython on OS X 10.4. I installed it from http://ipython.scipy.org/moin/Download and then everything worked fine, including 'import PyMC'. I do get the warnings below from scipy.test(). However, I have not had any problems running scipy, so I assume these are not problematic for speed or functionality.
Best, Vincent Warning: FAILURE importing tests for /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-package s/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Warning: FAILURE importing tests for /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-package s/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ....................................../Library/Frameworks/Python.framework/V ersions/2.4/lib/python2.4/site-packages/scipy/interpolate/fitpack2.py:457: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ............................................................................ ........Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. On 12/30/06 1:51 PM, "belinda thom" wrote: > Again, regarding SciPy Superpack for Python 2.4 (Power PC), PyMC-1.1- > py2.4-macosx10.4.mpkg and scipy-0.5.2.dev2095-py2.4-macosx10.4.mpkg > are problematic. > > For example, after installing all the superpack packages, behavior > like the following results. A similar error occurs when doing > spicy.test(). Only numpy's test worked w/no errors. > > It seems to me that superpaks with these kind of problems should not > be listed as what to download at the Scipy download site. 
> > 22 % python > Python 2.4.4 (#1, Oct 18 2006, 10:34:39) > [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > history mechanism set up >>>> import PyMC > Matplotlib module not detected ... plotting disabled. > Matplotlib module not detected ... plotting disabled. > Traceback (most recent call last): > File "", line 1, in ? > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/PyMC/__init__.py", line 19, in ? > exec "from %s import *" % mod > File "", line 1, in ? > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/PyMC/MCMC.py", line 37, in ? > from flib import categor as _categorical > ImportError: Failure linking new module: /Library/Frameworks/ > Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/ > flib.so: Library not loaded: /usr/local/lib/libg2c.0.dylib > Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ > lib/python2.4/site-packages/PyMC/flib.so > Reason: image not found > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bthom at cs.hmc.edu Sat Dec 30 15:26:57 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 12:26:57 -0800 Subject: [SciPy-user] additional superpak problems In-Reply-To: References: Message-ID: Vincent, Thanks for your reply. Just to clarify: On Dec 30, 2006, at 12:19 PM, Vincent Nijs wrote: > --b, > > I had the same problem with ipython on OS X 10.4. I installed it from > http://ipython.scipy.org/moin/Download and then everything worked > fine, > including 'import PyMC'. Are you using a Mac? Can you please elaborate on "I installed it...and everything worked fine". One of my main points of confusion is why importing PyMC would work as a result of reinstalling iPython.
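[Editorial note: the ImportError quoted above means the dynamic linker could not find libg2c when loading PyMC's flib.so. As a generic cross-platform sketch (not from this thread; the library names are illustrative), Python's ctypes.util can report whether a shared library is resolvable on the current system before blaming the importing package:]

```python
from ctypes.util import find_library

# find_library consults the platform's usual lookup machinery
# (ldconfig on Linux, the dyld search paths on OS X).  The names
# below are illustrative -- 'g2c' is the g77 runtime that PyMC's
# flib.so links against; 'm' is the standard math library.
for name in ("m", "g2c"):
    path = find_library(name)
    if path is None:
        print("lib%s: not found -- extensions linked against it will fail" % name)
    else:
        print("lib%s resolved to %s" % (name, path))
```

[A None result for a library corresponds to the "image not found" failure in the traceback above.]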
Thanks, --b From v-nijs at kellogg.northwestern.edu Sat Dec 30 15:54:33 2006 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Sat, 30 Dec 2006 14:54:33 -0600 Subject: [SciPy-user] additional superpak problems In-Reply-To: Message-ID: --b, Sorry ... I installed everything in the super-pack, including Ipython. Ipython didn't work however (no executable). I could already import PyMC after installing it from the superpack. Installing Ipython from source worked fine. I use OS X 10.4 (ppc, G4 and a G5). Vincent On 12/30/06 2:26 PM, "belinda thom" wrote: > Vincent, > > Thanks for your reply. Just to clarify: > > On Dec 30, 2006, at 12:19 PM, Vincent Nijs wrote: > >> --b, >> >> I had the same problem with ipython on OS X 10.4. I installed it from >> http://ipython.scipy.org/moin/Download and then everything worked >> fine, >> including 'import PyMC'. > > Are you using a Mac? > > Can you please elaborate on "I installed it...and everything worked > fine". One of my main points of confusion is why importing PyMC would > as a result of (?) reinstalling iPython. > > Thanks, > > --b > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bthom at cs.hmc.edu Sat Dec 30 15:59:22 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 12:59:22 -0800 Subject: [SciPy-user] additional superpak problems In-Reply-To: References: Message-ID: <8DAE80C5-4F3E-464D-86A2-78C65A2D5E1E@cs.hmc.edu> On Dec 30, 2006, at 12:54 PM, Vincent Nijs wrote: > --b, > > Sorry ... I installed everything in the super-pack, including Ipython. > Ipython didn't work however (no executable). I could already import > PyMC > after installing it from the superpack. Hmmm...I didn't get such luck w/PyMC (or scipy). What python have you got installed? > Installing Ipython from source worked fine. > > I use OS X 10.4 (ppc, G4 and a G5). 10.4.? 
So to summarize, the superpak worked for you, except for not creating the ipython executable. Wish I had such luck. Thanks --b From v-nijs at kellogg.northwestern.edu Sat Dec 30 16:24:32 2006 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Sat, 30 Dec 2006 15:24:32 -0600 Subject: [SciPy-user] additional superpak problems In-Reply-To: <8DAE80C5-4F3E-464D-86A2-78C65A2D5E1E@cs.hmc.edu> Message-ID: OS X 10.4.8 MacPython2.4: http://www.pythonmac.org/packages/py24-fat/dmg/python-2.4.4-macosx2006-10-18.dmg Your summary is correct. Vincent On 12/30/06 2:59 PM, "belinda thom" wrote: > > On Dec 30, 2006, at 12:54 PM, Vincent Nijs wrote: > >> --b, >> >> Sorry ... I installed everything in the super-pack, including Ipython. >> Ipython didn't work however (no executable). I could already import >> PyMC >> after installing it from the superpack. > > Hmmm...I didn't get such luck w/PyMC (or scipy). What python have you > got installed? > >> Installing Ipython from source worked fine. >> >> I use OS X 10.4 (ppc, G4 and a G5). > > 10.4.? > > So to summarize, the superpak worked for you, except for not creating > the ipython executable. Wish I had such luck. > > Thanks > > --b > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bthom at cs.hmc.edu Sat Dec 30 18:56:04 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 15:56:04 -0800 Subject: [SciPy-user] additional superpak problems In-Reply-To: References: Message-ID: <4C056441-98BB-43DA-A59D-5A9864DA96D4@cs.hmc.edu> Vincent, I am now receiving only the errors you reported. Here's the fix (taken from http://www.scipy.org/Installing_SciPy/Mac_OS_X): ------------ for Power PC machines A binary of g77 (version 3.4) is available from Gaurav Khanna's High Performance Computing for Mac OSX site (Download here).
Then gunzip g77v3.4-bin.tar.gz (if your browser didn't do so already) and: sudo tar -xvf g77v3.4-bin.tar -C / ------------ the following appeared: ------------ 69 % ll -t /usr/local/lib total 14216 lrwxrwxrwx 1 root wheel 18 Dec 30 15:29 libg2c.0.dylib@ -> libg2c.0.0.0.dylib lrwxrwxrwx 1 root wheel 18 Dec 30 15:29 libg2c.dylib@ -> libg2c.0.0.0.dylib lrwxrwxrwx 1 root wheel 18 Dec 30 15:29 libgcc_s.dylib@ -> libgcc_s.1.0.dylib I then ran the scipy, numpy, and matplotlib superpak packages and was able to run numpy.test() and scipy.test(). It appears this g77 compiler must be installed so that the libraries scipy needs exist. Would it be possible to ensure that this was the case inside of the superpack itself? If so, it might save others the pain and agony I went through. --b From m.starnes at imperial.ac.uk Sat Dec 30 19:06:33 2006 From: m.starnes at imperial.ac.uk (Mark Starnes) Date: Sun, 31 Dec 2006 00:06:33 +0000 Subject: [SciPy-user] csc, csr sparse complex matrix problems. Possibly memory issues. In-Reply-To: <4581355A.2020806@imperial.ac.uk> References: <457E91FD.2000302@iam.uni-stuttgart.de> <4580C072.3060608@imperial.ac.uk> <458124BC.6080508@ntc.zcu.cz> <4581355A.2020806@imperial.ac.uk> Message-ID: <4596FF09.3050208@imperial.ac.uk> An HTML attachment was scrubbed... URL: From bthom at cs.hmc.edu Sat Dec 30 19:13:52 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 16:13:52 -0800 Subject: [SciPy-user] additional superpak problems In-Reply-To: References: Message-ID: <6449E8E3-9928-4BF5-8373-DFED306BB9E9@cs.hmc.edu> And yet more strangeness...the superpack matplotlib breaks the ability to use pylab: running ipython -pylab produces: File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/matplotlib/backends/tkagg.py", line 1, in ?
import _tkagg ImportError: Failure linking new module: /Library/Frameworks/ Python.framework/Versions/2.4/lib/python2.4/site-packages/matplotlib/ backends/_tkagg.so: Library not loaded: /usr/local/lib/libfreetype. 6.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ lib/python2.4/site-packages/matplotlib/backends/_tkagg.so Reason: image not found Reverting back to the matplotlib at http://www.macpython.org/packages/ py24-fat/index.html allows ipython -pylab to work. I have no idea if mixing and matching parts of superpak with other packages will cause me later problems. I'm curious, Vincent, do you run matplotlib interactively with ipython? Does that piece installed from the superpack work for you? Thanks again, --b From bthom at cs.hmc.edu Sat Dec 30 19:48:32 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 16:48:32 -0800 Subject: [SciPy-user] Fwd: additional superpak problems References: Message-ID: Begin forwarded message: > From: Vincent Nijs > Date: December 30, 2006 4:32:08 PM PST > To: belinda thom > Subject: Re: [SciPy-user] additional superpak problems > > g77 was part of the super pack I believe. Perhaps it is important > that you > install that first. > > Ipython -pylab works fine on my laptop (G4) but not on my G5 (same > error you > report). I'll try your fix of using the package from macpython next > week. > > Vincent > > > On 12/30/06 6:13 PM, "belinda thom" wrote: > >> And yet more strangeness...the superpack matplotlib breaks the >> ability to use pylab: >> >> running >> ipython -pylab >> >> produces: >> >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >> python2.4/site-packages/matplotlib/backends/tkagg.py", line 1, in ? >> import _tkagg >> ImportError: Failure linking new module: /Library/Frameworks/ >> Python.framework/Versions/2.4/lib/python2.4/site-packages/matplotlib/ >> backends/_tkagg.so: Library not loaded: /usr/local/lib/libfreetype. 
>> 6.dylib >> Referenced from: /Library/Frameworks/Python.framework/Versions/ >> 2.4/ >> lib/python2.4/site-packages/matplotlib/backends/_tkagg.so >> Reason: image not found >> >> Reverting back to the matplotlib at http://www.macpython.org/ >> packages/ >> py24-fat/index.html allows ipython -pylab to work. >> >> I have no idea if mixing and matching parts of superpak with other >> packages will cause me later problems. >> >> I'm curious, Vincent, do you run matplotlib interactively with >> ipython? Does that piece installed from the superpack work for you? >> >> Thanks again, >> >> --b >> > > -- > Vincent R. Nijs > Assistant Professor of Marketing > Kellogg School of Management, Northwestern University > 2001 Sheridan Road, Evanston, IL 60208-2001 > Phone: +1-847-491-4574 Fax: +1-847-491-2498 > E-mail: v-nijs at kellogg.northwestern.edu > Skype: vincentnijs > > > From v-nijs at kellogg.northwestern.edu Sat Dec 30 23:13:30 2006 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Sat, 30 Dec 2006 22:13:30 -0600 Subject: [SciPy-user] FW: Pylab load dates In-Reply-To: Message-ID: Is there someone on this list who can build matplotlib for the Mac? I used the matplotlib package in the superpack and also tried the one on macpython to run the example below to read dates from a csv file, but it didn't work, apparently because the pytz module is not available in the matplotlib packages for the mac. I tried compiling matplotlib myself but got the following error on build: gcc: cannot specify -o with -c or -S and multiple compilations Vincent from pylab import figure, show, datestr2num, load dates, closes = load( 'data/msft.csv', delimiter=',', converters={0:datestr2num}, skiprows=1, usecols=(0,2), unpack=True) Date,Open,High,Low,Close,Volume,Adj.
Close* 19-Sep-03,29.76,29.97,29.52,29.96,92433800,29.79 18-Sep-03,28.49,29.51,28.42,29.50,67268096,29.34 17-Sep-03,28.76,28.95,28.47,28.50,47221600,28.34 16-Sep-03,28.41,28.95,28.32,28.90,52060600,28.74 15-Sep-03,28.37,28.61,28.33,28.36,41432300,28.20 12-Sep-03,27.48,28.40,27.45,28.34,55777200,28.18 11-Sep-03,27.66,28.11,27.59,27.84,37813300,27.68 10-Sep-03,28.03,28.18,27.48,27.55,54763500,27.40 From bthom at cs.hmc.edu Sun Dec 31 00:57:59 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 21:57:59 -0800 Subject: [SciPy-user] FW: Pylab load dates In-Reply-To: References: Message-ID: <734E1706-A6E2-4C0A-A6E9-9023FED6CE43@cs.hmc.edu> Interesting. I never tried the date stuff. Bet it would probably die on my end too. --b On Dec 30, 2006, at 8:13 PM, Vincent Nijs wrote: > I there someone on this list that can build matplotlib for the mac? > I used > the matplotlib package in the superpack and also tried the one on > macpython > to run the example below to read dates from a csv file but it > didn't work. > Apparently because the pytz module is not available in the matplotlib > packages for the mac. > > I tried compiling matplotlib myself but got the following error on > build: > > gcc: cannot specify -o with -c or -S and multiple compilations > > Vincent > > > > from pylab import figure, show, datestr2num, load > dates, closes = load( > 'data/msft.csv', delimiter=',', > converters={0:datestr2num}, skiprows=1, usecols=(0,2), > unpack=True) > > > Date,Open,High,Low,Close,Volume,Adj. 
Close* > 19-Sep-03,29.76,29.97,29.52,29.96,92433800,29.79 > 18-Sep-03,28.49,29.51,28.42,29.50,67268096,29.34 > 17-Sep-03,28.76,28.95,28.47,28.50,47221600,28.34 > 16-Sep-03,28.41,28.95,28.32,28.90,52060600,28.74 > 15-Sep-03,28.37,28.61,28.33,28.36,41432300,28.20 > 12-Sep-03,27.48,28.40,27.45,28.34,55777200,28.18 > 11-Sep-03,27.66,28.11,27.59,27.84,37813300,27.68 > 10-Sep-03,28.03,28.18,27.48,27.55,54763500,27.40 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From bthom at cs.hmc.edu Sun Dec 31 00:59:20 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sat, 30 Dec 2006 21:59:20 -0800 Subject: [SciPy-user] FW: Pylab load dates In-Reply-To: References: Message-ID: <790AF879-B6B5-450D-8BEC-37314607DC7B@cs.hmc.edu> p.s. when I tried my own build (via easy_install, which is pretty slick) I got the same blasted error. --b On Dec 30, 2006, at 8:13 PM, Vincent Nijs wrote: > I tried compiling matplotlib myself but got the following error on > build: > > gcc: cannot specify -o with -c or -S and multiple compilations > > Vincent From prabhu at aero.iitb.ac.in Sun Dec 31 02:07:01 2006 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Sun, 31 Dec 2006 12:37:01 +0530 Subject: [SciPy-user] FW: Pylab load dates In-Reply-To: References: Message-ID: <17815.24981.122040.652217@monster.iitb.ac.in> >>>>> "Vincent" == Vincent Nijs writes: Vincent> I there someone on this list that can build matplotlib Vincent> for the mac? I used the matplotlib package in the Vincent> superpack and also tried the one on macpython to run the Vincent> example below to read dates from a csv file but it didn't Vincent> work. Apparently because the pytz module is not Vincent> available in the matplotlib packages for the mac. 
There are instructions on how to build a whole bunch of packages on the Intel Mac here: https://svn.enthought.com/enthought/wiki/IntelMacPython25 I just added some instructions on how to build matplotlib (with the Tk and wxPython backends) to this list. HTH, prabhu From niklassaers at gmail.com Sun Dec 31 06:44:45 2006 From: niklassaers at gmail.com (Niklas Saers) Date: Sun, 31 Dec 2006 12:44:45 +0100 Subject: [SciPy-user] New user questions Message-ID: <9BE12344-C99F-4CDD-82F5-B167E3C382B6@gmail.com> Hi guys, I'm a new user to SciPy, seeking a Python alternative to MatLab. I'm installing on an OS X box and downloaded and installed the SciPy Superpack with SciPy 0.5.2. I noticed that much of the copyright notices said 2001, 2002. Is SciPy still under active development? When running scipy.test(10) I got: Warning: FAILURE importing tests for Warning: FAILURE importing tests for I can import scipy.linsolve and scipy.linsolve.umfpack, though. Should I be worried or not worry about this at all? :-) Cheers Nik From lou_boog2000 at yahoo.com Sun Dec 31 08:20:41 2006 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sun, 31 Dec 2006 05:20:41 -0800 (PST) Subject: [SciPy-user] additional superpak problems In-Reply-To: <4C056441-98BB-43DA-A59D-5A9864DA96D4@cs.hmc.edu> Message-ID: <332534.12875.qm@web34412.mail.mud.yahoo.com> I found similar problems with the superpack in that I had to manually install the fortran compiler (using tar command as you already indicated) and manually install iPython. Here's what I did for iPython: Go to: http://ipython.scipy.org/ Download the ipython-0.7.2.tar.gz version In the Terminal: tar -xvzf ipython-0.7.2.tar.gz cd ipython-0.7.2 python setup.py build sudo python setup.py install iPython works fine as does SciPy, MPL, and NumPy. The latter three installed fine from the superpack. Using G4 Al Laptop, MacOS X 10.4.8, Python 2.4. Hope that helps. 
I agree that it would be nice to have those superpacks well maintained since bundling can mitigate the painful problems of matching correct versions with all these packages (my main complaint about using Python as a scientific environment -- otherwise I love it). --- belinda thom wrote: > Vincent, > > I am now receiving only the errors you reported. [cut] -- Lou Pecora My views are my own. __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From elcorto at gmx.net Sun Dec 31 08:38:17 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Sun, 31 Dec 2006 14:38:17 +0100 Subject: [SciPy-user] New user questions In-Reply-To: <9BE12344-C99F-4CDD-82F5-B167E3C382B6@gmail.com> References: <9BE12344-C99F-4CDD-82F5-B167E3C382B6@gmail.com> Message-ID: <4597BD49.5010307@gmx.net> Niklas Saers wrote: > > Warning: FAILURE importing tests for 'scipy.linsolve.umfpack.umfpack' from '...y/linsolve/umfpack/ > umfpack.pyc'> > Warning: FAILURE importing tests for >from '.../linsolve/umfpack/__init__.pyc'> > > I can import scipy.linsolve and scipy.linsolve.umfpack, though. > Should I be worried or not worry about this at all? :-) On Linux, I get these warnings too, because I've no UMFPACK installed. As long as you don't plan to use linsolve heavily you should be set up just fine. I don't know much about linsolve because I don't use it myself, but the doc says the default solver is from the SuperLU lib in case UMFPACK isn't found. Which one is better is beyond my knowledge, but it's probably UMFPACK ... :) Anyway, development is well underway and the guys are doing a great job. -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible.
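[Editorial note: Steve's description of linsolve's fallback can be made concrete. In the 2006 tree the entry point was scipy.linsolve.spsolve; in current SciPy the same functionality lives in scipy.sparse.linalg. A sketch assuming a reasonably recent SciPy; the matrix is made up for illustration:]

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve  # 2006-era location: scipy.linsolve

# A small sparse system, purely illustrative.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

# Without a UMFPACK wrapper installed, spsolve uses the built-in SuperLU.
x = spsolve(A, b)

# The 'Residual' figures quoted earlier in the thread are this norm.
residual = np.linalg.norm(A.dot(x) - b)
print("residual:", residual)
```

[Either backend should drive this residual down to roundoff on a well-conditioned system; the thread's timing comparison is about speed on large problems, not accuracy on toy ones.]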
From bthom at cs.hmc.edu Sun Dec 31 14:23:59 2006 From: bthom at cs.hmc.edu (belinda thom) Date: Sun, 31 Dec 2006 11:23:59 -0800 Subject: [SciPy-user] additional superpak problems In-Reply-To: <332534.12875.qm@web34412.mail.mud.yahoo.com> References: <332534.12875.qm@web34412.mail.mud.yahoo.com> Message-ID: <9DC36E73-F8AE-4FA1-AFC5-A09C037D9FC8@cs.hmc.edu> So you've had no use of matplotlib? Getting that one running in interactive mode was as tricky as the scipy one, IMHO. --b On Dec 31, 2006, at 5:20 AM, Lou Pecora wrote: > I found similar problems with the superpack in that I > had to manually install the fortran compiler (using > tar command as you already indicated) and manually > install iPython. Here's what I did for iPython: > > Go to: http://ipython.scipy.org/ > > Download the ipython-0.7.2.tar.gz version > > In the Terminal: > > tar -xvzf ipython-0.7.2.tar.gz > cd ipython-0.7.2 > python setup.py build > sudo python setup.py install > > iPython works fine as does SciPy, MPL, and NumPy. The > latter three installed fine from the superpack. > > Using G4 Al Laptop, MacOS X 10.4.8, Python 2.4. > > Hope that helps. > > I agree that it would be nice to have those superpacks > well maintained since bundling can mitigate the > painful problems of matching correct versions with all > these packages (my main complaint about using Python > as a scientific environment -- otherwise I love it). > > > > --- belinda thom wrote: > >> Vincent, >> >> I am now receiving only the errors you reported. > > [cut] > > > > -- Lou Pecora > My views are my own. > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! 
Mail has the best spam protection around > http://mail.yahoo.com > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fperez.net at gmail.com Sun Dec 31 14:47:20 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 31 Dec 2006 12:47:20 -0700 Subject: [SciPy-user] mac os x python 2.4 superpak issues In-Reply-To: <00430830-4637-4EF1-B23C-9C2DF2AACA1B@cs.hmc.edu> References: <00430830-4637-4EF1-B23C-9C2DF2AACA1B@cs.hmc.edu> Message-ID: On 12/30/06, belinda thom wrote: > Hi, > > I have been trying to determine the best way to install ipython in > the scipy context for the mac. On the > > http://www.scipy.org/Download As I mentioned elsewhere, my personal preference under *nix systems is either apt-get/yum/fink install foo or tarball/svn checkout with a manual python setup.py install --prefix=$HOME/usr/local/ But I don't personally use OSX, and from the amount of problems you and others seem to be having there, I'm afraid little of what I can say will be of any help. For ipython however (since it has no C/Fortran code), I think the above (with a .tgz download of 0.7.3) should give you what you need (modify the --prefix flag to suit your preferences). Regards, f